- It’s easy to imagine cases. For example, a human unexpectedly darts into a busy street. The self-driving cars around it rapidly communicate and algorithmically devise a plan that saves the pedestrian at the price of causing two cars to engage in a Force 1 fender-bender and three cars to endure Force 2 minor collisions…but only if the car I happen to be in intentionally drives itself into a concrete piling, with a 95% chance of killing me. All other plans result in worse outcomes, where “worse” refers to some scale that weighs monetary damages, human injuries, and human deaths.
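Purely to make that "scale" concrete, here is a minimal, entirely hypothetical sketch of the kind of weighted cost function such a fleet would have to be running. Every class name, weight, and number below is invented for illustration; nothing here reflects any actual Google system.

```python
# Hypothetical sketch: collapse each candidate plan into a single cost so
# plans can be ranked. The weights and categories are assumptions made up
# for this example, not values from any real vehicle.

from dataclasses import dataclass

@dataclass
class Outcome:
    property_damage_usd: float   # total monetary damage across all vehicles
    minor_injuries: int          # e.g. the Force 2 collisions in the scenario
    expected_deaths: float       # probability-weighted fatalities

def plan_cost(o: Outcome,
              damage_weight: float = 1.0,
              injury_weight: float = 50_000.0,
              death_weight: float = 10_000_000.0) -> float:
    """Weigh monetary damage, injuries, and deaths into one number.
    Choosing these weights is the ethical problem; the arithmetic is trivial."""
    return (damage_weight * o.property_damage_usd
            + injury_weight * o.minor_injuries
            + death_weight * o.expected_deaths)

# The dilemma in the scenario: the fleet-optimal plan sacrifices one passenger.
swerve_into_piling = Outcome(property_damage_usd=80_000, minor_injuries=3,
                             expected_deaths=0.95)   # my car hits the piling
hit_the_pedestrian = Outcome(property_damage_usd=10_000, minor_injuries=0,
                             expected_deaths=1.0)    # pedestrian is struck

best_plan = min([swerve_into_piling, hit_the_pedestrian], key=plan_cost)
```

The point of the thought experiment is that whoever picks those weights is deciding, in advance, whose injuries count for how much.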
It's an entertaining thought experiment, but it falls apart as soon as you examine case law.

LET'S ASSUME:

1) You own the car
2) It has some aspect of "googlism" in it

If that aspect of "googlism" causes your property to cause you personal injury, Google has violated product liability law.

LET'S INSTEAD ASSUME:

1) You rented the car
2) It has some aspect of "googlism" in it

If that aspect of googlism causes property in your proximate control to cause you injury, Google has violated product liability law.

SO LET'S INSTEAD ASSUME:

1) You're an innocent goddamn bystander
2) A car that has some aspect of "googlism" in it runs you over on the sidewalk in the interests of "the better good"

Google has committed an unintentional tort against you. "The better good" has no part in case law. GM saved all of its consumers $3 by making shitty ignition switches. It killed 7 people because of it. Nobody said "way to go, GM, you saved consumers $3."

The only way product liability works is if the vehicle that you own acts in your interests to the exclusion of all others.
Google's modus operandi is not so much to sell people products as it is to provide them services. What if Google cars weren't owned or leased, but were part of a distributed driving service like Uber? Couldn't the terms of that service stipulate that it would act in a certain way - one that might not prioritize preventing an accident to any particular car, but would minimize injuries and deaths as much as possible across all users of the service? I'm not familiar with liability law like you seem to be, so I'm honestly asking.

Alternatively, maybe Google could sell you the car, but you would have to choose which automatic driving data service to connect it to. If you choose not to connect it to any driving data service, it acts like a standard human-controlled car. But if you choose to connect it, that comes with a license agreement stipulating that the data service may control your car in a way that could injure or kill you.

I think it's a simplification to treat googlism as an innate aspect of the car/product when it's really a dynamic interaction between the car and an expansive data service. Even their current cars need a constant connection to work.
I think that's an excellent point. I suspect (and veen and I have discussed at length) that Google is most likely to hit up auto manufacturers for licensing, which further complicates the issue. That's why I called it "googlism" instead of any specific product - if it were my business, I'd obfuscate liability with every move I made.

So the way this typically works in the land of liability disclaimers is:

(1) I make you fill out a document whereby you forswear any opportunity to hold me responsible for anything, for any reason, ever, anywhere.
(2) I do something that kills you or almost kills you.
(3) You sue me anyway, because it's not actually legal to sign away your life, and any contract that overshoots the standards of reasonable risk generally gets invalidated the minute it rubs up against statute.

So it kind of boils down to the same thing - yeah, you can throw disclaimers at the problem till the sun goes out, but you reach the point where a judge and/or jury says "Google can't make a service that will voluntarily, willfully injure its contractees, regardless of how many degrees removed they are."
To add, this Atlantic article talks about who is liable when a self-driving car fails. It suggests that the Google car is no different from other products and that products liability law will adjust:

"There's no doubt that accidents involving autonomous vehicles will put challenging new liability questions before the courts. But that doesn't mean the courts will be unable to address them. And it doesn't mean that we have to put the autonomous-vehicle industry on hold so legislators can attempt to preemptively draft and enact an entirely new set of liability laws that anticipate everything that might go wrong. It's an exercise that would be as impossible as it is unnecessary."
Well as far as I can figure, the "ism" applies in this case but I just have horrible retching feelings just reading or typing googlism, ugh.
Now that I've got that out of the way: what about existing laws and how they apply when a car fails after claiming it can react and brake better than you can, or doesn't see the vehicle in your blind spot?
Surely there are disclaimers, and some laws already enacted, that deal with these instances of automated control services. (ACS - I like that better.)
Is the failure of these services ultimately left on the consumer, since they chose to implement or rely on them? The future is tricky.
That is my new catch phrase.
Use it if you like.
That makes sense. There's a big difference between saying "the parachute might fail and you might die" and "if the skydiving instructor doesn't like you, he's allowed to disconnect you." It will definitely be interesting to see how it all settles out. By and large I'm confident these will dramatically reduce injuries and deaths, but Google is definitely opening itself up to a lot of lawsuits by being the first to try this.
I agree with you about the life-saving features but disagree about the lawsuits. I don't see Google deciding to play "the needs of the many outweigh" because liability law is pretty clearly "take care of your own." So long as the product is acting as advertised to you, to your advantage, they're good. This whole "Google as omnipotent force" aspect of it is a strawman, in my opinion.
This is an interesting piece of futurology and fun to think about. In the author's world where self-driving cars dominate the road, however, I think we'd also have to imagine other networked infrastructure. For example, the pedestrian might well have an implant/augmented reality/wrist notification that informs them when it's safe to cross the street. The crosswalk may well be networked and know when it's safe to switch to the Walk sign. It's easy to imagine horror scenarios with self-driving vehicles, but it's equally easy to imagine countless improvements that could be made to other everyday objects if they were working in concert with the driving network. What we're looking at is not a one-off invention that will make getting to work easier, but a fundamental shift in how we imagine transportation.
By the time self-driving cars are the norm, I imagine crash safety will be a great deal better than it is now (which is pretty damn good already). It comes down to a whole variety of factors, and in the case of a pedestrian it's essentially their fault, especially if there are actual pedestrian crossings available - why should the passenger have to die for the sake of someone else's mistake? I mean, if we're talking about a normal suburban road with a 50/60 km/h speed limit (30-35 mph), then I imagine the Google car should have some reasonable way of detecting a pedestrian at risk of being hit from far enough away to stop on its brakes.
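For a rough sense of scale, here's a back-of-the-envelope stopping-distance calculation. The friction coefficient and sensing delay are assumptions picked for illustration, not measured figures for any actual vehicle.

```python
# Rough stopping distance: distance covered during the sensing/actuation delay
# plus the braking distance from basic kinematics (v^2 = 2*a*d).
# friction=0.8 (dry asphalt) and reaction_s=0.2 are assumed values.

def stopping_distance_m(speed_kmh: float,
                        reaction_s: float = 0.2,
                        friction: float = 0.8,
                        g: float = 9.81) -> float:
    v = speed_kmh / 3.6                          # km/h -> m/s
    reaction_dist = v * reaction_s               # travelled before braking starts
    braking_dist = v ** 2 / (2 * friction * g)   # distance to decelerate to zero
    return reaction_dist + braking_dist

for kmh in (50, 60):
    print(f"{kmh} km/h -> ~{stopping_distance_m(kmh):.0f} m to stop")
# Prints roughly 15 m at 50 km/h and 21 m at 60 km/h.
```

Under those assumptions, a pedestrian detected even a few car lengths ahead should be avoidable by braking alone, without anyone having to be sacrificed.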