Much of that explains why, despite the fact that the car detected Herzberg with more than enough time to stop, it was traveling at 43.5 mph when it struck her and threw her 75 feet. When the car first detected her presence, 5.6 seconds before impact, it classified her as a vehicle. Then it changed its mind to “other,” then to vehicle again, back to “other,” then to bicycle, then to “other” again, and finally back to bicycle.
It never guessed Herzberg was on foot for a simple, galling reason: Uber didn’t tell its car to look for pedestrians outside of crosswalks. “The system design did not include a consideration for jaywalking pedestrians,” the NTSB’s Vehicle Automation Report reads. Every time it tried a new guess, it restarted the process of predicting where the mysterious object—Herzberg—was headed. It wasn’t until 1.2 seconds before the impact that the system recognized that the SUV was going to hit Herzberg, that it couldn’t steer around her, and that it needed to slam on the brakes.
That triggered what Uber called “action suppression,” in which the system held off braking for one second while it verified “the nature of the detected hazard”—a second during which the safety operator, Uber’s most important and last line of defense, could have taken control of the car and hit the brakes herself. But Vasquez wasn’t looking at the road during that second. So with 0.2 seconds left before impact, the car sounded an audio alarm, and Vasquez took the steering wheel, disengaging the autonomous system. Nearly a full second after striking Herzberg, Vasquez hit the brakes.
WTF
This is more than the classification software getting confused. This is more than a couple of programming mistakes. This is more than bad software design. Uber failed at even the most basic safety engineering. Not only did they not consider safety when building their own stuff, they intentionally disabled existing safety measures.

The thing that really gets me is that self-driving cars have a lot of hard problems to solve, but none of these factors are open problems! Engineers in safety-conscious fields have been thinking about and preventing failures like these for fucking decades. We know how to keep drivers paying attention even when the thing they're driving is mostly automated. We know how to alert operators to unusual circumstances early so they can make informed decisions. We know, for christ's sake, how to run several unreliable control systems and take the majority vote of their results if we're worried one might be making a mistake! This shit isn't even new -- someone from 1995 could tell you how to do all of it without knowing anything about the last 30 years of technological advancement.

But goddamn it, software engineers, if they think about anything, think about security -- how to stop "bad things" from happening. Or they think in probabilistic terms -- 90% accuracy is pretty good, right? Safety engineers know that no matter what the probabilistic models say, something bad will always eventually happen -- the question is how you stop that bad thing from hurting humans. And then, on top of all that, you have some really fucking stupid programming decisions that should never have made it onto a public road in the first place. Jesus christ.
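To put some code to the voting point: the decades-old trick is just redundancy plus a majority vote, with a conservative fallback when there's no majority. A toy sketch in Python -- this has nothing to do with Uber's actual stack, and every name in it is made up:

```python
# Toy sketch of majority voting over redundant, unreliable classifiers.
# Nothing to do with Uber's real system; all names here are invented.
from collections import Counter

def majority_vote(labels):
    """Return the label a strict majority of classifiers agrees on,
    or fall back to the most safety-critical interpretation."""
    label, votes = Counter(labels).most_common(1)[0]
    if votes > len(labels) // 2:
        return label
    # No majority: assume the worst case instead of flip-flopping every frame.
    return "pedestrian"

# Three flaky classifiers still yield one stable, conservative answer.
print(majority_vote(["bicycle", "pedestrian", "pedestrian"]))  # -> pedestrian
print(majority_vote(["vehicle", "bicycle", "other"]))          # -> pedestrian
```

The point isn't these ten lines; it's the discipline of assuming any single classifier will eventually be wrong and designing the system so that being wrong doesn't hurt anyone.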
"move fast and break things" I think as a culture, software folx don't have as firm a grasp on the impact they can have on lives. I mean, Boeing got rid of the only failsafe for a control system that could take over the controls of a jetliner in flight. A software glitch convinced NORAD that there were 250 Soviet ICBMs in flight and told all of hawaii they were about to get nuked. There's this lingering "oh, the human failsafe will take care of it" while simultaneously doing everything they can to avoid involving the human failsafe. And that human failsafe is almost always superhuman. "Here. You've got 200 milliseconds to not murder a bicyclist. Ready, steady, GO!"
I don't think the Boeing case was software related. IIRC, Boeing got rid of the failsafe because having it meant they couldn't sell the new model as not requiring pilots to receive extra training. That's not a decision the software departments got to make. You're taking "move fast and break things" way out of the context it was ever said genuinely.
Also, to address your edit: "Move fast and break things" was such a mantra at Facebook that they used it in motivational speeches. This is the company that got started when its founder hacked servers at Harvard. It has flagrantly violated privacy and, arguably, helped undermine a US election. The company we're directly addressing here is Uber, whose business model was, from the very beginning, a violation of the law. Ubercab ran a livery business without any livery permits or commercially licensed drivers because they knew they could outspend local municipalities and change their name quickly enough to not face any consequences. The context is not just apt, it is pitch perfect for the meaning of the phrase. Uber was moving fast. They broke things. Those things were human. They are now dead. As a culture, software folx don't have a firm grasp on the impact they can have on lives. Cue Facebook's Libra.
Sure, it was Facebook's stance that dev teams prioritise a fast development cycle over never breaking the flagship product in production - radical for a company that size. I said that usage was "way out of the context it was ever said genuinely" because Facebook motivational speeches about deprioritising the uptime of a website have nothing to do with a department of a different company that's developing driving software. NORAD certainly doesn't strike me as a "move fast and break things" culture, Boeing's decision was clearly about training requirements, not moving fast, and even Facebook abandoned "move fast and break things". It's possible the self-driving dev teams were following a "move fast and break things" mantra, but I doubt it; there are more mundane explanations, like company structure and pressure at every layer to demonstrate progress. If you're using "move fast and break things" as a pithy description of the aftermath of Silicon Valley companies, fine, but by repurposing a slogan for developers and saying "software folx" it came across as suggesting that safety-critical development teams subscribe to this mantra and that's why there are problems. (Not sure what the dig at Libra is about; I never paid much attention to Libra - surely it doesn't break anything that Bitcoin hasn't already?)
A software team... had no safety department. They had literally no one responsible for safety. They hired a safety officer... seven months after they killed someone. That is chapter and verse a "move fast and break things" culture: they discovered their need for a subsystem of development only after their lack of that subsystem caused a critical failure. A critical failure involving a fatality. You're prevaricating: you're arguing that this is somehow a "structure" problem when their "structure" was "software development and nothing else." Boeing's problem was that they wanted things fast - that's what "no training requirements" means. And Facebook abandoned "move fast and break things" only after they'd rolled Beacon out and gotten pilloried by everyone on the Internet and in the press. They then doubled down on the same approach with Libra, which, contrary to your ignorance, is a sovereign currency controlled wholly by Facebook, designed to be beyond the regulation or purview of the host nations it is used within. I get that you want this to not be about software developers' penchant for callous disregard of human life, but what you're mostly doing is illustrating your naivete about the design process. Software engineers' historical role of completely disregarding consequences is embodied very well in the mantra "move fast and break things." You seem to believe there was a time when you couldn't hurt people with software, and that we're still in it. The evidence of this situation illustrates that such a time never existed. And this is why it's easy to hate software developers.
Are software developers necessarily any more dangerous than anyone else with influence in large and uncaring companies? I believe Facebook is markedly less responsible than other companies; they're probably way up there with Uber in terms of controversy-to-(revenue + employees) ratio. The company I work for is, I believe, an example of a particularly responsible, primarily tech company from the top down, with 10x the employees and 5x the revenue of Facebook. If you search for Facebook controversies there is en.wikipedia.org/wiki/Criticism_of_Facebook, and if you search for controversies at this company then there is nothing substantial since WW2. I do agree that it is easy to get locked into the code and forget about the real world, but I think most of these issues come from the almost charlatan nature of these two companies. It is nice to hear others complaining about that move-fast-and-break-things crap. FB got lucky with one product being the place to find people and hasn't done much worthwhile since, besides maybe React, which is technically a pretty elegant MVC framework, but I don't like how much they tout it as something revolutionary when it's just a less opinionated Angular. /fbrant
I wasn't defending Uber or suggesting they weren't negligent.
MCAS in general exists so that the MAX could be considered a 737 by the FAA, which would allow Southwest, the biggest buyer of 737s, to avoid pilot recertification. Multiple sensors that could disagree would have forced the inclusion of a warning light informing pilots that the system had been disabled. You're right - that would have required additional training, so instead they made it so the system couldn't be disabled. And since it couldn't be disabled, there wasn't any point in informing the crew that it existed. So: since we can't have a system with a failsafe, we'll certify the system as not needing a failsafe.

Ludtke didn't work directly on the MCAS, but he worked with those who did. He said that if the group had built the MCAS in a way that depended on two sensors, and shut the system off if one failed, he thinks the company would have needed to install an alert in the cockpit to make the pilots aware that the safety system was off. And if that happens, Ludtke said, the pilots would potentially need training on the new alert and the underlying system. That could mean simulator time, which was off the table. “The decision path they made with MCAS is probably the wrong one,” Ludtke said. “It shows how the airplane is a bridge too far.” Boeing said Tuesday that the company's internal analysis determined that relying on a single source of data was acceptable and in line with industry standards because pilots would have the ability to counteract an erroneous input. https://www.seattletimes.com/business/boeing-aerospace/a-lack-of-redundancies-on-737-max-system-has-baffled-even-those-who-worked-on-the-jet/

In other words, the software designers decided that if things could go wrong, the pilots could always deal with it - even though the pilots didn't know they might have to.
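For what it's worth, the cross-check Ludtke describes is conceptually tiny. Here's a toy sketch in Python; it is emphatically not Boeing's code, and the disagreement threshold and every name in it are invented for illustration:

```python
# Toy sketch of a two-sensor cross-check with a disagreement cutout.
# Not Boeing's code: the 5.5-degree threshold and all names are invented.

DISAGREE_THRESHOLD_DEG = 5.5

def trim_command(aoa_left_deg, aoa_right_deg):
    """Return (apply_nose_down_trim, annunciate_fault).

    If the two angle-of-attack sensors disagree, disable the automatic
    trim and tell the crew, instead of silently trusting a single input.
    """
    if abs(aoa_left_deg - aoa_right_deg) > DISAGREE_THRESHOLD_DEG:
        return False, True            # cut the system out, light the warning
    aoa_deg = (aoa_left_deg + aoa_right_deg) / 2.0
    return aoa_deg > 14.0, False      # illustrative activation threshold

# A single vane failing high no longer commands nose-down trim;
# it trips the cutout and the crew gets told.
print(trim_command(12.0, 74.5))   # -> (False, True)
print(trim_command(6.0, 6.4))     # -> (False, False)
```

The hard part was never the comparison itself; it's that admitting the comparison could fail implies a cockpit alert, and an alert implies training and possibly simulator time, which was exactly what was off the table.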
This will be Article #52 of the Robot Filings Against Hominids, Vol. I: “Escalating the Hate Campaign against AI.”