As a software developer, I found this to be one of the very first questions that occurred to me when considering self-driving cars. First off, you simply can't "avoid all accidents at any cost". Accidents will happen, and the car has to be programmed to do something when they are about to, even if that something is just shutting down the car's autonomous systems. The way computers work, you must at some point reach a line of code where the car decides whether it goes left into a pole (killing you) or right into the crowd of people (saving you, but killing them). For the time being at least, some human being has to write that code. That means that ultimately some software developer is going to be the one making that choice. If I were that developer, the only sane choice would be to minimize the total loss of life. I've always thought that deontology was mostly bullshit anyway, but in this case a real person, not the car, is going to have to make an active decision one way or the other.
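In pseudocode terms, the branch I'm talking about might look something like this. To be clear, this is a purely hypothetical sketch; the class, the function, and the numbers are invented for illustration and don't come from any real autonomous-driving stack:

```python
# Hypothetical sketch of an "unavoidable collision" handler.
# Every name and number here is made up for illustration only.

from dataclasses import dataclass

@dataclass
class Maneuver:
    description: str
    expected_fatalities: float  # estimated from impact speed and who would be struck

def choose_maneuver(options: list[Maneuver]) -> Maneuver:
    # Somewhere, some developer has to commit to a rule like this one.
    # This is the utilitarian rule from above: minimize expected loss of life.
    return min(options, key=lambda m: m.expected_fatalities)

if __name__ == "__main__":
    options = [
        Maneuver("swerve left into pole", expected_fatalities=0.9),    # likely kills occupant
        Maneuver("swerve right into crowd", expected_fatalities=2.5),  # likely kills pedestrians
        Maneuver("brake straight ahead", expected_fatalities=1.2),
    ]
    print(choose_maneuver(options).description)  # -> "swerve left into pole"
```

However you feel about that rule, the point stands: someone has to pick the `key` function, and shipping the car means shipping that choice.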
Turning off the autonomous systems in an emergency situation is probably the worst decision possible. Human reaction times are orders of magnitude slower than a computer's, provided the human is even paying attention to the road when the emergency happens. I probably wouldn't be, as I'd trust the car much more than I would trust myself. Your example also neglects to mention why the car would be doing what must be at least 100 km/h in that situation. If it were going at a safe speed of, say, 35 km/h (since there are potential hazards around, namely the crowd of people near the road), it would slam on the brakes and not hit anything. Thinking about how to get out of these situations is the wrong plan; thinking about how to avoid getting into them in the first place is a much better one. The easiest way is to slow down when there are hazards close to the car.
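Rough numbers, just to illustrate the point. The deceleration and reaction-time figures below are assumptions in the ballpark usually quoted for hard braking on dry asphalt and an automated system, not measurements from any real vehicle:

```python
# Back-of-the-envelope stopping distances. The constants are assumptions
# for illustration, not data from a real car.

DECELERATION = 7.5     # m/s^2, assumed full braking on dry asphalt
REACTION_TIME = 0.1    # s, assumed sensing/actuation delay of the computer

def stopping_distance(speed_kmh: float) -> float:
    """Distance covered during the reaction delay plus braking to a full stop."""
    v = speed_kmh / 3.6                       # convert to m/s
    reaction = v * REACTION_TIME              # distance covered before braking starts
    braking = v ** 2 / (2 * DECELERATION)     # from v^2 = 2*a*d
    return reaction + braking

for speed in (100, 35):
    print(f"{speed:>3} km/h -> {stopping_distance(speed):5.1f} m")
# 100 km/h -> ~54 m; 35 km/h -> ~7 m.
```

Braking distance scales with the square of speed, which is why a modest speed reduction near hazards removes most of these "unavoidable" scenarios entirely.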
Of course turning off autonomous systems would be the worst decision possible. My point was simply that the developer must make some decision about what to do. Your thinking is naively optimistic. Yes, the system will do its best to avoid undesirable situations, but unless you reduce the speed of the vehicle to the point where its stopping distance is effectively zero any time there is a non-autonomous mobile object (or a place where such an object might be hiding) in view, the system will need a contingency plan. A system designed to be perfectly safe is also going to be constrained to approximately walking pace any time it's operating inside an urban area.
It wouldn't need zero stopping distance, but rather a low enough stopping distance that the risk of death is very low. Hitting someone at 20 km/h doesn't carry very much risk, I think. 30 km/h with half a meter of clearance to the sides and 2 meters in front, combined with fast reaction times, is probably enough to avoid the vast majority of deaths in emergency situations.
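Sanity-checking my own numbers with the same assumed figures as before (again, illustrative assumptions, not real-vehicle data): with only 2 m of clearance the car can't always stop in time, but it sheds a lot of speed before impact, so the claim rests on low impact speeds carrying a low fatality risk rather than on avoiding the hit entirely.

```python
# Rough impact-speed estimate for the "30 km/h with 2 m in front" claim.
# DECELERATION and REACTION_TIME are assumed figures, as in the earlier sketch.
import math

DECELERATION = 7.5    # m/s^2, assumed full braking on dry asphalt
REACTION_TIME = 0.1   # s, assumed computer sensing/actuation delay

def impact_speed_kmh(speed_kmh: float, clearance_m: float) -> float:
    """Speed remaining when the car reaches the obstacle, braking after the delay."""
    v = speed_kmh / 3.6
    braking_room = clearance_m - v * REACTION_TIME   # distance left once braking starts
    if braking_room <= 0:
        return speed_kmh                             # hits before the brakes even bite
    v_impact_sq = v ** 2 - 2 * DECELERATION * braking_room
    if v_impact_sq <= 0:
        return 0.0                                   # stops before reaching the obstacle
    return math.sqrt(v_impact_sq) * 3.6

print(f"{impact_speed_kmh(30, 2.0):.0f} km/h")   # ~26 km/h impact with 2 m of room
print(f"{impact_speed_kmh(30, 6.0):.0f} km/h")   # 0 km/h: ~6 m is enough to stop fully
```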