Edit: I should note, though, that STM still results in some determinism in code; it just causes sections to be re-run when they notice that the memory underneath them has changed. Edit#2: And he addresses the idea of probabilistic algorithms, too, cool! Neat article!
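For what it's worth, here's a minimal Python sketch of that retry idea: an optimistic update loop that re-runs its work whenever it notices the value changed underneath it. The `VersionedCell` and `atomically` names are made up for illustration, and real STM tracks whole read/write sets rather than a single cell.

```python
import threading

class VersionedCell:
    """A value plus a version counter. The lock guards only the brief
    compare-and-swap; the transaction's own work runs without holding it."""
    def __init__(self, value):
        self._lock = threading.Lock()
        self._value = value
        self._version = 0

    def read(self):
        with self._lock:
            return self._value, self._version

    def compare_and_set(self, expected_version, new_value):
        with self._lock:
            if self._version != expected_version:
                return False          # someone changed it underneath us
            self._value = new_value
            self._version += 1
            return True

def atomically(cell, transaction):
    """Re-run `transaction` until it commits without interference."""
    while True:
        value, version = cell.read()
        new_value = transaction(value)      # speculative work, no lock held
        if cell.compare_and_set(version, new_value):
            return new_value                # commit succeeded

# usage: ten threads each increment a shared counter 1000 times
counter = VersionedCell(0)
threads = [threading.Thread(
               target=lambda: [atomically(counter, lambda v: v + 1)
                               for _ in range(1000)])
           for _ in range(10)]
for t in threads: t.start()
for t in threads: t.join()
print(counter.read())   # (10000, <final version>)
```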
On the other hand, there are problems that are inherently unsolvable with locks because locks just aren't available, like network programming. Other problems are difficult to imagine without some sort of central synchronization. If you're running reddit, how do you store comments/replies without using one central database? What if someone replies to a comment that was stored on server X, and server Y receives that reply before server X tells it about the original comment? Should it error? Or just store the comment in that inconsistent state and worry about resolving/cleaning up dangling references later?
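One way to picture the "store it and clean up later" option is a sketch like this, assuming each server keeps a local comment table plus a bucket of replies whose parents haven't arrived yet. The class and method names are hypothetical, not how reddit actually does it.

```python
from dataclasses import dataclass, field

@dataclass
class CommentStore:
    """One server's local view of the comment tree (hypothetical design)."""
    comments: dict = field(default_factory=dict)   # comment_id -> text
    pending: dict = field(default_factory=dict)    # parent_id -> [(id, text), ...]

    def add_comment(self, comment_id, parent_id, text):
        if parent_id is None or parent_id in self.comments:
            self.comments[comment_id] = text
        else:
            # Parent hasn't replicated from the other server yet:
            # accept the reply in the "inconsistent" state and move on.
            self.pending.setdefault(parent_id, []).append((comment_id, text))

    def replicate(self, comment_id, text):
        """Called when another server finally tells us about a comment."""
        self.comments[comment_id] = text
        # Resolve any replies that were dangling off this parent.
        for child_id, child_text in self.pending.pop(comment_id, []):
            self.comments[child_id] = child_text

# Server Y sees the reply before server X's original comment arrives.
y = CommentStore()
y.add_comment("reply-1", parent_id="comment-42", text="I agree!")  # dangles
y.replicate("comment-42", text="original comment from server X")   # resolved
print(sorted(y.comments))   # ['comment-42', 'reply-1']
```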
Google results are different from different data centers at the exact same moment because of code and crawler propagation. Seems to work fine for everyone. Most software communicates with TCP instead of UDP and that seems to work fine. -XC
Also, I wasn't referring to TCP/UDP, but rather to large networks of computers, where things regularly break and it's hard to get the atomic synchronization that locks give without throwing performance out the window. Algorithms like MapReduce are built to mitigate the fact that a fraction of the servers in a cluster will regularly fail or stall.
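A toy illustration of that mitigation, assuming a flaky pool of workers: instead of synchronizing the whole job around any single machine, the scheduler simply re-issues a failed or stalled task elsewhere. The function names are made up; this isn't real MapReduce scheduling, just the retry idea.

```python
import random

def flaky_worker(chunk):
    """Stand-in for a remote worker; a fraction of calls fail or stall."""
    if random.random() < 0.2:
        raise TimeoutError("worker failed or stalled")
    return sum(chunk)   # the 'map' work: here, just summing a chunk

def run_with_retries(chunks, max_attempts=5):
    """Re-issue each failed task to another worker rather than blocking on it."""
    results = {}
    for task_id, chunk in enumerate(chunks):
        for _attempt in range(max_attempts):
            try:
                results[task_id] = flaky_worker(chunk)
                break
            except TimeoutError:
                continue    # schedule the same task on a different worker
        else:
            raise RuntimeError(f"task {task_id} failed {max_attempts} times")
    return results

chunks = [list(range(i, i + 10)) for i in range(0, 100, 10)]
partials = run_with_retries(chunks)
print(sum(partials.values()))   # reduce step: 4950
```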