I've been mired in A Prayer for Owen Meany for, like, 6 months. As an audiobook, I'm finishing off the Elon Musk biography, a dual recommendation from veen and insomniasexx. I recommend it. On Kindle, I finally sucked it up and started The $12 Million Stuffed Shark. It's awesome.
I'm glad you enjoyed Musk's book. I was quite impressed by his determination and his complete absence of subtlety in many situations throughout the book. As for the Literary Thing, I forgot to bring it along on the trip here to Sweden. I'm currently sitting at a beach near Malmö with a book from one of my friends - Roger Penrose's The Emperor's New Mind. At 600 pages it's not a light summer read by any account, and it's much more math-heavy than I was expecting, but it's also much more interesting than I'd expected. His argument (which takes him forever to build, starting with freakin' Plato and ending with Hawking) is that we have not yet developed (and may never develop) an intelligent AI because we don't understand the actual physics of the mind and of consciousness. I'm at chapter 8. bfv or mike, have either of you read it? Anyway, once I'm back home I'll start with Zero Days, a book on Stuxnet.
That point about AI reminds me of this time I heard China Miéville speak. Someone asked him what it's like to try to imagine an alien intelligence. His response was something along the lines of - we can't, it's impossible for us to imagine something that isn't inherently biased toward our own world and experiences. But even though we will inevitably fail, those failures are pretty damn interesting on their own. Even if we never make a Turing-passing AI, I think all our failures so far have been pretty interesting, and I can't wait to see us fail some more :)
Regarding those 'failures,' here's an even more tempting possibility: in the process of trying to create a Turing-passing AI, we invent/discover a class of transitional intelligence that somehow defies our current understanding of machine learning/intelligence/consciousness.