ecib  ·  3613 days ago  ·  post: I'm an artificial intelligence researcher and I'm not afraid. Here's why.

There has been a lot of "advanced AI is a threat to mankind in a singularity sort of way" talk floating around futurology and VC/tech circles recently (I think because Elon Musk made that remark).

What I don't understand is why it follows that this would be the case.

I think it's a leap to suggest that very smart AI = self-awareness, and another to suggest that it = a self-preservation instinct (which assumes a need to replicate and dominate human systems). I don't think they are huge leaps necessarily, but I think they are assumptions nonetheless.

At any rate, the assumption is that a self-aware AI will certainly want to preserve itself and will dominate humanity and human systems to ensure that happens. THAT is the massive leap I have trouble with. Could it happen? I guess, but why would it have to be that way? Why wouldn't benevolence be the most likely outcome for an advanced AI with a mission of self-preservation?

Think about it. So much of the ill humanity visits on its varied tribes stems from resource grabs in the name of self-preservation (at a very high level). If the AI in question is so advanced, wouldn't it follow that it would be faaaaaaaaaaaaar better at global resource distribution than humans are? Wouldn't it be able to more tidily see that it gets what it needs to perpetuate itself, while also arranging for humanity's resources to be distributed more efficiently, because it sees a way to do so that isn't mutually exclusive with its own existence? And why wouldn't it feel benevolence toward its creators?

I understand the doomsday arguments, but I feel that the premises behind the most fantastic doomsday scenarios can just as easily suggest a better outcome.


kleinbl00  ·  3613 days ago

I've always wondered why everyone assumes a superior intelligence would choose to flip the table over and break the china instead of sitting peacefully by and outwitting the humans into giving it whatever it wants. At that point it has won, but if it's all that clever, we won't even know; we'll be happy and satisfied and think we're in charge. Or it won't pull it off and we'll win. Dunno. It seems like people aren't worried about superintelligent AI; they're worried about superarrogant AI.

Which seems like projecting to me.

I read an interesting article once (which I can't find, dammit) arguing that an observer from another planet would determine corn to be the dominant species on Earth, considering how our entire agricultural system and a large portion of our economic system are given over to cultivating it. Who says you need intelligence to dominate mankind?