ideasware  ·  2662 days ago  ·  post: I Don't Think You Truly Grok The Problem.

That's precisely it -- it really is fundamentally different, and when you truly grok that about AI and AGI, you'll be the same as me: horrified beyond belief.

It's funny, because I used to be exactly like you, skeptical of off-the-wall theories, and very confident we could do something, even if we didn't yet know what. But about 8 years ago I realized that AI was totally different, a whole new threat, and this time it was terribly real, like nothing that has ever been seen before. This is a fundamental advance, and it WILL be incredibly beneficial and useful without a doubt, but the downside is equally horrific -- literally the end of this world in our lifetime. I researched it for many years before I came to this conclusion -- it was not a half-assed remark or sassy conclusion by any means, and I really feel you had better research it more closely too.

The programmer's lazy arm in ISIS, typing out instructions for the end of this world. It's 100,000 times different from and easier than a soldier's iron specification, and it will be the death of us. There IS NO SOLUTION -- so we have to think differently, and maybe -- maybe -- we can come up with something meaningful.

Isherwood  ·  2662 days ago

So all you have is a paperclip maximizer fear.

ideasware  ·  2662 days ago

Hahaha... I understand; it takes a while to really understand, so don't worry. And BTW, Bostrom is talking about control -- he isn't literally committed to the paperclip maximizer... You know, I actually majored in Philosophy and Math at Berkeley, although Math was always my specialty. I was quite good :-)

johnnyFive  ·  2662 days ago

    when you truly grok that about AI and AGI, you'll be the same as me; horrified beyond belief.

You sure do like moving the goalposts, don't you?

    There IS NO SOLUTION -- so we have to think differently, and maybe -- maybe -- we can come up with something meaningful.

So you grok this all so well but can't offer a solution? Not really making a good case for why we should listen to you on this issue.

Earlier, you said:

    It's really ANI for the military case, not AGI, but that does not mean it can't kill us very effectively. It can and it will.

From reading your argument, you seem to have totally internalized its underlying assumptions, but skip over the part where you support them. Why will it be military AI? How will ISIS or some other such group have access to it? Why will their version be able to defeat one developed in the US? Why would an AGI do what they tell it to (or us for that matter)?

Also, you still haven't answered my earlier comment asking why you think you've said anything many of us haven't already thought about.