comment by Isherwood
Isherwood  ·  2662 days ago  ·  post: I Don't Think You Truly Grok The Problem.

But some soldier (well, two soldiers) in Nebraska or Siberia could launch a nuke and kill us all. Why is AI more dangerous?

And if it's an arms race, then there won't be just one side with access to the weapons and, like all war, attack will become a cost benefit analysis in the face of retribution. Why won't there be counter AI to fight off attacks?

And we're a million times smarter than ants, right? So why are there still ants? What are the machines going to see in us that means we should all die? I've asked that question three times now.

ideasware  ·  2662 days ago

Well I have answered it a couple of times too :-) but let's try again with a fresh take.

First off, that's precisely my point about a couple of soldiers in Utah or Nebraska firing off nuclear weapons. They are very specialized soldiers, with very special training, and it takes two of them to do it AT THE SAME TIME, with a host of protections to back it up. But one programmer from ISIS can do it immediately, and there are literally infinitely more programmers who can do it in a heartbeat than highly specialized, trained soldiers with specialized equipment to back them up -- that's the entire point!

The really important point is that a programmer's shady arm from around the world (there is no concept of place in this internet-world) can now do the things that used to be a specialized soldier's prerogative. And the defense will be no use at all, because soon there will be more than just one little programmer but a 10,000-strong army, one that can wipe the floor with any army for the first quarter hour, which is all it will take to send us back to the stone age. That's one of the many things I'm afraid of now -- it's not like it was when it was one big army against another -- that could be handled. This cannot.

kleinbl00  ·  2661 days ago

...look. SAC is running 8" floppies. 'member WarGames? There's a reason they started it out with Michael Madsen failing to launch - they needed to wave hands and get rid of all the humans in the silos.

But it's been 35 years and the humans are still there. The tech has not changed. The doctrine has not changed. The Soviets probably built something Strangelovian but it's about as smart as an automatic transmission.

More than that, you're not talking about a malevolent AI, you're talking about an ISIS hacker. And okay- maybe your argument is ZOMG ISIS could hack our silos or something. But you can no more hack a missile silo than you can hack a garbage truck. It's not an access problem, it's a sophistication problem. Never mind the airgap. There simply isn't the instrumentation or automation that would allow a garbage truck to so much as malevolently drop a dumpster.

And that's at the heart of most of your worries - oh shit an AI could take over the world and wipe us all out. But that pesky "take over the world" part is messy, man. It really is. There ain't a lot of the world that's automated and that's not likely to change - "automated" is very different from "mechanized" or "computer-controlled" or even "digital."

Isherwood  ·  2662 days ago

I mean, I heard this level of fear around nukes, and I'm glad they have regulations (I think regulations should apply to any weapon), but the situation you lay out just seems like sensationalism, and I don't see how it's different from other weapons of mass destruction.

And we haven't had total war like you're describing since WWII - and at the time it couldn't be handled; no one knew what to do about nukes, or chain guns, or tanks. War is a never-ending series of escalations that we're never ready for, and our lack of preparedness pushes us to new escalations.

I get that it's scary but it seems like fear mongering.

But, if it's not, what's the suggestion to stop the world from ending?

Devac  ·  2662 days ago
This comment has been deleted.
Isherwood  ·  2661 days ago

I know I shouldn't feed, but sometimes I just can't help myself.

cgod  ·  2661 days ago

Lol exactly.

He can't "grock" the difference between speculation and fact. He doesn't seem to understand social norms or be able to understand basic social cues. He either doesn't read comments closely or has a a reading disability or doesn't know the recent history of some very basic A.I. (self driving cars). He often seems unable to distinguish between criticism of his manner and the material he is presenting (as if the material and his behavior are the same thing. Taking criticisms of his behavior as disagreement with the matetial).

He is a super strange person who seems to either have a screw loose or be lingering on the far edge of the functional spectrum.

Hubski's A.I. Pope is super bizarre.

Devac  ·  2661 days ago
This comment has been deleted.
user-inactivated  ·  2661 days ago

There are a handful of hubskiers I know of, myself included, who have done/are doing real AI work. I've been trying to ignore this dude's threads, but I'll point out that everyone else I know of who knows their shit hasn't bothered to comment in them either.

Real-world AI is mostly boring, because things you can automate are always boring. PROFOUND/SCARY AI THING articles are always more science fiction than science.

ideasware  ·  2661 days ago

You know it's quite hilarious -- my Facebook friends (and I only have a few hundred, my actual friends) include Oren Etzioni and Ben Goertzel and Roman Yampolskiy and Toby Walsh and Sebastian Thrun and Rob Enderle and many others -- it's you who have your heads on backwards. I humbly suggest you rethink your position -- it's really not going to work any more.

kleinbl00  ·  2661 days ago

Question: how much of current self-driving car tech counts as AI? My understanding is that it's pretty much just sophisticated telemetry and complex rulebooks.

user-inactivated  ·  2661 days ago

Elaborating now that I'm not half-asleep: AI algorithms are either automating logical inference or automating statistical inference, or a mixture of the two. You can call a spam filter AI if you want and no one will call you on it, but it's just boring old hypothesis testing if you do what it does with pencil and paper. Likewise, self-driving cars aren't different in kind from missile guidance systems; they can just do more because we have better computers. AI is more about who you studied with, how you want to think about the problems you're solving and, less benignly, how the marketing people want to talk about the problems you're solving, than it is a distinct kind of technology.
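To make the "boring old hypothesis testing" point concrete, here's a minimal sketch of a bag-of-words naive Bayes spam filter in Python. The tiny corpus and word lists are made up purely for illustration; a real filter does this same counting and smoothing at scale.

    import math
    from collections import Counter

    # Toy training data -- made up purely for illustration.
    spam_docs = ["win cash now", "cheap pills win big", "free cash offer"]
    ham_docs = ["meeting at noon", "see you at the meeting", "lunch plans for noon"]

    def counts(docs):
        c = Counter()
        for d in docs:
            c.update(d.split())
        return c

    spam_counts, ham_counts = counts(spam_docs), counts(ham_docs)
    vocab = set(spam_counts) | set(ham_counts)
    n_total = len(spam_docs) + len(ham_docs)

    def log_score(msg, word_counts, n_class):
        # Class prior plus Laplace-smoothed word likelihoods, all in log space.
        total_words = sum(word_counts.values())
        score = math.log(n_class / n_total)
        for w in msg.split():
            score += math.log((word_counts[w] + 1) / (total_words + len(vocab)))
        return score

    def classify(msg):
        spam_score = log_score(msg, spam_counts, len(spam_docs))
        ham_score = log_score(msg, ham_counts, len(ham_docs))
        return "spam" if spam_score > ham_score else "ham"

    print(classify("win free cash"))    # -> spam
    print(classify("see you at noon"))  # -> ham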

user-inactivated  ·  2661 days ago

I'm not entirely sure how to answer that, but if you Wikipedia "Kalman filter" and follow along, you're a couple of months from following Sebastian Thrun's book, and the rest depends on which side of the "DARPA got us this far" and "Google has hookers and blow" divide you fall on.
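For anyone who wants to see how un-mystical that starting point is, here's a minimal sketch of the one-dimensional Kalman filter the Wikipedia article walks through. The noise variances and "sensor" readings below are made up; the point is that the predict/update loop is a few lines of arithmetic.

    # Minimal 1-D Kalman filter: estimate a constant position from noisy readings.
    # The variances and the readings are made up for illustration.

    def kalman_1d(measurements, process_var=1e-3, meas_var=0.25):
        x, p = 0.0, 1.0  # state estimate and its variance
        estimates = []
        for z in measurements:
            # Predict: the state is modeled as constant, so only uncertainty grows.
            p += process_var
            # Update: blend the prediction and the measurement via the Kalman gain.
            k = p / (p + meas_var)
            x = x + k * (z - x)
            p = (1 - k) * p
            estimates.append(x)
        return estimates

    noisy_readings = [4.9, 5.2, 5.1, 4.8, 5.0, 5.3]
    print(kalman_1d(noisy_readings))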

kleinbl00  ·  2661 days ago

I want to believe I'd understand that if I hadn't polished off a bottle of Maker's Mark. In my heart of hearts I know I'll still be snowed in the morning.

user-inactivated  ·  2661 days ago

Then it's pretty much just sophisticated telemetry (paid for by DARPA), complex rulebooks (state DoTs and state university civil engineering departments), and an army of frustrated PhD candidates (DARPA again, except not paying).

cgod  ·  2661 days ago

I think he is benevolent.

I'd probably say passionate.

But an odd duck with terrible social skills and an odd reality.

ideasware  ·  2661 days ago

"Grok" is a word first used by Robert H. Heinlein, a Scifi writer, and is very real.

I'll just move on for now -- sometimes it's best just to move ahead, and not worry about cgod's buffoonery.

cgod  ·  2658 days ago

I use grock correctly in a sentence (with quotation marks because it's silly) and you have to act like I'm some kind of big dummy, letting me know it's a "real" word used by Heinlein. For some reason you just have to assume that everyone is an ignoramus even when all the evidence is to the contrary.

I know my Heinlein, I even remember how to drive the tractor.

ideasware  ·  2662 days ago

That's precisely it -- it really is fundamentally different, and when you truly grok that about AI and AGI, you'll be the same as me; horrified beyond belief.

It's funny, because I used to be exactly like you, skeptical of off-the-wall theories, and very confident we could do something, even if it would not be known right now. But about 8 years ago I realized that AI was totally different, a whole new threat, and this time it was terribly real, like nothing that has ever been seen before, ever. This was a fundamental advance, and it WILL be incredibly beneficial and useful without a doubt, but the downside is equally horrific -- literally the end of this world in our lifetime. I researched it for many years before I came to this conclusion -- it was not a half-assed remark or sassy conclusion by any means, and I really feel like you'd better research it too, more closely.

The programmer's lazy arm in ISIS, typing out instructions for the end of this world. It's 100,000 times different and easier than a soldier's iron specification, and it will be the death of us. There IS NO SOLUTION -- so we have to think differently, and maybe -- maybe -- we can come up with something meaningful.

Isherwood  ·  2662 days ago

So all you have is a paperclip maximizer fear.

ideasware  ·  2662 days ago

Hahaha... I understand; it takes a while to really understand, so don't worry. And BTW Bostrom is talking about control, not actually really sticking to a paperclip maximizer... You know I actually majored in Philosophy and Math at Berkeley, although Math was always my specialty. I was quite good :-)

johnnyFive  ·  2662 days ago

    when you truly grok that about AI and AGI, you'll be the same as me; horrified beyond belief.

You sure do like moving the goalposts, don't you?

    There IS NO SOLUTION -- so we have to think differently, and maybe -- maybe -- we can come up with something meaningful.

So you grok this all so well but can't offer a solution? Not really making a good case for why we should listen to you on this issue.

Earlier, you said:

    It's really ANI for the military case, not AGI, but that does not mean it can kill us very effectively. It can and it will.

From reading your argument, you seem to have totally internalized its underlying assumptions, but skip over the part where you support them. Why will it be military AI? How will ISIS or some other such group have access to it? Why will their version be able to defeat one developed in the US? Why would an AGI do what they tell it to (or us for that matter)?

Also, you still haven't answered my earlier comment asking why you think you've said anything many of us haven't already thought about.