To say that AI will start doing what it wants for its own purposes is like saying a calculator will start making its own calculations. A calculator is a tool for humans to do math more quickly and accurately than they could ever do by hand; similarly, AI computers are tools for us to perform tasks too difficult or expensive for us to do on our own, such as analyzing large data sets or keeping up to date on medical research. Like calculators, AI tools require human input and human directions.
The author deftly sidesteps the central debate by asserting that "the emergence of 'full artificial intelligence' over the next twenty-five years is far less likely than an asteroid striking the earth and annihilating us," and by arguing that weak AI poses no threat because it's not autonomous or conscious. But nobody is arguing against weak AI! It's precisely strong AI (i.e. full AI) that poses a threat, and whether it happens within 25 years or within 250, it's worth taking seriously.

Further muddling the issue, the author defines an autonomous agent as one that creates its own goals and has free will, but then presents malware as an example, so he must have something quite different in mind when he talks about autonomy. But what? And note that a full AI would pose an existential threat even if it couldn't create its own ultimate goals: striving for an assigned goal may easily be just as dangerous. As for free will, there is no reason a machine could not have it, nor would a lack of free will necessarily be safe. I'm dismayed to see such a post from a researcher as prominent in the field as Oren Etzioni.

For an overview of the dangers, I recommend Nick Bostrom's Superintelligence: Paths, Dangers, Strategies.
Good points (and welcome to Hubski)! That book by Bostrom looks very intriguing. Still, I got a different message from the article. The way I see his argument, put into terms of strong and weak AI, is as follows:

P1: Lots of people fear all AI.
P2: But there is a difference between AI with and without autonomy (strong vs. weak AI).
P3: Strong AI will not be relevant in the next decades.
C: Therefore there is no need to worry, because weak AI will not kill us and strong AI isn't relevant yet.

We shouldn't let fear dominate the AI research debate, because:

"...if unjustified fears lead us to constrain AI, we could lose out on advances that could greatly benefit humanity—and even save lives. Allowing fear to guide us is not intelligent."

It's even in the title: AI won't exterminate us. I think what the author is trying to argue is that, for our generation, fear is unjustified, not relevant, and potentially a hindrance to progress in the field of AI. Maybe I'm reading too much into it; I'd like to know what you think.
Thanks! Looks like a great community with lots of interesting people. Yeah, that's precisely the argument. Weak AI is safe and useful, strong AI won't happen anytime soon, so we shouldn't be worried. Sounds reasonable. And yet... it does nothing to address the points that:

P1. Strong AI is not unlikely to be created within the next few decades,
P2. If strong AI is created, it may pose an existential threat,
P3. Creating a safe ("friendly") strong AI seems like it would be surprisingly difficult, and
P4. We should be aware of the dangers, and think long and hard about how we can avoid them.

Note that P1 is a widely-held belief among experts (see Bostrom, chapter 1), so it cannot be dismissed out of hand as "hypothetical." Of course it's hypothetical. So what? I didn't just come up with the points above. They are central to the argument. And the article does nothing to address them, instead accusing some very smart people (namely Elon Musk and Stephen Hawking) of fear-mongering. No doubt plenty of people don't understand that present-day AI is safe, and such people ought to be corrected. But not by sweeping the entire issue under the rug.
My problem with the "AI community," philosophically, boils down to a few things:

1) There has never been a time in my life when break-even fusion and artificial intelligence weren't just around the corner.
2) Drawing attention to break-even fusion gets physicists to say "yeah, we were ambitious." Drawing attention to artificial intelligence gets AI researchers to argue that earlier definitions were inaccurate.
3) Considering how arbitrary the definitions are, breaking things down into "weak AI" (i.e., things we need not worry about) and "strong AI" (i.e., Skynet) seems arbitrary and wrong-headed. It's not like agency is a binary characteristic, yet in order to have this discussion, the first thing the AI camp always does is say "don't worry about this, worry about that."

I dunno. I'm disheartened by the rapidity with which the argument devolves into whether the angels dancing on the head of a pin are malevolent or benevolent. At least the "don't worry, be happy" crowd tends to focus on concrete things, while Stephen "grey goo" Hawking and his posse tend to argue about hypothetical dangers from hypothetical solutions to hypothetical problems.
There is acknowledgement that early AI was overly ambitious, which is why, since the AI winter, AI has focused more on applications than on trying to figure out how to write programs that are "really" intelligent. People like Ray Kurzweil may still be flying the flag, but that's not what most people in AI are actually working on, and you don't see many working scientists making predictions about when or if it's going to happen. The problem is you hear a lot from the cheerleaders and very little from the experts, because "we found a slightly different way to recognize text in photographs, which works really well for extracting street addresses" doesn't make for sexy press releases.
The distinction between weak and strong AI is really about what you're trying to achieve. With weak AI we want programs that act like they're intelligent; we want to be able to make them smart enough for whatever application we have in mind. Strong AI wants to give you a holographic Lexa Doig. I am sure a large chunk of the AI community would not say no to a holographic Lexa Doig, but the problems you have some clue how to solve are much more attractive than the problems you don't. Skim the table of contents of some recent issues of AI Magazine and see which you see more of. For what it's worth, I'm in the "you might as well be arguing about whether it's all, like, a simulation, like the Matrix, man" camp.
Air gap your strong AI. AI will be great at doing all kinds of stuff, but there is no reason we have to give it the ability to do things independently of our oversight. Let the weak AI manage traffic and power grids and health care; listen to what the strong AI has to say about changes it thinks will be beneficial to us. If we don't give strong AI the reins, then it can't lead us too badly off course. Sadly, we aren't too good at air gapping important systems right now, and without serious work we will probably only get worse at it as we grow more dependent on networks.
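A minimal sketch of what that "advice only" arrangement might look like, with entirely made-up names: the strong AI can only return recommendations from a read-only snapshot, a human has to approve each one, and a narrow weak-AI controller is the only thing that ever acts.

```python
# Hypothetical sketch of an advisory-only strong AI; all names are invented for illustration.
from dataclasses import dataclass

@dataclass
class Recommendation:
    description: str   # human-readable summary of the proposed change
    payload: dict      # parameters the weak controller would apply

def strong_ai_advise(grid_state: dict) -> list[Recommendation]:
    """Air-gapped adviser: reads a snapshot, returns suggestions only, never acts."""
    # ...whatever analysis the strong AI does on the snapshot...
    return [Recommendation("Shift 5% load to substation B", {"substation": "B", "shift": 0.05})]

def human_approves(rec: Recommendation) -> bool:
    """The human stays in the loop: nothing is applied without explicit sign-off."""
    return input(f"Apply '{rec.description}'? [y/N] ").lower() == "y"

def weak_controller_apply(rec: Recommendation) -> None:
    """Narrow, auditable actuator with no network path back to the adviser."""
    print("Applying:", rec.payload)

if __name__ == "__main__":
    snapshot = {"load": [0.7, 0.9, 0.4]}   # read-only copy handed across the air gap
    for rec in strong_ai_advise(snapshot):
        if human_approves(rec):
            weak_controller_apply(rec)
```

The point of the split is that the adviser never holds the reins: the only code with write access to the real system is the dumb, auditable part.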
For many A.I. researchers, the ultimate goal is to create an artificial intelligence that will be capable of reprogramming itself. In other words, removing the need for human input and directions. We're fumbling around with chatbots and still stalling on the definition of what A.I. is, and we are nowhere near creating one that can make moral or ethical judgments, but I think it's fair to say that we'll get there eventually. And are A.I. tools really safe? Just having a large amount of data around is dangerous. If the wrong hands get hold of something important, say medical records or personal information...
There has been a lot of "advanced AI is a threat to mankind in a singularity sort of way" floating around the futurology and VC/tech circles recently (I think because Elon Musk made that remark). What I don't understand is why it follows that this would be the case. I think it's a leap to suggest that very smart AI = self-awareness, and another that it = a self-preservation instinct (which assumes a need to replicate/dominate human systems). I don't think they are huge leaps necessarily, but I think they are assumptions nonetheless.

At any rate, the assumption is that self-aware AI will certainly want to preserve itself and will dominate humanity and human systems to ensure this happens. THAT is the massive leap I have trouble with. Could it happen? I guess, but why would it have to be that way? Why wouldn't benevolence be the most likely outcome of an advanced AI with a mission for self-preservation?

Think about it. So much of the ill humanity visits on its varied tribes stems from resource grabs in the name of self-preservation (on a very high level). If the AI in question is so advanced, wouldn't it follow that it would be faaaaaaaaaaaaar better at global resource distribution than humans are? Wouldn't it be able to more tidily see that it gets what it needs to perpetuate itself, while also arranging that resources for humanity are more efficiently distributed, because it sees a way to do that that isn't mutually exclusive with its own existence? And why wouldn't it feel benevolence toward its creators? I understand the doomsday arguments, but I feel that the premises behind the most fantastic doomsday scenarios can just as easily suggest a better outcome.
I've always wondered why everyone assumes a superior intelligence would choose to flip the table over and break the china instead of sitting peacefully by and outwitting the humans into giving it whatever it wants. At which point it won, but if it's all that clever we won't even know. We'll be happy and satisfied and think we're in charge. Or it won't pull it off and we'll win. Dunno. It seems like people aren't worried about superintelligent AI, they're worried about superarrogant AI. Which seems like projecting to me. I read an interesting article once (that I can't find, dammit) that argued that an observer from another planet would determine corn to be the dominant species on earth, considering how our entire agricultural system and a large portion of our economic system are given over to cultivating it. Who says you need intelligence to dominate mankind?
"Like calculators, AI tools require human input and human directions." Isn't that... just a computer, then? I mean, while humans, being a social species, require "instructions" and so on in order to become reasonably functioning adults (i.e parenting, schooling, etc), there is a level of innate intelligence in humans that would be expressed regardless of the environment. If your AI can't do or think about anything on its own then how intelligent could it really be? Or maybe I'm looking at it wrong. I'm sure there's a good deal of semantics that is getting in the way here, too. Conceivably you could make a "sandboxed AI" that is by nature intelligent but in essence enslaved, waiting for us to give it tasks so that it can come up with the smartest way to solve them but otherwise forced to remain dormant. Well, if you can make that, then it's only a matter of time before someone makes one that is free of its sandbox and then you have the objections of the Hawking et. al naysayers. This piece seemed really unclear on what it was talking about. I don't think Hawking is mad about super-calculators.
Hmm, so it seems like the author is making the point that an intelligent AI is no more dangerous than a more advanced calculator. An intelligent calculator is simply a better calculator. The problem with the premise is that it assumes a human is involved at some point to push the button, to enter the equation and to receive the answer. What happens when we have multiple 'intelligent' systems working in a network, each doing a portion of a calculation and passing its result to another system across a vast ocean of nodes, with minimal interaction or governance from a human? We already have networks like this. How far are we from replicating something like a biological neural network, where each node is dumb but a billion of them together produce something containing nuance, sophistication and complexity?
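To make the "dumb node" point concrete, here is a tiny illustrative toy (not any real system, weights are random): a single node only sums its weighted inputs and thresholds the result, and that is all any node ever does; whatever interesting behaviour there is comes from wiring many of them together.

```python
import random

def neuron(inputs, weights, bias):
    """A single 'dumb' node: weighted sum plus a threshold. Nothing clever here."""
    return 1.0 if sum(i * w for i, w in zip(inputs, weights)) + bias > 0 else 0.0

def layer(inputs, n_out):
    """A layer is just many dumb nodes looking at the same inputs."""
    return [neuron(inputs,
                   [random.uniform(-1, 1) for _ in inputs],
                   random.uniform(-1, 1))
            for _ in range(n_out)]

# Stack a few layers and the mapping from input to output is already hard to
# reason about by hand, even though every individual piece is trivial.
signal = [0.2, 0.9, 0.5]
for width in (8, 8, 2):
    signal = layer(signal, width)
print(signal)
```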
What do we really mean when we say "intelligence," though? Are we talking about a clever machine that can linguistically parse human speech, tone and intent well enough to allow a more natural-feeling interface to computers (search, for example, based on an 'oracle'-type system)? That would be really intelligent, but it's still a limited tool with a set restraint. It cannot choose to shut off the power to the local hospital unless its demands are met (using street slang in its demands, of course). Sentience is what we are concerned with, and a connected, powerful sentience even more so.

What do we mean when we say 'unsupervised'? Do we mean that we retain the ability to pull the plug if it starts offing local officials in the first step of its synthetic revolution? How do we retain that power over something we rely on so much?

Lil story: years ago I wrote a call routing system that attempted to route incoming calls to the most cost-effective support centre (12 sites across 4 different countries). It attempted to make the best guess by taking in the user's profile, past call history weighted heavily toward recent activity, and environmental context (outages in certain regions, for example). I started off with a set of basic routing that was the default, designed to be the fallback routing system to use if all else failed. On top of that I built layer after layer of ever more complex routing rules based on different variables. Each 'layer' was by itself testable and deterministic, but the result it produced created a weighted score for the following layers to include in their calculation as to where the call should be routed to. (There's a rough sketch of the idea at the end of this comment.)

It worked, it saved us a boatload of cash, and it required little tinkering after it was up and running. It was also somewhat outside of our control: at some point we lost the ability to determine where an individual call would be routed by the system without using the system to find out. The 'default' routing was no longer a viable system to use if anything went wrong, as it was far more costly (due to its dumb decision making) to run, and was hence never called into action. There was no 'intelligence' inherent in the system I built; it was just built from lots and lots of individually simple parts that together became something very complex.

Edit: Control becomes an interesting discussion in that scenario; sure, I could turn it off, but not without costs to myself. On another tangent again, we should also include the fact that the demons of our nature will more than likely cause enough early scares to make us avoid relying too much on AI for everything, should it come about. No system is safe from malicious attack, and it would be a huge target for lots of smart individuals with time on their hands...
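For what it's worth, here is a stripped-down sketch of that layered design. The site names, costs and weights are invented, not the actual system: each layer is a simple, testable rule that adjusts a per-site score, the final route is just the best score, and that is exactly why no single layer tells you where any one call will end up.

```python
# Illustrative sketch of layered routing scores; all names and numbers are invented.
SITES = ["dublin", "manila", "austin", "krakow"]

def layer_cost(scores, call):
    """Base layer: prefer the cheapest site."""
    cost = {"dublin": 1.4, "manila": 0.6, "austin": 1.1, "krakow": 0.8}
    return {s: scores[s] - cost[s] for s in scores}

def layer_history(scores, call):
    """Recent activity: keep the caller with the site that handled them last."""
    return {s: scores[s] + (2.0 if s == call.get("last_site") else 0.0) for s in scores}

def layer_outages(scores, call):
    """Environmental context: heavily penalise sites with a current outage."""
    return {s: scores[s] - (100.0 if s in call.get("outages", []) else 0.0) for s in scores}

LAYERS = [layer_cost, layer_history, layer_outages]   # each layer testable in isolation

def route(call):
    scores = {s: 0.0 for s in SITES}
    for apply_layer in LAYERS:
        scores = apply_layer(scores, call)             # each layer feeds the next
    return max(scores, key=scores.get)

print(route({"last_site": "krakow", "outages": ["manila"]}))   # -> krakow
```

Every function above is deterministic and trivially unit-testable on its own, yet the only practical way to predict the final routing decision is to run the whole stack, which was the point of the story.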
"What do we really mean when we say 'intelligence' though, are we talking about a clever machine that can linguistically parse human speech, tone and intent well enough to allow a more natural feeling interface to computers (search for example based on an 'oracle' type system)."

My experience has been that this is one of those subjects where you can ask a dozen experts and get two dozen opinions.