Thanks! Looks like a great community with lots of interesting people. Yeah, that's precisely the argument: weak AI is safe and useful, strong AI won't happen anytime soon, so we shouldn't be worried. Sounds reasonable. And yet... it does nothing to address the points that:

P1. Strong AI is not unlikely to be created within the next few decades.
P2. If strong AI is created, it may pose an existential threat.
P3. Creating a safe ("friendly") strong AI seems like it would be surprisingly difficult.
P4. We should be aware of the dangers and think long and hard about how we can avoid them.

Note that P1 is a widely held belief among experts (see Bostrom, chapter 1), so it cannot be dismissed out of hand as "hypothetical." Of course it's hypothetical. So what? I didn't just come up with the points above; they are central to the argument. And the article does nothing to address them, instead accusing some very smart people (namely Elon Musk and Stephen Hawking) of fear-mongering. No doubt plenty of people don't understand that present-day AI is safe, and such people ought to be corrected. But not by sweeping the entire issue under the rug.

Good points (and welcome to Hubski)!
My problem with the "AI community," philosophically, boils down to a few things:

1) There has never been a time in my life when break-even fusion and artificial intelligence weren't just around the corner.
2) Drawing attention to break-even fusion gets physicists to say "yeah, we were ambitious." Drawing attention to artificial intelligence gets AI researchers to argue that earlier definitions were inaccurate.
3) Considering how arbitrary the definitions are, breaking things down into "weak AI" (i.e., things we need not worry about) and "strong AI" (i.e., Skynet) seems arbitrary and wrong-headed.

It's not like agency is a binary characteristic, yet in order to have this discussion, the first thing the AI camp always does is say "don't worry about this, worry about that." I dunno. I'm disheartened by the speed with which the argument devolves into whether the angels dancing on the head of a pin are malevolent or benevolent. At least the "don't worry, be happy" crowd tends to focus on concrete things, while Stephen "grey goo" Hawking and his posse tend to argue about hypothetical dangers from hypothetical solutions to hypothetical problems.
There is acknowledgement that early AI was overly ambitious, which is why, since the AI winter, AI has focused more on applications than on trying to figure out how to write programs that are "really" intelligent. People like Ray Kurzweil may still be flying the flag, but that's not what most people in AI are actually working on, and you don't see many working scientists making predictions about when, or if, it's going to happen. The problem is that you hear a lot from the cheerleaders and very little from the experts, because "we found a slightly different way to recognize text in photographs, which works really well for extracting street addresses" doesn't make for sexy press releases.
The distinction between weak and strong AI is really about what you're trying to achieve. With weak AI, we want programs that act like they're intelligent; we want to be able to make them smart enough for whatever application we have in mind. Strong AI wants to give you a holographic Lexa Doig. I am sure a large chunk of the AI community would not say no to a holographic Lexa Doig, but the problems you have some clue how to solve are much more attractive than the problems you don't. Skim the table of contents of some recent issues of AI Magazine and see which you see more of. For what it's worth, I'm in the "you might as well be arguing about whether it's all, like, a simulation, like the Matrix, man" camp.
Air-gap your strong AI. AI will be great to DO all kinds of stuff, but there is no reason we have to give it the ability to do things independently of our oversight. Let the weak AI manage traffic and power grids and health care; listen to what the strong AI has to say about changes it thinks will be beneficial to us. If we don't give strong AI the reins, then it can't lead us too badly off course. Sadly, we aren't too good at air-gapping important systems right now, and without serious work we will probably only get worse at it as we grow more dependent on networks.
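The advisory-only split described here — an air-gapped strong AI that can only propose changes, a human who must approve them, and a networked weak AI that applies only what was approved — can be sketched roughly like this. Everything below (the names, the stub proposals, the approval rule) is hypothetical and purely for illustration:

```python
from dataclasses import dataclass

@dataclass
class Proposal:
    """A change suggested by the air-gapped advisory system."""
    description: str
    approved: bool = False

def advisory_ai() -> list[Proposal]:
    # Hypothetical stand-in for the air-gapped "strong AI": it can only
    # emit suggestions, never touch live systems directly.
    return [
        Proposal("Reroute grid load from substation A to B"),
        Proposal("Retime traffic lights on Main St"),
    ]

def human_review(proposals, approve):
    # Humans are the only bridge across the gap: nothing crosses
    # without an explicit approval decision per proposal.
    for p in proposals:
        p.approved = approve(p)
    return [p for p in proposals if p.approved]

def operational_system(approved):
    # The networked "weak AI" side applies only approved changes.
    return [f"APPLIED: {p.description}" for p in approved]

if __name__ == "__main__":
    proposals = advisory_ai()
    approved = human_review(
        proposals,
        approve=lambda p: "traffic" in p.description.lower(),
    )
    for line in operational_system(approved):
        print(line)
```

The point of the structure is that the dangerous component has no code path to the actuators: its output is data that sits in a queue until a person moves it across. Of course, as noted above, that guarantee is only as good as our discipline about keeping the gap actually gapped.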