Long and richly illustrated antidote to strong AI singularity fears, with mentions of Perry Bible Fellowship and The Emu War.
- We need better scifi!
Not much of an antidote; much of this is a straw-man argument. A lot of it is just shitting on the kind of people who are interested in AI risk, or saying that because certain logical arguments are uncomfortable, they must be wrong. I'm fairly certain most AI-risk advocates agree the risk is fairly small, but small like 1%. And if/when it does become a problem, it's likely to become a problem really fast, so it makes sense to be prepared. We've had decades to do something about climate change and we haven't, which will be catastrophic; making the same mistake again could be species-ending.

> But if you’re committed to the idea of superintelligence, AI research is the most important thing you could do on the planet right now. It’s more important than politics, malaria, starving children, war, global warming, anything you can think of.

There are already a few billion dollars being spent globally on malaria, and I'm guessing a similar amount on food aid. A much, much smaller amount is spent on AI-risk research. MIRI spent about $1.8 million last year (source); I'm guessing the worldwide total isn't more than $10 million. I'm not saying those numbers should be equal, but it may be that $10 million isn't enough.

Or more to the point: a math/programming nerd who wants to do something that will help the world and hasn't picked a career path is probably better off looking into AI-risk research than the other problems listed. The reason is that some problems lend themselves to just throwing more money at them; last I knew, $5 at the Against Malaria Foundation bought a bed net. So if you just want to throw money at a problem, the AMF is your best choice (note: I regularly donate to AMF among other charities). We have several drugs that treat malaria and bed nets are effective at prevention, and we also know how to prevent and treat starvation (give the person food). These are problems constrained by resources and will, not by knowledge.

tl;dr: The author is exaggerating, though I agree with many of his conclusions.

edit: Apparently OpenAI has a $1 billion endowment, so my guess is the cause has enough money right now. I also think they have an image problem, so if anybody here is good at marketing and also cares about AI research, maybe you could help them out with that.
It's fairly certain that malaria will continue to kill people, and it is very likely that your contributions to AMF will reduce this bad outcome, buying some time until a better solution is found. (Previous improvements in our response to polio and smallpox give reasonable hope for such progress.) Meanwhile, if the risk of AI catastrophe is 1%, then it is 99% certain that resources dedicated to averting that problem will be wasted (disregarding side benefits of the research, which could occur with malaria research as well). There is also some concern that a project like OpenAI could increase the risk of a disaster. An asteroid impact could render all these problems trivial; it's hard to prioritize giant problems that have tiny probabilities.

I agree that a lot of the essay is not very rigorous, but I think it makes some salient points:

· It is not clear what "hyperintelligence" means, and it is not obvious that anything can be vastly more intelligent than people.
· We are not good at "baking in" robust reliability to complex systems; we make gradual improvements through trial and error, and such improvements are easily defeated, often unintentionally.
· The cats and emus demonstrate that superior intelligence does not guarantee the ability to dominate inferiors.
How many existential risks are competing for our attention, brainpower, and funding? Let's brainstorm.

* Asteroid impact
* Solar flare
* Epidemic of an infectious pathogen
* Climate change
* Artificial intelligence
* Nuclear war

That's all I got, and yes, I think we should prepare for all of them.
The hard part is deciding what "prepare" means. Any money and time devoted to the asteroid threat is denied to pathogens, including malaria. If the only criteria for spending resources on a problem are 1) it could cause humans to go extinct and 2) we cannot prove that it is impossible, then the list will grow endlessly, with no guarantee that we have thought of everything:

· supervolcano
· grey goo
· nearby supernova/hypernova
· anoxic event
· particle accelerator mishap
· hostile alien invasion
· wrath of a supreme being

It's not easy, but I think we must do some kind of cost-benefit analysis before dedicating significant resources to improbable doomsday scenarios.
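To make "some kind of cost-benefit analysis" concrete, here's a minimal sketch of the arithmetic I have in mind. Every probability, death toll, and cost below is a made-up placeholder, not an estimate; the point is only the shape of the calculation.

```python
# Back-of-the-envelope expected-value triage for existential risks.
# All numbers are hypothetical placeholders, purely to show the arithmetic;
# plug in your own estimates.

risks = {
    # name: (probability this century, deaths if it happens, mitigation cost in $)
    "asteroid impact":   (1e-4, 7e9, 5e8),
    "pandemic pathogen": (5e-2, 1e9, 5e9),
    "nuclear war":       (1e-2, 2e9, 1e10),
    "hostile AI":        (1e-3, 7e9, 1e9),
}

def expected_deaths(prob, deaths):
    """Expected deaths = probability of the event times deaths given the event."""
    return prob * deaths

# Rank by expected deaths averted per dollar, assuming (generously) that the
# mitigation spending eliminates the risk entirely.
ranked = sorted(
    risks.items(),
    key=lambda item: expected_deaths(item[1][0], item[1][1]) / item[1][2],
    reverse=True,
)

for name, (prob, deaths, cost) in ranked:
    ev = expected_deaths(prob, deaths)
    print(f"{name:20s} expected deaths {ev:14,.0f}   averted per dollar {ev / cost:8.4f}")
```

Run with honest inputs, a ranking like this at least forces "could cause extinction and can't be proven impossible" through a probability and a price tag before it claims any resources. The hard part, of course, is that for the doomsday scenarios the probabilities themselves are exactly what nobody agrees on.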
In terms of AI risk, I hold the view that if it can be created, it will be, and it should be. Humanity holding back a machine with near-infinite thinking and creative capability is idiotic and backwards. Should the potential arise, I will happily set such an AI in motion.
I'll be honest, this guy sounds just as full of himself as the people he claims are full of themselves.

> In particular, there's no physical law that puts a cap on intelligence at the level of human beings.

That could very well be false, considering conservation of energy. At the very least, energy puts a reasonable practical cap on intelligence, one that would keep a singularity from happening and let humanity compete even with a smarter brain.

Secondly, the idea that ideas are created with pure processing power and can be "made inside of a box" seems odd to me. You need random variation and a wide range of data and observation to produce new ideas and technology. We, humanity, are a reproducing system that feeds information around itself as it spreads over a large range of space, making a large and diverse range of observations. An AI would, at the very least, have to ride on our "backs" before it managed to supersede our collective inventive ability. In fact, a huge reason AI is booming right now is that data collection and storage boomed before it. Data is what's required to be intelligent, not necessarily computing speed, and I'd bet on humanity being bottlenecked not by our processing power but by our observational power, and our observation-filtering power.

Imagine we wanted to optimize for speed and made a bike, then assumed these bikes were going to drive antelopes extinct. No, because bikes can't eat, they can't reproduce, they don't survive very long, and they don't deal well with sand. This is the supercomputer argument: the idea that a thing optimized for speed is going to beat out a thing optimized to survive, reproduce, and be fit in its environment.
Previously on Bostrom

The full thread, which is well worth reading:

It's particularly interesting that he discusses Art Bell, UFOs, and cults. Jon Ronson outlines in The Men Who Stare at Goats how Art Bell's broadcasts were directly responsible for the Heaven's Gate mass suicide.

The Strugatsky Brothers' Roadside Picnic was also zebra2's first sci-fi club:
That's a lot of not very well thought out arguments. It's not that the arguments are wrong, but that they don't address the argument being made.

The Argument From Wooly Definitions - It's true that we don't have an exact list of capabilities an AI would need to be called AGI, but that doesn't matter. If you agree that one day we will create an AI that can figure out for itself how to improve its effectiveness at achieving its goals, taking anything relevant into account, then definitions of AGI don't matter. The AI can recreate itself with the necessary architecture to be a 'proper' AGI if it needs to.

The Argument From Stephen Hawking's Cat, The Argument From Einstein's Cat and The Argument From Emus - This is the argument that intelligence is not enough to be all-powerful. Again, this doesn't matter: we are not all-powerful, yet we have the power to destroy all life on Earth (if we put our minds to it). Hawking, Einstein and the Australian government could all herd cats and kill all the emus if they ignored the negative consequences of the methods that would work. Our inability to do these things just shows the limits of our intelligence, not of intelligence itself.

The Argument From Slavic Pessimism - This is straight up in favour of the argument that AI is a threat and hard to control.

The Argument From Complex Motivations - This would just make it harder for us to know why it's killing us, if that's what it decides to do.

The Argument From Actual AI - Wait, is this supposed to be an argument against AI becoming superintelligent? This just tells us that it isn't going to happen next year.

The Argument From My Roommate - So the AI may not be very effective? We (or they) will just build a more effective one.

The Argument From Brain Surgery - This is a fair argument, except it only applies to a very specific case. Like many of these arguments, you need a very specific way for the AI to work for the argument to hold. Chances are there are many ways general AI could be built.

The Argument From Childhood - The first general AI will likely need lots of 'growing up' time before it is effective (maybe), but any subsequent AI can be 'born' with all the experience and wisdom of all previous AIs.

The Argument From Gilligan's Island - "A recurring flaw in AI alarmism is that it treats intelligence as a property of individual minds, rather than recognizing that this capacity is distributed across our civilization and culture." - What? Is this assuming that the AI we are talking about doesn't have the ability to be specialized in many different areas? When did anyone say that? Why would that be true?

The Outside Argument - This is just pointless character assassination of 'smart people in Silicon Valley', which does nothing to counter the argument. Sure, it's perfectly reasonable to be skeptical of people who appear crazy, but looking crazy doesn't make what they say untrue.
#IAmVerySmart

The Idea That Eats Smart People