Not much of an antidote; much of this is a straw-man argument. A lot of it is also just shitting on the kind of people who are interested in AI risk, or saying that because certain logical arguments are uncomfortable they must be wrong. I'm fairly certain most AI-risk advocates agree the risk is small, but small like 1%, not zero. And if/when it does become a problem, it's likely to become a problem really fast, so it makes sense to be prepared. We've had decades to do something about climate change and we haven't, which will be catastrophic; making the same mistake again could be species-ending.

> But if you’re committed to the idea of superintelligence, AI research is the most important thing you could do on the planet right now. It’s more important than politics, malaria, starving children, war, global warming, anything you can think of.

There are already a few billion dollars being spent globally on malaria, and I'm guessing a similar amount on food aid. A much, much smaller amount is spent on AI-risk research. MIRI spent about $1.8 million last year (source); I'm guessing the worldwide total isn't more than $10 million. I'm not saying those numbers should be equal, but it may be that $10 million isn't enough. Or more to the point, a math/programming nerd who wants to help the world and hasn't picked a career path is probably better off looking into AI research than the other problems listed. The reason is that some problems lend themselves to just throwing more money at them; last I knew, $5 at the Against Malaria Foundation bought a bed net. So if you just want to throw money at a problem, AMF is your best choice (note: I regularly donate to AMF among other charities). We have several drugs that treat malaria and bed nets are effective at prevention, and we also know how to prevent and treat starvation (give the person food). These are problems constrained by resources and will, not by knowledge.

tl;dr: The author is exaggerating, though I agree with many of the conclusions.

edit: Apparently OpenAI has a $1 billion endowment, so my guess is that the cause has enough money right now. I also think they have an image problem, so if anybody here is good at marketing and also cares about AI research, maybe you could help them out with that.
It's fairly certain that malaria will continue to kill people. It is very likely that your contributions to AMF will reduce this bad outcome, buying some time until a better solution is found. (Previous improvements in our response to polio and smallpox give reasonable hope for such progress.) Meanwhile, if the risk of AI catastrophe is 1%, then it is 99% certain that resources dedicated to averting that problem will be wasted (disregarding side benefits of the research, which could occur with malaria research as well). There is also some concern that a project like OpenAI could increase the risk of a disaster. An asteroid impact could render all of these problems trivial; it's hard to prioritize giant problems that have tiny probabilities.

I agree that a lot of the essay is not very rigorous, but I think it makes some salient points:

* It is not clear what "hyperintelligence" means, and it is not obvious that anything can be exceedingly more intelligent than people.
* We are not good at "baking in" robust reliability to complex systems; we make gradual improvements through trial and error. Such improvements are easily defeated, often unintentionally.
* The cats and emus demonstrate that superior intelligence does not guarantee the ability to dominate inferiors.
How many existential risks are competing for our attention, brainpower, and funding? Let's brainstorm:

* Asteroid impact
* Solar flare
* Epidemic of an infectious pathogen
* Climate change
* Artificial intelligence
* Nuclear war

That's all I got, and yes, I think we should prepare for all of them.
The hard part is deciding what "prepare" means. Any money and time devoted to the asteroid threat are denied to pathogens, including malaria. If the only criteria for spending resources on a problem are 1) it could cause humans to go extinct and 2) we cannot prove that it is impossible, then the list will grow endlessly, with no guarantee that we have thought of everything:

* Supervolcano
* Grey goo
* Nearby supernova/hypernova
* Anoxic event
* Particle accelerator mishap
* Hostile alien invasion
* Wrath of a supreme being

It's not easy, but I think we must do some kind of cost-benefit analysis before dedicating significant resources to improbable doomsday scenarios.
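For what it's worth, here is a minimal back-of-the-envelope sketch of the kind of comparison I mean. Every number in it (probabilities, losses, mitigation effects, budget) is made up purely for illustration; the point is only the shape of the calculation, not the conclusions.

```python
# Toy expected-loss comparison across risks.
# ALL NUMBERS ARE HYPOTHETICAL -- chosen only to show the structure of a
# cost-benefit calculation, not to argue for any particular priority.

risks = {
    # name: (annual probability, loss if it occurs in arbitrary "harm units",
    #        fraction of that loss a fixed mitigation budget could avert)
    "malaria":         (1.0,   1e0, 0.30),  # certain, ongoing, partly preventable
    "asteroid impact": (1e-7,  1e6, 0.50),
    "ai catastrophe":  (1e-4,  1e6, 0.10),
}

for name, (prob, loss, averted_fraction) in risks.items():
    expected_loss = prob * loss                      # expected harm per year
    mitigation_value = expected_loss * averted_fraction  # expected harm averted
    print(f"{name:16s} expected loss: {expected_loss:10.4f}  "
          f"expected value of mitigation: {mitigation_value:10.4f}")
```

With different guesses for the inputs you get very different rankings, which is exactly why the arguments over the probability and severity of each scenario matter so much.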
In terms of AI risk, I hold the view that if it can be created, it will be, and it should be. Humanity holding back a machine with near-infinite thinking and creative capability is idiotic and backwards. If the potential exists, I will happily set such an AI in motion.