That's a lot of not-very-well-thought-out arguments. It's not that the arguments are wrong, but that they don't address the argument being made.

The Argument From Wooly Definitions - It's true that we don't have an exact list of capabilities an AI would need to be called AGI, but that doesn't matter. If you agree that one day we will create an AI that can figure out for itself how to improve its effectiveness at achieving its goals, taking anything relevant into account, then definitions of AGI don't matter. The AI can recreate itself with the necessary architecture to be a 'proper' AGI if it needs to.

The Argument From Stephen Hawking's Cat, The Argument From Einstein's Cat and The Argument From Emus - This is the argument that intelligence is not enough to be all-powerful. Again, this doesn't matter: we are not all-powerful, yet we have the power to destroy all life on Earth (if we put our minds to it). Hawking, Einstein and the Australian Government all have the ability to herd cats and kill all emus if they ignore the negative consequences of the methods that would work. Our inability to do these things just shows the limits of our intelligence, not of intelligence itself.

The Argument From Slavic Pessimism - This is straight-up in favour of the argument that AI is a threat and hard to control.

The Argument From Complex Motivations - This would just make it harder for us to know why it's killing us, if that's what it decides to do.

The Argument From Actual AI - Wait, is this supposed to be an argument against AI becoming superintelligent? This just tells us that it isn't going to happen next year.

The Argument From My Roommate - So the AI may not be very effective? We (or they) will just build a more effective one.

The Argument From Brain Surgery - This is a fair argument, except it only applies to a very specific case. Like many of these arguments, you need a very specific way for the AI to work for the argument to hold. Chances are there are many ways general AI could be built.

The Argument From Childhood - The first general AI will likely need lots of 'growing up' time before it is effective (maybe), but any subsequent AI can be 'born' with all the experience and wisdom of all previous AIs.

The Argument From Gilligan's Island - "A recurring flaw in AI alarmism is that it treats intelligence as a property of individual minds, rather than recognizing that this capacity is distributed across our civilization and culture." What? Is this assuming that the AI we are talking about doesn't have the ability to be specialized in many different areas? When did anyone say that? Why would that be true?

The Outside Argument - This is just pointless character assassination of 'smart people in Silicon Valley', which does nothing to counter the argument. Sure, it's perfectly reasonable to be skeptical of people who appear crazy, but looking crazy doesn't make what they say untrue.
It's mostly symbolic since the negotiations for it had failed.