comment by thenewgreen
thenewgreen  ·  4040 days ago  ·  link  ·    ·  parent  ·  post: Singularity or Bust

Ten years? I get that he's saying that if we actually used the resources we have available we could achieve this, but still, that seems so fast. What do you think? Also, I don't think the "negative" outcome of human warfare over whether or not to produce these AIs is realistic. It won't be something that happens overnight; it will be a gradual occurrence. By the time these questions arise, we humans will have already integrated AI heavily into our lives/selves. Don't you think?

theadvancedapes  ·  4040 days ago  ·  link  ·  

    Ten years? I get that he's saying that if we actually used the resources we have available we could achieve this, but still, that seems so fast.

I agree with him that if we collectively put as much money into genetics, nanotechnology, and robotics as we collectively invest into our military we would have a singularity in 10 years. I also think we could seriously begin Martian colonization with permanent safe settlements in 10 years if we put as much money into space travel as we do into the military. But we won't do either.

    Also, I don't think the "negative" outcome of human warfare over whether or not to produce these AIs is realistic. It won't be something that happens overnight; it will be a gradual occurrence.

My view is that conceptualizing human/AI conflict as a binary is foolish (as de Garis does). Brain-interfacing technologies already exist, and it's only 2013. Brain-interfacing technologies will actually make us cyborgs and enhance our intelligence to the level of our first A.I. systems by 2030. So in my view it will be a merger (this is generally referred to as the "Kurzweilian scenario") (Goertzel did a fantastic overview of all singularity scenarios here). It makes no sense for A.I. to be in direct conflict with us - they will still be dependent on the system they are emerging from - and evolutionary pressures will force us to enhance our intelligence to keep up anyway. In my talks with Ben, he expresses a great deal more pessimism than he does in this documentary. He and Francis Heylighen (from the Global Brain Institute) frequently argue because Heylighen believes it will be a utopian-like era compared to our current existence, while Goertzel thinks it's 50/50 (positive/negative).

Heylighen justifies this assertion by comparing the behaviour of neurons in the brain to that of humans in the Global Brain. We will all be permanently interacting on the Internet - and that system will be very intelligent - and it will be in the system's best interest to keep all of its neurons around, not to destroy them (just as our brain does). In fact, it will be in the system's best interest to enhance all experience to the greatest degree possible, because it is the intelligent agents' interaction that creates the system's own intelligence.

Are there major concerns with the future of nanotech and A.I.? Of course. But I don't think a Terminator scenario is likely at all.