user-inactivated  ·  2725 days ago  ·  post: What Do Buddhist Monks Think of the Trolley Problem? - The Atlantic

I was hoping it would go deeper into the mentality that led the monks to believe the solution they chose was the correct one. Overall, however, the article's thesis is "this may be our biological basis for morality going haywire, and it relates to the prospect of driverless cars."

kleinbl00  ·  2725 days ago

Whenever you see the phrase "trolley problem" used in the popular press, you can correctly assume that

(A) the author is going to speak about autonomous vehicles without knowing a fucking thing about them

(B) the author is going to speak about law without knowing a fucking thing about it

(C) the author is going to speak about philosophy without knowing a fucking thing about it

(D) the author is writing for an audience who knows even less than he or she does

(E) the author is actively increasing the audience's naivete and ignorance about the subject

Truly. These articles can all be skipped. It will probably make you smarter to do so.

user-inactivated  ·  2724 days ago

    These articles can all be skipped.

Any good ones you've seen so far?

jadedog  ·  2722 days ago

From the little I know of Buddhism, I think the reason the monks pick the direct route of pushing the fat man comes down to intention.

In Buddhism, there's a story of a man who killed a ship's captain because the captain was leading the ship into danger that would have killed all the men. Because the man's intention was to save the crew, killing the captain wasn't seen as bad.

Using the switch to divert the trolley would be deception, which is not a pure intention.

The idea is that if you're going to do a wrong thing, you shouldn't disguise it by deceiving others about what you did, because that's not a pure intention.

My question would be whether you can program intention, and more importantly, whether that would be a good thing. Perhaps not. A human acting with pure intention is a good thing in part because it's clear who is taking responsibility. If the car is doing the killing, is it the car, or the owner of the car, that takes responsibility?

user-inactivated  ·  2722 days ago

    My question would be whether you can program intention, and more importantly, whether that would be a good thing.

This is a sentiment I hear most often from people far from the field of artificial intelligence: that building a machine with intent may not be a good thing. I believe it's driven by the same idea as all anti-robot sentiment: that human life is somehow special and more precious, and that the human mind is equally special and precious.

To that, I raise a different question: what is so special about us? Sapience? It is, in fact, special - we know of no other species capable of such progressive reasoning - but why is it so precious that we want to prohibit ourselves from developing something similar by hand?

To answer "Why is it not?" is to dodge the question. There's no obvious, reasonable rationale for keeping ourselves the only sapient species beyond our species-wide sense of natural superiority and control over our environment. Let me be clear: we're not talking about killing human beings here - that's a completely different argument, about the preciousness of human life. We're talking about making something equally precious in its ability to reason.

Once we build something similarly sapient - let's dub it an artificial general intelligence, or AGI, for simplicity of terminology, though human cloning raises similar issues - we're letting go of our solitary control over our environment and giving it away, consciously, to a mind we have no sway over - beyond, of course, turning it off, much as with a human being. This makes us no longer the king of the hill when it comes to advanced thought: we'd have to make space - or, worse, give up the position altogether - for something similarly or more capable than us. This runs entirely contrary to our idea of natural superiority as a species, irrational and entirely narcissistic as that idea is.

But let's put AGI aside and talk about something simpler: an AI as the sole driver of an automobile, with no support from a human being. It makes all the decisions, communicates with other cars on the road for optimal routing, and does its best to avoid a collision when one seems imminent. We no longer have control over our environment: we've given it away completely to an automaton we have no sway over. Naturally, this is terrifying for humanity: we would be submitting to an outside intelligence for decisions we can't review or argue over. We couldn't even if we wanted to: an AI driver processes environmental information far more swiftly than we could ever hope to at the conscious level.
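
To make that concrete, here's a minimal sketch of the decision loop such a driver might run - every name in it is hypothetical, not any real vehicle's API. The driver enumerates candidate maneuvers, predicts each outcome, and picks whichever is cheapest under the cost function its builders gave it:

    def choose_maneuver(sensor_frame, candidates, predict, cost):
        """Pick the candidate maneuver whose predicted outcome costs least."""
        best, best_cost = None, float("inf")
        for maneuver in candidates:
            outcome = predict(sensor_frame, maneuver)  # physics / traffic model
            c = cost(outcome)                          # encodes the builders' values
            if c < best_cost:
                best, best_cost = maneuver, c
        return best

A loop like this would run many times per second - which is exactly why no conscious human review of each choice is possible.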

So now we have a black-box mechanical mind that we have no control over in our daily lives, one that drives us around as needed and decides all the required questions for itself: how fast to go, which street to take, and whether to avoid hitting the man crossing the road at the wrong moment. Terrified yet?

However, I think the concept of such a monstrous mechanism is a lie. It's the product of many biases coupled with a common misunderstanding of the whole process. We tend to present such mechanisms as holistic, but they're nothing more than their parts working together. You can only get a murderous, rampaging machine if something in its programming led it to believe that this was the most efficient way of solving the problem it was built to solve. That might involve external data messing with the native assumptions of the thought system, but that's a different story. Let's just say that, given an intelligence capable of learning from observation, you can teach it anything, and its learning what we would consider a bad thing is not the intelligence's fault but the teacher's.
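
A hedged illustration of that last point, with invented weights: the machine's "values" are just numbers in its objective, and the teaching lives in how those numbers are set. A sane teacher makes harming a person effectively infinitely more costly than losing time; set the weights carelessly and you get the rampaging machine, not because it is evil but because that is what it was taught to optimize:

    def trip_cost(outcome, w_time=1.0, w_harm=1e9):
        # outcome is a dict like {"travel_seconds": 240, "pedestrians_hit": 0};
        # w_harm dwarfing w_time is the "teaching": people outweigh minutes
        return (w_time * outcome["travel_seconds"]
                + w_harm * outcome["pedestrians_hit"])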

This has gotten way long. To summarize: that we can create something capable of thought is not a bad thing. That we can teach the newly-born intelligence to live according to our values is a distinct possibility reliant entirely on the builders and their intentions. An AI driver is not, in itself, a bad thing: it's how you program it that matters. Can we build one? We absolutely can. Can we build one with intent in its "blood"? We absolutely can - as long as we clearly define what "intent" is and how it is expressed in the machine.
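
One hedged way to start "clearly defining intent," using made-up names rather than any real standard: make the objective explicit, attributable, and logged, so that jadedog's question about responsibility has a paper trail for an answer:

    from dataclasses import dataclass

    @dataclass(frozen=True)
    class DeclaredIntent:
        objective: str      # human-readable, e.g. "minimize harm, then travel time"
        set_by: str         # the accountable party: builder, owner, or regulator
        cost_weights: dict  # the machine-readable form of the same objective

        def log_decision(self, maneuver, predicted_cost):
            # every choice is recorded against the declared intent, so
            # "who takes responsibility?" can be answered after the fact
            return {"intent": self.objective, "set_by": self.set_by,
                    "maneuver": maneuver, "predicted_cost": predicted_cost}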

P.S. Sounds like I need to read up on Buddhism. Its concepts of living and deeds are interesting, to say the least.