aeromill  ·  3281 days ago  ·  post: My thoughts on the Syrian refugee crisis

    you cannot, in good faith, simply argue that "we should do X because it is the moral thing to do"

The lens I view this through is that the decision to act or not is itself a moral question. That said, you're 100% right: if all actions are moral ones, then saying "we should act because it's moral" is the same as saying "this is moral because it's moral", circular reasoning and all. I should have been clearer that all the points in my argument (in the OP) were reasons to support the morality of acting in this situation. Good catch, though.

    If both disagree, and are of similar levels of power, then all actions are immoral, as no actions would be taken

Before addressing this specific point, I think a visual would make it clearer, but I think I understand your construct, so: in this case, what if inaction would cause harm to one or the other? The system would have to find a way of balancing the wants and needs of the two moral agents with one another.

    Overall, if you can draw a "box" around an object, and that actor would result in an action, or would have the mindset required to set that action to occur, then that action can be called "subjectively moral"

Agreed. I mentioned in passing in one of my earlier comments that if all life were to go extinct, then morality would go with it. So while morality isn't objective in the sense of being universal, written into the fundamental laws of nature regardless of who's perceiving it, it is "objective" with regard to whom it pertains to: humans (you called this subjective morality).

However, I'm not 100% clear on your notion that all actions are subjectively moral because immoral actions simply don't occur. What do you mean by that?

    It may be useful to get rid of "immoral" as a category entirely, and instead only consider actions to be "moral" or "not moral", or "I would cause this" and "I would not cause this"

Similar to the above: what happens when both your action and your inaction have consequences? At that point you can't simply abstain from acting, since both acting and not acting will cause (let's say) harm to one person in one case and another person in the other.

    There is no objective morality based on the definition I give above

I spoke to this above too, but to be clearer, since I don't think I have been: I think we're using different definitions of "objective." (Since we both agree that good and bad are measured against an aim, let's talk about an aim instead of the word "morals"; means to an end, and all that.) I don't think there is an objective aim in the sense that it's written in the stars and would continue to exist outside the scope of humanity. I think the aim is objective within the scope of humanity (drawing the box around humanity, so to speak). But what do I mean by "objective"? I mean that there exists an aim that we all aim at by default, without the need to argue for it or against another. For me, that aim is human well being. In that sense, there exists an objective aim.

While I did read it and have some points in mind, I think the common ground we found is far more interesting to discuss. Besides, the overarching theme was addressed above anyway (the objective aim).

I do want to comment on the last part specifically (points 1-5). I'd actually like to revise the points to make them clearer and to highlight that these are observations rather than a self-contained argument:

1) All humans desire well being as the aim of their actions

2) Actions that increase human well being are good with regard to humans' natural aim (1)

3) There is no compelling argument to change our aim

4) Therefore our aim remains as it is, and the goodness of our actions is measured against it

    it assumes that "actions that increase human well being" exist. Due to the first point being false, humans can seek different things for their well-being, and as a result no single category can satisfy this point fully. Actions that increase "human well being" may well not exist.

First, let me give an example of a specific action that will increase well being: me taking a breath right this very instant. No one suffers, I gain (ever so slightly): well being increases. Second, when you say "no actions exist that can increase well being", I think you're picturing a general action (e.g. donating to charity) and then an instance where that general action decreases well being in a given circumstance (e.g. the charity robbed you). But remember that we're dealing with a system that takes these variables into account. So if you're faced with a dilemma (donate to charity?), you look at the variables (are they going to rob me? No), and then you can act knowing (reasonably so) that your action increases well being.

bioemerl  ·  3281 days ago

    In this case, what if the inaction would cause harm to one or the other?

Harm done is not a consideration here. It is still a case where one person wants something to happen and the other does not.

This view treats the two people as a decision-making system in itself, and the lack of agreement turns that system into one which cannot make decisions. As a result, it acts more like a leaf than like two people: any action to upset the balance is immoral.

Remember, also, that this is from the view of a system containing only those two people. From the view of society, it is absolutely true that one person coming to harm is something society would want to stop, and such an action would be immoral. However, from the view within only that system, there are only two actors, and they do not agree with one another, so no choice will be made, and as such, the system will be treated as if it can make no choices.

    The system would have to find a way of balancing the wants and needs of the two moral agents with one another.

I assume it cannot. If it can, then it once again becomes a decision-making system, and the decision it makes is moral. Again, this is from the view of only the two people in that system. I did give the example of a massive guy beating on a little guy until things go the massive guy's way as a "moral" action of such a system, after all. Not typically what one sees as moral, but rarely do we look at morality in such a limited scope.

    However, I'm not 100% clear on your notion that all actions are subjectively moral because immoral actions simply don't occur. What do you mean by that?

That was me extrapolating on the definition more than trying to make statements about whether something is moral or immoral. Pushing the boundaries, so to speak. Specifically, thinking about what happens if you define the scope of moral consideration to be "the entire universe".

If morality is defined by the "choices of the internal state of a system" and you pick "the system" to be "the universe", then all actions are moral, as they are actions within the scope of the universe. In that case, the only way for an action to be immoral is for it not to occur at all: any action that does occur, occurs in the universe, and so is moral.

Perhaps, instead, it may be better to say that actions within a system are not applicable to morality. A system can only act morally when affecting, or being affected by, external systems. In that case you couldn't define "the universe" to be the scope, and it would fix that odd idea.

Or perhaps it is that the universe, since it cannot act on, or be acted upon by, another system, cannot be considered a moral actor in any form. That would fix the problem in a way that is a bit less destructive to what I laid out.

    Similar to the above: what happens when both your action and your inaction have consequences?

Those consequences do not matter unless they are in the scope of the system you are considering. If they are in that scope, then the consequences are "considered" when the system makes its choice and an outcome occurs.

    At that point you can't simply abstain from acting, since both acting and not acting will cause (let's say) harm to one person in one case and another person in the other.

Remember that a subjective moral choice is the decision of a system, as defined. If the system involves two actors, one who is harmed by inaction and one who is not, then it is still true that whatever that system decides to do is the moral choice. Whether it is the person not harmed by inaction forcing inaction, or the person harmed by inaction forcing action, doesn't matter: the action that occurs is the one that is moral, from the view of that system only.

Again, change your scope, and you may find that the new scope changes the result of the decision.

If the scope is only those two (who disagree), then the entity is not considered to have any form of conscious thought or cohesion. If the scope changes to, for example, society (a group of people under a rule of law, with a consistent moral direction), a conscious direction appears again, and that direction may dictate that society would not allow the harm should it have the option to change it, making the action immoral from that viewpoint.

    I think the aim is objective within the scope of humanity (drawing the box around humanity, so to speak).

I would consider this less true for humanity as a whole, but for nations and such it is certainly true. There are some things where it is always true, but I think human interactions, across the board, are too complex and convoluted to say much definite about them.

    But what do I mean by "objective"? I mean that there exists an aim that we all aim at by default, without the need to argue for it or against another. For me, that aim is human well being. In that sense, there exists an objective aim.

If that aim is broadly defined as "humanity aims to have all its actions be towards well being of some form", then I can agree, but the massive number of wars, fighting, psychopaths, and so on clearly shows that not all humans are concerned with total human well being.

As to the points:

1) It depends on your definition of well being. Is it "actions that cause at least one individual to be happy", "actions which cause the most individuals to be happy", or "actions which cause happiness without causing the opposite"? (With happiness being utility, or satisfaction, or whatever.) Only in the first case, "at least one individual gains well being from an action", will I agree with this point in total, for all humans.

2) Makes the same point as 1, or seems to.

3) Is this a condition or a statement? If you mean the first definition, well being for "only one individual", which allows harm to others in the case of psychopaths, then there is no real argument against it, because it is inherently true. If you mean any of the latter, however, you can argue for a person to choose selfishness, or psychopathy.

4) I got nothing for this one.

    First, let me give an example of a specific action that will increase well being: me taking a breath right this very instant. No one suffers, I gain (ever so slightly)

You actually gain pretty majorly: if you didn't breathe, you would die.

Otherwise I don't have much to say. I agree that if you limit your scope to human actions, there is a set of things that are "moral" and a set of things that are "immoral", inherently defined by the average considerations of the humans within humanity; your "draw the box around humanity" framing is exactly why I agree that this is true. I wouldn't necessarily call those things well being, but I agree with the concept otherwise.

aeromill  ·  3280 days ago

    this is from the view of a system containing only those two people. From the view of society, it is absolutely true that one person coming to harm is something society would want to stop, and such an action would be immoral. However, from the view within only that system, there are only two actors, and they do not agree with one another, so no choice will be made, and as such, the system will be treated as if it can make no choices

If we consider an action that affects millions of people (a government leader's, say), then this system would almost certainly fail to produce any action that everyone in the system (e.g. a country's population) wants to occur. Therefore this system would not produce any moral actions and is functionally useless to that aim, which is something you agree with when you say:

    I assume it cannot.

Which leads me to think that you're essentially saying: Any action that everyone wants to happen is moral. This isn't particularly groundbreaking. The real difficulty (and field of interest) is how to deal with actions where each outcome has pros and cons.

    but the massive number of wars, fighting, psychopaths, and so on clearly shows that not all humans are concerned with total human well being.

Now you're venturing into some interesting ethical philosophy that compares which form of well being we should aim towards: total or average. To sum it up, both sides have issues (see the Repugnant Conclusion; check my post history for the link + discussion):

"Total Utilitarianism" would essentially favor the addition of any life that is even marginally worth living. So having 500 billion humans with barely enough resources to survive (let's say 1 happiness point each) is favorable to a smaller population of 1 billion with much higher average happiness (let's say 100 happiness each). 500 billion 1 is greater than 1 billion 100 so the former is better than the latter according to Total Utilitarianism. This clearly is counterintuitive and not worth our time.

"Average Utilitarianism" states that having the higher average utility is favorable (take the above example and just flip which one is favorable). The issue with this is that this justifies enslaving a small population for the increase in average happiness for the masses.

My personal solution to the Repugnant Conclusion is to do what I mentioned earlier: add some rules that actions have to satisfy to be considered moral. For me that rule is the preservation of justice (no infringing on human rights like liberty, etc.). This prohibits killing or enslaving a minority to bring up the average happiness.
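
And here's a rough sketch of that rule-constrained idea in the same style; the action list, the numbers, and the violates_rights flag are all hypothetical stand-ins for the justice rule. Rights violations act as a hard filter first, and well being is maximized only over what survives the filter:

```python
# Hypothetical candidate actions; "violates_rights" encodes the justice rule.
actions = [
    {"name": "enslave a minority",  "well_being_gain": 50, "violates_rights": True},
    {"name": "redistribute fairly", "well_being_gain": 30, "violates_rights": False},
    {"name": "do nothing",          "well_being_gain": 0,  "violates_rights": False},
]

# Step 1: the justice rule is a hard filter, not just another weight.
permissible = [a for a in actions if not a["violates_rights"]]

# Step 2: ordinary utility maximization over what remains.
best = max(permissible, key=lambda a: a["well_being_gain"])
print(best["name"])  # -> "redistribute fairly", the best rights-respecting option
```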

Thoughts?

As for the points, keep the above in mind when rereading them.

bioemerl  ·  3280 days ago

    If we considered an action that affected millions of people (a government leader) then this system would almost certainly fail to produce any actions in which everyone in the system (e.g. a country's population) wants a given action to occur.

Remember that this is assuming that the two actors in the system are of equal levels of power.

In society, this is never true. Where it is true, a thing does not become moral or immoral for quite some time. See topics such as abortion, which was quite heavily debated for a long time, and only now, as pro-choice groups gain more power, is it becoming more of a moral action.

    Therefore this system would not produce any moral actions and is functionally useless to that aim, which is something you agree with when you say:

If society had actors on two sides, of equal levels of power, with no ability to resolve their differences of view, then no action would be produced. But society is so large, and so complex, that this situation rarely remains true for long.

And, of course, this is not purely a matter of power. A group with a lot of guns is not going to exist forever, and if its actions have negative effects on society in the long run, then while the society it rules over will consider those actions moral, the societies that descend from it will look back on them as immoral.

Social power is also a thing, and morality is driven by opinion more than most other topics are.

It all depends on how you define the scope, how you look at the actions, and so on. There is no simple, concrete answer.

    Which leads me to think that you're essentially saying: Any action that everyone wants to happen is moral.

Only if you are considering a scope that contains only the people involved.

    so the former is better than the latter according to Total Utilitarianism. This clearly is counterintuitive and not worth our time.

That isn't counterintuitive at all. It's actually something quite a lot of people think is the better option, with fewer people living better lives.

    "Average Utilitarianism" states that having the higher average utility is favorable (take the above example and just flip which one is favorable). The issue with this is that this justifies enslaving a small population for the increase in average happiness for the masses.

Which has been done, and was considered moral, in the past. We even do it today, killing pigs and cows for meat so that humans may have more things, along with destroying forests and so on for the same reason.

    add some rules that actions have to satisfy to be considered moral

In my opinion, that is evidence that the theory of utilitarianism is too weak: it requires exceptions in order to function.

aeromill  ·  3280 days ago

    Remember that this is assuming that the two actors in the system are of equal levels of power.

I don't see how it is. You have the leader and the affected population. The leader has two choices: (1) help subset x at the expense of y, or (2) do nothing, protecting subset y at the expense of x. There's no need to measure power or anything. This is a simple case of one individual's decision affecting multiple people. With either choice (action or inaction), people are harmed and people are benefited.

    That isn't counterintuitive at all. It's actually something quite a lot of people think is the better option, with fewer people living better lives.

Did you quote the wrong passage here? I was referring to how many lives barely worth living being the best option is counterintuitive, but you responded saying that many people would find few lives with a lot of happiness to be the better option. Could you clarify which option you're saying is intuitive?

    Which has been done and was considered moral, in the past. (in reference to slavery)

But that clearly isn't the best way to maximize happiness. Just because people thought that slavery was the moral action doesn't actually make it the moral action (moral being measured against well being, that is).

    In my opinion, that is evidence that the theory of utilitarianism is too weak: it requires exceptions in order to function. (in reference to adding rules)

The rules are based on the original end of well being. Whatever they are, they should be rules that generally maximize well being in the long run. That way it's still consistent with the original aim of well being.