bioemerl  ·  3281 days ago  ·  post: My thoughts on the Syrian refugee crisis

Ok, I just thought about this for a bit and went out to eat lunch:

I'm moving this section from the bottom up to the top, just because I like it enough that I'd prefer it to be the main focus of this post. I know this whole argument has been about "I don't hold a moral position", but honestly, by your definitions of what a moral position is, I do.

My original statement was simply that there is no definite set of moral and immoral things. That is what I meant when I said "objective morality". I meant to say that you cannot, in good faith, simply argue that "we should do X because it is the moral thing to do". All arguments need to be larger in scope and have real points in them. Keep in mind that this section was written after everything under "ACTUAL POST", so if you want more context, read there.

___

This would likely best sum up my view of the topic:

___

1) An actor is any system which can be defined. If it exists, if we can name it, it is a moral actor.

2) A subjectively moral action is any action which, given the internal state of an actor, that actor would cause to happen.

___

Now, this is a very vague definition. All conclusions you can draw from it depend on what actor you are considering.

You can draw that box around something that is inanimate, and call a leaf an "actor". If that leaf, depending on its internal state, would move somewhere, then that movement is a moral action. However, if that leaf would not choose to do such a thing, but an external actor, such as wind, interacts with the leaf and makes it move, then the action is immoral.

This is a bit more complex when you draw the box around a human (or any sentient being): external actors can act on the human from outside the scope of the box, and those actions can still be moral. A human can "want" to have a hundred dollars, and if you give that person a hundred dollars, and they would choose to have that happen if they could, then that action is moral.

Say you draw that box around two humans. Now you have two sets of "wants", which creates several possibilities:

a) (want want). Any action on that actor is moral.

b) (want unwant). The state of the actor becomes more like that of the leaf. If the two people in the box, despite their conflicting views, would take an action regardless (a 140 lb man and a 300 lb titan), then that action is moral. If an actor disturbs that balance, moving the system to a position it would not take on its own, that action becomes immoral. If both disagree, and are of similar levels of power, then all actions are immoral, as no actions would be taken.

c) (unwant unwant). The action is immoral.

This idea scales up to as many humans, or conscious actors, as you would like.
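To make the case analysis above concrete, here is a minimal sketch in Python. The `classify` function, the +1/-1/0 stance encoding, and the summed-power tiebreak are illustrative assumptions layered on top of the construct, not part of the original:

```python
# A toy model of the "box": a system is a list of actor stances toward
# a proposed action: +1 (want), -1 (unwant), 0 (cannot take a position).

def classify(stances, powers=None):
    """Classify an action for the system drawn around these actors."""
    active = [s for s in stances if s != 0]
    if not active:
        return "not moral"      # no actor in the box can take a position
    if all(s == +1 for s in active):
        return "moral"          # (want, want): the system would cause it
    if all(s == -1 for s in active):
        return "immoral"        # (unwant, unwant)
    # (want, unwant): the system acts like the leaf unless one side dominates
    if powers:
        pro = sum(p for s, p in zip(stances, powers) if s == +1)
        con = sum(p for s, p in zip(stances, powers) if s == -1)
        if pro != con:
            return "moral" if pro > con else "immoral"
    return "immoral"            # deadlock: no action would be taken

print(classify([+1, -1], powers=[300, 140]))  # the titan gets his way: moral
print(classify([+1, -1], powers=[200, 200]))  # evenly matched: deadlock, immoral
print(classify([0]))                          # the leaf alone: not moral
```

The deadlock branch encodes case (b): when neither side can move the system, any externally forced outcome is treated as immoral.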

Say you draw a box so large that it includes all possible entities. The universe. By this measure, there can be no external actors, and as a result, all things that occur, all decisions, are the result of the internal state of the actor, and all actions are moral.

Overall, if you can draw a "box" around an object, and that actor would result in an action, or would have the mindset required to set that action to occur, then that action can be called "subjectively moral". As you can draw a box around the universe, all actions are technically "subjectively moral".

If you can draw a box around an object, and that actor would not result in an action, or would not have the mindset required to set that action to occur, then that action could be called "subjectively immoral". This category is more interesting, as there are things which will never happen; even if you define the scope as the whole universe, these actions will never occur. By this idea, there are actions that are subjectively immoral, even though all actions that actually occur are subjectively moral. Subjectively immoral actions are exactly the actions which will never occur; all possible actions are subjectively moral.

If you draw a box around an object, and that actor will either not encounter an event, or would be unable to take a position on the subject, then the action is not a moral one. This one is hard to simplify: such cases just do not exist when the scope is "the universe", so actions can only be amoral when the scope is smaller than the whole universe.

So this creates two classes of entities: ones which can conceptualize events and ones which cannot. Sentient and non-sentient beings. A non-sentient being is one for which all external actions that affect it are immoral. A sentient being is one for which an external action can be moral or immoral.

It may be useful to get rid of "immoral" as a category entirely, and instead only consider actions to be "moral" or "not moral", or "I would cause this" and "I would not cause this". Perhaps it is better to say that an immoral action is "any action which a sentient being would work against", while actions which would not be caused by a being are simply "not moral".

So you could, for any action, create a list of possible or obvious scopes, and define the morality of the action when looking at each individual scope.

Murder:

Knife: Not-moral. I would not have moved.

Self-Victim: Immoral. I do not want to be killed.

Self-Murderer: Moral. I want to kill.

System-Murderer-Victim: Moral. The Victim died as a result of internal actions.

System-Society: Immoral. Murder is illegal.

System-Nature/biosphere: Moral. The Victim died as a result of internal actions.

System-Earth: Moral. The Victim died as a result of internal actions.

System Universe: Moral. The Victim died as a result of internal actions.

It may be useful to remove all non-sentient (non-decision-making, non-computing) actors which were not acted upon.

Murder:

Knife: Not-moral. I would not have moved.

Self-Victim: Immoral. I do not want to be killed.

Self-Murderer: Moral. I want to kill.

System-Murderer-Victim: Moral. The Victim died as a result of internal actions.

System-Society: Immoral. Murder is illegal.

Perhaps you could look at this and say "The moral action is the highest level moral action after the removal of non-decision making, non-computing actors, which were not acted upon". Utilitarianism.
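As a toy sketch of those two lists, here they are in Python. The verdicts are copied from the lists above; the two boolean flags, whether a scope is a deciding/computing actor and whether it was acted upon, are my reading of which entries survive the removal step:

```python
# Each entry: (scope, verdict, decides, acted_upon).
scopes = [
    ("Knife",                   "not moral", False, True),
    ("Self-Victim",             "immoral",   True,  True),
    ("Self-Murderer",           "moral",     True,  False),
    ("System-Murderer-Victim",  "moral",     True,  False),
    ("System-Society",          "immoral",   True,  False),
    ("System-Nature/biosphere", "moral",     False, False),
    ("System-Earth",            "moral",     False, False),
    ("System-Universe",         "moral",     False, False),
]

# Removal step: keep deciding actors, plus non-deciding actors that were
# acted upon (the knife stays; nature, Earth, and the universe drop out).
kept = [(scope, verdict) for scope, verdict, decides, acted in scopes
        if decides or acted]

# "The moral action is the highest level moral action after the removal":
# read the verdict at the widest remaining scope.
print(kept[-1])  # -> ('System-Society', 'immoral')
```

Widest-scope-wins is only one way to aggregate the verdicts; the alternatives below (individual viewpoint, universe viewpoint, least-moral among those acted upon) amount to picking different rows or different reductions over `kept`.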

Perhaps you could look only from where you would be as an individual, the murderer or the victim, and decide if you yourself would want such a thing to happen.

Perhaps you could look at things from the universe standpoint, and consider all things moral.

Perhaps you could look at things from the view of all those acted upon, the victim and the knife, and allow the least-moral consideration to take precedence.

Whatever view you take, you cannot deny that all the things I list are subjectively true. No matter what position you take, the same action may be moral, immoral, or not-moral if you were to take a different position. There is no objective morality based on the definition I give above.

Perhaps, then, there is a category of definite things where actions are truly moral? I hadn't considered it until now, but it's possible that some action would be moral for any actor chosen. However, I believe such a thing to be unlikely.

___

___

___

ACTUAL POST:

___

___

___

    You believe that the aim of self gain is the best aim for a reason.

I focus not on self gain, but on the satisfaction of those drives we are born with. We avoid pain, we seek pleasure. We avoid things that cause pain, and seek things that do not cause it. We have empathy, meaning that pain or pleasure we cause in others is felt in ourselves. These drives explain human action better than anything else I am aware of.

However, I say this not as a standard with which to objectively judge how good an action is.

I can create three people. I can show each of these people the same situation, and each can tell me, correctly, that the situation is moral, neutral, or immoral, respectively. I myself could be considered one of those people.

The point is, what is and isn't moral in my situation is not defined by anything but the system. You can create a being that seeks only pain and death, and to that being, pain and death are moral things. You can create a being that seeks only joy, and to that being joy is the only moral thing. In this way, morality is only "the state of a system that determines what that system attempts to change in the world". You cannot define it any further, as those states vary widely depending on where and who you are.

I am not saying my view is correct when I push humanity, free speech, and so on. I simply push my view. This is what I want, this is where I stand; I stand here because these positions benefit me in some way.

It is objectively correct that I think it is moral to have the Middle East follow my views. It is objectively correct that the Middle East thinks it is immoral that I try to make them follow my views.

Neither side is incorrect; both sides are equally "moral". Objectively, no statement or position is more immoral than another, ever. You can always change a view and shift the morality of a statement.

When I speak, I speak my view, not the objective one. I am a human, I am an American. I push the views, the moral system that will benefit humans and/or Americans. This is good because this is what I believe is good, but that doesn't mean this is actually good, only that I think so.

    You therefore believe it to be the objectively correct aim by the logic that the best solution is the most correct one.

"the best solution is the most correct one" is either a tautology or a subjective definition, it cannot be false. This statement can't be used to prove anything.

    So the very fact that you choose one aim over another is you implicitly saying that your aim is the best of all possible ones.

You can hold the view that neither side is more moral than the other, while still deciding to push and enforce your own view. Not because you believe your position is more moral, but because you are a selfish bastard who doesn't view the world through the eyes of others.

I am fully capable of thinking of something from my subjective viewpoint, where I can say "this should happen", and of realizing it is incorrect when viewed objectively.

    Let's play that game and take a naturalistic approach.

1) The first premise is wrong, as humans absolutely do have different natures.

2) The second is correct inherently, as it states that "we all seek the thing which our minds are designed to seek".

3) The third is true as well, as the second is inherently true, and you can't make something that is inherently true untrue.

4) The fourth is also inherently true. However, it assumes that "actions that increase human well being" exist. Due to the first point being false, humans can seek different things for their well-being, and as a result no single category can satisfy this point fully. Actions that increase "human well being" may well not exist.

5) Point 5 is not provable, in my opinion. It is a "should" statement, and "should" is a moral question. Chicken and egg.

___





aeromill  ·  3281 days ago

    you cannot, in good faith, simply argue that "we should do X because it is the moral thing to do"

The lens I would view this through is that the decision to act or not is a moral question. That being said, you're 100% right: through my lens, where all actions are moral ones, saying "we should act because it's moral" is the same as saying "this is moral because it's moral", circular reasoning and all. I should have been clearer that all the points in my argument (in the OP) were reasons to support the morality of acting in this situation. Good catch though.

    If both disagree, and are of similar levels of power, then all actions are immoral, as no actions would be taken

Before addressing this specific point, I think a visual would make it clearer. I think I understand your construct here though, so: In this case, what if the inaction would cause harm to one or the other? The system would have to find a way of balancing the wants and needs of the two moral agents with one another.

    Overall, if you can draw a "box" around an object, and that actor would result in an action, or would have the mindset required to set that action to occur, then that action can be called "subjectively moral"

Agreed. I said in passing in one of my earlier comments that if all life were to go extinct, then morality would go with it. So while morality isn't objective in the sense of being universal, written into the fundamental laws of nature regardless of who's perceiving it, it is "objective" in regards to who it pertains to: humans (you called this subjective morality).

However, I'm not 100% clear on your notions that all actions are subjectively moral because all immoral actions don't occur. What do you mean by that?

    It may be useful to get rid of "immoral" as a category entirely, and instead only consider actions to be "moral" or "not moral", or "I would cause this" and "I would not cause this"

Similar to above, what happens when your actions and inaction have consequences? At that point you can't simply abstain from acting since both acting and not acting will cause (let's say) harm to one person in one case and another person in another case.

    There is no objective morality based on the definition I give above

I spoke to this above too, but to be clearer here, since I don't think I have been completely: I think we're using different definitions of "objective." Since we both agree that good and bad are measured against an aim, let's talk about an aim instead of the word "morals"; means to an end, and all that. I don't think there is an objective aim in the sense that it's written in the stars and will continue to exist outside the scope of humanity. I think the aim is objective in regards to the scope of humanity (drawing the box around humanity, so to speak). But what do I mean by "objective"? I mean that there exists an aim that we all aim at by default and without the need for argument for it, or against another. For me, that aim is human well being. In that sense, there exists an objective aim.

While I did read it and have some points in mind, I think the common ground we found is far more interesting to discuss. Besides, the overarching theme was addressed above anyway (the objective aim).

I do want to comment on the last part specifically (points 1-5). I would actually like to revise the points to make them clearer and to highlight that these are observations rather than a self-contained argument:

1) All humans desire well being as the aim to their actions

2) Actions that increase human well being are good in regards to humans' natural aim (1)

3) There's no compelling argument to change our aim

4) Our aim remains as it is, and the goodness of our actions is measured against it

    it assumes that "actions that increase human well being" exist. Due to the first point being false, humans can seek different things for their well-being, and as a result no single category can satisfy this point fully. Actions that increase "human well being" may well not exist.

First, let me give an example of a specific action that will increase well being: Me taking a breath right this very instant. No one suffers, I gain (ever so slightly): well being increases. Second, when you say "no actions exist that can increase well being", I think you're thinking of general actions (e.g. donating to charity), then thinking of an instance where that general action can decrease well being in a given circumstance (e.g. the charity robbed you). But remember that we're dealing with a system that takes these variables into account. So if you're faced with a dilemma (donate to charity?), you look at the variables (are they going to rob me? No), and then you can act knowing (reasonably so) that your action increased well being.

bioemerl  ·  3281 days ago

    In this case, what if the inaction would cause harm to one or the other?

Harm done is not a consideration of the situation. In this situation it is still a case where one person wants something to happen, and the other one does not.

The view looks at these two people as a decision-making system in itself, and the lack of agreement turns that system into one which cannot make decisions. As a result, it acts more like a leaf than like two people, and any action that upsets the balance is immoral.

Remember, also, that this is from the view of a system with only those two people. From the view of society then it is absolutely true that one person coming to harm would be something society would want to stop, and such an action would be immoral. However, from the view within only that system, there are only two actors, and they do not agree with one another, so no choice will be made, and as such, the system will be treated as if it can make no choices.

    The system would have to find a way of balancing the wants and needs of the two moral agents with one another.

I assume it cannot. If it can, then it once again becomes a decision making system, and the decision it makes is moral. Again, from the view of only the two people in that system. I did give the example of a massive guy beating on a little guy until things go the massive guy's way as a "moral" action of such a system, after all. Not typically what one sees as moral, but rarely do we look at morality in such a limited scope.

    However, I'm not 100% clear on your notions that all actions are subjectively moral because all immoral actions don't occur. What do you mean by that?

That was me extrapolating on the definition more than trying to make statements about whether something is moral or immoral. Pushing the boundaries, so to speak. Specifically, I was thinking about what happens if you define the scope of moral consideration to be "the entire universe".

If morality is defined by the "choices of the internal state of a system" and you pick "the system" to be "the universe", then all actions are moral, as they are actions within the scope of the universe. In such a case, the only way for an action to be immoral is if that action does not occur at all: if it occurs, it occurs in the universe, and if it occurs in the universe, it is moral.

Perhaps, instead, it may be better to say that actions within a system are not applicable to morality. A system can only act morally when affecting, or being affected by, external systems. In that case you couldn't define "the universe" to be the scope, and it would fix that odd idea.

Or, perhaps it is that the universe, as it cannot act on, or be acted upon by, another, cannot be considered a moral actor in any form. That would fix the problem in a way that is a bit less destructive to what I laid out.

    Similar to above, what happens when your actions and inaction have consequences?

Those consequences do not matter unless they are in the scope of the system you are considering. If they are in that scope, then the consequences are "considered" when the system makes its choice and an outcome occurs.

    At that point you can't simply abstain from acting since both acting and not acting will cause (let's say) harm to one person in one case and another person in another case.

Remember that a subjective moral choice is the decision of a system, as defined. If the system involves two actors, one who is harmed by inaction and one who is not, then it is still true that whatever that system decides to do is the moral choice. Whether it is the person not harmed by inaction forcing inaction, or the person harmed by inaction forcing action, does not matter: the action that occurs is the action which is moral, from the view of that system only.

Again, change your scope, and you may find that the addition of that scope changes the results of the decision.

If the scope is only those two (who disagree), then the entity is not considered to have any form of conscious thought or cohesion. If the scope changes to society (a group of people under a rule of law, with a consistent moral direction), for example, a conscious direction appears again, and that direction may dictate that society would not allow the harm to happen should it have the option to change it, making the action immoral from that viewpoint.

    I think the aim is objective in regards to the scope of humanity (drawing the box around humanity, so to speak).

I would consider this to be less true for humanity as a whole, but for nations and such it is certainly true. There are some things where it is always true, but I think human interactions, across the board, are too complex and convoluted to say much that is definite about them.

    But what do I mean by "objective"? I mean that there exists an aim that we all aim at by default and without the need for argument for it, or against another. For me, that aim is human well being. In that sense, there exists an objective aim.

If that aim is broadly defined as "humanity aims to have all its actions be towards well being of some form", then I can agree, but the massive numbers of wars, fighting, psychopaths, and so on, clearly show that not all humans are concerned with total human wellbeing.

As to the points:

1) It depends on your definition of well being. Is it "actions that cause at least one individual to be happy", "actions which cause the most individuals to be happy", or "actions which cause happiness without causing the opposite"? (With happiness being utility, or satisfaction, or whatever.) Only in the first case, "at least one individual gains well being from an action", will I agree with this point in total, for all humans.

2) This makes the same point as 1, or seems to.

3) Is this a condition or a statement? If you use your first definition, where well-being concerns only one individual, which allows harm to others in the case of psychopaths, then there is no real argument against it, because it is inherently true. However, if it is either of the latter definitions, you can argue for a person to choose selfishness, or psychopathy.

4) I got nothing for this one.

    First, let me give an example of a specific action that will increase well being: Me taking a breath right this very instant. No one suffers, I gain (ever so slightly)

You actually gain pretty majorly, as if you didn't breathe you would die.

Otherwise I don't have much to say. I agree with the idea that, if you limit your scope to human actions, then there is a set of things that are "moral" and a set of things that are "immoral", and those are inherently defined by the average considerations of the humans inside humanity. Your point about "drawing the box around humanity" is why I agree that this is true. I wouldn't necessarily call those things well being, but I agree with the concept otherwise.

aeromill  ·  3280 days ago

    this is from the view of a system with only those two people. From the view of society then it is absolutely true that one person coming to harm would be something society would want to stop, and such an action would be immoral. However, from the view within only that system, there are only two actors, and they do not agree with one another, so no choice will be made, and as such, the system will be treated as if it can make no choices

If we considered an action that affected millions of people (say, a government leader's decision), then this system would almost certainly fail to produce any action that everyone in the system (e.g. a country's population) wants to occur. Therefore this system would not produce any moral actions and is functionally useless to that aim, which is something you agree with when you say:

    I assume it cannot.

Which leads me to think that you're essentially saying: Any action that everyone wants to happen is moral. This isn't particularly groundbreaking. The real difficulty (and field of interest) is how to deal with actions where each outcome has pros and cons.

    but the massive numbers of wars, fighting, psychopaths, and so on, clearly show that not all humans are concerned with total human wellbeing.

Now you're charting into some interesting ethical philosophy that compares which form of well being we should aim towards: total or average well being. To sum it up, both sides have issues (this is called the Repugnant Conclusion; check my post history for the link + discussion):

"Total Utilitarianism" would essentially favor the addition of any life that is even marginally worth living. So having 500 billion humans with barely enough resources to survive (let's say 1 happiness point each) is favorable to a smaller population of 1 billion with much higher average happiness (let's say 100 happiness each). 500 billion 1 is greater than 1 billion 100 so the former is better than the latter according to Total Utilitarianism. This clearly is counterintuitive and not worth our time.

"Average Utilitarianism" states that having the higher average utility is favorable (take the above example and just flip which one is favorable). The issue with this is that this justifies enslaving a small population for the increase in average happiness for the masses.

My personal solution to the Repugnant Conclusion is to do what I mentioned earlier: add some rules to actions that have to be held for them to be considered moral. For me that rule is the preservation of justice (no infringing human rights like liberty, etc). This prohibits the idea that we should kill/enslave a minority to bring up the average happiness.

Thoughts?

For the points, keep the above in mind when rereading them.

bioemerl  ·  3280 days ago

    If we considered an action that affected millions of people (say, a government leader's decision), then this system would almost certainly fail to produce any action that everyone in the system (e.g. a country's population) wants to occur.

Remember that this is assuming that the two actors in the system are of equal levels of power.

In society, this is never true. Where it is true, a thing does not become moral or immoral for quite some time. See topics such as abortion, which was for a long time quite heavily debated, and only now, as free-choice groups gain more power, is it becoming more of a moral action.

    Therefore this system would not produce any moral actions and is functionally useless to that aim, which is something you agree with when you say:

If society had actors on two sides, of equal levels of power, with no ability to resolve those differences in view, then no action would be produced. But society is so large, and so complex, that this situation rarely remains true for long.

And, of course, this is not purely a matter of power: a group with a lot of guns is not going to exist forever, and if their actions have negative effects on society in the long run, then while the society they rule over will consider their actions moral, all societies that descend from that one will look back on them as immoral.

As well, social power is a real factor, and morality is often based on opinion more than on anything else.

It all matters how you define the scope, how you look at the actions, and so on. There is no simple, concrete, answer.

    Which leads me to think that you're essentially saying: Any action that everyone wants to happen is moral.

Only if you are considering the scope of only those people.

    so the former is better than the latter according to Total Utilitarianism. This clearly is counterintuitive and not worth our time.

That isn't counterintuitive at all. It's actually something quite a lot of people think is the better option, with fewer people living better lives.

    "Average Utilitarianism" states that having the higher average utility is favorable (take the above example and just flip which one is favorable). The issue with this is that this justifies enslaving a small population for the increase in average happiness for the masses.

Which has been done, and was considered moral, in the past. We even do it today, killing pigs and cows for meat so that humans may have more things, along with destroying forests and so on for the same reason.

    add some rules to actions that have to be held for them to be considered moral

In my opinion that is evidence that the theory of utilitarianism is too weak: it requires exceptions in order to function.

aeromill  ·  3280 days ago

    Remember that this is assuming that the two actors in the system are of equal levels of power.

I don't see how it is. You have the leader and the population affected. The leader has two choices: (1) help subset x at the expense of y, or (2) do nothing, protecting subset y at the expense of x. There's no need to measure power or anything. This is a simple case of one individual's decision affecting multiple people. With either choice (action or inaction), people are harmed and benefited.

    That isn't counterintuitive at all. It's actually something quite a lot of people think is the better option, with fewer people living better lives.

Did you quote the wrong passage here? I was referring to how many lives barely worth living being the best option is counterintuitive, but you responded saying that many people would find the idea of fewer lives with a lot of happiness the better option. Could you clarify which option you are saying is intuitive?

    Which has been done and was considered moral, in the past. (in reference to slavery)

But that clearly isn't the best way to maximize happiness. Just because people thought that slavery was the moral action doesn't actually make it the moral action (moral being measured against well being, that is).

    In my opinion that is evidence that the theory of utilitarianism is too weak: it requires exceptions in order to function. (in reference to adding rules)

The rules will be based on the original end of well being. These rules (or rule), whatever they are, should be rules that generally maximize well being in the long run. That way it's still consistent with the original aim of well being.