What is your moral philosophy? | Page 6 | INFJ Forum

wolly.green said:
The goal "to win" is a far deeper and more descriptive moral! It explains why the rules even exist to begin with. Does this make sense to you?

I think I see what you're getting at: you're saying that even with games, where it may seem on a basic level that we just make up the rules, there is something analogous to the external reality of physical objects. Just as we shape the methods of science in accordance with the goal of describing external reality -- a goal not determined entirely by us, one with a contribution from an external world -- you'd want to say the same holds of the methods and rules of a game: they're ultimately about a goal not entirely made up by us.

Interesting idea -- that even the goals we seem to set may not be entirely determined by us on a deeper level.
 

Yeah exactly! It's not up to us to decide our goals. Reality decides for us! Reality decides what works and what does not. We know this because the goals that don't work eventually kill us, albeit very slowly.
 
Actually, I believe goals and morals can be improved progressively as our knowledge grows. The more we know about ourselves and the world, the more we can improve our morals. There is nothing subjective about that whatsoever. What do you think, though, @charlatan?
 
Just reading through the thread again today, I'm getting more and more uncomfortable with an approach to morals that doesn't incorporate conscience and the process of perceiving right and wrong. I think that any discussion of the philosophical foundation of morals must include the feeling of guilt that hits me if I'm in the wrong, and the feeling of virtue when I'm in the right - these feelings are at the core of my response to rightness and wrongness, and it is feeling judgement that is really at the heart of my day-to-day practical moral behaviour. I don't normally analyse what is right or wrong - my reaction is instinctive, emotionally grounded and practical.

Of course there are situations where I might need to think things through: when faced with a choice limited to two evils, for example, or if I feel my guilt is misplaced, or where I might reconsider whether something I thought was OK is actually wrong. At its most challenging I will go through a period of feeling conflicted, and may even suffer from considerable anxiety - which I'll resolve in the end using Ni/Fi/Fe primarily to feel my way to a solution.

So I'm thinking that any philosophical basis for morals must take account of the eye of conscience that we are all born with (well, most of us), which allows us to perceive them and challenges us to comply with them.
 

That's understandable, because you're a feeler rather than a thinker like me. You make decisions based on what feels right. I do not though! I cannot make a decision without having a ton of reasons first.

Anyway, feeling-based morality is definitely a subset of all morals.
 

Oh yes - quite right. I'm thinking of balance, not a feeling-based alternative. Feeling without Thinking could be ungrounded, and Thinking without Feeling could be uncomfortably Spock-like.
 
Oh absolutely! I do believe our morality is guided by our feelings. We cannot just ignore them. If we do, our morals will not work and we will suffer for it. This goes back to the fact that our emotions are as much a part of reality as the laws of physics.

Anyway, what does this imply about psychopaths? They can't ignore their feelings either lol.
 
@wolly.green -- assuming I'm right about the things I consider immoral, definitely we can better protect those morals as we understand the world better

I think the objectivity of suffering is a big thing to me---it's certainly true, like you say, that without some notion of 'ought' we're lifeless cabbages, in the sense that the alternative to acting rationally seems to be acting completely randomly, i.e. to have no progress/no goals whatsoever.... however, if one actually thought it's possible for someone to live randomly, with no oughts, I think the objectivity of suffering kills that, because it introduces oughts at the very most basic level, not relative to any more fundamental goal.
 

Yeah that's one way of looking at it! But suppose you were not biologically designed to suffer; what happens then? Well, nature is not friendly. It's ruthless and unrelenting. It could kill you through starvation, thirst, predators, heatstroke, disease, cancer, suffocation, and the list goes on. If you fail to compensate for the dangers of merely living, you die. If your goals don't account for the treachery of nature, you will die. ALL intelligent life in the universe -- including Artificial Intelligence -- is confined by these realities. Hell, even animals face these same problems. But is it possible for a dog to have a morality? What do you think?
 
@wolly.green I think dogs suffer! So the same things I apply to human suffering would apply to them.

As to whether there's an 'ought' behind survival, i.e. ought we to pursue survival, that's a pretty interesting question!
It's more complicated to me than the question of suffering as of now...will have to think about it
 

I've been thinking about this, and realised maybe that's the wrong question to ask? I was contrasting this with science and philosophy and realised something. We create philosophical and scientific knowledge to solve problems! The whole purpose of reason is to solve problems. If morality is subject to reason, then it follows that morals, like scientific theories, are designed to solve problems. When you say "ought we to pursue survival", perhaps that's the wrong question. Maybe, instead, we should ask "if we want to survive, what ought we to do?". The problem in this case is finding morals that help us to survive. What are your thoughts?
 

That makes sense to me, too -- I don't think survival is something we inherently ought to pursue, though of course some might try to go that route. Like you, I think it's more a matter of 'if you want to survive, then...'

As you say, oughts are about solving problems -- ultimately about being rational. The thing I like about the mental states of obvious agony is that, where most oughts take the form 'if you are trying to solve problem X, you ought to Y,' in this one case, the problem is thrust on you by definition.
That is, you DO want to find ways you wouldn't need to experience agony, else it ain't agony.

That's actually the reason the bedrock of my morality is reason + ending agony -- both seem extremely fundamental. You can't avoid rationality, as accepting contradictions screws up any other pursuit of yours and leads to gibberish. And, as explained above, the problem of agony seems to thrust itself on you as a primitive 'ought'.

(the significance of freedom is really that it is a precondition to reasoning)
 

Actually that's a fantastic point! I wonder if it's possible to have an intelligent agent that cannot suffer. I mean, it seems like the impetus that drives humans forward is not just reward, but also suffering. Every living agent in the universe MUST consume energy from its environment to exist. Without consumption, an agent cannot live. And in order to keep absorbing energy, it needs some sort of negative emotion to tell it WHEN energy stores are low; when to, say, expend energy to go hunting and capture that much-needed resource. And it also needs ways of experiencing negative emotion every time energy is wasted on unsuccessful hunting methods. Because failure is incredibly costly!

What do you think? Sorry to drag this conversation out. I think we are making really good progress.
 

This is one of those very profound areas -- what exactly does it take to create an artificial intelligence that is genuinely intelligent? An unfortunate thing to me is that there seems to be a strain of pragmatism-smelling thinking in a lot of AI folk. Notably, not Deutsch, who seems to take things like the problem of what is conscious and what isn't seriously, even if I think he acknowledges we don't know the answer yet.

But basically, the main point to be reckoned with here is whether a being learning something is an objective fact (I seem to lean this direction), or just a matter of physically behaving in a way that it is possible to interpret as that (so, for instance, we interpret various physical systems as running linux and doing various intelligent things, but it seems like our minds may be learning/thinking in a more objective sense). Notably, people of the latter school don't worry about things like whether there's an objective difference between seeing/recognizing the color red and simply knowing the mathematical laws of physics relevant to the brain/light interactions involved.
They'd say the seeing of red is just a useful fiction we use to describe that interaction.

I so far have put this brand of thinking under the kind I tend to be wary of -- namely the 'shut up and calculate' variety. While that may produce some results, there's no reason to think that's all there is to knowledge.


This background is relevant, because I do think if one wants there to be any objective oughts, not just the 'behaves as if' kind, some kind of objective meaning to positive and negative needs to be there. Feeling things like agony seems to offer the foundation for such objective meaning in a way nothing else I've seen does.

I have no idea what other ways there are of being objectively intelligent (that is, where it's an objective fact that learning has taken place), but we can at least say conscious creatures like ourselves seem to be one example
 

Yeah, I pretty much agree David Deutsch is right that science explains reality. Actually, come to think of it, if a moral prescription/action does not improve reality in any objective way, then it is morally wrong. I know this is hard to believe, but bear with me. We have already agreed that morals have an associated explanation! And that these explanations are either objectively true or objectively false. Since true morals have true explanations, it follows that the search for morals amounts to the search for truth!

Do you remember when David Deutsch said that progress tends to unify knowledge? This generally happens because whenever we find errors in our current theories, we try to correct them with deeper, more fundamental theories. One way to visualise this is to consider what happens when a theory explains facts. Theories don't just explain one fact, they explain a whole bunch of them. This is what unification means: the theory unifies the facts. Another way to visualise it: remember how earlier I briefly went over why Einsteinian physics replaced Newtonian physics? Well, one of the biggest problems physicists were facing at the time was that Newton failed to explain how gravity works. Here is a great YouTube video. Newtonian physics was supposed to explain the properties of ALL physical objects in the universe. But it failed to explain gravity. And so it was eventually replaced by Relativity, which explains everything that Newtonian physics did, but so much more! This is the general trend: we go from one explanation to a much deeper explanation!

So we know that morals are meant to solve problems, right? Which is precisely what it means to make improvements. But improving one's morals does not automatically translate into improving one's life, right? It's always possible that improving your morals means making your life worse -- especially when your goal is to increase suffering! But consider this: your goal to increase suffering HAS an associated explanation. And that explanation is either objectively true or objectively false! If you have two contradictory moral goals -- to increase suffering and to decrease suffering -- then you have two separate explanations! These two explanations will be different, and there will have to be a reason why both of them can be true simultaneously! And the answer to this must itself be a theory: a deeper, more fundamental theory. But this situation looks exactly like the relationship between Newton and Einstein! Which means that moral progress directly implies moral unification! But that leads to a contradiction! You can't have two true morals that individually increase and decrease suffering, because morals that contain contradictions cannot be unified.

I realise this is difficult to understand because I'm not the clearest writer. But what do you think? To my mind, I can see how morals are unified and how they will become progressively more so in the future!
 
charlatan said:
@wolly.green -- assuming I'm right about the things I consider immoral, definitely we can better protect those morals as we understand the world better

I think the objectivity of suffering is a big thing to me---it's certainly true, like you say, that without some notion of 'ought' we're lifeless cabbages, in the sense that the alternative to acting rationally seems to be acting completely randomly, i.e. to have no progress/no goals whatsoever.... however, if one actually thought it's possible for someone to live randomly, with no oughts, I think the objectivity of suffering kills that, because it introduces oughts at the very most basic level, not relative to any more fundamental goal.

@Charlatan, I'm coming back to the thread after a couple of days, so apologies if my thoughts rewind your discussions a bit. Your focus on suffering as being at the substratum of moral philosophy is very interesting. I've found that these thread comments can sound a bit dogmatic - I'm just throwing out thoughts and questions here rather than taking a position.

As far as I understand it, suffering was the big trigger for the development of Buddhism - and the Buddha's response was to finger craving as the cause of suffering. By craving, I understand that he was talking about an almost involuntary attachment to the things of the world and a compulsive desire for them. Have you looked at Buddhist philosophy at all? I'd be interested in your take on it given your focus.

How do you structure your thinking about suffering in relation to morals? Just to enlarge on this question ...
  • Some suffering just happens - you wake up in the middle of the night in agony because you have appendicitis - there are no moral issues here, unless we want to judge the gods or the laws of the universe that way. So the moral dimension perhaps arises from situations in which there is human free choice, and it can only apply when we have a free choice. I think you are saying that the moral imperative is then on making the choice that has the best chance of minimising suffering, and that this has the force of an ought.
  • Empirically it seems to me that not all suffering is necessarily a bad thing, at least at first base. Some examples: I resented school a lot of the time - the way it made me do things I didn't want to - and I was unpopular, so I didn't have a happy time there until I was in my later teens, but with hindsight it was well worth it. An athlete will put up with a lot of pain in training in order to develop their physical potential. If I didn't suffer pain when I touched something hot, I might burn my hand so badly I would lose it. It seems to me that pain and suffering are built into an energy-analogue process that drives our potential, but just like fire or electricity they can do terrible harm when they are not contained.
These examples suggest to me that suffering, like morals, is also a very complex thing in philosophical terms.
 
wolly.green said:
Maybe, instead, we should ask "if we want to survive, what ought we to do?". The problem in this case is finding morals that help us to survive. What are your thoughts?

One thought that occurs to me here comes from Jung's idea that religious behaviour is hard-wired into humans. It's very easy to misunderstand the guy here - he's not saying anything about the truth or otherwise of any religious perspective, just that it seems to be a natural way for people to behave over the millennia. A suggestion I have come across is that tens of thousands of years ago religious behaviour bound people together into much larger collaborating social groups than their competitors', and this gave them an overwhelming competitive edge in evolutionary terms - which of course reinforced that kind of behaviour over the succeeding generations, like any other successful trait. Moral imperatives naturally came along with the religions, at least until the modern world.

This by no means pulls away from the idea of objective morals - our very knowledge of objective science is developed from capabilities we have evolved for quite different reasons: sight, reasoning ability, self-reflection, imagination, etc.
 
@John K
Also, moral choices can lead to suffering. For example, I have learned not to kill mosquitoes even though they are very irritating and their bites make me itch.
Definitely. My wife is always having a laugh at me because I try and catch wasps in the house and send them on their way rather than killing them. I haven't been stung yet, but it's only a matter of time. I draw the line if we get a nest in the loft though. You can't live with it safely and you can't remove it to a safe place - but I always feel really bad if we have to get the pest people in.
 