In response to this: http://merechristianitystudyguide.blogspot.com/2008/12/c-s-lewiss-three-arguments-for-moral.html

They make good points about how human morality, as we observe it, doesn’t seem to be totally subjective. Their problem is that they jump straight from morality not being totally subjective to there being an objective morality baked into the universe.

There is no objective standard for morality.

However, there are plenty of objective standards for human well-being. As long as we can agree on those, we’re set morality-wise.

So let’s try to give people as much life, health, and happiness as possible.

If you’re on board with that, I’m happy. If you’re not even in theory on board with the idea that giving people longer, healthier, happier lives is what we should be striving for, then I’m not happy, and my assumption is you’re a bad person. If anyone disagrees please let me know, and I’d be fascinated to discuss what morality means if separated from that goal.

So while there’s no objective standard of morality baked into the universe, clearly there are objectively better and worse ways to produce the outcomes we want. Firing a machine gun into a crowd of people is morally wrong in the sense that it is a really stupid way to produce those outcomes. Feeding starving people is morally right, in the sense that it’s likely to produce more life, health, and happiness than if you didn’t do it.

So we define “good” as “what’s good for people” and “bad” as “what’s bad for people”. Of course if you just don’t care about people, this won’t mean anything to you, but that’s just the way it is. Obviously it’s at least possible for there to be minds with a very different morality from ours. Maybe convergent evolution toward certain kinds of social norms would make our kind of morality understandable to an alien intelligence, but there’s no guarantee. And why should there be? What if the species was solitary and had the same relation to its children as a tree does: just blasting out seeds and figuring some will make it? There’s no reason it would develop love or compassion or altruism (though the question is how it would have developed intelligence at all, without the intellectual arms race of living in complex social groups that gave rise to our intelligence).

So if you care about people, do some research and try to figure out what sort of thing objectively promotes human flourishing. That’s the closest you’re going to get to objective morality. Use that knowledge to advocate for the social order you prefer, and if others accept the basic premise that human flourishing is good, you should all be speaking the same language.

Most moral thought seems to be about trying to find a good justification for why we should care about people, or at least about people other than ourselves. This can come in the form of religion (“because god says so!”), or Kantian ethics, or whatever, but none of those do anything but push the problem back a little. Everything has to be justified in terms of something, and if you don’t have a moral sense, moral reasoning will be lost on you. My solution is to push it back to the moral axiom “as humans, we want good things to happen to humans”, and to accept that I simply live in a different moral universe from anything that doesn’t agree.