What is Your Goal in Life?
If there is no God, so the argument goes, there is no objectivity in ethics. This article will later attempt to specify what exactly “objective ethics” could refer to. First, however, we’ll get God out of the way: roughly 2400 years ago, the Greek philosopher Plato raised a dilemma for those who believe that God is needed for morality.
“Is the good loved by the gods because it is good, or is it good because it is loved by the gods?”
If whatever is “good” is good independently of the will of god, then there exists a god-independent standard that is also accessible to humans. And if “good” is merely whatever god wills, then the matter is arbitrary – it would be an open question whether one wants to follow such commands regardless of their contents.
Euthyphro’s dilemma can be construed as a constructive dilemma: either morality is god-independent because it doesn’t depend on the will of the gods, or it depends on their will. In the latter case we would simply ignore the gods’ verdict if it disagreed with our own moral views (e.g. if they told us to torture babies) – hence (our) morality is also god-independent. Whichever horn we pick, morality ends up being independent of the gods’ doings.
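The logical form can be sketched as follows (a minimal formalization; the labels P, Q and R are mine, not part of the original argument):

```latex
% P: morality does not depend on the will of the gods
% Q: morality depends on the will of the gods
% R: morality is (effectively) god-independent
\[
\frac{P \lor Q \qquad P \to R \qquad Q \to R}{\therefore\; R}
\]
```

This is the standard constructive-dilemma schema: a disjunction plus a conditional for each disjunct, jointly entailing the shared conclusion.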
If there is no divine standard we can defer to, what then determines what is ethical and how do we figure it out? Before giving any sort of answer, we first need to clarify the question! Otherwise there is the risk that all we’ll end up discussing are merely verbal disputes.
What is Ethics?
Although the terms “morality” and “ethics” are sometimes used synonymously, it is useful to adopt the following distinction:
Morality is descriptive: it is concerned with social norms and people’s moral intuitions. Ethics, on the other hand, is normative; it doesn’t describe how people act, but how they should act (this still requires a clarification of the word “should”!). Ethics ideally involves reflection and the questioning of assumptions.
On the basis of an essay by Peter Singer, the following will introduce two useful definitions of ethics and the corresponding meanings of “should” that go with them.
The Broad View
According to a broad definition, ethics is about figuring out one’s terminal goal(s). In this sense, ethics is the most fundamental question there is: it lies at the heart of every decision we make. We “should” act ethically, in this sense, because it is what we most want to do – or, perhaps more precisely, what our “rational self” most wants to do. Primitive desires or addictions don’t count as goals if we would take a magic pill that got rid of them (without affecting anything else). Our “terminal goals” are whatever would remain if we could redesign our own motivational brain architecture however we wanted.
The broad view is appealing because it makes ethics relevant for everyone. However, it also seems like something is missing, because there is no content requirement for goals. If someone’s goal (even under reflection!) is to cause pain to others, then this would be the “ethical” thing to do for such a person. Still, it is important to note that this definition of ethics, being about personal goals only, is not to be confused with selfishness, as there is likewise no reason why goals couldn’t also (or even exclusively) include the well-being of others.
The Narrow View(s)
A narrower definition of ethics is tied to a specific content requirement that seems to be best summarized by the notion of taking other-regarding (i.e. altruistic) reasons for action seriously. With such a basic axiom, the goal of the game is specified, which makes room for the distinction between correct and incorrect moves. This definition leaves open whether any particular agent has reason to be concerned with ethics (it’s about a notion of what matters in a world-at-large sense – as opposed to what matters to one particular decisional agent), and the “should” in this sense corresponds to “what would be the altruistic thing to do”.
Is There More?
It remains an open question whether my terminal value (what would be “ethical_broad” for me) is altruistic to some extent or not. Likewise, under the narrow definition, it is an open question whether I care about “ethics_narrow” – stipulated that way – in the first place. If my terminal value doesn’t include altruism, I would still agree that e.g. saving a child from drowning is altruistic or “ethical” in this narrow sense, but I would simply not care about being “ethical”.
It seems that each of the two notions above captures a core intuition about ethics, but neither captures both. Suppose we define “super-ethics” as ultimate-goal-involving (for all moral agents!) AND other-regarding. This would seem to capture both the intuition that ethics is motivating/provides reasons and the intuition that there are content constraints on ethics. But is there such a thing as super-ethics, or is it just an empty notion?
Philosophers have been trying hard to come up with universally compelling arguments for being either altruistic or egoistic. So far, none have succeeded. Once we declare a particular goal, we can distinguish between rational and irrational actions with regard to that goal. But can we make a case for there being rational and irrational goals? With regard to what could a goal be irrational? The challenge for people who believe in a universal moral truth is to explain the sort of mistake someone with a non-coinciding goal would be making. The problem is this: no matter what account one comes up with, opponents can always declare that they don’t care about the premise of the argument. For instance, if someone makes the Kantian argument that egoists are making a “mistake” because their maxim “do whatever benefits yourself the most” is not a maxim that everyone would want everyone else to follow, the simple response would be: “OK. And why again would I care about that?”
Although we might have the strong intuition that there is a universal moral truth, it seems impossible to clarify what exactly this would imply, and what sort of mistake those who don’t follow it would be making.
Morality as an Adaptation
Our innate sense of right and wrong is a biological adaptation – it proved useful in the environment of our ancestors. The content this sense gets filled with is likely also influenced by culture, but the fact that we have strong moral beliefs, and that we feel like we are doing something “objectively wrong” when we break norms, is likely part of a biological adaptation.
The philosopher of science Michael Ruse put it as follows:
To be blunt, my Darwinism says that substantive morality is a kind of illusion, put in place by our genes, in order to make us good social cooperators. I would add that the reason why the illusion is such a successful adaptation is that not only do we believe in substantive morality, but we also believe that substantive morality does have an objective foundation. An important part of the phenomenological experience of substantive ethics is not just that we feel that we ought to do the right and proper thing, but that we feel that we ought to do the right and the proper thing because it truly is the right and proper thing.
The view that moral intuitions are a biological adaptation is also supported by the Moral Foundations Theory developed by social psychologist Jonathan Haidt. If this view is correct, it would explain the strong appeal of the idea that there are universal moral facts making certain moral statements true, despite the fact that no philosopher can give a good clarification, in clear and non-question-begging terms, of what such moral statements would even mean.
Anti-Realism: Is the Truth Depressing?
This position, that there is no universal sense in which ethics is true, is called moral anti-realism. Some people may find it disheartening and would thus prefer there to be some objective standard, perhaps God-given, that saved them from “arbitrariness”. But as we have seen above, theists are in the same boat if Plato is right. Furthermore, such despair seems uncalled for, because anti-realism in no way implies nihilism (the view that it doesn’t matter what we do).
The mistake that is often made is the following:
1) There is no universal moral truth
2) If there is no universal moral truth, nothing determines how people should act
3) Nothing determines how people should act (i.e. nihilism is true!)
This argument is flawed because the second premise is wrong – or better: not even wrong. It is entirely unclear what the “should” in that sentence could mean. There may not be a universal moral truth, but there certainly are things that are more in accordance with one’s personal goals than others, and in that sense, it matters to you whether you do one thing rather than another. Furthermore, although there may not be any forces of logic that compel every agent to act altruistically, it is very much an objective matter whether actions classify as altruistic or not. IF one is interested in helping others, then we can turn to ethics (in the narrow sense) in order to figure out what exactly this entails. Once we specify an axiom, the matter indeed becomes objective: letting a child drown in a pond instead of saving her objectively leads to more suffering and more frustrated preferences. Fortunately (for all sentient beings), most humans share empathy and a basic motivation to find meaning in their lives, and there is no better place to look for personal meaning than in doing things that serve the needs and well-being of others.
Under moral anti-realism, the world is still the same world. Your feelings and motivations are still the same feelings and motivations. And the reasons you give for doing what you do will still be the same reasons, unless you say things like “it’s just wrong” or “it’s what God commands”, but if these are the only moral reasons we lose by abandoning moral realism, then they will likely, upon reflection (if they are even found to mean anything intelligible at all), not be missed!
There is something empowering about anti-realism too: would you rather be the person who helps others because you think you are somehow under a “moral obligation” to do so, or the person who helps others because you discovered that you care about them and that this is what you want to do, according to your own volition?
Handling Our Freedom: An Outlook on the Ethics Sequence
Secular ethics is challenging. Without rules to follow, we suddenly find ourselves looking at a vast landscape of possibilities, not sure where to go. Our intuitions are not adapted to this kind of situation: it is the first time in the evolutionary history of life on earth that beings are thinking systematically and open-mindedly about their own goals, instead of just blindly following instincts, intuitions or rules passed on by society.
In order to navigate in the landscape of possible goals, we need rationality and thinking skills. We need thought experiments in order to gain clarity about our values and we need to differentiate the normative level from the empirical level.
And it also pays to have an understanding of where we came from, so that we don’t mistake evolved intuitions for god-given moral compasses tracking some external moral truth. The way things are is not necessarily the way things should be. We should look at the big picture, at the whole playing field, instead of getting lost in the domain of the small, tribal communities in which our moral intuitions evolved. The 21st century offers technologies that can change the face of the earth, that can improve or diminish the quality of life of millions of sentient beings. It is now more important than ever to think thoroughly about what it is that we want to achieve in life, and how we can make this world a better place for the beings that inhabit it.
This process can generate insights that are quite amazing and revolutionary. For instance, one might realize that – despite intuitions to the contrary – geographical distance to a victim isn’t a relevant factor. Or that there is no fundamental distinction between actions and omissions. Or that we likely underestimate the importance of large numbers, and that the vast majority of suffering we can prevent may lie in the far future.
With great power comes great responsibility. If we are at least in part altruistically motivated, we owe it to our descendants, and to all the sentient beings on the planet, to think thoroughly about where we want to go from here.