Seth Garrett

Meta-Wellbeing Defined

Updated: Jan 27, 2023

An explication of Sam Harris's philosophy in his book, "The Moral Landscape"

Part 1


In order to discuss Sam Harris’s book, “The Moral Landscape”, an intimate understanding of Hume’s guillotine will be necessary, since the book is largely an attack on this philosophic principle (see my earlier post on is-ought moral logic).


Morality is Natural

As you might have seen from the link above, this approach to building a hierarchy of moral logic highlights the fundamental importance of evolution in guiding morality. It seems like evolution by natural selection guides game theory, game theory guides the evolution of the brain, the brain guides wellbeing, wellbeing guides subjective feelings, subjective feelings guide moral principles, and those moral principles give us moral conclusions. Perhaps with all this in mind, we are ready to embark on our journey through Sam Harris’s “The Moral Landscape”.


The Moral Landscape

Sam Harris imagines a moral landscape: a 3D topography of a variety of potential states of the world. Each peak is a type of possible utopia, and each valley is a type of possible dystopia, or, as Jordan Peterson might put it, a type of heaven or hell.




Morality is About Climbing the Moral Landscape

Sam Harris seems to be claiming that all of our best moral intuitions are naturally organized around making the world a better place, which amounts to searching for a higher point on the moral landscape. Conversely, with this moral landscape in mind, intentionally guiding society downward toward a worse place would be immoral.

Moral Dogmatists and Relativists are Both Confused

Sam Harris believes that the world is largely divided into the irrational religious, who believe that their sacred books contain objective morality, and the rational irreligious, who disbelieve in the existence of objective morality [1]. Sam Harris has issues with both of these groups, but “The Moral Landscape” is mostly an attempt to attack the positions of the rational irreligious, who seem to have lost their grounding with respect to morality, creating a philosophical weakness that Sam fears the religious aim to exploit.


Complexity is Not an Excuse

Sam believes that the complexity of morality is a key argument used to deny the existence of objective morality. He attempts to swat away all of this complexity by jumping to the conclusion that complexity is not an excuse for moral relativism or moral nihilism. Moral relativism is the idea that no moral perspective is worse than another because each perspective has equal value. Moral nihilism is the idea that morality as a concept is a dead end - an impossible project, because there can be no right or wrong answers, so we should just give up on the whole concept of morality. By appealing to his moral landscape, Sam Harris strikes down both of these positions, claiming that there are objective differences between these potential realities and that these objective differences matter, even in a moral sense.


Wellbeing Morality

This is where Sam begins his boldest claim: that Hume’s guillotine has been a problematic philosophic illusion, one that has blocked moral progress and inspired both moral relativism and moral nihilism. He thinks we should embrace a type of objective morality that is based on the concept of wellbeing.


Hume’s Guillotine

Hume’s guillotine, as reviewed in my other blog post, shows us how it seems impossible to summon a moral conclusion without having a moral axiom. Hume’s guillotine takes the position that “is” statements, or facts about the world, cannot singlehandedly guide you to an “ought” statement, or a moral conclusion. Many people derive the conclusion that in order for morality to be objective, we would need some sort of cosmic moral axiom to guide morality.

Cosmic Morality

A cosmic moral axiom would be something like a moral imperative given to us by God, or by the structure of reality. The philosophic moral argument for God’s existence seems to imply that God must exist because we feel that morality is real, unchanging, and built into the structure of reality, and the only way for morality to be real is if God makes it real. Proponents of this argument are likely to think that without a God to make morality real, morality is just an arbitrary opinion held by groups of people. Under this framework, no one has the authority to condemn other people as evil if it is just a difference of opinion. So, if you want to condemn Hitler, you must appeal to moral realism, and hence appeal to God’s existence.


Supernaturalism isn’t Rationally Justified

If moral rules were truly written into the fabric of space-time, and good actions magnetically attracted “goodness” subatomic energy particles while evil actions magnetically attracted “evilness” subatomic energy particles, then we could be justified in pursuing a science of cosmic morality. But we have no good reason to believe that there are goodness or evilness particles to measure. We don’t find moral laws written into the fabric of space-time. All appeals to a God’s existence seem to be based on intense wishful thinking. Time and time again, science has found naturalistic explanations that subvert the supernaturalistic explanations given to us by religion. For example, the Biblical book of Job claims that God micromanages the lightning and the waves of the sea. We now know that natural electromagnetic forces and gravity govern lightning and the waves of the sea. We can predict their motions with physics. If they follow natural, predictable patterns, then we can’t be justified in saying that they are intelligently managed by a supreme will. Hence, we have every reason to believe that naturalism is the true cause of everything, not supernaturalism. So, rationally, we have to appeal to more natural systems to get our morality.


Natural Moral Axioms

Sam Harris rejects the need for a cosmic moral axiom, believing instead that natural moral axioms are sufficient. He thinks that all creatures naturally and axiomatically value wellbeing. He claims that “wellbeing” is the only thing it is possible for humans to care about, and hence that wellbeing is our only possible moral axiom [2]. Since morality is based on goals, reaching the goal of wellbeing would by default be “good”, and moving away from the goal of wellbeing would by default be “bad”.


Meta-wellbeing

At first this point seems very debatable. For example, what if people decide that they have a goal of increasing suffering? Or perhaps increasing equality? Or perhaps a goal of increasing inequality to enrich oneself? At first glance, it seems absurd to say that wellbeing is the only goal possible. But we will see that the way Sam Harris defines wellbeing forces us to include every human desire within its definition. So, the desire for suffering, equality, and enrichment will all be factors in his calculation of wellbeing. By including “every possible value” within his definition of wellbeing, Sam Harris brings every possible goal under his umbrella term of wellbeing. Because Sam’s definition of wellbeing is so broad, I personally feel it would be better represented by the term “meta-wellbeing”.



Objective Wellbeing

Sam Harris further claims that the concept of wellbeing isn’t limited to being a subjective, arbitrary feeling, but is rather an objective fact about people’s brains [2]. The gap between objectivity and subjectivity is often framed as the gap between measurable facts in the real world and unmeasurable facts in the minds of people. Historically, the minds of others could never be tapped into, so pontificating about subjective experience in people’s minds has been a type of philosophic dead end. Technically, we can’t know how others experience the world. Yet, scientifically, we are coming to a stronger consensus that experiences in the mind come from objectively measurable facts about the brain. Neuroscience and brain imaging technology are allowing us to peer deeper and deeper into states of mind, technologically bridging the measurement gap between objectivity and subjectivity. Since brains can be measured with all sorts of brain scanning technology, Sam believes that the future of brain scanning is destined to give us accurate pictures of an individual’s level of wellbeing. To the extent that wellbeing can be measured, and to the extent that wellbeing can form the basis of morality, “right” and “wrong” actions can be objectively defined based on how those actions impact brains.


Subjective Wellbeing

But, in case anyone isn’t convinced by Sam Harris’s claim to have defeated Hume’s guillotine, he offers a less bold argument for his case for an ethic based on wellbeing. He offers the analogy of the science of medicine: no one allows Hume’s guillotine to destroy the science of medicine, so why should we allow it to destroy the science of morality? Specifically, when it comes to medicine, there are “is” statements (facts about the bodies of patients), and then there are “ought” statements (treatments doctors determine should be given to the patient). Based on Hume’s guillotine, we should never be able to jump from the “is” condition to the “ought” treatment, unless we have a deeper axiom that “we ought to make the patient healthier”. The fact that “health” is a complex topic doesn’t justify “health relativism” or “health nihilism”. Despite the complexity, we try to figure out how to balance the benefits of a treatment against its potential risks. Just because there is no cosmic rule that dictates “we ought to make patients healthy” doesn’t mean we can’t all agree to accept this axiom for the science of medicine. If we can do all this for medicine, why can’t we do this for morality [3]?


Subjective Axioms

In the softer version of Sam Harris’s proposition, he basically says, “If you struggle to accept that morality is naturally objective, then why not subjectively accept ‘wellbeing’ as the goal of morality (since it is a better axiom than any other), and then allow us to pursue an objective science of morality that rests upon this subjective axiom?” Many scientific fields have subjective axiomatic goals that motivate and guide their epistemology. The science of medicine rests upon “health”, education upon “learning”, physics upon “predicting movement”, logic upon “non-contradiction”, and math upon “predicting quantities”. We can objectively measure which educational method is more successful based on which method helps students learn most efficiently. The fact that there is no cosmic rule demanding that “we must learn” is not a sufficient reason to say that it is impossible to define education in terms of learning and then measure it objectively. We don’t need cosmic rules in order to make progress in different areas of inquiry. Similarly, we don’t need a God to give us permission to measure goodness in terms of wellbeing.

So, for purposes of this review of the book, I will first address the soft proposition and then the hard proposition, as formulated:

  1. Soft Proposition - Meta-wellbeing is subjectively the best axiom to use as the foundation of morality

  2. Hard Proposition - Meta-wellbeing is objectively the foundation of morality and can be measured by science.



DEFINITION OF WELLBEING

Inclusive of All Values [4]: Sam Harris seems to want to define wellbeing in a very broad way. His definition of wellbeing basically includes everything people can possibly value. For example, there seem to be at least two types of psychological happiness: happiness derived from pondering happy memories or a hopeful future, and happiness derived from the amount of pleasure currently being experienced. One is virtual, the other actual. If the brain values both of these types of happiness, then both of those factors are relevant to wellbeing. If people’s brains value collectivism, collectivism is a factor in wellbeing. If people’s brains value individualism, individualism is a factor in wellbeing. If people value harm reduction, fairness, loyalty, respect for authority, and purity, each of these factors is a component of Sam’s wellbeing. If humans care about risk factors, then risk factors play a role in wellbeing. If humans care about their neighbor’s intentions (regardless of consequences), then intent plays a role in wellbeing. Similarly, the philosophic dispute between deontology (duties/rules/rights) and consequentialism (good consequences) ends up being a question of how much brains value rules as opposed to consequences. I believe we will find that this inclusivity helps Sam’s philosophy jump over many philosophic landmines.


Maximize Over Populations [5]: If people desire peace, peace increases wellbeing. If people desire war, war increases wellbeing. Sam’s wellbeing calculation always assumes a collective interpretation, so if 99% of people value peace and 1% of people value war, pursuing peace maximizes wellbeing by satisfying the brains of 99% of the people. The 1% whose wellbeing is harmed by peace are the cost that Sam’s philosophy seems willing to accept on behalf of overall wellbeing.
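To make the aggregation concrete, here is a minimal sketch of my own (not anything from the book), using invented scores and a simple averaging rule, of how a population-level wellbeing calculation could look:

```python
# Hypothetical illustration of aggregating wellbeing across a population.
# The scores and the averaging rule are my own assumptions; Harris does not
# specify any particular formula in the book.

def average_wellbeing(scores):
    """Mean wellbeing score across every brain in the population."""
    return sum(scores) / len(scores)

# 99 people whose wellbeing rises under peace, 1 person whose wellbeing falls.
peace_outcome = [+1.0] * 99 + [-1.0] * 1
war_outcome   = [-1.0] * 99 + [+1.0] * 1

print(average_wellbeing(peace_outcome))  # 0.98  -> peace wins the aggregate
print(average_wellbeing(war_outcome))    # -0.98 -> war loses, despite the 1%
```

The 1% still show up as a real cost inside the calculation; they are simply outweighed.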


Balance Over Time [6]: Normally, people critique consequentialist philosophies (like utilitarianism or hedonism) as being faulty in a variety of ways. But Sam thinks that wellbeing is a better goal than pleasure. Sam’s perspective on wellbeing operates at the meta level, meaning that his wellbeing sits at a higher level than mere pleasure. Pleasure is often interpreted as a short-term, benefit-oriented emotion. People might be wary of hedonic philosophies since one might end up with a society of people doing harmful drugs to maximize pleasure. When the society collapses under the weight of a drugged-up population, it becomes obvious that the hedonic philosophy is sacrificing future happiness for short-term happiness in the present. Sam’s wellbeing isn’t so shortsighted. Sam seems to think that wellbeing naturally needs to be balanced over time and across relevant factors if it is to be maximized properly.
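As a toy illustration of my own, with trajectories invented purely for the example, the difference between a shortsighted pleasure policy and wellbeing balanced over time might look like this:

```python
# Hypothetical wellbeing trajectories over six time periods.
# The numbers are invented to illustrate the short-term vs long-term tradeoff.

hedonic_path  = [10, 9, 7, 3, -5, -20]  # intense early pleasure, later collapse
balanced_path = [4, 5, 5, 6, 6, 6]      # modest but sustainable wellbeing

print(sum(hedonic_path))   # 4  -> wins early, loses over the whole span
print(sum(balanced_path))  # 32 -> never spikes, but far higher across time
```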


Psychological vs Physical [7]: Sam believes that all human values are factors in wellbeing and/or reduce to concerns about wellbeing. One critique of consequentialist philosophy is that it ignores evil intentions. But by caring about all human values, Sam’s wellbeing includes our care about intentions. Sam shows that it can be inferred evolutionarily that humans care about others’ evil intentions, not only because of harm to wellbeing in the present, but also because of harm to wellbeing in the future. By valuing the reduction or suppression of evil intentions, humans value reducing the ability of those intentions to manifest themselves as harm against human wellbeing. So, satisfying the brain’s need to suppress evil intentions is one layer of wellbeing (psychological), and it prevents future harm to people (physical).


Accurate vs Inaccurate Wellbeing Calculations [8]: Sam makes an apt analogy: human brains try to calculate physics when playing sports, yet the brain is fallible and can miscalculate the trajectory of a ball, for instance. Similarly, Sam believes that human brains are trying to perform wellbeing calculations when assessing morality. Yet, human brains can be wrong in their wellbeing calculations, and hence wrong in their morality. One example of a complex moral dilemma is how much risk it is moral to accept. For example, Americans are often much more afraid of a terrorist attack than of dying in a car accident. The brain may produce emotions inaccurately if it doesn’t understand which risk factors need to be emotionally prioritized. By understanding that the brain’s goal in producing its emotions is wellbeing, we can learn to ignore human emotions when they inaccurately sabotage wellbeing. We can orient ourselves around the objectively true amount of risk instead of the subjectively wrong amount of risk. This is because the subjective wellbeing calculation is less important than the objective wellbeing calculation: fears about future outcomes (psychological wellbeing) impact wellbeing less than actual bad results in the future (physical wellbeing). So, physical wellbeing can be prioritized over psychological wellbeing when intensities are compared. When people naturally have a high sensitivity to terror attack risk, this sensitivity harms their wellbeing, but car accidents harm them much more in reality. So, it is better for a society’s wellbeing to prioritize solving car accidents than solving terror attacks. But, if a correct understanding of reality led us to the conclusion that terrorist attacks could become objectively more harmful than car accidents, then it would make sense to reprioritize anti-terrorism activities. We need not allocate our resources according to incorrect understandings of wellbeing merely because people value an incorrect thing, because, if they had better information, they would value the correct thing. So, it could be concluded that if we care about wellbeing, then we should prioritize objective measurements of future consequences to wellbeing over subjective opinions about future consequences to wellbeing.
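A rough sketch of my own of the gap between felt risk and an objective expected-harm estimate; the probabilities and severities below are placeholders, not real statistics:

```python
# Hypothetical comparison of subjective fear vs an expected-harm estimate.
# All probabilities and severities are placeholders, not real data.

risks = {
    # name: (felt fear on a 0-10 scale, annual probability of harm, severity of harm)
    "terror attack": (9, 0.000001, 100),
    "car accident":  (3, 0.01,     100),
}

for name, (fear, probability, severity) in risks.items():
    expected_harm = probability * severity
    print(f"{name}: felt fear = {fear}, expected harm = {expected_harm:.4f}")

# Felt fear ranks terror attacks first, but the expected-harm estimate ranks
# car accidents far higher, so a wellbeing-oriented society would put its
# resources into road safety first.
```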


Optimal Tradeoffs [9]: Sam acknowledges that values can often be in zero-sum conflict with each other. He highlights how freedom, security, and privacy are three values that mutually harm each other. The fact that values harm each other means that maximizing wellbeing requires finding the right balance between these values, based on how intensely each affects wellbeing.
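Here is a small sketch of my own of what “finding the right balance” could look like if the intensities were ever made explicit; the weights and policy scores are entirely invented and do not come from the book:

```python
# Hypothetical tradeoff between freedom, security, and privacy.
# The weights stand in for how intensely each value affects wellbeing.

weights = {"freedom": 0.5, "security": 0.3, "privacy": 0.2}

policies = {
    "mass surveillance": {"freedom": 0.2, "security": 0.9, "privacy": 0.1},
    "light-touch rules": {"freedom": 0.8, "security": 0.6, "privacy": 0.7},
}

def weighted_wellbeing(policy_scores):
    """Sum each value's satisfaction, weighted by how intensely it matters."""
    return sum(weights[value] * policy_scores[value] for value in weights)

for name, scores in policies.items():
    print(name, round(weighted_wellbeing(scores), 2))
# mass surveillance 0.39, light-touch rules 0.72 under these invented weights
```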


Neurological [10]: Sam believes that wellbeing is necessarily based on conscious experience, which is based on facts about the brain. When people in a society suffer, their brains are registering pain. He believes that pain and happiness will eventually be measurable in the brain. The ways we value justice, mercy, reality, intention, and other values are likewise expressed at the level of the brain. This will give us accurate ways to compare different consequences by how intensely they affect brains.


Voluntarism [11]: While I don’t believe Sam Harris mentioned this factor explicitly, it seems logically deducible from his line of reasoning that whatever option satisfies the brain’s values the most is the option the brain will freely choose (free from the perspective of the agent, not a metaphysical claim about free will). Hence, a key factor in wellbeing seems to be voluntary consent. For example, if the costs and benefits of a legal contract are worth it to someone’s brain, then they would be inclined to sign the contract. When value systems fight for supremacy in the brain, the value system that is most important is the one that wins from moment to moment. For example, the desire for sleep could be one value system that fights with your productivity value system. Perhaps sleep isn’t the most important factor in wellbeing for the first 16 hours of your day. But as the day drags on, sleep increases in importance as a factor of wellbeing. Eventually your brain will choose to satisfy that need. Yet, if you are working on a project with a deadline the next day, your brain may suddenly summon the energy to stay up all night, since it has decided that the project is more important to your wellbeing than sleep is. Of course, the brain can be wrong. Perhaps you have been deluded into viewing the project as more important than it actually is. Regardless, absent coercion and delusion, free choice is a way to measure wellbeing. As neuroscience delves deeper into the brain, we will be able to calculate wellbeing more accurately in this way.



Meta-wellbeing vs Wellbeing [12]

Many of Sam Harris’s critics seem to fail to realize how robust a definition Sam is trying to build into his idea of wellbeing. Many of the critiques fail to appreciate one or more of the nuanced factors above. Often the idea of wellbeing is misinterpreted as something like pleasure. Despite being refuted in the book, this mischaracterization can still be found in various critiques. I think a different choice of words might help Sam ward off some of these criticisms by highlighting the robustness of his idea. Specifically, I think replacing the term wellbeing with meta-wellbeing would highlight this robustness.





Check out Part 2 here:






[1]
[2]
[3]
[4]
[5]
[6]
[7]
[8]
[9]
[10]
[11]
[12]