Seth Garrett

Why Meta-Wellbeing is Good

Updated: Jan 13, 2023

An explication of Sam Harris's philosophy in his book, "The Moral Landscape"

Part 3


If you haven't already, check out part 2 first:









So, as I said in part 1, for purposes of this review of the book, I will first address the soft proposition and then the hard proposition, as formulated below:

  1. Soft Proposition - Meta-wellbeing is subjectively the best axiom to use as the foundation of morality.

  2. Hard Proposition - Meta-wellbeing is objectively the foundation of morality and can be measured by science.


SOFT PROPOSITION - Meta-Wellbeing is Good

In the review of moral issues (in Part 2) ranging across simple issues, complex issues, religious practices, applied ethical principles, and moral dilemmas, I found that the common denominator of our moral intuitions across these landscapes is an ethic of meta-wellbeing. Whenever a set of options includes one option that clearly contains more meta-wellbeing, our moral intuitions gravitate to that option. Complex moral issues seem to occur when there is a tradeoff between different loci of wellbeing. Absent personal religious biases, we naturally recognize religious practices as immoral when we can clearly identify the harm they pose to meta-wellbeing. The variety of principles involved in applying ethics can be reduced to different value systems within our meta-wellbeing. Moral dilemmas often rely on obscure examples to make us question moral theories by pointing us to our moral intuitions. When our moral intuitions reject a certain moral conclusion, we are appealing to deeper value systems within ourselves. A meta-wellbeing ethic includes these deeper value systems and hence can address any moral dilemma. It seems to me that an ethic of meta-wellbeing provides the most robust axiom upon which to lay our moral foundations.


Standards for Measuring the Quality of a Meta-Ethical Theory

  • Reflective equilibrium - as we weigh our moral intuitions against the theory, we find contradictions; we then tweak the theory to match these intuitions, and we repeat this process until we are sufficiently comfortable with the theory's ability to match our intuitions [1].

  • Reliability challenge - to propose that our moral theory is better than others, we must give a proper explanation for how other ethical theories got morality wrong, and why our methodology is more immune to the moral stumbling blocks that others have succumbed to [2][3][4].


Reflective Equilibrium

A meta-ethical theory can be considered of a higher quality than another if it is able to accurately account for our moral intuitions after deep moral reflection. Obstacles to reflective equilibrium include 1) moral intuitions that fail to be consistent across dilemmas (outliers), 2) moral intuitions that contradict one another (shallowness), and 3) the gap between the moral intuitions we have ("is") and the moral intuitions we should have ("ought").


  1. Moralistic Resonance - degree to which a moral theory can match our intuitions across a variety of issues

  2. Moralistic Integrity - degree to which a moral theory is not self-contradictory

  3. Moralistic Goodness - degree to which a moral theory reflects which moral intuitions we should have

Socrates

Socrates was the first Greek philosopher to make ethics his central concern. He inadvertently invented the Socratic method, which was basically a special type of conversational dialectic: Socrates' interlocutor would make a claim, and then Socrates would begin to interrogate this claim. Socrates' method of interrogation was often aimed at testing moralistic resonance, moralistic integrity, and moralistic goodness.


Testing Moralistic Resonance

In his book "Republic", Plato walks us through a conversation between Socrates and Cephalus. As usual, Socrates allows his interlocutor to guide the conversation by making certain claims, and then Socrates challenges those claims. Cephalus is humbler than most, and hence hesitant to claim that he knows the definition of righteousness (often poorly translated as 'justice'). To help continue the conversation, Socrates asks him if he would agree that the definition of righteousness is "being honest and returning that which is owed". Socrates then challenges the definition by suggesting that it isn't always right to return that which is owed. Socrates puts forth the rare case of someone who has borrowed a weapon from a friend. Later the friend returns in a state of madness, asking for the weapon to be returned to him. Considering the friend's state of madness (perhaps an angry, drunken, murderous rage), it would not be right to return the weapon to him at this time (since harm might result). Hence, by pointing to outlier cases, Socrates shows how the principle of "returning that which is owed" does not resonate with our intuitions as always being good no matter the situation. Socrates tests this principle for moralistic resonance and exposes a situation where it fails.




Testing Moralistic Integrity

When Euthyphro claims that that which causes fear also causes shame, Socrates begins looking for contradictions. He points out that we are fearful of disease, but we are not ashamed of having disease. By finding a contradictory example, Socrates pokes holes in the initial claim. This gives Euthyphro an opportunity to revise his perspective regarding the relationship between fear and shame, so as to discover a more accurate perspective.


Euthyphro's first claim:

All Shame -> is Fear

Therefore, all Fear -> is Shame.


Socrates' counterargument:

All Disease -> is Feared,

But no Disease -> is Shameful.


Adjustment after Socrates' counterargument:

All Shame -> is Fear,

But not all Fear -> is Shame.



Testing Moralistic Goodness

In "Euthyphro's dilemma", Socrates discusses the source of goodness (or piousness) with Euthyphro. Socrates presents the dilemma to Euthyphro as a question: "Is goodness loved by the gods because goodness is good, or is goodness good because it is loved by the gods?" In one simple question, Socrates opens wide the deepest question of meta-ethics - what is goodness? Or, what is the foundation of goodness? If the foundation of goodness is some god's opinion, then goodness becomes arbitrary. If the foundation of goodness is outside some god's opinion, then gods are not the source of the definition of goodness. Or in other words, we must look outside religion in order to understand goodness. Over thousands of years, the foundation of goodness has been debated without resolution. To test moralistic goodness, we must have a standard against which to measure goodness. Then this standard must be shown not to be arbitrary. If goodness is arbitrary, then it is arguable that goodness doesn't exist. Socrates accepts the typical standard that goodness is measured against the gods' opinions. Then Socrates shows how this standard fails the test of moralistic goodness by being so arbitrary that in one moment murder could be evil and in the next moment murder could be good, merely based on a change in the commands of the gods from one moment to the next.




Moralistic Resonance

Looking for outlier examples has been a favored method of finding errors in moral theories. If a moral theory is of high quality, it should provide us with satisfactory moral solutions for any variety of moral dilemma. If a moral theory, when presented with a complex moral dilemma, fails to give us an answer that resonates with our innate moral intuitions, it has failed to provide us with moralistic resonance.


If we wanted to measure the quality of a variety of moral theories in terms of moralistic resonance, we could come up with a suite of moral dilemmas and pose each of them to each theory. If we were to have 100 moral dilemmas at our disposal, we could test each theory against them and grade their answers to each dilemma. Perhaps consequentialism gives us a satisfactory answer 60% of the time. Perhaps virtue ethics gives us a satisfactory answer another 20% of the time. Perhaps deontology gives us a satisfactory answer another 40% of the time. We could then determine that in terms of moralistic resonance, consequentialism is the superior moral theory among the three.
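To make the scoring idea concrete, here is a minimal sketch in Python. The dilemmas, the "intuitive" answers, and each theory's verdicts are invented placeholders rather than survey data; only the scoring mechanism itself is the point.

# A minimal sketch of moralistic-resonance scoring. All dilemmas, intuitive
# answers, and theory verdicts below are invented placeholders.
dilemmas = {
    "switch_trolley":     "pull",        # the answer most people's intuitions favor (assumed)
    "footbridge":         "dont_push",
    "lifeboat_rationing": "share",
    "triage":             "save_more",
}

# What each theory would (hypothetically) recommend for each dilemma.
theory_verdicts = {
    "consequentialism": {"switch_trolley": "pull", "footbridge": "push",
                         "lifeboat_rationing": "share", "triage": "save_more"},
    "deontology":       {"switch_trolley": "dont_pull", "footbridge": "dont_push",
                         "lifeboat_rationing": "share", "triage": "first_come_first_served"},
    "virtue_ethics":    {"switch_trolley": "no_guidance", "footbridge": "no_guidance",
                         "lifeboat_rationing": "share", "triage": "no_guidance"},
}

def moralistic_resonance(verdicts: dict[str, str], intuitions: dict[str, str]) -> float:
    """Fraction of dilemmas where the theory's verdict matches the intuitive answer."""
    matches = sum(1 for d, answer in intuitions.items() if verdicts.get(d) == answer)
    return matches / len(intuitions)

for theory, verdicts in theory_verdicts.items():
    print(f"{theory}: {moralistic_resonance(verdicts, dilemmas):.0%}")

With these placeholder inputs the script prints 75%, 50%, and 25%, mirroring the kind of ranking described above; with real dilemmas and real survey data the numbers would of course differ.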


Currently, philosophers largely struggle between these three moral theories. In a 2009 survey conducted by philosophers David Bourget and David Chalmers, the popularity of these three ethical theories was roughly equal among the more than 3,000 philosophically educated respondents. A follow-up survey in 2020 found similar results [5].


To me, this represents a large problem. As I see it, each of these theories has a low degree of moralistic resonance. Depending on how you play with the variables in a trolley problem, you can break any of these three normative ethical theories.



If you want to break simple deontology, all you have to do is stack up more and more hypothetical people on the track until the deontologist is forced to admit that it is foolish to destroy the entire human race over a deontological rule not to kill an innocent person no matter what the consequences. When the deontologist admits this, she is admitting that this version of deontology fails to resonate with her own moral intuitions when forced to address a more complex dilemma.


If you want to break simple consequentialism (utilitarianism), all you have to do is switch the analogy from merely flipping a switch to actually pushing a fat man to his death in order to save the many. Most consequentialists will be forced to admit that when violence is added to the equation, they are ready to abandon their consequentialism in favor of deontology. If that doesn't do it, there are many other moral dilemmas available that will do the trick. Check part 2 of this series ("Meta-Analysis of Meta-Wellbeing") for some popular examples.


If you want to break virtue ethics, you just need to present it with any ethical dilemma whatsoever. Virtue ethics does not provide any guidance as to how one should act, since it is a theory of inward attributes, not outward actions. A proponent of virtue ethics might say that in order to face a trolley problem, one must cultivate the virtue of courage in their life by avoiding the deficiency of cowardliness and refraining from the excess of rashness and foolhardiness. Once one has cultivated courage, they will be internally ready to face the ethical dilemma. But then what? What should a courageous person do? Perhaps jump in front of the trolley to sacrifice themselves to protect the five? What happens when you increase the size of the trolley so that heroism is no longer a solution? Virtue ethics runs out of ideas fast.




What Breaks?

Each time you attempt to break a meta-ethical theory, you tend to appeal to some examples of how the theory fails to accurately resonate with our collective moral intuitions. As humans, we are naturally appealing to the moral authority of our collective intuitions in order to judge a theory as good or bad. This inner standard is what everything is being measured against. The problem is that this inner standard is deeper, more complex, and more robust than our minds can comprehend moment to moment. It's like we have a deep reservoir of unconscious wisdom that we are trying to explore within ourselves. We are imperfect at introspection, so we can't fully see the shape of our internal wisdom each time we look for it. Collectively, we are like a group of blind people trying to feel the nature of an elephant in front of us. Each of us is feeling a different aspect of the elephant, and hence develops a different perspective as to how we can model the structure of the elephant. But, if we can merge all of our perspectives together, then we can get a more accurate understanding of the object of our exploration.




Challenges for Moralistic Resonance

Moralistic resonance relies on 1) the intuitions of 2) the majority of people as a standard by which to judge a theory. Different people have different temperaments, which influence their moral intuitions. A psychopath may find no problem with certain aspects of moral dilemmas, as their innate moral intuitions may be warped by their condition. The idea of resonance is the degree to which there is a steady connection of agreement between a group and an idea. The theory with the smallest number of people that disagree will have the greatest amount of resonance. Perhaps a moral theory will be unable to resonate with psychopaths, but if it can resonate with the rest of the population, that is good enough, until an even better theory comes along that resonates with both the majority of people and psychopaths.


Moralistic Synthesism as a Solution

The idea of a synthesis implies the act of joining things together, often in a synergistic way. When different populations of people disagree over something, there are often two opposing values that are causing the conflict (perhaps freedom vs safety). Perhaps deontology prefers freedom and consequentialism prefers safety. Neither the thesis (deontology) nor the antithesis (consequentialism) is a satisfactory end place for society because neither of them resonates with a sufficiently large swath of the population. Moralistic synthesism is about looking for a new way of viewing a conflict that allows us to conjoin two opposing philosophies together - a way to reconcile, compromise, include, unify, and transcend. Moralistic synthesism can be measured as the degree to which a moral theory synthesizes opposing factors or values.


Moralistic Synthesism in the Brain

Brain imaging studies conducted by Joshua Greene seem to show that, biologically, we all have moral tastebuds for both deontology and consequentialism. Since meta-wellbeing is all about making brains happy, the moral choice will involve whichever choice satisfies both our deontological circuit and our consequentialist circuit to a degree that establishes an equilibrium between the two. Meta-wellbeing implies moralistic synthesism in order to achieve moralistic resonance.


Joshua Greene has done a variety of experiments on the brain in order to understand how we cognitively process trolley problems. His work has largely found that we have 1) an emotional, fast, energy-efficient, autopilot, and ancient deontological circuit, along with 2) a rational, slow, energy-inefficient, calculating, and recently developed utilitarian circuit [6]. Philosophic "rationalists" like Socrates and Kant would be wrong to reduce morality to just the rational utilitarian circuit. Philosophic "sentimentalists" like Hume would be wrong to reduce morality to just the emotional deontological circuit. What actually happens during moral processing is a complex synthesis among many parts of the brain, and a cooperation between these two circuits [7].


Potential Areas to Apply Moralistic Synthesism:

  1. Deontology vs Utilitarianism

  2. Virtue Ethics vs Ethics of Care

  3. Left Brain vs Right Brain

  4. Emotion vs Rationality

  5. Skepticism vs Faith

  6. Progressivism vs Conservatism

  7. Cooperation vs Competition

  8. Self-love vs Selfless Love

  9. Freedom vs Safety

  10. Feminine vs Masculine

  11. Short-term vs Long-term

  12. Individualism vs Collectivism

  13. Role-conformity vs Self-expression



Moralistic Integrity

As said above, moralistic integrity is the degree to which a moral theory is not self-contradictory. Most people go about life without "self-examination" and are unaware of their many contradictory beliefs and actions. Philosopher Immanuel Kant produced one of the greatest moral theories for promoting moralistic integrity. In Kant's categorical imperative he states, "Act only according to that maxim whereby you can, at the same time, will that it should become a universal law." What he was aiming at with this categorical imperative was something like 1) "imagine all the rules that you want others to follow", 2) "recognize that you have the same duties as they do", 3) "follow the rules that you want others to follow". Kant was generating a philosophy based on the assumption that hypocritical morality had no moralistic integrity. How can we be mad at someone for stealing from us when we ourselves steal from others? This "external ought" that we would wish to apply to others, must necessarily be applied to ourselves as an "internal ought" if we are to maintain moralistic integrity.


Kantian style logic can similarly be applied to Sam Harris's axiom that wellbeing is good. We all necessarily have an "internal ought" of which we feel we "ought to maximize our wellbeing". Yet, similarly, other people also have their own desires to maximize their wellbeing. In order to not be hypocritical, we must accept that all types of wellbeing are important, not just ours.



Moralistic Goodness

In order to satisfy our intuitions about moralistic goodness, we must find the foundation of goodness and explain how it is not arbitrary. If meta-wellbeing is the foundation of goodness, we can find that this isn't arbitrary given that neuroscience increasingly has the ability to measure and quantify meta-wellbeing. If game theory is the foundation of goodness, we can find that this isn't arbitrary given that, within a game, each constraint (or rule) limits the set of optimal strategies. Given that our universe is a universe of entropy, there are many constraints on a viable social strategy. For example, humans require cooperation to survive. Murderous strategies are hence "not good" in all cultures, since they harm our basic strategy as a species. If one has access to all the variables that constrain a game, it is possible to calculate which strategy is best; hence game theory is not an arbitrary definition of goodness.
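As a rough illustration of that claim, here is a toy calculation in Python. The payoff table and the population mix are invented assumptions; the point is only that, once the constraints of a game are fully specified, the best strategy falls out of a computation rather than an opinion.

# A minimal sketch: with every constraint specified (an invented payoff table
# and an assumed population mix), the "best" strategy is computable.
payoffs = {                      # payoff to the row player in a one-shot symmetric game
    ("cooperate", "cooperate"): 3,
    ("cooperate", "defect"):    0,
    ("defect",    "cooperate"): 4,
    ("defect",    "defect"):    1,
}
strategies = ["cooperate", "defect"]

def expected_payoff(mine: str, population: dict[str, float]) -> float:
    """Average payoff of playing `mine` against a given population mix."""
    return sum(payoffs[(mine, other)] * share for other, share in population.items())

population = {"cooperate": 0.5, "defect": 0.5}   # assumed mix of opponents
best = max(strategies, key=lambda s: expected_payoff(s, population))
print("best strategy:", best)
print({s: expected_payoff(s, population) for s in strategies})

# In this one-shot toy game, defection scores highest; later sections of this
# post argue that adding repetition and retribution as further constraints
# changes the calculation in favor of cooperation.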



Reliability challenge

In order to demonstrate the reliability of an ethical framework, one must generate a convincing error theory to explain why other theories have gone wrong, and why we have not. An error theory must have 1) an explanation for the telos of morality, 2) an explanation for what an error is, 3) an explanation for how errors arise, 4) a means of measuring how much a moralistic process is subject to error, and 5) a means of measuring how good a job a moral theory does at accomplishing its purpose.


  1. Moralistic Telos - the innate purpose of morality

  2. Moralistic Error - definition of error

  3. Moralistic Error Generation - how errors arise

  4. Moralistic Invulnerability - the degree to which a moral theory is immune to error

  5. Moralistic Optimization - the degree to which a moral theory maximizes its goal/telos


Moralistic Telos

A telos implies an innate purpose built into the structure of something. The telos of a seed would be to grow into a plant. If we dig into our own introspection of morality, we can know that we have rational moral conclusions that are inspired by deeper moral emotions. We can dig further and realize that our rationality and emotions come from brain states. Brain states come from our biological states. Biological states come from our genetic make-up (nature) and the epigenetic/developmental impacts of our environment (nurture). To the extent our morality comes from our genetic make-up, that aspect of our morality comes from the forces of natural selection that determine which moralities survive and thrive. To the extent our morality comes from our environment, that aspect of our morality can be socially constructed by culture. To the extent that cultural norms are beneficial or harmful to survival, natural selection will also determine these socially constructed norms. Successful cultures will survive and thrive, unsuccessful cultures will disappear. If the nature and nurture aspects of our morality both boil down to natural selection, and natural selection boils down to game theory, that means that game theoretic principles determine which moralities will be more successful.



As evolution generates a variety of strategies, evolution will simultaneously generate a variety of moralities. The success of a morality will depend on how successful its underlying game theoretic principles are. Hence, the telos of morality is to express the most successful set of game theoretic principles the creature is able to comprehend.


Different types of moralities could be evaluated as game theoretic strategies with different aptitudes for success. Hence, moralities could be ranked into a hierarchy revealing the most successful morality.


It seems clear that -

P1) if a moral code is beneficial to the game theoretic strategy of the individual's genes, and

P2) evolution has the ability to program an individual with such a moral code (via genetic or cultural programming), then

C) evolution will program that individual to intuit that moral code as good (goodness naturally containing a positive motivational valence).


So, given these premises, the pure essence of our moral intuitions is merely that which evolution has inspired us with. In order to get us to actually follow beneficial moral codes, evolution must hardwire our wellbeing to be intimately connected with these moral codes. We must feel good when we do good things in order for us to have the biological motivation to actually be good. So, meta-wellbeing is a necessary implementation mechanism for game theoretic moral coding. So, since evolutionary pressures (both cultural and genetic) produce moral intuitions, evolutionary pressures define what is good. Hence, the purpose of goodness is to satisfy evolutionary pressures via the best game theoretic strategy.


The definition of goodness is inescapable from evolutionary pressures, since any creature/group that has a definition of goodness that runs contrary to evolutionary pressures will be destroyed by those pressures. Yet, evolutionary pressures are not so clearly defined so as to affirm only one set of moral principles. Reality is complex and there are multiple viable strategies for surviving and thriving. Not every deviation from optimal morality will be instantly destroyed. Some strategies maximize short-term benefits at the cost of the long term. It could be that certain strategies are very successful in the first millennium of their operation but start to backfire in the second millennium of their operation. It is not surprising that we find a variety of moralities in the world, each struggling forward. It is most likely the case that no group of humans has landed on the most successful morality, as none of us are competent enough to understand it. Yet, these evolutionary pressures are constantly guiding us towards it, nevertheless.


A key element to make clear here is that evolutionary pressures apply to genes, not individuals. The best game theoretic strategy for the individual is not necessarily the best game theoretic strategy for the genes. The genes can live distributed throughout a group. Heroism, defined as risking or sacrificing one's life for others, is a bad game theoretic strategy for the individual, but a good game theoretic strategy for the genes. If one carrier of the genes sacrifices himself to save five carriers of the genes, that is a net-win for the genes. One could imagine a tribal landscape where frequently the village hero must risk his life to defend women and children from a lion attack. Since genes operate on ratios, it would make sense that different sets of genes have different propensities for producing heroes. Perhaps one tribe's genes have a 1% rate of producing a hero. Another tribe's genes have a 0% chance of producing a hero. If two tribes of humans are competing, the tribe that contains heroes might grow at a faster rate than the tribe that contains no heroes, because the heroes are reducing total lion deaths within their tribe. The tribe that grows faster will eventually be the one that will win in a war, so these heroic genes would likely become a continuous aspect of tribal genes going forward. What this example shows is that a tribal morality that lacked a heroism element was an inferior morality to one that contained a heroism element.
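A toy, deterministic simulation can make the arithmetic of this example visible. Every rate below (attack frequency, hero mortality, birth rate) is an invented assumption, not an anthropological estimate; the sketch only shows how a small hero-producing tendency can flip a tribe from shrinking to growing.

# A toy version of the two-tribe heroism example. All rates are invented.
def simulate_tribe(hero_rate: float, years: int = 100) -> float:
    population = 100.0
    for _ in range(years):
        attacks = population * 0.02                 # assumed lion-attack rate
        heroes = population * hero_rate             # fraction of the tribe willing to intervene
        defended = min(heroes, attacks)             # each hero can stop one attack
        deaths = (attacks - defended) * 3.0         # an undefended attack kills 3 people
        deaths += defended * 0.2                    # a defending hero dies 20% of the time
        population = max(population * 1.05 - deaths, 0.0)   # assumed 5% birth rate
    return population

print("tribe with a 1% hero rate:", round(simulate_tribe(0.01)))
print("tribe with a 0% hero rate:", round(simulate_tribe(0.00)))

Under these assumed rates, the hero-producing tribe grows by roughly 1.8% per year while the hero-less tribe shrinks by roughly 1% per year, so after a century the first tribe is many times larger than the second.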


Moralistic Error

A moralistic error would be a moral intuition that fails to align with the optimal game theoretic strategy for the situation. Theoretically, there exists some optimal game theoretic strategy for all situations. Furthermore, there also exist a large number of suboptimal game theoretic strategies along a spectrum of varying degrees of strategic quality. The gap between an executed game theoretic strategy and the optimal game theoretic strategy is a measurement of moralistic error.
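Stated as a simple formula (my own shorthand, not the book's): if U(s) is the game theoretic payoff of an executed strategy s, and s* is the optimal strategy available in that situation, then

moralistic error(s) = U(s*) - U(s),

which is zero exactly when the executed strategy is optimal and grows with the size of the gap.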


Game theoretic strategies can be volatile - the optimal strategy can change when factors in the environment change. It makes sense that each culture would evolve a different morality, since there might be nuances to each area that require adjustments to their moral strategy. Not every moral gap between cultures or environments would be considered a moralistic error. It is imaginable that cultures that arose in deserts might have special moral rules revolving around the management of water, whereas cultures that arose in areas with plenty of water might completely lack these water-oriented moral intuitions. Cultures in one environment might require special rules to make their strategy work, whereas cultures in another environment won't need these rules for their strategy.


Moralistic Error Generation

Evolution can make mistakes. Evolution can randomly generate new moral codes as it explores reality with random mutations, but there is no guarantee that these new moral codes will be successful. Reality can be more complex than the moral equipment offered by evolution. Perhaps, just as evolution gave us a fallible "physics engine" for calculating ballistic movements of objects that allows us to play sports, evolution also gave us a fallible "moral engine" for calculating social strategies that allow us to cooperate. Natural selection has always been applied to the quality of our "moral engine", so it is reasonable to presume that our "moral engines" approach the optimal game theory. Yet, our moral intuitions will have a perennial bias towards "optimal game theoretic strategies for our ancient environment". Given the gap between our ancient environment and our modern environment, it is reasonable to assume that our moral intuitions are not fully calibrated to our modern environment. Also, the history of science and philosophy has had a bias towards parsimony, which results in reductive theories that crudely reduce our understanding of reality to one principle. This reductivism sacrifices accuracy for simplicity. Reductive moral conclusions are likely to produce errors, since the complexity of game theory cannot be reduced to one factor. Another source of harmful errors is a short-term timeframe bias. Each time harmful behavior, criminality, or atrocities are favored, a short-term benefit is being prioritized over long-term stability; this is a short-sighted game theoretic strategy.


Types of Error:

  1. Random mutations

  2. Fallibility of our "moral engine"

  3. Perennial bias

  4. Reductionism

  5. Short-term bias


Moralistic Invulnerability

Moralistic invulnerability is the degree to which a moral theory is immune to error. A theory is invulnerable to error to the degree it contains mechanisms for resolving or avoiding errors. An exploration of the types of mitigation factors available is shown below.





Moralistic Optimization

Moralistic optimization is the degree to which a moral code optimizes itself to meet its goals. Within the philosophy of ethics, different moral theories might have different goals and hence different optimization strategies. The goal of deontology is to abide by rules. The goal of consequentialism is to ensure the best consequences. Moral game theoreticism, as proposed here, would define morality as an exploration of which principles maximize game theoretic success of the genes.


When assessing approaches to morality, game theoretic success will be the judge that determines the path forward. When debating whether our moral theory should prioritize moral emotional instincts over moral cognitive conclusions, the right answer is to prioritize the intuition that maximizes game theoretic success.


For example, on the abortion issue, it makes sense that we have a perennial emotional bias to want to protect fetuses, since, throughout our evolutionary history, children have been very important to our game theoretic strategy. Yet, in our modern age, we face the risk of overpopulating our planet. Our perennial instincts no longer represent the best game theoretic strategy forward. We can cognitively deduce that if we overpopulate ourselves, we may face a self-destruction event, which is very bad for our game theoretic strategy. Our rational conclusions give us a way to adapt to our new environment in ways that our perennial emotions can't. To the extent deep moral instincts sabotage our game theoretic strategy, they must be suppressed by rational moral forces.


Additionally, time frame plays a huge role in game theoretic optimization. A game theory could be optimized for the present moment, for the short-run, for the medium-run, for the long-run, or for perpetuity. As addressed above, a short-term bias accounts for a moralistic error. The reason for this is that true optimization must sum up success over all time frames. A short-run strategy of crime may net a game theoretic advantage in the short-run, but it fails in the long run due to retribution. If you measure up all of the gains and losses of a crime strategy over eternity, the short-term benefit is eclipsed by eternal game theoretic failure when retribution results in death or life in jail. The punishment destroys any future evolutionary success; hence a short-run strategy fails at optimizing a game theoretic strategy.
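A toy calculation makes the summation point explicit. The payoffs below are invented; the only claim illustrated is that a strategy must be scored over its whole horizon, not just its opening move.

# Toy payoffs (invented) showing why a strategy is scored over its whole horizon.
def lifetime_value(payoffs_by_year: list[float]) -> float:
    """Sum a strategy's yearly game theoretic payoffs over its whole horizon."""
    return sum(payoffs_by_year)

# A crime strategy: a large gain up front, then retribution (prison, lost
# reproduction) turns every later year negative.
crime = [+50.0] + [-10.0] * 39

# An honest cooperative strategy: modest but steady gains every year.
cooperate = [+3.0] * 40

print("crime strategy, summed over 40 years:    ", lifetime_value(crime))      # 50 - 390 = -340
print("cooperate strategy, summed over 40 years:", lifetime_value(cooperate))  # 40 * 3 = +120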


If we analyze the course of evolution, it has been a constant battle between competitive tendencies and cooperative tendencies. Competitive tendencies were manifestations of short-term strategies that tried to gain an advantage over others. Each time these competitive tendencies manifest, the victims of abuse will begin to evolve retributive strategies to punish those who abuse them. Examples of this can be found in bacteria, insects, chickens, or humans.


One type of bacteria may evolve a mechanism to eat faster than all the competition. This initially makes it much more successful than regular bacteria. But then a more vicious bacteria will evolve to generate poison to destroy the others, igniting an evolutionary arms race, where the other bacteria scramble to respond to this new threat. The bacteria that evolved to eat faster end up dying faster because they eat up all the poison. Yet another type of bacteria will evolve immunities to the toxins. This results in a rock-paper-scissors stand-off: the poisonous bacteria can overcome the gluttonous bacteria one on one, the gluttonous bacteria can overcome the immune bacteria one on one, and the immune bacteria can overcome the poisonous bacteria one on one. It turns into a stalemate, because each type of bacteria wastes excess energy on its strategy, and their efforts cancel each other out. The bacteria arms race shows that competition results in energy wasted on fighting, and perhaps bacteria that avoided the conflict altogether would be more productive [8].




In fact, the evolution from single-celled organisms (bacteria) to multicellular organisms looks like a transition from competition to cooperation. The mitochondrion (energy generator) within each of our cells appears to have once been a separate bacterium, but somehow it got absorbed into our cells, creating a symbiotic relationship of mutual aid and cooperation. Afterwards, more cells started cooperating, until they became multicellular harmonious entities. These multicellular organisms were able to survive and thrive because this cooperation gave them an advantage. The evolutionary trend has been towards greater and greater cooperation of cells, as large mammals were eventually produced.


In fly experiments, researchers have found examples of male flies evolving poisonous sperm. The game theoretic advantage given to the male flies was in their sperm's ability to kill the sperm of competing males that were involved with the same mate. By killing competing sperm, the poisonous fly was able to achieve reproductive superiority. The only problem was that the poison also harmed the mother, so it was a short-term strategy. The mothers eventually evolved ways to neutralize the poison, so the entire competitive enterprise was a waste of energy for the group. Other competing populations of flies would easily outcompete their strategy, since cooperative fly populations don't need to waste energy on poisons and poison neutralizers. This example shows how short-term strategies that operate on individual selfishness can harm the group, and groups without internecine conflict will be more successful.


Within chickens, breeding experiments looked for the difference between 1) breeding from only the best egg-laying chicken within each coop, and 2) breeding only from the best egg-laying coop of chickens among all the coops. This tested the gap between individual selection and group selection. At the individual level, the best egg-laying chicken was often the most aggressive female, who beat the other females into submission so she could take the greatest proportion of the food. At the group level, the best egg-laying groups of chickens were the cooperative and peaceful groups that wouldn't harm each other, but rather let each other take their fair share of food. Generations of chickens born from the cooperative groups were far more productive at laying eggs than the generations born from the aggressive females, since a coop of aggressive females resulted in lots of internecine conflict that harmed collective health and egg-laying capacity. This shows that the short-term strategy of aggression is not a successful strategy in the long run.
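The contrast between the two breeding schemes can be sketched as a small simulation. All the parameters below (egg counts, aggression costs, coop sizes) are invented stand-ins rather than data from the actual experiments; the sketch only shows how selecting whole coops rather than star individuals pushes the population in the opposite direction.

# A toy model of individual vs group selection in egg-laying hens.
# All parameters are invented; aggressive hens gain a small personal bonus
# but reduce the output of every coop-mate.
import random

EGG_BASE = 10.0        # assumed eggs laid by a peaceful hen
AGGR_BONUS = 2.0       # assumed personal gain from being aggressive
AGGR_COST = 1.0        # assumed eggs each aggressive hen costs every coop-mate
COOP_SIZE = 9

def personal_eggs(hen: int, coop: list[int]) -> float:
    """Eggs laid by one hen (hen = 1 if aggressive, 0 if peaceful)."""
    others_aggression = sum(coop) - hen
    return EGG_BASE + AGGR_BONUS * hen - AGGR_COST * others_aggression

def run(group_selection: bool, generations: int = 15, seed: int = 0) -> float:
    rng = random.Random(seed)
    coops = [[rng.randint(0, 1) for _ in range(COOP_SIZE)] for _ in range(20)]
    for _ in range(generations):
        if group_selection:
            # Breed the next generation from the most productive coop as a whole.
            parent_coop = max(coops, key=lambda c: sum(personal_eggs(h, c) for h in c))
            pool = parent_coop
        else:
            # Breed from the single most productive hens, wherever they live.
            hens = [(personal_eggs(h, c), h) for c in coops for h in c]
            pool = [h for _, h in sorted(hens, reverse=True)[:COOP_SIZE]]
        coops = [[rng.choice(pool) for _ in range(COOP_SIZE)] for _ in range(20)]
    return sum(personal_eggs(h, c) for c in coops for h in c) / (20 * COOP_SIZE)

print("group selection, eggs per hen:     ", round(run(True), 1))
print("individual selection, eggs per hen:", round(run(False), 1))

Under these assumptions, individual selection quickly fills the coops with aggressive hens and output collapses, while group selection converges on peaceful coops with much higher output per hen, mirroring the direction of the experimental result described above.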


Within humans, we can see the same trend away from small group competition and towards large group cooperation. The history of civilization is a history of evolution towards larger and larger scopes of cooperation - from families to tribes, to cities, to states, to nations, to empires, to religious hegemonies, to global orders. This is showing again that cooperative strategies are much more successful than competitive strategies in the long run. It seems apparent that game theory is naturally attracted towards more cooperation, more love, and more wellbeing. This means that we can measure the optimization level of our morality by seeing how large a scope of cooperation, love, and wellbeing it can support.



What all of these examples show is that the trend of evolution is towards greater and greater scopes of cooperation, love, respect, and unity. As long as time is unlimited, greater scopes of cooperation will defeat narrower scopes of cooperation. Hence, we can use moral scope to measure the optimization of a morality. Moralities that focus on smaller scopes are inferior to moralities that focus on larger scopes, given that, in the long run, larger scopes are more successful.


Optimization vs Inclusion

One might critique this perspective by introducing the moral inclusion of other animals within the scope of our morality. Most people of our modern age would agree that it is morally superior to incorporate the wellbeing of animals within our moralities. Yet, the critic might claim that this intuition contradicts the premises of moral game theoreticism, given that an optimization function for human genes does not require the moral consideration of other animals. There are many reasons why this is untrue.


First, we have a symbiotic relationship with the animals of the earth. Their extinction is to our disadvantage. We rely on a variety of animals for food, labor, medicine, pollination, fertilizer, scientific research, etc.


Second, enmity between human genes and animal genes can result in evolutionary arms races to defeat each other. The more pressure we put on other species, the more likely they are to evolve ways to combat the pressures we apply to them.


Third, the evolutionary temperaments required for humans to abuse animals include 1) lack of empathy, 2) malice, 3) sadism, 4) moral contradiction. Only those who lack empathy will be able to abuse animals. Malice and sadism motivate abuse. The ability to act in a morally contradictory way enables a human to simultaneously understand the principle of "I don't want to suffer", yet fail to respect the fact that "animals don't want to suffer". All of these attributes that allow for animal suffering are attributes that cause internecine conflict within human tribes.


Perhaps evolution could find a way to program us with moral contradictions by having 1) empathy for humans but apathy towards animals, 2) geniality towards humans and malice towards animals, 3) compersion towards humans and epicaricacy towards animals, 4) moral integrity on human issues, moral contradiction on animal issues. We would perhaps have a successful human game theoretic morality, despite containing the capacity for great abuses of animals. Yet, this would just be to human advantage in the short run. In the long run, abused animals may be able to attack back in some version of evolutionary karma, where they evolve to emit certain poisons or diseases that wipe us out. The best long term game theoretic strategy for humans is to find ways to cooperate with other animals so that it isn't in their evolutionary interests to take revenge on us.



Why Meta-Wellbeing is Good

As we have discussed, evolutionary pressures push morality towards the most ideal game theoretic strategy. In this sense, evolutionary pressures are defining the good. Evolution's means of communicating the good is via wellbeing. A creature knows that pain is bad and to be avoided. What they don't know is that the reason pain is bad is because pain signifies a behavior that harms the game theoretic strategy of its genes. Similarly, a creature knows that pleasure is good, and to be sought out. What they don't know is that the reason pleasure is good is because pleasure signifies a behavior that helps the game theoretic strategy of its genes.


Simple wellbeing at the level of pleasure and pain (hedonism) is the best game theoretic strategy simple creatures can accomplish. But humans are more advanced. While monkeys may fail a variant of the marshmallow test on delayed gratification, many human children can pass the test in order to get a greater reward in the future. This means that on top of our wellbeing system is a meta-wellbeing system that is analyzing pleasure across time. While our simplistic animal wellbeing motivations want the short-term reward, our "meta-wellbeing circuit" is calculating the fact that delayed gratification is a superior strategy for maximizing wellbeing, and hence a better game theoretic strategy.
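One way to picture the two systems is as two discount rates applied to the same choice. The numbers below are invented; the sketch only shows how a heavy per-minute discount picks the immediate marshmallow while a light discount picks the delayed pair.

# A toy comparison (numbers invented) of an impulsive circuit that heavily
# discounts the future versus a "meta-wellbeing" circuit that barely does.
def discounted_value(reward: float, delay_minutes: float, discount_per_minute: float) -> float:
    return reward * (1 - discount_per_minute) ** delay_minutes

one_now   = ("one marshmallow now",    1.0,  0.0)
two_later = ("two marshmallows later", 2.0, 15.0)

for label, circuit_discount in [("impulsive circuit", 0.10), ("meta-wellbeing circuit", 0.01)]:
    values = {name: discounted_value(r, d, circuit_discount) for name, r, d in (one_now, two_later)}
    choice = max(values, key=values.get)
    print(f"{label}: chooses {choice}")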


Perhaps our short-term animalistic motivations desire to do harsh things to those we don't like. But then it is likely that we feel regret and remorse over our anti-social actions. This signifies that a greater wisdom located at the meta-wellbeing level has noticed that we have sacrificed the long-run benefit of a good reputation for the short-term reward of punishing someone we don't like. The long-run wisdom at the meta-wellbeing level most likely desires to punish the rest of the brain for allowing the short-term hedonistic brain circuits to make that decision. This pain of regret is likely meant to reprogram the brain to be more careful next time and avoid trading the long run for the short run again.


Meta-wellbeing is split between bodily wellbeing and moralistic wellbeing. While evolution programs hedonistic wellbeing to optimize bodily success, evolution programs moralistic wellbeing to respond to game theory implementation. Humans are social creatures, so the structure of our society is very important to us. We want to raise our children in societies that will be friendly to them. Hence, prima facie, promoting a friendly society is obviously a better game theoretic strategy for raising children than promoting a violent society.



These evolutionary pressures will then reward people who have the emotions needed to promote friendly societies. It isn't farfetched to imagine that our minds have evolved to model the type of social strategy they would like implemented in the world. Or in other words, we evolved to each imagine our own version of a utopia. Our moralistic wellbeing would then be programmed to attach to this vision of a utopia. If we desire to live in a world where children are not bullied on the playground, upon the sight of bullying, we will sense a gap between our ideal society and the actual society before us. This will trigger a negatively valenced emotion that will give us the desire to brainstorm ways to undo this negative aspect of our current reality and push reality towards our ideal game theoretic utopia. Hence our moralistic wellbeing can include all sorts of features: ideal rules (deontology), ideal virtues (virtue ethics), and ideal consequences (utilitarianism). Our brains will attempt to do the math as to how to prioritize each of these aspects of their desired utopia.

Yet, it is obvious that the ideal game theoretic strategy is too complex for anyone to calculate. If evolutionary pressures molded the structure of moralistic wellbeing to approach the best utopia it can imagine, then that means that moralistic wellbeing is attempting to track ideal game theory. Since moral calculations trigger errors at the individual level, relying on the intuitions of single individuals (monarchies/religions) will not provide the most ideal game theory. Increasing the scope of moralistic wellbeing to collective moralistic wellbeing would be the way to approach the best game theoretic strategy.


We need not worry about the gap between moralistic wellbeing and bodily wellbeing, since moralistic wellbeing should naturally include the considerations of bodily wellbeing. Since these forces inform and assist each other, the umbrella term of meta-wellbeing would be the more useful term to encompass the entire scope of the moral enterprise.



Goodness is Meta-Wellbeing, but is Meta-Wellbeing Good?

G.E. Moore formulated the "open question" argument, meant to challenge anyone who tries to define goodness. He basically says that if something (X) really were equivalent to goodness, then the question "Is 'X' good?" would be trivial and uninformative, like asking "Is goodness good?". If the question remains open (if we can find a contradiction, perhaps something good that 'X' does not include), then this definition of goodness is debunked.


If we reduce goodness to "simple pleasure", "rules", or "virtues", we can find plenty of contradictions because there are many things that we consider "good" that are outside the scopes of these definitions.

But if we reduce goodness to an all-encompassing concept like "complex pleasure", "meta-wellbeing", or "game theory", we will find that these definitions hold. The only way something can be "good" is if it falls within some scope of pleasure, for if anyone thinks 'X' is good, they must have pleasurable emotions towards 'X'. Rules, virtues, and consequences are all forms of psychological pleasure in the minds of those who value them. Since evolution determines what is pleasurable, the best game theoretic strategy will contain the most enduring source of pleasure.

Each person may have a different definition of goodness, deriving pleasure from different things. There may be multiple levels of "psychological goodness" as people learn to evolve their psychology from self-oriented moralities towards universal moralities. The intentions (psychological) of someone's mind may be good, but the results (ontological) they bring to the world may be bad. Evolution will slowly guide all of our psychologies towards the best form of goodness.



Is Fake Wellbeing Good?

A philosophic dilemma meant to challenge the primacy of pleasure in wellbeing-oriented philosophies is called the "experience machine" or the "pleasure machine". Philosopher Robert Nozick posited the existence of a machine that would have the ability to enchant the mind with delusions of maximal pleasure. Nozick argues that if pleasure is the only thing that is important to us, then we will have no qualms with the proposition to enter into a virtual reality and live out the rest of our lives within a manufactured fantasy of happiness.


The key problem with this dilemma is that it reduces the idea of pleasure to its most simplistic form - simple pain and pleasure. It does not include the pleasure of knowing that you embody the virtue of bravery in embracing a true reality with the ability to effect true change. A common problem in human thinking is a desire for parsimony - the idea of reducing our explanation of things to just one simple factor. Moral psychologist Jonathan Haidt is an outspoken critic of the epistemic virtue of parsimony, specifically within the realm of morality. Haidt puts forward a model of morality that includes five moral dimensions, namely 1) care/harm, 2) fairness/cheating, 3) loyalty/betrayal, 4) authority/subversion, and 5) sanctity/degradation. The concept of meta-wellbeing (consistent with the type of wellbeing put forward by Sam Harris) is inclusive of ALL the values within the brain.


The mere fact that the brain values truth (reality) might be enough to reject the pleasure machine. For people who voluntarily reject the pleasure machine, this rejection would be evidence that their brain values something other than "simple wellbeing". The pleasure obtained only by possessing the truth might alone be a great enough factor to surpass the pleasure obtainable by the pleasure machine.



Of course, the pleasure obtained by possessing the truth can be illusory. It is possible that the pleasure machine can likewise generate the pleasure of the illusion of possessing the truth. Before entering the machine, we will know that the machine cannot give us the pleasure of possessing the real truth. Our greater knowledge might reject the machine on this account. Yet, once within the machine, it might be harder to judge whether one should be exposed to the truth and lose the pleasure derived from the illusion of possessing it. As someone who used to be a devout theist, I have experienced the pleasure derived from the illusion of understanding a reality based on a religious fantasy. Having to detach my brain from this fantasy disconnected me from the pleasure derivable only from that fantasy. Yet, I can affirm that for me, the pleasure of knowing that I was approaching a more accurate version of the truth was preferable to remaining within a fantasy. I would never choose to go back and allow my brain to reenter the enchanted fantasy of believing there was a loving God hovering over my shoulder at all times. Despite the comfort and encouragement this fantasy provided, truth is almost infinitely more important. The only way we can have an optimal game theoretic strategy is if our strategy is connected to the truth. Hence, the deepest purpose of our wellbeing rests on truth.



Criticisms of Evolutionary Moral Game Theoreticism:

“However, empirical research on GT outside of perfect competitive market interactions shows that ‘game-theoretic predictions based on the self-regarding actor model generally fail. In such situations, the character virtues, as well as both altruistic cooperation (helping others at a cost to oneself) and altruistic punishment (hurting others at a cost to oneself) are often observed’” [9].

This criticism of game theory fails because it makes a false assumption that evolutionary moral game theory is focused on individual success, instead of the success of the genes.



Check out part 4 here:




References:


[1] Wikipedia. "Reflective equilibrium." Last modified 2021. https://en.wikipedia.org/wiki/Reflective_equilibrium

[2] Smyth, N. (2017). "Moral Knowledge and the Genealogy of Error." The Journal of Value Inquiry, 51, 455-474. https://philarchive.org/archive/SMYMKA

[3] Faraci, D. (2019). "Moral Perception and the Reliability Challenge." Journal of Moral Philosophy. https://davidfaraci.com/pubs/perception2.pdf

[4] Dogramaci, S. (2016). "Explaining Our Moral Reliability." https://philpapers.org/archive/DOGEOM-4.pdf

[5] Bourget, D. & Chalmers, D. J. (2014). "What do philosophers believe?" Philosophical Studies, 170(3), 465-500. Accessed 3/3/2022 at https://philpapers.org/surveys/results.pl?fbclid=IwAR1yvFWsXprHxMBOXW2D5nXFjOknM6je1LrPk-k5lBPlnloWyXIv2ty-z7M

[6] Greene, J. (2022). "Moral Cognition." https://www.joshua-greene.net/research/moral-cognition

[7] Saalfield, P. (2012). "The Biology of Right and Wrong."

[8] Biernaskie, J. M., et al. (2013). "Multicoloured greenbeards, bacteriocin diversity and the ..."

[9] Alonso-Bastarreche, G. & Vargas, A. I. (2021). "Gift Game Metatheory: Social Interaction and Interpersonal Growth." https://www.frontiersin.org/articles/10.3389/fpsyg.2021.687617/full



