An explication of Sam Harris's philosophy within his book - "The Moral Landscape"
Part 2
Before we get into the meta-ethics of meta-wellbeing, be sure to check out the definition of meta-wellbeing in Part 1.
META-ETHICS
Meta-ethics is a sub-branch of axiology, the study of values. By defining wellbeing as “all human values,” Sam essentially folds all of axiology into his definition of wellbeing. It’s almost like saying that the definition of goodness is “all the good things”. It’s a sneaky answer, but maybe a useful one. Meta-ethics is a narrower inquiry within the study of values, specifically concerning moral values (not the value of art, for example). One of the most important questions meta-ethics asks is “What is the nature of moral judgments?” Meta-ethics is trying to figure out the root of morality. Where does morality come from? What is the foundation of morality?
Is morality a magical knowledge of good and evil given to us by magical apples? Is morality an evolutionary instinct? Is morality an eternal law written into the fabric of reality by a God? Is morality based on subjective feelings? Or is morality objectively measurable? Is it based on the arbitrary opinion of a god, or does it come from deeper facts about reality? As I discussed in my other blog post, I believe the answer to this question is evolution [The Naturalistic Root of Morality | TranscendentPhilosop].
Evolution determines game theory, game theory determines the brain, and the brain determines wellbeing. Sam Harris makes it clear that he doesn’t intend to reduce wellbeing to evolutionary drives [1]. Personally, I think that by delineating this four-step process (between evolution, game theory, psychology, and wellbeing), we can understand that wellbeing is not the same as evolution, but they do influence each other. The lack of wellbeing in a creature (sickness and death) causes a creature to fail the game of evolution. The possession of wellbeing in a creature (family and food) causes a creature to succeed at the game of evolution. Morality is the rules of the game that evolution has programmed into our brains to give us the best chance at wellbeing, and hence success. If a certain moral code fails to provide wellbeing, that moral code will disappear. If a moral code succeeds, it will propagate.
Later on, when we start discussing religion, we will see how we can compare a morality based on selfish evolutionary goals and a morality based on meta-wellbeing. The simple answer seems to be that there are two types of game theory strategies, short-term and long-term. Short-term game theory focuses on getting a quick advantage. Long-term game theory focuses on building a stable society for the propagation of genes into the future. A short-term view of evolution might make things like murder, rape, and theft seem like good strategies. But a long-term view of evolution knows that these things harm the structure of society. A smart system of evolution would program our wellbeing to increase when we follow a long-term strategy and punish us when we utilize short-term strategies. Hence, wellbeing should be attuned to the best long-term strategy evolution is able to come up with.
SIMPLE MORAL PROBLEMS – is meta-wellbeing the answer?
Simple Morality is Consistent with Meta-wellbeing
When we think about simple moral universals, it seems obvious that the aim of each moral principle is to make the world a better place by increasing sustainable meta-wellbeing over time. For example, when people lie, not only are they destroying their own relationships, but they are also destroying the social fabric of trust. Each deceitful person harms our collective ability to trust each other. This makes the entire society a worse place because no one can have the peace of mind that their wellbeing will be secured by others’ commitments. People will lose the ability to trust each other to fulfil their end of the deal economically. People will lose the ability to trust each other as sources of reliable information. This destabilizes a society both economically and politically. The end result of this could easily be a civil war with massive amounts of bloodshed – all because too many people decided that honesty wasn’t important. The end result of this immorality is a huge hit against meta-wellbeing.
COMPLEX MORAL PROBLEMS – is meta-wellbeing the answer?
Complexity Arises Due to Wellbeing Trade-offs
When we get to complex moral issues, the kind that are frequently debated, we find that the reason they are complex is because the decision is no longer between one good option and a bad option, but between two good options. Abortion, in part, is a struggle between the wellbeing at the level of the lifestyle of the mother, father, and child, and then on the other hand, the wellbeing of the fetus with respect to the pain it experiences and the damage to its precious life. The pro-abortion camp might have fears about the wellbeing of victimized girls, especially in extreme cases of rape, sex trafficking, incest, or medical complications. They might also fear the harm done to women who feel forced to perform a non-medically supported abortion. The anti-abortion camp may fear the damage done to the idea of the human right to life and the damage to the idea of the sanctity of life. This fear boils down to a concern for the wellbeing of all human life when it is treated so carelessly. They might also fear the harm done to the emotional wellbeing of the mother who has to live with the knowledge of her abortion. There might also be a religious concern regarding the spiritual wellbeing of people involved in committing abortion sins. Almost all the concerns involved can be reduced to different ways to measure wellbeing. The problem ends up being an argument about which policy maximizes wellbeing best.
Deontology Creates Stubborn Perspectives
Often these issues don’t feel like an argument regarding which policy provides the most wellbeing. I would argue that political centrists are more likely to view the problem with a utilitarian lens of wellbeing and look for a balanced solution. I would further argue that the far-right and the far-left begin to abandon the flexibility of utilitarian philosophy as they begin to dive into their own respective deontological philosophies. The more deontological a perspective becomes, the more dogmatic and the less flexible it will be. Deontology is kind of like the stubborn insistence on a certain rule. For the abortion argument, the far-left deontologists might be those who believe that women have a right to abortion, regardless of the circumstances. The far-right deontologists might be those who believe that the fetus has the right to life, regardless of the circumstances. This appeal to rights is a deontological philosophy that maintains a stubborn stance on an issue. This dogmatic perspective makes compromise difficult.
Deontology is Actually Consequentialistic
I believe that any deontological perspective can become self-defeating when the stakes are raised high enough. Perhaps a far-right perspective might say abortion is never justified, even if the mother will die during pregnancy. But if you raise the stakes further, implying that the child is going to suffer from severe diseases; perhaps they will spread their diseases to others; perhaps the disease will start a zombie apocalypse; perhaps the fetus is destined to become the next Hitler and ignite a World War 3 scenario bringing hell to the earth - perhaps abortion will be justified in such an extreme case. If deontology breaks down at a certain point, that is evidence that there is an element of wellbeing optimization behind these rules. The deontologist subconsciously believes that the deontological rule will make the world a better place, hence maximize collective wellbeing. But if stubborn insistence on the deontological rule will make the world a worse place by harming collective wellbeing, an open-minded person might be inclined to make exceptions to their rules. If the person refuses to budge on their deontological rule, it is probably because they don’t really believe that the consequences of adhering to the rule will be so bad. They are choosing to have faith that the rule will bring good consequences. And even if the rule doesn’t bring good consequences, their desire for the rule to be observed is evidence that they prefer the consequence of a world governed by said rule, over a world not governed by said rule. The wellbeing pursued in this instance might be their psychological comfort in knowing the world operates on rules that they like. In the end, I would argue that even deontological rule-based philosophical systems are consequentialist in nature and appeal to meta-wellbeing.
Moral Hypocrisy Leads to Unbalanced Wellbeing
But an important thing to note is that just like brains’ attempts to calculate physics can be wrong, brains’ attempts to calculate wellbeing can be wrong as well. The brain suffers from many optical illusions that can introduce visual biases that skew their understanding of physics. Similarly, our brain suffers from many selfish biases that can skew our personal calculation of wellbeing. Most people have a bias towards increasing the wellbeing of their own children at a much higher priority than that which they apply to orphans. People have a bias to optimize the wellbeing of their own tribe, religious group, race, or nationality, at the sacrifice of outgroups. Philosophically this is unjustifiable because of its hypocrisy. Each life is of equal metaphysical value, so we must work as a collective to undo these biases so that we can pursue wellbeing in an accurate way.
RELIGIOUS MORAL PROBLEMS – is meta-wellbeing the answer?
Wellbeing vs Game Theory
In this painstaking effort to map out a variety of religious imperatives, we can mentally intuit how each factor will impact a society. Each imperative may have its own pros and cons. We can roughly weigh the pros against the cons to see if a religious imperative poses a net gain or a net loss to collective meta-wellbeing. Then we can additionally weigh the benefits of a religious imperative to an evolutionary game theoretic strategy to increase group strength so as to win in competition with other groups. Game theory doesn’t care about wellbeing; it just cares about winning the game of evolution. For example, Nazism employed genocide and empire building as an attempt to win the game of evolution. Nothing about WW2 was motivated by wellbeing. Often religious imperatives carry an embedded evolutionary strategy behind them. For example, promoting family values might help your group birth more children which will boost group strength for winning the next war. Raising children can boost wellbeing, but children can also help you win the game, so this religious imperative might provide a win/win for both wellbeing and game theory.
Moral Intuitions Choose Wellbeing over Game Theory
If you look through the list of religious imperatives, you may find that the imperatives that improve wellbeing are naturally attractive to you. The ones that harm wellbeing are naturally unattractive. Yet, often it is the case, as shown in this list above, that the interests of wellbeing and the interests of game theory can conflict. You will probably intuit, as do I, that whenever there is a conflict between wellbeing and game theory, the right answer is to promote wellbeing and the wrong answer is to promote game theory. This is because game theory, in its crudest form, is a short-term selfish strategy for winning the game of evolution. It would seem to me that selfish tricks and strategies are naturally disgusting to morally minded people. I believe this is because we have a deeper and more successful long-term strategy built inside our evolutionary wisdom that recognizes the stupidity of selfish strategies.
Evolution of Meta-wellbeing
Just as our physical bodies are smelted in the furnace of evolution and molded by the pressures of natural selection, our emotions likewise evolved from these same pressures. To me, it makes sense that the most successful emotions will be ingrained most powerfully and deeply within us. The least successful emotions will be the most superficial and overridable. For example, if you have ever experienced a VR setting at the top of a skyscraper, you most likely have felt the cognitive desire to push the limits of your experience and step off of the ledge because you know that in actuality you are safe and won't die. Yet, a deeper survival instinct activates. An irresistible drive for self-preservation turns on and you find yourself utterly powerless to take a step off the ledge. This deeper instinct overrode your cognitive wishes, obviously because this instinct helped many of your ancestors survive. Of course, the desire for self-preservation is a component of meta-wellbeing. If we add up all of these deep desires within us, we get a better picture of what our evolutionary wisdom desires to see implemented in the world. These deep desires represent the most reliable strategies we have been able to develop over millions of years. These deep moral desires for wellbeing represent the most far-sighted strategies we have. Our religious, philosophical, or tribal moral perspectives are more likely to be cognitive derivatives that produce more short-sighted strategies. Evolution likes to randomly play with different strategies to see if it can find some advantage. But usually, whenever one group implements an aggressive strategy, the other groups will implement defensive strategies. These aggressive strategies usually get countered, and evolution finds them to be inefficient. For example, a murderous strategy is often punished by revenge or jail-time, so this strategy severely backfires and harms the unit’s ability to reproduce. 
At the group level, Hitler implemented this type of short-term strategy to see if the Germans could get an evolutionary advantage with violence, genocide, and appropriation of resources. But Hitler’s efforts were foiled, and the Germans were evolutionarily punished for their aggressive strategy by their global neighbors.
Meta-wellbeing Includes Equality
The first conflict between wellbeing and game theory on the list is the issue of slavery. Slavery can be viewed as a selfish evolutionary strategy to force one set of genes to work for another set of genes. If the genes are operating at a tribal level, it makes sense that they will want to try a selfish strategy to see if they can give their genes a benefit at the cost of foreign genes. But, over millions of years of evolution, we have a deeper moral emotion to see each other as equals. I would argue that this deeper emotion is the smarter emotion. When tribal genes try to take advantage of foreign genes, they create bad evolutionary karma. Perhaps they harbor a resentful slave population that will eventually revolt. In the short-run this strategy might be beneficial, but in the long-run it is self-destructive. Over millions of years, treating each other as equals was most likely a more sustainable strategy than trying to force others to obey you, hence equality got ingrained into us at a deeper level.
Selfish Strategies Harm Long-term Meta-wellbeing
Similarly, individual perspectives on wellbeing match that which is helpful for our evolution. If food, drink, and sleep are helpful to our evolution, we will naturally hate hunger, thirst, and sleep-deprivation. If friendship, romance, and family-time are beneficial to our evolution, we will naturally value them as a part of our meta-wellbeing. If we want our tribe to succeed evolutionarily, we will want to optimize the meta-wellbeing of each individual in the tribe. If we want all humans to succeed evolutionarily, then we need to optimize meta-wellbeing for all humans. Anything less than an equal desire for meta-wellbeing for all humans would be a short-sighted evolutionary strategy that allows group selfishness to be more important. This group selfishness is what leads to war – a self-destructive strategy.
Meta-wellbeing is our Best Long-term Game Theory
What this means is that our instinct for collective meta-wellbeing is actually representative of the smartest, most far-sighted evolutionary wisdom we have. Every time we go against it, we are employing a short-sighted strategy for temporary advantage. Collective meta-wellbeing is equivalent to our best game theoretic strategy for the long haul.
TRADE-OFFS – is meta-wellbeing the answer?
Pleasure vs Pain
When it comes to the details of a meta-wellbeing calculation, inevitably the quantification of pleasure and pain will come to the forefront. This leads to questions of measurement and priority. Mathematically, measurable factors can be boosted in importance by applying a weighting factor to them. For example, if action ‘A’ causes 10 units of pain to Bob, but delivers 100 units of pleasure to Suzie, a utilitarian calculation would say that action ‘A’ is good because it gives the world a net of 90 units of pleasure. But a weak negative utilitarian would say that pain should be weighted much more severely since it is morally much worse. In order to make action ‘A’ morally bad, we would need to multiply the 10 units of pain by a factor of 11 to increase its badness beyond the goodness of the pleasure Suzie derives. 110 weighted units of pain is greater than 100 units of pleasure, so we can calculate action ‘A’ to be bad.
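The weighted calculation above can be sketched in a few lines of Python. All the unit values, names, and the weight of 11 are the example numbers from this paragraph, not measured quantities:

```python
def net_value(pleasure_units, pain_units, pain_weight=1.0):
    """Utilitarian sum of an action's effects.
    A pain_weight > 1 models weak negative utilitarianism,
    where pain counts more heavily than pleasure."""
    return pleasure_units - pain_weight * pain_units

# Classical utilitarianism: action 'A' looks good (net +90).
classical = net_value(pleasure_units=100, pain_units=10)

# Weak negative utilitarianism, weighting pain 11x: action 'A' looks bad (net -10).
weighted = net_value(pleasure_units=100, pain_units=10, pain_weight=11)
```

The single `pain_weight` parameter is where the whole philosophical disagreement lives: a classical utilitarian sets it to 1, a weak negative utilitarian sets it higher.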
Sam Harris is confident that the future of neuroscience will help us measure these factors more accurately. Pain is often associated with the neurotransmitters glutamate and substance P. Pleasure is often associated with dopamine. Emotional contentment is often associated with oxytocin. Theoretically we will be able to measure which neurotransmitters are more valuable at the level of decision-making in the brain. Perhaps we can do experiments where a brain gets to accept or reject a reward conjoined with a punishment. Perhaps pressing a button will trigger a certain number of volts of electricity to punish the experimentee. But immediately afterwards, they get a dopaminergic treat, or an oxytocin-boosting hug. If the experimentee rejects the offer, then we know that X units of pain is not worth Y units of pleasure/contentment. This may help us build the proper weights into our calculations.
But there is a further issue of justice here. Is it just to punish Bob so that Suzie can get pleasure? Even if the math balances out at the level of brains, why should Bob’s wellbeing be sacrificed for Suzie’s pleasure? The desire for justice is also a need in our brains. Perhaps all humans have 10 units of desire for justice to be implemented around them. Perhaps punishing Bob really does deal 10 units of pain, yet gives 100 units of pleasure to Suzie. While this transaction would give the world an additional 90 units of pleasure, it would also give every human aware of this injustice 10 units of pain in their justice cortex. If 9 people are aware of this injustice, then all of the additional benefits will be cancelled out by the suffering endured by brains that are appalled by this injustice. So, by including all values into meta-wellbeing, Sam Harris is able to synthesize the maximization of pleasure with the need for justice.
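This justice offset is just one more term in the sum. A toy sketch, using only the hypothetical numbers from the paragraph above:

```python
def net_meta_wellbeing(pleasure, pain, aware_observers, justice_pain_each):
    """Net pleasure from the action, minus the justice-related pain
    felt by everyone aware of the injustice."""
    return (pleasure - pain) - aware_observers * justice_pain_each

# 9 aware observers, each suffering 10 units in their "justice cortex",
# exactly cancel the net 90 units of pleasure from punishing Bob.
result = net_meta_wellbeing(pleasure=100, pain=10,
                            aware_observers=9, justice_pain_each=10)
```

Once a tenth person learns of the injustice, the total goes negative: under this accounting, injustice scales with how widely it is known.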
While most people prefer pain avoidance over pleasure, in economics, prospect theory finds that when evaluating risky prospects, people’s judgments are systematically biased; for example, they can overweight small chances of large rewards. With neuroscience, we can measure the difference between psychological preferences and physical preferences. It may be that humans have a psychological preference for risky rewards. But when the risk backfires, they will be dealt a more physical blow, perhaps to their wallet. I assume that we will find that the psychological pain of the actual financial loss will be much more than the psychological pleasure of making a risky decision for a reward. I assume that we will find it the case that the way to maximize wellbeing is to advise people to not make risky decisions for dubious rewards, since the short-term excitement does not justify the long-term hit against wellbeing.
Future vs Present
Meta-wellbeing would want to balance itself over time and across populations. An attempt to understand the neuroscience of wellbeing will help our decisions more accurately maximize meta-wellbeing. Future effects are often discounted in the finance world as a way to represent the time value of money. Perhaps a similar discount method can be found within our brains for the purpose of understanding the time value of wellbeing. We are always making choices between pleasure now and pleasure later. The neurotransmitters must be battling in their intensities in order to determine which option is more desirable. A careful study of these dynamics will no doubt give us insight into how to maximize meta-wellbeing over time. For example, the brain is most likely attempting to estimate the value of future wellbeing by estimating a probability for the wellbeing to manifest itself, and the intensity of that wellbeing over a certain duration of time. Some sort of present value function would estimate how good a future reward might be. In order to get the future reward, we might have to sacrifice pleasure in the present. If pleasure in the present is more intense than the future reward, it makes sense to choose the short-term reward. If the long-term reward is of greater intensity, then we are obliged to choose it in order to maximize our wellbeing. But sometimes our probability estimations are wrong. Sometimes our wellbeing-intensity-over-time estimations are also wrong. Getting better measurements of how wellbeing is affected can help us be more accurate in choosing the answer that maximizes wellbeing.
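The present-value analogy can be made concrete by borrowing the standard discounting formula from finance. This is only a sketch; the discount rate, probability, intensity, and duration figures are purely illustrative assumptions, not claims about how any actual brain computes:

```python
def present_value_of_wellbeing(intensity, duration, probability, discount_rate, delay):
    """Expected wellbeing (intensity x duration), weighted by its probability
    of manifesting, then discounted back over the delay as in finance."""
    expected = probability * intensity * duration
    return expected / (1 + discount_rate) ** delay

immediate_pleasure = 50  # units available right now (hypothetical)

future_reward = present_value_of_wellbeing(
    intensity=10, duration=12, probability=0.8, discount_rate=0.05, delay=2
)
# 0.8 * 10 * 12 = 96 expected units; discounted two periods at 5% this is
# roughly 87, so the future reward still beats the immediate 50 units.
```

Note how the comparison flips if the probability estimate drops: at `probability=0.4` the discounted future reward falls below 50, and the short-term pleasure wins, which is exactly the kind of estimation error the paragraph above describes.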
Duration vs Intensity
One moral dilemma facing utilitarian philosophies is the issue of how to compare 1) a long period of time with mild pain, with 2) a short period of time with intense pain. I believe that the principle of voluntarism can help us philosophically balance wellbeing maximization functions against intensity-over-time dilemmas. If the options are described well enough, we can often form a gut opinion about which option is better. Our memory system probably catalogues pain experiences in our past with the goal of optimizing our desire to not get hurt again. If a person has experienced a long-term pain, their brain will probably assign a trauma intensity score to that memory that combines the length of time with the intensity. Similarly, our brain probably assigns a trauma intensity score to short intense pains as well. When given the option to incur pain for a benefit, our brain must begin weighing the pros against the cons. When our brain makes that voluntary decision, it must weigh the trauma intensity score against the perceived benefit score. If the benefit outweighs the cost, then we might voluntarily take on the pain. By learning how neuroscience works at a deeper level, we might discover how our brain records these trauma intensity scores that conjoin time with intensity. This will help us understand which path is worse for wellbeing.
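One way to make the "trauma intensity score" idea concrete is a simple scoring function. This is speculation dressed as code: the scores, durations, and the idea of an intensity exponent are all hypothetical, meant only to show how a single parameter could encode how much the brain privileges intensity over duration:

```python
def trauma_score(intensity, duration, intensity_exponent=1.0):
    """Hypothetical memory score combining pain intensity and duration.
    An exponent > 1 lets intense pain count disproportionately."""
    return (intensity ** intensity_exponent) * duration

mild_long = trauma_score(intensity=2, duration=100)   # long, mild pain: 200
sharp_short = trauma_score(intensity=9, duration=10)  # short, intense pain: 90

# With a linear score the long mild pain looks worse, but an exponent of 2
# flips the ranking: 2**2 * 100 = 400 versus 9**2 * 10 = 810.
```

Which exponent (if any) the brain actually uses is exactly the kind of empirical question the neuroscience described above would need to settle.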
Ripple Effects (Small Probabilities)
The butterfly effect is the idea that one action can trigger a chain reaction of effects until something unpredicted occurs. When making decisions in a complex landscape, it is hard to accurately measure all of the potential consequences of different actions. There is probably a bias in the brain for ignoring low probabilities. But, when we make population-level decisions, low probabilities become important. A disease that only kills 1% of people might be an ignorable amount of risk at the individual level, but perhaps not ignorable at the level of the population. Understanding how wellbeing is affected by these issues can give us a meaningful understanding of how damaging pandemics can be at the level of the population, despite our natural intuitions not really knowing how to calculate this. Further, understanding how policies like lockdowns, vaccines, or masks affect wellbeing can help us balance the equation properly.
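The individual-versus-population arithmetic is simple but worth spelling out; the population figure below is an arbitrary assumption chosen only for illustration:

```python
fatality_rate = 0.01        # the 1% individual risk from the example above
population = 330_000_000    # hypothetical national population

# Individually: a 1-in-100 risk that many people shrug off.
# At the population level, the same rate implies millions of expected deaths.
expected_deaths = fatality_rate * population  # about 3,300,000
```

This is the gap between intuition and scale: the brain that comfortably ignores a 1% personal risk has no native machinery for feeling what 3.3 million expected deaths means.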
Stable Effects vs Bouncing Effects
Perhaps between two options, one option will provide a mild amount of constant wellbeing (being single), and one option will provide an intense amount of volatile wellbeing (being in a relationship). Again, voluntarism is one way to help us figure out which option maximizes wellbeing. If someone knows that a relationship might be an emotional rollercoaster, but they still choose it, to the extent that they are accurate in measuring their future wellbeing, that choice is what maximizes wellbeing for them. Neuroscience can only make these issues even more clear.
Rights vs Consequences
A common moral dilemma used to poke holes in consequentialism is the idea of forced organ donation for the collective good. It is posited that one healthy individual can be sacrificed and their organs donated to a wide variety of people, essentially saving the lives of many at the cost of one. If this is a beneficial consequence, then consequentialism may feel justified in forcing this transaction. But the philosophy of rights (deontology) enforces a type of voluntarism, where people cannot be forced to harm themselves; they must donate organs of their own volition. Creating a society that randomly violates the right to life harms the emotional need for deontology within every member of that society.
When assessing collective meta-wellbeing, the psychological measurement of how much we value rights must be measured and then offset by the benefit of consequentialism. When the psychological value of a right is low, but the consequence of violating it is high, we are justified in violating those rights under an ethic of collective meta-wellbeing. For example, the right to not wear a face mask during a pandemic is a right with low psychological value. The benefit of reducing a percentage of the collective suffering and death during a pandemic is a benefit that far exceeds the psychological value of the right to not wear a mask.
Revenge, Punishment and Justice
Understanding neurology can help us see how different systems of reinforcement affect behavior and hence affect wellbeing. Perhaps certain types of punishments are justified if they have the effect of disincentivizing behavior that harms collective meta-wellbeing.
Rights / Ownership
Rights are just rules that we follow, because we believe that those rules make the world a better place. Ownership is the right to own property. It’s a universal human value to respect one another’s property. This is no doubt a function of evolutionary game theory. Since property is necessary for survival, fighting over property is no doubt a risky game that harms survival. When we respect ownership of property, it promotes positive-sum industry over zero-sum competition for resources. No doubt the neuroscience of meta-wellbeing will only confirm how a life of looting is less preferable to a life of productivity, both at the individual level and at the level of the society. But it is also conceivable that rights to property can become harmful to collective meta-wellbeing. If taken to the extreme, one could conclude that the government doesn't have the right to violate ownership of property when collecting taxes or seizing property needed for collective wellbeing via the law of eminent domain. Perhaps to the extent that ownership harms collective meta-wellbeing, these rights should be restricted.
PHILOSOPHIC MORAL DILEMMAS – is meta-wellbeing the answer?
Scapegoating
Utilitarianism often struggles with questions of harming the innocent to benefit the masses. Scapegoating is an example of this in that one innocent person or group can be blamed for a set of problems, and then punished. This scapegoating is a method to ease the anxiety and rage of the masses. While there might be a benefit to the wellbeing of the masses, there is a harm done to meta-wellbeing in that everyone who cares about justice will be offended by scapegoating. And everyone’s meta-wellbeing will be harmed with increased anxiety in the knowledge that next time it might be them or their group that gets scapegoated. This type of utilitarian dilemma is resolved with rule-based utilitarianism by recognizing that violating the rule of not punishing the innocent harms wellbeing more than the benefit done to the masses via scapegoating. This is how the synthesis between consequentialism and deontology can be achieved.
Trolley Problem
Trolley problems are very similar to scapegoating moral dilemmas, in that there is a choice between intentionally violating the right to life of a minority in order to save a majority from their fate. The standard formulation of the trolley problem is that the trolley is set to run over five individuals tied to the track. The bystander has the ability to flip a switch to redirect the trolley away from the five and towards one individual tied to the other track. This moral dilemma elucidates the tension between deontology and consequentialism. Which is more important – the one individual’s right to life (rule-based morality) or the wellbeing of the five (simple wellbeing-based morality)? Is the action of murdering one individual more heinous than inaction despite being able to save five individuals? Often when people evaluate trolley problems, individual moral biases come to the forefront. People with deontological moral values want to justify why they refuse to help the five. They might appeal to the character of the five on the track. Are the five people good citizens or are they a bunch of criminals? How did they get there in the first place? Did they make foolish decisions and now deserve to reap the consequences of their actions? When someone appeals to the character of the individuals involved, they seem to be making an argument that “if the five individuals are evil, letting them die is good for collective wellbeing”. When they appeal to the choices of the five individuals, they seem to be making an argument that “if people make bad choices, they deserve to suffer the consequences because allowing karma to operate means that the world is a more just place.” Appealing to making the world a more just place highlights the fact that their brain enjoys the idea of a just world. A just world is what maximizes meta-wellbeing for them. Killing off the one to save the five violates this justice.
In the end, the deontological rule to not harm the innocent one seems to be a type of meta-consequentialism – desire for the consequence of a world based on rules that certain brains like. A science of morality involved in studying meta-wellbeing at the level of the brain can help us find what rules are beneficial and how we can maximize the benefits of rules in tandem with maximizing the benefits of consequences. Perhaps the right to life is such a powerful instinct in the brain that it is too costly for society to violate this rule for short-term benefits.
Wealth Redistribution
The legend of Robin Hood is a narrative about the ethic of redistributing wealth from the rich to the poor, even if by force or theft. Some people claim that the legend is about an ethic of recovering unjustified taxation from the government. There is no official myth, so it is largely open to interpretation, as the rich and the government were often the same people in the Middle Ages. A consequentialist approach to morality can justify redistribution because the poor need wealth more than the rich. An ethic of simple wellbeing maximization might agree with Robin Hood. But meta-wellbeing includes deeper layers of wellbeing than just superficial consequences. Meta-wellbeing includes the drop in satisfaction from living in an unstable society where wealth can be redistributed somewhat arbitrarily or tyrannically. Meta-wellbeing includes the satisfaction of living in a society that respects your evolutionary need for the right to ownership of property. Meta-wellbeing respects the idea of voluntarism – in that there is more wellbeing when wealth is redistributed voluntarily as opposed to via force. But meta-wellbeing also recognizes that wealth inequality harms society, so certain tyrannical measures might be justified at the extremes. While some may say that taxation is a type of forceful wealth redistribution, meta-wellbeing would probably recognize that a nation without cooperation would provide less wellbeing than a nation where resources are pooled to finance collective goods. Since we all benefit from taxation, existing within a nation is basically expressing willingness to benefit from the collective goods produced by this nation, so it is everyone’s duty to help fund these collective goods that they want to receive.
Euthanasia
One could argue that people who suffer from horrible diseases or disabilities experience negative wellbeing and that their death would reduce their suffering and increase collective wellbeing. For me, this circles back to the principle of voluntarism. If people are willing to die, death must be worth it for their wellbeing; if they are not, there must be a net positive in wellbeing for staying alive. Hence, an ethic of meta-wellbeing would ensure the right to choose to be euthanized, ideally painlessly.
Culling the Herd (Eugenics)
An ethic of simple wellbeing might conclude that killing off everyone who suffers from mental illness, disability, poverty, or disease is the right thing to do. Not only does killing these groups supposedly improve their wellbeing by ending their suffering, it also reduces the economic burden on the state, which would otherwise take resources from the healthy populace to care for them. Additionally, culling the herd can be framed in evolutionary terms as purifying the gene pool, ensuring that the next generation of citizens is born only to the healthy, so that future generations suffer less. Proponents might argue that these benefits justify the state in violating the principle of voluntarism and eliminating these groups without their consent. But like the other moral dilemmas, this largely fails to look at the big picture of how such policies impact the world. You have to think about the anxiety created when people realize they could be the next ones to be culled. You have to think about how disgusting and horrifying it is to live in a world where the innocent can be murdered by the state. The negative emotion produced by these policies can be argued to far outweigh the potential benefits. Understanding the brain can only make this clearer.
Tragedy of the Commons
The tragedy of the commons is a class of problems involving public resources. When no one owns a public resource, individuals will not treat it with the respect it deserves; they will use and abuse the resource until there is nothing left for others to partake in. For example, if a lake is open to all, the lack of ownership might lead individuals to fish it until no fish remain. When a government or private entity takes ownership of the lake, it will care about the lake's future value and implement rules that preserve that value over time. Collective meta-wellbeing can only be maximized if the value of the "commons" is preserved over time. If the resource gets depleted, future wellbeing will be severely reduced.
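The dynamic can be made concrete with a toy simulation (every parameter here is hypothetical – the growth rate, carrying capacity, and catch sizes are invented purely for illustration). An open-access lake, where the combined greedy catch exceeds the lake's ability to regrow, is compared with a managed lake under a sustainable quota:

```python
# Toy sketch of the tragedy of the commons (all parameters hypothetical).
# A fish stock regrows logistically each year; under open access the total
# catch exceeds sustainable regrowth, while a managed lake caps the catch.

def simulate(stock, years, harvest_per_year, growth_rate=0.3, capacity=1000.0):
    """Return the stock remaining after `years` of a fixed annual harvest."""
    for _ in range(years):
        stock += growth_rate * stock * (1 - stock / capacity)  # logistic regrowth
        stock = max(0.0, stock - harvest_per_year)             # then the catch
    return stock

open_access = simulate(stock=500.0, years=50, harvest_per_year=80.0)  # greedy catch
managed     = simulate(stock=500.0, years=50, harvest_per_year=40.0)  # capped quota

print(f"open-access stock after 50 years: {open_access:.0f}")
print(f"managed stock after 50 years:     {managed:.0f}")
```

With these made-up numbers, the open-access lake collapses to zero within a few decades, while the quota-managed lake settles near a stable equilibrium – the "future value" that an owner has an incentive to preserve.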
Supererogatory Actions
Obligatory actions are actions you have a moral duty to perform. Supererogatory actions are good actions that go above and beyond the call of duty. Failure to perform an obligatory action would be immoral; failure to perform a supererogatory action is morally neutral. Acts of heroism mostly fall into this category of "good actions that no one would blame you for not being brave enough or virtuous enough to perform." Getting tortured for a loved one, dying for a loved one, taking on great risk or financial distress for a loved one – all of these are ways we can do good things that are not required. Under consequentialism, one might expect that people have a duty to perform heroic actions because they produce the best consequences for the collective. Meta-wellbeing is more robust than consequentialism, so we can factor in the anxiety produced by a society that morally requires everyone to be heroic. We can also factor in our instinct not to blame people for failing to be heroes. If people care more about giving leniency to cowards than they care about consequences, then within meta-wellbeing a world based on leniency is more valuable than a world based on forced heroism. Neuroscience will no doubt give us ways to measure these value systems in the brain for more accurate comparisons.
Moral Aggregation Issue
One common critique of utilitarianism is the moral aggregation issue. Utilitarianism arguably promotes the idea that pleasure can offset pain; if so, one individual's suffering can be offset by pleasure to other people, and despite violating justice this would seem worth it for the collective. Weak negative utilitarians respond that suffering carries a much higher weight than pleasure. But the critic might reply with the following example. Suppose that on a certain planet, the electricity to run television for the masses must be extracted from the suffering of individuals: one person must be tortured so that billions across the planet can watch TV. Under weak negative utilitarianism, as the pleasure grows in size, eventually there is enough of it to justify the torture. Yet this conclusion doesn't sit well with our natural moral intuitions. When we appeal to our intuitions, we are appealing to values we care about within meta-wellbeing. A strong negative utilitarian might solve the dilemma by saying that pleasure never justifies suffering. But that leads to a different unwelcome conclusion: it would be morally wrong for parents to make their kids suffer through homework and piano lessons. This path violates our moral intuitions as well. When we zoom in to the level of neuroscience, we will see more clearly how different factors build our moral intuitions. We will see how the intensity of suffering is connected to the benefits it produces, how voluntarism and consent factor in, and how justice and karma factor in. Perhaps mandating that people endure the discomfort of face masks during a pandemic is an acceptable level of imposed suffering; perhaps electric shocks imposed by the state are not. Our moral intuitions find cruel and unusual punishment morally atrocious.
Perhaps, as suffering increases in intensity, our moral aversion to it grows exponentially. So, the intense suffering of one individual surpasses the minor pleasure of the masses due to this exponentiation. Only by optimally satisfying all of the values in our brains can we fully follow our moral intuitions in maximizing meta-wellbeing.
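This exponential weighting can be sketched as a toy calculation (every number and the weighting function itself are hypothetical – invented for illustration, not taken from neuroscience). If moral weight grows exponentially with the intensity of suffering, one person's torture can outweigh trivial pleasure for billions, while a child's mild homework suffering is still outweighed by its benefit:

```python
import math

# Toy model (all numbers hypothetical): the moral weight of suffering grows
# exponentially with its intensity, while mass pleasure aggregates linearly.

def moral_weight(intensity, k=2.0):
    """Exponential aversion: weight = e^(k * intensity) - 1 (zero at intensity 0)."""
    return math.exp(k * intensity) - 1

torture = moral_weight(10.0)          # one person, extreme suffering (intensity 10)
tv_pleasure = 5_000_000_000 * 0.001   # five billion viewers, tiny pleasure each

homework = moral_weight(0.5)          # one child, mild suffering (intensity 0.5)
education_benefit = 10.0              # long-term benefit to that same child

print(torture > tv_pleasure)          # extreme suffering outweighs trivial mass pleasure
print(homework < education_benefit)   # mild suffering is outweighed by its benefit
```

The exponent does the work: at intensity 10 the weight dwarfs the five million units of aggregate TV pleasure, while at intensity 0.5 the weight stays small enough for the benefit to win, matching both intuitions at once.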
In a similar vein, the "mere addition paradox" also addresses moral aggregation, but with the valence flipped. The paradox compares the great happiness of the few (option A) with the "barely surviving on potatoes" happiness of vast masses of people (option Z). The repugnant conclusion is that we seem forced to say that an enormous number of barely surviving people is better than a small number of deeply happy people.
But the repugnant conclusion is only forced upon us when we aggregate happiness linearly. When happiness scales exponentially in intensity, we can find more value in the great happiness of the few. Neuroscience will eventually let us measure how happiness actually scales in intensity, enabling more accurate trade-off calculations.
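The same exponential move can be sketched for the mere addition paradox (all numbers hypothetical, chosen only to illustrate the shape of the argument). Linear aggregation lets a vast, barely happy population Z outscore a small, deeply happy population A, while exponential scaling of happiness intensity reverses the verdict:

```python
import math

# Toy comparison for the mere addition paradox (all numbers hypothetical).

def value(happiness, k=2.0):
    """Exponential scaling of happiness intensity: value = e^(k * happiness) - 1."""
    return math.exp(k * happiness) - 1

pop_a, happy_a = 1_000_000, 10.0        # option A: a few deeply happy people
pop_z, happy_z = 100_000_000_000, 0.01  # option Z: vast masses barely surviving

linear_a, linear_z = pop_a * happy_a, pop_z * happy_z
exp_a, exp_z = pop_a * value(happy_a), pop_z * value(happy_z)

print(linear_z > linear_a)  # linear aggregation yields the repugnant conclusion
print(exp_a > exp_z)        # exponential scaling prefers the deeply happy few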
Consequences vs Intentions
If all that matters is good consequences, then we might be led to the unintuitive conclusion that there is nothing bad about a failed terror bombing, since the consequence was benign. Yet all humans naturally intuit how important intent is. While simple-minded consequentialism focuses only on the immediate consequences of actions, meta-wellbeing includes factors like intent. Intent represents a propensity to act. Even if a terrorist fails in his first attack and the consequences are fine, he still represents a moral problem, since his intent implies a propensity to harm others in the future. That future harm is something humans care about and, hence, a factor within meta-wellbeing. Policies that lock up people with bad intentions help maintain collective wellbeing into the future.
Check out this link for a discussion on the ethical dilemma of killing chickens – Discussion on the Validity of Sam Harris's Moral Landscape | TranscendentPhilosop
Meta-Wellbeing Covers All Outliers
It seems as though all of our moral intuitions are geared toward either short-term or long-term wellbeing. We know we have violated a moral intuition when we sabotage short-term or long-term wellbeing. Conflicts between moral theories can often be boiled down to a preference for parsimony over robustness. Human psychology is most likely a combination of many moral tastebuds, and a moral action is one that satisfies all of these moral principles in an optimal way. Whenever a philosophy reduces morality to one factor – pleasure, reduction of suffering, or rules – we find that we can poke holes in it by positing an outlier example that violates a different moral tastebud. Sam Harris's attempt to include all values within wellbeing ends up being the most robust way to include all of our moral tastebuds. This meta-wellbeing ends up being a philosophic synthesis of all theories of morality.
META-ANALYSIS CONCLUSION
So far, we have applied an ethic of maximizing collective meta-wellbeing to simple moral issues, complex moral issues, religious moral issues, trade-offs, and philosophic dilemmas. After completing this exercise, it seems apparent that not only is this ethical system coherent, it also addresses moral issues better than other moral theories. It is largely consistent with our natural moral intuitions across a variety of circumstances.
Check out Part 3 here:
REFERENCES:
[1]