Interlocutor:
Sam Harris really, really, really wants to bridge the is/ought gap, and can’t and won’t.
Seth:
My view on the is/ought gap is that Hume highlights the fact that every time someone goes from an is to an ought, they are making tacit assumptions about what we should value. There is a meta-value that is being implicitly assumed. Is/ought jumps only make sense when people share the same meta-value. For example, god exists (is), therefore we ought to obey him (ought). This precept only makes sense when the meta-value of "we value God's will because it is always morally good" is mutually accepted.
The impression I get from Sam's book is that he wants to define morality in terms of wellbeing, and thereby set the cultural meta-value so that wellbeing is assumed to be the goal. Once this meta-value is set, we can jump from is to ought. Because technically it isn't a jump from is to ought; it's a meta-ought + is = ought.
Sam Harris claims we make meta-value assumptions all the time. In the science of medicine, for example, it is assumed that the minimization of pain is the value guiding all medical is/ought jumps. So we shouldn't be shy about morality; we should just get our meta-values set and proceed.
How do you feel about this framing of Sam's argument?
Interlocutor:
Yes, it does. It jumps the is/ought gap before jumping the is/ought gap.
It’s the same workaround people do to try to make 2+2=5: “Well, if you let me redefine what two is...”
The Sam Harris crowd normally sneers so hard at Peterson for saying “truth” has multiple meanings, when obviously, literarily and historically, it does. Then they allow Sam free rein to redefine what axioms and spirituality are, all to construe things so he can pretend he’s not playing the same games that religious thinkers are.
“Well-being” is already loaded with moral assumptions. And I know Sam tried to head that off at the pass, and he fails there too.
Seth:
I would like to understand the problem with his angle of analysis, but I'm not seeing it. Like, everything requires axioms. What is philosophically wrong with using axioms and encouraging others to adopt the same axioms?
Interlocutor:
I didn’t say it’s wrong. Saying something is “wrong” is making an ought. Philosophical axioms that you need to convince others into accepting are “ought” making. Assuming humans should exist at all jumps the is/ought gap.
Morality making, no matter what, requires a moment where you alone, or a group at large, chooses to make a jump into assuming things should be or go a certain way rather than just examining how they are or do behave.
Seth:
Right, so we could collectively choose between "we ought to obey God" or "we ought to maximize wellbeing" as our meta-oughts. As you said, we have to bridge the is-ought gap whether we like it or not if we are going to have morality. So as a society, we need to figure out the best meta-ought and then run with it. All future is-ought jumps can be performed once our axiom is accepted.
Interlocutor:
Yes, and that is a process of moral reasoning no matter what. The "is" can only inform. Sam believes empiricism/science, the "is," can actually spell out "the ought" perfectly. It can't.
The endless trolley-car cases are all designed to show how you can't make that jump scientifically. Axioms inherently clash. Basic example (not exactly a trolley-car case but in the same vein): a deontological axiom vs. a utilitarian axiom. Say you are a hardcore vegan. Deontological axiom: no chicken eating, no matter what. Utilitarian axiom: save the most chickens. A devil pops up and puts you and ten other hardcore vegans in a trolley-car-style gambit to test your veganism. The devil places a fried chicken in front of each of you and says: each time you refuse to eat the murdered chicken in front of you, I will kill exponentially more chickens. The hardcore deontologist will not eat any chicken, even until every chicken in the world is dead. The hardcore utilitarian would gobble down the first plate.

When you examine all ten people put through the gambit, the threshold between eating the first chicken to save more chickens vs. always following the rule of absolutely no chicken will be different for each person put on the spot. That number is NOT something you can science. The answer exists in the gut of each. Of course you can get closer to a meta-analysis of what number most people chose. Still, that's a utilitarian angle, and it will leave a large portion of people pissed off about the "scientific" landing point, and they will leave the group to form their own sect of the correct chicken-murder number. Sam and most other people who take the line of reasoning that they can solve this for sure are almost always utilitarian. Best for the greatest good. It's a pretty typical angle for people who think they can autistically sort the world out this way. You don't have to search far, though, to see all the critiques of how utilitarianism can go haywire. Extreme cases where it gets bonkers. I won't list them out here because this will get too long. However, sometimes utilitarianism can seem perfectly acceptable and sensible.
Now, knowing that just those two little axioms can clash, you have Jonathan Haidt's work, along with most people who have ever gotten into moral philosophy, showing that there are numerous possible axiomatic substrates. But Haidt has done some level of clinical work on moral foundations to show that he can measure how they differ in people. Whether that theory perfectly teased out all the foundations or not, it clearly shows humans all come at morality from very different angles. When you realize that just two dimensions of moral foundations, like care/harm-centered vs. loyalty/betrayal-centered moralities, will never even agree on what "well-being" means, let alone exactly how to act in the world, the gap widens.
Now you arrive at Peterson trying to communicate something to Harris that Peterson was always a bit crippled on, because he hedges on whether God exists or not. I believe he only does that because he knows both sides listening would shut off the second he stakes a flag, because that's how tribal humans act, and his goal is to get both sides to listen. Maybe I, as an atheist, can try to explain the point a bit better. The point doesn't depend on whether a God is actually there or not. David Sloan Wilson, Geoffrey Miller, and many other evolutionary psychologists point to this concept. Looking at how you paint it above, it seems you claim that the "god" part must exist in religious morality for it to be valid. If God isn't actually there, what do we say of religious reasoning? It was just fairy tales, the end? No, it was the evolutionary process of morality done by humans half doing science, sorting things out by what seemed to work for "well-being" since long before Sam was asking the question. What we have left is what worked. And our modern-day morality, atheistic or not, was developed through the same processes. An evolutionary process can land on capacity for survival and thriving better than a centralized, galaxy-brained technocratic prescription that lacks the millennia and millennia of trial and error... even if it is using "metaphorical" concepts.
Basic, fast example of that: look at the birthrates of Mormons and Muslims vs. atheists. Across millennia, who survives? Is the future going to be Star Trek, or Islam? Who is breeding? Which one wins? Well, we don't know yet, but whichever does will have survived as the winner the same way past moral paradigms did. The winner will form the future of morality... because it survived. No other reason. Not because it was "right" or True. But because it survived. In fact, the parsing of the word "truth" that Harris had trouble with from Peterson might be given the synonym of survival. Harris and his fans might get irritated with that, but insisting truth only means "empirically true" gets nowhere near encompassing how that word has been used over time. Saying it only means empiricism is just as much of a construal.
These concepts are fleshed out by the pragmatists, like C.S. Peirce and William James. Where do the wildly different world views get sorted out? By evolution. Where do the seemingly postmodern concepts get put into hierarchies? By what survives or thrives (well-being). But the joke is, evolution believers look worse at getting through evolutionary pressures across time than evolution deniers... so far. There are unforeseen things about "wellbeing" that Sam is not yet able to control or anticipate via "science." It's like comparing an evolved eyeball to a modern tech eyeball. Sam is announcing to us that this new tech eyeball is better and we should pluck our eyeballs out for the upgrade. Can we create a better eyeball yet? We are getting good. We still can't match the evolved thing, even with all its flaws and possible deformities. Our science has gotten to a point where it can help inform and assist, or repair, an eyeball. Well, it's similar with morality: we aren't yet at the point of deciding new morals via some technocracy. Science can inform and assist. But when the very smart overlords who believe in such "scientism" gain active, top-down, "science based" control to announce moral change for all us humans, we will suffer the same problems the religious evolutionary processes did: refinement through failure and death. And a lot of it, across time.
Furthermore, there's the agreement part: how do you achieve the "get everyone to agree" part? We can already see in real time how a science/bio-power technocracy isn't uniting people overnight in any way. And the machine would struggle even if it were perfectly honest, well-intentioned, and rigorous (it isn't). Maybe Sam Harris is, but why doesn't he get following and loyalty at the scale of a religion? Because people gel better around moral philosophy that has story, liturgy, ritual, community, and esoteric interest. It's why Nietzsche, trying to make his own morality, came up with the esoteric book "Thus Spoke Zarathustra" instead of doing some long, dry, didactic, patronizing podcast. Nietzsche understood that image-dense storytelling was his only chance to form some real movement and reinvention of morality. Long-form boring moral philosophy is never going to get there. I'm sure your eyes have glazed over during my diatribe; imagine someone who just hates this type of philosophical rambling. That's why Christian imagery has so much in it about uniting the wise man and the shepherd. Nietzsche still failed with his book. Joseph Smith and L. Ron Hubbard seem to have grasped this too and were more successful than Nietzsche, although he did have some success; there have been religions formed around his stuff. Nonetheless, it turns out it's not so easy to invent a uniting religious narrative out of whole cloth. However, there are plenty of modern movements that are starting to figure out the same religious, tribal, unifying paradigms from atheistic standpoints, although they are showing themselves to be as loaded with liturgy, outgroups, scientism, wild metaphysical claims, anti-science BS, etc., as any religion ever.
We get to find out in our times why Nietzsche warned us to watch out for the moment when moral-reasoning humans decided they had defeated God, mastered moral reasoning, and could now control it.
Seth:
"I didn’t say it’s wrong. Saying something is 'wrong' is making an ought." - I think that the identification of a false premise or internal contradiction would be enough to find reason to believe Sam's logic is wrong without implying an ought.
So, I think G.E. Moore's naturalistic fallacy is really relevant for this discussion, basically the fallacy of conflating that which is natural with that which is good. Per your vegan/devil/chicken example, the natural state of collective human psychology in averaging out a response to the devil is not the same as "good". Democratic opinions are not the same as "oughts". This is because it is possible to be wrong about what is good, what ought to be. Similarly, behaviors that evolve out of the furnace of natural selection are not "good" or "oughts" either. They are just natural things.
Regarding the number of chickens to be consumed in the thought experiment you said that the number "will be different for each person put on the spot. That number is NOT something you can science."
Under Sam's paradigm, where goodness is defined in relation to the maximization of wellbeing and wellbeing is defined in terms of brain states, one can theoretically measure the wellbeing of chickens in the wild. For example, let's say that each chicken enjoys '10' units of happiness per year, as measured in some neurotransmitter production. Then we can measure the pain neurotransmitters emitted during the butchering of a chicken for consumption. Perhaps the death of a chicken causes 30 units of pain. We can use neuroscience to measure the relationship between pain and pleasure, in that, given a potential reward of 31 units of pleasure, a chicken might risk 30 units of pain. This means that 31 units of pleasure is preferable to 30 units of pain at the level of wellbeing. Given that brain states allow us to compare competing neurotransmitters, we can conclude that maximizing wellbeing would mean killing 1 chicken in exchange for 4 chicken-years of wellbeing, because the pain of 1 chicken death is -30 wellbeing, but the joy of 4 years of chicken life is +40, netting a profit of 10 wellbeing units.
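Just to make the arithmetic explicit, here is a minimal sketch of that toy wellbeing calculus. The unit values are the made-up numbers from my example above, not anything Sam actually proposes or anything measurable today:

```python
# Toy sketch of the wellbeing arithmetic above.
# All numbers are hypothetical units from the example, not real measurements.

HAPPINESS_PER_CHICKEN_YEAR = 10  # assumed pleasure units per year of chicken life
PAIN_OF_DEATH = 30               # assumed pain units for one chicken's death

def net_wellbeing(chicken_years_gained: int, deaths: int) -> int:
    """Net change in wellbeing: joy of extra chicken-years minus pain of deaths."""
    return chicken_years_gained * HAPPINESS_PER_CHICKEN_YEAR - deaths * PAIN_OF_DEATH

# Killing 1 chicken in exchange for 4 chicken-years of life: 4*10 - 1*30 = +10
print(net_wellbeing(chicken_years_gained=4, deaths=1))  # 10
```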
If you are afraid that this type of wellbeing utilitarianism will go off the rails in examples like "kill one healthy person to distribute his organs to many individuals who need organ donations", Sam Harris admits that certain deontological needs would need to be factored into wellbeing utilitarianism - essentially a type of rule-based utilitarianism. For example, "the right to your organs" is a type of deontological need from the perspective of psychological wellbeing. How much wellbeing would be lost if you were constantly afraid that your government would randomly arrest you to murder you for your organs? So, the right to your organs would be a necessary rule for maximizing collective wellbeing, because the cost of the lack of the rule is greater than the benefit to wellbeing by sacrificing rights.
Regarding Jonathan Haidt's work, it doesn't matter if people have different moral systems because natural moral systems don't guarantee that they are good moral systems. For example, just because humans might have evolved racist tendencies doesn't mean those tendencies are good. Sam highlights in his book the religion of the Dobu islanders. Because of their lack of resources on a small island, they evolved an incredibly toxic religion of black magic, assuming that bad luck means your neighbors cursed you, and the more potent the bad luck, the more likely it was your family members, in that relational closeness increases the strength of their black magic. So cooperative tendencies were completely subverted by this religion of internecine hatred. Just because a lack of resources evolved a dog-eat-dog culture doesn't mean that a dog-eat-dog culture is good, specifically when goodness is defined in terms of wellbeing.
So, in my opinion, one of Jordan Peterson's problems is that he is overly optimistic about the goodness of things that come out of evolution. Just because Christianity beat Greek religion in the evolutionary memetic war of religious syncretism doesn't mean that Christianity is morally superior to Greek religion. It means that Christianity is comparatively adaptive - again, naturalistic fallacy to assume otherwise, something JP teeters dangerously close to.
Why might Christianity beat Greek religion in the game of memetic evolution? Perhaps Christianity promotes more child rearing via sexually oppressive memes. Perhaps Christianity's monotheism promotes a more intense cultural unity. Perhaps Christianity's introduction of the theological psychotechnologies of "heaven" and "hell" enables Pascal's-wager-style psychological manipulation towards certain behaviors. Perhaps the Christian narratives of "evil in high places" and "endure your enemies" allowed Christianity to survive hostile cultural environments while maximizing its self-martyrdom unity, maximizing tribal strength. Also, the vilification of doubt - another adaptive psychotechnology to ensure people can't easily defect from the meme tribe. But just because someone has a good set of memes for manipulating others doesn't mean that such manipulation is "good".
For example, if a parent often beats their child until their child becomes an Olympic prodigy, does the end justify the means? At a fundamental level, we always default to measuring wellbeing as the mode for judging an action to be good or evil. Beating a child is a greater cost to the child's wellbeing than the reward of success. But, conversely, parents requiring their children to suffer through piano lessons is considered an acceptable amount of suffering in order to obtain the increased wellbeing discovered in the ability to play the piano.
It could be argued that Islam evolved even more powerful psychotechnologies than Christianity, since their ability to manipulate is stronger. If anyone leaves the religion, death can be an appropriate punishment. This is an incredibly powerful method for maintaining the structure of the meme tribe, but is it good? How much does it cost the wellbeing of Muslims who have to feel imprisoned by their tradition and fear for their lives if they want intellectual freedom?
So, fundamentally, what I would argue is that "wellbeing" ethics are what naturally evolve in the furnace of natural selection. Cultural ethics evolve on top of our wellbeing ethical substrate to modulate social adaptation. Religious ethics evolve as a type of meme virus that hacks into the cultural ethical substrate of the brain and manipulates it into more extreme strategies. More extreme strategies might be successful in a dog-eat-dog sort of way, uniting large swaths of people in religious wars, for example. But that isn't necessarily a good thing. It's not good when a stronger dog bullies a weaker dog.
I would also argue that wellbeing ethics focuses on a longer-term strategy than religious ethics might take. For example, the more inequality in a society, the more likely it is to increase in violence, ultimately converging on war. A focus on collective wellbeing mitigates behaviors that destabilize a population. For example, a tendency to murder invites the population to evolve a tendency towards revenge. This "anti-wellbeing" strategy of murder is self-destructive because it provokes an internecine cycle of conflict that neuters the population's growth. A neighboring population that evolves to care about wellbeing will grow faster than the murderous population because it isn't self-destructing a percentage of its people every year. Religion hacks into the benefits of tribalism, but it doesn't foresee the long-term evolutionary costs of tribalism. For example, Christianity has only been around for 2000 years. That is not a long enough time for it to be purified by evolution. So it might be polarizing a culture for a religious war in 3000 years. This tribal cost can be completely self-destructive. Religions that aren't tribal (polytheistic religions, which have been vetted by evolution longer than monotheistic ones) would be less likely to polarize their people into self-destruction thousands of years later.
So, finally, to address the adeptness of religion for hacking into the psyche, I think that we need to appropriate the methods of religion for the promotion of superior ethics and ideas. Almost like we need a religion of philosophy, a Sunday school of philosophy - so that we can get everyone on board with superior ideas that maximize the benefits of religion and minimize the costs of religion.