If causal determinism is correct –
1. “Then everything that happens couldn't have happened any other way.”
2. “Then beliefs about everything are strictly determined.”
3. “Then moral responsibility would also technically be gone.”
4. “Then why do anything at all? Telling someone that they ought to do this or believe that would be meaningless, because the notion of choice is gone and is an illusion.”
“Then everything that happens couldn't have happened any other way.”
‘Determine’ has at least two meanings – “to be caused” (causation) and “to be ascertained” (prediction). Chaos theory displays a type of unpredictable determinism – a system where every step is caused by the prior step, yet future states are practically impossible to predict because tiny differences in starting conditions compound. It does not follow from things being determined that “everything that happens couldn't have happened any other way”. It means that in order for things to happen a different way, they need to be caused to happen a different way. It emphasizes the importance of causation. Causes can change the world! And you are a link in a chain of causes that has an impact on everything downstream of you! Who you are, what you think, how you feel, what you believe, what you like and dislike, your moral emotions – these are all extremely relevant to what type of cause you will be in this chain of events. The fact that the type of person you are was determined by prior causes (like your genes and environment) doesn’t change how important you are as a cause of future effects.
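The “caused but unpredictable” point can be made concrete with a standard toy model from chaos theory, the logistic map. This is an illustrative sketch of my own (not from the original text): every step is fully determined by the previous one, yet two starting points that differ by one part in a billion end up bearing no resemblance to each other.

```python
def logistic_step(x, r=4.0):
    """One fully deterministic step: x_next = r * x * (1 - x)."""
    return r * x * (1 - x)

def trajectory(x0, steps, r=4.0):
    """Iterate the map from x0, returning every state along the way."""
    xs = [x0]
    for _ in range(steps):
        xs.append(logistic_step(xs[-1], r))
    return xs

# Two runs whose starting points differ by only one billionth:
a = trajectory(0.400000000, 60)
b = trajectory(0.400000001, 60)

early_gap = abs(a[5] - b[5])
late_gap = max(abs(a[i] - b[i]) for i in range(40, 61))

print(early_gap)  # still microscopic a few steps in
print(late_gap)   # macroscopic: the two runs have fully diverged
```

Nothing random happens anywhere in this code, yet long-run prediction requires knowing the starting state to impossible precision – determinism without predictability.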
“Then beliefs about everything are strictly determined.”
What are beliefs? Beliefs are things our brains create. Why do our brains create them? Because they are useful. If we believe it will rain, we can prepare. If we believe a harsh winter is coming, we can prepare. Our brains are prediction machines that constantly analyze patterns and develop beliefs based on those patterns. So, when we say, “our beliefs are determined” what we should mean is, “our beliefs are caused by the data that our pattern recognition faculties process.” This interpretation of beliefs should not be concerning.
I believe the interpretation you are jumping towards is something like, “all of our beliefs are inevitably controlled by forces outside of us, puppeteering us”. This type of conclusion is only true in a zoomed-out way – as technically everything is puppeteered by the laws of physics. If we zoom in, your brain is very relevant to the production of beliefs and how your brain works has a causal effect on future beliefs. If you train your brain to be a better belief production machine, then you can get better beliefs. There is no direct demonic force manipulating your brain to have false beliefs. You are very capable of generating true beliefs despite each belief being caused by the data processing that came before.
Pardon the following anthropomorphization of evolution, it just makes it quicker and easier to communicate about complex ideas when we talk about evolution this way.
Some people argue:
P1: Evolution causes your beliefs.
P2: Evolution doesn’t care about truth.
C: Evolution causes you to have beliefs that are independent of the truth.
The problem with this view is that P2 is false – evolution DOES care about truth. How can a creature survive without knowing where it is? Or where it needs to be? Or where the food is? Or where the predator is? Or what quantity of food? Or what quantity of predators? Proper action requires a true understanding of the situation.
But evolution is not monomaniacal in its focus on truth. It cares about utility more than truth. To the extent that false beliefs provide evolutionary utility, evolution may be motivated to deceive you. Religion is a perfect example of this. We seem hard-wired to believe in magical invisible beings that punish wrongdoers – a belief that scares us into more social cohesion, and hence more evolutionary utility.
So it is valid to be concerned about false beliefs, but we should not fall into epistemic nihilism over this issue. Evolution has provided us with enough tools to debunk our false beliefs if we so desire. And yes, your desires matter. You are the agent that will develop yourself. Evolution has given you the meta-desires that you need to survive. You just have to apply those wisely to continue developing in a good direction.
“Why?” you continue to ask. You are still confused as to why your feelings matter under determinism. The reason is that your feelings are a part of the causal chain. You ARE determinism. If you feel motivated, then that feeling is a deterministic cause that will get you to do something. If you feel demotivated, then that feeling is a deterministic cause that will get you to NOT do something.
But why does it matter? Do pain and pleasure matter? Do suffering and wellbeing matter? Mixed in with the chains of deterministic causes are feelings. One feeling leads to another feeling. Do you want to cause suffering with your inaction? Or do you want to cause wellbeing with your actions? Evolution has already given you your value systems. You already know that you don’t like it when you suffer, and you don’t like it when others suffer. Your thoughts, feelings, and beliefs are important causal links that will lead to either more suffering in the world or more wellbeing. You already know you care. You already know you must take your thoughts, feelings, and beliefs seriously, so you can be the most effective link in the deterministic chain there is.
“Then moral responsibility would also technically be gone.”
Morality would disappear under determinism? Impossible. You already have your moral systems determined within you by evolution. They aren’t going anywhere. Moral responsibility is also an aspect of your evolutionary drive. Evolution figured out that tit-for-tat was an effective game theoretic strategy for enforcing cooperation for better social cohesion, and hence evolutionary utility. Sure, religious moral responsibility doesn’t make any sense, because an infinite punishment in hell cannot fairly be applied to a finite sin that was committed by a creature evolutionarily designed to sin. In the cosmic sense, it isn’t their fault that they committed their crimes. There is no value in punishing them at the cosmic level. But at the practical level, at the level of preventing crime, at the level of evolutionarily punishing harmful behavior, punishment makes a lot of sense. They may not have moral responsibility in the free will sense, but they can have moral responsibility in the agentic sense, and also in the causal sense.
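The tit-for-tat claim above comes from iterated game theory, and it is easy to sketch. The following is my own minimal illustration (the strategy names and the standard prisoner's-dilemma payoffs are assumptions, not from the text): tit-for-tat cooperates first, then mirrors whatever the other player did last round – exactly the “return benefit for benefit, punish harm” logic described here.

```python
# Standard prisoner's dilemma payoffs: (my move, their move) -> my points.
# "C" = cooperate, "D" = defect.
PAYOFF = {
    ("C", "C"): 3, ("C", "D"): 0,
    ("D", "C"): 5, ("D", "D"): 1,
}

def tit_for_tat(opponent_history):
    """Cooperate on the first move, then copy the opponent's last move."""
    return "C" if not opponent_history else opponent_history[-1]

def always_defect(opponent_history):
    return "D"

def play(strat_a, strat_b, rounds=10):
    """Run an iterated game; each strategy sees the opponent's past moves."""
    hist_a, hist_b = [], []  # each side's record of the opponent's moves
    score_a = score_b = 0
    for _ in range(rounds):
        a, b = strat_a(hist_a), strat_b(hist_b)
        score_a += PAYOFF[(a, b)]
        score_b += PAYOFF[(b, a)]
        hist_a.append(b)
        hist_b.append(a)
    return score_a, score_b

print(play(tit_for_tat, tit_for_tat))    # (30, 30): mutual cooperation
print(play(tit_for_tat, always_defect))  # (9, 14): one-round sucker's loss,
                                         # then punishment every round after
```

Against another cooperator, tit-for-tat locks into mutual benefit; against a defector, it loses only the first round before punishing every round thereafter. That is the conditioning-by-consequences structure the next two paragraphs describe.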
In the agentic sense, creatures are agents with goals. Agents can learn and be conditioned by punishment and reward. Agents have a social duty to condition themselves to follow the rules of social cohesion. Society only wants to cooperate with people who follow the rules. If you want the benefits of society, then you have to return benefit for benefit. If agents hurt the society, then the society will take revenge on them and punish them. That punishment may psychologically condition them to follow the rules next time. If a person is so deranged that punishment cannot condition them, then agentic punishment doesn’t make sense. They have lost their agency due to mental illness.
In a causal sense, creatures that harm other creatures are the nexus of a chain of causes that bring harm. Even if they have no free will, they are still the source of the harm. Evolutionary forces motivate us to defend ourselves from sources of harm. We don’t care if the harm has free will or not, we want to reduce it. Tornados have no free will, but we want to avoid them. A criminal with biological tendencies to commit crime is the human version of a tornado. We must lock them up so that they don’t continue their path of destruction. Maybe they can be conditioned into a harmless tornado. Or maybe they must be locked up for life.
To reemphasize this point, if we had AI robot creatures running loose in the wild, autopoietically reproducing, we would have a problem on our hands if they randomly murdered people. Whether or not the robots had free will would not be central to determining how to deal with these murders. We would want to know the chain of causes. Was there a defective robot? Or a group of defective robots? Or is the water in an area causing the defects? Or are they developing innate desires for murder? We would have to take these scenarios seriously, and apply punishments to the robots as needed to prevent further harm. None of this would require free will.
“Then why do anything at all? Telling someone that they ought to do this or believe that would be meaningless, because the notion of choice is gone and is an illusion.”
First of all, it is impossible to not do anything at all! You don’t have free will! You will get itchy and scratch yourself. You will get bored and walk around. You will get hungry and grab some food. There is no such thing as “just don’t do anything at all” because you are not a creature that lacks value systems. You have innate things that you care about and you will pursue those things because of the causal drives within you.
Second of all, instead of being demotivating, determinism should be extremely motivating! You should understand this as hacking the source code of life. Instead of accidentally getting results in the world, you now understand that results come via causes. And if you create the right causes you can obtain the effects you desire. You already have your desires. Now you must use your knowledge of determinism to obtain them! Just because your desire to make the world a better place comes from the laws of physics doesn’t make it any less valid. Go, accomplish your goals, and make your causal impact on the world. You don’t have any other choice.