Seth Garrett

Intent

Updated: Mar 10, 2022




Punishment

Intent is of prime importance when assessing moral guilt. In legal terms, the intent to commit a crime is called "mens rea." Mens rea is often defined along a spectrum running from unintentional acts through negligent acts to intentional acts. We use intent and motive to determine how evil someone is and, consequently, how worthy of punishment they are. The accusation of intent is a serious allegation because punishment implicitly entails a dangerous claim: the assertion that someone likely deserves to have violence committed against them. Because of the inflammatory nature of intent, we should be careful in asserting it. Perhaps before asserting an intent to do ill, we should ask ourselves, "Am I confident that this person should be punished?" In courts, the standard of confidence for assessing intent is usually set at "beyond a reasonable doubt."


Low Standards for Intent

If people are loose about attributing intent, this logic can easily backfire. Imagine Republicans applying very loose standards for intent to Dr. Fauci: the CDC said we don't need masks; now they say we do. They consequently view it as an objective fact that the CDC is trying to deceive the nation, so they can feel confident in concluding that the CDC is evil and that they no longer need to listen to the CDC on anything. This leads to the obviously disastrous consequence of millions of people being careless about mask use, social distancing, and vaccination, harming the health of thousands, if not millions.


Well, for Fauci and the CDC, if one can get outside the box of Religious-Blue ideology, there is another possible explanation: the science, and the political application of that science, is nuanced and evolving, and therefore the message shifted. If Republicans want to prove an intent to deceive, they need to debunk every other possible explanation for the observed contradictions in CDC messaging. Similarly, if one wanted to prove that Mike Lindell, the My Pillow CEO, had the intent to deceive regarding his supposed evidence of election fraud, one would need to debunk the explanation that he might simply be severely confused about what counts as good evidence.


Inductive Reasoning

When it comes to understanding reality, a fundamental methodology is inductive reasoning. The average person takes many “truths” for granted as if they were solid facts. The idea that the sun will come up tomorrow, the idea that the laws of gravity will not change, the idea that your bones and muscles are reliable each time you take a step – these are all assumptions we make with inductive reasoning. We don’t know that the sun will come up tomorrow, but we have observed a reliable pattern in the past, so we trust that the sun will come up tomorrow like it has in the past. Inductive reasoning is about using patterns of observations to generate informal probabilities to predict the future.


Induct Intent

When it comes to intent, often we rely on inductive reasoning to help us analyze patterns of behavior. If we have negative social encounters with someone, we might sense a pattern of negativity from them, and be able to predict bad intentions. In fact, in broadly analyzing the patterns of human behavior, we might come up with conclusions about whether or not the average person is trustworthy. A naïve person who has never been hurt might incorrectly conclude that because no one has hurt them before, no one will hurt them going forward. Upon realizing that some people are vicious enough to harm them, they might get frustrated because they don’t know if the average person is trustworthy anymore. They have to build a new calculation for the trustworthiness of the average person, and hopefully identify patterns of untrustworthiness to protect themselves going forward.


Information Sources

The world is vast and complex. It is nigh impossible to be an expert in every field. Given that, it is almost impossible NOT to rely on others to inform you about the world. As children we rely on our parents to inform us about the world, and as we get older we rely on teachers, books, newspapers, social interactions, and other forms of media. Each time we trust a source of information, we are applying inductive reasoning to that source. When we are young, we trust our parents to give us accurate information. Our inductive reasoning tells us that "the information my parents gave me in the past was reliable, therefore the information they give me in the future will probably also be reliable." If a parent gives us a piece of information that we later discover to be false, we might adjust our trust in them as a source of information. For example, if your parents' information is reliable 99 times out of 100, then you can have 99% trust in them as a source of information; each failure might decrease that percentage.
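
To make the arithmetic concrete, here is a minimal sketch of the kind of running reliability tally described above. The class name, the parent example, and the 0.5 starting value for a source with no track record are my own illustrative assumptions, not anything prescribed in this post.

```python
# Toy model of trust as a running reliability estimate for a source.
# The class name and the 0.5 "no track record" default are illustrative assumptions.

class TrustTracker:
    def __init__(self):
        self.reliable = 0    # times the source turned out to be right
        self.unreliable = 0  # times the source turned out to be wrong

    def observe(self, was_reliable: bool) -> None:
        if was_reliable:
            self.reliable += 1
        else:
            self.unreliable += 1

    def trust(self) -> float:
        total = self.reliable + self.unreliable
        if total == 0:
            return 0.5  # no track record yet, so stay agnostic
        return self.reliable / total

# 99 reliable claims and 1 failure gives roughly 99% trust in the source.
parents = TrustTracker()
for _ in range(99):
    parents.observe(True)
parents.observe(False)
print(f"Trust in this source: {parents.trust():.2f}")  # 0.99
```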


Epistemology

I am not an epistemic nihilist, nor am I an epistemic relativist. I don't think every narrative has equal truth value or equal moral value. I don't think it's impossible to get to the truth (or to a reasonable degree of confidence in the truth). If you have a good methodology, you can fight the various biases in the information landscape, navigate multiple studies (perspectives) and meta-studies (meta-perspectives), measure things from multiple angles to triangulate on a truth, and test the repeatability of observations over time to narrow in on reliable truths. Getting to the truth is possible, but I do believe it is getting harder and harder for the average person to accomplish. It requires significant subject-matter expertise, philosophical expertise, and time investment, and since time is limited, no human alive is capable of becoming an expert in every field and on every issue.


As adults, facing a landscape of complex geopolitical issues intersecting with economic rivalries, we almost always have to choose a side to trust. Even if we don't have enough information, we still have to choose a side; in fact, not choosing a side is sometimes equivalent to choosing one. For example, in determining whether to vote for a candidate, you have to choose whether to trust a right-wing media narrative or a left-wing media narrative. Perhaps you take a more nuanced approach; then, at the level of each news article, you have to choose whether to trust it or not. No one has the time and resources to investigate every single claim generated. Perhaps the left-wing media claims that Trump is a Russian asset. When voting, you have to choose whether or not to factor that into your vote. Being epistemologically cautious and not factoring it in is choosing not to trust left-wing media. Failing to trust left-wing media when they are right is a problem, because your decisions are not being made on an accurate model of reality. Similarly, if right-wing media claims Biden is a Chinese asset, failing to factor this in when it is true is also a problem. Accurately choosing what we trust or don't trust is an essential component of epistemology.


Bad Faith

Bad faith is a type of deception. For example, in the era of British warfare, if a subdued enemy unit raised the white flag to surrender, but upon approach instead reengaged the British unit with gunfire, its surrender was in bad faith. This indicates that bad faith is not merely deception, but a type of manipulation founded on the desire to harm someone.


Jean-Paul Sartre on Bad Faith (excerpts from the Wikipedia article "Bad faith"):

- "But there is debate as to whether this self-deception is intentional or not."

- "In philosophy, after Jean-Paul Sartre's analysis of the concepts of self-deception and bad faith, the latter concept has been examined in specialized fields as it pertains to self-deception as two semi-independently acting minds within one mind, with one deceiving the other."

- "In his book Being and Nothingness, the philosopher Jean-Paul Sartre defined bad faith as hiding the truth from oneself. The fundamental question about bad faith self-deception is how it is possible. In order for a liar to successfully lie to the victim of the lie, the liar must know that what is being said is false. In order to be successful at lying, the victim must believe the lie to be true. When a person is in bad faith self-deception, the person is both the liar and the victim of the lie. So at the same time the liar, as liar, believes the lie to be false, and as victim believes it to be true. So there is a contradiction in that a person in bad faith self-deception believes something to be true and false at the same time."


Self-Deception

This is my attempt to introspect on my Mormon days and understand how I deceived myself for so long. I think what happens is that the brain builds a reward center around "strength of belief in X," because religion defines virtue as belief; and we want to be good, so our "goodness reward system" gets connected to our "belief system." So, when you obtain information that conflicts with X, your reward center gets very unhappy. When the reward center is unhappy, other areas of the brain try to make new discoveries that can satisfy the needs of the "belief reward circuit." This becomes mental gymnastics. If one can reinterpret the data in a way that satisfies the belief cortex, we get a surge of positive emotion. We also feel like we have tapped into a deeper layer of understanding God.


For example, when I was 13, I was a young earth creationist. I went online to debate people in a science forum. I got schooled hard. Every argument I had was destroyed. I then formed a new interpretation - when God says "created in a day" he is being metaphorical!! A day could mean a long period of time! This allowed me to satisfy my "belief cortex", absorb the new information, and feel more intimately connected to a deeper understanding of God.


Similarly with evolution, I learned how evolution worked from my debates on that forum. And then I realized that evolution was a beautiful natural system for creation - and that if I was a God, I might be inclined to use the tool of evolution for creation as well! Hence, I felt a new intimacy in my understanding of God, and even stronger faith than before!


So, from my own religious experience, it seems like it wasn't bad faith in the sense of deliberately trying to manipulate and harm myself. Rather, it seems more like a function of living in a social environment with a certain hierarchy of values, and manipulating oneself in a self-serving way to be able to continue climbing that social hierarchy.







Tribal-Red Epistemology

To me, it would seem that Trump and China are two modern examples of the “red” ethos within Spiral Dynamics. Red is very power oriented, almost a “might makes right” mentality. Hence, not only do the powerful make the rules, but they also determine what is true.


When Trump's facts get contested, Trump almost never attempts to cite objective evidence to support his claims. He merely dismisses anyone who presents unflattering facts about him as a bearer of "fake news." His propensity to label anything as fake news shows that his strategy isn't meant to be a surgically accurate knife for debunking false information, but rather a blunt-force club to attack anything that attacks Trump. It's less about truth, and more about what supports the powerful.


China similarly blocks any narrative that shows the powerful in a less than pleasant light. Even the "Winnie the Pooh" movie was banned in China for fear that people would joke about how their "president for life" Xi Jinping visually resembles the cuddly plump bear. China unilaterally restricts video game usage and media portrayals of weak men, trying to engender a culture of masculine strength and power.


When trying to assess whether or not "red" epistemology is in bad faith, it is almost a question of values. At the red level, they value loyalty more than truth. So, if you want to call it bad faith you can, but I don't think it is the type of bad faith that harbors evil intentions toward someone. They truly want to make the world a better place, and they think loyalty is the way to do it.


I think it is very possible that those with a "red" mentality employ a level of self-deception similar to what I had within Mormonism. Their value system places "loyalty," "patriotism," "strength," "justice," and "results" above truth. So, when they are confronted with unpleasant information about Trump, they might consider the source of the information "unpatriotic" - evil from Red's perspective. They can then discount information from evil sources. If the information is hard to discount, they might use mental gymnastics to reinterpret it in ways that display Trump's genius at playing 3D chess with the world.


Religious-Blue Epistemology

Religious-Blue people act like they know the truth all the time, but that is an HONEST feeling, not a deception. For it to be bad faith, they would have to be aware of another epistemology and intentionally choose a lesser-quality one. Religious people act like they know the truth because that is the only epistemology they know. They aren't in bad faith; they are just underdeveloped with respect to the truth.


Their epistemology involves applying good and evil to the information landscape. If the CDC is designated as an evil organization, then its data is not to be trusted. If Mike Lindell or Trump is considered "righteous," then their data is to be trusted. That is their epistemology - one giant metaphysical appeal to righteous authorities. But if you look at the core of traditional monotheistic epistemologies, that's all they know. The Bible has righteous characters in it who claim authority. These righteous authorities claim that the book is infallible. Therefore, believers appeal to these righteous authorities and assume that it is all true. This makes it easy for them to ideologically defend themselves against outsiders, because they can define anything that criticizes righteous authorities as evil and untrustworthy - further insulating themselves from better methods of obtaining truth.


When religious-blue promotes false narratives, it is very likely that they have merely brainwashed themselves into believing a "good/evil" narrative about society that informs their conclusions. It is very unlikely that religious-blue has the intent to deceive others by promoting those narratives. Growing up Mormon, I have seen how powerful the ability to self-brainwash is. Some ex-Mormons believe there is no way the top level of Mormon leadership is ignorant of all the ways Mormonism has been debunked, and so they impute an "intent to deceive" to the leaders of the Mormon church. I am extremely hesitant to take this extra step of assuming intent. I think it is extremely likely that the leaders have used a plethora of mental gymnastics to resolve their cognitive dissonance regarding issues of veracity and allow themselves to continue believing in the truth of the church's claims. Within the church hierarchy, virtue is established by strength of belief. Under this value system, whoever has the best ability to self-induce belief will be able to rise in the hierarchy. So, essentially, the Mormon leadership would be composed of those who are best at self-deception.


One of the uniquely harmful things about Judeo-Christian religious memes I have been pondering lately is their assessment of the average person. According to their most fundamental myth, because of Adam and Eve's disobedience, womankind is cursed with much "sorrow", "pain in bringing forth children", and her "husband shall rule over you." Additionally, the "ground is cursed" because of Adam, "in toil" he will work for food, and "thorns and thistles" will torment him. The fundamental principle is how evil natural people are - so evil that we deserve cursings. This principle is propagated throughout scripture - God becoming so frustrated with the average person at the time of Noah that he destroyed them all with a flood. This meme infects the mind so successfully that even genocides against the Philistines are considered acceptable in the Old Testament - because they probably deserved it given that the average person is evil.


I fear that this meme about the average person being evil leads to some drastic conclusions in the political sphere. If the average person is evil, and there is "spiritual wickedness in high places," then it makes sense for the government to be an evil unit. It makes sense that there are evil units in government trying to collude with China on intentionally developing harmful viruses to kill people. It makes sense that the evil powers within government are trying to intentionally develop harmful vaccines and then force them on people. It makes sense that politicians are secretly engaged in rituals to worship Satan and eat children. But there is a fundamental flaw in this assessment of the average person: if the average person is good, then none of these conspiracies make sense. We need an accurate model of the moral nature of the average person if we want to judge conspiracies accurately.



Epistemology by Color


When trying to assess bad-faith intentions, it's important to remember that people are usually trying their best at the level they are at. We should try to give people the benefit of the doubt whenever possible. But what if we insist on discovering their bad intentions? Then we must investigate the concept of "mens rea."


Intentional vs. Unintentional

Intentional harmful behavior is often criminal, but unintentional harmful behavior comes in two basic forms. The first is "mistake in fact" and the second is "mistake of law."

Mistake in fact means that, although your behavior fit the definition of a crime in an objective sense, you were acting on mistaken knowledge. For example, a person could objectively be selling drugs, but mistakenly believe that he or she is just selling a bag of baking soda. As a result, that person is likely to lack the mens rea, or mental intent, necessary under a drug law, because he or she never intended to sell an illegal drug, just baking soda (although few people will believe that you honestly thought baking soda could be sold for that much money).

Committing a Crime "Knowingly"

Many criminal laws require a person to "knowingly" engage in illegal activity. Which part of the offense needs to be done knowingly depends on the crime. For example, a drug trafficking law might require that the person "knowingly" import an illegal drug into the United States. If the defendant had been given a gift to deliver to someone in the U.S., and the defendant honestly did not know that the gift contained an illegal drug, then the necessary mens rea, or mental state, has not been established and no crime was committed.


The court will have little difficulty in establishing mens rea if there is actual evidence - for instance, if the accused made an admissible admission. This would satisfy a subjective test. But a significant proportion of those accused of crimes make no such admission. Hence, some degree of objectivity must be brought to bear as the basis upon which to impute the necessary components.

It is always reasonable to assume that people of ordinary intelligence are aware of their physical surroundings and of the ordinary laws of cause and effect (see causation). Thus, when a person plans what to do and what not to do, he will understand the range of likely outcomes from given behaviour on a sliding scale from "inevitable" to "probable" to "possible" to "improbable". The more an outcome shades towards the "inevitable" end of the scale, the more likely it is that the accused both foresaw and desired it, and, therefore, the safer it is to impute intention. If there is clear subjective evidence that the accused did not have foresight, but a reasonable person would have, the hybrid test may find criminal negligence. In terms of the burden of proof, the requirement is that a jury must have a high degree of certainty before convicting, defined as "beyond a reasonable doubt" in the United States.

Relevance of Motive

One of the mental components often raised in issue is that of motive. If the accused admits to having a motive consistent with the elements of foresight and desire, this will add to the level of probability that the actual outcome was intended (it makes the prosecution case more credible). But if there is clear evidence that the accused had a different motive, this may decrease the probability that he or she desired the actual outcome. In such a situation, the motive may become subjective evidence that the accused did not intend, but was reckless or willfully blind.

Motive cannot be a defense. If, for example, a person breaks into a laboratory used for the testing of pharmaceuticals on animals, the question of guilt is determined by the presence of an actus reus, i.e. entry without consent and damage to property, and a mens rea, i.e. intention to enter and cause the damage. That the person might have had a clearly articulated political motive to protest such testing does not affect liability. If motive has any relevance, this may be addressed in the sentencing part of the trial, when the court considers what punishment, if any, is appropriate.

Test Intent

1) Find a document that spells out their plan or intention in their own words;

2) Ask good questions in a courtroom and get them to accidentally reveal their intent;

3) Debunk all other possible explanations so that the court can deduce that criminal intent is the most reasonable interpretation of the events.


In-Depth Strategy to Test Intent

  • Patterns - Identify patterns of behavior, connections, relationships, or evidence that hints at a narrative of a certain motivation.

  • Narratives - Identify any other possible narratives and see how much evidence there is for them.

  • Evaluation - Establish a probability estimate for each possible narrative.

  • Debunk - Try to debunk each narrative by finding any piece of evidence that could contradict the narrative.

  • Confidence - Check if the leading narrative is so probable and convincing that it would seem self-evidently true beyond a reasonable doubt.

  • Conclude - Conclude that the narrative tells a consistent story of how a motivation is playing out through a series of behaviors carried out with ill intent in mind.
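
As a toy illustration of this checklist, the sketch below scores a few competing narratives, drops any that the evidence contradicts, and only concludes ill intent if the leading narrative clears a confidence bar. The example narratives, the rough probability numbers, and the 0.95 threshold are my own illustrative assumptions; the post does not prescribe specific values.

```python
# Toy sketch of the narrative-testing checklist: patterns -> narratives ->
# evaluation -> debunk -> confidence -> conclude. All numbers are illustrative.

REASONABLE_DOUBT = 0.95  # assumed stand-in for "beyond a reasonable doubt"

narratives = {
    # narrative: (rough probability estimate, contradicted by any evidence?)
    "intended to deceive": (0.55, False),
    "sincerely confused": (0.40, False),
    "misquoted or missing context": (0.05, True),
}

# Debunk: discard any narrative that a piece of evidence contradicts.
surviving = {name: p for name, (p, contradicted) in narratives.items()
             if not contradicted}

# Re-spread the rough estimates over the surviving narratives.
total = sum(surviving.values())
surviving = {name: p / total for name, p in surviving.items()}

# Confidence: only conclude ill intent if the leading narrative clears the bar.
leading, confidence = max(surviving.items(), key=lambda kv: kv[1])
if leading == "intended to deceive" and confidence >= REASONABLE_DOUBT:
    print("Conclude intent (beyond a reasonable doubt).")
else:
    print(f"Withhold the accusation: '{leading}' only reaches {confidence:.0%}.")
```

Run as written, the leading narrative only reaches about 58% confidence, so the sketch withholds the accusation - exactly the kind of restraint the checklist is meant to encourage.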



Type I vs. Type II Errors

These errors identify the gaps between our conclusions and reality, since sometimes our conclusions are wrong. With respect to a crime, we can conclude that someone 1) intended to commit the crime, or 2) committed it accidentally. Since intent is a tricky, subjective problem concerning the perpetrator's mental state at the time, it is hard to be completely accurate in our conclusions about other people's minds when there isn't much evidence. Hence, two errors can arise in our assessment of intent: a Type I error - concluding that evil intent exists when in reality it was an accident, so that you are punishing an innocent-minded person; and a Type II error - concluding that there is no evil intent when in reality the criminal has fooled you into believing he didn't intend it.
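
For readers who prefer to see the two error types side by side, here is a minimal sketch in code; the function name and the wording of the labels are mine, not terms from the post.

```python
# Minimal mapping of Type I and Type II errors onto judgments of intent.

def judgment_error(concluded_intent: bool, actual_intent: bool) -> str:
    """Compare our conclusion about someone's intent with the reality."""
    if concluded_intent and not actual_intent:
        return "Type I error: punishing an innocent-minded person"
    if not concluded_intent and actual_intent:
        return "Type II error: the deceiver escapes detection"
    return "correct judgment"

print(judgment_error(concluded_intent=True, actual_intent=False))   # Type I
print(judgment_error(concluded_intent=False, actual_intent=True))   # Type II
```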





Probability of Errors

In general, criminals are motivated to lie and obscure the truth in ways that benefit them. So it makes sense that criminals have a vested interest in promoting Type II errors; therefore, the probability of a Type II error goes up when intelligent guilty parties are involved in the process. But Type I errors become more likely as a crime or conspiracy grows in scope, complexity, and number of downstream effects. As things become more complex, people are less able to predict the future, so unintended consequences are more likely to arise.


Type II Error Risk Assessment

Intelligent criminals are a small subgroup of the criminal segment of any given population. Criminals can often be locked up even without proof of intent, so there isn't a large danger of "repeat Type II error offenders." If an intelligent criminal does escape punishment due to a Type II error and never commits another crime, they are not harmful to society, even though complete justice was not served. If the intelligent criminal does become a repeat offender, then his chances of escaping punishment drop drastically, because the pattern of criminal behavior is itself evidence of intent, so the Type II error won't happen a second time. Hence, the statistical frequency of Type II errors is probably low, and the harm to society is also low.


Type I Error Risk Assessment

Type I errors, on the other hand, have a growing statistical propensity as social media expands the ability to accuse others without evidence. Data overload allows people to construct convincing narratives by cherry-picking data from the internet. The Dunning-Kruger effect shows that people are not very good at matching their confidence in their positions to the strength of those positions. Modern issues are increasing in complexity at an unsustainable rate, meaning that people can no longer trust even PhD-level authorities to be accurate in their assessments. Combining these issues shows that Type I errors are extremely likely to occur, driven by our overconfidence in our ability to interpret the data and evidence. What are the dangers of a Type I error? Perhaps a right-wing insurrection at the Capitol, threatening civil war across the entire nation, because a group of people were unjustifiably confident in their interpretations of election data. Perhaps the propagation of left-wing BLM riots that cause immeasurable damage, because a group of people were unjustifiably confident in their interpretations of policing data. Perhaps a libertarian rejection of Covid regulations, harming millions of people, because a group of people were unjustifiably confident in their interpretations of Covid data. The harm to society is so much greater under Type I errors, and they are so much more likely to occur.


Conclusion

When it comes to complex social issues, I don't think the average person who accuses others of "lies" is smart enough to determine, with a high degree of accuracy, that those statements are 1) false (objectivity) and 2) intentionally deceptive (subjectivity). If the average person isn't smart enough to do this, then the average person needs to STOP doing this. The solution is epistemic humility. Epistemic humility means resisting the Dunning-Kruger effect. It means we need to stop thinking we know things when we don't have the evidence to back them up. It means we need to stop thinking in one narrative and expand our perspective to assess multiple narratives. It means we need to stop assuming guilt without warrant. It means that at the social level we need to start implementing the presumption of "innocent until proven guilty" that has always been guaranteed at the legal level via the Fifth Amendment to the Constitution. It means we need the average citizen to level up in the way they address social issues.


