So far, my newsletter has been about how and why we get things wrong. I worry that this focus might create a bias toward believing that humans are overly gullible, which I think paints a misleading picture of human nature. In a previous newsletter, I placed partial blame on ‘Truth-Default Theory’ (TDT) for our receptivity to pseudo-profound bullshit. Though the truth default is undoubtedly a culprit there, it is much more than that: it is the heuristic that lets us make accurate predictions, and the foundation on which we trust one another.
Truth-Default Theory
TDT builds on the premise that to comprehend an idea, we must first accept it as true. This premise was first articulated by Spinoza:
Experience seems to tell us most indisputably that we are able to suspend judgment so as not to assent to things that we perceive… (and) that the will, that is, the faculty of assenting, is free, and different from the faculty of understanding… (but) I reply by denying that we have free power to suspend judgment.
For Spinoza, believing and comprehending were two words for the same mental operation; mere understanding, without simultaneous belief, was impossible.
Spinoza’s hypothesis has held up impressively well against the current literature. The psychologist Dan Gilbert is perhaps most responsible for substantiating it. In a series of experiments, he and his colleagues found that distraction made subjects more likely to judge false propositions true, but not vice versa. Participants exhibited this truth bias even when a proposition was labeled false beforehand. Merely comprehending a proposition also increased the likelihood that participants would later judge it true. Gilbert’s results suggest that information is initially represented as true, and that the truth bias is difficult to snap people out of.
A lot has been made of TDT. Some suggest it confirms that we will believe nearly any assertion thrown our way. If that were true, we could get anyone to listen to our thoughts, and they would gladly take our assertions at face value (if you have a spouse, I’m sure you are already skeptical!). TDT makes no such claim, and it doesn’t require that we believe everything we hear. It only asserts that we believe an assertion during the process of comprehending it; we may or may not reject it afterward.
Some will use TDT as a point in favor of human irrationality. But having a truth bias is hardly irrational, considering that most people tell the truth most of the time. In fact, at least half of all lies are told by a few prolific liars (got someone in mind?). If most people lied most of the time, everyone would suffer reputational costs and cooperation would be impossible. That scenario is unlikely, given that lying is a universal taboo: all known human cultures, religions, and legal systems penalize it. Lastly, trusting others saves us the time and energy of scrutinizing everything ourselves.
Though the truth default is quite robust, it has failed to replicate in some notable ways. Context matters: law enforcement professionals, for example, have exhibited a lie bias rather than a truth bias (though it has yet to be shown that this lie bias arises after comprehension, in which case TDT would still hold). Base rates also matter: the lower the base rate (the prior probability that something is true before any new information arrives), the weaker the truth bias. Lastly, the truth bias is weaker when people are given the option to express uncertainty. But since uncertainty is an uncomfortable state of mind, it is hard to tell whether this finding would hold up outside the lab.
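To make the base-rate point concrete, here is a toy Bayesian sketch of my own (not a model from the papers cited here): a judge who treats the base rate as a prior will lean toward ‘true’ by default only when truths are common.

```python
def posterior_true(base_rate: float, likelihood_ratio: float = 1.0) -> float:
    """P(statement is true | evidence), computed via Bayes' rule.

    base_rate: prior probability that a statement of this kind is true.
    likelihood_ratio: P(evidence | true) / P(evidence | false);
    a value of 1.0 means the evidence itself is uninformative.
    """
    prior_odds = base_rate / (1 - base_rate)
    posterior_odds = prior_odds * likelihood_ratio
    return posterior_odds / (1 + posterior_odds)

# With uninformative evidence, the rational verdict simply tracks the base rate:
for base_rate in (0.9, 0.5, 0.1):
    print(f"base rate {base_rate:.0%} -> P(true) = {posterior_true(base_rate):.0%}")
# 90% -> 90%, 50% -> 50%, 10% -> 10%: defaulting to "true" is only
# a good bet when truthful statements dominate.
```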
Detecting Lies
It seems we have all agreed that telling the truth and trusting each other is generally a good thing. We presume that when people communicate with us, they do so in a spirit of cooperation and with a desire to be understood. The truth default can be abandoned, though, especially when we suspect deceptive intent. Without something triggering us to abandon the truth-default state, lies generally go unnoticed. Trigger events include (but are not limited to):
Verbal
1. Obvious lack of coherence
2. Assertions that contradict reality
3. Transparently illogical conclusions
Nonverbal
1. A projected motive for lying
2. Dishonest demeanor
3. Reputation for dishonest behavior
The literature on lie detection shows that detecting lies on the basis of nonverbal cues is difficult, even for experts. It is often cited that we spot lies with only 54% accuracy, barely better than chance. But this may simply be because some people are exceptionally bad at lying, not because we are 54% clever. TDT implies that improved accuracy rests on attending to the content of a statement rather than to nonverbal cues. Indeed, most lies are detected through verbal and argumentative triggers, or through direct confession by the liar.
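A toy simulation shows how a handful of transparent liars could generate the famous 54% without any genuine detection skill. (The 16% share of ‘transparent’ lies below is an assumption I chose to reproduce the figure, not a number from the literature.)

```python
import random

random.seed(0)

N = 200_000           # statements judged
P_LIE = 0.5           # lab studies typically use a 50/50 truth/lie mix
P_TRANSPARENT = 0.16  # assumed share of lies that give themselves away

correct = 0
for _ in range(N):
    is_lie = random.random() < P_LIE
    if is_lie and random.random() < P_TRANSPARENT:
        verdict_is_lie = True                   # transparent lies are always caught
    else:
        verdict_is_lie = random.random() < 0.5  # everything else is a coin flip
    correct += verdict_is_lie == is_lie

print(f"accuracy: {correct / N:.1%}")  # ~54%, with zero real skill in the judges
```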
The Case for Epistemic Vigilance (Against Gullibility)
Lies can be used in justifiable ways, such as to avoid violence. Consider a classic example: you are harboring Anne Frank in the attic, and the Nazis come banging on your door asking whether you have seen any Jews. Some capacity to lie is necessary in such cases, but lying can also be abused in pursuit of self-interest. It follows that we should expect to be endowed with cognitive tools that allow for skepticism, or epistemic vigilance.
Given that we have the ability to suspect deceit, it is best to keep one’s lies to a minimum and use them only when necessary. In exchange for one’s trust, one can generally expect true and relevant information; if that expectation is violated, the punishment is mistrust. Epistemic vigilance and our bias toward truth likely evolved together to allow for efficient cooperation and communication. An absolute belief-default would have been exploited by speakers who deceive whenever it serves their interests, while an absolute skeptical default would render us incapable of trust.
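A back-of-the-envelope payoff comparison makes this logic concrete. All of the numbers below are assumptions for illustration; the point is the qualitative ordering, not the particular values.

```python
# Toy payoffs for a listener: a gain for accepting a truth, a loss for
# accepting a lie, and zero for anything rejected. All values are assumed.
P_TRUTH = 0.9  # most messages are honest
GAIN = 1.0     # value of accepting true, relevant information
LOSS = 2.0     # cost of being deceived
CATCH = 0.7    # fraction of lies a vigilant listener screens out

always_believe = P_TRUTH * GAIN - (1 - P_TRUTH) * LOSS
never_believe = 0.0  # rejects every lie, but also every useful truth
vigilant = P_TRUTH * GAIN - (1 - P_TRUTH) * (1 - CATCH) * LOSS

print(f"absolute belief default:    {always_believe:+.2f}")  # +0.70
print(f"absolute skeptical default: {never_believe:+.2f}")   # +0.00
print(f"vigilant truth default:     {vigilant:+.2f}")        # +0.84
```

Under these made-up numbers, a vigilant truth default beats both extremes: it keeps most of the benefits of trust while trimming the cost of being deceived.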
The most damning case against gullibility is perhaps supplied by developmental psychology. At 16 months, children begin to notice when a familiar word is misused. The tendency to debate, criticize, and correct others arises as early as age 2. At 3, children show a preference for benevolent and competent informants. At 4, children undergo a major transition: they become able to judge informants as dishonest or incompetent, they grow more selective about whom they trust, and they begin to lie themselves. This last point, that skepticism toward others and manipulation of others emerge around the same time, supports the idea that epistemic vigilance and lying coevolved.
So why do people believe weird things? My guess is that it has less to do with gullibility and more to do with social pressure and social desirability. Believing weird things offers a sense of community. It may even be rational to believe weird things, if one’s goal is not epistemic soundness but making friends. To an otherwise isolated individual, holding false beliefs may well be worth it. If those around you believe the moon landing was faked, there is social pressure to go along with it. Nobody wants to be the skeptical killjoy.
People also get rewarded for holding fanciful beliefs. For example, as I detail in a previous post, people may share exaggerated information or conspiratorial ideas for a modicum of attention. Sharing information that is not widely accepted (perhaps because it is false) makes it seem as though one is privy to facts that the general public isn’t. Holding false beliefs can also provide meaning in an otherwise meaningless world. This may be part of the reason religions are found all over the world, and why they all emphasize promises of reward in this life or the next.
Though we may be Spinozan during comprehension, we retain the ability to reject hyperbolic assertions that go against our preconceptions. TDT doesn’t imply that we are all gullible morons, but that we are making our best guess in a world where most people tell the truth. Lastly, the weird beliefs we do accept usually slip by without much thought, whether because they are inconsequential, because questioning them isn’t worth our time, or because they function as signals of group membership. In short, gullibility results not from taking false ideas too seriously, but from not taking them seriously enough.
Twitter: @ryanbruno7287
References
Bond Jr., C. F., & DePaulo, B. M. (2006). Accuracy of deception judgments. Personality and Social Psychology Review, 10(3), 214-234.
DePaulo, B. M., Kashy, D. A., Kirkendol, S. E., Wyer, M. M., & Epstein, J. A. (1996). Lying in everyday life. Journal of Personality and Social Psychology, 70(5), 979.
Ekman, P. (2009). Telling lies: Clues to deceit in the marketplace, politics, and marriage (revised edition). WW Norton & Company.
Gilbert, D. T., Krull, D. S., & Malone, P. S. (1990). Unbelieving the unbelievable: Some problems in the rejection of false information. Journal of Personality and Social Psychology, 59(4), 601.
Harris, S. (2013). Lying. Four Elephants Press.
Koenig, M. A., Clément, F., & Harris, P. L. (2004). Trust in testimony: Children's use of true and false statements. Psychological Science, 15(10), 694-698.
Koenig, M. A., & Echols, C. H. (2003). Infants' understanding of false labeling events: The referential roles of words and the speakers who use them. Cognition, 87(3), 179-208.
Levine, T. R. (2014). Truth-default theory (TDT): A theory of human deception and deception detection. Journal of Language and Social Psychology, 33(4), 378-392.
Mascaro, O., & Sperber, D. (2009). The moral, epistemic, and mindreading components of children’s vigilance towards deception. Cognition, 112(3), 367-380.
Meissner, C. A., & Kassin, S. M. (2002). “He's guilty!”: Investigator bias in judgments of truth and deception. Law and human behavior, 26(5), 469-480.
Mercier, H. (2020). Not born yesterday: The science of who we trust and what we believe. Princeton University Press.
Neace, W. P., Deer, K., Michaud, S., & Bolling, L. (2011). Uncertainty is psychologically uncomfortable: A theoretic framework for studying judgments and decision making under uncertainty and risk. In Advances in Entrepreneurial Finance (pp. 93-117). Springer, New York, NY.
Pea, R. D. (1982). Origins of verbal logic: Spontaneous denials by two- and three-year-olds. Journal of Child Language, 9(3), 597-626.
Povinelli, D. J., & DeBlois, S. (1992). Young children's (Homo sapiens) understanding of knowledge formation in themselves and others. Journal of Comparative Psychology, 106(3), 228.
Serota, K. B., Levine, T. R., & Boster, F. J. (2010). The prevalence of lying in America: Three studies of self-reported lies. Human Communication Research, 36(1), 2-25.
Shermer, M. (2002). Why people believe weird things: Pseudoscience, superstition, and other confusions of our time. Macmillan.
Sperber, D., Clément, F., Heintz, C., Mascaro, O., Mercier, H., Origgi, G., & Wilson, D. (2010). Epistemic vigilance. Mind & Language, 25(4), 359-393.
Spinoza, B. (1992). Ethics: With The Treatise on the Emendation of the Intellect and Selected Letters. Hackett Publishing.
Street, C. N., & Richardson, D. C. (2015). Descartes versus Spinoza: Truth, uncertainty, and bias. Social Cognition, 33(3), 227-239.
Street, C. N., & Richardson, D. C. (2015). The focal account: Indirect lie detection need not access unconscious, implicit knowledge. Journal of Experimental Psychology: Applied, 21(4), 342.
Zimmerman, T., Njeri, M., Khader, M., Allen, J., Rosellini, A., & Eaves, T. (2020). A review of truth‐default theory: Implications for information behavior research. Proceedings of the Association for Information Science and Technology, 57(1), e312.