Every day, people and organizations are confronted with the challenge of making good decisions. These decisions can determine whether a company succeeds or fails, or whether governments meet the needs of the public they serve. In philanthropic organizations like ours, they determine who we partner with and the strategies we use to tackle some of the biggest challenges of our time.
With so much at stake, how can we improve the consistency and rigor of our decision-making processes – and ensure they aren’t infected by bias or made arbitrary by too much ‘noise’? How do we ensure our choices are driven by evidence and expertise, while also incorporating experience and intuition? And how do we ensure they reflect the core values of the foundation, while also including the voices of those most impacted by the outcome?
As part of the foundation’s efforts to strengthen our strategic learning practices, we invited economist and psychologist Daniel Kahneman to discuss his research into how systematic bias and noise can impact professional judgment. Dr. Kahneman is the author of Thinking, Fast and Slow and co-author, with Cass Sunstein and Olivier Sibony, of Noise: A Flaw in Human Judgment. He won the 2002 Nobel Prize in Economic Sciences.
Below are some of the highlights from our conversation with Dr. Kahneman on intuition, judgment and decision-making.
How did you first get interested in decision-making and, in particular, errors in judgment as a topic for research and study?
I first became interested in judgment during my service in the Israeli army, where I spent part of my time as a psychologist, evaluating candidates for officer training. This is where I first encountered the phenomenon that I call the ‘illusion of validity’: people feel a great deal of confidence in judgments even when that confidence is actually based on little or nothing. I learned something about intuition – how to tame it and use it best.
How do you define cognitive bias and heuristics?
Bias is a systematic error in the cognitive analysis of judgment. I think there's general agreement that people use shortcuts – heuristics. Some people admire them. Others focus on their flaws. The way that we reach judgment is not modeled by a statistical or fully logical analysis. We have shortcuts. Any shortcut is associated with systematic error. This is in the nature of the heuristic. It doesn't utilize all the information and therefore it is prone to systematic error. This is not to say that the errors are so prevalent that they make judgments useless. In general, people are quite reasonable, but they do make systematic errors. And systematic errors are produced by the way that people think, not by their emotions or wishes, but simply by the way people make judgments. That entails errors, just like the way that the visual system entails visual illusions. That's what we call cognitive biases.
What is the distinction between thinking fast and slow?
Almost any bit of thinking that we do, certainly any action that we perform, involves components of both fast and slow thinking. But in general the characteristics of slow thinking are that it is slow and that it is effortful: mental work is involved. Whereas with association or perception, we just perceive the world. There is mental activity involved, but it seems to be effortless. [Slow thinking] involves work. It involves control. It involves deliberate mental action. Those are the characteristics. It is not necessarily a rational system. It is a system that makes errors. We can try our best to reason and still make errors of reasoning. Heuristic [biases] are mistakes that come from fast thinking, but slow thinking is [also] far from perfect.
How do you define noise?
What we call system noise is noise in the decisions of an organization. For example, your organization makes decisions, and system noise is the variability in those decisions. Now, we think that an organization should speak in one voice, because your decision makers – your individual grant givers – speak on behalf of the organization. So it seems shocking that there should be a lottery – that who happens to see an application can determine the outcome of that application.
Can you provide an example of noise in decision-making?
There is just a huge amount of noise in the judicial system. The example that I find striking is a [1981] study of 208 federal judges making decisions about 16 schematic cases. The fact that the cases are schematic should, in principle, help people agree, because there are no distracting details when you look at the crime and the defendant. When the average sentence given by the 208 judges is seven years, and you look at two judges at random, what would you expect the difference between their sentences to be? It is almost four years. If you think of that as a lottery that the defendant [who is being sentenced] faces when assigned a judge, that is shocking.
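To make that arithmetic concrete, here is a minimal sketch of the “lottery” calculation. The sentence data below are invented for illustration, not figures from the 1981 study; the gap a defendant faces is the mean absolute difference between all possible pairs of judges.

```python
import itertools
import statistics

# Hypothetical sentences (in years) that ten judges might assign to the same
# schematic case; illustrative numbers, not the data from the 1981 study.
sentences = [3.0, 5.5, 7.0, 4.0, 9.0, 6.5, 10.0, 7.5, 5.0, 12.0]

# The "lottery" a defendant faces: the expected gap between two judges drawn
# at random, computed as the mean absolute difference over all pairs.
pairs = list(itertools.combinations(sentences, 2))
expected_gap = sum(abs(a - b) for a, b in pairs) / len(pairs)

print(f"mean sentence: {statistics.mean(sentences):.1f} years")
print(f"expected gap between two random judges: {expected_gap:.1f} years")
```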
For an organization like ours that might want to reduce noise or increase accuracy [in decision making], where might we start?
I think you should start by measuring noise … by doing what we call a noise audit. Basically, you present the same problems – of the kind you decide routinely – to a sufficiently large number of members of the organization, so that you can measure the standard deviation of their judgments. The beauty of a noise audit is you don’t have to know the truth in order to measure noise. You can measure noise in a purely evaluative judgment, and you can measure noise in a predictive judgment where the outcome isn’t known. That’s really the beginning. It may turn out that you are fortunate and you don’t have much noise. Or it may turn out in an audit that you have an unacceptable level of noise. Or, very likely, different types of decisions you make will have different amounts of noise.
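As a rough illustration of what such an audit could look like in code – a minimal sketch with invented ratings, not a prescribed method – the key point is that noise can be measured as the spread of judgments on the same case, with no need to know the correct answer:

```python
import statistics

# Hypothetical noise-audit data: five members of the organization each judge
# the same three routine cases (say, grant applications) on a 0-100 scale.
ratings = {
    "case A": [62, 75, 58, 80, 67],
    "case B": [40, 45, 43, 38, 41],
    "case C": [85, 55, 70, 90, 60],
}

# No ground truth is required: noise is simply the spread of the judgments,
# measured here as the standard deviation of the scores for each case.
for case, scores in ratings.items():
    print(f"{case}: mean {statistics.mean(scores):.1f}, "
          f"noise (SD) {statistics.stdev(scores):.1f}")
```

In this made-up example, case B would look consistent while case C would show a level of disagreement worth investigating, even though nobody knows which score is “right.”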
Please watch the full video of our conversation here.
This conversation has been edited for length and clarity.