"... He was the engine to drive change!" - Hristina Funa, Director, SYNPEKS - Macedonia


"... He returned the faith in ourselves to be able to make great and significant changes!" - Karolina Peric. Director, IMACO Systemtechnik - BIH


"... Antonio has succeeded in three months what we have been trying to do for years..." Dejan Milovanović - AutoMilovanović


"... With Antonio we dramatically improved our cash flow ..." - Edvard Varda, Director, Zoo hobby


No one wakes up thinking, “I am going to make bad decisions today.” Yet we all make them.

The book Think Twice by Michael Mauboussin explores how counterintuitive the decision-making process is and offers a set of suggestions for avoiding common mistakes when making decisions.

Here is my assessment of the book Think Twice by Michael Mauboussin according to my 8 criteria:

1. Related to practice - 5 stars - the author says, "if you define theory as an explanation of cause and effect, it is eminently practical." That is why I give it 5 stars.

2. Important content prevails - 4 stars

3. I agree with what I read - 5 stars

4. Not difficult to read (for a non-native English speaker) - 4 stars

5. Length: too long (more than 500 pages) vs. short and concise (150-200 pages) - 4 stars

6. Boring vs. every sentence is interesting - 4 stars

7. Learning opportunity - 5 stars

8. Dry and uninspired vs. smooth style with humorous and fun parts - 4 stars

Total: 4.375 stars


Here are some highlights and excerpts from the book that I find worth remembering:

◆ Introduction

▪ No one wakes up thinking, “I am going to make bad decisions today.” Yet we all make them

▪ What is particularly surprising is that some of the biggest mistakes are made by people who are, by objective standards, very intelligent. Smart people make big, dumb, and consequential mistakes

▪ Keith Stanovich, a psychologist at the University of Toronto, writes: “Although most people would say that the ability to think rationally is a clear sign of superior intellect, standard IQ tests devote no section to rational thinking.”

▪ if you explain to intelligent people how they might go wrong with a problem before they decide, they do much better than if they solve the problem with no guidance

▪ “Intelligent people perform better only when you tell them what to do!” exclaims Stanovich

▪ I will take you through three steps:

  1. Prepare. The first step is mental preparation, which requires you to learn about the mistakes
  2. Recognize. Once you are aware of the categories of mistakes, the second step is to recognize the problems in context, or situational awareness. Here, your goal is to recognize the kind of problem you face, how you risk making a mistake, and which tools you need to choose wisely
  3.  Apply. The third and most important step is to mitigate your potential mistakes. The goal is to build or refine a set of mental tools to cope with the realities of life, much as an athlete develops a repertoire of skills to prepare for a game

▪ I present the class with a jar of coins and ask everyone to bid independently on the value of its contents. Most students bid below the actual value, but some bid well above the coins’ worth. The highest bidder wins the auction but overpays for the coins. This is known as “the winner’s curse.”
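The winner's curse is easy to reproduce in a few lines. Below is a minimal simulation (my own sketch, not from the book; the jar value, number of bidders, and noise level are illustrative): every bidder's estimate is unbiased, yet awarding the jar to the highest bidder guarantees systematic overpayment.

```python
import random

random.seed(42)  # reproducible runs

def winners_curse(true_value=20.0, n_bidders=30, noise=0.3, trials=10_000):
    """Simulate the coin-jar auction: each bid is an unbiased noisy
    estimate of the jar's value, but the *highest* bid wins."""
    total_winning_bid = 0.0
    overpaid = 0
    for _ in range(trials):
        bids = [random.gauss(true_value, noise * true_value)
                for _ in range(n_bidders)]
        winner = max(bids)
        total_winning_bid += winner
        if winner > true_value:
            overpaid += 1
    return total_winning_bid / trials, overpaid / trials

avg_winning_bid, overpay_rate = winners_curse()
# Although each individual estimate is centered on the true value,
# the winning bid overshoots it in almost every auction.
```

The more bidders there are, the worse the curse: the maximum of many unbiased estimates drifts ever further above the true value.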

▪ gave each participant a list of ten unusual questions of fact (e.g., gestation period of an Asian elephant) and asked for both a best guess and a high and low estimate, bounding the correct answer with 90 percent confidence. For example, I might reason that an elephant’s gestation is longer than a human’s and guess fifteen months. I might also feel 90 percent assured that the answer is somewhere between twelve and eighteen months. If my ability matches my confidence, then I would expect the correct answers to fall within that range nine times out of ten. But, in fact, most people are correct only 40 to 60 percent of the time, reflecting their overconfidence
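This calibration failure can be sketched numerically (my own illustration, not from the book): a forecaster whose true error has standard deviation 1.0 but who believes it is much smaller will report "90 percent" intervals that capture the truth only about half the time, which is consistent with the 40 to 60 percent figure above.

```python
import random

random.seed(42)  # reproducible runs

def interval_coverage(believed_std=0.4, true_std=1.0, trials=100_000):
    """The forecaster's errors have std `true_std`, but the reported
    '90%' interval is built from the narrower `believed_std`
    (overconfidence). Returns the fraction of trials in which the
    interval actually contains the true answer."""
    half_width = 1.645 * believed_std  # a correct 90% interval needs 1.645 * true_std
    hits = sum(abs(random.gauss(0.0, true_std)) <= half_width
               for _ in range(trials))
    return hits / trials

coverage = interval_coverage()
# The reported "90%" interval covers the truth only about half the time.
```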

▪ Richard Thaler, one of the world’s foremost behavioral economists, asked us to write down a whole number from zero to one hundred, with the prize going to the person whose guess was closest to two-thirds of the group’s average guess. In a purely rational world, all participants would coolly carry out as many levels of deduction as necessary to get to the experiment’s logical solution—zero. But the game’s real challenge involves considering the behavior of the other participants. You may score intellectual points by going with naught, but if anyone selects a number greater than zero, you win no prize. The winning answer, incidentally, is generally between eleven and thirteen.8
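The layered reasoning in Thaler's game can be made explicit. A quick sketch (mine, not the book's): a naive level-0 player guesses the midpoint, 50; each deeper level of deduction best-responds with two-thirds of the previous level, converging on the fully rational answer of zero.

```python
def levels_of_deduction(start=50.0, levels=10):
    """Iterated best response in the 'two-thirds of the average' game:
    level 0 guesses the midpoint; level k+1 plays 2/3 of level k."""
    history = [start]
    for _ in range(levels):
        history.append(history[-1] * 2 / 3)
    return history

path = levels_of_deduction()
# Path shrinks geometrically toward zero: 50.0, 33.3..., 22.2..., 14.8..., ...
# Real crowds are dominated by level-1 and level-2 reasoners, which is
# why the winning answer typically lands between eleven and thirteen.
```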

▪ Three factors determine the outcomes of your decisions: how you think about the problem, your actions, and luck

▪ That statistical reality begs a fundamental question: should you evaluate the quality of your decisions based on the process by which you make the decision or by its outcome?

▪ The intuitive answer is to focus on outcomes. Outcomes are objective and sort winners from losers. In many cases, those evaluating the decision believe that a favorable outcome is evidence of a good process. While pervasive, this mode of thinking is a really bad habit.

▪ Our most challenging decisions include an element of uncertainty, and at best we can express the possible outcomes as probabilities.

▪ Further, we must make decisions even when the information is incomplete. When a decision involves probability, good decisions can lead to bad outcomes, and bad decisions can lead to good outcomes (at least for a while).

▪ In a probabilistic environment, you are better served by focusing on the process by which you make a decision than on the outcome

▪ If you make a good decision and suffer a poor outcome, pick yourself up, dust yourself off, and get ready to do it again

▪ When evaluating other people’s decisions, you are again better served by looking at their decision-making process rather than on the outcome. There are plenty of people who succeed largely by chance

◆ Chapter 1

▪ An inside view considers a problem by focusing on the specific task and by using information that is close at hand, and makes predictions based on that narrow and unique set of inputs. These inputs may include anecdotal evidence and fallacious perceptions. This is the approach that most people use in building models of the future and is indeed common for all forms of planning

▪ The outside view asks if there are similar situations that can provide a statistical basis for making a decision. Rather than seeing a problem as unique, the outside view wants to know if others have faced comparable problems and, if so, what happened. The outside view is an unnatural way to think, precisely because it forces people to set aside all the cherished information they have gathered

▪ Kahneman and Amos Tversky, a psychologist who had a long collaboration with Kahneman, published a multistep process to help you use the outside view.21 I have distilled their five steps into four and have added some thoughts. Here are the four steps:

1. Select a reference class. Find a group of situations, or a reference class, that is broad enough to be statistically significant but narrow enough to be useful in analyzing the decision that you face

2. Assess the distribution of outcomes. Once you have a reference class, take a close look at the rate of success and failure

3. Make a prediction. With the data from your reference class in hand, including an awareness of the distribution of outcomes, you are in a position to make a forecast. The idea is to estimate your chances of success and failure. For all the reasons that I’ve discussed, the chances are good that your prediction will be too optimistic.

4. Assess the reliability of your prediction and fine-tune. How good we are at making decisions depends a great deal on what we are trying to predict. Weather forecasters, for instance, do a pretty good job of predicting what the temperature will be tomorrow. Book publishers, on the other hand, are poor at picking winners, with the exception of those books from a handful of best-selling authors. The worse the record of successful prediction is, the more you should adjust your prediction toward the mean (or other relevant statistical measure). When cause and effect is clear, you can have more confidence in your forecast.
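The four steps can be compressed into a toy shrinkage rule (my own sketch, not Kahneman and Tversky's procedure; the linear weighting and all numbers are illustrative): the poorer the record of prediction in a domain, the further the inside-view estimate should be pulled toward the reference-class mean.

```python
from statistics import mean

def outside_view_forecast(inside_estimate, reference_outcomes, reliability):
    """Blend an inside-view estimate toward the reference-class mean.
    `reliability` in [0, 1] stands for how well predictions in this
    domain have historically tracked outcomes (near 1 = weather-like,
    near 0 = book-publishing-like)."""
    base_rate = mean(reference_outcomes)
    return reliability * inside_estimate + (1 - reliability) * base_rate

# A founder privately expects 40% annual growth; comparable firms
# (hypothetical numbers) averaged 5% growth, and forecasts in this
# domain have low reliability, so the estimate shrinks most of the
# way back toward the base rate.
forecast = outside_view_forecast(40.0, [3, 7, -2, 10, 5, 7], reliability=0.2)
# forecast = 0.2 * 40 + 0.8 * 5 = 12.0
```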

◆ Chapter 2

▪ Anchoring is symptomatic of this chapter’s broader decision mistake: an insufficient consideration of alternatives. To be blunter, you can call it tunnel vision. Failure to entertain options or possibilities can lead to dire consequences, from a missed medical diagnosis to unwarranted confidence in a financial model.

▪ mental model is an internal representation of an external reality, an incomplete representation that trades detail for speed.5 Once formed, mental models replace more cumbersome reasoning processes, but are only as good as their ability to match reality. An ill-suited mental model will lead to a decision-making fiasco.6

▪ Anchoring is relevant in high-stakes political or business negotiations. In situations with limited information or uncertainty, anchors can strongly influence the outcome. For instance, studies show that the party that makes the first offer can benefit from a strong anchoring effect in ambiguous situations.

▪ Cognitive dissonance is one facet of our next mistake, the rigidity that comes with the innate human desire to be internally and externally consistent.14 Cognitive dissonance, a theory developed in the 1950s by Leon Festinger, a social psychologist, arises when “a person holds two cognitions—ideas, attitudes, beliefs, opinions—that are psychologically inconsistent.”15 The dissonance causes mental discomfort that our minds seek to reduce

▪ Stressed people struggle to think about the long term. The manager about to lose her job tomorrow has little interest in making a decision that will make her better off in three years

▪ How do you avoid the tunnel vision trap? Here’s a five-point checklist:

1. Explicitly consider alternatives. As Johnson-Laird’s model of reasoning suggests, decision makers often fail to consider a sufficient number of alternatives. You should examine a full range of alternatives, using base rates or market-derived guidelines when appropriate to mitigate the influence of the representativeness or availability biases

2. Seek dissent. Much easier said than done, the idea is to prove your views wrong. There are a couple of techniques. The first is to ask questions that could elicit answers that might contradict your own views. Then listen carefully to the answers. Do the same when canvassing data: look for reliable sources that offer conclusions different than yours. This helps avoid a foolish inconsistency.

3. Keep track of previous decisions. We humans have an odd tendency: once an event has passed, we believe we knew more about the outcome beforehand than we really did. This is known as hindsight bias. The research shows people are unreliable in recalling how an uncertain situation appeared to them before finding out the results

4. Avoid making decisions while at emotional extremes. Making decisions under ideal conditions is tough enough, but you can be sure your decision-making skills will rapidly erode if you are emotionally charged. Stress, anger, fear, anxiety, greed, and euphoria are all mental states antithetical to quality decisions. But just as it’s hard to make good decisions during emotional upheaval, it’s also hard to make good decisions in the absence of emotion. Antonio Damasio, a neuroscientist, suggests that “our reason can operate most efficiently” when we have some emotional poise. Whenever possible, try to postpone important decisions if you feel at an emotional extreme.

5. Understand incentives. Consider carefully what incentives exist, and what behaviors the incentives might motivate. Financial incentives are generally easy to spot, but nonfinancial incentives, like reputation or fairness, are less obvious yet still important in driving decisions. While few of us believe that incentives distort our decisions, the evidence shows that the effect can be subconscious. Finally, what may be individually good for group members can be destructive for the group overall

◆ Chapter 3

▪ he gave a few hundred people in the organization some basic background information and asked them to forecast February 2005 gift-card sales. When he tallied the results in March, the average of the nearly two hundred respondents was 99.5 percent accurate. His team’s official forecast was off by five percentage points

▪ the prediction market has been more accurate than the experts a majority of the time and has provided management with information it would not have had otherwise

▪ The expert squeeze means that people stuck in old habits of thinking are failing to use new means to gain insight into the problems they face. Knowing when to look beyond experts requires a totally fresh point of view, and one that does not come naturally

▪ Experts are initially important for these problems because they figure out the rules

▪ Once you have properly classified a problem, turn to the best method for solving it. As we will see, computers and collectives remain underutilized guides for decision making across a host of realms including medicine, business, and sports

▪ experts remain vital in three capacities

  1. First, experts must create the very systems that replace them. Severts helped design the prediction market that outperforms Best Buy’s in-house forecasters.
  2. Next, we need experts for strategy. I mean strategy broadly, including not only day-to-day tactics but also the ability to troubleshoot by recognizing interconnections as well as the creative process of innovation, which involves combining ideas in novel ways
  3. Finally, we need people to deal with people. A lot of decision making involves psychology as much as it does statistics. A leader must understand others, make good decisions, and encourage others to buy in to the decision

Scott Page, a social scientist who has studied problem solving by groups, offers a very useful approach for understanding collective decision making. He calls it the diversity prediction theorem, which states:

▪ Collective error = average individual error − prediction diversity

▪ Page discusses the diversity prediction theorem in depth in his book The Difference, and provides numerous examples of the theorem in action

▪ Also important is that collective accuracy is equal parts ability and diversity. You can reduce the collective error either by increasing ability or by increasing diversity

▪ With the diversity prediction theorem in hand, we can flesh out when crowds predict well. Three conditions must be in place: diversity, aggregation, and incentives

▪ Diversity reduces the collective error. Aggregation assures that the market considers everyone’s information. Incentives help reduce individual errors by encouraging people to participate only when they think they have an insight
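Page's theorem is an algebraic identity, easy to verify numerically (my own check; the guesses are made up). Here collective error is the squared error of the average prediction, individual error is each person's squared error, and diversity is the variance of the predictions:

```python
from statistics import mean

def diversity_prediction(predictions, truth):
    """Verify Page's identity: (mean prediction - truth)^2 equals the
    average individual squared error minus the variance (diversity)
    of the predictions."""
    avg = mean(predictions)
    collective_error = (avg - truth) ** 2
    avg_individual_error = mean((p - truth) ** 2 for p in predictions)
    diversity = mean((p - avg) ** 2 for p in predictions)
    return collective_error, avg_individual_error - diversity

# Three guesses at a jar holding 1,000 coins (illustrative numbers):
# each individual is far off, yet their errors cancel, so the crowd
# is exactly right and diversity fully offsets individual error.
left, right = diversity_prediction([600, 1000, 1400], truth=1000)
```

Because it is an identity, the two sides match for any set of predictions, which is why accuracy can be bought with either more ability or more diversity.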

▪ intuition does not work all the time. This idea introduces our third decision mistake, inappropriately relying on intuition. Intuition can play a clear and positive role in decision making. The goal is to recognize when your intuition will serve you well versus when it will lead you astray

▪ Consider the two systems of decision making that Daniel Kahneman describes in his 2002 Nobel Prize lecture. System 1, the experiential system, is “fast, automatic, effortless, associative, and difficult to control or modify.” System 2, the analytical system, is “slower, serial, effortful, and deliberately controlled.”


• Experts perceive patterns in their areas of expertise. 

• Experts solve problems much faster than novices do. 

• Experts represent problems at a deeper level than novices do. 

• Experts can solve problems qualitatively


▪ Intuition therefore works well in stable environments, where conditions remain largely unchanged (e.g., the chess board and pieces), where feedback is clear, and where cause-and-effect relationships are linear. Intuition fails when you are dealing with a changing system

▪ Despite its near-magical connotation, intuition is losing relevance in an increasingly complex world

▪ final mistake: leaning too much on either formula-based approaches or the wisdom of crowds. While computers and collectives can be very useful, they do not warrant blind faith.

▪ An example of overreliance on numbers is what Malcolm Gladwell calls the mismatch problem. The problem, which you will immediately recognize, occurs when experts use ostensibly objective measures to anticipate future performance. In many cases, experts rely on measures that have little or no predictive value.

▪ Gladwell argues that the mismatch problem extends well beyond sports. He cites examples from education (credentials are poor predictors of performance) and the legal profession (individuals accepted to law school under lower affirmative-action standards do as well as their classmates after graduation).

▪ Unchecked devotion to the wisdom of crowds is also folly. While free-market devotees argue that prices reflect the most accurate assessments available, markets are extremely fallible. That is because when one or more of the three wisdom-of-crowds conditions are violated, the collective error can swell. Not surprisingly, diversity is the most likely condition to fail because we are inherently social and imitative

▪ For example, information cascades occur when people make decisions based on the actions of others, rather than on their own private information

▪ Without diversity, collectives large or small can be wildly off the mark.

▪ So what can you do to make the expert squeeze work for you instead of against you? Here are some recommendations to consider

1. Match the problem you face with the most appropriate solution.

2. Seek diversity

Tetlock’s work shows that while expert predictions are poor overall, some are better than others. What distinguishes predictive ability is not who the experts are or what they believe, but rather how they think.

Tetlock sorted experts into hedgehogs and foxes

▪ Hedgehogs know one big thing and try to explain everything through that lens

▪ Foxes arrive at their decisions by stitching “together diverse sources of information,” lending credence to the importance of diversity

3. Use technology when possible. Offset the expert squeeze by leveraging technology

◆ Chapter 4

▪ Asch then launched the experiment, cueing the confederates to give the wrong answer to see how the subject, who answered last, would respond. While some did remain independent, about one-third of the subjects conformed to the group’s incorrect judgment. The experiment showed that group decisions, even obviously poor ones, influence our individual decisions.

Based on close observation, he suggested three descriptive categories to explain the conforming behavior:

• Distortion of judgment. These subjects conclude that their perceptions are wrong and that the group is right. 

• Distortion of action. These individuals suppress their own knowledge in order to go with the majority. 

• Distortion of perception. This group is not aware that the majority opinion distorts their estimates

▪ The heart of this chapter’s message is that our situation influences our decisions enormously. The mistakes that follow are particularly difficult to avoid because these influences are largely subconscious. Making good decisions in the face of subconscious pressure requires a very high degree of background knowledge and self-awareness.

▪ Social influence arises for a couple of reasons. The first is asymmetric information, a fancy phrase meaning someone knows something you don’t. In those cases, imitation makes sense because the information upgrade allows you to make better decisions.

▪ Peer pressure, or the desire to be part of the in-group, is a second source of social influence. For good evolutionary reasons, humans like to be part of a group—a collection of interdependent individuals—and naturally spend a good deal of time assessing who is “in” and who is “out.”

▪ When French music played, French wines represented 77 percent of the sales. When German music played, consumers selected German wines 73 percent of the time. (See figure 4–2.) The music made a huge difference in shaping purchases. But that’s not what the shoppers thought

▪ This experiment is an example of priming, which psychologists formally define as “the incidental activation of knowledge structures by the current situational context”

▪ The donor statistics point to our second mistake: the perception that people decide what is best for them independent of how the choice is framed. In reality, many people simply go with default options

▪ Richard Thaler, an economist, and Cass Sunstein, a law professor, call the relationship between choice presentation and the ultimate decision “choice architecture.” They convincingly argue that we can easily nudge people toward a particular decision based solely on how we arrange the choices for them

▪ The buyer of insurance and lottery tickets also personifies a third mistake: relying on immediate emotional reactions to risk instead of on an impartial judgment of possible future outcomes

▪ Using the distinction between System 1 (the fast, experiential one) and System 2 (the slower, analytical one), this mistake arises when the experiential system overrides the analytical system, leading to decisions that deviate substantially from the ideal.

▪ The central idea is called affect, or how the positive or negative emotional impression of a stimulus influences decisions.

▪ Affective responses occur quickly and automatically, are difficult to manage, and remain beyond our awareness.

▪ final mistake: explaining behavior by focusing on people’s dispositions, rather than considering the situation

▪ Inertia, or resistance to change, also shows how the situation shapes real-world decisions. A common answer to “Why do we do it this way?” is “We’ve always done it this way.” Individuals and organizations perpetuate poor practices even when their original usefulness has disappeared or better methods have surfaced. The situation keeps people from taking a fresh look at old problems

▪ To overcome inertia, Peter Drucker, the legendary consultant, suggested asking the seemingly naïve question, “If we did not do this already, would we, knowing what we now know, go into it?”26

▪ Creating rigid procedures, like a pilot’s checklist, feels overly restrictive. Yet using checklists can help doctors save lives. It’s not that the doctors don’t know what to do—for the most part, they know their stuff—it’s just they don’t always follow all the steps they should

▪ In the United States, medical professionals put roughly 5 million lines into patients each year, and about 4 percent of those patients become infected within a week and a half. The added cost of treating those patients is roughly $3 billion per year, and the complications result in twenty to thirty thousand annual preventable deaths.

▪ Encouraged by these results, Pronovost convinced the Michigan Health & Hospital Association to adopt his checklists. The rate of infection in that state was above the national average. But after using the checklists for just three months, it had dropped by two-thirds. The program saved an estimated fifteen hundred lives and nearly $200 million in the first eighteen months

Here are some ideas to help you cope with the power of the situation:

1. Be aware of your situation. You can think of this in two parts. There is the conscious element, where you can create a positive environment for decision making in your own surroundings by focusing on process, keeping stress to an acceptable level, being a thoughtful choice architect, and making sure to diffuse the forces that encourage negative behaviors. Then there is coping with the subconscious influences. Control over these influences requires awareness of the influence, motivation to deal with it, and the willingness to devote attention to address possible poor decisions

2. Consider the situation first and the individual second. This concept, called attributional charity, insists that you evaluate the decisions of others by starting with the situation and then turning to the individuals, not the other way around

3. Watch out for the institutional imperative. Warren Buffett, the celebrated investor and chairman of Berkshire Hathaway, coined the term institutional imperative to explain the tendency of organizations to “mindlessly” imitate what peers are doing. There are typically two underlying drivers of the imperative. First, companies want to be part of the in-group, much as individuals do

4. Avoid inertia. Periodically revisit your processes and ask whether they are serving their purpose

◆ Chapter 5

▪ The colonies successfully feed, fight, and reproduce, with each insect following simple rules, acting on local information, and remaining clueless about what’s going on in the colony as a whole

▪ Let’s define a complex adaptive system and explain why it flummoxes observers. You can think of a complex adaptive system in three parts (see figure 5-1).5 First, there is a group of heterogeneous agents. These agents can be neurons in your brain, bees in a hive, investors in a market, or people in a city. Heterogeneity means each agent has different and evolving decision rules that both reflect the environment and attempt to anticipate change in it. Second, these agents interact with one another, and their interactions create structure—scientists often call this emergence. Finally, the structure that emerges behaves like a higher-level system and has properties and characteristics that are distinct from those of the underlying agents themselves

▪ Even though the individual ants are inept, the colony as a whole is smart

▪ If you want to understand an ant colony, don’t ask an ant. It doesn’t know what’s going on. Study the colony

▪ Humans have a deep desire to understand cause and effect, as such links probably conferred humans with evolutionary advantage

▪ When a mind seeking links between cause and effect meets a system that conceals them, accidents will happen

▪ In dealing with systems, the collective behavior matters more.

▪ The bungling supervision of Yellowstone illustrates a second mistake that surrounds complex systems: how addressing one component of the system can have unintended consequences for the whole

▪ All three mistakes have the same root: a focus on an isolated part of a complex adaptive system without an appreciation of the system dynamics

▪ What should you do when you find yourself dealing with a complex adaptive system? Here are some thoughts that may help your decision making

1. Consider the system at the correct level. Remember the phrase “more is different.” The most prevalent trap is extrapolating the behavior of individual agents to gain a sense of system behavior

▪ If you want to understand the stock market, study it at the market level

2. Watch for tightly coupled systems. A system is tightly coupled when there is no slack between items, allowing a process to go from one stage to the next without any opportunity to intervene

3. Use simulations to create virtual worlds. Dealing with complex systems is inherently tricky because the feedback is equivocal, information is limited, and there is no clear link between cause and effect. Simulation is a tool that can help our learning process. Simulations are low cost, provide feedback, and have proved their value in other domains like military planning and pilot training.23

▪ Even though complex adaptive systems surround us more now, our minds are no more adept at understanding them. Our innate desire to grasp cause and effect leads us to understand the system at the wrong level, resulting in predictable mistakes

◆ Chapter 6

▪ the importance of understanding context. Frequently, people try to cram the lessons or experiences from one situation into a different situation. But that strategy often crashes because the decisions that work in one context often fail miserably in another. The right answer to most questions that professionals face is, “It depends.”


▪ Most professionals are wary of the word theory because they associate it with something that is impractical. But if you define theory as an explanation of cause and effect, it is eminently practical. Sound theory helps to predict how certain decisions lead to outcomes across a range of circumstances

  1. The first stage is observation, which includes carefully measuring a phenomenon and documenting the results
  2. The second stage is classification, where researchers simplify and organize the world into categories to clarify differences among phenomena. Early in theory development, these categories are based predominantly on attributes
  3. The final stage is definition, or describing the relationship between the categories and the outcomes. Often, these relationships start as simple correlations

▪ Theories improve when researchers test predictions against real-world data, identify anomalies, and subsequently reshape the theory

▪ Outsourcing is not universally good. For example, outsourcing does not make sense for products that require the complex integration of disparate subcomponents. The reason is that coordination costs are high, so just getting the product to work is a challenge

▪ Before the 787, Boeing had controlled the design and engineering processes for its planes, ensuring the compatibility of the components and a smooth final assembly. But by ceding design and engineering to suppliers, Boeing’s 787 program became a case study in when to avoid outsourcing. Boeing was drawn to outsourcing as an attribute, without fully recognizing the circumstances under which it would work

▪ our second decision mistake, a failure to think properly about competitive circumstances

▪ the failure to distinguish between correlation and causality. This problem arises when researchers observe a correlation between two variables and assume that one caused the other. Once you are attuned to this mistake, you will see and hear it everywhere—especially in the media. Vegetarians have higher IQs. Nightlights lead to nearsightedness. Kids who watch too much television tend to be obese

▪ Here are some thoughts on how you can make sure you are correctly considering circumstances in your decision making

1. Ask whether the theory behind your decision making accounts for circumstances

2. Watch for the correlation-and-causality trap. People have an innate desire to link cause and effect and are not beyond making up a cause for the effects they see. This creates the risk of observing a correlation—often the result of chance—and assuming causation

3. Balance simple rules with changing conditions. Evolution provides a powerful argument for circumstance-based thinking. In evolution, the ability of an individual to survive and reproduce does not simply reflect specific attributes like size, color, or strength. Rather, the inherited characteristics that lead to survival and reproduction are inherently circumstantial. One approach to decision making—especially for rapidly changing environments—balances a handful of simple but definite rules with the prevailing conditions. For example, priority rules help managers rank the opportunities they identify, or exit rules tell them when to leave a business. The rules make sure that managers uphold certain core ideals while recognizing changing conditions, allowing for the requisite flexibility to decide properly.

4. Remember there is no “best” practice in domains with multiple dimensions. While many people, especially Westerners, are keen to determine which organization is best, crowning a winner in a high-dimensionality realm makes no sense

◆ Chapter 7

▪ Feedback can be negative or positive, and within many systems you see a healthy balance of the two. Negative feedback is a stabilizing factor, while positive feedback promotes change. But too much of either type of feedback can leave a system out of balance

▪ thermostat, which detects deviations from the temperature you set and sends instructions to return the temperature back to your desired level. Negative feedback resists change by pushing in the opposite direction

▪ Positive feedback reinforces an initial change in the same direction. Imagine a school of fish or a flock of birds eluding a predator. They move in unison to avoid the threat. We also see positive feedback at work in fads and fashions, where people imitate one another

▪ The focus of this chapter is phase transitions, where small incremental changes in causes lead to large-scale effects

▪ Put a tray of water into your freezer and the temperature drops to the threshold of freezing. The water remains a liquid until—ah-whoom—it becomes ice. Just a small incremental change in temperature leads to a change from liquid to solid

▪ in all these systems cause and effect are proportionate most of the time. But they also have critical points, or thresholds, where phase transitions occur. You can think of these points as occurring when one form of feedback overwhelms the other. When you don’t see it coming, the grand ah-whoom will surprise you.5

▪ Nassim Taleb, an author and former derivatives trader, calls the extreme outcomes within power law distributions black swans. He defines a black swan as an outlier event that has a consequential impact and that humans seek to explain after the fact.9
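A quick way to feel the difference between thin-tailed and power-law distributions is to sample from both. This sketch (my own, using a Pareto distribution as a stand-in for a power law) shows how the power-law sample produces outliers far beyond anything a normal distribution generates:

```python
import random

random.seed(0)

n = 100_000
normal = [random.gauss(0, 1) for _ in range(n)]            # thin tails
pareto = [random.paretovariate(1.5) for _ in range(n)]     # heavy-tailed power law

# In a thin-tailed sample the maximum sits a few standard deviations out;
# in a power-law sample a single draw can dwarf the typical value.
print("max of normal sample:", max(normal))
print("max of power-law sample:", max(pareto))
```

In the normal sample the extreme value stays unremarkable; in the power-law sample one observation can be hundreds of times the median, which is why extreme outcomes in such systems catch people by surprise.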

▪ Crowds tend to make accurate predictions when three conditions prevail—diversity, aggregation, and incentives. Diversity is about people having different ideas and different views of things. Aggregation means you can bring the group’s information together. Incentives are rewards for being right and penalties for being wrong that are often, but not necessarily, monetary

▪ For a host of psychological and sociological reasons, diversity is the most likely condition to fail when humans are involved. But what’s essential is that the crowd doesn’t go from smart to dumb gradually. As you slowly remove diversity, nothing happens initially. Additional reductions may also have no effect. But at a certain critical point, a small incremental reduction causes the system to change qualitatively

▪ Another mistake that we make when dealing with complex systems is what psychologists call reductive bias, “a tendency for people to treat and interpret complex circumstances and topics as simpler than they really are, leading to misconception.”14 When asked to decide about a system that’s complex and nonlinear, a person will often revert to thinking about a system that is simple and linear

▪ The only thing that goes up in a bear market is correlation

Here are some tips on how to cope with systems that have phase transitions:

1. Study the distribution of outcomes for the system you are dealing with. Thanks to Taleb’s prodding, many people now associate extreme events with black swans. But Taleb makes a careful, if overlooked, distinction: if we understand what the broader distribution looks like, the outcomes—however extreme—are correctly labeled as gray swans, not black swans

2. Look for ah-whoom moments

▪ As the discussion about the Millennium Bridge and the wisdom of crowds revealed, big changes in collective systems often occur when the system actors coordinate their behavior

▪ Coordinated behavior is at the core of many asymmetric outcomes, including favorable (best-selling books, venture capital) and unfavorable (national security, lending) outcomes. Be mindful of the level of diversity and recognize that state changes often come suddenly

3. Beware of forecasters. Humans have a large appetite for forecasts and predictions across a broad spectrum of domains. People must recognize that the accuracy of forecasts in systems with phase transitions is dismal, even by so-called experts

4. Mitigate the downside, capture the upside. One common and conspicuous error in dealing with complex systems is betting too much on a particular outcome

▪ In the 1950s, John Kelly, a physicist at Bell Labs, developed a formula for optimal betting strategy based on information theory. The Kelly formula tells you how much to bet, given your edge. One of the Kelly formula’s central lessons is that betting too much in a system with extreme outcomes leads to ruin
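The excerpt names the Kelly formula without stating it. For a simple binary bet the standard Kelly criterion is f* = (bp - q) / b, where p is the win probability, q = 1 - p, and b is the payout odds. This sketch (my illustration, not from the book) computes the fraction and shows why overbetting destroys long-run growth:

```python
import math

def kelly_fraction(p, b):
    """Optimal bet fraction for win probability p and payout odds b:1."""
    q = 1 - p
    return (b * p - q) / b

def growth_rate(f, p, b):
    """Expected log-growth per bet when staking fraction f of bankroll."""
    q = 1 - p
    return p * math.log(1 + f * b) + q * math.log(1 - f)

p, b = 0.6, 1.0                  # 60% chance to double the stake
f_star = kelly_fraction(p, b)    # bet 20% of bankroll
print("Kelly fraction:", f_star)
print("growth at Kelly:", growth_rate(f_star, p, b))        # positive
print("growth at 2x Kelly:", growth_rate(2 * f_star, p, b)) # about zero
print("growth at 90% stake:", growth_rate(0.9, p, b))       # sharply negative
```

At exactly double the Kelly fraction the expected growth rate falls to roughly zero, and beyond that it turns negative: the bettor with an edge still goes broke, which is the "betting too much leads to ruin" lesson.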

▪ In the end, the admonishment of investment legend Peter Bernstein should carry the day: “Consequences are more important than probabilities.” This does not mean you should focus on outcomes instead of process; it means you should consider all possible outcomes in your process.

◆ Chapter 8

▪ The idea is that for many types of systems, an outcome that is not average will be followed by an outcome that has an expected value closer to the average. While most people recognize the idea of reversion to the mean, they often ignore or misunderstand the concept, leading to a slew of mistakes in their analysis

▪ So Galton turned to something he could measure: sweet peas. He separated sweet pea seeds by size and showed that while the offspring tended to resemble the parent seed, their average size was closer to the mean of the full population.3

▪ Galton’s significant insight was that, even as reversion to the mean occurs from one generation to the next, the overall distribution of heights remains stable over time. This combination sets a trap for people because reversion to the mean suggests things become more average over time, while a stable distribution implies things don’t change much

▪ Daniel Kahneman neatly captured this idea when he was asked to offer a formula for the twenty-first century. He actually provided two. Here’s what he submitted:7

Success = Some talent + luck

Great success = Some talent + a lot of luck
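Kahneman's formula can be turned into a toy model (my sketch, not from the book): give everyone a fixed skill, add fresh luck each round, and watch the top performers of round one regress toward the mean in round two.

```python
import random

random.seed(1)

# Success = some talent + luck, as a toy model.
n = 10_000
skill = [random.gauss(0, 1) for _ in range(n)]
round1 = [s + random.gauss(0, 1) for s in skill]
round2 = [s + random.gauss(0, 1) for s in skill]

# Take the top 10% of round-1 performers and watch them regress.
top = sorted(range(n), key=lambda i: round1[i], reverse=True)[: n // 10]
avg1 = sum(round1[i] for i in top) / len(top)
avg2 = sum(round2[i] for i in top) / len(top)
print("top decile, round 1:", avg1)
print("same people, round 2:", avg2)  # closer to the population mean of 0
```

The same people score markedly lower the second time, not because their skill decayed but because the luck component does not repeat. They still beat the average, though, since skill persists.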

▪ When you ignore the concept of reversion to the mean, you make three types of mistakes

▪ The first mistake is thinking you’re special

▪ “Mediocrity tends to prevail in the conduct of competitive business,” wrote Horace Secrist, an economist at Northwestern University, in his 1933 book, The Triumph of Mediocrity in Business. With that stroke of the pen, Secrist became a lasting example of the second mistake associated with reversion to the mean—a misinterpretation of what the data says

▪ Kahneman soon realized that the instructor was committing our third mistake. The instructor believed that his insults caused the pilots to fly better. In reality, their performance was simply reverting to the mean. If a pilot had an unusually great flight, the instructor would be more likely to pay him a compliment. Then, as the pilot’s next flight reverted to the mean, the instructor would see a more normal performance and conclude that praise is bad for pilots. The instructors didn’t see that their feedback was less important for the performance on the next flight than reversion to the mean.15

▪ The main lesson is that feedback should focus on the part of the outcome a person can control. Call it the skill part, or the process. Feedback based only on outcomes is nearly useless if it fails to distinguish between skill and luck

▪ The halo effect is the human proclivity to make specific inferences based on general impressions. For example, Thorndike found that when superiors in the military rated their subordinate officers on specific qualities (e.g., intelligence, physique, leadership), the correlations among the qualities were impossibly high. If the officer liked his subordinate, he awarded generous grades across the board. If he didn’t like him, he gave poor marks. In effect, the overall impression the officer made on his superior obscured the details.16

▪ Mean reversion shapes company performance, which in turn manipulates perception

How do you avoid mistakes associated with reversion to the mean? Here’s a checklist that may help you identify important issues:

1. Evaluate the mix of skill and luck in the system that you are analyzing

▪ Here’s a simple test of whether an activity involves skill: ask if you can lose on purpose.23 Think about casino games like roulette or slots. Winning or losing is purely a matter of luck. It doesn’t matter what you do. But if you can lose on purpose, then skill is involved

2. Carefully consider the sample size. Daniel Kahneman and Amos Tversky established that people extrapolate unfounded conclusions from small sample sizes.24

▪ The more that luck contributes to the outcomes you observe, the larger the sample you will need to distinguish between skill and luck

▪ continuous success in a particular activity, require large doses of skill and luck

3. Watch for change within the system or of the system. Not all systems remain stable over time, so it’s important to consider how and why the system has changed.

4. Watch out for the halo effect.
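The sample-size point in item 2 can be made concrete with a simulation (my sketch, not from the book): when luck dominates each individual outcome, short samples often crown the wrong player, and only large samples reliably let skill show through.

```python
import random

random.seed(7)

def prob_skill_wins(skill_edge, luck_sd, sample_size, trials=2_000):
    """How often the truly better player also posts the better total."""
    wins = 0
    for _ in range(trials):
        a = sum(skill_edge + random.gauss(0, luck_sd) for _ in range(sample_size))
        b = sum(random.gauss(0, luck_sd) for _ in range(sample_size))
        wins += a > b
    return wins / trials

# Player A has a small real edge; luck swamps it in any single outcome.
for n in (5, 50, 500):
    print(n, prob_skill_wins(skill_edge=0.1, luck_sd=1.0, sample_size=n))
```

With five observations the genuinely better player wins only slightly more than a coin flip; with five hundred, skill dominates. The noisier the system, the larger the sample needed before you can say anything about skill.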

◆ Conclusion

▪ Think Twice’s value comes in situations where the stakes are sufficiently high and where your natural decision-making process leads you to a suboptimal choice

▪ So you must learn about the potential mistakes (prepare), identify them in context (recognize), and sharpen your ultimate decisions when the time comes (apply).

▪ Get Feedback. One of the best ways to improve decision making is through timely, accurate, and clear feedback - Having the decision-making process written in your own hand makes it much more difficult to conjure new explanations after the fact. This process of auditing is particularly useful when decisions made with a poor process lead to good outcomes

▪ Create a Checklist. When you face a tough decision, you want to be able to think clearly about what you might inadvertently overlook - A good checklist balances two opposing objectives. It should be general enough to allow for varying conditions, yet specific enough to guide action. Finding this balance means a checklist should not be too long; ideally, you should be able to fit it on one or two pages. If you have yet to create a checklist, try it and see which issues surface. Concentrate on steps or procedures, and ask where decisions have gone off track before. And recognize that errors are often the result of neglecting a step, not from executing the other steps poorly

▪ Perform a Premortem - a process that occurs before a decision is made. Many people are familiar with a postmortem, an analysis of a decision after the outcome is known. You assume you are in the future and the decision you made has failed. You then provide plausible reasons for that failure. In effect, you try to identify why your decision might lead to a poor outcome before you make the decision

▪ Know What You Can’t Know - In most day-to-day decisions, cause and effect are pretty clear. If you do X, Y will happen. But in decisions that involve systems with many interacting parts, causal links are frequently unclear. For example, what will happen with climate change? Where will terrorists strike next? When will a new technology emerge? Remember what Warren Buffett said: “Virtually all surprises are unpleasant.”9 So considering the worst-case scenarios is vital and generally overlooked in prosperous times