1. A psychodrama with 2 characters
    1. System 1
      1. Effortless originator of impressions and feelings that are the main source of the explicit beliefs and deliberate choices of System 2
        1. Offers tacit interpretations of what happens around you, linking the present with the past and expectations of the future
        2. It contains a model of the world that instantly evaluates events as normal or surprising
        3. It’s the source of rapid and intuitive judgement
        4. It does most of that without your conscious awareness
        5. It’s the source of many systematic errors of intuition
      2. Operates automatically and quickly. No sense of voluntary control
      3. S1 continuously generates impressions and feelings for S2. If endorsed by S2, those impressions and intuitions turn into beliefs, and impulses turn into voluntary actions
        1. S2 generally accepts suggestions from S1 with little or no modification
      4. S1 is generally very good at what it does, but it has biases that lead to systematic errors
        1. S1 cannot be turned off, and because of this, errors of intuitive thought are difficult to prevent
        2. Errors can only be prevented by the monitoring of S2, which requires effort and can’t be done constantly
        3. The best you can do is to learn to recognize situations in which mistakes are likely and try harder to avoid the mistakes when stakes are high
    2. System 2
      1. The conscious, reasoning self that has beliefs, makes choices, decides what to think, and what to do
        1. Agency, choice, concentration
      2. S2 is mobilized when a question arises for which S1 doesn’t have an answer, or when an event is detected that violates the model of the world that S1 maintains
        1. S2 has the ability to change the way S1 works by programming the normally automatic functions of attention and memory
        2. S2 continuously monitors your own behaviour, and it’s mobilized to increased effort when it detects an error about to be made
      3. Defining characteristic: effortful operation, and laziness (reluctance to invest more effort than strictly necessary)
        1. The thoughts and actions that S2 believes it has chosen are often guided by S1
      4. S2 is in charge of self-control, overcoming the impulses of S1
      5. S2 is an apologist for S1, rather than a critic, in the context of attitudes (affect heuristic)
        1. People let their likes and dislikes determine their beliefs about the world
          1. An example of substitution
          2. Hard question: what do I think about it?
          3. Easier question: how do I feel about it?
        2. S2 will search for information and arguments that are consistent with existing beliefs
        3. S2 will act as an endorser, rather than an enforcer, in these cases
  2. Part 1: Intuitive thinking
    1. Attention & effort
      1. The pupils of the eye are a window to the soul: their changing size tracks mental effort
      2. You have a limited budget of attention that you can allocate to activities
        1. As you become more skilled, demands for energy diminish
      3. “Law of least effort” applies both to cognitive and physical exertion
      4. Intense focusing can make people effectively blind (the “invisible gorilla”)
        1. You can do several things at the same time but only if they are easy and undemanding
        2. We can be blind to the obvious, and also blind to our blindness
      5. The response to mental overload: S2 protects the most important activity so it receives the attention it needs
        1. The operations of S2 require attention and are disrupted when attention is drawn away
        2. Switching from one task to another is effortful, especially under time pressure
        3. Effort is required to keep several ideas that require separate action in memory
    2. Self-control
      1. Self-control and deliberate thought apparently draw from the same limited budget of effort
        1. In a state of “flow”, maintaining focused attention requires no exertion of self-control, thereby freeing resources to be directed to the task at hand
      2. An effort of will or self-control is tiring (ego depletion)
        1. If you have to force yourself to do something, you’re less able to exert self-control when the next challenge comes
        2. Ego depletion involves conflict and the need to fight natural tendency
        3. Activities that impose high demands on S2 require self-control, and the exertion of self-control is depleting and unpleasant.
        4. Effortful mental activity appears to be especially expensive in glucose
          1. The effects of ego depletion could be undone by ingesting glucose
      3. S1 has more influence on behaviour when S2 is busy, and it has a sweet tooth
        1. People who are simultaneously challenged by a demanding cognitive task and by a temptation are more likely to yield to temptation
        2. People who are cognitively busy are more likely to make selfish choices, use sexist language, make superficial judgements
    3. Intelligence & Rationality
      1. Intelligence is not only the ability to reason; it is also the ability to find relevant material in memory and to deploy attention when needed
        1. Memory function is an attribute of S1
        2. The extent of deliberate checking and searching is a characteristic of S2
      2. Rationality
        1. Rational thinkers are more alert, intellectually active, less willing to be satisfied with superficially attractive answers, and more skeptical about their intuitions
        2. Superficial (lazy) thinking is a flaw of the reflective mind, a failure of rationality
      3. High intelligence does not make people immune to biases
    4. Thinking by Association
      1. Associative activation: ideas that have been evoked trigger many other ideas, in a spreading cascade of activity in the brain
        1. Associative coherence: each element is connected, and each supports and strengthens the others
        2. Happens quickly, and all at once
          1. Only a few of the activated ideas will register in consciousness
        3. Associative memory is a network with different kinds of links
          1. Causes and effects
          2. Things to their properties
          3. Things to categories they belong to
      2. Priming
        1. Priming effect: exposure to a word causes immediate and measurable changes in the ease with which many related words can be evoked
        2. Primed ideas have some ability to prime other ideas, although more weakly
        3. Your actions and emotions can be primed by events you’re not even aware of
          1. It can also work in reverse: an action can influence an idea
        4. Priming arises in S1 and you have no conscious access to it
    5. Cognitive Ease
      1. A dial with two extremes
        1. Ease: no threats, no need to mobilize effort or redirect attention
        2. Strain: a problem exists, which will require mobilization of S2
          1. Cognitive strain, whatever the source, mobilizes S2, which is more likely to reject intuitive answers suggested by S1
      2. Inputs
        1. Repeated experience
          1. Familiarity has a quality of “pastness” that seems to indicate that it’s a reflection of prior experience
          2. Frequent repetition leads to familiarity, even if only part of the message is repeated
          3. You experience greater cognitive ease with something you’ve seen earlier, and this sense of ease gives you the impression of familiarity
          4. If an answer feels familiar, you assume it’s probably true
          5. The impression of familiarity is produced by S1, and S2 relies on it for a true/false judgement
        2. Clear display
          1. There may be other causes for a feeling of ease (such as font, or the rhythm of the prose), and you have no simple way to trace the feeling to its source
        3. Primed ideas
          1. If an idea is linked by logic or association to other beliefs or preferences you hold, or comes from a source you trust and like, you will feel cognitive ease and assume it’s true
        4. Good mood
          1. Mood affects S1: when we’re uncomfortable or unhappy, we lose touch with our intuition
          2. A happy mood loosens control of S2 over performance: people become more intuitive, more creative, but also less vigilant and more prone to errors
      3. Outputs
        1. Feels familiar
        2. Feels true
        3. Feels good
        4. Feels effortless
    6. Jumping to conclusions
      1. Norms & Causes
        1. S1 has access to norms of categories which specify the range of plausible values as well as most typical cases
        2. When something is detected as an abnormality, S1 automatically finds a causal connection to construct a coherent story
      2. Neglect of ambiguity and doubt
        1. When uncertain, S1 bets on an answer
          1. Recent events and current context have the most weight
          2. S1 doesn’t keep track of the alternatives it rejects, or even of the fact that there were alternatives
        2. S1 begins with an attempt to believe an idea, in order to understand it
          1. S2 can challenge the answer, but it’s sometimes busy and it’s often lazy
      3. Confirmation bias
        1. People seek data that are likely to be compatible with the beliefs they currently hold
      4. Halo Effect
        1. Tendency to like (or dislike) everything about a person, including things you haven’t observed.
        2. Sequence matters: the weight of first impressions matters to the point that subsequent information is mostly wasted
        3. De-correlate errors to tame halo effect
          1. Make errors that individuals make independent of the errors made by others
          2. Works well only when observations are independent and their errors have no correlation
      5. What you see is all there is (WYSIATI)
        1. Information that is not retrieved by the association machine is ignored, and might as well not exist
        2. The measure of success for S1 is the coherence of the story it manages to create. The amount and quality of the data are largely irrelevant
    7. Making Judgements
      1. Basic assessment
        1. Situations are constantly evaluated as good or bad, requiring escape or permitting approach
      2. Intensity Matching
        1. An underlying scale of intensity allows matching across diverse dimensions
      3. Mental Shotgun
        1. We often compute much more than is needed
        2. An intention to answer one question evokes others, which can be not just superfluous but actually detrimental to the main task
      4. Substitution
        1. If a satisfactory answer to a hard question is not found quickly, S1 will find a related question that is easier and answer it instead
          1. The easier questions are result of the mental shotgun
          2. The answer needs to be fitted to the original question, which is resolved with intensity matching
  3. Part 2: why is it so difficult to think statistically?
    1. Law of small numbers
      1. Law of large numbers: the results of large samples deserve more trust than smaller samples
        1. Extreme outcomes (both high and low) are more likely to be found in small samples
      2. We’re prone to exaggerate the consistency and coherence of what we see
        1. There’s a bias to believe that small samples closely resemble the population from which they are drawn
        2. We’re pattern seekers, believers in a coherent world in which regularities appear not by accident but as a result of causality or intention
          1. Random processes produce many sequences that convince people that the process is not random after all
          2. We are far too willing to reject the belief that much of what we see in life is random
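The small-sample point above is easy to check with a quick simulation. A minimal sketch, assuming the classic hospital setup (each birth is an independent fair coin flip; the hospital sizes and the 60% "extreme day" threshold are illustrative choices, not from the notes):

```python
import random

random.seed(0)

def extreme_day_rate(n_births, trials=10_000, threshold=0.6):
    """Fraction of simulated days on which more than `threshold`
    of the births are boys, with each birth a fair coin flip."""
    extreme = 0
    for _ in range(trials):
        boys = sum(random.random() < 0.5 for _ in range(n_births))
        if boys / n_births > threshold:
            extreme += 1
    return extreme / trials

small = extreme_day_rate(15)   # small hospital: extreme days are common
large = extreme_day_rate(45)   # large hospital: extreme days are rarer
print(small > large)  # True: extreme outcomes show up more in small samples
```

The same random process drives both hospitals; only the sample size differs, which is exactly why reading a causal story into the small hospital's "streaks" is a mistake.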
    2. Anchoring
      1. Anchoring occurs when people consider a particular value for an unknown quantity before estimating that quantity
        1. Any number that you’re asked to consider as a possible solution to an estimation problem will induce anchoring
        2. Anchoring effects are threatening in a way similar to priming, except that you are always aware of the anchor’s presence; you just don’t know how it guides your thinking
      2. 2 mechanisms produce anchoring
        1. Adjustment (S2)
          1. Start from the anchor, assess whether it’s too high/low, then gradually adjust the estimate by mentally moving away from the anchor
          2. You’re likely to stop when you’re no longer sure you should go further: that’s the edge of uncertainty
          3. Adjustment is an effortful operation
        2. Priming (S1)
          1. Anchoring is a case of suggestion, which is a form of priming
          2. S1 tries its best to construct a world where the anchor is the true number
      3. A strategy of deliberately “thinking the opposite” may be a good defence against anchoring
        1. You should assume that any number already on the table is having an anchoring effect, and mobilize S2 if the stakes are high
    3. Availability
      1. Availability heuristic: the process of judging frequency by the ease with which instances come to mind
        1. Instances of a class will be retrieved from memory. If retrieval is easy and fluent, the category will be judged to be large
          1. Connects to “affect heuristic”
          2. Our expectations about frequency are distorted by the prevalence and emotional intensity of the messages to which we are exposed
          3. Makes the world much tidier than reality, making decisions easier
          4. Good things have few costs
          5. Bad things have no benefits
        2. It can involve both S1 and S2
          1. The ease with which instances come to mind is a S1 heuristic
          2. S1 has the ability to set expectations and to be surprised when those expectations are violated
          3. S1 then retrieves possible causes for the surprise, usually amongst recent surprises
          4. It’s another example of substitution: you want to estimate the size of a category, but you report an impression of the ease with which instances come to mind
          5. S2 can reset the expectations of S1 on the fly, making the surprise look almost normal
        3. Can be disrupted when the experience of fluency is given some other spurious explanation (like irrelevant factors)
      2. Availability Cascade
        1. Self-sustaining chain of events where a relatively minor event leads to large-scale action
        2. Shows a basic limitation in our minds to deal with small risks: we either ignore them, or give them far too much weight
      3. Probability neglect
        1. The amount of concern is not adequately sensitive to the probability of harm
        2. You’re imagining the numerator while not thinking of the denominator
      4. Probability neglect and availability cascades inevitably lead to gross exaggeration of minor threats, sometimes with important consequences
    4. Representativeness
      1. Representativeness: similarity to a stereotype
        1. Stereotypes are statements about a group that are (at least tentatively) accepted as facts about every member
      2. A question about probabilities or likelihood activates a mental shotgun, evoking answers to easier questions. One of those is an automatic assessment of representativeness
        1. Often, the intuitive impressions are accurate, but when the stereotype doesn’t apply, it leads to a misleading answer
          1. Excessive willingness to predict the occurrence of unlikely (low base-rate) events
          2. Insensitivity to the quality of the evidence (WYSIATI)
        2. Both systems should be indicted
          1. S1 suggested an incorrect intuition, and S2 endorsed it
          2. S1 will automatically process information as if it were true
          3. S2 fails to catch it out of laziness or ignorance
          4. You can introduce doubt about the quality of the evidence by letting your judgement stay close to the base rate, but this requires self-monitoring and self-control
        3. Bayesian reasoning can help discipline intuition
          1. The logic of how people should change their minds in the light of evidence rests on 2 ideas
            1. Base rates matter, even in the presence of evidence about the case at hand
            2. Intuitive impressions of the diagnosticity of evidence are often exaggerated. WYSIATI and associative coherence tend to make us believe the stories we spin for ourselves
          2. 2 steps
            1. Anchor your judgement on a plausible base rate
            2. Question the diagnosticity of your evidence
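The two steps amount to one application of Bayes' rule. A minimal sketch using the cab problem that reappears in the causality section (85% Green / 15% Blue; the 80% witness reliability is an assumed figure here):

```python
def posterior(prior, p_e_given_h, p_e_given_not_h):
    """Bayes' rule for a binary hypothesis: combine a base rate
    (the anchor) with the diagnosticity of the evidence."""
    num = prior * p_e_given_h
    return num / (num + (1 - prior) * p_e_given_not_h)

# 15% of cabs are Blue (base rate); a witness says "Blue" and is
# right 80% of the time (so wrong 20% of the time).
p_blue = posterior(prior=0.15, p_e_given_h=0.80, p_e_given_not_h=0.20)
print(round(p_blue, 2))  # 0.41: far below the witness's 0.80 reliability
```

The intuitive answer tracks the witness's reliability and ignores the base rate; the computed posterior shows how much the base rate should pull the estimate back.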
      3. Adding detail sets up a conflict between intuition of representativeness and the logic of probabilities
        1. Conjunction Fallacy: when people judge a conjunction of two events to be more probable than one of the events in direct comparison
          1. More details = better match against the stereotype
          2. When you specify a possible event in greater detail you can only lower its probability.
        2. Adding more detail to a scenario makes it more persuasive, but less likely to come true
          1. The most coherent stories are not necessarily the most probable, but they are plausible, and the notions of coherence, plausibility and probability are easily confused.
        3. Less-is-more pattern
          1. When removing items from a set improves the value of the set
          2. Occurs when the average dominates the evaluation
          3. Evaluating sets of data
            1. Single evaluation: one set shown at a time
              1. Produces less-is-more: behaviour reflects intuition
            2. Joint evaluation: all sets shown at the same time, allowing comparison
              1. Eliminates less-is-more in some cases, but not always: logic prevails in the absence of competing intuition
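The conjunction rule behind the fallacy is plain probability arithmetic. A sketch with illustrative numbers (none come from the notes; the bank-teller framing is the book's classic example):

```python
# Illustrative numbers: however likely the single event is, the
# conjunction "A and B" can never be more probable than "A" alone.
p_a = 0.05             # P(A): e.g. "is a bank teller" (assumed)
p_b_given_a = 0.30     # P(B|A): e.g. "is also an activist" (assumed)
p_a_and_b = p_a * p_b_given_a

assert p_a_and_b <= p_a  # adding detail can only lower probability
print(round(p_a_and_b, 3))  # 0.015
```

Since P(A and B) = P(A) x P(B|A) and P(B|A) <= 1, the detailed scenario is never the more probable one, no matter how much more representative it feels.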
    5. Causality
      1. Base rates can be expressed in two forms and used differently
        1. Statistical base rates are facts about the population to which a case belongs: “85% of the cabs are Green”
          1. Generally underweighted, and often neglected altogether, when specific information about the case at hand is available
        2. Causal base rates indicate how the individual case came to be: “85% of accidents are caused by Green cabs”
          1. Treated as information about the individual case, and easily combined with other case-specific information
          2. They have the form of a stereotype, which is how S1 thinks about categories
      2. People’s unwillingness to deduce the particular from the general was matched only by their willingness to infer the general from the particular
        1. When presented with surprising statistical facts (general), people managed to learn nothing at all
          1. People may be impressed, but their understanding of the world hasn’t changed
        2. When they were surprised by individual cases (the particular), they immediately inferred the general case
          1. Statistical results with a causal interpretation have a stronger effect on our thinking than non-causal information
          2. Surprising individual cases have a powerful impact and are a more effective tool [for changing beliefs]
    6. Regression to the Mean
      1. “The mean filial regression towards mediocrity was directly proportional to the parental deviation from it”
        1. The more extreme the original score, the more regression we should expect
        2. Regression occurs when the correlation between two measures is less than perfect
          1. The correlation coefficient between two measures is a measure of the relative weight of the factors they share
          2. Correlation and regression are different perspectives on the same concept
      2. Regression effects can be found wherever we look, but we don’t recognize them for what they are
        1. Regression does not have a causal explanation, just the result of random fluctuations, but we attach causal interpretations to it
        2. When attention is called to an event, associative memory will look for its cause, spreading to any cause already stored in memory.
        3. S2 finds regression difficult to understand and learn because of the insistent demands from S1 of causal interpretations.
          1. We will not learn to understand [identify] regression from experience [because a causal interpretation will be more appealing]
      3. Use control groups to detect regression
        1. Extreme groups regress to the mean over time
        2. You must compare a group that received a change to another that didn’t.
        3. The control group is expected to change by regression alone. The aim of the experiment is to verify that the other group changed more than regression can explain
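Regression falls out of any score that mixes stable skill with day-to-day luck; no causal story is needed. A minimal simulation (the group sizes and noise levels are illustrative assumptions):

```python
import random

random.seed(0)
n = 10_000
# Each observed score = stable skill + independent day-to-day luck.
skill = [random.gauss(0, 1) for _ in range(n)]
day1 = [s + random.gauss(0, 1) for s in skill]
day2 = [s + random.gauss(0, 1) for s in skill]

# Take the 500 best performers on day 1 and re-measure them on day 2.
top = sorted(range(n), key=day1.__getitem__, reverse=True)[:500]
mean1 = sum(day1[i] for i in top) / len(top)
mean2 = sum(day2[i] for i in top) / len(top)
print(mean2 < mean1)  # True: the extreme group falls back towards the mean
```

The day-2 drop happens with no intervention at all, which is why an untreated control group is needed before crediting any treatment with the change.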
      4. When making predictions, people perform substitution and ignore regression to the mean
        1. A prediction of the future is not distinguished from an evaluation of current evidence, leading to non-regressive predictions [too extreme: too weak or too strong]
          1. The predictions are as extreme as the evidence
          2. Intuitive predictions tend to be too extreme and you’ll be inclined to put too much faith in them (overconfidence)
        2. 4 steps to produce an unbiased prediction
          1. Estimate the average outcome (or base rate)
          2. Make the intuitive prediction that matches your evaluation of the evidence
          3. Estimate the correlation between your evidence and the outcome
          4. Move from the baseline towards your intuition, in proportion to that correlation
          5. The most valuable contribution of this procedure is that it requires you to think about how much you know
        3. Correcting your intuition may complicate your life
          1. You will never guess outcomes that are rare or far from the mean
          2. You will never have the satisfying experience of correctly calling an extreme case
          3. Unbiased predictions are justified if all errors of prediction are treated alike, regardless of their direction. But there are situations where some errors are preferable to others.
          4. If you choose to delude yourself by accepting extreme predictions, remain aware of your self-indulgence
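The 4 steps reduce to one line of arithmetic. A sketch with hypothetical numbers (the GPA-style base rate, intuitive estimate, and correlation are all assumed for illustration):

```python
def regressive_prediction(base_rate, intuitive_estimate, correlation):
    """Move from the baseline towards the intuitive estimate, in
    proportion to the estimated correlation (0 = pure base rate,
    1 = trust the evidence completely)."""
    return base_rate + correlation * (intuitive_estimate - base_rate)

# Average GPA is 3.0; the evidence suggests 3.8; you estimate the
# evidence-outcome correlation at 0.3.
print(round(regressive_prediction(3.0, 3.8, 0.3), 2))  # 3.24
```

At correlation 0 the prediction is the base rate; at correlation 1 it equals the evidence; everything in between is the regressive compromise the notes describe.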
  4. Part 3: Overconfidence
    1. Understanding
      1. Narrative Fallacies (Taleb)
        1. Flawed stories of the past shape our views of the world and our expectations of the future
        2. Any recent salient event is a candidate to become the kernel of a causal narrative
        3. The halo effect helps keep narratives coherent and simple, by exaggerating the consistencies of evaluations
        4. WYSIATI
          1. A compelling narrative fosters an illusion of inevitability
          2. The mind doesn’t deal well with non-events: we form a narrative from the events that happened, but don’t consider the events that could have happened and changed the outcome, but didn’t
          3. We underestimate the role of luck
          4. The more luck involved, the less there is to be learnt from it
          5. It’s easier to construct a coherent story when you know little
        5. Our comforting conviction that the world makes sense rests on a secure foundation: our almost unlimited ability to ignore our ignorance
      2. Hindsight bias
        1. We understand the past less than we believe we do
        2. One of the limitations of the mind is its imperfect ability to reconstruct past states of knowledge, or beliefs that have changed
          1. Once you adopt a new view of the world, you immediately lose your ability to recall what you used to believe before your mind changed
          2. Your inability to reconstruct past beliefs will inevitably cause you to underestimate the extent to which you were surprised by past events
          3. We revise the history of one’s beliefs in light of what actually happened
        3. It leads observers to assess the quality of decisions not by whether the process was sound but by whether the outcome was good or bad
          1. Actions that seemed prudent in foresight can look irresponsibly negligent in hindsight
          2. Decision makers can be driven to bureaucratic solutions and extreme risk aversion
          3. It can also bring undeserved reward to irresponsible risk seekers
    2. Confidence
      1. We know as a general fact that our predictions are little better than random guesses, but we continue to feel and act as if they were valid
        1. Our knowledge of the general rule has no effect on our confidence in individual cases
      2. Illusion of skill is another form of illusion of validity
        1. People can stubbornly believe in their skill contrary to a theory they accept and to what they could learn from a dispassionate evaluation of their own personal experience
          1. Because it’s a feeling, not a judgement
          2. Facts that challenge basic assumptions (and thereby threaten people’s livelihoods and self-esteem) are simply not absorbed
      3. Confidence is a feeling, which reflects the coherence of the information and the cognitive ease of processing it
        1. Subjective confidence in a judgement is not a reasoned evaluation of the probability that this judgement is correct.
        2. Declarations of high confidence mainly tell you that an individual has constructed a coherent story in his mind, not necessarily that the story is true
          1. High subjective confidence is not to be trusted as an indicator of accuracy
          2. Low confidence can be more informative
        3. Cognitive ease and associative coherence place subjective confidence firmly in S1
      4. People can maintain an unshakeable faith in any proposition, however absurd, when they are sustained by a community of like-minded believers
      5. The illusion that we understand the past foster overconfidence in our ability to predict the future
        1. Our tendency to construct and believe coherent narratives of the past makes it difficult for us to accept the limits of our forecasting abilities
          1. Everything makes sense in hindsight
          2. We can’t suppress the powerful intuition that what makes sense in hindsight today was predictable yesterday
        2. Even in the region they know best, experts were not significantly better than non-specialists
          1. Those with most knowledge are often less reliable
          2. The expert develops an enhanced illusion of their skills and becomes unrealistically overconfident
          3. Experts are led astray not by what they believe, but by how they think
        3. Errors of prediction are inevitable because the world is unpredictable
    3. Intuition
      1. Intuition is nothing more and nothing less than recognition
        1. Recognition-primed decision model
          1. A tentative plan comes to mind by an automatic function of associative memory (S1)
          2. A deliberate process follows in which the plan is mentally simulated to check it will work (S2)
        2. The mystery of “knowing without knowing” is not a distinctive feature of intuition; it’s the norm of mental life
      2. Expert intuition
        1. The confidence that people have in their own intuitions is not a reliable guide to their validity
          1. True experts know the limits of their knowledge. Pseudo-experts have no idea they don’t know (illusion of validity)
          2. Confidence in a belief comes from cognitive ease and coherence
          3. The associative machinery is set up to suppress doubt and to evoke ideas and information that are compatible with the current story
          4. Do not trust anyone (including yourself) to tell you how much you should trust their judgement
        2. Judgement reflects expertise when the conditions for acquiring a skill are also satisfied
          1. 2 conditions
            1. An environment that is sufficiently regular to be predictable
              1. Intuition cannot be trusted in the absence of regularities in the environment
              2. In “wicked” environments (worse than merely irregular) people are likely to learn the wrong lessons from experience
            2. An opportunity to learn these regularities through prolonged practice
              1. Developing expertise depends essentially on the quality and speed of feedback, as well as on sufficient opportunity to practice
          2. If the 2 conditions are satisfied, the associative machinery will recognize situations and generate quick and adequate predictions and decisions. You can then trust intuition.
      3. Intuition vs. Formulas
        1. Low-validity environments: domains with a significant degree of uncertainty
        2. In Low-validity environments, the accuracy of experts was matched or exceeded by simple algorithms
          1. Experts try to be clever, think outside the box, and consider complex combinations in their predictions
          2. Humans are inconsistent in making judgements of complex information
          3. Probably due to extreme context dependence of S1 (priming)
          4. You will never know that you would have reached different conclusions under slightly different circumstances
        3. To maximize predictive accuracy, final decisions should be left to formulas, especially in low-validity environments.
          1. The aversion to algorithms making decisions that affect humans is rooted in strong preference for the natural over the synthetic
          2. The prejudice against algorithms is magnified when the decisions are consequential
        4. Intuition adds value, but only after disciplined collection of objective information and scoring of separate traits
          1. Select a few traits (6 is a good number), as independent as possible from each other
          2. Make a list of factual questions to assess each trait, scored on a 1-5 scale
          3. Collect information about each trait, one at a time, scoring before you move to the next (to mitigate the halo effect)
          4. Add up all the scores
          5. Firmly resolve to select the option with the highest score, even if there’s another you like better
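The scoring procedure above can be sketched as a tiny routine; the trait names and the candidate's scores are hypothetical:

```python
# Six reasonably independent traits (hypothetical choices).
TRAITS = ("technical skill", "reliability", "communication",
          "initiative", "composure", "punctuality")

def total_score(scores):
    """Sum per-trait scores, each assessed one at a time on a 1-5
    scale from factual questions (to keep the halo effect in check)."""
    assert set(scores) == set(TRAITS), "score every trait, no extras"
    assert all(1 <= s <= 5 for s in scores.values()), "1-5 scale"
    return sum(scores.values())

candidate = dict(zip(TRAITS, (4, 5, 3, 4, 2, 5)))
print(total_score(candidate))  # 23
```

The discipline is all in the process, not the code: one trait at a time, then a firm commitment to the highest total rather than to the candidate you happen to like.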
      4. In low-validity environments intuition will still be present, but it may be invalid
        1. S1 is often able to produce quick answers to difficult questions by substitution, creating coherence where there’s none
        2. The question that is answered is not the one that was intended, but the answer comes quickly and may be sufficiently plausible to pass the lax and lenient review of S2
        3. If it’s the only answer that comes to mind, it may be subjectively indistinguishable from valid judgements that you make with expert confidence
    4. Prediction
      1. Inside view
        1. Focuses on the specific circumstances and searches for evidence in one’s own experience
        2. Forecasting based on the evidence in front of you (WYSIATI)
      2. Outside view
        1. Starts by directing attention away from the situation at hand, and towards a class of similar cases
          1. Baseline prediction: the prediction you make about a case if you know nothing except the category to which it belongs
        2. The baseline prediction is then used as an anchor for future adjustments
          1. If the reference class is properly chosen, it will give you an indication of where the ballpark is
        3. Reference class forecasting is an implementation of the outside view
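Reference-class forecasting can be sketched in a few lines; the reference durations and the (deliberately small) case adjustment are hypothetical:

```python
def reference_class_forecast(reference_outcomes, case_adjustment=0.0):
    """Baseline prediction = statistics of the reference class; the
    case-specific adjustment away from that anchor should be small
    and explicitly justified."""
    baseline = sum(reference_outcomes) / len(reference_outcomes)
    return baseline + case_adjustment

# Hypothetical data: similar past projects took 7-10 years,
# however confident the inside-view plan feels.
years = reference_class_forecast([7, 8, 9, 10])
print(years)  # 8.5
```

The point of the exercise is the anchor: the forecast starts from what happened to the class of similar cases, and the inside view is only allowed to nudge it.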
      3. Common pattern
        1. People who have information about an individual case rarely feel the need to know the statistics of the class to which the case belongs
        2. When eventually exposed to the outside view, it’s ignored
          1. “Pallid” statistical information is routinely discarded when it’s incompatible with one’s personal impressions of the case
        3. The result is the Planning Fallacy
          1. Plans and forecasts that are unrealistically close to the best-case scenario
          2. Could be improved by consulting the statistics of similar cases
        4. Irrational perseverance follows: failing to abandon a project affected by the planning fallacy
          1. Facing a choice, we give up rationality rather than give up the project
    5. Optimism
      1. Optimistic individuals play a disproportionate role in shaping our lives
        1. They got where they are by seeking challenges and taking risks
        2. They are talented but almost certainly luckier than they acknowledge
        3. Their experiences of success have confirmed their faith in their judgement and in their ability to control events
          1. Overconfidence
      2. An optimistic temperament encourages persistence in the face of obstacles, but that can be costly
      3. Risk taking, optimistic entrepreneurs contribute to the economic dynamism of capitalist society, even if most risk takers end up disappointed
      4. Above-average effect
        1. People tend to be overly optimistic about their relative standing in any activity in which they do moderately well
      5. Competition neglect
        1. People think their fate is entirely in their own hands
        2. They know less about their competitors and find it natural to imagine a future in which competitors play only a small part
        3. The consequence is excess entry: more competitors enter the market than it can sustain, so the average outcome is a loss
          1. The effect on the economy as a whole could still be positive
          2. Optimistic martyrs
      6. The main benefit of optimism is resilience in the face of setbacks
        1. Optimism is essential to success in the face of repeated multiple small failures
    6. Overconfidence
      1. Overconfidence is another manifestation of WYSIATI
        1. When we estimate a quantity, we rely on the information that comes to mind and construct a coherent story in which the estimate makes sense
        2. Allowing for information that does not come to mind, perhaps because one never knew it, is impossible
      2. A wide confidence interval is a confession of ignorance, which is not socially acceptable
        1. An unbiased appreciation of uncertainty is a cornerstone of rationality, but it’s not what people and organizations want
      3. Overconfidence is a direct consequence of features of S1, which can be tamed but not vanquished.
        1. You can’t really overcome overconfidence by training people
        2. Premortems can be a remedy
          1. When an organization has almost come to an important decision, but not yet formally committed to it
          2. Imagine that it’s 1 year in the future, the plan was implemented as it now exists, and the outcome was a complete disaster
          3. Write a brief story of that disaster
          4. Overcomes groupthink and unleashes imagination
          5. Legitimizes doubt, preventing it being seen as lack of loyalty to the team or the leader
  5. Part 4: Choice
    1. Econs & Rational Agent Model
      1. The agent of economic theory is rational and selfish, and its tastes don’t change over time
      2. Expected Utility Theory: the logic of how decisions are made by Econs
        1. Utility: the psychological value or desirability of money
        2. Gambles are assessed by their expected value (the probability-weighted average of the possible outcomes)
          1. The psychological value of a gamble is the probability-weighted average of the utilities of its outcomes (Bernoulli)
          2. The diminishing marginal value of wealth is what explains risk aversion
      3. The utility of a gain is assessed by comparing the utility of two states of wealth.
        1. The distinction between gains and losses was not considered to be relevant
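A minimal sketch of the contrast between expected value and Bernoulli's expected utility, using logarithmic utility (an illustrative choice of concave function, not one prescribed by the text) to show how diminishing marginal value of wealth produces risk aversion:

```python
import math

def expected_value(outcomes):
    """Probability-weighted average of monetary outcomes."""
    return sum(p * x for p, x in outcomes)

def expected_utility(outcomes, wealth, u=math.log):
    """Probability-weighted average of the utility of final wealth states."""
    return sum(p * u(wealth + x) for p, x in outcomes)

wealth = 100_000
gamble = [(0.5, 0), (0.5, 20_000)]   # 50% chance to win 20,000
sure_thing = [(1.0, 10_000)]         # sure gain of 10,000

# Both options have the same expected value...
assert expected_value(gamble) == expected_value(sure_thing) == 10_000

# ...but under concave (log) utility the sure thing is preferred: risk aversion.
print(expected_utility(sure_thing, wealth) > expected_utility(gamble, wealth))  # True
```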
    2. Prospect Theory
      1. Explains systematic violations of axioms of rationality in choices between gambles
      2. 3 cognitive features of S1
        1. Evaluation relative to a neutral reference point
          1. The earlier state relative to which gains and losses are evaluated
        2. Principle of diminishing sensitivity: the subjective difference between two states of wealth diminishes as wealth increases
        3. Principle of loss aversion: losses loom larger than gains
          1. Loss aversion tends to increase when the stakes rise.
      3. 2 fundamental insights
        1. When both gain and loss are possible, loss aversion causes extreme risk-averse choices
        2. When options are all bad (sure loss is compared to larger probable loss), diminishing sensitivity causes risk-seeking choices
      4. Blind spots
        1. Prospect theory can’t cope with disappointment
        2. Prospect theory fails to allow for regret
    3. Asymmetry between losses and gains challenges “rationality”
      1. Endowment effect
        1. Tastes are not fixed, they vary with the reference point
        2. The disadvantages of a change loom larger than its advantages, inducing bias for the status quo
        3. Owning a good appears to increase its value
          1. When we own an item, we consider the pain of giving it up
          2. When we don’t own the item, we consider the pleasure of getting it
          3. The two values are unequal because of loss aversion
          4. The response to a loss is stronger than the response corresponding to the gain
        4. It’s not universal
          1. There’s no loss aversion on either side of a commercial transaction (goods held for exchange)
          2. It’s more prevalent in goods held for use or enjoyment
          3. The endowment effect eventually disappears when goods are carriers of value for future exchange (it fades with trading experience)
          4. You don’t expect to find the endowment effect amongst the poor: all their choices are between losses, and small amounts spent are perceived as a reduced loss, not a gain
      2. Negativity dominance
        1. The brain contains mechanisms designed to give priority to bad news, to ensure survival
          1. Bad events are considered by S1 as threatening
        2. The self is more motivated to avoid bad self-definitions than to pursue good ones
        3. The boundary between good and bad is a reference point that changes over time and depends on the immediate circumstances
      3. Negotiation favours the status quo
        1. The existing terms define the reference point
        2. Loss aversion creates an asymmetry that makes agreement difficult to reach
          1. Your concessions are my gains and your losses, causing you more pain than they give me pleasure
        3. Negotiations over a shrinking pie are especially difficult because they require allocation of losses
        4. Most of the messages exchanged are attempts to communicate a reference point and provide an anchor to the other side
      4. Goals in the future are reference points
        1. Not achieving it is a loss, exceeding it is a gain
        2. Aversion to failure of not reaching it is much stronger than the desire to exceed it
      5. Perception of fairness
        1. Attitudes toward fairness challenge the view that economic behaviour is ruled by self-interest and that concerns for fairness are largely irrelevant
        2. The existing price sets a reference point, which has the nature of an entitlement that must not be infringed
          1. It’s considered unfair to impose a loss on others, unless it is to protect our own entitlement
          2. When facing a loss, we’re allowed to transfer the loss to others
          3. Imposing losses on others to increase profits is considered unfair
        3. Exploitation of market power to impose losses is considered unfair
        4. Strangers who observe unfairness often join in punishment (altruistic punishment)
          1. We don’t reward generosity as reliably as we punish meanness
          2. Altruistic punishment is connected to pleasure centres of the brain
    4. Fourfold Pattern
      1. When evaluating an object, S1 assigns weights to its various characteristics
        1. Rational behaviour would be to weight each outcome by its probability (the expectation principle)
          1. It does not correctly explain how you think about risky prospects
        2. Probabilities are actually assigned differently based on uncertainty
          1. Improbable outcomes are overweighted (possibility effect)
          2. Weighted disproportionately more than they deserve
          3. We tend to overweight small risks and are willing to pay more than expected to eliminate them altogether
          4. Almost certain outcomes are underweighted (certainty effect)
          5. Given less weight than their probability justifies
        3. The asymmetry created by possibility and certainty effect means that not all probabilities in the 0-100 range are treated uniformly
          1. The certainty effect is more striking than the possibility effect if the outcome is a disaster rather than a gain
          2. It leads to inadequate sensitivity to intermediate probabilities (probabilities of 5-95% map to decision weights of roughly 13-79)
          3. Probabilities that are extremely low or high (below 1% or above 99%) are special
          4. Rare events are either ignored altogether or given much more weight than they deserve
          5. We’re completely insensitive to variations of risk amongst small probabilities
        4. When paying attention to a threat, the decision weight reflects how much you worry about it
          1. Worry is not proportional to the probability of the threat
          2. Reducing or mitigating the risk is not adequate; to eliminate the worry the probability must be brought down to zero
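The distortion of decision weights described above can be illustrated with the probability-weighting function Tversky and Kahneman estimated for gains in their 1992 cumulative prospect theory paper; the functional form and the parameter 0.61 come from that paper, not from this text:

```python
# Tversky-Kahneman (1992) probability-weighting function for gains:
# w(p) = p^g / (p^g + (1 - p)^g)^(1/g), with their estimate g = 0.61.

def decision_weight(p, gamma=0.61):
    """Decision weight attached to an outcome of stated probability p."""
    return p**gamma / (p**gamma + (1 - p)**gamma) ** (1 / gamma)

for p in (0.01, 0.05, 0.50, 0.95, 0.99):
    print(f"p = {p:.2f} -> weight = {decision_weight(p):.3f}")

# Small probabilities are overweighted (0.05 -> ~0.132: possibility effect),
# near-certainties are underweighted (0.95 -> ~0.793: certainty effect),
# reproducing the 5-95% -> 13-79 mapping noted above.
```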
      2. The Fourfold Pattern explains the attitudes in combinations between Certainty/Possibility effects and Gains/Losses
        1. Certainty Effect/Gain
          1. Fear of disappointment
          2. People are risk averse
          3. Willingness to accept less to lock in a sure gain
        2. Possibility Effect/Gain
          1. Hope of large gain
          2. People are risk seeking
          3. Lottery: buying the right to dream
          4. If the gain is large, we’re indifferent to the low chance of winning
        3. Possibility Effect/Loss
          1. Fear of large loss
          2. People are risk averse
          3. Buying insurance: peace of mind
        4. Certainty Effect/Loss
          1. Hope to avoid loss
          2. Desperate gambles: accepting the risk of making things worse in exchange for a small hope of avoiding the loss
          3. The thought of accepting a large loss is too painful, and the hope of complete relief too enticing, to make the “rational” (sensible) decision of cutting one’s losses
          4. People are risk seeking
          5. Diminishing sensitivity makes the sure loss more aversive than the larger loss
          6. The weight of a high-probability loss is lower than the actual probability (underweighted by the certainty effect), reducing the aversiveness of the gamble
    5. Rare Events
      1. The actual probability (of a rare event) is inconsequential: only possibility matters
      2. Overweighting is rooted in S1 features: emotion and vividness influence fluency, availability, and judgements of probability
        1. Focused attention
          1. Unlikely events become focal
          2. If the event is very likely, you focus on its alternative
          3. We focus on what’s odd, different, unusual
          4. The probability of a rare event is likely to be overestimated when the alternative is not fully specified
          5. Vividness
          6. More vivid descriptions produce a higher decision weight for the same probability
          7. Denominator neglect: directing attention to the winning marbles (numerator) neglects the number of non-winning marbles and the total (denominator)
          8. Explains why different ways of communicating risks vary so much in their effects
          9. Low-probability events are much more heavily weighted when described in terms of relative frequency (how many) than when stated in terms of chances, risks or probability (how likely)
          10. Salience is enhanced by mere mention of an event (attracting attention)
          11. Choice by description (as opposed to choice by experience) yields possibility effect (overweighting of rare outcomes)
        2. Confirmation bias
          1. Thinking about an event makes you try to make it true in your mind
        3. Cognitive ease
    6. Risk Policies
      1. Every single choice formulated in terms of gains and losses can be deconstructed in innumerable ways into a combination of choices, yielding preferences that are likely to be inconsistent
        1. Narrow framing: a sequence of simple decisions, considered separately
          1. The typical short-term reaction to bad news is increased loss aversion
        2. Broad framing: a single, comprehensive decision, with aggregated options
          1. Aggregation of gambles reduces the probability of losing, and with it loss aversion
          2. When the gambles are genuinely independent of each other
          3. When the possible loss does not cause you to worry about your wealth
          4. When it’s not about long-shots (very small probability of winning)
        3. Broad framing will be superior (or at least not inferior) in every case in which decisions are to be contemplated together
          1. A rational agent will of course engage in broad framing, but Humans are by nature narrow framers
          2. We tend to make decisions as problems arise, even when we are specifically instructed to consider them jointly
      2. Decision makers who are prone to narrow framing construct a preference every time they face a risky choice. They would do better by having a risk policy that they routinely apply whenever a relevant problem arises
        1. Eliminate the pain of an occasional loss with the thought that the policy that left you exposed to it will be advantageous over the long run
        2. A risk policy is a broad frame that embeds a particular risky choice in a set of similar choices
          1. Example: don’t buy extended insurance for electronics
        3. It’s analogous to the outside view
        4. Count on statistical aggregation to mitigate the overall risk
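The aggregation argument can be checked directly. For a favourable gamble (here 50% win 200 / 50% lose 100, the coin-flip gamble Samuelson offered a colleague), the exact probability of ending with a net loss shrinks as independent plays are bundled into one broad frame:

```python
from math import comb

def p_net_loss(n, win=200, lose=100, p_win=0.5):
    """Exact probability that n independent plays end in an overall net loss."""
    total = 0.0
    for k in range(n + 1):                      # k = number of winning plays
        net = k * win - (n - k) * lose
        if net < 0:
            total += comb(n, k) * p_win**k * (1 - p_win)**(n - k)
    return total

for n in (1, 10, 100):
    print(f"{n:3d} plays -> P(net loss) = {p_net_loss(n):.4f}")
```

One play loses half the time; over 10 plays the chance of a net loss is already below 18%, and over 100 it is negligible, which is why the broad frame blunts loss aversion.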
    7. Keeping Score
      1. We use “mental accounts” to keep score of various events
        1. Econs don’t use them: they have a comprehensive view of outcomes. For Humans, it’s a form of narrow framing, keeping things manageable by a finite mind
        2. We attach different emotional states to the “balance” of different mental accounts. The calculations of “emotional balance” is a S1 function
          1. Sunk-cost Fallacy: the decision to invest additional resources in a losing account, when better investments are available
      2. Regret
        1. An abnormal event attracts attention, and it also activates the idea of the event that would have been “normal” under the same circumstances
          1. Departures from the default option produce regret
        2. Decision makers know they are prone to regret, and the anticipation of that painful emotion plays a part in the decision
          1. The asymmetry in the risk of regret favours conventional and risk averse choices
          2. The reluctance to “sell” important endowments increases dramatically when doing so makes you responsible for an awful outcome
          3. This attitude is incoherent and potentially damaging
          4. The resistance may be motivated by selfish fear of regret more than the wish to optimize the outcome
          5. The “what if” thought is an image of regret and shame
          6. We spend much of our day anticipating, and trying to avoid, the emotional pain we inflict on ourselves
          7. This leads to actions that are detrimental to the wealth of individuals, to the soundness of policy, and to the welfare of society.
          8. People generally anticipate more regret than they actually experience
          9. If you can remember when things go badly that you considered the possibility of regret carefully before deciding, you’re likely to experience less of it
          10. It will hurt less than you think
    8. Reversals
      1. We normally experience life in the “between subjects” mode, in which contrasting alternatives that might change your mind are absent
        1. The moral intuitions that come to your mind in different situations aren’t internally consistent
      2. Preference reversal occurs because joint evaluation focuses attention on an aspect of the situation that is less salient in single evaluation
        1. The emotional reactions of S1 are likely to determine single evaluation
        2. Joint evaluation requires careful comparison and effortful assessment, which calls for S2
        3. Rationality is generally served by broader and more comprehensive frames, and joint evaluation is obviously broader than single evaluation
          1. The legal system, contrary to psychological common sense, favours single evaluation
          2. You should be wary of joint evaluation when someone who controls what you see has a vested interest in what you choose
      3. We break down the world into categories for which we have norms. Judgements and preferences are coherent within categories, but potentially incoherent when the objects evaluated belong to different categories
    9. Frames & Reality
      1. Framing effect: the unjustified influence of formulation on beliefs and preferences
        1. Losses evoke stronger negative emotions than costs
        2. Example: cash discount vs credit surcharge
        3. Important choices are controlled by utterly inconsequential features of the situation
      2. Most of us passively accept decision problems as they are framed, and therefore rarely have the opportunity to discover the extent to which our preferences are frame-bound rather than reality-bound
        1. Reframing is effortful and S2 is normally lazy
        2. Choices are not reality-bound because S1 is not reality-bound
      3. Preferences between the same objective outcomes reverse with different formulations
        1. You’re more likely to choose the sure thing in the KEEP frame, and more likely to gamble in the LOSE frame
        2. Choices between gambles and sure things are resolved differently, depending on whether the outcomes are good or bad
          1. We tend to prefer the sure thing over the gamble (risk averse) when the outcomes are good. We tend to reject the sure thing and accept the gamble (risk seeking) when the outcomes are negative
      4. Your moral feelings are attached to frames (descriptions of reality) rather than to reality itself.
        1. Framing should not be viewed as an intervention that masks or distorts your underlying preference
        2. Our preferences are about framed problems, and our moral intuitions are about descriptions, not substance
      5. Some frames are clearly better than alternative ways to describe (or think about) the same thing
        1. Different frames evoke different mental accounts, and the significance of the loss depends on the account to which it is posted
          1. Broader frames and inclusive accounts generally lead to more rational decisions
  6. Part 5: Two Selves
    1. Experiencing Self: does the living
      1. The experiencing self doesn’t have a voice
      2. It sees life as a series of moments, each with some value
    2. Remembering Self: does the remembering
      1. It composes stories and keeps them for future reference
        1. Tourism is about helping people construct stories and collect memories
          1. It’s the remembering self that chooses vacations
          2. Picture taking is not about a moment to be savoured but a future memory being designed
      2. Memories are all we get to keep from our experience of living and the only perspective that we can adopt as we think about our lives is therefore that of the remembering self
        1. The memory that is kept [for an event] is a representative moment
          1. Memory is a S1 function, and S1 represents sets by averages, norms and prototypes, not by sums
        2. Two factors affect our memories
          1. Peak-end rule: we remember the average of the peak moment and the final moment
          2. Duration neglect: duration has no effect on the total rating
          3. What matters when we assess longer episodes is the progression of deterioration or improvement of the ongoing experience, and how the person feels at the end
          4. The combination of these two factors can lead to absurd choices, result of distorted reflections of actual experience
          5. Example: repeat an experience that left the better memory rather than one that caused less pain
          6. Causes a bias that favours short periods of intense joy over longer periods of moderate happiness
          7. It makes us fear a short period of intense but tolerable suffering more than we fear a much longer period of moderate pain
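A toy model of the peak-end rule and duration neglect; the scoring function is an illustrative simplification, not a formula from the text:

```python
# Remembered score modeled as the mean of the peak moment and the final
# moment, so total duration and summed experience drop out entirely.

def remembered_score(moments):
    """Peak-end: average of the most intense moment and the last one."""
    return (max(moments, key=abs) + moments[-1]) / 2

# Two painful episodes (higher = more pain):
short_trial = [8, 8]            # short, ends at peak pain
long_trial = [8, 8, 5, 4]       # same start, plus a milder tail

# The longer episode contains strictly more total pain...
print(sum(long_trial) > sum(short_trial))                            # True

# ...yet it is remembered as less bad, because it ends on a milder note.
print(remembered_score(long_trial) < remembered_score(short_trial))  # True
```

This mirrors the absurd choice noted above: people prefer to repeat the episode that left the better memory, not the one with less total pain.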
      3. The remembering self is sometimes wrong, but it’s the one that keeps score and governs what we learn from living, and it’s the one that makes decisions.
        1. Tastes and decisions are shaped by memories, and the memories can be wrong
          1. The outcome: decisions that are not attuned to experience
        2. People choose by memory when deciding to repeat an experience
        3. Responses to global well-being questions should be taken with a grain of salt
          1. The score you assign to your life is determined by a small sample of highly available ideas, not by careful weighting of the domains of your life
          2. Experienced well-being will depend on the environment and activities of the present moment
          3. Focusing illusion: any aspect of life to which attention is directed will loom large in global evaluation
          4. Causes people to be wrong about their present state of well-being as well as about the happiness of others, and about their own happiness in the future
          5. Adaptation to new situations (good and bad) consists in large part of thinking less and less about them
          6. It’s a rich source of “miswanting”: bad choices that result from errors of “affective forecasting” (e.g. expecting a current state of bliss to be permanent)
          7. Example: exaggerate the effect of a significant purchase or changed circumstances (like marriage) in future well-being
          8. It creates a bias in favour of goods and experiences that are initially exciting, even if they eventually lose their appeal. Time is neglected, causing experiences that will retain their attention value in the long term to be appreciated less than they deserve to be
      4. What we learn from the past is to maximize the quality of our future memories, not necessarily of our future experience
      5. I am my remembering self, and the experiencing self, who does my living, is a stranger to me
      6. It’s a construction of S2 but relies on memory, which is a function of S1