Thinking in Bets
by Annie Duke
Added: 3/23/2026
Read: —
Summary
A framework for making smarter decisions by thinking in probabilities rather than certainties.
📖 Currently reading this — notes in progress, more to come!
Notes
"Resulting — judging a decision by its outcome rather than the quality of the process that led to it." — Chapter 1: Life Is Poker, Not Chess
This is the trap every founder falls into. SortedOut got early traction, so I assumed every product decision was genius. But some of those decisions just got lucky with timing. The real question isn't 'did it work?' — it's 'would I make the same call again with the same information?'
"We can't just snap our fingers and change the way our brains operate. But we can commit to reshaping our decision process." — Chapter 1: Life Is Poker, Not Chess
This is basically what I'm building with the AI life coach. You can't rewire your brain overnight, but you can build a system that catches your blind spots — a personal model that remembers your patterns better than you do.
"Being wrong hurts us more than being right feels good. We are wired to protect our egos rather than to learn from our mistakes." — Chapter 2: Wanna Bet?
In ML, we literally measure this with loss functions — the model optimises to minimise being wrong, not maximise being right. Humans do the same thing but without the self-awareness. I've caught myself blaming data quality when a model underperforms instead of questioning my feature engineering choices.
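To make that concrete for myself (not from the book, just my own minimal sketch): a loss function never rewards being right, it only penalises being wrong, and training is just shrinking that penalty.

```python
def squared_error(y_true, y_pred):
    """Mean squared error: exactly zero when perfectly right,
    and it grows with how wrong each prediction is.
    Note the asymmetry: there is no bonus for correctness,
    only a shrinking punishment for error."""
    return sum((t - p) ** 2 for t, p in zip(y_true, y_pred)) / len(y_true)

# Close calls still cost something; only perfection costs nothing.
small_loss = squared_error([1.0, 0.0], [0.9, 0.2])
zero_loss = squared_error([1.0, 0.0], [1.0, 0.0])
```

The human version would be treating every mistake as gradient signal instead of ego damage.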
"Thinking in bets starts with recognising that there are exactly two things that determine how our lives turn out: the quality of our decisions and luck." — Chapter 2: Wanna Bet?
Moving to Australia was a bet. I had maybe 60% confidence it would work out career-wise. But the expected value was massive — new market, permanent residency path, quality of life. That's the framework: even if you're not certain, if the expected value is high enough, you take the bet.
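The arithmetic behind that framework is just a probability-weighted average. A sketch with made-up numbers (the payoffs below are arbitrary "utility points" I invented, not anything Duke gives):

```python
def expected_value(p_success, payoff_success, payoff_failure):
    """EV of a bet: each outcome's payoff weighted by its probability."""
    return p_success * payoff_success + (1 - p_success) * payoff_failure

# Hypothetical framing of the move: 60% it works out (+100),
# 40% it doesn't (-30). EV = 0.6*100 + 0.4*(-30) = +48,
# so the bet is worth taking even with real uncertainty.
ev = expected_value(0.6, 100, -30)
```

The point is that a 60% bet with a big upside and a survivable downside beats a 90% bet on something that barely matters.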
"When we think in bets, we are forced to acknowledge the uncertainty inherent in our beliefs." — Chapter 3: Bet to Learn
Every time I ship a feature for Rinqer, I'm placing a bet. Will restaurant owners actually use AI phone agents? I think 70% yes. But I need to structure the product so that even if I'm wrong about the use case, the underlying tech can pivot. That's risk management through architecture.
"The way we field outcomes determines how quickly we learn and improve." — Chapter 4: The Buddy System
In production ML, we don't just check accuracy — we track precision, recall, drift, latency. Duke is saying we should do the same thing with life decisions. Don't just ask 'did it work?' Ask 'what did I learn about my decision-making process?' That's the 1% better philosophy applied to thinking itself.
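A quick sketch of why one number lies (the counts are made-up examples, not real production data): a classifier can look fine on a single headline metric while one of its component metrics is quietly terrible.

```python
def precision(tp, fp):
    """Of everything flagged positive, how much actually was."""
    return tp / (tp + fp)

def recall(tp, fn):
    """Of everything actually positive, how much got flagged."""
    return tp / (tp + fn)

# 8 true positives, only 2 false positives: precision looks great (0.80).
# But 12 positives were missed entirely: recall is a dismal 0.40.
tp, fp, fn = 8, 2, 12
p, r = precision(tp, fp), recall(tp, fn)
```

Same with life decisions: "it worked" is one metric. "What did I miss, and how fast did I notice?" are the others.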
"A decision group can help us by being the audience we are accountable to." — Chapter 4: The Buddy System
I don't have a formal decision group, but this is exactly what the AI life coach could become — an always-available accountability partner that knows your history, your biases, your patterns. Not to make decisions for you, but to ask the right questions before you commit.
"Hindsight bias is the enemy of good decision-making. Once we know an outcome, we can no longer accurately reconstruct what we believed at the time." — Chapter 5: Dissent to Win
I see this in post-mortems at work. Once a production incident is resolved, everyone 'knew' the root cause all along. But if we'd done a pre-mortem — imagining the system failing before deployment — we'd have caught it. Duke's decision diary idea is basically a git log for your brain.
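Taking the git-log analogy literally, here's a minimal sketch of what a decision diary could look like: an append-only JSON-lines file where you commit your belief and confidence before the outcome exists, so hindsight can't rewrite it. The filename and schema are mine, not Duke's.

```python
import datetime
import json

def log_decision(path, decision, confidence, reasoning):
    """Append one decision record with a timestamp. Append-only by
    design: like a git log, past entries are never edited."""
    entry = {
        "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "decision": decision,
        "confidence": confidence,  # 0.0 to 1.0, your honest prior
        "reasoning": reasoning,
    }
    with open(path, "a") as f:
        f.write(json.dumps(entry) + "\n")
    return entry

# Hypothetical usage: record the bet before shipping, not after.
log_decision(
    "decision_log.jsonl",
    "ship AI phone agent MVP",
    0.7,
    "believe restaurant owners will adopt; pivot path exists if wrong",
)
```

Reviewing old entries against actual outcomes is the pre-mortem's mirror image: you get to grade the process, not the result.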