Do you know that feeling when you’re playing a game with someone and they start doing things completely out of character to win? When someone you trust suddenly stabs you in the back or reneges on a promise?

I’ll admit that I’m one of those people. When I play board games, I enter game mode — and in game mode, all bets are off. I will put rational strategy ahead of everything else, no matter how much it upsets my friends and family.

A bad character trait, I know. But here’s the thing: it works. You stand a much better chance of winning if you set aside the norms of polite society and reduce everything to cold, rational strategy.

In the context of board games, game mode is (mostly) harmless. But what happens when game mode thinking infects every aspect of our lives? Consider the job market: you tailor your resume to game the algorithms, perform enthusiasm you don’t feel in interviews, and accept that your worth is reduced to whatever makes you most “hireable.” Or dating apps, where you optimize your profile photos and bio and craft messages that follow proven formulas. The list of arenas where we’re forced to play games goes on.

In each case, the incentive is clear: play the game or lose to those who do. How do we escape a world where game mode is the default and you can’t opt out?

In our latest episode of Your Undivided Attention, Tristan and Aza sit down with Professor Sonja Amadae, who argues that we have become “prisoners” of reason: trapped in a world where optimal strategy and cutthroat competition have crowded out cooperation and trust. This is what Aza calls the Game Theory Dilemma.

And now we’re building AI systems that are perfect game theory players. They never get tired of optimizing. They never feel guilty about ruthless strategy. As AI becomes embedded in hiring, healthcare, criminal justice, and financial markets, it may hardwire game theoretical logic into the fabric of society itself.

Breaking out of the Game Theory Dilemma requires examining game theory’s core assumptions and learning to trust each other again. No small task, but critical if we’re going to build a more humane technological future.

Here are some of the key takeaways from our conversation with Prof. Amadae:

One of the most important points Professor Amadae makes is that game theory isn’t a fundamental law of nature. It’s a specific framework invented by humans to solve specific problems.

John von Neumann, the brilliant mathematician behind game theory, developed it in the 1940s to formalize how to win parlor games like chess and poker. But what started as a mathematical tool for parlor games became the dominant logic for nuclear deterrence, economic policy, and now AI development.

“This is not an invention, this is a discovery,” Amadae explains, describing how game theory gets framed. “This idea that we evolved to be these machines that have to propagate, and the way that you would do that is to be the perfect strategic actor.”

Of course, competition is a totally normal part of life. Some resources are scarce and competition over them is inevitable. But what game theory did was create a logic that holds competition up as the only rational choice. In reality, history is full of examples where cooperation was not only rational but advantageous in the long run.

This essentialism makes game theory feel inescapable. But recognizing it as a chosen framework, not an immutable truth, is the first step toward choosing different paths.

Amadae questions the validity of three core assumptions behind game theory:

1. Scarcity is the source of all value: Game theory requires that everything valuable can be reduced to a competitive metric. If I get more, you get less. But this ignores what Amadae calls “positive-sum goods”: things like self-esteem, friendship, and love that don’t diminish when shared.

“Most of what we value, I would argue, is actually these positive sum goods that you’re never going to even begin to enter into some kind of a game theory payoff,” she argues. “Actual relationships, friendship, love, family, having children. Most of what we value is actually these positive sum goods.”

2. Strategic competition is human nature: The biological essentialism that comes from game theory holds that we are evolutionarily programmed to be ruthless competitors. This framing makes cooperation look naive and self-sacrifice look irrational. But as Amadae points out, people who haven’t been explicitly taught game theory, like her students in Finland, often default to cooperation, especially in high-trust societies.

“Finland is a very high trust society and it doesn’t run according to this logic of game theory or the Prisoner’s Dilemma,” she explains. “I think it’s actually a crime of some kind to teach the Prisoner’s Dilemma because the students just cooperate there.”

And as Aza points out, the natural world is full of examples where cooperation is not only possible but advantageous. Cooperation in nature has been studied at length by evolutionary biologist and YUA guest David Sloan Wilson, who argues that “Selfishness beats altruism within groups. Altruistic groups beat selfish groups. All else is commentary.” (A toy numerical sketch of this idea follows the list below.)

Check out our interview with him here: https://www.humanetech.com/podcast/the-race-to-cooperation-with-david-sloan-wilson

3. There is no alternative: The most insidious assumption is that if you don’t play the game, you lose. Period. This creates a self-fulfilling prophecy where the only “rational” choice is cutthroat competition. But history is full of examples of people making alternative choices and succeeding. Amadae points to the history of collective nonviolent movements, like India’s Satyagraha, as a great example.
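To make the Wilson quote from point 2 concrete, here is a toy public-goods sketch in Python. It’s purely illustrative, not anything from the episode or from Wilson’s models: the payoff function and every number in it are our own arbitrary assumptions.

```python
def payoffs(cooperators, defectors, benefit=3.0, cost=1.0):
    """Toy public-goods game: each cooperator pays `cost` to contribute
    `benefit`, which is shared equally by everyone in the group."""
    group_size = cooperators + defectors
    shared = benefit * cooperators / group_size
    return shared - cost, shared  # (each cooperator's payoff, each defector's payoff)

# Within a mixed group, the selfish players come out ahead of the altruists...
coop, defect = payoffs(cooperators=5, defectors=5)
print(coop, defect)  # 0.5 vs 1.5 -> selfishness beats altruism within groups

# ...but a group of altruists collectively outproduces a group of selfish players.
all_coop, _ = payoffs(cooperators=10, defectors=0)
_, all_defect = payoffs(cooperators=0, defectors=10)
print(10 * all_coop, 10 * all_defect)  # 20.0 vs 0.0 -> altruistic groups beat selfish groups
```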

As Amadae argues throughout the episode, these assumptions are contestable, and that contestation opens space for different ways of organizing society.

Once game theory becomes the dominant logic in a domain, it reshapes that domain entirely. Amadae calls it a kind of “colonization” where authentic human connection gets replaced by strategic calculation.

Dating becomes pickup artistry, where every interaction is optimized for a specific outcome. Software design becomes A/B testing, where features are chosen not for human flourishing but for maximum engagement. Political communication becomes focus-grouped messaging, stripped of authenticity and meaning.

“The world kind of feels like it’s being colonized by this cold, strategic logic,” Tristan notes. “What it leads to is this kind of deadening of culture, this deadening of dating, this deadening of relationships, this deadening of software design.”

The problem compounds: once some actors start playing by game theory rules, everyone else feels pressure to follow. The cooperative get out-competed. The authentic get replaced by the calculated. And the world becomes, as Amadae puts it, “a nightmare we can’t wake up from.”

If game theory has colonized human institutions, AI threatens to lock that colonization in place permanently.

AI systems are designed to optimize. They measure, test, and iterate toward maximum effectiveness. They don’t get tired of being strategic. They don’t feel guilty about manipulation. They operate in permanent “game mode.”

“AI is like the maximization of game theory logic,” Tristan observes. “AI arms every other arms race. If there’s a military arms race, AI arms and supercharges the military arms race. If there’s a corporate arms race, AI will arm that arms race too.”

Amadae adds that AI is already being programmed according to game theoretic assumptions: “When you put those two together, that we interpret that there has to be this AI arms race, and the AI is programmed to be a strategic rational actor, it’s going to keep feeding back that logic.”

The danger isn’t just that AI makes game theory more powerful; it’s that AI could make game theory the permanent architecture of human society, optimizing every interaction for strategic advantage rather than human flourishing.

Despite the grim picture, Amadae offers a simple starting point: trustworthiness.

“It starts with understanding this logic of the Prisoner’s Dilemma,” she explains. “The way out is that you just ask yourself the question: if the other guy went ahead and cooperated ahead of me, do I cooperate or not?”

If the answer is yes, then you’ve broken out of the Game Theory Dilemma. You’re no longer purely strategic. You’re building something different: assurance.
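To see what that question is doing, here’s a minimal sketch in Python. In a strict Prisoner’s Dilemma the coldly strategic answer is still “no,” while in an assurance game (the Stag Hunt) it flips to “yes.” The payoff numbers below are conventional textbook illustrations, not figures from the episode.

```python
# payoff[my_move][their_move] = what I get. "C" = cooperate, "D" = defect.
# These are standard textbook payoffs, chosen only for illustration.
prisoners_dilemma = {"C": {"C": 3, "D": 0}, "D": {"C": 5, "D": 1}}
assurance_game    = {"C": {"C": 4, "D": 0}, "D": {"C": 3, "D": 2}}  # a.k.a. the Stag Hunt

def cooperate_if_they_did(payoff):
    """Amadae's question: given that the other player already cooperated,
    is cooperating at least as good for me as defecting?"""
    return payoff["C"]["C"] >= payoff["D"]["C"]

print(cooperate_if_they_did(prisoners_dilemma))  # False: defecting still pays more
print(cooperate_if_they_did(assurance_game))     # True: matching their trust is the best reply
```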

Building that assurance requires three things, according to Amadae:

Solidarity: Connection around a common cause that motivates action beyond self-interest

Commitment: Actually keeping your word, regardless of strategic calculation

Believing what you say: Speaking authentically rather than strategically

“How many times you just say whatever it takes just to get some outcome versus believing what we’re actually saying?” she asks. “That’s a basic duty for being a citizen in society: stating what we believe, and then trying to make our statements to be true.”

Amadae points to the 1983 film The Day After, which depicted the aftermath of nuclear war. The film was watched by over 100 million Americans and screened for President Reagan and the Joint Chiefs of Staff. Reagan later said it changed his thinking on nuclear strategy and helped push him toward deescalation with the Soviet Union.

What the film did was make the cost of defection — mutual nuclear annihilation — viscerally clear. When both sides could see that the game theory “solution” led to an outcome neither wanted, cooperation became rational.

As Aza summarizes: “It became existential. So now, cooperation becomes the rational thing to do.”

The parallel to AI is direct. If we can make the dangers of an unchecked AI arms race sufficiently clear, if we can show that game theory, taken to its logical conclusion, creates a world no one wants, then cooperation stops being for “suckers” and becomes the only viable path.

Game theory has become the invisible architecture of modern life, shaping everything from nuclear deterrence to dating apps to AI development. But it’s a choice, not destiny.

Breaking free requires recognizing game theory’s assumptions as limited and contestable. It requires building trustworthiness, solidarity, and commitment, values that game theory dismisses as naive but that are essential for human flourishing.

As Amadae puts it:

“We need to believe that there would be an alternative possibility. Maybe that’s the first step. If we can start to believe that, then maybe we can start to create other social patterns and not lose hope that we need to be these strategic cutthroat actors.”

And it requires clarity about where the current path leads. AI is accelerating us toward a world organized entirely around strategic competition. If we don’t break free of the game theory dilemma now — before AI systems become fully entangled in every institution — we may never get another chance.

We can still choose to build a more humane technological future. We can, for example, pursue narrow AI applications that promote human flourishing and make scarcity and competition less pressing. We may even be able to use AI to unlock new modes of cooperation, what Aza calls a “Move 37 for relationships.” But that would require that we critically examine the systems we are building today.

Competition is always going to be part of our lives. And that’s a good thing. It can bring out the best in us. And frankly, it’s fun. But in the world that game theory has built, the game just doesn’t feel very fun. We can choose to play better ones.