Some Problems Don’t Have Solutions
The smartest leaders know when to stop solving—and start taming the environment
In August 1949, in a remote Montana canyon called Mann Gulch, a team of firefighters led by foreman Wagner Dodge faced imminent death. The wildfire they had parachuted in to battle, fanned by hot, dry winds, quickly cut off their only escape route. The crew panicked and tried desperately to outrace the approaching flames, an exercise in futility in which most of them perished. It was, at its core, an attempt to solve the problem: put out the blaze, outrun the flames.
Dodge, on the other hand, quickly realized the problem was not solvable by traditional means. He couldn't outrun the flames; he couldn't put out the fire. Instead, he did something radically different. He stopped running.
He set fire to the patch of grass in front of him and threw himself down into the ashes. The approaching wildfire scorched past him, leaving him untouched, because the small fire he had deliberately set had removed the fuel the wildfire needed. This "escape fire," as it is known today, worked because it sought to tame the environment, not solve the problem.
That lesson has many applications today, but few leaders have properly internalized it. For example, on Tesla's quarterly earnings call in April 2026, Elon Musk admitted that Hardware 3, the onboard computer that had been touted as the foundation for autonomous self-driving (ASD) cars, wasn't up to the task. For ten years, Musk had been proselytizing that ASD was just around the corner. Four million consumers took that message to heart and shelled out billions of dollars for Tesla automobiles, only to have the rug pulled out from under them on that earnings call. What solace did Musk offer? He said that Hardware 4—the next iteration—could be the beginning of an answer and offered loyal Tesla owners a discount on the purchase of a car that carries it. Shockingly, Tesla's share price went up 4 per cent on the news.
This is not about the irrationality of the stock market or consumers. It is about the challenges facing leaders—all leaders—when we assume that our problems can be solved with an algorithmic approach.
Risk is Not Uncertainty
This is where the distinction between risk and uncertainty is critical. Most people use the terms interchangeably, but they mean very different things. Economist Frank Knight, in 1921, was the first to draw a useful distinction. Risk is when you don't know the outcome, but you do know the full distribution of outcomes. Think poker, blackjack, even chess. You might not know how the next hand will play out or what move your opponent will make, but the range of outcomes is fixed and the probability of each outcome can be accurately estimated. Uncertainty, on the other hand, is when you don't know the outcomes and you can't identify, let alone calculate, the odds. A useful analogy is the roulette wheel. Risk is betting on where the wheel stops: you don't know where it will stop, but you know every place it could stop and you can calculate the probability of each. Uncertainty is a player flipping the table over in the middle of a spin.
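To make the distinction concrete, here is a minimal sketch in Python (assuming nothing beyond standard European-wheel rules): under risk, the expected value of a bet is a mechanical calculation; under uncertainty, there is nothing to calculate with.

```python
from fractions import Fraction

# Risk: a European roulette wheel has 37 pockets (0-36), all equally
# likely. A straight-up bet on one number pays 35 to 1. Because the
# full distribution of outcomes is known, the expected value of a
# one-unit bet is an exact, mechanical calculation.
p_win = Fraction(1, 37)
p_lose = Fraction(36, 37)
expected_value = p_win * 35 + p_lose * (-1)   # = -1/37
print(f"P(win) = {p_win}, EV per unit staked = {float(expected_value):.4f}")
# -> EV of about -0.027: you lose ~2.7% of every unit wagered, on average.

# Uncertainty: once a player flips the table mid-spin, there is no
# pocket list and no probabilities, so there is nothing to plug into
# this formula. The calculation doesn't get harder; it becomes undefined.
```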
In his book How to Stay Smart in a Smart World, the behavioral scientist Gerd Gigerenzer provides another useful characterization of this problem. He distinguishes tame problems (fixed rules, a bounded possibility space—think chess or GPS route mapping) from wild problems (open-ended, unscripted, emergent—like driving through a crowded city, leading an organization through a crisis or managing a portfolio through a pandemic).
The error we keep making is treating wild problems as tame ones: using the same approach, the same models, the same solutions on both. That was precisely the challenge at Mann Gulch. Fighting wildfires can be a tame problem—indeed, Dodge's team had used their playbook successfully in previous situations. But tame problems can quickly turn wild when circumstances change, as they did at Mann Gulch: a long, hot, dry summer; gusting winds; a valley with only one way in and out. This is where working a wild problem with a tame-problem playbook can quite literally become deadly.
The Tram and The Car
So, what do risk and uncertainty (or tame versus wild problems) have to do with Elon Musk, Tesla or ASD? Everything.
Autonomous self-driving is a wild problem, and Musk is trying to solve it like a tame one. Driving in a busy city is anything but scripted. Think about the unexpected factors you have to navigate on a daily commute. A child suddenly darting in front of your car, chasing a ball. A construction worker waving you through a red light. A car slamming on its brakes ahead of you, forcing a choice: swerve into the next lane (possibly hitting someone else), swerve onto the sidewalk (possibly hitting a pedestrian) or keep going straight (definitely slamming into the car ahead). These are split-second decisions where the roulette wheel has been overturned and you still have to act.
Trying to solve these situations as a tame problem means investing in ever more compute (faster, larger AI models) that can incorporate an ever-growing number of scenarios. The problem is that the permutations and combinations of those scenarios are endless. The more interactions there are (cars, pedestrians, traffic lights, construction and so on), the more scenarios arise. No AI algorithm can be programmed for every possible scenario, because programming for a scenario requires us to imagine it in advance.
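A back-of-the-envelope sketch makes the scale visible. The per-actor state count below is an assumption for illustration; any plausible number produces the same conclusion.

```python
# A toy illustration (invented numbers, not real traffic data) of why
# enumerating driving scenarios in advance is hopeless: if each actor
# in a scene can occupy even a handful of coarse states, the number of
# joint scenarios grows exponentially with the number of actors.
STATES_PER_ACTOR = 10  # assumed: coarsely binned positions/intents per actor

for actors in (2, 5, 10, 20):
    scenarios = STATES_PER_ACTOR ** actors
    print(f"{actors:>2} actors -> {scenarios:.1e} joint scenarios")
# 20 actors -> 1.0e+20 scenarios. And a real intersection is not even
# this neatly discretized: the states themselves are open-ended.
```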
Gigerenzer’s broader point is that for some wild problems, the better solution is often to change the environment rather than rely only on more sophisticated prediction or control. He uses the rise of the motor car as an illustration: as cars entered public life, safety improved not by making drivers or vehicles infinitely smarter, but by redesigning the surroundings through road rules, lane markings, traffic signals, speed limits, and other infrastructure that made the system more predictable.
In a similar vein, Gigerenzer argues that the benefits of autonomous transport may come less from building an ever-smarter super-algorithm than from taming the environment. One example is driverless rail or tram systems. Because they run on fixed tracks, with controlled crossings, clear right-of-way rules and limited interaction with unpredictable traffic or pedestrians, the problem becomes far more structured and therefore easier for algorithms to handle reliably.
A great example is Waymo. Waymo's genius was in recognizing that the goal was never to solve driving—it was to solve enough of driving to make deployment viable and safe. By pre-mapping every street in its operational zones in three-dimensional detail and geofencing its vehicles within locales it genuinely understands, Waymo tamed its environment rather than wrestling with the wild problem whole. Not fully tame—the unexpected still happens, and that is precisely the point. But tame enough that, when the unexpected does appear, it stands out sharply against the known, stable model of what should be there, and the system can respond to it rather than be blindsided by it. Gigerenzer calls this ecological rationality—intelligence fitted to its environment, rather than intelligence pretending to float free of it. Waymo built an entire company on that principle. The fifteen million completed commercial rides are the proof of concept.
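Here is a minimal sketch of that principle, using invented map data and thresholds (an illustration of the idea, not Waymo's actual pipeline): anything a sensor reports that the prior map cannot explain stands out immediately.

```python
from math import dist

# Assumed prior map: pre-surveyed static objects at known positions.
# All names and coordinates below are invented for illustration.
PRIOR_MAP = {
    "stop_sign_14": (102.0, 34.5),
    "hydrant_7":    (98.2, 31.0),
    "lamp_post_3":  (110.4, 36.1),
}
TOLERANCE_M = 0.5  # assumed localization tolerance, in metres

def flag_unexplained(detections):
    """Return detections that no mapped static object accounts for."""
    return [(label, pos) for label, pos in detections
            if not any(dist(pos, mapped) <= TOLERANCE_M
                       for mapped in PRIOR_MAP.values())]

frame = [
    ("object", (102.1, 34.4)),  # matches the mapped stop sign: expected
    ("object", (105.0, 33.0)),  # matches nothing: respond, don't be blindsided
]
print(flag_unexplained(frame))  # -> [('object', (105.0, 33.0))]
```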
And yet, we are still buying the promise of the universal algorithmic solution, the one that is just over the horizon. That is why Tesla customers and shareholders keep buying, even though "just over the horizon" has been the mantra for ten years.
The Algorithm Keeps Getting Promised
The allure of the magical algorithm is enticing and, sadly, has bedevilled many industries for many years. Long-Term Capital Management, the hedge fund whose partners included Nobel laureates, had to be bailed out in 1998 in a rescue orchestrated by the Fed. Their models said $50 million was the worst loss a single day could bring. Reality delivered $4.6 billion in losses in under four months—an outcome so far outside the model's probability distribution that, as historian Niall Ferguson noted, it should not have happened in the entire lifetime of the universe. This was not a fat tail. It was proof that the tail and the model inhabited different realities entirely.
Similarly, in August 2007, as the crisis that would peak in 2008 was breaking, Goldman Sachs' CFO David Viniar famously remarked that the firm had witnessed "25 standard deviation moves, several days in a row." Under a normal model, even an 8-sigma daily move should occur less than once in the lifetime of the universe, and a 25-sigma move roughly once every 10^135 years. Yet these moves happened repeatedly in a matter of days, delivering a humbling lesson about the limits of control.
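The arithmetic behind that humbling is easy to check. A minimal sketch, assuming a normal model and roughly 252 trading days a year:

```python
from scipy.stats import norm

# How long should you wait, under a Gaussian model, for a daily move of
# k standard deviations? The trading-day count is an assumption; the
# point is the scale, not the precision.
TRADING_DAYS_PER_YEAR = 252  # assumed

for k in (5, 8, 25):
    p = norm.sf(k)                        # P(Z > k), upper tail
    years = 1 / (p * TRADING_DAYS_PER_YEAR)
    print(f"{k:>2}-sigma day: expected about once every {years:.1e} years")
# 5-sigma: ~1e4 years. 8-sigma: ~6e12 years (the universe is ~1.4e10
# years old). 25-sigma: ~1e135 years. A model that labels an observed
# move "25 sigma" is not unlucky; it is the wrong model.
```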
Aviation offers a sharp parallel. Boeing’s MCAS system—software designed to automatically correct the 737 MAX’s handling characteristics—was modelled and stress-tested against known flight parameters. It was never designed for what actually happened: a faulty sensor triggering it repeatedly, overpowering pilots who had never been told the system existed. Lion Air Flight 610. Ethiopian Airlines Flight 302. 346 people dead. The model and the reality never inhabited the same world.
In 2006, six volunteers enrolled in a clinical trial for a new drug called TGN1412. The preclinical trials—which met every regulatory requirement—showed it was safe. Within 90 minutes of their first dose, all six were in intensive care with multiple organ failure. The dose administered was one five-hundredth of the level found safe in animal studies. The models hadn't failed through negligence. They simply could not see what they could not see: that the human immune system would respond in a way no animal study could ever have predicted. The wild problem had worn the costume of a tame one.
This is not to argue that advances in large-language and other AI models are not genuine achievements. They are powerful tools, and they are well suited to tame problems: drafting sales emails, handling standard customer-service enquiries through a chatbot. The challenge is that these same tools are being touted as the answer to wild problems. That won't just hurt investors and consumers in the pocketbook; it will leave us wholly dependent and wholly flummoxed when we come face to face with wild problems. Exactly what happened to Wagner Dodge's team at Mann Gulch.
What This Means for How You Lead
So, what does this mean for you as a leader? I see four direct, practical implications:
1. Be honest about which problem you are actually facing. Many organizations automatically treat every problem as tame. They build more detailed financial models, brainstorm scenarios and produce ever-fancier PowerPoint decks to brief executives. These actions, while soothing in the moment, ignore the real underlying problem until it is too late.
2. When the problem is wild, look to tame the environment before you try to solve the problem. Can you constrain the decision space? Can you run a limited pilot before rolling out firm-wide? Can you create clearer rules of engagement that reduce the number of genuinely novel situations your employees face? Simplifying the operating environment is not the same as simplifying your thinking—it is often the most sophisticated move available.
3. Resist the algorithm when the environment is wild. This is not an argument against gathering more data or applying more analytical rigour. It is an argument for knowing where those tools genuinely help and where they create false confidence. A navigation system is enormously useful for route planning. It is not a substitute for judgment when road conditions are nothing like the map. You don't drive faster in the middle of a blizzard just because your GPS has mapped out the route.
4. Recognize that simple heuristics are not a failure of sophistication. Gigerenzer's deeper point is that simple rules of thumb—ecological heuristics—often outperform complex optimization in genuinely uncertain environments. Not because they are smarter, but because they are more robust: they make fewer assumptions about a future they cannot know (a sketch of why appears below). David VanBenschoten ran the General Mills pension fund for 14 years without ever finishing higher than the 27th percentile in annual returns—and ended up in the 4th percentile overall, outperforming 96% of his peers. His edge wasn't brilliance or innovation; it was using simple rules to avoid catastrophic loss with discipline while everyone else chased outliers.
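Here is a toy simulation of that robustness point (synthetic data and assumed parameters, for illustration only). Ten assets are constructed to be statistically identical, so the truly optimal allocation is exactly 1/N; an optimizer estimating from a short history keeps "discovering" differences that are not there.

```python
import numpy as np

rng = np.random.default_rng(0)
N_ASSETS, SAMPLE_MONTHS = 10, 60        # assumed universe and history length
true_mu = np.full(N_ASSETS, 0.005)      # identical expected returns
true_cov = 0.002 * np.eye(N_ASSETS)     # identical, uncorrelated risk

def min_variance_weights(returns):
    """Plug-in minimum-variance weights: inv(cov) @ 1, normalized."""
    cov = np.cov(returns.T)
    w = np.linalg.solve(cov, np.ones(len(cov)))
    return w / w.sum()                  # 1' inv(cov) 1 > 0, safe to divide

for trial in range(3):
    sample = rng.multivariate_normal(true_mu, true_cov, SAMPLE_MONTHS)
    w = min_variance_weights(sample)
    print(f"trial {trial}: optimizer weights span "
          f"[{w.min():+.3f}, {w.max():+.3f}] vs 1/N = {1/N_ASSETS:.3f}")
# The 1/N heuristic holds 0.100 everywhere, every time. The optimizer's
# answer changes with every sample of history it is shown, because it is
# fitting estimation noise. Fewer assumptions about an unknowable future
# is precisely the heuristic's edge.
```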
The E.D.G.E. Applied
The difference between a good framework and a useful one is specificity. Here is how my E.D.G.E. framework applies to the only problem that actually matters: knowing what kind of problem you are dealing with before you try to solve it.
Establish. Draw the smallest honest circle around what you actually control—your process, your principles, your decision criteria. Not the market, not the competitor, not the technology curve. Tesla’s error wasn’t ambition. It was selling promises that lived outside that circle as though they were inside it.
Diagnose. Before reaching for a model, name the problem type: risk, where historical data is a reasonable guide, or uncertainty, where the tail events are genuinely novel and no model built on the past can see them coming. LTCM, Boeing, TGN1412, Tesla. Different industries, same original sin: someone called a wild problem tame and built a business on the misdiagnosis.
Go. In wild environments, move in small, reversible steps. Waymo maps a city, validates it, deploys commercially, then expands. VanBenschoten refused to lose badly and let compounding do the rest. The question isn't "How do I solve everything?" It's "What is the bounded version of this problem I can actually deliver on?" Find your track. Run on it.
Evolve. Wild problems don’t yield to a single solution—they yield to progressive understanding built through real engagement. Every Waymo ride is information. Be the organization that learns from what actually happens, not the one that defends what the model predicted.
The tram is still running. On its rails, on schedule, delivering real value within constraints it has never pretended don’t exist. The self-driving car is still working out what to do about the mattress on the highway.
The world doesn’t always reward the boldest promise. It rewards whoever figures out how to operate effectively inside the actual constraints of the actual problem. That is not a smaller ambition. That is the E.D.G.E.
© The Uncertainty E.D.G.E. | Published every other Tuesday
If this resonated, I’d love to have you as a free subscriber — and forward this to a leader who needs to see around corners.
The Uncertainty E.D.G.E. is for leaders who are accountable for outcomes they can’t fully control — and want a clearer way to think when certainty won’t come.
Essays and conversations on decision-making under pressure, every other Tuesday. Join me at theuncertaintyedge.com.
If the human side of leadership is what draws you, I also write The Good Human Practice — on inner clarity and character for leaders called on to make those hard decisions under pressure.



