You Are Only As Strong As Your Weakest Assumption
Why Investors Should Be Studying the Tankers, Not the Jets
“The market can remain irrational longer than you can remain solvent.” — attributed to John Maynard Keynes
There is a moment in any stock market boom when the narrative becomes load-bearing.
When the story of an asset class matters more than the fundamentals underpinning it, the system becomes vulnerable not to logical failure but to narrative collapse. Every strategy rests on invisible scaffolding. And you are only as strong as your weakest assumption about it.
What we are watching unfold across AI infrastructure — quietly, and then all at once — is precisely that. Not a technological failure. Not a crisis of intelligence or code. A structural decoupling between the “jets” and the “gas stations.”
Let me explain what I mean.
The F-35 Problem
For many years, the technology sector has operated with a form of perceived air superiority.
But in any complex system, you are not as strong as your strongest point. You are as strong as your weakest one.
The current Gulf conflict has made this visible in a way no academic paper could. While U.S. and Israeli forces maintain air superiority with advanced fighters that Iranian forces cannot match directly, the counter-strategy has repeatedly ignored the dogfight. The focus has shifted to the KC-135 refuelling tankers — the slow, unglamorous logistics aircraft that keep the fighters in the air. Without them, the most sophisticated combat jet in history becomes an expensive paperweight within hours.
In the AI economy, we have spent the last few years obsessed with the “jets” — the models, the benchmarks, the capability curves — while systematically ignoring the tankers: the physical, financial, and geopolitical scaffolding that keep those models operational at scale.
That scaffolding is shaking. And most strategic conversations haven’t caught up yet.
The Diagnosis: The Barbell Trap
The technology sector is experiencing a severe case of what psychologists call inattentional blindness — the well-documented failure to notice a highly visible object when our attention is focused elsewhere. We are transfixed by the cloud while ignoring the physical reality it rests on.
The AI investment cycle — absorbing hundreds of billions in actual and planned capital expenditure — has rested on three interlocking assumptions:
relatively cheap, sovereign-backed energy in the Gulf,
continued co‑investment from Gulf Sovereign Wealth Funds (SWFs), and
a geopolitical environment stable enough to build and operate at hyperscale.
The Gulf’s comparative advantages were real. Data centres powered by electricity often priced in the $0.05–$0.06 per kWh range (versus roughly $0.09–$0.15 in many parts of the US) made the region a genuinely compelling compute destination.
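To put a rough number on that spread, here is a back-of-envelope sketch. The 100 MW facility size and the utilisation figure are illustrative assumptions, not data from any specific project; only the per-kWh ranges come from the comparison above.

```python
# Back-of-envelope: annual electricity cost for a hypothetical 100 MW
# data centre at 80% average utilisation. Facility size and utilisation
# are illustrative assumptions; only the $/kWh ranges come from the text.
CAPACITY_MW = 100
UTILISATION = 0.80
HOURS_PER_YEAR = 8760

annual_kwh = CAPACITY_MW * 1_000 * UTILISATION * HOURS_PER_YEAR

gulf_cost = annual_kwh * 0.055   # midpoint of $0.05-$0.06 per kWh
us_cost   = annual_kwh * 0.12    # midpoint of $0.09-$0.15 per kWh

print(f"Gulf: ${gulf_cost / 1e6:.0f}M per year")                # ~$39M
print(f"US:   ${us_cost / 1e6:.0f}M per year")                  # ~$84M
print(f"Spread: ${(us_cost - gulf_cost) / 1e6:.0f}M per year")  # ~$46M
```

A spread in the tens of millions of dollars per year, per 100 megawatts, is the kind of advantage that anchors an entire location thesis.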
Microsoft announced a $15.2 billion UAE package spanning equity in Abu Dhabi–based AI firm G42 alongside cloud and AI infrastructure and related spending, rather than pure hard infrastructure alone.
OpenAI’s first major international hub participation came via “Stargate UAE”, a planned 5‑gigawatt AI data‑centre campus in Abu Dhabi led and financed by G42 and partners, and promoted as the largest AI facility outside the United States once fully built.
Saudi Arabia, meanwhile, set multi‑gigawatt data‑centre ambitions under Vision 2030 and has used those targets to draw in hyperscalers such as Google, Amazon, and Oracle, even if headline figures like “2,200 megawatts” are best understood as aggregated pipeline estimates rather than a single discrete announcement.
The numbers were staggering, and the logic was clean. Until it wasn’t.
That logic rested on three pillars. First, oil revenues generating the surplus capital that Gulf sovereign wealth funds deployed globally — $119 billion in 2025 alone, representing 43 per cent of all sovereign deal activity, with heavy flows into US tech and Treasuries. Second, a thriving expat economy: tourism contributed roughly 12–13 per cent of UAE GDP, and tax exemptions for expats cemented the region’s status as the world’s top destination for international talent and capital. Third, a reputation as a safe, stable platform for global AI ambitions. All three are now crumbling simultaneously.
The Strait of Hormuz — through which roughly 20 per cent of the world’s daily oil supply passes — has experienced an effective halt in shipping traffic since early March. Qatar has stopped gas production and declared force majeure. The blow to tourism and the expat economy has been equally swift. Airspace closures produced 37,000 flight cancellations in the first ten days of the conflict alone. Dubai’s reputation for safety has been shattered — the Burj Al Arab was struck by drone debris, the Fairmont The Palm was damaged by an explosion, and Dubai’s airport was hit by a missile strike. Tens of thousands of Western expatriates have fled, and major banks have ordered staff to work from home. A safe-haven reputation built over decades is not rebuilt quickly once lost.
On March 1, Iranian drone strikes damaged three Amazon Web Services facilities — two in the UAE and one in Bahrain — taking two of three availability zones offline and disrupting banks, payment services, and enterprise software across the region. Some experts are now calling for data centres to be reclassified as critical military infrastructure, a move that would dramatically alter the economics of Gulf-based AI buildout.
All three structural assumptions are now under simultaneous pressure.
The investment implications are direct. Gulf sovereign wealth funds were a critical prop beneath the valuations of the major AI hyperscalers that dominate global index funds. If those states are now forced to redirect capital to cover domestic fiscal shortfalls — lost oil revenues, reconstruction costs, recession risk — asset repatriation becomes very real.
This is where a concept called the Barbell Trap becomes essential to understand. Conventional analysis treats Gulf SWFs, which collectively manage well over $2 trillion across the five largest funds alone, as stable, long-term anchors with little incentive to sell their investments. This view fundamentally misunderstands portfolio architecture.
Gulf funds follow a classic barbell strategy: a large portion of assets sits in illiquid private equity, infrastructure, and real assets that cannot be sold quickly. The “liquid sleeve” — their accessible buffer — is composed primarily of US Treasuries and publicly traded equities, currently overweighted in tech. When domestic crisis pressure arrives, these funds cannot liquidate a private equity stake in an AI startup. They cannot quickly exit a land lease in the Oxagon industrial zone. Rather than deploying oil proceeds into US Treasuries and equities, Gulf states may be compelled to sell those same assets to maintain spending at home. That means liquidating their most accessible holdings first: the publicly traded technology stocks that are also anchoring retirement portfolios worldwide.
As I mentioned in a recent article in Canada’s The Globe and Mail newspaper, the structure of Gulf sovereign wealth funds means that illiquidity doesn’t dampen selling pressure — it concentrates and accelerates it, directed at the assets underpinning American market valuations.
The mechanism matters here, not just the direction. This isn’t a slow rotation. It’s a structural vulnerability that turns a regional crisis into a compressed, potentially rapid reversal of the very capital flows that have supported Western asset prices.
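A stylised sketch makes the concentration effect concrete. The portfolio weights, fund size, and shock size below are illustrative assumptions, not data from any actual fund:

```python
# Stylised barbell portfolio under a domestic liquidity shock.
# All weights, the fund size, and the shock size are illustrative
# assumptions, not data from any actual sovereign wealth fund.
portfolio = {
    "private_equity_and_infra": 0.55,  # illiquid: cannot be sold quickly
    "real_assets":              0.15,  # illiquid
    "us_treasuries":            0.12,  # liquid sleeve
    "public_tech_equities":     0.18,  # liquid sleeve, overweight tech
}

fund_size_bn = 400   # assumed fund size, $bn
shock_bn     = 60    # assumed domestic fiscal call, $bn

liquid_bn = fund_size_bn * (portfolio["us_treasuries"]
                            + portfolio["public_tech_equities"])

# The shock can only be met from the liquid sleeve, so the selling
# concentrates there instead of spreading pro rata across the fund.
print(f"Liquid sleeve:            ${liquid_bn:.0f}bn")             # $120bn
print(f"Shock as share of fund:   {shock_bn / fund_size_bn:.0%}")  # 15%
print(f"Shock as share of sleeve: {shock_bn / liquid_bn:.0%}")     # 50%
```

In this toy example, a fiscal call equal to 15 per cent of the fund forces the sale of half the liquid sleeve. The illiquid 70 per cent of the portfolio pushes the entire adjustment into the 30 per cent that can actually be sold.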
The Physical Problem: Water in the Desert
There is a harder constraint that rarely appears in analyst models.
Data centres powering AI across the UAE and Saudi Arabia are projected to consume over 426 billion litres of water annually by 2030. In a region that is among the world’s most water-stressed, every litre of that cooling water must first be manufactured. Qatar, the most desalination-dependent of the Gulf states, sources more than 99 per cent of its drinking water from desalination plants.
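Converting that annual projection into daily terms shows the scale. The mega-plant output below is an assumed round figure, broadly in line with the largest Gulf desalination facilities rather than any specific plant:

```python
# Converting the projected annual cooling-water figure into daily
# desalination terms. The 426bn litres/year projection is from the
# text; the mega-plant output is an assumed round number.
annual_litres = 426e9
daily_litres = annual_litres / 365           # ~1.17bn litres per day

mega_plant_litres_per_day = 600e6            # assumed large-plant output

plants_equivalent = daily_litres / mega_plant_litres_per_day
print(f"Daily demand: {daily_litres / 1e9:.2f}bn litres")  # ~1.17
print(f"Mega-plant equivalents: {plants_equivalent:.1f}")  # ~1.9
```

Roughly two dedicated mega-plants’ worth of manufactured water, every day, just to keep the compute cool.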
This is not an abstract environmental footnote. It is a direct operating dependency.
When US forces allegedly targeted a desalination facility on Iran’s Qeshm Island, in the Strait of Hormuz — cutting off the water supply to approximately 30 villages on the island — the retaliation followed within 24 hours. An Iranian drone caused material damage to a Bahraini desalination plant near Muharraq. Strike, counterstrike, and in both cases the target was not a military installation. It was water.
These were not symbolic acts. They were a demonstration that the physical infrastructure on which the entire Gulf AI buildout depends is now a theatre of war. Multi-billion-dollar compute campuses require stable power, stable water, and stable supply chains. Each of those three inputs is now contested.
The Timing Problem: Being Right Isn’t Enough
Here is the argument that almost nobody in the AI investment community wants to sit with.
The AI thesis does not need to be wrong for valuations to crumble and investors to bail. It just needs to take longer to play out than the capital supporting it can endure.
This is a different kind of risk from the one most people are managing for. The standard bear case on AI focuses on technical failure — the models plateau, the use cases don’t materialize, the revenue doesn’t follow the hype. That case may or may not come true. But there is a second, quieter failure mode that history suggests is at least as dangerous, and certainly far less discussed: while the thesis is directionally correct, the timeline slips, and the capital structure collapses before the returns arrive.
Consider what happened to Barton Biggs at Traxis Partners in 2004.
Biggs was, by any measure, one of the most credentialed investors of his generation — 30 years at Morgan Stanley, ranked repeatedly as the top global strategist by Institutional Investor, architect of the firm’s research and asset management divisions. The fund started well. Then, in its second year, Traxis shorted oil — a high-conviction macro bet that oil prices had run too far and were due for a correction. The fund’s monthly letters described the position in detail, and as Biggs later acknowledged, “the whole damn world knew that we were short oil and losing money in oil.” Major redemptions followed the publicity around the position.
Oil didn’t correct on Biggs’ timeline. The losses were real, the redemptions were real, and the fund’s reputation was damaged — not because the underlying analysis was necessarily wrong, but because the market’s willingness to remain irrational outlasted the fund’s ability to hold the position. Biggs eventually rebuilt Traxis and went on to make several more celebrated calls. But the 2004 episode was a permanent feature of his story — a reminder that even the most rigorous conviction, held by the most experienced investor, can be rendered irrelevant by a mismatch between the timeline of the thesis and the patience of the capital behind it.
The AI buildout is now facing a version of the same problem. The underlying technology is almost certainly real. The demand trajectory is almost certainly real. The long-term disruption to knowledge work, professional services, software development, and decision-making is arguably not in serious dispute. But the financial and physical scaffolding that was supposed to support the pace of that buildout — the Gulf co-investment, the stable energy supply, the uninterrupted water infrastructure, the sovereign wealth recycling into US tech equities — is now contested in ways that weren’t priced into the model when the bets were made.
For hyperscalers like Microsoft, Google, and Amazon, regional revenue projections were baked into equity valuations at precisely the moment those projections became structurally uncertain. The implication is not that these companies are in trouble. The implication is that the timeline has likely shifted — and in a capital market where AI-linked valuations were priced for a specific pace of return, a timeline that slips by two or three years is not a minor adjustment. It is the difference between the thesis working and the capital that funded it losing patience first.
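To make that concrete, here is a toy present-value calculation. The cash-flow profile and discount rates are illustrative assumptions, not a model of any particular company:

```python
# Present value of a fixed stream of cash flows when the start date
# slips and the cost of capital rises. All figures are illustrative.
def pv(cashflows, rate, delay_years=0):
    """Discount annual cash flows, optionally delayed by whole years."""
    return sum(cf / (1 + rate) ** (t + 1 + delay_years)
               for t, cf in enumerate(cashflows))

flows = [10, 20, 30, 40, 50]  # assumed $bn per year once returns arrive

base    = pv(flows, rate=0.10)                 # on-time, 10% discount
slipped = pv(flows, rate=0.12, delay_years=3)  # 3-year slip, dearer capital

print(f"Base case PV:    ${base:.0f}bn")     # ~$107bn
print(f"Slipped case PV: ${slipped:.0f}bn")  # ~$71bn
print(f"Haircut: {1 - slipped / base:.0%}")  # ~33%
```

In this sketch, a three-year slip combined with two points of extra discount rate removes roughly a third of the present value, without the thesis being wrong about anything except pace.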
That is not a technological failure. That is a timing failure. And timing failures are often more dangerous than technical ones, because they are invisible right up until they aren’t.
Behavioural Insight
What makes this moment so perilous is not complexity. Complexity is manageable. What makes it dangerous is the gap between what we are paying attention to and what is actually driving the outcome.
Three cognitive patterns are converging here.
Narrative Capture. When a story is compelling enough — and the AI growth narrative is genuinely compelling — it crowds out systemic risk assessment. Leaders who would rigorously stress-test a supply chain often apply much weaker scrutiny to the infrastructure assumptions underpinning a decade-long technology bet. The more emotionally coherent the narrative, the less we question it.
Availability Bias. The risks that are easy to imagine — a model failing, a regulation emerging, a competitor overtaking — dominate strategic conversations. The risks that are harder to visualize — a petrodollar system unwinding, a desalination plant being struck, a sovereign wealth fund liquidating its liquid sleeve under domestic pressure — receive far less planning bandwidth, precisely because they sit outside our habitual mental models.
Sunk Cost Commitment. At the scale of capital already deployed in Gulf AI infrastructure, reversing the thesis requires acknowledging that hundreds of billions of dollars may have been allocated on assumptions that are no longer valid. Human psychology — at every level from analyst to board — resists that acknowledgment. The result is continued commitment to a position that the outside view has already rendered fragile.
The most dangerous phrase in strategic planning is: “We’ve always assumed this would hold.”
Leadership & Advisory Application
Establish Your Foundation. Separate your “jets” from your “tankers”. Every ambitious strategy has invisible scaffolding — the external conditions (capital access, geopolitical stability, physical infrastructure, currency flows) that the strategy silently depends on. Map yours explicitly. What are the top assumptions your strategy cannot afford to get wrong?
Diagnose Objectively. Apply the Outside View. Stop asking whether your AI strategy is brilliant — it may well be. Instead ask: “What happens to our valuation model if the timeline extends by three years and the cost of capital remains elevated throughout?” The question is not whether the destination is right. The question is whether your capital structure can survive the journey at the pace the market now implies.
Go with Purpose. Run the Pre-Mortem. Imagine it is 2028 and the AI infrastructure bet has underperformed materially. In most versions of that pre-mortem, the failure wasn’t the code. It was the confluence of physical vulnerability, sovereign liquidity pressure, and a timeline that slipped just enough for investor patience to expire. What would you do differently today if you already knew how that story ends?
Evolve Through Learning. Accept that geopolitical literacy is no longer a niche discipline confined to foreign policy specialists. It is now a core leadership competency — as fundamental to capital allocation decisions as financial modelling. The executives and advisors who treat it as optional are making a structural error.
For financial advisors specifically: the clients most exposed to this transition are often the ones who feel most secure. Concentrated positions in tech equities and Gulf-linked infrastructure funds are sitting on assumptions that the market has not yet fully repriced. The conversation to have is not “should we exit?” It is: “What proportion of this exposure rests on a timeline that has already changed — and does our client’s capital have the patience to wait for the thesis to catch up?”
The most expensive mistake in this environment is treating a structural reset as a temporary shock. The AI boom is not over — the technology is real, the demand is real, and the long-term trajectory is likely intact. But the financial and physical scaffolding that was assumed to be permanent has quietly become contingent, and the timeline that was assumed to be tight has quietly begun to stretch. Barton Biggs didn’t lose at Traxis because he was stupid. He lost because he was early, and early — when you are managing capital — is often indistinguishable from wrong. The jet is extraordinary. But extraordinary jets do not fly without tankers, and tankers operate on different fundamentals.
The Grove Question
This week, apply the Grove Question — named for Intel’s Andy Grove, who understood that strategic inflection points are rarely announced — to your most consequential position (check out my previous article on the Grove Question):
“If we were making this decision today, without our existing commitments and sunk costs, knowing what we know about the capital environment, the physical constraints, and the timeline risk — would we make the same bet, at the same scale, with the same expected pace of return?”
If the honest answer is “not exactly,” that is not a reason to abandon the position. It is a reason to examine the scaffolding more carefully than the jet.
Uncertainty doesn’t demand panic. It demands precision.
The Uncertainty Edge is a newsletter for leaders navigating the tension between hesitation and haste. Subscribe to receive frameworks, case studies, and decision-making tools every fortnight.