Part 4: Finance, Technology & Investments

AI as the New Economic Arsenal: How Technological Superiority Shapes National Power

Erik Britton

I – Introduction

“Success in creating AI would be the biggest event in human history. Unfortunately, it might also be the last—unless we learn how to avoid the risks.”1

Stephen Hawking

Since the Industrial Revolution, the average person's standard of living has increased globally by at least a factor of ten. The kings and queens of pre-industrial societies could not have dreamed of the material standard of living the average person now enjoys. At least 60% of that increase was driven by one force: technological innovation.2 In advanced economies like the UK, the share is higher still (the composition of growth in UK per capita real income since 1870 is pictured below; note that 1870 was already some 110 years after the Industrial Revolution began).

Some innovations are incremental. Others are revolutionary. Consider the wheel—a 5,500-year-old breakthrough still propelling global productivity. Or more recent examples: the steam engine, electrification, the jet engine, the computer. Each was built upon what came before. No AI without computers. No computers without chips. No chips without the transistor. No transistors without electrification.

Now, the next leap is here: artificial intelligence. But before we embrace its full power, we must confront two uncomfortable truths.

First: AI must be developed in the interests of humanity. That includes ethical safeguards, regulatory frameworks, and clear-eyed acknowledgment of its existential risks.

Second—and more urgently for this chapter—technological innovation is inseparable from economic, military, and geopolitical power. AI is no longer a back-office algorithm. It’s a front-line instrument of national advantage.

The country that dominates AI will likely shape the coming global order. Today, the two leading contenders—China and the United States—hold starkly different views on liberty, governance, and the role of the state.

At stake is not just innovation. It is the ideological architecture of the 21st century. And innovation, despite our best intentions, has never been politically neutral.

II – The Early-Stage Dynamics of AI Development

Most transformative technologies begin in a burst of collective exuberance. Finance, industry, and government converge in a race to scale and monetize what’s still in its infancy. AI is no exception.

In these early phases, two camps emerge. One rushes to push the technological frontier, focused on breakthroughs in capability. The other charges toward the commercial frontier, racing to be the first to deploy new technologies at scale. Both are essential, but not always aligned. Among the earliest adopters of AI are hedge funds, which have been deploying algorithmic trading for many years already.

In the financial ecosystem, the venture capital and private equity risk takers ride this wave like surfers—paddling furiously into the swell, catching the rise, and hoping to exit before the wave inevitably crashes.

Meanwhile, longer-horizon investors—sovereign wealth funds, pension funds, national R&D banks—cruise through the turbulence like steady ships. Their role is less reactive, more strategic: to create ballast in a system often driven by hype. An illustrative hype cycle is pictured below. This phenomenon was first described by the Gartner Group, but the words in the picture below are mine. In practice, the shape, timing and magnitude vary across different technological innovations, but I find it a helpful framing device nonetheless.

Then comes the correction. As initial excitement outpaces real-world evidence, skepticism sets in. Asset prices fall. Funding tightens. Startups collapse. Mass layoffs follow. Commentators smugly recycle headlines of past tech failures. Billions in capital are written off. Promising ideas die, not from irrelevance, but from premature abandonment.

And yet, when the innovation is truly valuable, something remarkable happens: stabilization, then resurgence. Productivity increases. Profits rise. A new plateau is reached, higher than before the hype cycle began. The market corrects not down to the past, but up to a more durable future.
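The dynamics sketched above—early excitement, correction, then a durable plateau above the starting point—can be captured in a toy model: a steadily compounding fundamental value plus a transient hype term that peaks early and decays. The sketch below is purely illustrative; the functional forms and every parameter are invented for this illustration, not estimated from any data.

```python
import numpy as np

def hype_cycle(t, value_growth=0.03, hype_peak=2.0, hype_decay=0.5):
    """Toy perceived-value curve: steady fundamentals plus transient hype.

    fundamental: compounds slowly, like real productivity gains.
    hype: a bump that peaks early (inflated expectations) and then
    fades (disillusionment), leaving the fundamental trend behind.
    All parameters are arbitrary, chosen only to produce the shape.
    """
    fundamental = np.exp(value_growth * t)          # durable underlying value
    hype = hype_peak * t * np.exp(-hype_decay * t)  # transient excitement
    return fundamental + hype

t = np.linspace(0, 20, 401)
perceived = hype_cycle(t)

peak = int(np.argmax(perceived))                   # peak of inflated expectations
trough = peak + int(np.argmin(perceived[peak:]))   # trough of disillusionment
```

With these arbitrary parameters the curve peaks early, sells off to a trough, and then climbs to a plateau above its pre-hype starting point—the qualitative shape described above. In practice, of course, the shape, timing, and magnitude differ across technologies.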

AI is now deep in this cycle. Early hype has given way to doubt in some quarters, but beneath the noise, the foundational work continues. The long-term impact—economic, military, and civilizational—will depend not just on what AI can do, but how wisely nations and institutions respond to these cycles.

III – National Comparative Advantages: Winners and Losers in the AI Race

To contend for AI dominance in the decades ahead, a country must possess more than raw computing power or charismatic entrepreneurs. At a minimum, it needs:

  • A large, highly skilled scientific base, including cutting-edge STEM research facilities and a steady inflow of top-tier graduates
  • A vibrant academic ecosystem beyond STEM, including ethics, policy, economics, and the humanities
  • Flexible and risk-tolerant R&D funding, channeled through a mature financial infrastructure
  • A huge pool of readily accessible data, gathered by both private and government entities
  • Access to the critical materials that support the growth of computing power
  • Access to energy, which will be needed in increasing quantities to support AI as it grows
  • An economy open to global flows of capital, talent, trade, and, above all, ideas
  • A regulatory environment that is both stable and adaptable
  • Legal protections for intellectual property that balance innovation with the public good
  • Good governance—at the national, subnational, and corporate levels
  • High rewards for successful innovators, which draw and retain the world’s best minds

Together, these conditions form the foundations of comparative advantage in the global AI race. As Nobel Laureate Paul Samuelson once noted, comparative advantage may be one of the few economic theories that is both true and not obvious.3

By definition, no nation can excel relatively in everything (that’s the ‘not obvious’ bit), so the real question is: Who is best positioned to lead in AI?

At present, three contenders dominate the field, each guided by a distinct strategic model:

  1. The American way: Maximum freedom, maximum reward
  2. The European way: Maximum regulatory compliance, minimum human risk
  3. The Chinese way: Maximum statecraft, maximum central control

Each model has strengths and weaknesses. The U.S. leans heavily on its powerful private corporate ecosystem, assuming—often correctly—that innovation thrives with minimal constraint. But in the case of AI, where unchecked deployment poses real-world ethical and societal risks, this assumption may fall short. A smart, flexible regulatory regime is not optional—it’s a strategic necessity.

This opens the door for the EU. By focusing on human rights, safety, and accountability, Europe may position itself as the first region to implement a comprehensive and exportable AI governance framework. The AI Act, though bureaucratic, could become the blueprint others follow, especially in developing markets.4

And then there’s China. With its top-down infrastructure, command over data, and mastery of economic statecraft, China could well capitalize on both Western models, observing, adapting, and scaling faster than its rivals. In a domain where coordination, speed, and control often outweigh frontier innovation, China’s system may confer a potent advantage.

This is not an academic debate. Which model wins will shape the norms, markets, and power structures of the 21st century. The AI race is not just about technology. It is a contest over who gets to write the rules of the future.

IV – AI as a National Security Asset

The firms that lead in AI will capture immense economic rewards. Those returns are already significant, and could become eye-wateringly vast. But that’s the less important dimension of AI leadership. The more consequential ones are military and geopolitical.

On the military front, AI is already deployed across strategic domains:

  • Cybersecurity and cyber-offense
  • Surveillance infrastructure
  • Autonomous and potentially lethal systems
  • Military command and control architectures

These are not theoretical threats. They’re active capabilities—visible in the war in Ukraine, in Gaza, in U.S. defense programs, and in Chinese military modernization.5 The outcome is not preordained. But the dynamic is clear:

No major power can afford to concede a potentially war-winning AI advantage, and each seeks one for itself.

This is the essence of an arms race, and it is well underway.

The Strategic Dilemma

So, how do you win an AI arms race? It’s not as simple as pouring more money into military R&D, personnel, or equipment. Every dollar spent on defense is one less available for civilian growth. And history teaches us:

In the long run, the dominant military power is usually the one with the strongest economy.

But to reach the long run, you must not lose in the short run.

That creates a strategic trilemma. Countries must choose among three options:

  1. Build a short-run war-winning advantage—and use it
  2. Invest in long-run economic dominance, while risking interim military setbacks
  3. Thread the needle: Prevent a decisive short-run loss while preserving long-run economic resilience

Option (3) is the path most nations would prefer—if they can make it viable.

One of the best ways to do so is by investing in dual-use technologies: those that strengthen both military and civilian sectors. AI is a quintessential dual-use asset. It enhances logistics, prediction, communication, surveillance, and decision-making across boardrooms and battlefields.

Threading the Needle with AI

For all major players, especially the U.S., China, and the EU, AI is one strand in a broader strategic thread. Other strands include:

  • Securing access to critical minerals and semiconductors
  • Reshoring or friend-shoring supply chains vital to defense industries
  • Aligning private sector innovation with national security priorities

This last point is where the U.S. faces its greatest challenge. In China, corporate strategy can be aligned by decree. In the EU, alignment can be achieved, to some degree, by regulation. In the U.S., it must be shaped indirectly through incentives, minimal regulation, public-private collaboration, and values-based leadership.

America’s needle has a smaller eye to thread. Its policymakers must guide free agents in the private sector to act in concert with economic statecraft, without extinguishing the innovation engine they rely on.

It’s difficult. But not impossible. And with AI at the center of both economic power and military advantage, the stakes could not be higher.

V – AI Multiplies Force and Risk

It is not safe to allow adversaries to claim uncontested advantage in AI. And yet pursuing dominance carries risks too great to ignore. AI is both a force multiplier and a risk multiplier, and that duality sits at the core of every national strategy.

For the United States, with its decentralized innovation model, the strategic dilemma is especially acute. The policy needle has the smallest eye, but threading it remains essential. Unfortunately, AI is one of the most complex strands in that thread, for at least three reasons:

1. The Governance Dilemma

The temptation to go pedal-to-the-metal in pursuit of AI supremacy is understandable, but dangerous. As explored in Section II and throughout this anthology, unchecked AI acceleration invites unintended consequences:

  • Algorithmic bias embedded at scale
  • Use of lethal force by autonomous systems
  • Mass surveillance and erosion of personal privacy
  • Data misuse and the undermining of democratic processes

Yet, the instinct to hold back, driven by a desire to protect citizens from these harms, carries another risk: that you fail to protect them from external powers who hold no such reservations. In this game, ethical delay can equal strategic defeat. The challenge is not whether to proceed, but how.

2. The Labor Displacement Problem

Rapid AI deployment will inevitably displace some workers. How many? Economists can’t agree. Estimates vary wildly, from a few million to two billion. The uncertainty is staggering.

In China, where a shrinking labor force collides with national growth ambitions, replacing workers with machines is seen not as a threat, but a necessity. But in the U.S., no such demographic deficit exists. If mass displacement occurs without sufficient policy support, domestic political will for AI leadership could erode swiftly.

What’s politically essential abroad could be politically destabilizing at home.

3. The Singularity and the Unknown

Finally, there’s the question no strategist can ignore—though most prefer not to face it directly.

There are two plausible futures for AI:

  • One leads to ever-increasing utility, akin to previous technological revolutions.
  • The other leads to the “singularity”, a point after which machines surpass human intelligence and all predictive models fail.

No one knows if or when the singularity will arrive. But the stakes are existential. This introduces a non-quantifiable, precedent-free variable into policymaking, one that challenges every known model of governance, economics, and national security.

So what is to be done?

The path forward must reconcile the need to act decisively with the need to act wisely. It must resist paralysis without surrendering to recklessness. It must balance economic dynamism with democratic legitimacy. And it must prepare leaders not just to optimize AI, but to master it.

VI – Conclusion: A Call to Exercise Responsible Democratic Power

The typewriter was one of the early machines to augment human productivity. Then came the word processor, the personal computer, the Internet—and now, artificial intelligence.

In 1867, Charles Weller proposed a training sentence for learning the typewriter. A sentence that filled a line neatly, but also resonated with moral urgency:

“Now is the time for all good men to come to the aid of their country.”

Respecting today’s sensibilities, I would substitute “people” for “men”, and “the human race” for “country”. But the sentiment stands.

Now is the time for all good people to come to the aid of the human race.

This line should not be invoked casually, but it applies now, perhaps more than ever. In the case of AI, it is that time. That is the training sentence for how we must approach this technology: with shared responsibility.

There is more to democracy than marking a ballot every few years. Democracy means participating—living the values that bind a society together. It means accepting a measure of personal accountability for acting in the collective interest.

Governments can set direction, offer incentives, and build guardrails. But ultimately, it is up to us:

  • The citizens
  • The corporate decision-makers
  • The financiers
  • The regulators
  • The developers and deployers of technology

It is up to us to ensure that our values prevail.

That means:

  • Welcoming sensible regulation of AI, and innovating aggressively within those boundaries for national prosperity
  • Collaborating across political divides and international lines, even when doing so involves tolerating minor injustices in the short term
  • Aligning with national strategy, not blindly, but in a spirit of responsible partnership and stewardship

This is not a call to arms. It is a call to action:

A call to collective, intentional alignment around the technologies that will define our century.

If we fail to respond, we may find ourselves overtaken not just by adversaries, but by outcomes we never intended.

Economic strength and national security are no longer separable pursuits. In the age of AI, they are two sides of the same coin.

And it’s in our hands, now, to determine its value.

Endnotes:

1 Stephen Hawking, quoted in Rory Cellan-Jones, “Stephen Hawking Warns Artificial Intelligence Could End Mankind,” *BBC News*, December 2, 2014.

2 For a discussion of the composition of growth, see Nicholas Crafts, “The Contribution of Technological Change to Economic Growth: Lessons from the Industrial Revolution,” *Oxford Review of Economic Policy* 18, no. 3 (2002): 340–360.

3 Paul A. Samuelson and William Nordhaus, *Economics*, 19th ed. (New York: McGraw-Hill Education, 2009), 58.

4 European Commission, *Proposal for a Regulation Laying Down Harmonised Rules on Artificial Intelligence (Artificial Intelligence Act)*, April 21, 2021.

5 Lauren Kahn, Paul Scharre, and Megan Lamberth, *Artificial Intelligence and International Stability*, Center for a New American Security (CNAS), March 2021.

© 2026 Erik Britton. All rights reserved.