Part 4: Finance, Technology & Investments
Who Decides? Artificial Intelligence, National Security, Corporate Power, and the Imperatives of Democratic Governance
There is a quiet contest underway that most have not yet noticed. It is a contest not fought on a battlefield or debated in a legislature. It is unfolding inside corporate boardrooms, ethics committees, and the terms of service that govern some of the most powerful technologies of our time, especially in matters of national security and state surveillance. The question central to this contest is deceptively simple: who gets to decide how artificial intelligence is used when the stakes involve war, public safety, and the long-term stability of democratic societies?
The answer, if left to private actors alone, may not be the one we would choose.
This is not a criticism of the technology industry. The engineers and executives leading the AI revolution are, for the most part, serious, principled people wrestling genuinely with hard problems. But seriousness of purpose does not equate to sovereign authority. And in the accelerating race to develop, deploy, and sometimes restrain artificial intelligence, corporations have begun making decisions that belong – constitutionally, morally, and practically – to the institutions that democratic publics have authorized to make them.
To understand why this matters, it helps to begin with first principles. The Western democratic tradition, from Hobbes and Locke through Madison and Jefferson, rests on the premise that legitimate authority derives from the consent of the governed. Citizens delegate certain powers to the state in exchange for protection and the fair adjudication of competing interests, and they retain mechanisms – elections, courts, oversight, and constitutional constraints – to hold that state accountable. The just war tradition adds a complementary insight: decisions about the resort to and conduct of force have always been reserved to legitimate political authorities, not private actors. From Augustine and Aquinas onward, the injunction has been consistent: when the stakes are catastrophic, the decision-makers must answer to the community that bears the consequences. No corporate board, however distinguished its membership, bears those consequences in the way that citizens do.
The tension between corporate responsibility and democratic authority has been most visibly dramatized in the domain of AI in warfare. Project Maven, the Pentagon's 2017 initiative to use machine learning to improve the analysis of drone surveillance footage, became a flashpoint when Google employees petitioned – successfully – to end the company's contract. The employees were not wrong to raise ethical concerns. The process by which those concerns were resolved, however, deserves scrutiny.
What happened at Google was not simply a company choosing its customers carefully. It was a corporate governance process that effectively inserted itself into a national security policy decision. The question of whether and how AI should assist military targeting is precisely the kind of decision that democratic societies, acting through their elected representatives and the military and civilian leadership those representatives oversee, are supposed to make. When a company's internal ethics review substitutes for that process – particularly when the company has an outsized influence on the technology – the chain of democratic accountability is interrupted. To be clear, the decision may have been the right one. But it was made by the wrong institution, through the wrong mechanism, accountable to the wrong constituency.
This dynamic has only intensified since Maven. Anthropic, one of the world's leading AI companies, has recently navigated public scrutiny over its relationships with defense and intelligence customers. The specifics matter less here than the pattern they reveal: the point is not to litigate those decisions but to examine what they say about who is effectively setting policy. When AI firms establish internal policies about which government applications they will and will not support, they are not merely setting product guidelines. They are shaping – even constraining – the operational options available to national security leadership. They are, in effect, making policy, without the electoral accountability that policymaking requires. Under this arrangement, a board of directors can narrow the strategic choices available to a democratically accountable commander-in-chief. That is a significant transfer of power, and it has happened largely without public debate.
The problem is not that corporations have ethics. The problem is that well-intentioned ethics, applied without democratic sanction, can produce outcomes that no electorate authorized. A technology company that refuses to support certain military applications may be acting in accordance with its stated values. But it is also – whether its leaders recognize it or not – making a judgment about the appropriate balance between civilian oversight of the military, national security priorities, and international humanitarian law. These are judgments that belong to governments, legislatures, and ultimately to voters.
The Maven episode was dramatic enough to attract widespread attention. But the more pervasive and consequential version of this problem is quieter. It happens every time an AI company writes a policy that determines which government agencies can use its products, every time a foundation model provider decides which outputs it will and will not generate for law enforcement, and every time a technology executive testifies before Congress and implicitly signals which regulatory frameworks their company will accept.
This is not a conspiracy. It is the predictable result of deploying powerful technology without a clear framework for allocating decision-making authority. When democratic institutions move slowly, as they often must in order to preserve deliberation and accountability, fast-moving technology companies fill the gap. They develop their own norms, their own enforcement mechanisms, their own version of policy. And because their products are indispensable, their de facto policies become real constraints on governmental action.
Infrastructure provides a parallel. The operators of power grids, water systems, and communications networks make decisions every day that affect public safety. We do not leave those decisions entirely to market forces. We regulate them, we require transparency, and in some cases, we treat the infrastructure itself as a public utility subject to public obligation. As AI becomes embedded in payment systems, trading platforms, logistics networks, and other data-rich infrastructure, corporate choices about where and how to deploy these capabilities shape not only operational risk, but also the resilience of financial systems and the distribution of economic power. The case for analogous accountability frameworks, therefore, becomes stronger, not weaker.
None of this implies that corporations should be passive instruments of government direction. The innovation capacity of the private sector is genuinely irreplaceable. The goal is not to nationalize AI development. The goal is to ensure that the application of AI to sensitive national security and public safety questions is governed through legitimate democratic processes.
The appropriate division of roles looks something like this:
Corporations. Corporations innovate, advise, and uphold meaningful ethical standards within their spheres of operation. They develop the technology, compete to improve it, and bring the expertise that comes from being at the frontier of what is possible. They can and should articulate the risks and limitations of their systems, refuse to misrepresent capabilities, and decline to participate in clearly illegal applications. What corporations should not do is unilaterally determine the conditions under which their technology may be used in contexts that affect national security, foreign policy, or the use of force.
Government. The path forward requires building the governmental capacity to exercise meaningful oversight of AI applications in sensitive domains. Congress must assert its oversight role, not abdicate it to agency discretion or corporate self-regulation. The executive branch must build the expertise to govern what it is acquiring. The military services must develop doctrine and accountability frameworks for AI-enabled operations that are as rigorous as those governing any other use of lethal force. And all of these institutions must communicate clearly to the private sector: we value your innovation, we welcome your counsel, and we will hold ourselves – not you – accountable for the decisions we make.
This is not merely a procedural argument. It is rooted in the substantive insight that decisions with potentially catastrophic consequences – decisions that could lead to armed conflict, to the violation of civil liberties at scale, or to the destabilization of critical infrastructure or financial institutions – require a level of accountability commensurate with their stakes. The social contract that justifies governmental authority is, at its core, a promise that those who bear the consequences of collective decisions will have a meaningful say in making them – or at the very least a means to hold governments accountable for them. AI governance frameworks that route consequential decisions through corporate ethics committees, rather than democratic institutions, break that promise.
The American constitutional system was designed by people who understood that power corrupts, that even well-intentioned actors make self-serving judgments, and that the only reliable safeguard against the abuse of power is structural – dividing it, checking it, and requiring it to answer to legitimate constituencies. The framers were thinking about governmental power, but the insight generalizes.
The AI industry today concentrates extraordinary capability in a small number of firms. The decisions those firms make about how to develop, deploy, and constrain their technology will shape the security environment, the information landscape, and the balance of power among nations for decades. This is precisely the kind of concentrated, consequential power that the constitutional tradition teaches us to treat with suspicion – not because the people who hold it are malicious, but because structural accountability is more reliable than individual virtue.
The corporations that are building AI have a role to play in this process. They should welcome it, not resist it. A technology industry that is seen as a law unto itself – making consequential policy decisions without democratic sanction – is an industry that is accumulating a legitimacy deficit it will eventually have to repay. The more productive path is to engage genuinely with the project of building governance frameworks that allow innovation to proceed while ensuring that the decisions that matter most are made by the institutions that citizens have empowered to make them.
The emergence of artificial intelligence as a general-purpose technology of strategic significance is one of the defining developments of the early twenty-first century. Like the emergence of nuclear technology in the mid-twentieth century, it poses questions that go beyond the technical and into the constitutional, the ethical, and the political. How will the power that AI makes possible be governed? Who will decide how it is used? And to whom will those decision-makers be accountable?
These are not questions that markets will answer well on their own. They are questions that require the engagement of democratic institutions, informed by technical expertise but not subordinate to corporate preference. The just war tradition tells us that decisions about the use of force must be authorized by legitimate authority. The social contract tradition tells us that legitimate authority derives from the consent of the governed. Constitutional democracy tells us that the consent of the governed is expressed through representative institutions constrained by law.
Artificial intelligence does not change these principles. It makes them more urgent. The decisions that AI enables – decisions about surveillance, about targeting, about the integrity of financial systems and democratic elections – are precisely the decisions that must be made through processes that citizens can hold accountable. Technology companies can illuminate those decisions, can set the boundaries of what is technically possible, and can refuse to participate in what is clearly illegal. But the decisions themselves must rest with governments that answer to the people who will live with their consequences.
Getting this right is not simply a matter of governance procedure. It is a matter of whether democratic societies will remain genuinely self-governing in an age when the most powerful tools of statecraft are built and, increasingly, controlled by private actors. The answer will depend on whether citizens, their elected representatives, and the technology industry itself can agree on a principle that the constitutional tradition has long affirmed: that power of this magnitude must be accountable to those it affects, and that such accountability is possible only through democratic governance.
Companies will build the future. But in a democracy, the people must decide what kind of future it will be.
© 2026 LTG Eric J. Wesley (U.S. Army, Ret.). All rights reserved.