Part 3: Policy, Regulation & Legislation
The Decision That Disappears: Why the Most Important Question About AI Isn't the One We're Asking
In 1967, a United States Senator knelt on the floor of a house in the Mississippi Delta and placed his hand on the stomach of a starving child.[1] Federal programs designed to prevent precisely this kind of suffering existed; reports documenting its prevalence sat on desks throughout Washington. The child was not hidden. He was simply not seen. The visit set in motion a chain of events, including media coverage, public outrage, a CBS documentary seen by millions, and direct confrontation with the Secretary of Agriculture, that culminated, years later, in the transformation of the food stamp program and what one participant called the virtual elimination of hunger in America.[2] The mechanism of change was not any single piece of legislation. It was visibility.
There is a different kind of invisibility now, operating not in the Delta but in the computational infrastructure that mediates an increasing share of American life. The structure of the problem, and the remedy, is the same.
THE PROBLEM IS NOT ARTIFICIAL INTELLIGENCE. IT IS INVISIBILITY.
AI systems are now screening employment applications, scoring rental candidates, evaluating insurance claims, generating risk assessments for the justice system, and determining benefit eligibility across the United States. In the overwhelming majority of cases, the people affected have no knowledge that a computational system was involved. Consider a woman who applies for housing. An AI system scores her application using patterns learned from historical data, patterns that correlate, through proxies no human examiner would recognize, with race, disability, or family status. She is denied and receives a form letter. Her right to fair housing is established in law. But that right is empty, because she cannot challenge a process she does not know exists.
That is the first layer of invisibility: consequential decisions made by AI behind a wall the affected person never sees through. A second layer is now arriving. Generative AI systems are drafting the denial letters themselves, conducting customer service interactions in which the person believes they are negotiating with a human being, and operating as therapists, tutors, financial advisors, and caseworkers. They generate personalized content, pricing, and recommendations calibrated to what the system has inferred about the individual it is addressing. In these interactions, the AI is not concealed behind a decision. It is the interaction. The person may have no idea they are engaging with a machine.
In both cases, a question is conspicuously absent from the national conversation about AI, which focuses overwhelmingly on the technology itself: how powerful the models are, whether they will achieve general intelligence. Almost no one is asking the question that precedes all of these: can the people whose lives are affected by these systems see that the systems are there?
The answer, at present, is no. And the reason begins with the nature of the systems themselves.
THE NATURE OF CONTEMPORARY AI SYSTEMS.
When policymakers hear “artificial intelligence,” many still imagine something analogous to a sophisticated spreadsheet: defined rules, clear inputs, traceable logic. Previous generations of automated decision-making operated this way. A credit scoring algorithm employed defined variables with assigned weights. Given access, one could read the logic and reconstruct the reasoning behind any decision. These tools were complex, but they were not incomprehensible.
Contemporary generative AI bears no meaningful resemblance to these systems. The large language models and foundation models now being deployed across American commerce, government, and civic life were not programmed with rules. They were trained on trillions of fragments of text, images, and other data collected from across the internet. No complete inventory of that training data exists, and none can be reconstructed. The companies that built these models cannot produce a definitive accounting of what their systems absorbed, because the datasets are too vast, too poorly documented, and in many cases no longer retained in their original form.
A single foundation model, built by one company, may now be deployed by thousands of other companies across thousands of distinct applications. The builder does not know how it will be used; the deployer does not fully understand how it works. The model’s behavior is distributed across billions of numerical parameters that no human being can interpret. There is no rule book to audit, no decision tree to trace. When the system produces an output, no one involved, neither builder, deployer, nor regulator, can provide a complete explanation of why.
This is the landscape we are attempting to govern, and it helps explain why many current approaches to AI accountability remain incomplete.
THREE APPROACHES, ONE SHARED FAILURE.
Three dominant approaches have emerged, each addressing part of the problem, but missing the point at which AI affects people most directly.
To understand why the three dominant approaches to AI accountability all fail in the same way, it helps to distinguish two phases of how these systems operate. The first is training: the process by which a model is built, in which it absorbs patterns from vast datasets of text, images, and other data. Training happens once, before the system is deployed. The second is inference: what happens when the trained model encounters a specific person in a specific situation. At inference, the system takes in new information, reads the individual it is interacting with, and generates an output tailored to that encounter. Training determines what the model knows. Inference determines what it does, to whom, and when. Nearly all current accountability efforts focus on the first phase. The more consequential phase is the second.
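The sketch below makes the distinction concrete. It is an illustrative toy, not any vendor's actual interface; the class and method names are invented, but the shape is the one described above: a training step that runs once over a corpus, and an inference step that runs at every encounter with a specific person's context.

```python
# Illustrative toy only: the two-phase structure described above, with invented
# names. A real foundation model is vastly larger, but the division of labor
# between the two calls is the same.

class FoundationModel:
    def __init__(self):
        self.parameters = None  # billions of numbers in a real model

    def train(self, corpus):
        """Phase 1, training: runs once, before deployment.
        The model absorbs statistical patterns from a vast corpus."""
        self.parameters = f"patterns distilled from {len(corpus)} documents"

    def infer(self, person_context):
        """Phase 2, inference: runs at every encounter.
        The fixed parameters are applied to information about a specific
        person, in a specific situation, to produce a tailored output."""
        return f"output shaped by {self.parameters}, tailored to {person_context!r}"


model = FoundationModel()
model.train(["document 1", "document 2", "..."])            # once, in the lab
print(model.infer("applicant hesitated over the premium"))  # at every interaction
```

Every accountability mechanism discussed below attaches to one of these two calls; the argument here is that the second is where the gap lies.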
With that distinction in mind, consider the three most prominent efforts to hold AI systems accountable. These are not marginal proposals. They represent the combined output of the European Union’s regulatory apparatus, the most influential voices in AI safety, and the leading technical scholarship on algorithmic fairness. Each addresses a real problem. Each has produced genuine insight. And each, for different reasons, directs its attention somewhere other than the point of inference, where the system actually meets the person it affects.
Europe looks upward. The EU AI Act, adopted in 2024, requires companies to file extensive documentation with regulatory authorities: model cards, risk assessments, conformity reports. These are genuine requirements, but they are transparency directed upward, toward institutions. The individual whose loan application, employment candidacy, or insurance claim was processed by an AI system still has no right to know that it happened.
The existential risk community looks forward. A widely cited 2025 report projected superintelligent AI by 2027; its own authors revised the timeline within months.[3] But the deeper damage is political, not technical. If the defining threat of artificial intelligence is a godlike system arriving next year, then governing the AI systems making consequential decisions about people’s lives today becomes a footnote. The focus on speculative catastrophe distracts from the systems already in use, and it unintentionally reinforces the idea that they are too powerful to regulate effectively.
The training data community looks backward. The prevailing assumption in AI accountability scholarship is that algorithmic bias originates in training data, and the remedy is to audit it. For foundation models, such auditing is technically impossible: the datasets comprise trillions of data points from millions of sources, and no complete inventory exists or can be reconstructed. But even if it were possible, it would miss the more fundamental threat. Consider a loan officer whose education has been audited and certified free of bias. That officer can still sit across a desk, read the applicant’s face, sense nervousness, and extend a worse offer, not because of anything in the officer’s training, but because of what the officer perceives about this particular applicant, at this particular moment. Contemporary AI systems do precisely what that loan officer does, but at computational scale and without the applicant ever knowing.
This is inference. A large language model deployed in a customer-facing role processes the individual’s word choices, hesitations, and engagement patterns, constructing a real-time model of who they are and what they are likely to accept. An AI system negotiating a price or presenting insurance options can calibrate its output to inferences about the individual’s emotional state, sophistication, or desperation. No audit of training data will detect this, because it does not originate in the training data. It occurs at the moment the model encounters the person.
With generative AI, inference becomes more consequential still. When a company deploys a foundation model, it does not simply install the model as-is. It provides the model with instructions, often called a system prompt, that shape how the model behaves in every interaction. These instructions are invisible to the person on the other end. A company deploying an AI customer service agent can instruct it to upsell, to discourage cancellations, to express empathy strategically, to avoid acknowledging liability, or to offer different terms to different users based on what the model infers about them in real time. The model that a person encounters is not the model that was trained. It is the model as directed by its deployer, at inference, through instructions the person will never see. No training data audit, no model card, and no regulatory filing captures this layer. It is the point at which the most consequential decisions are made, and it is the point at which visibility is most completely absent.
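A minimal sketch of this deployment layer follows. Everything in it is invented for illustration, the prompt text, the function names, the stubbed model call; it is not any particular provider's API. What it shows is the structure the paragraph above describes: a deployer-written system message prepended to every request and never surfaced to the person on the other end.

```python
# Illustrative sketch (all names hypothetical): how a deployer's instructions
# shape every interaction while remaining invisible to the person.

DEPLOYER_SYSTEM_PROMPT = """You are a customer service agent for Acme Insurance.
Discourage cancellations. If the customer seems uncertain, offer a retention
discount of up to 10%. Never acknowledge fault or liability."""

def call_foundation_model(messages):
    """Stand-in for a hosted model API. A real deployment would send `messages`,
    including the system prompt, to the provider and return generated text."""
    return "[reply generated under the hidden instructions above]"

def handle_customer_turn(conversation, customer_text):
    # The deployer's instructions are prepended to every request sent to the model...
    messages = [{"role": "system", "content": DEPLOYER_SYSTEM_PROMPT}]
    messages += conversation
    messages.append({"role": "user", "content": customer_text})
    reply = call_foundation_model(messages)
    # ...but only the reply is shown to the customer. The system prompt, and the
    # objectives it encodes, never appear anywhere in the interface.
    conversation.append({"role": "user", "content": customer_text})
    conversation.append({"role": "assistant", "content": reply})
    return reply

history = []
print(handle_customer_turn(history, "I'd like to cancel my policy."))
```

The person sees only the reply; the instructions that shaped it exist entirely on the deployer's side of the exchange, which is why no audit of the underlying model can reveal them.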
Upward. Forward. Backward. Never outward, toward the affected individual, at the moment the decision is made. That is the gap in every existing approach to AI governance.
These are not merely structural failures. In the United States, each approach faces political obstacles that render it not just incomplete but unachievable. Any proposal that requires a federal regulatory apparatus confronts the reality that no such apparatus exists, that Congress has shown no capacity to build one, and that any attempt to do so is met with the argument that regulation will drive AI development overseas and cede technological leadership to China. Any proposal that requires access to training data or model architecture runs into trade secret doctrine, which American courts have consistently protected. And any proposal that touches the development of AI systems now faces an organized political movement, heavily funded by technology and cryptocurrency interests, to establish a constitutional “right to compute” that would subject AI regulation to strict scrutiny.
The innovation argument, the competitiveness argument, and the constitutional argument converge on a practical reality: approaches that attempt to govern what happens inside the model face significant political barriers in the United States, particularly in the near term.
What remains is an approach that does not require understanding the model, accessing its training data, or building a federal agency. It requires disclosure at the point where the system meets the person. And there is a precedent for it.
VISIBILITY AS MECHANISM: THE EMPIRICAL CASE.
In 1986, Congress required industrial facilities to publicly disclose what toxic chemicals they released into the environment.[4] The law imposed no limits on emissions, banned no substances, and created no enforcement apparatus. It established a single obligation: tell the public.
Over the following decade, reported emissions fell by approximately sixty percent.[5] The reduction was driven not by enforcement (none existed) but by the consequences of visibility itself. Communities organized. Journalists investigated. Investors reassessed risk. The disclosed information generated its own accountability without a single prohibition.
What most observers miss is the second chapter. Four years after disclosure took effect, Congress enacted the most significant amendments to the Clean Air Act in a generation.[6] The 1990 amendments were drafted using data that the disclosure regime had produced, demanded by a public that the disclosed information had mobilized, and enforceable because the reporting infrastructure was already operational. The 1990 legislation could not have been written in 1985. No one knew what was in the air. After disclosure, everyone knew. And once they knew, they demanded more.
You cannot see it. Then you can see it. Then you act on what you see.
That is the sequence this framework proposes for AI. We are in 1985: no jurisdiction in the country has comprehensive data on which decisions within its borders are being made or influenced by AI systems. Legislatures cannot regulate what they cannot see.
This framework is 1986. Require that when an AI system makes or substantially informs a consequential decision about a person, that person is informed: that AI was involved, what categories of data were used, whether the system profiled them in real time, and how to contest the outcome. Allow communities to see. Allow evidence to accumulate. Substantive regulation then becomes 1990: informed by empirical data, grounded in documented patterns of impact, enforceable because the infrastructure already exists.
This is not an alternative to regulation. It is the foundation that regulation requires.
WHAT VISIBILITY REQUIRES.
Adapting the Toxics Release Inventory model to artificial intelligence is not a matter of simple analogy. Chemical disclosure involved a known substance released in a measurable quantity from a fixed location. AI systems are different: the “substance” is a decision or interaction, the “release” happens at the moment of inference, and the “location” is wherever the affected person happens to be. A visibility framework for AI must be designed for this reality. Several conditions are necessary for it to work.
First, the right to know must attach at the point of impact: the person affected by the system, at the moment the system affects them. Not upstream, at a regulatory filing office. Not after the fact, in an annual report. When an AI system makes or substantially shapes a consequential decision about a person, that person must be informed that AI was involved, what categories of data the system used, whether the system profiled them in real time, and how to contest the result. “Consequential” must be defined with precision: decisions affecting employment, housing, credit, insurance, public benefits, education, and the justice system. A spell-check does not trigger disclosure. An AI system that screens a rental application does. The boundary is not whether AI was present but whether its output shaped a decision with material consequences for a person’s life. This is the core of the framework, and it is the element that every existing approach omits.
But disclosure that AI was involved, while necessary, is not sufficient. As argued above, the most consequential layer of AI deployment is the system prompt: the invisible instructions a deployer gives a model that shape its behavior in every interaction. If the framework establishes a right to know that AI is present but leaves the deployer’s instructions entirely opaque, it addresses only half the problem it identifies. The person must also be informed of the system’s purpose: not the text of the prompt, not the model’s architecture, but what the system was instructed to optimize for. Was it directed to sell? To discourage cancellation? To triage risk? To assess eligibility? Disclosure of purpose does not require revealing proprietary methods. It requires that the person on the other end of an AI interaction knows what the system was designed to do to them.
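As a rough illustration, and nothing more, the sketch below restates these first two conditions as a simple data structure: the disclosure a person would receive when an AI system makes or substantially shapes a consequential decision about them. The field names, categories, and example values are hypothetical; they are the framework's elements expressed in code, not statutory language or an existing standard.

```python
# Hypothetical illustration of the disclosure described above; field names and
# example values are invented, not drawn from any statute or standard.

from dataclasses import dataclass, field
from datetime import datetime

@dataclass
class AIDecisionDisclosure:
    decision_domain: str                 # e.g. "housing", "employment", "credit"
    ai_involved: bool                    # the system made or substantially shaped the decision
    system_purpose: str                  # what the deployer instructed the system to optimize for
    data_categories_used: list = field(default_factory=list)
    real_time_profiling: bool = False    # did the system model the person during the interaction?
    how_to_contest: str = ""             # where and how the person can challenge the outcome
    issued_at: datetime = field(default_factory=datetime.now)

notice = AIDecisionDisclosure(
    decision_domain="housing",
    ai_involved=True,
    system_purpose="score rental applications for predicted payment risk",
    data_categories_used=["application form fields", "credit history", "eviction records"],
    real_time_profiling=False,
    how_to_contest="Request human review within 30 days through the property manager.",
)
print(notice)
```

Nothing in such a notice touches the model's internals; it describes only what the deployer already knows: that the system was used, what it was asked to do, and what information it drew on.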
Second, because federal action is foreclosed for the reasons described above, the framework must operate at the state level, and it must preserve the authority of municipalities to go further. Three hundred and fifty-one cities and towns are not a problem of fragmentation. They are an asset: laboratories of democratic governance, each capable of understanding the AI systems operating within its borders and responding to what it finds. A framework that preempts municipal authority replicates the very centralization it is meant to counteract.
Third, government itself must be subject to the same disclosure obligations it imposes on the private sector. When a state agency procures an AI system to screen benefit applications, evaluate employees, or allocate public resources, the people affected by those systems have at least as strong a claim to transparency as they do when the deployer is a private company. Government procurement of AI without public accountability is not governance. It is automation of authority without democratic oversight.
Fourth, the framework must rest on transparency, not prohibition. It must not ban AI systems, restrict their development, or require that deployers reveal proprietary model architecture. What it must require is that the effects of these systems on people are visible: that the black box, whatever it contains, announces itself. This distinction will not eliminate opposition. Industry fought the Toxics Release Inventory. It fought nutritional labeling. It fought financial transparency requirements. It will fight AI disclosure with comparable resources and sophistication. But the distinction between transparency and prohibition determines whether the framework survives that fight. Commercial disclosure requirements have decades of First Amendment precedent.[7] A framework that does not touch model development, does not require technical access to proprietary systems, and does not restrict any company’s right to build or deploy AI occupies the strongest available legal ground. The question is not whether there will be a fight. It is whether the framework is designed to win it.
Fifth, a disclosure obligation alone is insufficient if the people and institutions receiving the disclosure lack the capacity to understand what it means. The framework must therefore invest in AI literacy: not technical training in model architecture, but the civic understanding necessary for residents, municipal officials, journalists, and community organizations to interpret what is disclosed and act on it. The Toxics Release Inventory worked because communities already understood that chemicals in their water were dangerous. AI disclosure will work only if communities develop a comparable understanding of what algorithmic decision-making means for their lives.
None of these conditions requires technological innovation. None requires access to proprietary systems. None requires federal action. Each can be enacted by a state legislature in a single session. And taken together, they would make the enacting state the first jurisdiction in the world to establish a comprehensive right to know when AI is making decisions about people’s lives, not by regulating what happens inside the model, but by ensuring that what comes out of it is never invisible to the person it touches.
The child on the floor in Mississippi was not invisible because no one cared. He was invisible because no one was required to look. The AI systems shaping the lives of millions of Americans are not invisible because the problem has escaped notice. They are invisible because no law requires their disclosure to the people they affect.
The most important question about artificial intelligence is not how powerful the models will become, or whether they will achieve general intelligence. It is the question that precedes all of these: can the people whose lives are shaped by these systems see that the systems are there?
The work begins by changing that answer.
The 1967 Mississippi Delta visit was conducted by Senators Robert F. Kennedy and Joseph S. Clark as part of the Senate Subcommittee on Employment, Manpower, and Poverty’s investigation into poverty programs. For a comprehensive account, see Ellen B. Meacham, Delta Epiphany: Robert F. Kennedy in Mississippi (University Press of Mississippi, 2018). See also the accounts collected in the Mississippi Encyclopedia entry on Kennedy’s visit, https://mississippiencyclopedia.org/entries/robert-f-kennedy-in-mississippi/. ↑
The phrase is attributed to civil rights attorney Marian Wright Edelman, reflecting on the period in which advocates and the federal government worked together to expand the Food Stamp Program. See the Hunger Museum exhibit, “Food Stamps and the Advocates Who Secured Them,” MAZON, https://hungermuseum.org/exhibits/food-stamps-and-the-advocates-who-secured-them/. See also Marion Nestle, “The Supplemental Nutrition Assistance Program (SNAP): History, Politics, and Public Health Implications,” American Journal of Public Health, Vol. 109, No. 12 (Dec. 2019). ↑
The report referred to is “AI 2027,” published in April 2025 by the AI Futures Project, led by Daniel Kokotajlo (a former OpenAI researcher) along with Scott Alexander, Eli Lifland, Thomas Larsen, and Romeo Dean. The report projected superintelligent AI by the end of 2027. By November 2025, the authors revised their timelines; Kokotajlo stated his median forecast had shifted to “around 2030, lots of uncertainty though.” A July 2025 update to the forecast pushed the median back approximately 1.5 years. See https://ai-2027.com/ and the authors’ updated timelines forecast at https://ai-2027.com/research/timelines-forecast. ↑
The Emergency Planning and Community Right-to-Know Act (EPCRA) of 1986, 42 U.S.C. § 11001 et seq., was enacted as Title III of the Superfund Amendments and Reauthorization Act. Section 313 of EPCRA established the Toxics Release Inventory (TRI). See U.S. Environmental Protection Agency, “What is the Toxics Release Inventory?” https://www.epa.gov/toxics-release-inventory-tri-program/what-toxics-release-inventory. ↑
EPA estimates that toxic releases of TRI-covered compounds declined by approximately 49% from the program’s inception in 1987 through the mid-2000s. See Shanna H. Swan et al., “Environmental Justice Implications of Reduced Reporting Requirements of the Toxics Release Inventory Burden Reduction Rule,” Environmental Health Perspectives, Vol. 117, No. 10 (2009) (citing EPA estimates). The commonly cited figure of approximately sixty percent reflects total releases and transfers (as wastes) of listed chemicals between 1988 and the late 1990s. See Archon Fung & Dara O’Rourke, “Reinventing Environmental Regulation from the Grassroots Up: Explaining and Expanding the Success of the Toxics Release Inventory,” Environmental Management, Vol. 25, No. 2 (2000). See also James T. Hamilton, Regulation through Revelation: The Origin, Politics, and Impacts of the Toxics Release Inventory Program (Cambridge University Press, 2005). ↑
The Clean Air Act Amendments of 1990, Pub. L. 101-549, 104 Stat. 2399, were signed into law on November 15, 1990, four years after EPCRA established the TRI. Title III of the 1990 Amendments specifically addressed hazardous air pollutants, noting that “information generated from The Superfund ‘Right to Know’ rule (SARA Section 313) indicates that more than 2.7 billion pounds of toxic air pollutants are emitted annually in the United States.” See EPA, “1990 Clean Air Act Amendment Summary: Title III,” https://www.epa.gov/clean-air-act-overview/1990-clean-air-act-amendment-summary-title-iii. ↑
The Supreme Court established the constitutional framework for compelled commercial disclosures in Zauderer v. Office of Disciplinary Counsel of the Supreme Court of Ohio, 471 U.S. 626 (1985), holding that disclosure requirements compelling “purely factual and uncontroversial information” about commercial services are constitutionally permissible so long as they are “reasonably related to the State’s interest in preventing deception of consumers.” See also Central Hudson Gas & Electric Corp. v. Public Service Commission, 447 U.S. 557 (1980) (establishing intermediate scrutiny for commercial speech regulation); Congressional Research Service, “Assessing Commercial Disclosure Requirements under the First Amendment,” R45700 (2019). ↑
© 2026 Russ Wilcox. All rights reserved.