Part 3: Policy, Regulation & Legislation
Effective Regulation through Agentic AI
Introduction
Enron’s infamous 2001 collapse highlighted the fundamental flaw in regulatory oversight dependent on self-reported metrics. The system relied on self-interested entities to present an unbiased view of their own performance, despite the opportunity and motivation to do otherwise. In Enron’s case, that trust facilitated the convoluted use of special purpose entities and deceptive accounting practices to obscure debt and overstate profits. Because robust governance has been infeasible to scale, many of today’s regulatory practices still rely on this flawed dependency, implicitly accepting the associated risks of self-reporting, such as public health failures or environmental harm. This chapter proposes a fundamental transformation in how governance is conducted: Agentic AI systems, guided by AI-trained human regulators, analyzing data directly from immutable source ledgers. This system expands the breadth and depth of oversight by enabling both standardized and tailored analyses on trusted data, while keeping a human at the helm.
Data: The Foundation of Governance
To enable that transformation, governance must begin with trusted data. This starts by requiring all public disclosures to be rooted in the same underlying data used for regulatory insights, anchoring projections in fact, and eliminating selective reporting. This data is stored in immutable ledger tables that time-stamp each update and record the user who made it, creating a tamper-proof centralized source of truth. Industry-specific requirements, codified by regulation, will standardize what data must be shared and how each element is calculated. Enforcing this system of continuously auditable books that must be used year after year makes it increasingly difficult to mislead over time, as any attempt to shape metrics for short-term benefit is constrained by the requirement for long-term consistency and a traceable lineage connecting all reported data to specific individuals and processes. Permanently linking regulatory agencies' AI engines to these datasets enables analysis at any time, motivating companies to maintain accurate, up-to-date underlying data at all times since they cannot anticipate when they will be evaluated. This also prevents the timing of regulatory disclosures to a company’s advantage.
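The ledger mechanism described above can be sketched concretely. The following is a minimal, hypothetical illustration rather than a production schema: an append-only table in which every update is a new record carrying a timestamp, the responsible user, and a hash chaining it to the previous record, so that any retroactive edit is detectable.

```python
import hashlib
import json
from datetime import datetime, timezone

class LedgerTable:
    """Append-only ledger: every update is a new, hash-chained record.
    Illustrative sketch only; a real system would enforce immutability
    at the database layer and use cryptographic signing."""

    def __init__(self):
        self._records = []

    def append(self, user: str, metric: str, value: float) -> dict:
        prev_hash = self._records[-1]["hash"] if self._records else "0" * 64
        record = {
            "timestamp": datetime.now(timezone.utc).isoformat(),
            "user": user,            # who made the update
            "metric": metric,
            "value": value,
            "prev_hash": prev_hash,  # chains each record to its predecessor
        }
        record["hash"] = hashlib.sha256(
            json.dumps(record, sort_keys=True).encode()
        ).hexdigest()
        self._records.append(record)
        return record

    def verify(self) -> bool:
        """Recompute the hash chain; any retroactive edit breaks it."""
        prev = "0" * 64
        for r in self._records:
            body = {k: v for k, v in r.items() if k != "hash"}
            recomputed = hashlib.sha256(
                json.dumps(body, sort_keys=True).encode()
            ).hexdigest()
            if r["prev_hash"] != prev or recomputed != r["hash"]:
                return False
            prev = r["hash"]
        return True
```

The key property is that history is only ever appended, never rewritten: correcting a figure means adding a new record, leaving a traceable lineage from every reported number back to a specific person and moment in time.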
Even with robust data structures, oversight remains incomplete without addressing how metrics themselves can be selectively framed. Current regulatory frameworks often permit entities to choose which specific indicators to report within prioritized categories, allowing them to curate and frame metrics to present performance in the most favorable light. Consider, for example, the average length of stay (ALOS), a commonly used indicator of hospital efficiency. A hospital may discharge patients early to reduce its ALOS, potentially increasing its hospital readmission rate (HRR). If ALOS is reported in isolation, regulators may fail to detect that efficiency gains are being achieved at the expense of patient outcomes. Only by evaluating ALOS and HRR alongside metrics such as bed occupancy rate, discharge disposition, and 30-day mortality can regulators gain a comprehensive understanding of hospital efficiency and patient care. Simultaneous analysis across a diverse set of metrics paints the nuanced picture of an organization.
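The cross-metric reasoning above can be illustrated with a small sketch. The metric names and thresholds below are hypothetical, chosen only to show how evaluating ALOS alongside readmission and outcome metrics surfaces trade-offs that any single metric, viewed in isolation, would hide:

```python
def flag_tradeoffs(metrics: dict) -> list:
    """Flag suspicious cross-metric patterns in hospital data.
    Thresholds are illustrative assumptions, not regulatory values."""
    flags = []
    # An unusually short stay combined with elevated readmissions suggests
    # efficiency gains may be coming at the expense of patient outcomes.
    if metrics["alos_days"] < 4.0 and metrics["readmission_rate"] > 0.15:
        flags.append("Short stays with elevated readmissions: "
                     "possible premature discharge")
    # Near-capacity occupancy can create pressure to discharge early.
    if metrics["bed_occupancy"] > 0.95:
        flags.append("Near-capacity occupancy may be driving early discharges")
    # Outcome metrics anchor the efficiency picture.
    if metrics["mortality_30d"] > 0.05:
        flags.append("Elevated 30-day mortality")
    return flags
```

Each rule is trivial on its own; the point is that the flags only become meaningful when the metrics are evaluated together, which is precisely what selective, single-metric reporting prevents.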
Intelligent Tools
To make such multidimensional oversight viable at scale, Regulatory AI must be equipped with intelligent tools that partner with human regulators, supporting faster and higher-quality validations. Regulatory agencies will define a baseline set of industry-specific metrics, each with standardized calculation methods to ensure foundational coverage. Regulatory AI will compute these metrics to generate a holistic view of enterprise performance and surface potential risks, considering all aspects of an organization simultaneously. For any area of concern, the AI will initiate deeper targeted analyses until it can classify the situation as compliant, deficient, or an area requiring further investigation. It will also ingest regulatory filings, market data, and news to create tailored benchmarks that uncover anomalies for further investigation, all while safeguarding each organization’s data. These insights are compiled into a report for the human regulator that includes a high-level summary of findings, key organizational trade-offs, identified deficiencies, justifications for areas receiving additional scrutiny, and recommendations for further investigation. AI executes this work orders of magnitude faster than humans and at a scale beyond the capacity of current human resources, enabling its application across all organizations. This enables deeper oversight in previously inaccessible areas and richer insights across all regulated entities, and allows human regulators to focus on the highest-value issues. AI serves as a partner in decision-making; it is not a replacement for human judgment.
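As a simplified illustration of the three-way classification described above (compliant, deficient, or requiring further investigation), one might triage each metric against both a hard regulatory limit and anonymized peer benchmarks. The two-standard-deviation outlier rule and all parameter names here are assumptions for illustration, not part of any proposed standard:

```python
from enum import Enum

class Finding(Enum):
    COMPLIANT = "compliant"
    DEFICIENT = "deficient"
    INVESTIGATE = "needs further investigation"

def triage(metric_value: float, peer_mean: float, peer_std: float,
           hard_limit: float) -> Finding:
    """Classify one metric: a clear breach of the regulatory limit is
    deficient; a statistical outlier versus peers warrants deeper,
    targeted analysis; everything else is compliant."""
    if metric_value > hard_limit:
        return Finding.DEFICIENT
    if abs(metric_value - peer_mean) > 2 * peer_std:
        return Finding.INVESTIGATE
    return Finding.COMPLIANT
```

In the envisioned system, every `INVESTIGATE` result would trigger further AI-driven analysis until it resolves to one of the other two outcomes, with the justification recorded in the regulator's report.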
Common Concerns
A shift of this magnitude in governance practices naturally invites scrutiny, particularly from regulators whose caution serves as a vital safeguard for public trust. Several key concerns arise in this context: value realization, consistency of enforcement, and implementation pathway.
All governmental initiatives must produce a benefit that more than justifies the investment of time, resources, and training to operationalize. Agentic Regulatory AI fulfills this need by dramatically expanding the breadth and depth of oversight, operating at a scale and speed far beyond human capability, and delivering a caliber of governance that enhances both consistency and precision. This fulfills a core obligation of government: the faithful execution and enforcement of the law. The technology fundamentally redefines protection by enabling comprehensive evaluation across all systems, not just a narrow set of high-value use cases, ensuring that critical oversight is left neither to chance nor to the bandwidth of regulators.
Still, fairness in application remains a concern. Questions may arise about whether customized analyses result in inconsistent application of the law. In fact, the opposite is true: it is through these customized analyses that the Regulatory AI can verify that organizations are adherent. The subtleties of each company require focused insights to appreciate its nuances. While it is easy to mistake bespoke analyses for holding certain organizations to higher standards, the expectations placed on each entity are constant; one-size-fits-all systems simply ignore subtlety. Regulatory AI allows reviewers to focus their attention based on a prioritization that considers the full context of the business and its industry. Further review is performed only to address open questions and, through better coverage of all companies, to ensure that those standards are equally applied. Areas and organizations that are clearly adherent do not require additional review.
With the conceptual foundation in place, attention must turn to implementation. Too often, large, well-intentioned government endeavors are initiated only to be quashed as too expensive before delivering any value. Agentic Regulatory AI must instead be delivered in iterative phases, each providing tangible new capabilities. To avoid the need for a massive initial investment, the initiative can be funded per phase, with additional funding contingent on the success of the preceding phase, and the final tool evaluated against predetermined acceptance criteria. For all industries, but particularly in regulatory areas concerning financial impropriety, the monetary impacts can and should be measured, as they provide a clear and quantifiable justification for each stage of investment.
Phase 1: Matching Human-Caliber Governance
This initial phase focuses on proving that Regulatory AI matches or exceeds the caliber of current governance performed by humans over a small number of well-understood use cases within one industry. Given the same data access as a human regulator, it will be built to calculate all the metrics regulators are interested in, rather than just a small subset curated by the regulated company. It then evaluates the organization holistically, considering all metrics in context simultaneously. This phase validates that the AI can achieve proficiency equivalent to human regulators in identifying regulatory noncompliance.
Phase 2: Exceeding Human-Caliber Governance
Phase 1 demonstrated that Regulatory AI can operate on par with human regulators in their current approach to oversight. Phase 2 expands that scope to identify noncompliance through AI-generated tailored analyses incorporating cross-metric evaluation, both within the organization under review and through anonymized benchmarks derived from peer institutions. Success in this phase is measured by the Regulatory AI’s ability to identify genuine instances of noncompliance that human regulators overlooked.
Phase 3: Automated Quality Validation
Phase 2 showed that Regulatory AI can outperform human regulators through a highly manual evaluation process. To demonstrate that Regulatory AI performs at this level reliably across all organizations in the industry, its quality validation process must also be fully automated. Phase 3 therefore introduces automated, AI-generated quality validations of the Regulatory AI itself. This is the only viable path to fostering trust in AI governance at scale.
Quality standards for the Regulatory AI will themselves be codified in regulation. Each analysis performed by the Regulatory AI will be recorded for auditability and to inform future enhancements. This phase enables ongoing performance maintenance of the quality validation tools, and, as a mechanism for maintaining industry trust, the results of these validations must be shared publicly.
Phase 4: Storing Data into the Fully Auditable Tables
Now that Phases 1 through 3 have demonstrated the value of Regulatory AI, it is reasonable for regulators in Phase 4 to require regulated organizations to structure key regulatory data in immutable, auditable tables. This strengthens the integrity of the data that underpins Regulatory AI and its insights, making it far harder to misrepresent an organization in its data. The data is trustworthy and verifiable because its format preserves update history, logs every user and system update, and standardizes calculations. This phase establishes the technical and procedural backbone for trustworthy, continuous oversight.
Ideally, Phases 1 and 4 would begin simultaneously, since demonstrating the value of Regulatory AI will likely take less time than updating regulations and migrating organizations’ underlying data into immutable ledger tables. This is not the recommended practice, however, because it does not allow value to be proven at each stage before the next begins.
Phase 5: Scaling within a Single Industry
Phase 3 proved that Regulatory AI is a powerful tool for improving and expanding regulatory governance within a single domain over a small number of entities. Phase 5 leverages the automated validation toolkit to enable rapid expansion to an entire domain. Only through granular feedback about deficiencies in the evaluation process can confidence be achieved in delivering Regulatory AI at scale. This phase is not merely about adapting to variability in data availability and nuances in entity type, scale, legal structure, and location, nor about testing the analytical capabilities of the system. It is also an evaluation of the Regulatory AI as a production tool: whether it can handle massive volumes of information and computation at once. It is a test of infrastructure as much as a test of the AI. The goal is to show that the tool is not just something that could be applied to a few well-understood use cases, but one that could, in practice, be applied across an entire industry.
Phase 6: Scaling to New Industries
Phase 6 focuses on generalizing the Regulatory AI made for a specific industry and expanding it to other industries. The overall framework will be remarkably similar, so while it will still take significant investment to implement this in new industries, delivery to the new industries will be profoundly easier and faster than developing the Regulatory AI from scratch. The challenges will primarily come from nuances in domain-specific elements that were not present in the industry for which the Regulatory AI was initially developed. Each industry introduces unique concerns, data, resources, regulatory priorities, and ethical implications.
Conclusion
Enron's collapse revealed far more than financial misconduct; it exposed the profound vulnerability of regulatory oversight that relies on curated narratives and selective disclosures. Today’s governance frameworks remain susceptible to similar weaknesses, perpetuating the risk of undetected noncompliance across critical industries. Agentic AI, integrated thoughtfully into regulatory practice, represents not just an incremental improvement but a paradigm shift. It fundamentally transforms how regulators approach oversight, amplifying human judgment rather than replacing it.
This technology dynamically interprets industry-specific nuances and proactively detects subtle patterns across vast data sets, something rigid, rule-based systems inherently fail to achieve. AI agents autonomously determine the scope and depth of analyses required, guided by human oversight yet operating with the agility to adapt as data emerges. Such flexibility significantly raises the barriers against manipulation and improves regulatory efficiency at scale.
While no technological innovations are flawless, the true benchmark for success lies in tangible, measurable improvement over existing methods. Agentic AI enhances accountability, discourages deception by heightening the likelihood of detection, and fosters systemic transparency. Ultimately, the integration of Agentic AI into regulatory ecosystems strengthens public trust by creating a more robust and resilient foundation for governance, aligned closely with societal expectations of integrity and fairness.
Gargantuan paradigm shifts are slow, and changes to regulatory best practices are slower still. The ideas proposed in this chapter are the future of effective governance, but they still require taking that first step: proving that Agentic AI can match the oversight capabilities of today’s regulators. Phase 1, Matching Human-Caliber Governance, is a practical and bounded starting point. It offers a proof of concept that Agentic AI can operate at regulatory standards equal to or above today’s human oversight.
We invite mission-aligned partners to collaborate on the launch of Phase 1 and help demonstrate how AI can strengthen regulatory oversight in service of the public good. This is an opportunity to shape the effective, data-driven regulatory system of the future.
© 2026 Zachary Elewitz, PhD, MBA. All rights reserved.