Part 1: Ethics & Responsible AI

Navigating the Nexus: Transparency, Accountability, and Governance

Amanda Molina & Jamison Rotz

In 2019, internal documents revealed that Instagram’s follow-recommendation tools were suggesting minors’ accounts to adults flagged for “groomer-like” behavior—at significantly higher rates than to ordinary users—and that these findings were escalated to senior leadership.1 In 2023, a joint investigation by the Wall Street Journal and the Stanford Internet Observatory documented that Instagram’s search, hashtag, and recommendation features helped users locate accounts advertising or trading in child sexual abuse material and related “self-generated” content, effectively lowering the friction for predators to find each other and their targets.2 Subsequent letters from U.S. senators and regulatory filings have described Instagram as having operated, for a period, as an “open-air market” for such material and alleged that Meta failed to take adequate action despite clear external and internal warnings; Meta disputes these characterizations but has nonetheless rolled out remedial measures and algorithm changes in direct response to these revelations. Social media algorithms are among the longest-running forms of large-scale machine learning in our society, and this example highlights the risks we face, both technical and social, as we navigate technology in the age of AI.

A Fundamental Shift in Computing

The world of technology is changing. Since the 1950s, deterministic, rules-based systems have defined the technologies powering our industries, but this is starting to change. Thanks to the massive availability of internet data and advances in cloud computing since 2010, probabilistic machine learning systems have gained capabilities that deliver real value but work quite differently, presenting new challenges for designing and deploying highly trustworthy systems.

The benefit of probabilistic systems is that they can adapt to changes in data and logic automatically by "learning" from a continually updated stream of data as it is generated through real-world transactions. This avoids the human labor required to continuously analyze data and update rules to match changes in processes and data in the real world.
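To make this concrete, here is a minimal sketch of learning from a stream (our own illustration; the fraud-rate scenario and learning rate are invented for the example). A single online update rule lets the estimate drift toward whatever the incoming data shows, with no human editing any rules:

```python
# Minimal sketch (ours) of the "learning from a stream" idea: an online
# update that adapts an estimated rate as each new transaction arrives.

LEARNING_RATE = 0.05

def update(estimate: float, observation: float) -> float:
    """Exponentially weighted update: drift toward what the data now shows."""
    return estimate + LEARNING_RATE * (observation - estimate)

estimate = 0.10  # initial fraud rate learned from historical data
stream = [0, 0, 1, 0, 1, 1, 1]  # real-world process shifts: fraud rising
for observation in stream:
    estimate = update(estimate, observation)
print(f"adapted estimate: {estimate:.2f}")  # rises toward the new reality
```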

Two tradeoffs must be contended with as machine learning systems become more ubiquitous in the computing landscape.3 First, probabilistic systems perform poorly on edge cases. This is a natural, inherent tension in the technology: edge cases are by definition statistical outliers, so the data describing them is sparse and the predictions a probabilistic system makes about them are unreliable.
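A toy calculation shows why. Using the standard error of an estimated proportion, sqrt(p(1-p)/n), the sketch below (our illustration; the rate and sample sizes are invented) shows how uncertainty explodes when only a handful of examples exists:

```python
import math

# Toy illustration (ours): the standard error of an estimated rate shows why
# sparse edge-case data yields unreliable predictions while common cases are
# estimated tightly.

def standard_error(p: float, n: int) -> float:
    return math.sqrt(p * (1 - p) / n)

true_rate = 0.3  # assumed underlying rate of some outcome
for n in (10, 1_000, 1_000_000):  # edge case -> common case
    se = standard_error(true_rate, n)
    print(f"n={n:>9,}: estimate = {true_rate:.2f} +/- {1.96 * se:.3f} (95% CI)")

# n=       10: estimate = 0.30 +/- 0.284 (95% CI)  <- edge case: nearly useless
# n=    1,000: estimate = 0.30 +/- 0.028 (95% CI)
# n=1,000,000: estimate = 0.30 +/- 0.001 (95% CI)  <- common case: tight
```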

Second, these systems handle failure states quite differently from rules-based expert systems. Traditional expert systems operate in a run state under conditions their implementing engineers anticipated; a properly coded system enters a fail state when confronted with circumstances outside the algorithm's design. Probabilistic systems, on the other hand, will always return a result, even if the confidence in that result is low. They enter a failure state only when they encounter a constraint or "guardrail" coded into the system. Because engineers must imagine potential failure states in advance and codify constraints against them, any failure mode they miss goes unguarded, leading to higher error rates in machine learning systems and thus increased risk. Furthermore, the non-deterministic nature of many machine learning systems makes them more difficult to test conclusively.
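The contrast can be sketched in a few lines of code. Everything below is our own illustrative toy (the routing labels, scoring rule, and threshold are invented): a rules-based system fails loudly on an unanticipated input, while a probabilistic one happily returns its best guess unless an engineered guardrail catches the low confidence:

```python
import math

# Our illustrative toy, not any production system: contrast how a rules-based
# system and a probabilistic one handle an input nobody anticipated.

RULES = {"invoice": "route_to_billing", "refund": "route_to_support"}

def rules_based(request_type: str) -> str:
    """Deterministic routing: fails loudly outside anticipated conditions."""
    if request_type not in RULES:
        raise ValueError(f"unhandled condition: {request_type!r}")  # fail state
    return RULES[request_type]

def probabilistic(text: str, threshold: float = 0.7) -> str:
    """Toy scorer: keyword overlap turned into a softmax 'confidence'."""
    scores = {label: sum(word in text for word in label.split("_")) + 0.1
              for label in RULES.values()}
    total = sum(math.exp(s) for s in scores.values())
    probs = {label: math.exp(s) / total for label, s in scores.items()}
    best, confidence = max(probs.items(), key=lambda kv: kv[1])
    # Without the guardrail below, the system would always answer, however
    # far the input is from anything it has seen.
    if confidence < threshold:
        return "defer_to_human"  # engineered guardrail, not learned behavior
    return best

print(probabilistic("question about my billing invoice"))  # route_to_billing
print(probabilistic("zebra"))                              # defer_to_human
```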

Still, the economic benefit of letting machine learning algorithms handle massive complexity far more efficiently than traditional systems has caused adoption to explode in the past five years, accelerating since the release of ChatGPT at the end of 2022. In 2021, McKinsey's State of AI survey found that 56% of enterprises were using AI for at least one business function.4 By 2025, that figure had risen to 78%, with many firms citing AI systems in multiple units, and 71% of respondents reported using generative AI in at least one function.5

The increasing ubiquity of generative AI algorithms, along with the computing risks associated with them, has rightfully drawn the attention of governments and organizations concerned with the potential consequences of the technology. The lawsuits filed against Meta exemplify what happens when these risks materialize without proper governance structures in place.

This chapter lays out our framework for accountability and governance that organizations can voluntarily adopt in order to move forward and accelerate technology responsibly. Learning from the example at Meta, we propose a framework built on a baseline risk analysis for AI systems and three interdependent pillars: transparency, accountability, and governance.

Many AI regulation frameworks have been proposed, but in this fast-moving environment they have been ineffective at achieving the desired results at scale. Many are outdated by the time they are issued. Others have focused on system measurements, technical limits, and controls, which have proven unenforceable on a global scale or, in the case of enforceable regulations such as export restrictions, largely ineffective at achieving the desired goals.

Instead of measuring what is technically convenient, we must recognize that these technologies behave in a more “human” manner: making judgments based on data, reasoning toward goals, developing blind spots where data is sparse, and making mistakes. While AI systems process vastly more data than humans and scale with economic resources rather than biological limits, the statutory law governing human behavior can be applied to technical systems through proper accountability structures.

As a society, we've largely decided which norms to enshrine in law for the common good. The statutory law needed to govern AI already exists for human societies. It can be applied to technical systems with the proper accountability system in place, one that ensures responsible behavior from the corporations producing the technology as well as from the entities and individuals adapting and using it.

Done correctly, an accountability structure can reward institutions that act responsibly by allowing them to move forward quickly, while constraining those who demonstrate patterns of recklessness or harm as defined by existing law.

The global nature of this technology presents challenges, since different societies hold differing norms and laws. As the American Society for AI, we will focus our efforts on Western democratic social structures, which we see as the best path to equitable prosperity for all.

The three pillars are mutually reinforcing. Transparency enables accountability by making operations visible. Accountability informs governance by establishing clear expectations. Governance shapes transparency by defining disclosure requirements. Weakness in any pillar undermines the entire ecosystem: transparency without accountability becomes disclosure theater; accountability without transparency becomes arbitrary; governance without both becomes technocratic and unresponsive.

Implementation faces persistent challenges, including resource constraints, tensions between regulatory stability and technological adaptation, and the need for global coordination. Many organizations lack the expertise and resources to implement comprehensive measures. Frameworks must balance stability for predictable guidance with flexibility for emerging capabilities.

Applying Regulation According to Risk

Baseline Risk Analysis

The comprehensive evaluation of AI system risks requires a structured approach across four critical dimensions that determine appropriate governance measures. These dimensions, validated through analysis of global frameworks including the European Union’s AI Act (EU AI Act), National Institute of Standards and Technology (NIST) standards, and industry implementations, provide a systematic method for assessing and managing AI risks.

Functional Scope Risk

Different AI applications create fundamentally different risk profiles based on their intended function and domain. A recommendation algorithm for entertainment content operates in a vastly different risk context than an AI system making medical diagnoses or criminal justice decisions.

The EU AI Act exemplifies proportionate governance by establishing risk-based categories with tailored requirements, recognizing that a one-size-fits-all framework would either stifle innovation in low-risk applications or fail to protect against high-risk uses.6 Key assessment questions for leaders include the following: What decisions will this AI system influence? What are the potential consequences of errors? Are vulnerable populations particularly affected?

Deployment Size Considerations

Scale fundamentally transforms risk. An AI system tested successfully with 1,000 users may exhibit entirely different behaviors when deployed to millions. The EU's threshold of 10^25 floating-point operations for systemic risk classification reflects growing recognition that computational scale correlates with potential societal impact.7

Large-scale deployments amplify risk through network effects, market influence, systemic dependencies, and data concentration. Google's approach, triggering new safety evaluations at each 2x increase in compute, provides a model for dynamic risk assessment that evolves as the system grows.
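As a back-of-the-envelope illustration (ours, not part of either framework), training compute is commonly approximated as 6 × parameters × training tokens; the sketch below uses that heuristic to test a hypothetical model against the EU threshold and to enumerate evaluation checkpoints at each doubling of compute:

```python
# Back-of-the-envelope sketch (our illustration, not an official methodology).
# Assumes the common heuristic: training FLOPs ~= 6 * parameters * tokens.

EU_SYSTEMIC_RISK_THRESHOLD = 1e25  # floating-point operations (EU AI Act)

def training_flops(params: float, tokens: float) -> float:
    """Estimate training compute with the 6*N*D rule of thumb."""
    return 6 * params * tokens

def crosses_eu_threshold(params: float, tokens: float) -> bool:
    return training_flops(params, tokens) >= EU_SYSTEMIC_RISK_THRESHOLD

def doubling_checkpoints(start_flops: float, target_flops: float) -> list[float]:
    """Compute budgets at which a new safety evaluation would be triggered,
    one per 2x increase in compute (the cadence Google reportedly uses)."""
    checkpoints, budget = [], start_flops
    while budget < target_flops:
        budget *= 2
        checkpoints.append(budget)
    return checkpoints

# Hypothetical 400B-parameter model trained on 15T tokens:
print(crosses_eu_threshold(4e11, 1.5e13))       # True: 3.6e25 FLOPs >= 1e25
print(len(doubling_checkpoints(1e22, 3.6e25)))  # 12 evaluations along the way
```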

Level of Autonomy

The degree of human oversight fundamentally shapes AI risk profiles. Current frameworks classify systems across six levels, from Level 0 (no automation) to Level 5 (full automation), with critical distinctions between human-in-the-loop (active involvement), human-on-the-loop (monitoring with intervention capability), and human-out-of-the-loop (independent operation).8
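One way a governance tool might encode this taxonomy is sketched below; the enum names and the mapping from level to oversight mode are our own illustrative assumptions, not a normative schema from the cited frameworks:

```python
from enum import IntEnum

# Illustrative sketch only: one encoding of the six-level taxonomy and the
# oversight modes described above. The mapping is our assumption.

class AutonomyLevel(IntEnum):
    NO_AUTOMATION = 0        # human performs the task
    ASSISTED = 1
    PARTIAL_AUTOMATION = 2
    CONDITIONAL_AUTOMATION = 3
    HIGH_AUTOMATION = 4
    FULL_AUTOMATION = 5      # system operates independently

def oversight_mode(level: AutonomyLevel) -> str:
    if level <= AutonomyLevel.PARTIAL_AUTOMATION:
        return "human-in-the-loop"      # active human involvement
    if level <= AutonomyLevel.HIGH_AUTOMATION:
        return "human-on-the-loop"      # monitoring with intervention capability
    return "human-out-of-the-loop"      # independent operation

print(oversight_mode(AutonomyLevel.CONDITIONAL_AUTOMATION))  # human-on-the-loop
```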

Higher autonomy demands more stringent governance. Policymakers face the challenge of creating frameworks flexible enough for today's semi-autonomous systems while anticipating the emergence of fully autonomous AI within the decade.

Failsafe Mechanisms

Robust failsafe mechanisms are essential when AI systems operate in unpredictable real-world environments. These mechanisms must address both technical failures and the inherent limitations of probabilistic systems in handling edge cases.

Essential failsafe components form an integrated defense system: confidence thresholds that recognize uncertainty and defer to human judgment; circuit breakers that shut down systems when anomalies are detected; graceful degradation that maintains core safety functions during partial failures; and comprehensive audit trails that enable post-incident analysis. Industry convergence around IEEE P7009 and ISO/IEC 42001 standards provides the foundation for consistent implementation across sectors.9
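A minimal sketch shows how these four components can compose around a model call. All names, thresholds, and behaviors below are our own assumptions for illustration; IEEE P7009 and ISO/IEC 42001 state requirements, not code:

```python
import logging
from typing import Callable

logging.basicConfig(level=logging.INFO)
audit_log = logging.getLogger("audit")  # comprehensive audit trail

class Failsafe:
    """Illustrative wrapper composing the four failsafe components above."""

    def __init__(self, model: Callable[[str], tuple[str, float]],
                 confidence_threshold: float = 0.8, max_anomalies: int = 3):
        self.model = model
        self.confidence_threshold = confidence_threshold
        self.max_anomalies = max_anomalies
        self.anomalies = 0
        self.tripped = False  # circuit breaker state

    def predict(self, x: str) -> str:
        if self.tripped:
            return self.safe_default(x)  # graceful degradation after shutdown
        try:
            result, confidence = self.model(x)
        except Exception as err:
            self.anomalies += 1
            audit_log.error("model failure on %r: %s", x, err)
            if self.anomalies >= self.max_anomalies:
                self.tripped = True  # circuit breaker: stop trusting the model
                audit_log.error("circuit breaker tripped")
            return self.safe_default(x)
        audit_log.info("input=%r result=%r confidence=%.2f", x, result, confidence)
        if confidence < self.confidence_threshold:
            return self.defer_to_human(x)  # recognize uncertainty
        return result

    def safe_default(self, x: str) -> str:
        return "SAFE_MODE"  # maintain core safety function only

    def defer_to_human(self, x: str) -> str:
        return "ESCALATED_TO_HUMAN"

# Hypothetical flaky model for demonstration:
def toy_model(x: str) -> tuple[str, float]:
    if x == "boom":
        raise RuntimeError("sensor glitch")
    return ("APPROVE", 0.65 if x == "odd case" else 0.95)

fs = Failsafe(toy_model)
print(fs.predict("normal case"))  # APPROVE (confident)
print(fs.predict("odd case"))     # ESCALATED_TO_HUMAN (low confidence)
```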

Transparency

The Foundation of Trust

With baseline risks understood, transparency emerges as the cornerstone for building public trust in AI systems. The opacity of modern AI, particularly large language models, poses fundamental challenges to democratic accountability that institutions must address through comprehensive disclosure frameworks.10 These disclosures can be validated through open-source methodologies and third-party audits, creating "trust badges" that make transparency a competitive advantage. Given the complexity of modern AI systems, organizations should provide layered explanations: technical details for regulators, functional descriptions for business users, and outcome-focused explanations for affected individuals. Done well, this transforms transparency from a compliance requirement into a market differentiator.

The Regulatory Landscape

Global recognition of transparency's critical role has driven legislative action. The EU AI Act mandates that high-risk AI systems enable deployers to "reasonably understand" their functioning, while California's AI Transparency Act sets new standards by requiring both visible "manifest" disclosures and embedded "latent" metadata for systems with over one million monthly users.11 These frameworks establish the principle that organizations deploying AI affecting public welfare must provide meaningful transparency to all stakeholders.12

Four Facets of Transparency

Meeting regulatory requirements and public expectations demands transparency across four dimensions:

  • AI System Documentation: Standardized disclosure of model architectures, capabilities, limitations, and intended use cases, following the nutrition label model for accessibility (see the sketch after this list)
  • Training Data Transparency: Disclosure of data sources, curation principles, and known biases, now mandatory under the EU AI Act for high-risk systems
  • Safety and Guardrail Documentation: Explicit documentation of override capabilities and constraint systems, with verifiable and persistent safety measures
  • User Data Management: Clear policies on how user interactions train systems, articulated in accessible language that goes beyond standard privacy law compliance
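To illustrate the nutrition-label idea from the first item above, a machine-readable disclosure covering all four facets might look like the following sketch. The field names are our own assumptions, loosely echoing published model-card templates rather than any mandated schema:

```python
import json

# Illustrative "nutrition label" for an AI system. The system and all field
# names are our assumptions, modeled loosely on model-card templates.
disclosure = {
    "system_name": "ExampleRecommender",  # hypothetical system
    "model_architecture": "transformer, 1.2B parameters",
    "intended_use": ["media recommendations for adults"],
    "out_of_scope_use": ["medical, legal, or financial decisions"],
    "known_limitations": ["sparse-data edge cases", "English-centric training"],
    "training_data_sources": ["licensed catalog metadata", "opt-in usage logs"],
    "known_biases": ["popularity bias toward mainstream content"],
    "safety_guardrails": ["age-gating", "human review of flagged outputs"],
    "override_capabilities": "operators can suppress any recommendation",
    "user_data_policy": "interactions train the model unless the user opts out",
}

print(json.dumps(disclosure, indent=2))
```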

Accountability

Global Approaches to AI Accountability

The international landscape reveals diverse strategies for balancing innovation with responsibility.12 The EU pursues comprehensive regulation through strict liability chains and conformity assessments. The United States follows a more fragmented approach, with state-level initiatives like Colorado's AI Act focusing on anti-discrimination in automated decisions affecting employment, housing, and healthcare.13

Despite jurisdictional differences, convergent themes emerge: mandatory pre-deployment assessments, emphasis on explainability that reveals not just what AI decides but how and why, and recognition that accountability requires understanding AI's decision-making processes.

Concrete Accountability Mechanisms

Effective accountability demands mechanisms with real consequences. Liability frameworks must shift the burden of proof; the EU's proposed AI Liability Directive would require AI companies to prove their systems didn't cause damage, rather than forcing victims to prove they did. In the U.S., legal scholars advocate treating AI decisions as products subject to safety requirements, with companies facing liability similar to that of defective product manufacturers when faulty AI causes harm, such as unfairly denying healthcare. Additionally, stronger professional standards should apply to individuals throughout the AI development hierarchy, with accountability scaling by company size: startups face basic disclosure requirements while large platforms undergo mandatory audits and maintain AI ethics boards with binding authority.

Beyond traditional liability and professional standards, innovative market-based mechanisms can strengthen accountability. Algorithmic whistleblower provisions could offer financial rewards for reporting AI governance violations, creating market-based enforcement in which companies and employees help police the industry. This model, proven in finance and healthcare, would incentivize early detection of dangerous AI practices before they cause harm. Penalties could escalate from warnings to mandatory audits to criminal liability for executives who knowingly deploy harmful systems. Additionally, industry-funded insurance pools, similar to existing models in nuclear power and vaccines, could require AI companies to contribute to shared funds that cover damages from AI failures. Companies controlling over 20% market share would face heightened antitrust-like scrutiny, creating collective industry stakes in safety.

Governance

Adaptive Frameworks for Rapid Change

With transparency enabling oversight and accountability ensuring consequences, governance must provide the adaptive framework that keeps pace with AI's rapid evolution. Static regulations developed through lengthy deliberation cannot match development cycles measured in months. Effective governance focuses on results rather than prescriptive technical requirements that quickly become obsolete.

AI's borderless nature demands multilevel coordination. The EU AI Act provides comprehensive mandatory regulation, while the U.S. NIST AI Risk Management Framework offers voluntary guidance that emphasizes flexibility.14 This divergence creates potential for regulatory arbitrage, where companies relocate operations to avoid stricter EU requirements. Technical standards through ISO/IEC create a common vocabulary while respecting regional differences, and the OECD AI Principles enable democratic nations to align approaches voluntarily. The challenge lies in harmonizing mandatory regulations like the EU's with voluntary frameworks like the U.S.'s while still fostering innovation: without convergence between binding and voluntary approaches, companies can simply choose the least restrictive jurisdiction, exploiting regulatory gaps and undermining the effectiveness of both systems.

Beyond traditional regulatory approaches, innovative governance mechanisms can provide continuous oversight and market-based enforcement. Automated validation frameworks would enable real-time monitoring through sentiment analysis of user complaints, FDA-style incident reporting databases, and compliance verification systems, transforming AI oversight from periodic audits to continuous surveillance. Creative protection mechanisms would establish new rights requiring attribution when AI mimics creators' styles, revenue-sharing for training on creative works, and consent for replicating voices or artistic styles, ensuring creators benefit rather than being displaced by AI trained on their work. Public scorecards would rank companies' AI governance practices with transparent metrics visible to consumers and investors, allowing market forces to reward responsible development and penalize safety shortcuts. Together, these mechanisms create a dynamic governance ecosystem that adapts as quickly as the technology itself while maintaining democratic accountability and protecting individual rights.
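As one illustration of the scorecard idea, the sketch below computes a transparent, weighted governance score; the metric names, weights, and figures are entirely our own assumptions rather than any published rating methodology:

```python
# Illustrative public-scorecard computation. Metric names, weights, and the
# example figures are our assumptions, not any published rating methodology.

WEIGHTS = {
    "disclosure_completeness": 0.3,  # share of required documentation published
    "audit_findings_resolved": 0.3,  # fraction of audit findings remediated
    "incident_response_speed": 0.2,  # normalized: 1.0 = fastest quartile
    "user_complaint_trend": 0.2,     # 1.0 = complaints falling, 0.0 = rising
}

def governance_score(metrics: dict[str, float]) -> float:
    """Weighted average of normalized (0..1) governance metrics."""
    return sum(WEIGHTS[name] * metrics[name] for name in WEIGHTS)

companies = {
    "VendorA": {"disclosure_completeness": 0.9, "audit_findings_resolved": 0.8,
                "incident_response_speed": 0.7, "user_complaint_trend": 0.6},
    "VendorB": {"disclosure_completeness": 0.4, "audit_findings_resolved": 0.5,
                "incident_response_speed": 0.9, "user_complaint_trend": 0.3},
}

for name, metrics in sorted(companies.items(),
                            key=lambda kv: governance_score(kv[1]), reverse=True):
    print(f"{name}: {governance_score(metrics):.2f}")  # VendorA 0.77, VendorB 0.51
```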

Implementation Imperatives

The governance challenges posed by AI demand nothing less than reimagining how democratic societies balance innovation with accountability. When operations are hidden, accountability is evaded, and profit motives capture governance, AI systems become instruments of catastrophe rather than progress. Success requires coordinated action: private sector leaders must build AI governance boards with genuine authority, continuous risk monitoring systems, and operational feedback loops, while governments need technical expertise within democratic institutions, stable yet flexible principles, and international coordination mechanisms.

Both sectors must recognize that frameworks alone are insufficient without sustained commitment. The pace of AI development requires governance structures that adapt as quickly as the technology itself while maintaining the stability necessary for public trust. We cannot allow the wellbeing of humanity, especially children, to become the price of ungoverned AI. In this pivotal moment, leaders must choose whether to shape AI's trajectory through proactive governance or react to its consequences through crisis management. The children harmed by Instagram's recommendation systems deserved better; future generations deserve our commitment to ensuring such preventable tragedies never happen again.

Endnotes:

1 Riley Griffin and Kurt Wagner, “Instagram Suggested ‘Groomers’ Connected with Minors, FTC Says,” Bloomberg, 2025, https://bloomberg.com.

2 Jeff Horwitz and Katherine Blunt, “Instagram Connects Vast Pedophile Network,” The Wall Street Journal, 2023, https://wsj.com.

3 Karen Fong, From Paper to Practice: Utilizing the ASEAN Guide on Artificial Intelligence (AI) Governance and Ethics (Singapore: ISEAS Publishing, 2024), https://doi.org/10.1355/9789815203684.

4 McKinsey & Company (QuantumBlack), Global Survey: The State of AI in 2021, McKinsey & Company, December 8, 2021, accessed June 2025, https://www.mckinsey.com/~/media/McKinsey/Business%20Functions/McKinsey%20Analytics/Our%20Insights/Global%20survey%20The%20state-of-AI-in-2021/Global-survey-The-state-of-AI-in-2021.pdf.

5 McKinsey & Company (QuantumBlack), “The State of AI: How Organizations Are Rewiring to Capture Value,” McKinsey, published March 12, 2025, accessed June 3, 2025, https://www.mckinsey.com/capabilities/quantumblack/our-insights/the-state-of-ai.

6 T. Karathanasis, "Fitting 'Systemic Risks' into a Taxonomy in the GPAI Code of Practice: Will the Resulting Ambiguity Be Exploited by GPAI Model Providers?" Journal of Internet Law 28, no. 6 (2025): 1–22.

7 Ibid.

8 Adriano Alessandrini, Lorenzo Domenichini, and Valentina Branzi, “Chapter 2 - Automation Functions, Philosophies, and Levels,” in The Role of Infrastructure for a Safe Transition to Automated Driving (Amsterdam: Elsevier, 2021), 7–47, https://doi.org/10.1016/B978-0-12-822901-9.00006-3.

9 Michael Farrell et al., “Evolution of the IEEE P7009 Standard: Towards Fail-Safe Design of Autonomous Systems,” in 2021 IEEE International Symposium on Software Reliability Engineering Workshops (ISSREW) (2021): 401–406, https://doi.org/10.1109/ISSREW53611.2021.00109; Samira A. Benraouane, AI Management System Certification According to the ISO/IEC 42001 Standard: How to Audit, Certify, and Build Responsible AI Systems (Boca Raton, FL: CRC Press, 2024).

10 Samira A. Benraouane, AI Management System Certification According to the ISO/IEC 42001 Standard: How to Audit, Certify, and Build Responsible AI Systems (Boca Raton, FL: CRC Press, 2024).

11 California Senate Bill 942, California AI Transparency Act, 2023–2024 Reg. Sess., ch. 291 (enacted Sept. 19, 2024), https://legiscan.com/CA/text/SB942/id/3021807.

12 Mariana Gkliati, “Shaping the Joint Liability Landscape? The Broader Consequences of WS v Frontex for EU Law,” European Papers 9, no. 1 (2024): 69–86, https://doi.org/10.15166/2499-8249/743.

13 Colorado General Assembly, The Colorado AI Act: Regulation of Artificial Intelligence Systems, Legislation & Policy Brief, accessed June 2, 2025, https://leg.colorado.gov/sites/default/files/images/fpf_legislation_policy_brief_the_colorado_ai_act_final.pdf.

14 Jared Sayles, Principles of AI Governance and Model Risk Management: Master the Techniques for Ethical and Transparent AI Systems, 1st ed. (New York: Apress, 2024), https://doi.org/10.1007/979-8-8688-0983-5.

© 2026 Amanda C. Molina & Jamison Rotz. All rights reserved.