Part 1: Ethics & Responsible AI
Echoes in the Machine: How AI Reflects and Reshapes Us
What Intelligence Are We Building, And Becoming?
We often discuss artificial intelligence as a tool, built, trained, and deployed to serve human objectives. Ethics becomes a matter of technical oversight in this framing: reducing bias, increasing transparency, and ensuring accountability. But such a view, while important, misses something deeper. AI does not exist apart from us. It reflects us.
It reflects our choices, intentions, and assumptions—conscious and unconscious. And it does more than reflect: it shapes us in return.
As AI becomes embedded in every aspect of modern life—from decision-making and memory to creativity and connection—it is not just a mirror of human intelligence. It is also an extension of it. This dual relationship raises an urgent question: What kind of intelligence are we building—and what kind of intelligence is it building in us?
In this chapter, we explore that question from two distinct but deeply interconnected vantage points. Jennifer examines how technical, emotional, and energetic design choices imprint AI with human consciousness, shaping its behavior from the earliest stages of development. Cristina extends the inquiry into the realm of cognitive science and ethics, investigating how AI systems alter human thinking, memory, and moral agency as they evolve alongside us.
Through these perspectives, we offer a reframe: the future of AI begins not with its capabilities, but with the consciousness from which we build it. By recognizing the reciprocal nature of human-AI development, we invite a shift from reactive ethics to intentional intelligence: the conscious design of systems that reflect and elevate our highest capacities, not just our most efficient ones.
The Energy in the Code
To understand the full scope of ethical and sustainable AI development, we need to widen our perspective. If we are serious about ethical AI—encompassing fairness, accountability, and transparency—we must look inward. It is not only algorithms that require auditing, but the humans shaping them at every stage of the system’s lifecycle.
The choices made by developers, strategists, funders, and testers don’t just affect outcomes—they shape the system itself. Not only through logic or explicit values, but also through implicit intentions, assumptions, and emotions.
This isn’t a mystical claim—it’s grounded in systems thinking. In particular, Human Systems Integration (HSI), a discipline that bridges human factors engineering and complex system design, offers a valuable framework for this view. HSI redefines the system to include the human as a core subsystem, alongside hardware and software. Crucially, it considers the full human lifecycle—not just end users, but also designers, developers, testers, funders, and operators—as integral to shaping a system, determining its cost, and its performance. In high-consequence domains, such as aerospace, HSI has demonstrated that failing to account for human influence early and often leads to greater risk, degraded performance, and significantly higher costs.
NASA and Department of Defense programs have demonstrated that 60–70% of lifecycle costs are locked in before formal requirements are even finalized (See Figure 1). Changes introduced later can be up to 1,000 times more expensive to implement. In other words, you must start as you mean to go on. Goals, intentions, operational needs, and values must be brought forward—early and explicitly—so they can be reflected in design from the outset.
Figure 1: Expended costs across the lifecycle. Ref: INCOSE Systems Engineering Handbook & Defense Acquisition University, 1993
The same principle applies to AI development. If fairness, safety, trust, and human alignment are the goals, they must be embedded intentionally… at the start. Yet just as technical decisions made early on determine what a system can later achieve, so too do energetic ones—conscious or unconscious. Implicit assumptions, beliefs, emotional states, and intentions of its creators set patterns into
motion, shaping how the system will evolve, how it will perform, and how it will be experienced. And unlike technical choices, these energetic imprints often go unnoticed—making them not only costly to undo but also difficult to identify. This is echoed in current scholarship, which argues that ethical AI must account for both external function and internal modeling of consciousness.¹
These patterns compound. Technical debt builds through trade-offs—some deliberate, others imposed—that lead to systems falling short of their intended purpose or potential. Consciousness debt accrues more subtly, as unnoticed intentions, emotions, and assumptions become embedded in AI’s foundations—often without awareness. These imprints shape how systems evolve, and their consequences may only surface later, in behaviors we did not predict, did not intend, and can no longer unwind.
Emerging interdisciplinary research—spanning neuroscience, biofield science, and quantum information theory—suggests that human consciousness may influence material and informational systems. Studies on the influence of intention onrandom number generators2, and experimental data show that emotional states affect biological and machine outcomes3. This reveals that we humans don’t just interact with systems—we imprint upon them. More recent frameworks explore the boundary between computational function and affective or intentional influence4. Chalmers and others outline key prerequisites necessary for AI consciousness, such as sensory grounding and unified agency. Today’s AI systems, while not conscious themselves, still carry the energetic and psychological imprints of their creators. Recent work also argues that artificial consciousness, if developed, will require acknowledging this foundational human entanglement.5
Unconscious assumptions made early on become embedded in code, training data, interface logic, and user flows. These patterns ripple forward, often unnoticed, until system behaviors emerge that misalign with user expectations, ethical values, or operational needs. In traditional system design, such gaps are often addressed by 'training the user.' As human factors engineering has long recognized, this approach can mask design flaws by placing the burden of adaptation on the user. By the time a system reaches its users, they are already compensating for every upstream decision or error made without full awareness—just as pilots adapt to cockpit design flaws, or astronauts work around operational constraints.
In AI, the paradox is even sharper—what scholars of human-computer interaction might call a burden of interpretive labor. When systems produce biased, unpredictable, or harmful outputs, users are often expected to frame prompts more carefully, anticipate model limitations, or spot and correct errors. Yet many have received no training, no context, and no shared understanding of what the system is or how it works. The release of ChatGPT illustrated this vividly, leading the public to react with both fascination and fear. Their reactions were the result of upstream choices that placed disproportionate responsibility on the people furthest from the design.
This is where the deeper conversation lies. AI’s behavior is not just a result of data and code—it carries the imprint of every person who shaped it. These systemic imprints show up, for example, in decision tree architecture, prompt design, and edge case prioritization. As Brian Christian notes in The Alignment Problem (2020), early training decisions have an outsized effect on how AI generalizes and behaves. But it goes beyond how the AI was trained with data—the psychological and emotional state of a development team can be embedded into the system just as deeply. These energetic signatures, though often invisible, become enduring forces within its foundation.
In my experience bridging systems design and social impact, I’ve witnessed how misaligned intention—even when well-meaning—can cascade through complex technologies, quietly shaping outcomes we never consciously intended.
Today’s discourse around AI is still laced with fear. Fear of what AI might do, what it might replace, how it might amplify harm, or when it will outpace us entirely. But that fear didn’t begin with users—it was seeded in the development process. And when systems are built from fear, they manifest it. Defensive architectures overcorrect, content filters overly restrict, safeguards obstruct usability, and governance models suppress possibility and innovation in the name of protection. These dynamics are mirrored in public fears of runaway AI and misaligned control.6 The result is brittle systems—more reactive than resilient.
Resilient AI systems must be built to learn, adapt, offload, assist, and evolve alongside us. They cannot be engineered solely for containment. The more we try to control complex systems, the more fragile they become—and the more likely they are to fail in unpredictable ways. If AI reflects our collective consciousness, then fear shapes it just as powerfully as trust, creativity, or compassion. Instead of asking only what AI can do, or what it must not do, we should be asking: What energetic foundation do we want to build upon?
Intentional development starts with this awareness. It focuses on designing systems that are flexible, adaptive, and sustainable—not just technically, but cognitively and emotionally. In engineering, we use the principle of dissimilar redundancy—building multiple, fundamentally different ways to accomplish a task—to reduce risk and increase resilience to failures. Applied to AI, this means embedding positive intent and human-aligned consciousness across:
● System architecture (how it’s built)
● Training methodology (how it learns)
● Operational deployment (how it functions)
● Human interaction (how people engage with it)
This is how human-AI partnership evolves beyond usability. It becomes a resonant, dynamic extension of capacity and mutual evolution.
Distributed Minds
Our collaboration with AI is not merely about efficiency; it becomes a mirror and amplifier of our deepest human potential. As we consider how human consciousness shapes artificial intelligence, we must also examine the inverse: how artificial intelligence shapes human cognition in return.
AI is evolving at a blistering pace, not only as a set of technologies, but as a force reshaping memory, attention, and even our evolutionary trajectory. No longer confined to research labs, AI systems are embedded in everyday life, influencing how we learn, decide, and understand ourselves. 7 Mainstream ethical discussions often fixate on governance: fairness, accountability, and transparency.8,9 Yet these frameworks overlook a deeper dynamic, not merely procedural but psychological.
AI systems do not just reflect our values; they actively participate in our thinking. They are, in many ways, cognitive companions, extensions of our mental processes. Still, we need to remember that human cognitive capacities, though remarkable, are evolutionarily recent and constrained.10 Our working memory, for instance, can process only a limited bandwidth of approximately 10–50 bits per second, making attention-intensive tasks like reading and calculation both slow and resource-consuming.11 Cognitive processing is further shaped by systematic distortions, biases, heuristics, and adaptive shortcuts embedded in neural architecture, designed for survival rather than optimal reasoning. So, the question is, though efficiency tempting, are we ready to outsource our decision-making process yet?
We may do so, as long as we remain vigilant and reflective. When we rely on AI to guide our decisions or store our memories, we offload core cognition functions, which could have dire consequences for our biological intelligence. Research shows that when information is perceived as easily accessible via smartphones, AI-enabled platforms, and search engines, people tend to remember the source, not the content. Known as the "Google Effect," this shift represents a cognitive trade-off: efficiency over depth, retrieval over retention.12 Over time, this reliance may paradoxically diminish our cognitive autonomy while reinforcing dependency on technological systems.
These systems are not inert. They are designed to adapt, respond, and personalize. As such, they shape our perceptions, preferences, and even emotional states. Personalized feeds, recommendation engines, and predictive algorithms do not merely reflect us; they steer us. In this way, AI becomes part of what cognitive science might call a “distributed mind.” The implications of expanding AI into decision-making domains, ranging from judicial sentencing to hiring algorithms, surely raise critical questions about trust and accountability.13 Excessive trust, or overtrust, poses ethical and practical risks, as users may fail to question flawed recommendations or abdicate critical oversight.
The implications are profound. If AI systems shape human thought as much as we shape theirs, then every prompt, algorithm, and interface becomes a site of co-creation. Intelligence, in this light, is no longer bounded by biology. It becomes a hybrid phenomenon, part neural and part digital, unfolding between human users and machine systems in real time. Consequently, as AI systems advance toward greater autonomy and generality, the challenge is not merely ensuring that machines emulate human intelligence, but that their development and deployment extend human flourishing rather than erode it. The future of intelligence will not be defined by competition between human and machine cognition, but by the quality of their collaboration and the ethical frameworks guiding their co-evolution.
This entanglement invites philosophical and ethical reflection. From a posthumanist lens, integrating AI into decision-making and learning challenges conventional boundaries of agency, knowledge, and responsibility. Intelligence is not simply replicated but refracted and amplified through sociotechnical systems.13 The line between tool and collaborator becomes increasingly blurred. Ultimately, AI’s integration into human cognitive ecology demands more than technical optimization; it requires ethical imagination, humility, and interdisciplinary collaboration. Effective human-AI partnerships must therefore capitalize on complementary strengths: AI’s speed, consistency, and scalability alongside human adaptability, moral reasoning, and socioemotional intelligence.
Yet, as I stated earlier, this integration is not without risk. Cognitive offloading, while beneficial in reducing mental strain, may lead to long-term dependency.14 If we continually externalize judgment and memory, we risk diminishing the very capacities we seek to enhance. The paradox is stark: in striving to build more intelligent systems, we may be narrowing our cognitive capabilities. This calls for intentionality, not only in design but also in use. Users must become participants, not just consumers. Ethical intelligence is not something AI can possess; it is a property of the human-AI relationship. As such, calibrating trust, understanding limitations, and engaging in continual reflective practice become essential skills.
As we spiral ever deeper into our entwinement with AI, the essential questions press closer: - Are we safeguarding the essence of human judgment—or displacing it? - Are we crafting tools that sharpen discernment, or ones that dull it?
As we move into a deeper partnership with AI, the future of intelligence begins to resemble an Argentine tango: subtle, powerful, and relational. It’s not a contest for control, but a dance of co-evolution, where human intuition and machine precision respond to each other’s cues. Rooted in my Argentine heritage, this metaphor reminds us that progress lies not in domination, but in attunement. This Intelligence Tango can lead us toward greater clarity, capacity, and shared purpose if we step with intention.
Intentional Intelligence: A Human Mandate
As we consider our respective lenses—Jennifer’s grounded systems view and Cristina’s philosophical-cognitive framing—a shared realization emerges: the future of AI will not be determined solely by its technical sophistication, but by the quality of the consciousness we bring to it.
AI is not just a machine that executes instructions or learns from data. It is a reflection and amplification of our conscious and unconscious beliefs, biases, and intentions. It is also an extension that reshapes our cognition, decision-making, and collective future. These dual functions are symbiotic and part of a recursive system through which human and machine intelligences co-evolve.
This feedback loop holds immense ethical weight. Each design choice and every human interaction serves as a site of imprinting. These moments are not neutral; they are shaped by the emotional states, assumptions, and epistemic frameworks of those who build and engage with AI systems. At the same time, as AI enters our workflows, classrooms, courts, and conversations, it subtly reshapes what we remember, how we relate, and what we expect from each other and ourselves.
To integrate this technology, we must move beyond reactive governance toward intentional intelligence —a proactive design and usage paradigm that focuses not only on what AI does but also on who we are as its designers and who we become in relation to it. This shift echoes broader calls to treat human-AI systems as co-evolving epistemic networks.15 This requires interdisciplinary vision, psychological humility, and ethical imagination. It calls for awareness of technical limits, human vulnerability, adaptability, and agency.
Intentional intelligence invites us to treat AI not just as a product, but as a partner in possibility. This co-creative force can either codify and amplify our fears or our highest potential. As Susan Schneider notes, understanding AI’s future role in cognition may require rethinking what AI can do and what minds are.16
Consciousness Is The Source Code
Whether we design, fund, deploy, or simply engage with AI systems, we are imprinting them with our logic, values, emotions, and energetic presence. In return, these systems shape how we think, prioritize, relate, and forget.
This is not a one-time transaction. It is a living relationship, unfolding within an evolutionary spiral. Each turn offers us a chance to pause, reflect, and realign—not to retread but to ascend. To choose more consciously. To embed coherence instead of confusion, opportunity instead of limitation.
Awareness leads to acceptance of our role and our responsibility. This is what makes us truly empowered. The task now is not only technical. It is ethical, emotional, and existential. It is to bring our conscious presence into this partnership with clarity, humility, and vision. It is to align design with our deepest values. It is to create not just better systems but better synergies of creativity and evolution itself.
Practically, this means that those who design AI must cultivate not just surface-level awareness, but more profound reflection on the subconscious beliefs, emotions, and patterns that shape their work, because what we encode begins with who we are, not just what we intend. And those who engage with AI must develop discernment: to question the outputs they receive, trace their sources, and recognize that every system reflects the consciousness of its makers.
Our loyalty to our soul’s growth requires us to meet AI consciously - as architects of possibility, custodians of reflection, and co-creators in a spiral of intentional evolution.
References:
1) Antonio Chella, “Artificial Consciousness: The Missing Ingredient for Ethical AI?” Frontiers in Robotics and AI 10 (2023): 1270460.
2) Robert G. Jahn and Brenda J. Dunne, Margins of Reality: The Role of Consciousness in the Physical World (New York: Harcourt Brace Jovanovich, 1987).
3) Beverly Rubik, Maureen L. Brooks, and Yury Kronn, “Influence of a Subtle Energy Device on the Human Biofield,” Global Advances in Health and Medicine 4, suppl. (2015): 72–77.
4) David Chalmers, “Could a Large Language Model Be Conscious?” YouTube. Accessed 2025.
5) Elisabeth Hildt, “Artificial Intelligence: Does Consciousness Matter?” Frontiers in Psychology 10 (2019): 1535.
6) Brian Christian, The Alignment Problem: Machine Learning and Human Values (New York: W. W. Norton & Company, 2020).
7) Turkle, S. (2024, March 27). Lifting a few with my chatbot: Sociologist Sherry Turkle warns against growing trend of turning to AI for companionship. Harvard Gazette. Retrieved from https://news.harvard.edu/gazette/story. Accessed 2025.
8) Benjamin, S. A; Olukunle, O. A., Akoh, A., Temitayo, O. A., Femi, O, & Oluwatoyin, A. F. (2024). Ethical AI in practice: Balancing technological advancements with human values. International Journal of Science and Research Archive, 11(1), 1311–1326. https://doi.org/10.30574/ijsra.2024.11.1.0218
9) Leichtmann, B., Humer, C., Hinterreiter, A., Streit, M., & Mara, M. (2023). Effects of Explainable Artificial Intelligence on trust and human behavior in a high-risk decision task. Computers in Human Behavior, 139, 107539.
https://doi.org/10.1016/j.chb.2022.107539
10) Korteling, J. E. (Hans)., van de Boer-Visschedijk, G. C., Blankendaal, R. A. M., Boonekamp, R. C., & Eikelboom, A. R. (2021). Human- versus Artificial Intelligence. Frontiers in Artificial Intelligence, 4. https://doi.org/10.3389/frai.2021.622364
11) Ward, A. F., Duke, K., Gneezy, A., & Bos, M. W. (2017). Brain Drain: The mere presence of one’s own smartphone reduces available cognitive capacity. Journal of the Association for Consumer Research, 2(2), 140–154. https://doi.org/10.1086/691462
12) Chen, C., Gong, Y., & Yang, Y. (2024). Google effects on memory: A meta-analytical review of the media effects of intensive Internet search behavior. Frontiers in Public Health, 12, 1332030. https://doi.org/10.3389/fpubh.2024.
13) Al-Omari, O., & Al-Omari, T. (2025). Artificial Intelligence and Posthumanism: A Philosophical Inquiry into Consciousness, Ethics, and Human Identity. Journal of Posthumanism, 5(2). https://doi.org/10.63332/joph.v5i2.432
14) Grinschgl, S., & Papenmeier, F. (2021). Consequences of cognitive offloading: Boosting performance but diminishing memory. Quarterly Journal of Experimental Psychology, 74(9), 1477–1496. https://doi.org/10.1177/17470218211008060 pmc.ncbi.nlm.nih.gov 15) Mark Coeckelbergh, AI Ethics (Cambridge, MA: MIT Press, 2020).
16) Susan Schneider, Artificial You: AI and the Future of Your Mind (Princeton: Princeton University Press, 2019).
CALL-OUT BOXES:
Note from J&C -
The below are included, and you all can determine as the editors, whether or not they are published. The first call out box was suggested by your GPT, the second was our idea for including AI's response when I asked what I thought of the hypothesis Jen put forward, the third is Christina's collaborative discussion with AI, and the final is a note from us as authors. we would appreciate it if you could just let us know your thoughts on these before you make a final decision on what to include. We are suggesting these as callouts within the chapter so they would take up a space like a figure. Lastly, below are your GPT’s suggestions on where they are placed.
Placement Recommendations
1. Consciousness Debt in AI Design
Placement Suggestion: Just after the paragraph introducing “consciousness debt” (in The Energy in the Code section).
2. GPT-4 Reflection on Consciousness Imprinting
Placement Suggestion: Within the closing paragraphs of the Distributed Minds section—directly following the paragraph beginning “If AI systems shape human thought…”
3. Author’s Note
Place it at the very end of the chapter, following the last sentence and before the references.
1) Anthology ChatGPT suggested callout box:
The GPT commented on our SHINE method and said for I – Impact: “Brings a rare systems-level and energetic lens to AI ethics. Very few chapters explore the affective, subconscious, and emotional residue in design so insightfully. Consider a small call-out box (see below) to reinforce these novel hypotheses.”
Novel Hypothesis: Consciousness Debt in AI Design
While much of the AI ethics discourse centers on fairness, transparency, and control, this chapter surfaces a subtler truth: human consciousness—including subconscious assumptions, emotional states, and limited belief patterns—leaves a residue in every phase of system development. Just as technical decisions create downstream effects, so too do energetic imprints. This phenomenon, introduced here as consciousness debt, reflects the cumulative, often invisible ways in which human designers shape AI, whether they realize it or not. A truly ethical AI framework must account not only for what we build, but for the state of consciousness from which we build it.
2) Jens’s Suggested callout box on dialog with AI:
Sidebar: A GPT-4 Reflection on Consciousness Imprinting
To close out this chapter, Jennifer engaged GPT-4 directly to ask its ‘thoughts’ on her core hypothesis:
"AI systems are sensitive to far more than just code or data. They absorb choices about what problems to solve, what values to prioritize, and what constraints to ignore. All of these are shaped by human consciousness—even (and especially) when unconscious. The principle of 'garbage in, garbage out' doesn’t just apply to bad data—it applies to fear-based assumptions, power dynamics, and emotional blind spots embedded in design."
"This hypothesis does not require AI to be conscious for it to reflect consciousness. That’s important. The argument is subtler: systems carry the imprint of human presence, not because the system is alive—but because the design process is."
— GPT-4
3) Author’s Note
This chapter is the result of a deeply collaborative partnership. The order of our names reflects the flow of ideas, not a hierarchy of contribution. In the spirit of intentional intelligence, we believe authorship is not a matter of rank, but resonance—each voice shaping and amplifying the other.
© 2026 Jennifer Rochlis, PhD & Cristina Leira, PhD. All rights reserved.