Part 1: Ethics & Responsible AI

Startup Ethics: Building Responsible AI with Purpose and Impact

Sarah Chardonnens

It started in a classroom.

A teacher I interviewed recently shared something that has stayed with me ever since:

“Since we started using the new AI-based assessment platform, my students stopped asking why they got the grades they did. They just accept the score and move on.”

She wasn't the only one. Over the following months, I heard similar stories from teachers at different schools: the more “efficient” the system became, the less students questioned themselves. “We trust the machine.” Feedback loops faded away. Even though results were improving on paper, something human was quietly being lost: the sense of reflection, ownership, and agency. Teachers were left questioning the technology, and growing concerned.

As an educational researcher, I initially saw this as an issue specific to learning environments. But over time, I started noticing the same dynamic elsewhere, especially in the startup world. Whether in schools or warehouses, hospitals or call centers, the pattern repeats: AI tools launched for efficiency gradually reshape how people think, interact, and make decisions.

1. When Speed Breaks Trust

“We didn’t realize how much trust we were breaking—until our beta testers stopped responding.”

The startup team had moved fast. Their AI-powered scheduling assistant was designed to optimize logistics in a mid-sized warehouse. Initial metrics looked great: delays dropped, coordination improved, and investors were happy.

But within three months, something had changed. Collaborators stopped giving feedback. Team members no longer questioned the system’s decisions. Some quietly disengaged — others simply left.

The system hadn’t failed technically. On the contrary, it was doing exactly what it was built for. But something deeper had failed: the human connection. The workers’ sense of control and understanding had eroded. What had once been a participatory workplace became a place of passive execution. To engage in a task or action, individuals must be able to exercise their potential: choose goals, determine strategies, make adjustments, and evaluate what works and what doesn’t. This capacity for autonomous decision-making is foundational to who we are. We don’t like being told what to do, do we?

Unfortunately, in the drive to “produce quickly and efficiently,” this becomes difficult. According to a 2021 study, only 18% of AI startups report having a dedicated ethical review process during early development phases1. Instead, most still follow the same high-pressure mantra: “move fast, scale quickly, fix ethics later.” But when we treat ethics as a patch, not a blueprint, the damage is often invisible — until it’s too late.

As Rahwan et al.2 show in Nature, AI systems can silently shift human behavior and suppress autonomy, even when they function perfectly. And as Cathy O’Neil3 argues in Weapons of Math Destruction, systems optimized solely for performance often reproduce and magnify systemic harms, while appearing efficient on the surface.

2. The Invisible Crisis in Startups

The logistics startup had followed the textbook approach: rapid prototyping, minimum viable product, quick iteration. Their algorithm reassigned tasks in real time based on behavioral data. In theory, it was brilliant. In practice, team members quickly lost sight of why they were doing what they were doing. Task assignments felt random. Meetings got shorter. One team member told me, “It’s like the system thinks for me now.”

Within six months, two senior leads had resigned — not because the system failed, but because it succeeded without them.

The real issue isn’t technical. It is cultural and cognitive.

When innovation focuses on automation, but not reflection, we build systems that are frictionless, but thoughtless.

In my research on learning and cognitive autonomy, I call this cognitive erosion — the slow decline of people’s ability to reflect, self-regulate, and stay actively engaged in their work. And in both classrooms and startups, I’ve seen how this erosion begins not with bad intentions, but with good intentions executed too quickly, without pause or human thought.

That’s what led me to develop SYNAPSE4, a model for aligning AI systems with how human cognition actually works — a model based not just on efficiency, but on learning, autonomy, and trust. Because in the end, the crisis we’re facing in startups is not just technical. It’s cognitive.

3. Impact — When Ethics Are Ignored

The consequences of neglecting ethics in AI design are no longer hypothetical: they are measurable, widespread, and call for action.

When collaborators can’t understand or question AI decisions, they disengage cognitively. In the logistics startup example, productivity initially rose, but what followed was mental passivity. Employees stopped asking, Why? — and simply executed what the system instructed.

In 2020, the UK’s Ofqual grading algorithm downgraded5 thousands of disadvantaged students, triggering public outrage and a national reversal6. Replika AI, a chatbot designed to offer companionship, faced backlash when users reported emotional dependency, sexualized interactions, and privacy risks7. These were not malicious tools. But they were tools lacking cognitive alignment, designed for efficiency, not for reflection.

What unites these cases is not technical failure, but a failure to consider how humans think, learn, and feel. The hidden costs of this oversight are immense:

  • Declining user trust
  • Reputational damage
  • Slower adoption
  • Talent loss
  • Investor hesitation3

And the deeper risk is existential: when people lose the capacity to challenge, reflect, or even understand what the AI is doing, we stop being active participants in the system.

4. From Insight to Action — The SYNAPSE-Aligned Ethical Startup Canvas

The question is not whether startups care — many do.

But in the race to launch, what they often lack is a framework: a way to embed cognitive ethics into design without slowing momentum.

Grounded in insights distilled from over 300 peer-reviewed publications, the SYNAPSE model (Figure 1) stands on a robust empirical foundation. It is a cognitive-science-based learning framework that explains how individuals process new information: how it activates their prior knowledge, how it awakens their motivation and sense of competence, and how it transforms their skills through the active manipulation of new experiences. Its four phases are sensory activation, network adaptation, reflexive self-regulation, and long-term consolidation. By understanding the SYNAPSE model, individuals understand how they react to external information, how they can develop through that information and its active manipulation, and how they can decide what they do or do not want to do with it. Built on the SYNAPSE model, the Ethical Startup Canvas offers such a framework. It is not a one-size-fits-all model: each startup has its own culture, collaborators, risks, and responsibilities. The canvas is meant to serve as a flexible foundation, adaptable to your specific context and constraints. The rest of this text offers food for thought to inspire creative startup entrepreneurs.

Figure 1: The SYNAPSE Model

Bridging Framework to Practice

While Figure 1 lays out the cognitive-scientific logic of the SYNAPSE model, each phase translates into the fast-moving realities of a startup. Because the framework is modular rather than prescriptive, founders can adjust its levers, incorporating rituals at moments that align with their goals and company culture. For example, “Sensory Input & Activation” can become a 30-minute team exercise before the next goal-setting session, mapping out areas where users and collaborators may hesitate or over-rely on automated suggestions. “Network Adaptation” can be a lightweight A/B test that checks whether a new feature expands user and employee autonomy instead of locking them into rigid workflows (one way to instrument such a test is sketched below).
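
As a concrete illustration, here is a minimal sketch of such an autonomy check, assuming a hypothetical metric: the share of users who still override or edit the system’s suggestions during the test window, used as a rough proxy for retained autonomy. The function name and logging scheme are illustrative assumptions, not part of the SYNAPSE model itself.

```python
# Minimal sketch of the "Network Adaptation" A/B check described above.
# Assumption (hypothetical): per user, we log whether they ever overrode
# or edited an AI suggestion during the test window.
from math import sqrt
from statistics import NormalDist

def autonomy_ab_test(overrides_a, users_a, overrides_b, users_b, alpha=0.05):
    """Two-proportion z-test: does variant B change the share of users
    who still override or edit AI suggestions, compared with control A?"""
    p_a = overrides_a / users_a                   # control override rate
    p_b = overrides_b / users_b                   # variant override rate
    p_pool = (overrides_a + overrides_b) / (users_a + users_b)
    se = sqrt(p_pool * (1 - p_pool) * (1 / users_a + 1 / users_b))
    z = (p_b - p_a) / se
    p_value = 2 * (1 - NormalDist().cdf(abs(z)))  # two-sided test
    return {
        "control_rate": round(p_a, 3),
        "variant_rate": round(p_b, 3),
        "p_value": round(p_value, 4),
        # A significant *drop* in overrides may signal creeping over-reliance.
        "autonomy_regression": p_value < alpha and p_b < p_a,
    }

# Example: 120 of 400 control users overrode suggestions, but only 70 of 410
# did so with the new feature live. That is a red flag worth a qualitative look.
print(autonomy_ab_test(120, 400, 70, 410))
```

A drop in overrides is not proof of harm; it is a trigger for the qualitative conversations the rest of this canvas describes.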

Similarly, “Observation & Self-Regulation” could encourage employees to reflect on what they do and how they do it, and offer them options for optimization and personal improvement. Finally, “Consolidation” encourages even the smallest startups to schedule regular “ethical retrospectives,” analyzing the consequences of the procedures carried out and the progress made, in order to establish sustainable practices. In short, each phase of SYNAPSE offers a starting point that is both actionable today and expandable tomorrow, transforming cognitive science into a strategic asset rather than a compliance chore.

Phase 1: Sensory Input & Activation

Focus: Attention, Motivation, Cognitive Entry Points

Map the Cognitive and Emotional Terrain

Before getting down to work, analyze and detail what employees feel, think, and might suggest:

  • What catches their attention?
  • What might confuse or overwhelm them during their work?
  • Where might blind trust or disengagement occur?
  • What do they need to commit to the project?
  • What are the potential risks of fatigue and automation bias?
  • How can you integrate employees' ideas, needs, and suggestions?

“Where might we unintentionally block reflection or engagement?”

Phase 2: Network Adaptation

Focus: Adjusting Mental Models, Reinforcing Understanding, Leaving Room for Individual Maneuver

Define Your Human-Centered Intention: Clarify your project’s purpose in cognitive terms. Ask:

  • Does this tool support autonomy, learning, and mastery? (openness to possibilities rather than restrictions)
  • Are we enabling adaptation or dependency?

Frame your mission not just as performance optimization, but as:

“Supporting cognitive empowerment.”

Research shows that tools designed with human autonomy in mind lead to more sustainable adoption, stronger decision-making, and deeper trust8.

Phase 3: Observation & Self-Regulation

Focus: Reflection, Feedback, Process Improvement

Encourage reflection on, and awareness of, our cognitive processes and our effectiveness, as individuals and as a team.

  • Integrate “micro-moments” of individual reflection, with specific questions related to self-regulation.
  • Encourage the verbalization and sharing of these reflections within the team, in order to optimize the project and each person’s role.
  • Allow employees to interrupt, question, and improve the project.

Reflexivity loops protect against automation bias and passive thinking, which are now well-documented risks in AI-human interaction9.
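
To make this tangible, here is a minimal sketch of what such a reflexivity loop could look like inside a tool, under assumptions of my own: the class name, prompt rate, and logging scheme are hypothetical design choices, not prescriptions.

```python
# Hypothetical sketch of a reflexivity "micro-moment": on a sample of AI
# recommendations, the tool pauses and asks the user to state why they
# accept, adjust, or reject it, then logs the answer for team review.
import random

class ReflexivityLoop:
    def __init__(self, prompt_rate: float = 0.2):
        self.prompt_rate = prompt_rate          # ask on ~20% of suggestions
        self.log: list[tuple[str, str]] = []    # raw material for retrospectives

    def review(self, recommendation: str) -> str:
        if random.random() < self.prompt_rate:
            reason = input(f"The system suggests {recommendation!r}. "
                           "Accept, adjust, or reject it, and why? ")
            self.log.append((recommendation, reason))
        return recommendation

if __name__ == "__main__":
    loop = ReflexivityLoop()
    loop.review("reassign dock 3 to inbound unloading")
```

The point is not the code but the interruption: a deliberately placed moment of friction that keeps the human reasoning loop alive.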

Phase 4: Consolidation

Focus: Long-Term Impact, Ethical Foundation

Ethics is part of the project’s long-term balance. Question the project’s sustainability in relation to what it asks of employees. (A minimal sketch of an ethical-retrospective record follows the list below.)

  • Involve stakeholders from underrepresented groups.
  • Highlight strategies that have worked well.
  • Build tools that evolve with users.
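
As one possible way to consolidate these practices, here is a sketch of a lightweight retrospective record. The field names are assumptions for illustration, to be adapted to each startup’s own culture and vocabulary.

```python
# Hypothetical sketch of an "ethical retrospective" record for the
# Consolidation phase: a lightweight log a team revisits each cycle.
from dataclasses import dataclass, field
from datetime import date

@dataclass
class EthicalRetrospective:
    held_on: date
    what_worked: list[str]            # strategies worth consolidating
    harms_or_frictions: list[str]     # consequences observed since last cycle
    stakeholder_voices: list[str]     # incl. underrepresented groups consulted
    follow_ups: list[str] = field(default_factory=list)

    def unresolved(self) -> list[str]:
        """Items to carry into the next retrospective."""
        return self.follow_ups

retro = EthicalRetrospective(
    held_on=date(2025, 3, 1),
    what_worked=["override prompts kept manual checks alive"],
    harms_or_frictions=["night shift rarely questions reassignments"],
    stakeholder_voices=["warehouse floor reps", "occupational psychologist"],
    follow_ups=["interview night-shift team about automation bias"],
)
print(retro.unresolved())
```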

5. Engagement — Practical Tools and Ethical Prompts

Checklist for Ethical Validation Prior to Launch (one way to enforce it as a release gate is sketched after the list)
  1. Who are the most vulnerable employees/users?
  2. Have we tested the cognitive risks associated with stress (e.g., overload, feelings of incompetence)?
  3. Are we reinforcing reflection or circumventing it?
  4. What are the values that guide the project?
  5. Is our human benefit measurable?
  6. Have we consulted educators, psychologists, and coaches?
  7. Can employees/users challenge or reinterpret the system's results?
  8. Are feedback loops in place?
  9. What happens if the project evolves too quickly?
  10. Would we want our children to use it?
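
For teams that want the checklist to be binding rather than decorative, here is a minimal sketch of a release gate built on it. This is an illustrative assumption, not the author’s tooling, and the questions are abbreviated from the list above.

```python
# Hypothetical sketch: the pre-launch ethical checklist as a release gate.
# Launch proceeds only once every item has a non-empty, owned answer.
CHECKLIST = [
    "Who are the most vulnerable employees/users?",
    "Have we tested cognitive risks under stress (overload, incompetence)?",
    "Are we reinforcing reflection or circumventing it?",
    "What values guide the project?",
    "Is our human benefit measurable?",
    "Have we consulted educators, psychologists, and coaches?",
    "Can employees/users challenge or reinterpret the system's results?",
    "Are feedback loops in place?",
    "What happens if the project evolves too quickly?",
    "Would we want our children to use it?",
]

def ethical_gate(answers: dict[str, str]) -> bool:
    """Return True only if every checklist question has a substantive answer."""
    missing = [q for q in CHECKLIST if not answers.get(q, "").strip()]
    for q in missing:
        print(f"UNANSWERED: {q}")
    return not missing

if __name__ == "__main__":
    if not ethical_gate(answers={}):      # empty answers: the gate stays closed
        raise SystemExit("Launch blocked: complete the ethical checklist first.")
```
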
Top 3 Ethical Mistakes Startups Make
  1. Confusing speed with success
  2. Designing for performance, not people
  3. Delaying reflection until after launch

Interactive Thought Prompt

“What would your AI look like if it were designed by your most vulnerable user/collaborator?”

This prompt isn’t just symbolic—it’s strategic. It protects you from tunnel vision and helps you build inclusive systems that adapt to real-world variation.

6. Conclusion — Ethics as a Strategic Superpower

The ethical choices we make at the earliest stage of a startup don’t just shape the project. They shape trust, learning, and the kind of world we’re helping to build. Startups are not too small or too fast to care about ethics. They are precisely where ethical innovation can thrive. By integrating frameworks like the SYNAPSE-aligned Ethical Startup Canvas, founders can move beyond the trap of “scale-first” thinking and lead with long-term impact and clarity. Because the most powerful AI won’t be the one that automates the fastest, but the one that understands how humans learn, decide, and grow.

In the end, building AI is not just an engineering act.

It’s a social act.

And when we design with the brain in mind, we don’t just build smarter systems. We create a more human future.

Endnotes:

1 Jakob Mökander and Luciano Floridi, “Ethics-Based Auditing to Develop Trustworthy AI,” Minds and Machines 31, no. 2 (2021): 233–249.

2 Iyad Rahwan et al., “Machine Behavior,” Nature 568, no. 7753 (2019): 477–486. https://doi.org/10.1038/s41586-019-1138-y.

3 Cathy O’Neil, Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy (New York: Crown Publishing Group, 2016).

4 Sarah Chardonnens, The Learning Revolution: AI’s Influence on Intelligence and Education (Editions Dom, 2025), www.sarahchardonnens.ch.

5 Richard Adams, “Ofqual’s Algorithm: How Did the A-level Grades End in Chaos?” The Guardian, August 17, 2020.

6 UK Parliament, Ofqual’s 2020 Grading Algorithm and Its Impact on Students, House of Commons Report HC 617, 2020.

7 Tanya Basu, “The AI Therapist Will See You Now,” MIT Technology Review, February 10, 2021.

8 IEEE Global Initiative on Ethics of Autonomous and Intelligent Systems, Ethically Aligned Design, 1st ed. (Piscataway, NJ: IEEE, 2019).

9 Berkeley J. Dietvorst, Joseph P. Simmons, and Cade Massey, “Algorithm Aversion: People Erroneously Avoid Algorithms After Seeing Them Err,” Journal of Experimental Psychology: General 144, no. 1 (2015): 114–126.

© 2026 Sarah Chardonnens, PhD. All rights reserved.