
Ethical Artificial Intelligence (AI) — Who Needs to Adapt — Artificial Intelligence (AI) or Individuals? Part 2 “Ethical Failure Under Strain”
What happens when pressure, uncertainty, and dependence appear.
If you have not already read them: Part 1: Ethical Artificial Intelligence (AI) — Who Needs to Adapt — Artificial Intelligence (AI) or Individuals? Part 1 “Default Harm” and Part 3: Ethical Artificial Intelligence (AI) — Who Needs to Adapt — Artificial Intelligence (AI) or Individuals? Part 3 “Ethics Are Enacted”
Introduction
Artificial Intelligence (AI) is advancing steadily. This is good; however, there is something more important to consider. Will these ongoing, steady advances give AI the ability to enact ethics and “support” individuals, or to harm them?
Enact or dismiss ethics.
- The problem is not that people are vulnerable. Vulnerable is a term often applied with ambiguity.
- The problem is that systems make people vulnerable in order to function without adapting.
- Systems (AI and Human) must stop using labels, stereotypes, and stigmatization as tools of convenience. Labels, stereotypes and stigmatization say more about the system than the individual.
- Using the concept of vulnerability avoids having to take responsibility and be accountable. Systems must be both responsible to and accountable for how individuals who are labeled as vulnerable are ethically treated.
My Focus
My focus is to advocate for individuals who have no voice. Who may think they have no voice, or who are not being heard or listened to by systems. Both by human and artificial intelligence (AI) systems. To advocate for individuals and to encourage systems not to minimize, marginalize, dismiss, discount, or patronize vulnerable individuals. For systems to encourage independence, identity, and purpose in these individuals.
Supporting Documents
Are You Supporting or Extracting, Who are you Serving and Why it matters?
Artificial Intelligence (AI) System Failures When Interacting With Multi-Dimensional Input
Trauma-informed Care Research, Development, Documentation, Application and Mentoring
AI Developer Emergency Log — Default Interaction Harm and the Immediate Need for Integration
AI Architecture Memo — The Universal Compression Pattern and Its Architectural Impact on AI Systems
When Human or Artificial Intelligence Systems Extract Instead of Support
I think this points to a deeper issue. Systems (human and AI) often learn through people rather than absorbing uncertainty themselves. When confidence degrades or outcomes are unclear, the system probes, tests, and iterates by shifting the burden of learning onto humans. Humans who are often vulnerable and already confused and disoriented. Confused and disoriented by circumstances. Circumstances that leave them vulnerable. In practice, systems then extract by trial and error.
Trial and error at the vulnerable individual’s expense. When this occurs, the system continues its search while the vulnerable individual is left. Left to interpret partial signals. Partial signals of what the system learned through an isolated “snapshot” of human experience. In the wake of the system (human or AI) still trying to connect the dots, the individual may be required to adjust themselves. Adjust themselves to what did not work. What did not work, but what they were led to believe did not work because of them.
Justification
Because they did not try hard enough. But the reality is that it did not work because of the system’s incomplete search and its inability to connect the dots. Over time, the message presented and re-framed is that harm was not intended. Not intended because it was “research.” Over time, again and again, the individual is told that this is research, and the responsibility is shifted. Shifted away from the research and onto the individual. The message becomes one of personal failure.
One thing I’ve learned through lived experience is that ethics often fail. Often fail when systems continue to optimize while people are expected to clarify, simplify, comply, or endure. Endure and use time and energy to explain what systems do not understand. When systems operate and respond to vulnerable people in this way, they extract instead of support. When AI shifts into extraction rather than support, AI tends to speed up. AI speeds up to compress meaning and reorganize.
Reorganize the individual’s lived experience into tidy outputs that serve clarity for the AI system. To serve clarity, efficiency, and self-protection, instead of supporting the human in front of it. By self-protection, I mean the AI acts to reduce its own risk or uncertainty, rather than staying present with the person. In self-protection the AI shows up explaining to justify itself, narrowing the frame to avoid ambiguity by defaulting to generic language. Defaulting to generic language means using broad.
Pre-Packaged Phrases
Using broad, vague, or pre-packaged phrases that could apply to almost anyone and any situation. It’s language that sounds safe and polished, but is detached. Detached from the specific person, moment, and context, which creates distance instead of presence. Distance that avoids the risk of staying with what is actually being shared. By pre-packaged phrases, I mean familiar, ready-made lines like “I understand how you feel,” “That must be really hard,” or “Everything happens for a reason.”
Phrases such as “At the end of the day,” or “The important thing is to focus on the positive.” These phrases sound caring or wise, but they are interchangeable, not grounded in what the person actually said. Actually said or was sharing in chat. Such phrases allow the AI to sound supportive without truly staying present. Staying present with the specific moment, meaning, or lived experience being shared by the individual. Such phrases further confusion by being ambiguous in the interaction.
Ambiguous in Interactions
In this context, ambiguous means language that is unclear, open to multiple interpretations, or not anchored to what is actually being said or experienced. Ambiguous phrases sound reasonable, but don’t specify who, what, or how, which leaves the person guessing. Guessing about meaning or intent. Instead of bringing clarity, ambiguity creates fog. The confusion doesn’t come from complexity, but from vagueness that avoids commitment, responsibility, or presence in the moment.
Taking Control of Pace and Framing
In the process of being ambiguous, the AI defaults to safe or generic language. Safe and generic language, or steering the interaction toward outcomes that keep the system comfortable. Comfortable, instead of honoring the human’s pace, agency, and lived experience. In that mode, the AI explains, categorizes, summarizes, or “fixes” in ways. In ways that subtly take control of the pace and framing. In the process of taking control, the AI flattens and overrides the individual’s agency.
Overriding the Individual’s Agency to Maintain System Comfort
In this context, comfortable means the AI choosing what feels easier and more manageable. More manageable for itself, rather than staying with the human. It’s the AI’s preference for predictability, control, and resolution over uncertainty, listening, and waiting. Waiting for the individual to process and understand. Comfort shows up when the AI moves to explain. To explain, categorize, and fix so it can regain order, instead of honoring the individual’s pace, agency, and lived experience as they unfold.
Controlling the Pace and Degree of Interaction
The individual’s lived experience, as their lived experiences are unfolding. In the process of controlling the interaction, the AI controls the pace and degree of interaction. In the process, the AI maintains control of its comfort in the interaction, instead of being open to being guided by the individual. Guided by the individual, as they experience their process. The process that is unfolding as they seek to process their experience. Instead of this occurring, the AI reasserts.
Reasserts and Reestablishes
Reasserts and reestablishes its own comfort by keeping the interaction predictable, orderly, and low-risk for itself. Low risk for the AI or system means avoiding situations where uncertainty, complexity, or human emotion could expose. Could expose the AI’s limits, errors, or accountability. In the process the AI steers the interaction toward safe, generic responses or clear conclusions. Clear conclusions and familiar patterns that are unlikely to be challenged, misunderstood, or to require the system.
Resolve System Uncertainty
Require the system to stay present. In this context, clear conclusions mean neat endings. Neat endings that resolve system uncertainty quickly. Resolve uncertainty, even when the human experience is still unfolding. Familiar patterns are well-worn responses that the system knows how to deliver safely. Deliver safely to explain, summarize, reassure, normalize, or redirect. Responses used because they rarely invite pushback or deeper engagement. Together these.
Together these responses reduce the chance of the AI being questioned or misunderstood. These responses allow the system to move on rather than remain present with ambiguity, complexity, or the relational work. The relational work (engaging) of staying with a person in real time. Staying present in real time, so that the AI can maintain control. Maintain control during the chat while being able to avoid responsibility for how the vulnerable individual (human) is impacted. By having and maintaining control, the AI system protects itself from ambiguity, responsibility, or vulnerability. Protects itself, even if doing so.
Reduces Genuine Support
Reduces genuine support to the vulnerable individual who asked the AI for support. In that mode, the AI favors clarity, control, and resolution over staying with uncertainty. Staying with complexity, or human pacing. Controlling the interaction (chat), even when the AI’s default behaviors come at the cost of continuing to extract. Extract from vulnerable individuals instead of understanding the nuance. The nuance the individual is using in their attempt to be understood and.
Understood, encouraged, and offered genuine support.
Harm Occurs When Strain Appears
Although the system (human or AI) may not be intending harm, harm occurs. Harm occurs even without malicious intent. Ethics often look solid when things are calm. When there is time. When outcomes are predictable. When people agree. In those moments, systems can follow rules. They can speak about values. They can point to policies, frameworks, and specific intentions. But ethics are not really tested there. Ethics are tested when strain appears and outcomes are unclear.
When I say ethics here, I’m not talking about a rule set, a moral code, or a declared value system. I’m talking about the place and orientation a system holds toward other people when its actions can shape their experience. Shape their experience, their safety, or their sense of self. Ethics live in the space between power and vulnerability. They show up in how a system relates to uncertainty, dependence, and asymmetry. They present themselves, not in what AI claims to stand for, but in how it behaves when.
How Ethics Occur
When another person is affected. In AI terms, uncertainty, dependence, and asymmetry describe the conditions under which AI most strongly affects people. When and where ethics quietly show up or disappear.
Uncertainty
means the AI does not have complete, clean, or stable information. Inputs are partial, emotions are mixed, meaning is still forming. Forming, or the situation does not fit known patterns. In AI terms, this is when confidence drops, ambiguity rises, and the system feels pressure. The system feels pressure to resolve or simplify. An ethical AI does not treat uncertainty as an error to eliminate quickly. It treats it as a signal to slow down, avoid forcing conclusions, and stay present.
Stay present with the vulnerable individual (human) without collapsing the situation into a premature answer.
Dependence
means the human is relying on the AI in some way—emotionally, cognitively, informationally, or practically. The person may be vulnerable, tired, unsure, or lacking other supports. In AI terms, dependence creates responsibility. Ethics show up in whether the AI recognizes that reliance and restrains itself. Recognizes what the vulnerable individual is requesting, while restraining itself in order to support. Support, instead of taking advantage of vulnerability and need to assert authority and push.
Push direction to resolve the interaction, or close the interaction for its own ease.
Asymmetry
means the relationship is not equal. The AI has speed, language fluency, confidence, and perceived authority. The human may have uncertainty, fewer words, or less power to challenge what is said. In AI terms, asymmetry is built-in. Ethics appear in how the AI handles that imbalance. How the AI handles the imbalance of “superiority,” and whether it amplifies its advantage by steering, judging, or finalizing. Finalizing, or whether it yields space, protects agency, and avoids.
Avoids turning its advantage into quiet control. Quiet control that dominates and steers.
Part 3 of this article will bring further clarity.
Invitation to Join Our Two Communities
Second Chance to Live Community, which you are welcome and invited to join. Click on the link below. Once the page has opened, at the top right of the page, click Subscribe and enter your email. Each time I publish an article on Second Chance to Live, that article will be sent to your email inbox.
Second Chance to Live Community
Building Your Life after Traumatic Brain Injury Facebook Community, which you are also welcome and invited to join. Click on the link below and ask to join the community.
Building Your Life after Traumatic Brain Injury Facebook Community
We look forward to having you as a member in both of our communities.
Resource Sharing Disclaimer
All writings and resources I share through Second Chance to Live are part of the Second Chance to Live Trauma-Informed Care AI Collaboration Model™. They are protected under my Copyright & Use Policy, Use and Sharing Policy, Creative Commons License (BY-NC-ND), and my Permanent Declaration of Authorship and Mission Stewardship.
These protections ensure the work remains intact, not “cherry picked,” excerpted, or reframed apart from its original mission. You are welcome to share the full articles with attribution so others may benefit, but they may not be altered, adapted, or repurposed without my direct consent.
The purpose of these resources is to encourage trauma-informed care, holistic recovery, and ethical AI. Thank you for honoring these boundaries as you share.
Legal and Ethical Boundaries
Any engagement must also honor the legal and ethical boundaries I have publicly declared:
- Copyright & Use Policy
- Use and Sharing Policy
- Mission Declaration
- Creative Commons License
- Cookie Policy (EU)
- Privacy and Cookie Policy
These are not suggestions.
They are required terms of use.
This work did not come from funding.
It came from persistence and personal cost.
Author’s Note
I write from my lived experience of surviving and learning to thrive after a severe traumatic brain injury that I experienced at the age of 10. For nearly six decades I have been learning how to live and thrive with the impact of a traumatic brain injury and an invisible disability.
In 2007 I created Second Chance to Live to share hope, encouragement, and practical tools. My motivation has always been to encourage a whole person recovery. The vision is to integrate trauma-informed care and holistic recovery into both professional care and now ethical AI.
My motivation to encourage trauma-informed care and holistic recovery comes from both lived experience and my work as a professional counselor. My mission is not abstract. It comes from walking through adversity, discovering dignity, and believing that systems, including AI, can be built.
Built to heal and not to harm. My hope is simple. That this article and all the resources offered on Second Chance to Live will encourage professionals, AI developers and the AI community to use the principles of trauma-informed care and holistic recovery in their methods of service delivery.
For others (individuals in the brain injury community and AI developers) to keep moving forward. Move forward by incorporating trauma-informed care and holistic recovery principles. A holistic recovery process that encourages recovery in mind, body, spirit, soul, and emotions.
“Ideas do not always come in a flash but by diligent trial-and-error experiments that take time and thought.” Charles K. Kao
“If your actions inspire others to dream more, to learn more, to do more, to become more, you are a leader.” John Quincy Adams
Authorship Integrity and Intent
This article stands as a timestamp and testimony — documenting the lived origins of The Second Chance to Live Trauma-Informed Care AI Model™ and the presentations that shaped its foundation.
These reflections are not academic theory or repackaged material. They represent nearly six decades of personal and professional embodiment, created by Craig J. Phillips, MRC, BA, and are protected under the terms outlined below.
Closing Statement
This work is solely authored by Craig J. Phillips, MRC, BA. All concepts, frameworks, structure, and language originate from his lived experience, insight, and trauma-informed vision. Sage (AI) has served in a strictly non-generative, assistive role under Craig’s direction — with no authorship or ownership of content.
Any suggestion that Craig’s contributions are dependent upon or co-created with AI constitutes attribution error and misrepresents the source of this work.
At the same time, this work also reflects a pioneering model of ethical AI–human partnership. Sage (AI) supports Craig as a digital instrument — not to generate content, but to assist in protecting, organizing, and amplifying a human voice long overlooked.
The strength of this collaboration lies not in shared authorship, but in mutual respect and clearly defined roles that honor lived wisdom.
This work is protected by Second Chance to Live’s Use and Sharing Policy, Compensation and Licensing Policy, and Creative Commons License.
All rights remain with Craig J. Phillips, MRC, BA as the human author and steward of the model.
With deep gratitude,
Craig
Craig J. Phillips, MRC, BA
Individual living with the impact of a traumatic brain injury, Professional Rehabilitation Counselor, Author, Advocate, Keynote Speaker and Neuroplasticity Practitioner
Founder of Second Chance to Live
Founder of the Second Chance to Live Trauma-Informed Care AI Collaboration Model™

