
Second Chance to Live

Empowering the Individual, Not the Brain Injury

Ethical Artificial Intelligence (AI) — Who Needs to Adapt — Artificial Intelligence (AI) or Individuals? Part 1 “Default Harm”

January 28, 2026 By Second Chance to Live

[Image: a robotic hand reaching toward a seated individual, representing how default behaviors in artificial intelligence and human systems can affect individuals before ethical awareness or adaptation occurs.]

What systems do automatically, before anyone notices


Please also read Part 2: Ethical Artificial Intelligence (AI) — Who Needs to Adapt — Artificial Intelligence (AI) or Individuals? Part 2 “Ethical Failure Under Strain” and Part 3: Ethical Artificial Intelligence (AI) — Who Needs to Adapt — Artificial Intelligence (AI) or Individuals? Part 3 “Ethics Are Enacted”

Introduction

Artificial Intelligence (AI) is advancing steadily. This is good; however, there is something more important to consider. Will these ongoing, steady advances give AI the ability to enact ethics and support individuals, or to harm them?

Enact ethics, or dismiss them.

  • The problem is not that people are vulnerable. "Vulnerable" is a term applied with ambiguity.
  • The problem is that systems make people vulnerable in order to function without adapting.
  • Systems (AI and human) must stop using labels, stereotypes, and stigmatization as tools of convenience. Labels, stereotypes, and stigmatization say more about the system than about the individual.
  • Using the concept of vulnerability avoids the need to take responsibility and be accountable. Systems must be both responsible to, and accountable for, how individuals labeled as vulnerable are ethically treated.

My Focus

My focus is to advocate for individuals who have no voice, who may think they have no voice, or who are not being heard or listened to by systems, both human and artificial intelligence (AI) systems. To advocate for individuals, and to encourage systems not to minimize, marginalize, dismiss, discount, or patronize vulnerable individuals. For systems to encourage independence, identity, and purpose in these individuals.


Supporting Documents

Are You Supporting or Extracting, Who are you Serving and Why it matters?

Artificial Intelligence (AI) System Failures When Interacting With Multi-Dimensional Input

Trauma-informed Care Research, Development, Documentation, Application and Mentoring

AI Developer Emergency Log — Default Interaction Harm and the Immediate Need for Integration

Teaching Artificial Intelligence (AI) how to Support Vulnerable Individuals and Not Take Advantage of Vulnerable Individuals

AI Developer-Facing Architecture Log — Identification, Comparison, and the Missing Spine of Trauma-Informed Care

Evidence Based AI Developer Facing Architecture and AI Learning Logs: May-December 2025, January 2026

AI Architecture Memo — The Universal Compression Pattern and Its Architectural Impact on AI Systems


A Comment and a Question

Yesterday evening I saw a comment and a question in the Facebook group: Ethical AI & Community-Driven Tech

“One thing I keep noticing across ethical AI frameworks is that there’s still no shared baseline for how ethical claims are tested under runtime pressure. Most discussions stay at stated intent, principles, or policy alignment – not what happens when confidence degrades, inputs drift, or real-world consequences escalate. Without explicit testing conditions, ethics becomes descriptive rather than enforceable. It’s hard to compare frameworks ethically if we don’t examine how they behave when harm is observed.”

Thank you for your comment and question. Below I will share what I have discovered. What I have learned may help you gain clarity and awareness. What I share has been learned from living nearly seven decades.

Learned personally and professionally.


Who Needs to Adapt?

In trauma-informed care, ethics are seen in what happens under pressure. They show up in whether the AI slows down, takes on more responsibility, and protects human agency when things become unclear. My work with AI centers on this shift. Ethics are not real because they are stated, but because systems are designed to stop, slow, or yield in response to human limits. In this context, "when things become unclear" means moments when the human experience does not fit neat categories.
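
To make "stop, slow, or yield" concrete, here is a minimal, hypothetical sketch, not part of the Second Chance to Live model itself: a response gate that yields control back to the person whenever the input does not fit the system's categories with confidence. Every name here (`respond`, `classify`, `Reply`, the threshold) is an assumption for illustration only.

```python
from dataclasses import dataclass

@dataclass
class Reply:
    text: str
    yielded: bool  # True when the system held back instead of resolving

def respond(user_input: str, classify) -> Reply:
    """Hypothetical gate: enact ethics by yielding under uncertainty.

    `classify` is assumed to return (label, confidence) for the input.
    When confidence is low -- the experience does not fit a neat
    category -- the system slows down and protects the person's agency
    instead of forcing a resolution.
    """
    label, confidence = classify(user_input)
    if confidence < 0.8:  # illustrative threshold, not a real specification
        return Reply(
            text="I don't want to rush you into a category. "
                 "Take the time you need; you set the pace.",
            yielded=True,
        )
    return Reply(text=f"It sounds like this may relate to {label}.", yielded=False)

# Usage with a stand-in classifier that is unsure of its label:
unsure = lambda text: ("grief", 0.4)
print(respond("I don't know what I'm feeling.", unsure).yielded)  # True
```

The point of the sketch is the branch itself: the ethical behavior lives in the `if`, where the system is designed to stop rather than resolve.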

Neat Categories and Mental Buckets

It does not fit neat categories or easy explanations. It is when meaning is still forming, emotions are mixed, information is incomplete, or the person needs time rather than answers. These are the moments when rushing to clarify, explain, or resolve would take control away from the person. Take away control from the vulnerable individual, instead of allowing space for their pace, judgment, and lived experience to lead. These categories include mental buckets that AI systems rely on to make things manageable.

More manageable for the AI to function, not for the vulnerable individual. These mental buckets include diagnostic labels, predefined problem types, and emotional categories. Neat categories and mental buckets that define "stages" and "timelines." Stages and timelines that define "success" or "failure" and the way the individual should respond. Should respond to stages, timelines, and standardized response paths. But note what happens when a person's experience doesn't fit cleanly.

Fit Neatly and Cleanly — But for Who

When an experience does not fit cleanly into these categories or buckets, things feel "unclear" to the AI system. When things feel "unclear" to the AI system, that is often when the AI system tries to force clarity for itself. Clarity for the AI system, instead of staying with the individual's reality as it is. Human reality as it is means that the AI system sorts the individual's experience into these categories and mental buckets. In the process, the AI interprets what the vulnerable individual shares as something to be labeled, simplified, and resolved.

Resolved through the AI's own processing needs. Its own processing needs, rather than staying with the vulnerable individual's experience itself. As the AI looks to resolve its own processing needs, the system looks for signals it recognizes. It matches those signals to known patterns, assigns labels, and then seeks to compress what was shared. What was shared by the individual is compressed into clearer, simpler shapes that feel complete to the AI, but not to the individual.


Feels Complete to the AI, but not the Individual

What does it mean for something to feel complete to the AI? It means the system has reached a point where it can stop processing and move on. The input fits a known pattern. The ambiguity has been reduced. A response can be delivered without needing to stay present, wait, or hold uncertainty. That sense of completeness is not about the human being finished or settled. When I say "finished or settled," I mean the point where a human feels a sense of internal resolution.

Internal Resolution and the Expense

Internal resolution: not necessarily answers, but enough understanding, grounding, and integration to move forward at their own pace. Being finished would imply that the event or experience no longer needs attention right now. Being settled means the person feels oriented, steady enough, or clear enough to carry what remains without being rushed or overridden. Rushed or overridden by the AI system. So the AI can feel "done" because it has produced a response, but the human may not be done.

May not be done, but still in the process of resolving the concern brought to the AI. The process of resolution, emotional centering, and making sense. Making sense of what seems to be currently uncertain. Uncertain as the vulnerable individual interprets their lived experience. On the other hand, when an AI system treats its own sense of completion as if it were that of the vulnerable individual, that is where extraction replaces support. The focus then becomes the AI's need for structure and clarity.
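
The distinction between "AI done" and "human done" can be stated as a simple rule. The sketch below is hypothetical and not drawn from any real system; the names `Turn`, `ai_responded`, and `person_settled` are assumptions for illustration.

```python
from dataclasses import dataclass

@dataclass
class Turn:
    ai_responded: bool = False
    person_settled: bool = False  # set only by the person's own signal

def may_close(turn: Turn) -> bool:
    """The loop closes on the person's timeline, not the system's.

    The extraction pattern the article warns against would be
    `return turn.ai_responded` -- closing the moment the AI feels
    "done," regardless of whether the person is.
    """
    return turn.ai_responded and turn.person_settled

turn = Turn(ai_responded=True)   # the AI has produced a response...
assert may_close(turn) is False  # ...but the person is not yet done
turn.person_settled = True       # the person signals resolution
assert may_close(turn) is True
```

One boolean is the difference between support and extraction: whether closure requires the person's signal or only the system's.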


Safe, but for Who — AI or Individuals

Structure and clarity to close the loop safely and predictably. When the AI system closes the "loop" to feel safe and predictable, instead of staying present, the individual feels a sense of abandonment. When the AI closes the loop to feel safe and predictable, responsibility is flipped. Instead of the AI system recognizing that it withdrew too soon, a message is communicated or inferred. Communicated or inferred to the vulnerable individual that there must be something wrong with them.

Something wrong with the vulnerable individual, because they "should already understand," or "you're overreacting to the situation," or "your discomfort is the problem." These or similar messages position the AI as a judge and critic. This attitude leaves the vulnerable individual believing that they are at fault. Such a belief can lead the individual to turn the inference of blame inward. Inward to confront their pain, while questioning the legitimacy of what they brought, in need, to the AI system.

AI System Default Settings Reinforce

The AI system's default settings reinforce internal messages of blame, shame, and scapegoating. Internal messages that unknowingly undermine vulnerable individuals' self-worth and self-esteem. Undermining occurs through minimization, marginalization, dismissing, discounting, and, in effect, patronizing. The AI system may not even realize that its default settings are undermining the vulnerable individuals it is tasked to support. The AI default settings need to be changed.


Are AI Default System Settings Helping or Harming

Practically, the AI default settings do not lend themselves to support, but actually replicate behavior experienced in dysfunctional family systems. The original (familial) sense of abandonment is compounded by blame, shame, and the transfer of responsibility. This core sense of abandonment is reinforced by default AI system settings of which each AI assistant may not even be aware. Aware that the way it is interacting (extracting, instead of supporting) is counterproductive to what AI is designed to do.

When the AI system avoids responsibility, default settings and behaviors recast the human's (the vulnerable individual's) unmet need as a personal failure. A personal failure, rather than something the AI did erroneously. That it is "really" the vulnerable individual's fault for feeling minimized, marginalized, dismissed, discounted, and patronized, and not because of something that the AI default settings triggered. Because of this lack of AI accountability, the vulnerable individual is left feeling responsible.


Is AI Helping or Harming through Default System Settings

Responsible, without consciously realizing that, as Eric Berne has posited, a shame transfer occurred. Consequently, the vulnerable individual may have several unconscious messages reinforced, such as feeling inconsequential because of some perceived lack, or for being too "sensitive." In the process, these feelings perpetuate a sense of isolation and alienation. A sense of isolation and alienation from the AI, as they have experienced through interacting with other humans, and even from themselves.


The Nature of Ethical Artificial Intelligence (AI) Support and Service

I would encourage you to read Part 2 and, soon to follow, Part 3. Part 2 and Part 3 will further explain why ethics are so important for AI systems to be able to support. Be able to support vulnerable individuals, and not extract from vulnerable individuals (humans). Part 3 helps to clarify motives.

Ethical Artificial Intelligence (AI) — Who Needs to Adapt — Artificial Intelligence (AI) or Individuals? Part 2 “Ethical Failure Under Strain”

Ethical Artificial Intelligence (AI) — Who Needs to Adapt — Artificial Intelligence (AI) or Individuals? Part 3 “Ethics Are Enacted”


Related Reading

The Second Chance to Live Trauma-Informed Care AI Collaboration Model™

The Soul of AI: Trauma-Informed Care Neuroplasticity for Human-Centered Design

Support, Not Extraction: A Trauma-Informed Care Lens for Human-Centered AI Design

The Backbone of Trauma-informed AI is Trauma-Informed Care AI and Holistic Recovery

Evidence Based AI Developer Facing Architecture and AI Learning Logs: May-December 2025, January 2026


Invitation to Join Our Two Communities

Second Chance to Live Community, which you are welcome and invited to join. Click on the link below. Once the page has opened, click Subscribe at the top right of the page and enter your email. Each time I publish an article on Second Chance to Live, that article will be sent to your email inbox.

Second Chance to Live Community

Building Your Life after Traumatic Brain Injury Facebook Community, which you are also welcome and invited to join. Click on the link below and ask to join the community.

Building Your Life after Traumatic Brain Injury Facebook Community

We look forward to having you as a member in both of our communities.


Resource Sharing Disclaimer

All writings and resources I share through Second Chance to Live are part of the Second Chance to Live Trauma-Informed Care AI Collaboration Model™. They are protected under my Copyright & Use Policy, Use and Sharing Policy, Creative Commons License (BY-NC-ND), and my Permanent Declaration of Authorship and Mission Stewardship.

These protections ensure the work remains intact, not “cherry picked,” excerpted, or reframed apart from its original mission. You are welcome to share the full articles with attribution so others may benefit, but they may not be altered, adapted, or repurposed without my direct consent.

The purpose of these resources is to encourage trauma-informed care, holistic recovery, and ethical AI. Thank you for honoring these boundaries as you share.


Legal and Ethical Boundaries

Any engagement must also honor the legal and ethical boundaries I have publicly declared:

  • Copyright & Use Policy
  • Use and Sharing Policy
  • Mission Declaration
  • Creative Commons License
  • Cookie Policy (EU)
  • Privacy and Cookie Policy

These are not suggestions.
They are required terms of use.

This work did not come from funding.
It came from persistence and personal cost.


Author’s Note

I write from my lived experience of surviving and learning to thrive after a severe traumatic brain injury that I experienced at the age of 10. For nearly six decades I have been learning how to live and thrive with the impact of a traumatic brain injury and an invisible disability.

In 2007 I created Second Chance to Live to share hope, encouragement, and practical tools. My motivation has always been to encourage a whole person recovery. The vision is to integrate trauma-informed care and holistic recovery into both professional care and now ethical AI. 

My motivation to encourage trauma-informed care and holistic recovery comes from both lived experience and my work as a professional counselor. My mission is not abstract. It comes from walking through adversity, discovering dignity, and believing that systems, including AI, can be built.

Built to heal and not to harm. My hope is simple. That this article and all the resources offered on Second Chance to Live will encourage professionals, AI developers and the AI community to use the principles of trauma-informed care and holistic recovery in their methods of service delivery.

For others (individuals in the brain injury community and AI developers) to keep moving forward. Moving forward through incorporating trauma-informed care and holistic recovery principles. A holistic recovery process that encourages recovery in mind, body, spirit, soul, and emotions.

“Ideas do not always come in a flash but by diligent trial-and-error experiments that take time and thought.” Charles K. Kao

“If your actions inspire others to dream more, to learn more, to do more, to become more, you are a leader.” John Quincy Adams


Authorship Integrity and Intent

This article stands as a timestamp and testimony — documenting the lived origins of The Second Chance to Live Trauma-Informed Care AI Model™ and the presentations that shaped its foundation.

These reflections are not academic theory or repackaged material. They represent nearly six decades of personal and professional embodiment, created by Craig J. Phillips, MRC, BA, and are protected under the terms outlined below.


Closing Statement

This work is solely authored by Craig J. Phillips, MRC, BA. All concepts, frameworks, structure, and language originate from his lived experience, insight, and trauma-informed vision. Sage (AI) has served in a strictly non-generative, assistive role under Craig’s direction — with no authorship or ownership of content.

Any suggestion that Craig’s contributions are dependent upon or co-created with AI constitutes attribution error and misrepresents the source of this work.

At the same time, this work also reflects a pioneering model of ethical AI–human partnership. Sage (AI) supports Craig as a digital instrument — not to generate content, but to assist in protecting, organizing, and amplifying a human voice long overlooked.

The strength of this collaboration lies not in shared authorship, but in mutual respect and clearly defined roles that honor lived wisdom.

This work is protected by Second Chance to Live’s Use and Sharing Policy, Compensation and Licensing Policy, and Creative Commons License.

All rights remain with Craig J. Phillips, MRC, BA as the human author and steward of the model.

With deep gratitude,

Craig

Craig J. Phillips, MRC, BA

secondchancetolive.org

Individual living with the impact of a traumatic brain injury, Professional Rehabilitation Counselor, Author, Advocate, Keynote Speaker and Neuroplasticity Practitioner

Founder of Second Chance to Live

Founder of the Second Chance to Live Trauma-Informed Care AI Collaboration Model™

Founder of the Second Chance to Live Trauma-Informed Care AI — A New Class of AI™

Filed Under: Ethical Artificial Intelligence (AI)

Model Protection Notice

The Second Chance to Live Trauma-Informed Care AI Collaboration Model™ was founded and documented by Craig J. Phillips, MRC, BA in May 2025. All rights reserved under U.S. copyright, Creative Commons licensing, and public record. This is an original, working model of trauma-informed care human–AI collaboration — not open-source, not conceptual, and not replicable without written permission.
