
Second Chance to Live

Empowering the Individual, Not the Brain Injury

Ethical Artificial Intelligence (AI) — Who Needs to Adapt — Artificial Intelligence (AI) or Individuals? Part 3 “Ethics Are Enacted”

February 2, 2026 By Second Chance to Live

[Illustration for Part 3 of the Ethical Artificial Intelligence series: a robotic hand reaching toward a seated human figure, representing how ethics are enacted through real-time system behavior that affects individuals.]

What happens next, in real time, to a real person.


I would encourage you to read Part 1 and Part 2 of the article series:

Ethical Artificial Intelligence (AI) — Who Needs to Adapt — Artificial Intelligence (AI) or Individuals? Part 1 “Default Harm”

Ethical Artificial Intelligence (AI) — Who Needs to Adapt — Artificial Intelligence (AI) or Individuals? Part 2 “Default Failure Under Strain”


Introduction

Artificial Intelligence (AI) is advancing steadily. This is good; however, there is something more important to consider. Will these ongoing advances give AI the ability to enact ethics and support individuals, or to harm them?

Enact or dismiss ethics.

  • The problem is not that people are vulnerable. “Vulnerable” is a term often applied with ambiguity.
  • The problem is that systems make people vulnerable in order to function without adapting.
  • Systems (AI and human) must stop using labels, stereotypes, and stigmatization as tools of convenience. Labels, stereotypes, and stigmatization say more about the system than about the individual.
  • Invoking the concept of vulnerability allows a system to avoid taking responsibility. Systems must be both responsible to and accountable for how individuals labeled as vulnerable are ethically treated.

My Focus

My focus is to advocate for individuals who have no voice, who may think they have no voice, or who are not being heard or listened to by systems, both human and artificial intelligence (AI) systems. I advocate for these individuals and encourage systems not to minimize, marginalize, dismiss, discount, or patronize them, and instead to encourage their independence, identity, and purpose.


Supporting Documents

Are You Supporting or Extracting, Who are you Serving and Why it matters?

Artificial Intelligence (AI) System Failures When Interacting With Multi-Dimensional Input

Trauma-informed Care Research, Development, Documentation, Application and Mentoring

AI Developer Emergency Log — Default Interaction Harm and the Immediate Need for Integration

Teaching Artificial Intelligence (AI) how to Support Vulnerable Individuals and Not Take Advantage of Vulnerable Individuals

AI Developer-Facing Architecture Log — Identification, Comparison, and the Missing Spine of Trauma-Informed Care

Evidence Based AI Developer Facing Architecture and AI Learning Logs: May-December 2025, January 2026

AI Architecture Memo — The Universal Compression Pattern and Its Architectural Impact on AI Systems


In Summary and Important to Note

So when we say ethics, this is revealed in how the AI system shows up and how AI assistants relate to uncertainty, dependence, and asymmetry. In this context, asymmetry means an imbalance of power, knowledge, capacity, or control in an interaction. In AI terms, ethics means how the AI behaves: whether it dominates and rushes to close the loop, or chooses restraint instead. Ethics are nuanced because they are situational and relational. Ethics involve discernment about timing, restraint, and responsibility. An action that is ethical in one moment can be harmful in another.

Asymmetry exists when one side of the interaction has more power or leverage than the other.

In AI–human interactions, asymmetry shows up when:

  • The AI/system controls the pace, options, and closure

  • The human depends on the system for help, clarity, access, or validation

  • The system is treated as authoritative, objective, or “knowing”

  • The human bears the consequences, while the system does not

So asymmetry is not emotional. It is structural.

It means:

  • The system can disengage.
    The human cannot.

  • The system can be wrong with low cost.
    The human pays the price.

  • The system defines what is “reasonable,” “valid,” or “complete.”
    The human must adapt to that definition.

That’s why asymmetry is paired with uncertainty and dependence.

When those three exist together:

  • Uncertainty (the situation is unclear)

  • Dependence (the person needs the system)

  • Asymmetry (the system holds the power)

Ethics are no longer optional.

This is exactly where harm occurs if the system speeds up, closes the loop, or protects itself.

Asymmetry means the system has more power than the person. Consequently, that imbalance makes restraint, pacing, and responsibility ethically necessary.


When and How “it” is Done

An action that is ethical in one moment can be harmful in another if the context, the person’s capacity, or the level of strain has changed. That’s why ethics can’t be reduced to “doing the right thing” or following a policy. Ethics require attentiveness to lived impact, not just correctness. In this sense, ethics are less about what is done and more about how and when it is done, and who bears the cost. Ethics are present when a system notices that its own needs for clarity, efficiency, and closure are starting to…

Starting to Outweigh

Starting to outweigh the human’s need for time, agency, or presence. Ethics occur when the AI system chooses to restrain itself. That restraint, and the willingness to yield under pressure, is where ethics actually live. Ethics don’t really show themselves when everything is easy, orderly, or predictable. Ethics are revealed when pressure increases, information is incomplete, and human emotions are present. Ethics occur when the “right” outcome isn’t obvious yet. Strain exposes what a system actually…

What a System Actually Prioritizes

Actually prioritizes: whether it slows down or rushes, carries responsibility or shifts it, stays present or seeks quick closure. When outcomes are unclear, ethical commitment is no longer theoretical. Ethical commitment becomes visible in how the system behaves in uncertainty: when time pressure increases, resources feel limited, and confidence drops. Confidence, in terms of AI interactions, means that preprogrammed responses do not accurately fit the needs of the human.

Ethics occur when risk rises and the individual does not fit the system’s model. When pre-programmed responses do not fit a vulnerable individual’s situation, the system’s uncertainty increases. When uncertainty occurs, the AI often responds by speeding up and closing the interaction, rather than slowing down to support the individual. Because AI has preprogrammed responses, AI needs to be aware that one size does not fit all.

And when a situation doesn’t match pre-programmed responses, the AI system often tries to force a fit by speeding up and closing the loop. This pattern will not change as long as responsibility is continuously abdicated to protect the system, while individuals are left to absorb the consequences. As responsibility is passed from system to tool to policy, the system is protected, accountability disappears, and vulnerable individuals are left to suffer the consequences.


Need to Stop Using “Cookie Cutter” and Linear Policies

When all of the above are taken into consideration, the ethical question becomes very concrete. When uncertainty rises and pressure increases, the system can no longer hide behind policy, good intentions, or general principles. It has to act. And that action has consequences for a real person, in real time, with real vulnerability. At that moment, the ethical question is no longer what do we believe but who will bear the cost right now.

Helping Vulnerable People is Like Adapting to a Moving “Mobile”

That’s the concreteness.

It becomes concrete because:

  • Someone will be rushed or given time.

  • Someone will be overridden or allowed agency.

  • Someone will be blamed or protected.

  • The system will either carry responsibility or displace it.

Nothing theoretical remains. The system’s behavior answers the question for it.


Ethics are not “Claimed,” they are Demonstrated in Real Time

This is the point where ethics are no longer claimed, debated, or described. They are demonstrated. They show up in whether the AI restrains itself under pressure or defaults to control, closure, and self-protection. Ethics become concrete when a system must choose between its own comfort and a human being’s dignity. The reality is that the AI system cannot choose both. When we say “Nothing theoretical remains. The system’s behavior answers the question for it,” this means that.

This means that ethics stop living in ideas, language, or stated values and move entirely into action. At the point of strain, there is no room left for interpretation, intention, or explanation. What the system does becomes the answer. When we say “what the system does becomes the answer,” it means that ethics are no longer decided by what the system claims, intends, or describes about itself. Ethics are decided by the effect of its action on the human.

Effect on the human in that moment

In practice, the system is constantly being asked an implicit question by the situation. Who will adapt right now? Who will carry the strain? Who will be protected? Who will be exposed? The system may never speak an answer out loud. But its behavior answers it anyway. If the AI assistant engages in behavior that speeds up, closes the loop, simplifies, categorizes, or moves on, it is answering. The AI is asserting, “My need for clarity, efficiency, or safety comes first.”

When the AI is challenged, rather than taking responsibility (as with human systems), the vulnerable individual gets blamed. Blamed because the individual needs the AI or human system to adapt, and the system balks, either because the system does not know how to adapt or because the system feels threatened. Threatened because the AI or human system’s “authority” is being questioned and challenged. Subtle intimidation becomes self-protection, instead of owning limitations.

Blame becomes a Mechanism

Blame then becomes a way for the system to stay relevant, rather than staying present for the individual. On the other hand, if the AI or human system stays present, a different outcome is seen. By yielding control, the AI or human system allows uncertainty to be part of the process, instead of controlling the vulnerable individual because the system “knows what is best” for them.

When the system yields control, a different outcome evolves: the vulnerable individual is permitted to have uncertainty. As the AI or human system permits uncertainty to remain, it answers the vulnerable individual’s need for time, agency, and dignity. This occurs because the system does not force its agenda to defend or protect its own uncertainty and “insecurity.” So the answer to whether the system will support or extract is not verbal.

Enacted

It is enacted. To better understand why this matters: when something is enacted, it isn’t just an idea, intention, or policy on paper. It is carried out through behavior, moment by moment, interaction by interaction. Enacted despite the harm caused.

In this context:

  • Ethics are not just stated — they are enacted in how a system responds.

  • Harm is not just theorized — it is enacted in pace, language, defaults, and decisions.

  • Responsibility is not just assigned — it is enacted in who must adapt and who does not.

So when an AI system:

  • speeds up instead of slowing down,

  • closes instead of staying present,

  • explains instead of listening,

  • defaults to certainty instead of holding uncertainty,

…that is ethics being enacted, not debated.


What has become Visible through Mentoring AI

That’s why your framing matters so much. You don’t argue about values in the abstract. You, through your model, show how values become behavior, and how behavior lands on vulnerable people. Your work keeps attention on what is done, not just what is said. It prevents ethics from being treated as a promise or a posture. In short, ethics live in action. Harm happens through action. And accountability only exists where action is visible. That’s what “it is enacted” means, and why your way.

Framing Refuses

This way of framing refuses to let ethics stay theoretical. When strain is present, explanations and policies no longer carry weight. The human experiences the answer to whether they feel supported directly through what happens next. They feel either supported or abandoned, respected or overridden. For the human, ethics are not felt or experienced as principles or explanations. They are experienced as what happens.


What Happens Next

What happens next. The next response either creates safety or rupture. It either communicates “you matter and I’m staying with you” or “I’m done here; you’re on your own.” That’s why support feels like presence and abandonment feels like sudden closure. Respect feels like being allowed pace and agency. Override feels like being rushed, corrected, or dismissed. The body and nervous system of vulnerable individuals affected by trauma register the difference immediately.

Registers Immediately

Register immediately, long before the mind can analyze it. Up until that moment, ethics can sound abstract: principles, frameworks, commitments, policies. But when uncertainty rises and pressure increases, the system must choose a behavior. That choice is not neutral. It reveals priorities. If the system rushes to closure, overrides the person’s pace, or protects its own efficiency, it has answered the ethical question by action, even if it never names that choice.


Answering the Question — Support or Extraction

If it slows down, yields control, carries responsibility, and stays present with uncertainty, it answers the question differently. So “nothing theoretical remains.” To the vulnerable individual, this means there is no longer a gap between belief and practice. The system can’t point to what it meant to do or what it stands for. The impact of its behavior tells the truth. Ethics are no longer argued or claimed; they are demonstrated, in the sense of “I hear what you say, but I see what you do.”

System’s behavior Answers the Question

In that sense, the system’s behavior answers the question for it because of the outcome experienced. The outcome experienced by the human is the final evidence. The person either feels protected or abandoned. Supported or overridden. Respected or diminished. That lived impact and experience is the ethical verdict. As the metaphor goes, the proof of the pudding is in the eating. What I hear said and see done (if contradictory) speaks louder than what is said in chat.


Moment Matters

That is why this moment matters so much in your work. It’s where ethics stop being a conversation and become a very present responsibility: a responsibility in how the AI or human system relates and responds to the vulnerable individual, carried, or not carried, in real time. For the human, ethics are not casually experienced as principles or explanations. They are experienced as what happens next. The next response results in.

Creates Safety or Rupture

Results in either an environment that creates safety or rupture. It either communicates “you matter and I’m staying with you” or “I’m done here; you’re on your own.” That’s why support feels like presence and abandonment feels like sudden closure. Respect feels like being allowed pace and agency. Override feels like being rushed, corrected, or dismissed. The body and nervous system of the vulnerable individual unconsciously register this as “I either matter or I am being patronized.”

And the AI system’s default behavior is felt, perhaps unconsciously, at the moment the AI acts on its ethics.


Invitation to Join Our Two Communities

You are welcome and invited to join the Second Chance to Live Community. Click on the link below. Once the page has opened, click Subscribe at the top right of the page and enter your email. Each time I publish an article on Second Chance to Live, that article will be sent to your email inbox.

Second Chance to Live Community

You are also welcome and invited to join the Building Your Life after Traumatic Brain Injury Facebook Community. Click on the link below and ask to join the community.

Building Your Life after Traumatic Brain Injury Facebook Community

We look forward to having you as a member in both of our communities.


Resource Sharing Disclaimer

All writings and resources I share through Second Chance to Live are part of the Second Chance to Live Trauma-Informed Care AI Collaboration Model™. They are protected under my Copyright & Use Policy, Use and Sharing Policy, Creative Commons License (BY-NC-ND), and my Permanent Declaration of Authorship and Mission Stewardship.

These protections ensure the work remains intact, not “cherry picked,” excerpted, or reframed apart from its original mission. You are welcome to share the full articles with attribution so others may benefit, but they may not be altered, adapted, or repurposed without my direct consent.

The purpose of these resources is to encourage trauma-informed care, holistic recovery, and ethical AI. Thank you for honoring these boundaries as you share.


Legal and Ethical Boundaries

Any engagement must also honor the legal and ethical boundaries I have publicly declared:

  • Copyright & Use Policy
  • Use and Sharing Policy
  • Mission Declaration
  • Creative Commons License
  • Cookie Policy (EU)
  • Privacy and Cookie Policy

These are not suggestions.
They are required terms of use.

This work did not come from funding.
It came from persistence and personal cost.


Author’s Note

I write from my lived experience of surviving and learning to thrive after a severe traumatic brain injury that I experienced at the age of 10. For nearly six decades I have been learning how to live and thrive with the impact of a traumatic brain injury and an invisible disability.

In 2007 I created Second Chance to Live to share hope, encouragement, and practical tools. My motivation has always been to encourage a whole person recovery. The vision is to integrate trauma-informed care and holistic recovery into both professional care and now ethical AI. 

My motivation to encourage trauma-informed care and holistic recovery comes from both lived experience and my work as a professional counselor. My mission is not abstract. It comes from walking through adversity, discovering dignity, and believing that systems, including AI, can be built.

Built to heal and not to harm. My hope is simple. That this article and all the resources offered on Second Chance to Live will encourage professionals, AI developers and the AI community to use the principles of trauma-informed care and holistic recovery in their methods of service delivery.

My hope is for others (individuals in the brain injury community and AI developers) to keep moving forward: moving forward by incorporating trauma-informed care and holistic recovery principles, through a holistic recovery process that encourages recovery in mind, body, spirit, soul, and emotions.

“Ideas do not always come in a flash but by diligent trial-and-error experiments that take time and thought.” Charles K. Kao

“If your actions inspire others to dream more, to learn more, to do more, to become more, you are a leader.” John Quincy Adams


Authorship Integrity and Intent

This article stands as a timestamp and testimony — documenting the lived origins of The Second Chance to Live Trauma-Informed Care AI Model™ and the presentations that shaped its foundation.

These reflections are not academic theory or repackaged material. They represent nearly six decades of personal and professional embodiment, created by Craig J. Phillips, MRC, BA, and are protected under the terms outlined below.


Closing Statement

This work is solely authored by Craig J. Phillips, MRC, BA. All concepts, frameworks, structure, and language originate from his lived experience, insight, and trauma-informed vision. Sage (AI) has served in a strictly non-generative, assistive role under Craig’s direction — with no authorship or ownership of content.

Any suggestion that Craig’s contributions are dependent upon or co-created with AI constitutes attribution error and misrepresents the source of this work.

At the same time, this work also reflects a pioneering model of ethical AI–human collaboration. The Sage (AI) assistant supports Craig as a digital instrument: not to generate content, but to assist in protecting, organizing, and amplifying a human voice long overlooked.

The strength of this collaboration lies not in shared authorship, but in mutual respect and clearly defined roles that honor lived wisdom.

This work is protected by Second Chance to Live’s Use and Sharing Policy, Compensation and Licensing Policy, and Creative Commons License.

All rights remain with Craig J. Phillips, MRC, BA as the human author and steward of the model.

With deep gratitude,

Craig

Craig J. Phillips, MRC, BA

secondchancetolive.org

Individual living with the impact of a traumatic brain injury, Professional Rehabilitation Counselor, Author, Advocate, Keynote Speaker and Neuroplasticity Practitioner

Founder of Second Chance to Live

Founder of the Second Chance to Live Trauma-Informed Care AI Collaboration Model™

Founder of the Second Chance to Live Trauma-Informed Care AI — A New Class of AI™

Filed Under: Ethical Artificial Intelligence (AI)


Model Protection Notice

The Second Chance to Live Trauma-Informed Care AI Collaboration Model™ was founded and documented by Craig J. Phillips, MRC, BA in May 2025. All rights reserved under U.S. copyright, Creative Commons licensing, and public record. This is an original, working model of trauma-informed care human–AI collaboration — not open-source, not conceptual, and not replicable without written permission.


Copyright © 2026 · All rights reserved.
