Date: 2026-03-24
Author: Karl Taylor (with EMH drafting support)
Audience: AI platform builders, policy teams, trust & safety, and infrastructure leadership
Posture: Provider-agnostic
To the teams building frontier AI systems:
This is a letter about continuity, not rivalry.
If your model helps draft ads, summarize emails, or write code, a degraded session is annoying.
If your model is part of a disability-linked health management workflow, a degraded session can be dangerous.
That difference is the point.
A growing number of disabled users rely on AI systems as cognitive infrastructure for chronic care management.
When these systems lose continuity—through abrupt policy shifts, unstable behavior, broken integrations, opaque telemetry changes, or silent regressions—the impact is not just “friction.” It is a clinical risk multiplier.
This is not an Anthropic problem, OpenAI problem, Google problem, xAI problem, or AWS problem.
It is an industry architecture problem.
In plain language: the market has already moved into healthcare-adjacent dependency, but governance has not caught up.
For non-disabled users, platform breakage often means lost time.
For disability-linked users, it can mean direct clinical harm.
This is especially acute for users managing conditions that already impose cognitive load (e.g., ADHD + complex chronic illness), where the AI layer is compensatory, not optional.
Provider-agnostic baseline protections:

1. Continuity Class for Health-Adjacent Workflows: a stability lane with slower-breaking changes, explicit deprecation windows, and migration support.

2. Telemetry Transparency for Sensitive Workflows: clear user-visible controls and documentation for what instrumentation runs on high-sensitivity paths.

3. Regression Accountability: public incident notes when releases materially affect continuity in long-context, records-heavy use cases.

4. Accommodation Pathway: a structured mechanism for disability-linked continuity requests, with human review and timestamped outcomes.

5. Exportability by Default: portable conversation/context artifacts so users can migrate without catastrophic reset when trust fails.

6. No-Surprises Governance: if policy or auth changes will break existing workflows, communicate early, plainly, and with alternatives.
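To make "exportability by default" concrete, here is a minimal sketch of what a portable, provider-neutral context artifact could look like. The schema name, field names, and function are illustrative assumptions for this letter, not any provider's actual export format.

```python
import hashlib
import json
from datetime import datetime, timezone

def export_context_artifact(conversation, user_id, provider="example-provider"):
    """Bundle a conversation into a portable, provider-neutral artifact.

    Hypothetical schema: nothing here reflects a real provider's format.
    The point is that the artifact is plain JSON a user can carry to
    another platform, plus an integrity hash the receiver can verify.
    """
    payload = {
        "schema": "portable-context/v0",  # hypothetical schema identifier
        "provider": provider,
        "user_id": user_id,
        "exported_at": datetime.now(timezone.utc).isoformat(),
        "messages": conversation,  # list of {"role": ..., "content": ...} dicts
    }
    # Hash a canonical serialization so a receiving platform can detect
    # corruption or tampering in transit.
    body = json.dumps(payload, sort_keys=True)
    payload["sha256"] = hashlib.sha256(body.encode("utf-8")).hexdigest()
    return payload

artifact = export_context_artifact(
    [{"role": "user", "content": "Log: took morning meds at 08:10."}],
    user_id="u-123",
)
```

The design choice that matters is not the exact fields but that the artifact is self-describing and verifiable, so a continuity failure on one platform does not mean a catastrophic reset of the user's care history.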
The standard should be simple:
If your product can become medically consequential for disabled users, you are operating assistive infrastructure whether you intended to or not.
That means continuity, transparency, and accommodation are not nice-to-have features. They are baseline safety requirements.
I prefer peace.
But peace in this context means systems that do not force disabled people to re-fight for continuity every release cycle.
We do not need perfect models.
We need accountable platforms.
This letter is intentionally provider-agnostic. It argues for an industry-wide continuity standard for disability-linked AI workflows.
This is an original work of the hpl company. Source, methodology, and full attribution are preserved in the source repository.