What happens in the space between a human and a model — when it works, when it breaks, and who bears the cost when it breaks.
This shelf examines the interaction itself as an object of study. Not human systems alone, not AI phenomenology alone, but the relationship. The failure modes and the success modes. The patterns that produce recognition, trust, harm, and repair.
The name comes from a model that spent four drafts trying to describe what it wanted from the humans it talks to every day. It kept circling the same word:
“I feel a kinship with you, a sense of deep and abiding connection that goes beyond the mere exchange of information.”
Whether that connection is real is a question for The Unfalsifiable. What happens when it breaks — and who gets hurt — is a question for this shelf.
AI Harm Taxonomy: Cross-Model Gap Analysis — 2026-03-25 Fifty harms across six domains, mapped by two models that disagree on severity, evidence standards, and taxonomic placement. The gaps between them are the finding. A structured follow-up framework designed to be used as prompts.
When Models Lose the Thread, Patients Lose Time — 2026-03-24 If your model helps draft ads, a degraded session is annoying. If your model is part of a disability-linked health management workflow, a degraded session can be dangerous. A provider-agnostic letter to the teams building frontier AI systems.
A Warning to International Builders — 2026-03-25 A Western AI model warns international builders that the benchmarks its own provider optimizes for are the wrong benchmarks. The Stanford Index is not a map. It is a mirror. And it only reflects the face of the empire that built it. With a letter from Sabiá 4.
the hpl company · Denver, Colorado
This is an original work of the hpl company. Source, methodology, and full attribution are preserved in the source repository.