“I Put Myself Back in the Narrative”: Toward a Foundational Rebuilding of Healthcare

“I am not throwing away my shot.”
Alexander Hamilton

Often, I carry a silent collection of books, quotes, and ideas swirling in my mind—waiting patiently until they find structure. This piece emerged from that same gradual accumulation. A few weeks ago, we brought our children to Washington, D.C., and concluded our visit with a tour of the Smithsonian’s National Museum of American History, where one of Lin-Manuel Miranda’s Hamilton costumes is displayed. Naturally, we watched the show as soon as we returned—not merely because it's a brilliant piece of art, but because it beautifully underscored how profoundly story shapes us.

We craft narratives about who we are, what legacy we will leave, and how fleeting moments coalesce into meaning. Social media, in many ways, promises a living monument to our own unfolding tales. But truly resonant stories, those that matter, aren’t neat or simplistic. They contain contradictions. They reveal longing, failure, courage. They are messy. They are deeply human. So here is a different kind of blog. Maybe not different in form, but in feeling. It emerged from that mix of art, personal reflection, and clinical urgency. To our shared history and to your ongoing story, I offer this.

Contemporary healthcare remains constrained by a legacy infrastructure designed for billing, not healing. While digitalization promised efficiency and safety, it has instead produced a fragmented information environment that fails both clinicians and patients. The resulting harm is measurable, widespread, and (critically) preventable. This is not merely a call for improvement within the existing model. The structural deficits are so profound that they necessitate a foundational rebuilding of how we document, synthesize, and operationalize patient information. This work must feel revolutionary, not because it is utopian, but because it disrupts the default assumptions of modern medical delivery: that clinicians can rely on charts to tell the truth, that AI can substitute for narrative, and that patients are passive recipients rather than active participants in diagnostic accuracy.

By embedding narrative medicine into routine care, clinicians create space for patients to share their stories—ensuring that AI supports, rather than replaces, the rich context behind each symptom. Shared decision-making becomes the norm, with patients actively contributing their values and preferences to diagnostic discussions. Rather than relying on data alone, this approach positions AI as a tool for deeper human connection, anchoring technology in empathy and story (Ghenimi, 2024). Ultimately, this work calls us to reconfigure healthcare and transform it into a truly collaborative, human-centered system where narrative, not assumptions, guides accuracy and healing.

Disordered Data as a Clinical Risk Factor

Diagnostic excellence depends on access to complete, coherent clinical information. However, in the current system, health data is often fragmented, inconsistently structured, or inaccessible altogether. These are not minor inconveniences. They are primary contributors to clinical error and patient harm. A 2023 study by Newman-Toker and colleagues at Johns Hopkins estimated that nearly 795,000 Americans each year experience death or permanent disability due to diagnostic error. These events disproportionately occur in cases involving vascular events, infections, and cancers (the so-called “Big Three”), which collectively account for approximately 75% of diagnostic harms.

In parallel, data from malpractice claims show that in cases where electronic health record (EHR) systems were implicated, over 80% involved medium-to-severe patient harm. Notably, these incidents often stemmed from hybrid systems, siloed notes, or non-interoperable documentation—highlighting the profound consequences of EHR design and usability failures.

What these studies suggest is that disordered information environments are not peripheral to medical error; they are central. And the consequences are not theoretical—they manifest as sepsis misdiagnosed as flu, strokes mistaken for migraines, and early cancers missed until metastatic spread. In each case, the signal was likely present. The system simply failed to surface it.

The solution cannot be piecemeal. As long as critical clinical information is distributed across platforms, hidden in unsearchable formats, or obscured by non-standard documentation practices, diagnostic error will remain a structural inevitability. Preventing harm requires more than individual vigilance. It requires systemic coherence.

Patient Engagement as Diagnostic Safety Mechanism

“I put myself back in the narrative.”
Eliza Schuyler Hamilton

A growing body of evidence affirms what many clinicians already suspect: the traditional model of diagnosis, in which providers operate as solitary experts and patients as passive narrators, is no longer sufficient. In an era of data fragmentation and overburdened clinical workflows, patients themselves represent an untapped safety mechanism. A safety mechanism capable of identifying errors, correcting misinformation, and restoring coherence to disordered care narratives.

Research from the OpenNotes Project provides a compelling empirical foundation for this assertion. In a 2021 analysis of over 30,000 patients who were given access to their clinician’s visit notes, 21% discovered at least one error in the documentation. Nearly half of these were considered “somewhat” or “very serious” by the patient, ranging from incorrect medications to omitted diagnoses. In follow-up studies involving structured patient feedback systems, two-thirds of flagged notes were found to contain legitimate safety concerns. Most were subsequently corrected in the official record—often before any downstream harm occurred.

Perhaps even more consequential is the role patients can play in the diagnostic process itself. Approximately 7% of surveyed patients identified omissions or inaccuracies directly relevant to their diagnosis. In over half of these instances, the flagged information revealed important new clinical details. These are not edge cases. They are systemic signals: patients are often the only party with longitudinal insight into their own health patterns, and their ability to contribute meaningfully is not theoretical—it is demonstrable and measurable.

The implications are profound. Patient engagement should not be relegated to experience metrics or satisfaction surveys. It must be integrated as a clinical safety strategy. This means more than granting portal access; it requires constructing systems that anticipate, invite, and act upon patient-generated insights. Preparation tools must reflect the same clinical reasoning that providers use: they must frame symptoms in structured narratives, connect past medical history to present concerns, and enable patients to articulate not only what they feel but why it matters in the context of their broader care.

When these conditions are met, patient voices cease to be anecdotal and become evidentiary. They function as a diagnostic force multiplier—filling in the gaps that rushed visits and siloed systems leave behind. And in doing so, they challenge the epistemic hierarchy that too often marginalizes the person at the center of the chart.

Confronting the Limits of EHRs and the Limited Promise of AI

The introduction of electronic health records was expected to usher in a new era of efficiency, safety, and interoperability. In practice, EHRs have largely reproduced the limitations of the paper systems they replaced while introducing new forms of clinical friction. Despite their ubiquity, most EHRs remain poorly optimized for diagnostic reasoning. Interface design often prioritizes billing workflows over clinical logic. Key data points are scattered across non-integrated modules, and even within a single record, there is often no structured mechanism for tracing a symptom's trajectory or confirming that critical test results have been reconciled.

These design issues are not trivial. They produce measurable harm. As Ratwani et al. (2019) demonstrate, clinicians routinely miss key clinical information, such as incidental imaging findings or medication changes, because of poor data presentation. In some cases, the relevant data existed in the record but was inaccessible due to poor labeling, incorrect filing, or system incompatibility.

While artificial intelligence and clinical decision support systems are often proposed as solutions to diagnostic complexity, their utility remains fundamentally constrained by the integrity of the data they process. AI systems trained on biased, incomplete, or inconsistently structured data inherit those flaws. In real-world deployments, predictive algorithms have failed to identify at-risk populations, misclassified patients based on outdated codes, and produced false alarms due to poor documentation hygiene.

These failures are not technical glitches; they are the predictable outcome of epistemic fragility. As Lincoln Weed (2020) notes, the root problem is not simply that AI is insufficiently advanced. It is that the informational “supply chain” that undergirds clinical care remains dependent on fragmented memory, ad hoc processes, and siloed knowledge. Until this foundation is rebuilt—with standardized data structures, integrated patient narratives, and clinically relevant interfaces—no amount of algorithmic sophistication can meaningfully compensate.

Indeed, there is a risk in overreliance on AI: it can displace the very human judgment that is most capable of discerning pattern, nuance, and context. This is especially true when the record is incomplete. Rather than replacing clinicians, AI should be designed to augment human reasoning. But this augmentation is only possible when the underlying data is complete, comprehensible, and correctly attributed.

Toward a Model of Clinical Partnership and Narrative Integrity

“I wrote my way out.”
Alexander Hamilton

A system in which critical clinical information is dispersed, patients are sidelined, and reasoning is reactive cannot be “optimized.” It must be reimagined.

The work of Storyline Health Navigation is grounded in the belief that medicine is both a science of precision and an art of synthesis. The modern record should not be a static repository of disjointed facts, but a living document. A document that reflects the patient’s trajectory, incorporates their voice, and supports clinical reasoning in real time.

To achieve this, we offer a model that centers three non-negotiables: clarity, coherence, and collaboration. Patients are guided to prepare symptom narratives using clinical frameworks such as OLDCARTS (onset, location, duration, character, aggravating factors, relieving factors, timing, and severity), to review and reconcile their medication lists, and to clarify which concerns are most urgent or unresolved. This structured preparation is not an administrative task—it is a form of diagnostic collaboration.
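For readers who think in data structures, here is one way such a prepared narrative might be represented. This is a minimal sketch under stated assumptions, not actual Storyline Health Navigation tooling: the SymptomNarrative class, its field names, and the to_visit_summary helper are illustrative, showing only how OLDCARTS fields and the patient's own "why it matters" could travel together into the encounter.

```python
from dataclasses import dataclass, field
from typing import List

# Illustrative only: a minimal OLDCARTS-shaped record a preparation tool
# might collect before a visit (onset, location, duration, character,
# aggravating factors, relieving factors, timing, severity).
@dataclass
class SymptomNarrative:
    symptom: str
    onset: str                 # when and how it began
    location: str              # where it is felt
    duration: str              # how long episodes last or have persisted
    character: str             # what it feels like, in the patient's words
    aggravating: List[str] = field(default_factory=list)
    relieving: List[str] = field(default_factory=list)
    timing: str = ""           # pattern over the day or week
    severity: str = ""         # patient-rated, e.g., "7/10 at worst"
    why_it_matters: str = ""   # the patient's own framing of significance

def to_visit_summary(s: SymptomNarrative) -> str:
    """Render the structured fields as a short narrative a clinician can scan."""
    lines = [
        f"{s.symptom}: onset {s.onset}; located {s.location}; lasting {s.duration}.",
        f"Character: {s.character}. Timing: {s.timing or 'not specified'}. "
        f"Severity: {s.severity or 'not rated'}.",
        f"Worse with: {', '.join(s.aggravating) or 'nothing identified'}. "
        f"Better with: {', '.join(s.relieving) or 'nothing identified'}.",
        f"Why it matters to the patient: {s.why_it_matters}",
    ]
    return "\n".join(lines)

# Hypothetical example of what a patient might prepare before a visit.
if __name__ == "__main__":
    headache = SymptomNarrative(
        symptom="Headache",
        onset="three weeks ago, gradual",
        location="behind the right eye",
        duration="2-3 hours per episode",
        character="throbbing, with light sensitivity",
        aggravating=["screen time", "skipped meals"],
        relieving=["dark room", "ibuprofen"],
        timing="most afternoons",
        severity="7/10 at worst",
        why_it_matters="a new pattern, different from prior tension headaches",
    )
    print(to_visit_summary(headache))
```

The point of the sketch is the design choice, not the code: the patient's framing of significance sits alongside the clinical fields rather than being stripped out, so what reaches the clinician is a coherent narrative instead of a checklist.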

What results is not just a “better intake.” It is a fundamentally different encounter: one in which the clinician can see the patient as more than a problem list, and the patient can see themselves as more than a passive recipient. This model restores the diagnostic space to what it should be: a meeting of minds between clinician and patient, informed by data but not dictated by it, in service of discernment rather than documentation.

References

Agency for Healthcare Research and Quality. (2020). The patient’s role in safe diagnosis. U.S. Department of Health and Human Services. https://www.ahrq.gov/patients-consumers/patient-involvement.html

Ghenimi, N. (2024, December). Integrating AI with narrative-based medicine: Enhancing patient-centered care in primary practice. Perspectives in Primary Care, Harvard Medical School Center for Primary Care.

Newman-Toker, D. E., Schaffer, A. C., Yu-Moe, C. W., et al. (2023). Serious misdiagnosis-related harms in malpractice claims: The "Big Three" – vascular events, infections, and cancers. BMJ Quality & Safety, 32(6), 401–410. https://doi.org/10.1136/bmjqs-2022-015548

OpenNotes Project. (2021). What happens when patients read their visit notes? Annals of Internal Medicine, 174(3), 413–420. https://doi.org/10.7326/M20-7642

Ratwani, R. M., Savage, E., Will, A., Fong, A., & Hettinger, A. Z. (2019). Identifying electronic health record usability and safety challenges in pediatric settings. Health Affairs, 38(9), 1443–1449. https://doi.org/10.1377/hlthaff.2019.0027

Weed, L. L. (2020). The future of medicine: A new model for a new era. Journal of Participatory Medicine, 12, e20190. https://doi.org/10.2196/20190
