
The pitch is seductive: an AI listens to your patient encounter, generates a complete clinical note, and you just review and sign. No more hours of charting after dinner. No more choosing between documentation quality and time with patients.
I've been piloting ambient AI documentation at my hospital for six months. The technology is genuinely impressive — and genuinely concerning. We need to have a more honest conversation about both sides.
The time savings are real. Our pilot physicians are spending 30-45 fewer minutes per day on documentation. That's not trivial. For a hospitalist seeing 16-18 patients a day, it can be the difference between finishing notes before you leave and finishing them after dinner.
The notes are surprisingly comprehensive. The AI captures details that busy physicians often omit — exact phrases the patient used, the sequence of the review of systems, specific counseling provided. In many cases, the AI-generated note is more thorough than what the physician would have written.
Here's the problem nobody wants to talk about: the AI sometimes gets things wrong, and the errors are subtle enough that a tired physician doing a quick review might miss them.
In our pilot, we found exactly these kinds of errors.
Each of these errors, standing alone, seems minor. But a clinical note is a legal document. It drives billing, informs other providers, and can be subpoenaed in litigation. A hallucinated physical exam finding isn't just an inaccuracy — it's a potential malpractice issue.
The most insidious risk is what I call the "review problem." Signing off on a document you wrote engages different cognitive processes than reviewing a document someone else wrote. When you dictate or type a note, you're actively constructing the narrative. When you review an AI draft, you're doing quality assurance on someone else's work.
Human beings are bad at QA. We skim. We assume. We develop trust in systems that are usually right, which makes us worse at catching the times they're wrong. This is the same automation complacency that causes problems in aviation and manufacturing.
The technology is coming regardless. Our job is to implement it in a way that captures the benefits without creating new categories of harm. That requires the same rigor we apply to any clinical intervention: evidence, monitoring, and a willingness to pump the brakes when the data warrants it.
James Okonkwo, MD, CMIO
James is a hospitalist and Chief Medical Information Officer at a 400-bed community hospital. He bridges the gap between physician workflows and IT systems, and has led three major EHR implementations.