
I used to spend about two hours a week writing product requirements documents.
Not thinking about them. Not reviewing them with clinicians. Just writing them, taking agreed-upon decisions from meetings and turning them into formatted specs that engineers could build from. It was necessary work, but it wasn't why I was hired.
I now do that in 20 minutes. I paste my meeting notes into an AI tool, describe the clinical workflow, and get a structured first draft back in seconds. I edit it, review it with the team, and move on.
Here's the thing that gets me. Ask anyone if they'd hire a plumber and they say yes immediately. Of course you hire a plumber. The plumber is better at plumbing than you are. Nobody loses sleep over that admission.
Then those same people will spend three hours manually drafting a document that an AI tool could produce in four minutes, because using AI feels like cutting corners or admitting something.
That's the contradiction. We're completely comfortable outsourcing to human specialists whose skills exceed ours. We've just been slow to apply the same logic to AI.
I will never crunch data as fast as a tool built to crunch data. I will never produce a first draft as quickly as a tool that has processed more text than I'll read in ten lifetimes. I've accepted that, the same way I've accepted that my plumber is better at pipes than I'll ever be. When I bring AI in for those tasks, I get time back. Time to do the work that actually requires a hospital pharmacist who also happens to run product teams. Or, sometimes, time to just be done for the day.
That's the framing I'd suggest for anyone in health IT trying to figure out how to think about this. Not AI versus your job. AI as the specialist you hire for the parts of the job it does better than you.
Most AI products marketed specifically to healthcare aren't ready, aren't worth the cost, or both. The tools that have actually stuck for me are general-purpose ones applied thoughtfully to health IT problems.
General-purpose LLMs are where I spend most of my AI time. A few things that have genuinely changed how I work:
Writing first drafts is the biggest one. PRDs, build specifications, change management emails, training materials. Give the tool enough context about the workflow and the audience and you get something workable in under a minute. The blank page problem goes away. The clinical review still takes just as long, but starting from something is much faster than starting from nothing.
Meeting synthesis is the other one I use constantly. Paste raw notes, ask for a summary and action items. It's more reliable than my own recall after a long governance call.
I've also started using LLMs to stress-test workflows before they go to build. "What clinical scenarios might break this?" gets you surprisingly useful answers if you give enough context about the patient population and the care environment. It's not a substitute for clinical input, but it surfaces things worth asking about.
The important caveat: these tools produce clinically dangerous suggestions confidently and without warning. If you don't have the clinical background to catch those, you're in trouble. That's not something you work around. It's something you stay aware of.
If your role involves HL7, FHIR, SQL, or any kind of scripting, Copilot removes a lot of the friction from that work. It won't replace knowing the standards. But it eliminates most of the lookup time for syntax and boilerplate, which adds up.
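As a concrete example of the kind of boilerplate this covers, here's a minimal sketch of HL7 v2 parsing in plain Python. The message content is made up, and real integration work would use a proper interface engine or HL7 library; the point is that this is exactly the syntax-lookup territory a code assistant handles well.

```python
# Minimal HL7 v2 parsing sketch. The message below is a fabricated example.
# HL7 v2 messages are pipe-delimited: one segment per line, fields split on "|".
message = (
    "MSH|^~\\&|LAB|HOSP|EHR|HOSP|202401011200||ORU^R01|123|P|2.5\r"
    "PID|1||MRN12345||Doe^Jane||19800101|F\r"
    "OBX|1|NM|GLU^Glucose||105|mg/dL|70-110|N\r"
)

# Split the message into segments, then each segment into fields.
segments = [seg.split("|") for seg in message.strip().split("\r")]
pid = next(seg for seg in segments if seg[0] == "PID")

# PID field 5 is the patient name; name components are split on "^".
family, given = pid[5].split("^")[:2]
print(given, family)  # -> Jane Doe
```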
Ambient documentation tools are outside the typical health IT PM scope, but you'll encounter them on implementation projects. I've watched one roll out across our physician group. For doctors with heavy documentation requirements, the time savings are real. So is the validation and governance burden on health IT teams. Build your review workflows before go-live, not after.
Copilot in Power BI and ChatGPT's data analysis have made ad hoc analysis accessible to people who aren't fluent in SQL. Upload a dataset, ask a question in plain English, get a chart back. For teams doing utilization reviews, order set analysis, or alert burden work, this changes how quickly you can answer operational questions.
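For a sense of what those plain-English questions translate to under the hood, here's a hedged pandas sketch of a typical alert burden question: which alerts fire most and get overridden most? The column names and data are hypothetical; real data would come from your EHR's reporting database.

```python
import pandas as pd

# Hypothetical alert-firing log. In practice this would be an extract
# from the EHR's clinical decision support reporting tables.
alerts = pd.DataFrame({
    "alert_name": ["DrugDrug", "DrugDrug", "Renal", "DrugDrug", "Renal", "Allergy"],
    "action":     ["override", "override", "accept", "override", "override", "accept"],
})

# Per-alert firing count and override rate: the classic alert-burden view.
summary = (
    alerts.assign(overridden=alerts["action"] == "override")
          .groupby("alert_name")["overridden"]
          .agg(fired="count", override_rate="mean")
          .sort_values("override_rate", ascending=False)
)
print(summary)
```

A 100% override rate on a high-volume alert is the kind of finding that turns an "ask a question in plain English" session into an agenda item for the next CDS committee meeting.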
The tasks getting automated are the ones that require processing and formatting, not judgment. Writing first drafts. Searching. Summarizing. These are real time savings, and they're compounding.
What isn't being automated is understanding the clinical implications of a build decision. Knowing that the workflow you're configuring will create a dangerous workaround at 2am when the overnight pharmacist is covering 40 patients alone. Knowing which alerts clinicians will actually read versus which ones will train them to click through everything. Knowing that the order set looks clean in the system but creates problems on the floor.
That knowledge doesn't come from data. It comes from having been in those environments.
This is starting to create a real divide in health IT career trajectories. People whose value is primarily technical, knowing how to configure a system or navigate a build tool, are increasingly competing with AI workflows that can produce similar outputs faster. People whose value is clinical and contextual, understanding the workflows behind the build, are becoming harder to replace, not easier, as AI generates more output that needs clinical validation.
A few things I'd tell someone making the transition from clinical practice right now:
Learn to prompt well. Most people use these tools like a search engine. They ask a question and take the first answer. Effective prompting means giving context, specifying your audience, asking the tool to push back on its own suggestions, and knowing when an output is wrong. It's a real skill and it's worth practicing.
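To make "giving context" concrete, here's one rough prompt skeleton, a sketch rather than a formula, where the bracketed pieces are placeholders you fill in:

```text
Role: You are drafting for a hospital pharmacist who manages product teams.
Audience: [bedside nurses / attending physicians / build analysts]
Context: [workflow, patient population, care environment, relevant constraints]
Task: [draft a PRD section / summarize these notes / list failure scenarios]
Format: [length, structure, terminology to use or avoid]
Finally: list the three weakest assumptions in your own answer.
```

The last line is the "push back on its own suggestions" step, and it's the one most people skip.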
Don't downplay the clinical background. The instinct when you're new to health IT is to lead with the tech skills you're picking up and treat the clinical experience as a bonus. It's the other way around. Clinical experience is what AI can't replicate. It's the reason you're in the room.
Get comfortable validating AI outputs. A lot of health IT work in the next few years will involve reviewing things AI generated and making judgment calls about whether they're safe to implement. Clinical training is exactly the right background for that.
Go deep on one tool before you go broad. One LLM used well will change your daily workflow more than five tools used occasionally.
AI is making health IT work faster and shifting which parts of the job require the most skill. Clinical knowledge, the kind built over years at the bedside or in the dispensary, is what holds its value in that shift. The people who can evaluate what AI gets right and wrong in clinical contexts are going to be in short supply for a while. That's a reasonable place to be building toward.
Jason Potts, PharmD
Hospital pharmacist and health IT product manager. Writing about the intersection of clinical practice and technology at Clinical to Code.