Medtech Doesn’t Have a UX Problem. It Has a Measurement Problem Nobody Wants to Solve
You have alerts for everything. Uptime, latency, error rates. So why is nobody tracking whether a clinician can complete a workflow without opening a compensatory spreadsheet in a second tab?
Medtech doesn’t have a UX problem. It has a measurement problem that everyone quietly agreed to call a UX problem — because a UX problem is someone else’s job to fix.
I’ve spent eight years embedded in clinical products. The large ones, the specialised ones, the ones where the domain is complex enough that most designers wouldn’t last a month. And the pattern is always the same. Not bad intentions. Not incompetent teams. A system perfectly optimised to produce bad interfaces and then perfectly insulated from ever having to confront that fact.
Here’s exactly how it works.
You measure everything except the thing that matters
SaaS companies instrument obsessively. Time on task. Drop-off rates. Error frequency. Rage clicks — yes, that’s a real metric, and yes, it would light up like a Christmas tree in most clinical tools if anyone bothered to track it.
Medtech instruments for compliance. Uptime: tracked. Audit trails: immaculate. Whether a senior clinician can complete a critical workflow in under three minutes without a workaround: not tracked, not budgeted, not on any dashboard anywhere.
This isn’t an accident. It’s a structural choice that happens so early and so quietly that by the time anyone notices the consequences, it’s calcified into just how things are done.
The result: an entire class of problems that are completely real, completely measurable in principle, and completely invisible in practice. Workaround frequency. Error recovery time. Task abandonment. Time lost to shadow systems. These numbers exist right now, generated fresh every shift, in every clinical product you’ve ever worked on. Nobody is collecting them.
You can’t manage what you don’t measure. Medtech has decided, structurally, not to measure this. Then it wonders why the problems compound for decades.
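None of this requires exotic tooling. A minimal sketch of what workflow instrumentation could look like — the event names, the `WorkflowTracker` class, and the example workflow are all hypothetical, invented for illustration rather than taken from any real product:

```python
import time

# Hypothetical instrumentation sketch. Event names and the tracker
# API are illustrative assumptions, not any vendor's telemetry.
class WorkflowTracker:
    def __init__(self, workflow: str):
        self.workflow = workflow
        self.events = []  # list of (monotonic timestamp, event name)

    def record(self, event: str) -> None:
        self.events.append((time.monotonic(), event))

    def summary(self) -> dict:
        names = [name for _, name in self.events]
        started = names.count("start")
        completed = names.count("complete")
        return {
            "workflow": self.workflow,
            "completion_rate": completed / started if started else 0.0,
            # Every exit to a parallel tool counts as one workaround.
            "workarounds": names.count("left_for_external_tool"),
            "abandonments": started - completed,
        }

tracker = WorkflowTracker("medication-reconciliation")
tracker.record("start")
tracker.record("left_for_external_tool")  # user opened a spreadsheet
tracker.record("complete")
tracker.record("start")                   # second attempt, never finished
print(tracker.summary())
```

Twenty lines of counting. The hard part was never the code; it was deciding that "left for external tool" is an event worth recording at all.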
UX debt: tech debt’s more expensive, less glamorous cousin
Engineers understand technical debt. It has a name, a metaphor, a place in sprint planning. You borrow against future velocity. The interest compounds. Eventually you pay it down or rewrite from scratch — both painful, both expensive, both traceable to specific decisions made under specific pressures.
UX debt works identically. Every interface shortcut, every “we’ll fix it in v2,” every feature shipped without a usability check — that’s a withdrawal. The interest is paid in user time, error rate, and workaround complexity. It compounds silently across releases until the product is so entangled in its own debt that a proper fix requires more political will than any team can summon.
The critical difference: tech debt breaks loudly. The build slows. Something falls over. It can’t be ignored.
UX debt breaks quietly, in human behaviour. Users adapt. They build Excel sheets to cover missing features. They keep two tabs open. They train each other in the workarounds rather than the actual product. The debt gets absorbed by the people using the system — invisibly, at a cost that never appears on any balance sheet.
I once sat with a team of twenty clinical staff spending a collective twenty hours a week compensating for a single product’s failures. At average European clinical salaries — around €22/hr — that’s roughly €22,000 a year. One team. One product. One line of UX debt that nobody had ever priced, tracked, or put on a slide.
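The arithmetic behind that figure is nothing exotic — the hours and rate are from the example above; the 50-week year is my assumption for annualising:

```python
# Inputs from the example above; the 50-week working year is an
# assumed annualisation factor, not a measured one.
hours_per_week = 20   # collective compensatory work, whole team
hourly_rate = 22      # average European clinical salary, EUR
working_weeks = 50

annual_cost = hours_per_week * hourly_rate * working_weeks
print(f"€{annual_cost:,} per year")  # → €22,000 per year
```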
That’s not a UX problem. That’s a financial liability mislabelled as a UX problem. Which is precisely why it never gets fixed — because once you call it UX, you’ve made it someone else’s problem, and that someone else has no budget and no authority.
The system is working as designed. That’s the actual problem.
This took me a few years to fully accept: nobody is being negligent. The system that produces bad medtech UX is functioning correctly. It’s just optimised for the wrong outcomes, and has been for long enough that nobody remembers it being any other way.
Procurement rewards features and compliance. The RFP has a checklist. Audit trail? EHR integration? FDA-cleared? Check, check, check. Usable by a tired nurse at the end of a twelve-hour shift who hasn’t slept properly in three days? Not on the checklist. Has never been on the checklist. Would require someone to write it on the checklist, which would require someone to admit it wasn’t there.
Certification rewards safety, not experience. This is correct and important — nobody should compromise medical device safety for a prettier interface. But the certification process and the design quality are completely separate things that the industry has fused into one. “It’s certified” has become a universal shield against any design critique, however unrelated to safety. Question the navigation structure of a clinical dashboard and someone will mention the FDA within four minutes. I’ve timed it.
Sales cycles reward demos. The demo is always good. It shows the three workflows that work, in the correct order, with clean data, on hardware that isn’t seven years old. The demo does not show the intake form that wipes after twelve minutes of inactivity — which is a problem when you’re, say, talking to the patient. It does not show the NullReferenceException that’s been surfacing mid-consultation since the last release. It does not show tab three, where the metric everyone checks every morning has been buried since the 2016 build because nobody ever questioned it and now it’s too late to move without breaking something.
Every structural incentive points away from usability. Nobody designed this deliberately. It emerged from a hundred reasonable decisions made by people focused on different problems. The output is a product landscape that is genuinely hard to use, defended by process, and structurally insulated from feedback that might require it to change.
What the numbers would actually say
Imagine you instrumented a mid-size clinical product the way you’d instrument a SaaS product. Same rigour. Same honesty.
Task completion rate — the percentage of users completing a workflow without an error, timeout, or abandonment. SaaS baseline: 90%+. Based on what I’ve observed in medtech, I’d expect critical clinical workflows to come in under 60%. Some well under. Nobody knows because nobody is measuring.
Workaround frequency — how often users exit the intended workflow to accomplish the task in a parallel system. In products with mature shadow ecosystems — the Excel sheets, the second tabs, the printed checklists — I’d expect this in the dozens per user per day. Per day.
Error recovery time — how long it takes to recover from an interface error and complete the task. For products surfacing raw stack traces to clinical staff mid-consultation, this number is not zero and is not small.
Shadow system dependency — how many tasks require a tool the vendor didn’t build, doesn’t know about, and has never seen. The WhatsApp group where someone screenshots data because the export function broke eighteen months ago. The Google Sheet that’s been in the onboarding doc since 2019.
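None of these metrics is hard to compute once an event log exists. As one illustration, error recovery time falls straight out of a timestamped log — the log format and timestamps below are invented for the sketch, not from any vendor’s telemetry:

```python
from datetime import datetime, timedelta

# Hypothetical event log: (timestamp, event) pairs. The format is
# made up for illustration.
log = [
    (datetime(2024, 3, 1, 9, 0, 0), "task_start"),
    (datetime(2024, 3, 1, 9, 1, 30), "error_shown"),    # raw stack trace surfaced
    (datetime(2024, 3, 1, 9, 6, 45), "task_complete"),  # recovered, eventually
]

def error_recovery_time(log) -> timedelta:
    """Time from the first surfaced error to eventual task completion."""
    first_error = next(t for t, e in log if e == "error_shown")
    completion = next(t for t, e in log if e == "task_complete")
    return completion - first_error

print(error_recovery_time(log))  # over five minutes lost, mid-consultation
```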
These numbers would be, to use a precise technical term, catastrophic. Embarrassingly, indefensibly, someone-is-getting-a-very-uncomfortable-call catastrophic. And here is the critical point: they would also be fixable. UX debt can be paid down. The measurement gap can be closed. The incentive structures can be challenged — slowly, with data, by people willing to name the problem in rooms that have decided not to hear it.
That willingness is the actual bottleneck. Not the technology. Not the regulation. The decision to measure honestly and act on what you find.
The uncomfortable conclusion
The reason medtech UX doesn’t get fixed isn’t resources. It isn’t regulation. It isn’t the complexity of the domain, though everyone will tell you it is.
It’s that the problem has been successfully labelled as a design problem — which makes it aesthetic, subjective, low-priority, and someone else’s budget. The moment you relabel it as a measurement problem, a financial problem, a systemic incentive problem — it becomes everyone’s problem. Suddenly there are numbers. Suddenly there is accountability. Suddenly the comfortable silence gets a lot less comfortable.
Pick one critical workflow in your product. Put five real users in front of it. Time them. Count the errors. Ask what they do when it doesn’t work. Cost it out in clinical hours.
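Costing it out needs nothing more than a spreadsheet — or a few lines of code. Every number below is a placeholder for whatever your own timing session produces; the 250-day working year is an assumption:

```python
# Placeholder measurements from a hypothetical five-user session.
observed_minutes = [4.5, 6.0, 3.5, 8.0, 5.0]  # time per task, per user
intended_minutes = 2.0                         # what the workflow should take
executions_per_user_per_day = 12
users = 20
hourly_rate_eur = 22

# Average excess time per execution, scaled to the whole team.
excess_min = sum(t - intended_minutes for t in observed_minutes) / len(observed_minutes)
daily_excess_hours = excess_min * executions_per_user_per_day * users / 60
annual_cost = daily_excess_hours * hourly_rate_eur * 250  # ~250 working days

print(f"{daily_excess_hours:.1f} clinical hours lost per day")
print(f"≈ €{annual_cost:,.0f} per year")
```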
Then bring that number — not the UX complaint, the number — into the room where roadmap decisions get made.
See what happens.