Interoperability Is a Design Problem
On the illusion of integration and the need to design for continuity, not just compliance
Interoperability is one of those words that sound reassuring without revealing very much.
It appears in strategy documents, funding announcements, digital roadmaps, and reform agendas. It signals progress, coordination, and modernisation. And yet, in practice, it often functions as a proxy term, standing in for a deeper set of unresolved design problems across health systems.
Having written about what it feels like when fluency becomes infrastructure, I want to name the system condition that makes that shift necessary in the first place.
That condition is poor interoperability.
At its simplest, interoperability is about systems being able to exchange and meaningfully use information. In healthcare, this usually means clinical records, pathology, imaging, medication histories, discharge summaries, referrals, and care plans moving across settings in ways that support continuity of care.
But framing interoperability as a technical integration problem is already too narrow.
The real issue is not that systems can’t talk to each other. It’s that they are not designed, governed, or incentivised to do so in ways that align with how care is actually delivered and experienced.
When interoperability fails, the gap is rarely empty. It gets filled by people.
Patients repeat their histories. Families carry documents. Clinicians reconstruct context under pressure. GPs piece together fragments after the fact. Care coordinators and administrators chase information that should already be available. Fluency, memory, and persistence become the connective tissue holding the system together.
This is why interoperability cannot be separated from experience.
From a design perspective, a system that relies on users to integrate information across fragmented platforms is not interoperable in any meaningful sense. It is merely operational. It functions because someone else is doing the work the system has deferred.
This deferral is often invisible at the organisational level. Care still happens. Outcomes are often acceptable. Risk is absorbed quietly. But the cost is cumulative. Cognitive load increases. Decision-making becomes more conservative. Trust erodes. Time is lost to coordination rather than care.
Importantly, this is not a failure of individual organisations acting in isolation.
Interoperability sits at the intersection of funding models, governance arrangements, vendor incentives, data standards, privacy interpretations, and legacy infrastructure. Many providers are operating within constraints they did not create and cannot easily change. Many digital tools are optimised for local efficiency rather than system coherence. Many policy settings prioritise compliance over continuity.
The result is a system that is locally optimised and globally fragmented.
In this context, health literacy is often held up as the solution. Patients are encouraged to be informed, proactive, and engaged. Clinicians are expected to navigate multiple systems seamlessly. Organisations invest in training and workarounds to compensate for missing integration.
But literacy is not interoperability.
When systems require fluency to function, they are shifting design responsibility onto users. This is not empowerment. It is risk transfer.
From a system design perspective, this should be a red flag. Interoperability is not about making people better at navigating complexity. It is about removing unnecessary complexity from the navigation task altogether.
There is also a temporal dimension that often gets overlooked.
Interoperability failures don’t always show up immediately. They appear over time, as staff turnover increases, as institutional memory thins, as reliance on informal knowledge deepens. What once “worked” because the right people knew the right things becomes brittle as those people leave or burn out.
This is where interoperability connects directly to organisational memory.
Systems that cannot share information reliably also struggle to learn. They repeat work, duplicate decisions, and rediscover problems that have already been encountered elsewhere. Over time, the system becomes less adaptive, even as it becomes more digitised.
The introduction of AI and advanced analytics does not solve this problem. In some cases, it intensifies it.
AI systems are only as good as the information environment they operate within. When data is fragmented, incomplete, or poorly contextualised, AI can produce outputs that appear coherent while masking important gaps. Efficiency increases, but understanding does not necessarily follow.
Without interoperability, AI risks amplifying the illusion of system intelligence rather than strengthening it.
This is why interoperability is not a backend issue to be delegated to IT teams or vendors. It is a core design and governance challenge. It requires decisions about what continuity actually means, who is responsible for it, and how value is measured across organisational boundaries.
The question is not whether interoperability is desirable. On that much, everyone already agrees.
The harder question is whether we are willing to design for it in ways that reduce burden rather than redistribute it, that support judgment rather than override it, and that acknowledge care as a system experience, not a series of disconnected events.
Until then, fluency will continue to operate as infrastructure. And the system will continue to work, just not for everyone, and not without cost.
If you’re grappling with interoperability, fragmented data, or the downstream impact of system design decisions, I work with organisations to rethink how information, incentives, and experience align across health systems. Reach out via info@dialecticalconsulting.com.au or contact me via LinkedIn.