Before the Tech Stack, the Org Chart: The Design Signals Most Organisations Are Missing
Something is shifting in how the organisations paying closest attention to AI are structuring themselves, and it is worth sitting with before assuming this is someone else's problem or a problem that has already been solved.
Atlassian recently expanded its Chief People Officer role into Chief People and AI Enablement Officer. Avani Solanki Prabhakar, who holds the role, described the move publicly: “We're approaching AI as a cultural transformation first, and a technology shift second.” ServiceNow has done something similar, with Jacqui Canney holding the title of Chief People and AI Enablement Officer, a dual mandate that deliberately refuses to treat people strategy and AI enablement as separate domains. At Moderna, the chief people and digital technology functions have been merged since mid-2025 under a single executive. Tracey Franklin described this as “a deliberate move to close the gap between the people who shape culture and those who build the systems that support it.”
These are large, well-resourced organisations with operating models, talent pools, and market positions that bear little resemblance to most businesses navigating this transition. That matters. These examples are not templates. They are signals, and signals are worth reading carefully rather than either dismissing or copying wholesale.
The IBM Institute for Business Value, in collaboration with the Dubai Future Foundation and Oxford Economics, surveyed more than 600 Chief AI Officers across 22 geographies and 21 industries in early 2025. Only 26% of organisations currently have a Chief AI Officer (CAIO), up from 11% in 2023, and 66% of respondents expect most organisations to have one within two years. The research also found that centralised AI governance delivers meaningfully better returns than fragmented approaches, while acknowledging that not every organisation requires a dedicated CAIO. What mattered most was not the title itself but clear accountability and strong cross-functional coordination.
The pattern across all of these examples points toward something consistent, even if the structural responses differ. The organisations making meaningful progress have recognised that people strategy and technology strategy are not separate domains. Treating them as separate is a large part of why so many AI programs stall once they leave the strategy deck.
The question worth sitting with is not whether your organisation needs a new role. It is whether your current structure is capable of holding the human and technology questions together at the same time.
For most organisations, the honest answer is that it is not.
That is a design problem, not a technology problem. It is also one of the most significant organisational design opportunities of this moment.
This is also the context in which the Australian Government has built and published a national AI services directory, recognising that seriously embedding AI into Australian workplaces requires structured thinking, independent perspective, and genuine expertise. Having Dialectical Consulting included in that directory, as a practice specialising in AI adoption, organisational design, and workforce capability, feels like a fitting moment to explore one of the patterns I keep seeing across this work, particularly among organisations making meaningful progress with AI adoption.
Why Organisations Keep Starting in the Wrong Place
Most organisations are not starting in the wrong place because they lack intelligence or ambition. They are doing so because the surrounding conditions often privilege urgency over reflection, making the wrong response easier to execute than the right one.
The first pressure is competitive anxiety. There is a cultural, and often ego-driven, expectation to be part of the momentum regardless of whether anyone has a clear picture of what competitors or even the broader market are actually doing, or whether any of it is working. The pressure itself is not irrational. Few organisations want to be the ones left behind in a transition this significant. But in environments where visibility and signalling begin to outweigh grounded understanding, perception can quickly start driving strategy. It has the energy of a space race. Moving quickly becomes confused with moving meaningfully. Visibility gets mistaken for progress.
The second is the efficiency myth. Organisations operating inside tight financial conditions are hearing that AI will deliver significant productivity and efficiency gains, and understandably, they are drawn to that promise. The gains being cited are rarely interrogated for what they actually require to be realised, or whether the promised efficiencies still hold once the initial transformation momentum settles 18 months later.
Introducing AI into workflows does not automatically create efficiency. It surfaces complexity.
It exposes gaps in process design, capability, governance, and decision-making that were often already there but previously absorbed through human workarounds, tacit knowledge, and invisible coordination. The gain, where it is real, tends to arrive after the harder organisational work has already been done.
The third is shadow AI, and this is the one most organisations should probably be paying closer attention to.
Most leadership teams already know their people are independently using AI tools. The instinct is often to treat this primarily as a governance or compliance problem. But shadow AI is also a map: the places where people have already integrated AI into their own workflows are usually the places where the genuine use cases live.
The organisation's own people are demonstrating where AI creates value in practice, often at the exact point in the workflow where friction already existed. Meanwhile, leadership teams are frequently looking past those signals toward top-down business cases that are more abstract, more sanitised, and considerably less grounded in how work actually happens.
The fourth is board pressure. Boards want evidence of an AI strategy, and the absence of one reads as risk.
The directive arrives as a demand for speed. The organisation responds with activity. Activity is not the same as progress.
The result of all four pressures operating simultaneously is organisations that are moving, sometimes quickly, in directions that resemble AI adoption without yet doing the deeper work that would make any of it sustainable.
The Org Chart and the Work Underneath It
There is a distinction missing from most conversations about AI transformation, and it is where many organisations are currently stuck.
Redesigning an org chart is often the most visible layer of organisational transformation. Creating a new role, merging functions, standing up a dedicated team. These moves matter because structure shapes accountability and direction. But they do not fully describe how work actually flows, how decisions get made, or how capability develops in practice.
What they primarily define is formal structure, who leads what, where accountability sits, and how strategic priorities are signalled across the organisation. The deeper challenge is redesigning the systems underneath that structure so workflows, decision-making, and capability can evolve alongside it. That includes creating the conditions for human capability to shift toward the kinds of work AI cannot easily replicate, and toward the areas where human value becomes increasingly important.
Most organisations have not yet started that harder work.
That means genuinely asking which tasks and decisions are best handled by humans and which by systems, then rebuilding role design around the answer rather than simply layering AI over the top of existing work. It means identifying where AI can absorb the transactional and repetitive so that human capability can be redirected toward what requires judgement, contextual reading, relational intelligence, creative direction, and taste.
Critically, it means creating the time and conditions for people to actually develop those capabilities rather than handing them a tool and expecting adaptation to magically follow.
Microsoft's 2026 Work Trend Index, drawing on data from 20,000 workers across 10 countries, found that organisational factors, including culture, manager behaviour, and how talent is supported and developed, explain more than twice as much of AI's impact as individual effort alone.
This is a significant finding because most AI adoption strategies are still heavily built around training individuals.
The data suggests the primary lever is not the individual. It is the system around them.
The most motivated and capable person inside a system that is not designed to support AI adoption will barely move the needle. The environment outweighs the individual by more than two to one.
That is precisely why organisational redesign can begin with the org chart but cannot end there. The structure matters, and so does the redesign of the workflows, decision-making, enabling technologies, and human capability sitting underneath it.
What Humans Are Actually For
Once the organisational redesign question is taken seriously, a more confronting question follows.
What are humans actually for in a working world where AI can increasingly perform cognitive and procedural tasks that were previously considered distinctly human work?
Most commentary answers this defensively by listing the things AI supposedly cannot do yet. I think that framing is both limiting and temporary because the list keeps shrinking.
A more useful question is where distinctly human capability creates value that AI cannot meaningfully replicate, and then what it would take to deliberately develop those capabilities rather than leave them to chance.
The capabilities that matter most in this context are not adjacent to what AI does well. They sit furthest from it.
Critical thinking. Maintaining the ability to interrogate assumptions, evaluate competing signals, and think clearly in environments increasingly shaped by speed, automation, synthetic certainty, and information overload.
Contextual judgement. The ability to read a situation in its full complexity, account for what is unsaid, and act with nuance.
Cross-domain synthesis. Drawing connections across fields in ways that are not obvious from within any single one.
Creative direction and taste. Setting vision, shaping output, and making strategic and aesthetic decisions that require discernment, coherence, and a genuine point of view.
Relational intelligence. Navigating the human dynamics no workflow can fully account for.
Epistemic confidence. Knowing what you know, what you do not, and how to act responsibly in the space between them.
These are not soft skills in the dismissive sense of the phrase.
They are the capabilities that make AI adoption actually work because they are what humans bring to the collaboration that machines cannot.
Building them requires deliberate organisational design and commitment.
It requires time to be made rather than simply encouraged. It requires structures that reward experimentation, reflection, failure, and learning alongside delivery, not instead of it.
Most organisations have not redesigned their ways of working to support any of this. They have introduced AI into structures that were never designed for it and are now finding the results underwhelming, which is entirely predictable.
The Time Pressure Problem
There is a legitimate counterargument to all of this, and it deserves a direct response rather than a polite acknowledgement before moving on.
The organisations sitting across the table in these conversations are not indifferent to the human side of this work. Many understand it clearly. The pressure they are operating under is real.
The question they are actually asking is not whether to do this properly. It is whether they can afford the time required to do it properly while also keeping pace with what is happening around them.
It is a fair question.
I think the risk calculation is being done incorrectly in most cases.
The cost of moving quickly while doing the human work poorly is not simply delayed efficiency gains. It is accumulated organisational dysfunction that compounds over time.
Processes built on AI that nobody fully understands or trusts. Capability gaps that widen as the technology advances. Workers who have been told AI is coming without being given meaningful agency over what that means for their role, responding with disengagement that is later misdiagnosed as a culture problem.
The organisations that skip the human work do not save time.
They borrow it, and they pay it back with interest.
Trust at Every Layer
None of this is separable from trust.
Trust is not a single relationship. It operates simultaneously across every layer of an organisation and beyond it, with failures at one level eventually propagating through the others in ways that are often slow to surface and expensive to repair.
There is the trust between boards and executive leadership around whether AI strategy is genuinely coherent or simply competitive anxiety dressed up as vision.
There is the trust between leadership and middle management around whether the direction being set is realistic given what people on the ground are actually experiencing.
There is the trust between managers and frontline workers around whether AI adoption is something being done with them or to them.
There is also the trust between the organisation and the communities it serves around whether AI is being used responsibly, transparently, and in service of something beyond efficiency metrics.
An organisation that reduces its workforce on the basis of efficiency gains that never fully materialise does not just create an internal trust problem. It becomes a case study in what poor AI adoption looks like, and that perception spreads outward.
Shadow AI use surfacing in ways the organisation cannot account for creates a different kind of trust failure, one that is harder to trace and harder to repair.
Both are avoidable. Neither is theoretical.
Responsible AI is not a governance checkbox applied at the end of a project. It is the operating condition under which all of this work has to happen if any of it is going to hold.
It means being honest about what AI can and cannot do. It means involving people in decisions that affect their work. It means building governance that is proportionate to actual risk, not simply reputational anxiety.
It also means understanding that trust, once lost at any of these layers, is slow and expensive to rebuild.
What the Signals Are Actually Telling Us
The structural changes happening at the organisations paying closest attention to this are not the story itself. They are signals pointing toward something more fundamental.
The organisations getting this right are not simply the ones that created a new role or merged two functions. They are the ones that started with a harder question.
Is our current structure capable of holding the human and technology questions together at the same time?
If not, what would it take to redesign it so that it can?
That question leads somewhere very different from where most AI adoption programs currently begin.
It leads to org design before tech stack.
To workflow redesign before tool deployment.
To genuine capability development before efficiency extraction.
Human capability does not need to evolve at the pace of AI. That would be an unreasonable expectation.
But it cannot be left to evolve at the pace of dial-up internet either.
The organisations that treat the human side of this transition as seriously as the technology side will be the ones that come through it with their people, their culture, and their trust relationships intact.
The ones that do not will spend years unwinding decisions that looked fast at the time.
The gap between those two groups is widening in ways that will become increasingly difficult to close later.
The org chart question comes before the technology question.
Most organisations still have it the wrong way around.
If you're wondering how to redesign your organisation and adopt AI in a meaningful way, reach out via info@dialecticalconsulting.com.au or contact me via LinkedIn.