The Economy Never Needed Humans. It Just Had No Choice.

The AI displacement conversation is organised around a shrinking, defensive question. Here's the one we should be asking instead.

The dominant narrative around AI and employment has a logic to it, and that logic is making us ask the wrong question.

The question, as it's usually framed, is this: what can humans still do that AI can't? It's a question about capability gaps, about finding the safe harbour before the tide comes in. Every answer to it is temporary by design. AI gets better, the gap closes, the harbour gets smaller. The question has a depressing trajectory baked into it, and the best it can offer is a managed decline.

I want to propose a different question. But first, it's worth naming the assumption buried inside the one we've been asking.

The hidden assumption

The displacement argument, in almost all of its forms, rests on a premise that nobody is interrogating: that humans have economic value because they perform tasks that machines cannot yet perform. Under this logic, human employment is essentially a placeholder, a stand-in until the technology catches up. Once AI can do it cheaper and better, the human doing it becomes redundant. The question is just how much redundancy, and how fast.

Surfaced plainly, that's a fairly thin premise. It treats human value as entirely instrumental and entirely contingent on a capability comparison that humans are structurally going to lose.

Humans don't just have instrumental value. They have relational value, contextual value, presence value: kinds of value that aren't downstream of capability at all. And those don't become less real because AI gets better. If anything, they become more real.

So here is the question I think we should be asking instead: now that AI is starting to handle the automated layer, what does the human layer actually look like when we design for it deliberately?

The model underneath

There's a useful analogy in the AI infrastructure itself. Underlying most of the AI applications we interact with are foundation models, the same core capability powering multiple products across wildly different contexts. The model doesn't change. The application layer does.

Most of our workforce has been organised the other way around. We've built roles and tasks and job descriptions, the application layer, without ever really interrogating the underlying human capability that sits beneath them. When AI strips the automatable tasks out of a role, what's left isn't a broken role. It might be the most valuable part, finally visible because it's no longer buried under the rest.

The challenge, then, isn't to find new tasks for people to do. It's to redesign how we identify, organise, and deploy human capacity. That's a fundamentally different architecture problem. And it's not a people and culture problem. It requires the kind of structural thinking we've applied to almost everything except this.

The end user doesn't disappear

Here's the other thing the displacement conversation consistently overlooks. Every product, every service, every system, no matter how automated it becomes, ends in a human experience. Someone lives in the building, receives the care, eats the food, uses the app. That human at the end of the chain doesn't go anywhere.

The more automated the production layer becomes, the more the experience layer, the human reception of what's been made, becomes the whole point. Which means that understanding humans, deeply and contextually, not as segments or personas but as actual people living actual lives, becomes a core economic competency. It's not a soft skill at the edge of the business. It's the business.

We're already seeing early market signals of this. The premium on physical presence, on live experience, on the handmade and the local, isn't nostalgia. It's a rational response to a world of infinite, frictionless, automated supply. When everything is available everywhere instantly, what becomes scarce is human presence and physical reality. Scarcity creates value. That's not a complicated economic argument.

The design challenge nobody has picked up

If you follow this argument far enough, it leads somewhere that most of the AI conversation isn't going: the realisation that we've never had to consciously design for human value before, because scarcity made it automatic. When humans were the only option, their value was self-evident. AI removes the scarcity argument, and suddenly we have to get deliberate about something we've always taken for granted.

That is not a comfortable idea. It requires asking, seriously and at scale, what humans are actually for in the economic life of organisations and communities. What capabilities do people carry that we've never needed to isolate because they were always bundled inside roles built for a different purpose? What products and markets exist on the other side of that question?

The unemployment outcome isn't inevitable. But neither is the alternative. Someone has to choose to build differently. And the organisations that ask this question now, before the wave is fully upon them, are the ones that will have something to offer when it arrives.

The question isn't what humans can still do that AI can't. The question is what we've never actually built that only humans can provide.

That's a generative question. It has an expanding trajectory. And it's the one worth spending time on.

If you’re thinking through how AI is reshaping work in your organisation, this is exactly the kind of work I partner on. You can reach out via info@dialecticalconsulting.com.au or contact me via LinkedIn.
