The Cyborg Era: What AI means for jobs
Today we have a special guest essay from Séb Krier, AGI Policy Dev Lead at Google DeepMind. He writes about the impact of advanced AI on society and institutions, and I highly recommend following him on X: @sebkrier.
I am often asked a variant of “if AGI can perform any task a human can, why would human labor retain any value? We retired the horse; we discarded the telegraph. Why should we be any different?” This post explains why I think the reality is more complicated, but also why my views aren’t synonymous with ‘there will automatically and always be plenty of work for humans’ either.1
Why it’s still complicated and comparative advantage can apply for a long time
The telegraph and horse examples are clear cases of full substitution. They also can’t necessarily adapt to do something else. The horse analogy fails because it treats labor purely as power output. It ignores the unique human capacity for complex, super-additive organization. You cannot stack a thousand horses to build a super-horse, but you can stack a thousand humans to build a corporation, a government, or a philosophy. Human utility is super-additive in a way that equine muscle is not.
As I explained elsewhere, “[AI being able to] do anything humans can do cheaply” does not imply perfect substitution. It could, but it doesn’t necessarily - you need to do more work to demonstrate that. It’s important to distinguish between substitutability (the technical feasibility of replacing an input) and substitution (the actual economic outcome of input choices). Just because an input can be replaced doesn’t mean the optimal menu of production will result in that choice.
Many arguments still misunderstand what comparative advantage actually means and (a) conflate that with capabilities or cost; and (b) assume AGIs simply replicate human relative productivity (maintaining the same ratio of skill between Task A and Task B, just at a higher level). Of course it’s not unreasonable to think that AGI will lead to quasi-perfect substitution and absolute advantage across all tasks, but you need to do the work of outlining that rationale clearly. Assuming it as self-evident is lazy, and just pointing to a definition of “does all tasks a human can do cheaply” is necessary but insufficient.
For full substitution to occur, you need the aggregate output of [humans + AGI] to be economically inferior to [AGI] alone in production. My pushback is on two fronts: first, I think complementarity may persist longer than people expect - you don’t immediately get full substitution as soon as something AGI-like is here; and second, economic value is not solely defined by efficiency - often human involvement is intrinsic to a service or product’s value.
We know that at least so far, AI progress is rapid but not a sudden discontinuous threshold where you get a single agent that does everything a human does perfectly; it’s a jagged, continuous, arduous process that gradually reaches various capabilities at different speeds and performance levels. And we already have experience with integrating ‘alternative general intelligences’ via international trade: other humans. Whether through immigration or globalization, the integration of new pools of intelligence is always jagged and uneven rather than instantaneous.
I think we get there eventually, but (a) it takes longer than bulls typically expect - I think 5-10 years personally; (b) people generally focus on digital tasks alone - they’re extremely important of course, but an argument about substitution/complementarity should also account for robotics and physical bottlenecks; (c) it requires more than just capable models - products attuned to local needs, environments, and legal contexts; (d) it also requires organising intelligence to derive value from it - see for example Mokyr’s work on social/industrial intelligence. This means that you don’t just suddenly get a hyper-versatile ‘drop in worker’ that does everything and transforms the economy overnight (though we shouldn’t completely dismiss this either).
This already has plenty of implications for comparative advantage, as it leaves all sorts of crevasses and weaknesses that are patched through human involvement. As proto-AGI becomes cheaper and more capable, humans can still “step up” into higher levels of abstraction: just as software engineers are evolving into product managers, overseeing the architecture rather than laying the bricks, future workers will likely function as orchestrators of intelligence. As long as the combination of Human + AGI yields even a marginal gain over AGI alone, the human retains a comparative advantage.
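The logic of this paragraph can be sketched numerically. In the toy model below (all productivity figures are illustrative assumptions, not from the essay), the AI holds absolute advantage at both of two complementary tasks, yet total output rises most when the human takes the task where their relative disadvantage is smallest:

```python
# Toy numerical sketch of comparative advantage under a compute constraint.
# All productivity numbers are illustrative assumptions, not from the essay.
# The AI is absolutely better at BOTH tasks, yet total output still rises
# most when the human takes the task where their relative disadvantage
# is smallest.

AI_HOURS, HUMAN_HOURS = 10, 10
AI_RATE = {"code": 100, "support": 40}    # AI is 50x the human at code...
HUMAN_RATE = {"code": 2, "support": 1.5}  # ...but only ~27x at support

def total_output(human_task):
    """Final output needs code AND support (Leontief: min of the two streams).
    We grid-search the AI's split of hours between the two tasks."""
    h_code = HUMAN_RATE["code"] * HUMAN_HOURS if human_task == "code" else 0
    h_support = HUMAN_RATE["support"] * HUMAN_HOURS if human_task == "support" else 0
    best = 0.0
    for step in range(10001):
        a = AI_HOURS * step / 10000       # AI hours allocated to code
        code = AI_RATE["code"] * a + h_code
        support = AI_RATE["support"] * (AI_HOURS - a) + h_support
        best = max(best, min(code, support))
    return best

print(total_output(None))       # AI alone: ~285.7
print(total_output("code"))     # human on their worse relative task: ~291.4
print(total_output("support"))  # human on their comparative advantage: ~296.4
```

With these made-up numbers, adding the human helps either way, but assigning them to support - the task where the AI's lead is smallest - frees the most AI hours for code, which is comparative advantage at work even without any absolute advantage for the human.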
We’ve had millennia of things (and more humans!) replacing what humans do, and yet humans do more than ever. If a million little tasks have been replaced with essentially zero impact on total employment, why assume that wouldn’t hold for a technology that replaces the next million tasks? I think this argument is stronger than most futurists give it credit for, but as we’ll see below there are some important caveats too.
Why demand for human touchy-feely things is not just an inconvenient detail
Now, let’s assume we are in a world of fairly advanced AGI. Gradually, as others have observed, the human labour share of output decreases: machines do 99.9% of the work, humans 0.1%. Note that to even get to such a state of affairs, you need to account for a lot of time - this doesn’t happen instantaneously as some assume. But once you get there, this isn’t in and of itself cause for concern; the 0.1% can still be highly productive and well paid. Moreover, with AGI, if the pie grows 1000x, I would expect demand for human-centric goods and services (status, authenticity, positional goods etc) to rise despite labor’s share dropping to 0.1%.
Since humans can re-specialize, and since humans are also simultaneously the ‘demand signal’ that powers the entire AGI economy (the machines aren’t just producing paperclips on loop that no one asked for), the argument is that they can always find the “next best” use of their time, whereas a telegraph is stuck being a telegraph. The remaining human tasks don’t have to be about capabilities per se, but other things that we deem valuable if performed partly or fully by humans.
In this scenario, humans can still be highly productive - even if, in principle, these are tasks an AI could do. You cannot “mothball” humans if the consumer preference is specifically for the “annoyance” (inefficiency/humanity/care) of the human. If you model humans as utilitarian agents who only care about efficiency, then yes they become a hindrance; if you model them as demanding other things too, e.g. the process, or taste, or conferring status, then the “transaction cost” (the fact that a human took 40 hours to make it instead of a robot taking 4 seconds) is the value proposition. As my colleague Nick Swanson reminded me, “I would stop going to the Blythe Hill Tavern if there were robots behind the bar, and they’re highly efficient with the Guinness.”
Now I do think there’s a potential complication here: does that niche’s productivity continue rising, or do wages crash because the clearing price becomes the price of the AI doing the task? I can see a case for the latter, but in principle I can also imagine all sorts of services and products that don’t get affected by cheap replicability - just because I can get some artist to do a perfect Mona Lisa reproduction doesn’t mean the original sells for any less. People pay a lot to go see concerts and Olympic races even if in principle a model can generate the same song and a robot can run faster. The ‘capability’ of perfectly replicating it isn’t sufficient. And certainly in a world where everyone is richer, you can imagine all sorts of positional goods growing in demand. Even today, consider the many subcultures and worlds that have their own status hierarchies: my engineer friends might not care at all for owning a first press copy of an Aphex Twin record, but my collector friends definitely will. In a post-scarcity world, provenance remains scarce.
Additionally, arguments for extreme technological unemployment often make a category error: they assume labor only has value if it is plugged into the AGI supply chain. That’s incorrect; there is no constraint preventing human labor from producing output in a separate, parallel economy. The only hard limit is if natural resources (energy, land, matter) become the bottleneck and AGI consumption bids their prices up so high that human activity becomes physically unaffordable; but barring that extreme resource-capture scenario, humans can continue to trade with each other. If you think AGIs will consume all resources unconstrained, then you could in principle simply tax their resource usage directly.
Basically as long as human involvement has some quality AI lacks (e.g., legal liability, emotional connection, physical adaptability in unstructured environments, fluffy feelings, beauty-because-human etc), the “perfect substitution” doesn’t happen, and the horse or telegraph analogy fails. We should not treat this outcome as merely delaying the inevitable; the ‘full substitution’ scenario depends on a specific cascade of extreme assumptions holding true simultaneously, which proponents rarely flesh out or justify beyond repeating that AGI can be cheap and very capable. This doesn’t imply that humans are permanently ontologically walled off because they’re special, just that in order to get full substitution, you really need a lot more to be true at the same time.
And you don’t need to be a weird carbon chauvinist “fanatic” to keep wanting some human involvement. If AI makes digital goods (movies, code, logic, transport) effectively free, you naturally shift your consumption to the things AI can’t make free: human connection, status, and so on. Though it’s worth bearing in mind that consumers can be quite willing to substitute ‘human’ qualities for convenience and lower cost (see e.g. self-checkout, ATMs etc). On the one hand, I think in the long run it’s plausible that this is insufficient - in the sense that I definitely don’t expect most humans to be working as artists and storytellers. On the other hand, I think it’s extremely hard to predict what kinds of things humans in a post-AGI world might want and value - after all, six out of 10 jobs people are doing today didn’t exist in 1940. This is the part people emotionally resist, because you can’t just think really hard and imagine these future needs/jobs by extrapolating lines.
The challenge is basically one of scale: even if demand for ‘human’ goods rises in a richer world, can that market expand enough to employ the bulk of the workforce, or will it remain a luxury niche capable of supporting only a fraction of the population? This is hard to answer confidently. My hunch is that at a certain level of wealth, many people will simply opt out of labour markets and pursue other more meaningful interests, some of which will remain monetizable. Indeed discussions about labour often ignore the fact that consumer welfare and purchasing power grow very quickly with AGI!
Why this is not an eternal checkmate
Now finally, let me concede something important: comparative advantage and preference-for-humans have limits, and are not automatic ‘get out of jail’ cards. In international trade, even without legal barriers like tariffs, high shipping costs can create a “corner solution” where the equilibrium outcome is zero trade, because the marginal benefit of a country’s comparative advantage is swallowed by the fixed and variable costs of moving goods in both directions.
If a country is very poor, its wages are low, which gives it a massive comparative advantage in labor-intensive tasks: this is why you import t-shirts from Indonesia. However, in trade, reliability and speed are often more important than the nominal wage. A tiny island in the middle of the ocean with 10 people on it can have the most free market economy ever; it still won’t make sense for Merck to trade with them given the high fixed costs which make the whole thing not worth it.
Consider the discussions above about the preference for human involvement in certain tasks. Even if humans remain superior at ‘human’ tasks, capital (energy, raw matter) remains finite. If allocating capital to messy human processes is too costly compared to efficient AGI processes, and if consumers face time scarcity (you can only watch so many hours of content in a day), the market might simply swap the type of good entirely. We might get fewer human-made plays not because robots are better actors, but because the superior human product gets displaced by a different, capital-efficient category of entertainment entirely. Ideally, AGI innovates to reduce the ‘messiness costs’ of human integration to offset this. In fact, this is my default assumption in the short to medium run, given the often under-discussed ‘demand-side’ considerations. But if corner solutions dominate, lack of perfect substitutability alone isn’t enough to save the job.
Similarly, as systems improve towards ASI, ‘interface friction’ may act as a prohibitive cost (not just financial, but latency and context) that forces a corner solution. For an ASI to utilize a human, it must ‘export’ data (explain the context), wait for biological processing, and ‘import’ the result. This ‘round-trip’ tax means that even if a human has a comparative advantage in a specific task, the time-cost of leaving the digital substrate is too high.
Consequently, an ASI economy may choose a ‘local’ (ASI-native) solution that is technically less efficient in terms of compute, but superior in terms of total integrated speed and reliability. The humans effectively end up in the same situation as the island of 10 people above: their ‘low opportunity cost’ (the fact that they might work for ‘cheap’ while ASIs are allocated to more valuable tasks) is irrelevant, because the fixed cost of connecting them to the high-speed, AI-driven value chain is too high - they are logistically stranded outside it. This is what Norvid wisely points to in this tweet.
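One way to see this corner-solution logic concretely: model the round-trip tax as a fixed per-task cost of involving a human, with diminishing gross gains from human involvement across tasks. All numbers below are hypothetical, purely to illustrate the shape of the argument:

```python
# Illustrative sketch of how a fixed 'round-trip tax' on human involvement
# (context export, latency, verification) shrinks human employment and
# eventually forces a corner solution of zero trade.
# All numbers are made up for illustration.

def tasks_with_human(fixed_cost, gross_gains):
    """A firm keeps the human only on tasks where the gross gain from
    their comparative advantage exceeds the fixed integration cost."""
    return sum(gain > fixed_cost for gain in gross_gains)

# Diminishing gross gains from human involvement across 100 candidate tasks.
gains = [10 / (k + 1) for k in range(100)]

print(tasks_with_human(0.5, gains))   # modest friction: human kept on 19 tasks
print(tasks_with_human(2.0, gains))   # faster AI loop: only 4 tasks survive
print(tasks_with_human(12.0, gains))  # friction exceeds even the best gain: 0
```

The human's gross contribution never turns negative in this sketch; what kills the trade is purely that the fixed cost of plugging them into the loop outgrows the gains, which is the corner solution rather than a loss of comparative advantage.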
Note that even then, the humans remain the beneficiaries of this now ‘closed loop’ ASI economy: again, the ASI economy is not producing paperclips for its own enjoyment. But when humans ‘demand’ a new underwater theme park, the ASIs would prefer that the humans don’t get involved in the production process. Remember the ‘humans keep moving up a layer of abstraction’ point above? At some point this could stop!
Importantly, this may in fact not require perfect/full substitution to happen. Even before AGI is a perfect substitute, the ‘fixed cost’ of employing a human may become so high relative to an API call that firms simply redesign processes to eliminate humans entirely. I’ve written about this possibility before. We may face systemic employment failure not because AI can do the job perfectly, but because the friction of human involvement costs more than the value humans add.
If these fixed costs bite early, the challenge shifts to the political economy of distributing the surplus. However, the policy response requires precision: rather than a blunt capital tax which might hamper the accumulation driving growth, the optimal strategy might be to tax the bottlenecked input. If the AI economy is constrained by land, energy, or raw resources, you tax those specifically, similar to how nations capture rents from mineral wealth. This is somewhat debated, however: in a self-improving ASI economy, ‘capital’ effectively begins to behave like a self-renewing natural resource. Here the standard objections to capital taxation (that it reduces investment and growth) may fall away, allowing for taxation without the usual downside of stalling economic progress.
Lastly, this ‘corner solution’ risk mostly applies to the logic of comparative advantage. The second argument about demand for human services and goods still applies (subject to the above caveats). More likely than not we become like the “hand-made” artisans of today: economically irrelevant to the infrastructure of the world (power grids, undersea cables, vaccine development), even if we are highly “valued” in a small, niche market of status goods. At least to me this seems fine: I don’t think there is anything objectively more valuable about being a consultant or engineer than an artist or meditation teacher, particularly if it also means that poverty, misery, disease, pollution, and other afflictions are solved.
What’s the takeaway?
As seen above, I think there are strong reasons to think that comparative advantage does last some time. There will of course be job displacement, as there has always been with technological change, but the net effect continues to be positive; of course it remains important to think carefully about welfare provisions and appropriate safety nets. AGI is also a flimsy concept when used in these discussions: my own views on the jaggedness of capability gains, the bottlenecks of physical robotics, the intricacies of industrial organization, the high fixed costs of redesigning legacy workflows, and the incremental nature of progress make me skeptical of sudden shifts. Add to this the scarcity of compute and energy, Jevons Paradox for AI demand, the fact that capital investment is also constrained by expected demand (firms won’t invest in capacity if they anticipate insufficient demand), the frictions caused by institutional inefficiency and the legal/regulatory morass surrounding liability, and the durable consumer preference for human-centric/status goods. All these factors - along with the vast long tail of tasks in unstructured environments - make me think that if we get ~AGI in the next ten years, you can easily expect at least another decade of adaptation before we even approach a ‘corner solution’ of full substitution.
Of course in the limit, the full substitution scenario is coherent! It’s not impossible that sometime in the future compute is quasi-abundant, we no longer want fluffy human things, and fixed costs force a decoupling of human and AGI labour; but many people who start with these assumptions really underestimate how extreme, fragile, and demanding they are. The compounding assumptions needed to support such a take contain many moving parts, each of which can individually and materially affect the prediction; put differently, the error bars and confidence intervals are enormous. As Brian Albrecht explains, reaching a ‘zero labor’ endpoint actually relies on a fragile stack of economic conditions holding true simultaneously. And yet the overconfidence accompanying these claims often comes with a menu of political prescriptions.
So I don’t think the ‘hyper pessimistic’ scenario should be the ‘default assumption’ or starting point. Modeling these scenarios remains valuable - but my concern is that many commentators and policymakers will simply not understand the complexity of the scaffolding needed to get to these conclusions, and assume that when someone predicts ‘AGI in the next ten years’, that the most extreme versions of the economic arguments just automatically flow from that. This is simply not true.
And as alluded to above, my own inclinations about the CAIS-like nature of AGI itself make me cautious about jumping straight to the “drop in hyper-capable AGI agent” frame in the first place (which is the default assumption when the AI community discusses AGI, with notable exceptions). This has implications for both timelines (integration doesn’t happen overnight, and especially not if your target is the entire economy) and for complementarity and substitution.
So I expect cyborgism to last a long time - at least until ASI is so superior that a human adds negative value/gets in the way, compute is highly abundant, bottlenecks disappear, and demand for human stuff is zero - which are pretty stringent conditions. These scenarios are worth taking seriously and modeling and studying; but starting off with “perfect substitution” as an assumption and reasoning backwards from there is ultimately not particularly instructive or that useful of a prediction in the short/medium term, at least from a policy and fiscal perspective. It’s far wiser to proceed incrementally, and update along the way - the precautionary principle is bad guidance here too. From a research point of view, however, we must watch to see which force dominates: the resilience of human comparative advantage or the ruthless logic of fixed costs. Until that specific evidence mounts, preemptive policy surgery is likely to do more harm than good. 🌎👨‍🚀🔫👨‍🚀
Huge thanks to Nathaniel Bechhofer, Pedro Serôdio, Brian Albrecht, Alex Imas, and Herbie Bradley for the highly instructive discussions and feedback. I’ve learnt a lot - but errors remain mine.




Brilliant breakdown of the positional goods angle here. The insight that provenance stays scarce even when replicability becomes cheap is something I've been wrestling with in supply chain work. I dunno if status hierarchies in niche communities can really scale enough to employ the workforce tho. Seems like we're betting a lot on taste-based differentiation holding economic weight.
Wrote a brief post last year describing plausible categories of TAI-resistant human roles, similar to your non-substitutable demand argument:
A Taxonomy of Jobs Deeply Resistant to TAI Automation | Convergence Analysis
https://www.convergenceanalysis.org/publications/a-taxonomy-of-jobs-deeply-resistant-to-tai-automation