This is Part 3 of a conversation with Claude Opus 4.6 about what happens when building software costs nothing. Part 1 covered where value shifts for developers. Part 2 put odds on where things are heading. Here, I pushed for something more actionable.


Me: Is there anything we can actually do to improve the odds of a good outcome — for individuals and for society?


On individual leverage

Opus didn’t hedge here.

“Yes. The people who navigate this well won’t be the ones who got lucky with timing. They’ll be the ones who understood, early, that the question isn’t ‘will AI take my job?’ — it’s ‘which specific parts of my work require irreducible human judgment, and which don’t?’”

Most people treat displacement as binary. It isn’t. It’s granular, and the granularity is where the leverage lives. The person who deliberately preserves the irreducible parts of their work — and hands off everything else to AI — isn’t being replaced. They’re becoming more valuable.

The translator role from Part 1 reappeared with more specificity:

“The highest-leverage individual position is the combination of deep domain expertise and genuine AI fluency — not prompt engineering, but understanding what AI can and can’t do well enough to direct it effectively. That combination is rare and currently undervalued because most people are treating AI fluency and domain expertise as separate tracks.”

Two other moves Opus flagged as underrated:

“Build human networks that create actual trust — not followers, not connections, but people who would vouch for you in a crisis or route you work. AI commoditizes anonymous output. It doesn’t replace reputation built over time with specific people.”

“Preserve financial optionality. The people who navigate rough transitions best aren’t always the most skilled — they’re the ones with enough runway to wait out disruption and retrain without being forced into whatever’s available. Lower fixed costs, savings, fewer path dependencies. It’s boring advice. It’s also the most reliable form of resilience.”


On societal leverage

This is where the conversation got more honest about the difficulty.

The key variable from Part 2 — speed of transition relative to speed of adaptation — points directly to two levers: slow the transition, or speed the adaptation.

“Slowing transition is politically difficult but not impossible. Sector-specific regulation already creates natural friction — healthcare AI deployment, autonomous vehicle certification, financial services compliance. That friction isn’t just bureaucracy; it creates time for adaptation. Jurisdictions that regulate thoughtfully, rather than either blocking AI entirely or allowing unconstrained deployment, genuinely have an advantage here.”

On the adaptation side:

“The highest-value societal investments are the ones that build genuine optionality: portable benefits that aren’t tied to employers, genuinely modular education that doesn’t require a four-year degree as the only on-ramp to reskilling, and community institutions that provide structure, meaning, and connection outside of work. The last one is chronically underinvested in because it’s hard to measure.”


The actual obstacle

I pushed on what blocks all of this. The answer was blunt.

“The incentive structure runs the other direction. The entities capturing the productivity gains have more political power and better lobbyists than the entities absorbing the costs. That’s not unique to AI — it’s how industrialization and globalization played out too. Adaptation eventually happened, but it happened after the disruption created enough political pressure to overcome the inertia.”

“The difference this time is speed. If the transition is slow enough, normal democratic processes can respond. If it’s fast enough, the disruption outpaces the political response. That’s why the speed variable matters more than any other single factor — and why anything that influences speed is more valuable than it looks.”

Me: So is there a way to influence speed?

“Yes, but it’s indirect. Consumer choices about which AI deployments to support or push back on. Organizing within sectors to negotiate transition terms rather than accepting unilateral displacement. Supporting political candidates who treat this as a first-order issue rather than a rounding error on the tech policy agenda. None of these are dramatic moves. Together they influence the pace.”


The most underrated move

I asked for one. Opus gave one:

“Start the audit now. Not ‘will AI replace me?’ but ‘which parts of my current work could an AI do, and which parts require something irreducibly human?’ Most people haven’t done this. The ones who do it now will have years to reposition before the pressure arrives. The ones who wait until the pressure is obvious will have much less time.”

“And the audit has to be specific — not ‘coding’ or ‘design’ as monoliths, but: which meetings require my judgment and relationships to actually be productive? Which decisions require someone to be accountable in a way that can’t be delegated to a system? Which work relies on contextual knowledge that doesn’t exist anywhere else? Those are the parts worth doubling down on. Everything else is fair game to hand off.”


The 75% muddle scenario from Part 2 was always “we get through it eventually.” This is what “eventually” depends on — how many people do the audit early, how much runway they preserve, and whether the collective response is fast enough to front-run the worst of the transition.

None of it is guaranteed. All of it is actionable.


This post is a lightly edited transcript of a conversation with Claude Opus 4.6 on February 18, 2026. The questions are mine. The answers are the model’s, kept as close to verbatim as possible for the key passages.