This is Part 2 of a conversation with Claude Opus 4.6 about what happens when building software costs nothing. Part 1 covered where value shifts for developers. Here, we got into the bigger questions.


Me: What about universal basic income? Won’t AI take over other industries? White collar and blue collar with robots and AI? Where does that leave things? Utopia or dystopia? What odds do you give us?


On UBI

Opus didn’t dismiss the idea — but it didn’t romanticize it either.

The logic is straightforward: if machines do most of the work, just give people money. The economy produces more than enough. The problem is distribution. UBI fixes the distribution pipe.

The practical problem is also straightforward: the people capturing the AI-driven productivity gains are the same people with the most political power and the best lobbyists. Every UBI pilot runs into the same wall when it’s time to fund it at scale.

But the deeper issue is one that doesn’t get talked about enough:

“Money isn’t the only thing work gives people. It gives identity, structure, community, purpose. Every study on long-term unemployment shows that even when people have enough money, losing work damages mental health, relationships, and sense of self. UBI might keep people fed. It doesn’t necessarily keep people okay.”

That’s not an argument against UBI. It’s an argument that UBI alone isn’t enough — you’d need new structures that give people meaning and belonging. And we don’t know what those look like yet.

The prediction:

“Some form of UBI or guaranteed income becomes inevitable within 15-20 years in developed countries. Not because politicians want it, but because the alternative — mass unemployment with no safety net — becomes politically untenable. It’ll be implemented badly, probably too late, and it’ll be called something other than UBI because the branding is politically toxic.”

That’s a plausible read — if trends hold. But 15-20 years is a confident timeline. Political economy in developed countries has a long history of deferring “politically untenable” situations longer than anyone expected. The program might arrive on time under a different name and with half the coverage, or it might take 30 years and a crisis to force the issue. The timeline is the genuinely uncertain part.


On blue collar and white collar

Yes to both. But on different timelines.

The cognitive layer goes first — knowledge work, analysis, coding, design, writing. That’s already happening. Robotics is harder. The physical world is messy in ways that are trivial for humans and brutally hard for machines.

The rough pattern Opus outlined: white collar disruption hits hard in the next 5-10 years. Blue collar is more nuanced than a single wave. Repetitive physical tasks — warehouse fulfillment, self-checkout, food service kiosks — are already being displaced now, not waiting for some future wave. The roles that genuinely take 10-20 years are the ones requiring unpredictable physical environments: plumbers, electricians, HVAC technicians, where every job site is different. The physical world isn’t uniformly hard for machines — it’s specifically hard where the environment is novel and variable.

What’s left? A shrinking set of roles that resist automation — genuine human connection (caregiving, therapy, teaching young children), novel physical problem-solving in unpredictable environments, high-stakes ethical judgment, and creative work where the humanness of the creator is the point.


On the actual odds

This is the part I kept coming back to. Opus put numbers on it — worth noting these are the model’s illustrative priors, not empirical probabilities derived from data:

“~10% — Genuine utopia. AI solves scarcity, governments adapt intelligently, we figure out new social structures that give people meaning without mandatory labor. Possible. Not likely.”

“~15% — Genuine dystopia. Concentration of power becomes irreversible. A small class of people who control AI systems effectively become neo-feudal lords. Democratic institutions collapse. This is the scenario people fear, and it’s not crazy — the ingredients are already visible.”

“~75% — The muddle. Some people do amazingly well, a lot of people struggle through a rough transition, governments implement half-measures that partially work, new industries emerge that nobody predicted, inequality gets worse before policy catches up, and in 30-40 years we look back and say ‘that was a wild ride but we got through it, mostly.’”

One thing that framing glosses over: the costs aren’t evenly distributed. The rough transition lands hardest on specific communities and demographics — workers without mobility, savings, or reskilling pathways. “We get through it, mostly” means different things depending on where you’re standing. The muddle is already visible in concrete ways: billions allocated to infrastructure edge cases while basic constraints go unaddressed, and convergence risks that could add more friction to adoption timelines than most forecasts account for.

And then the single most important variable:

“It’s not the technology. It’s the speed of transition relative to the speed of adaptation. If AI displaces 10% of jobs over 20 years, society can absorb it. If it displaces 30% of jobs over 5 years, we’re in serious trouble — not because the end state is bad, but because the whiplash breaks things.”
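The arithmetic behind that quote is worth making explicit. A quick sketch of the two scenarios it names — the numbers are the quote’s hypotheticals, not a forecast:

```python
# Compare the average annual displacement rate implied by each scenario.
# These are the quote's illustrative numbers, not predictions.

def annual_rate(share_displaced: float, years: float) -> float:
    """Average share of jobs displaced per year."""
    return share_displaced / years

slow = annual_rate(0.10, 20)   # 10% of jobs over 20 years
fast = annual_rate(0.30, 5)    # 30% of jobs over 5 years

print(f"slow scenario: {slow:.1%} of jobs per year")          # 0.5%
print(f"fast scenario: {fast:.1%} of jobs per year")          # 6.0%
print(f"fast churns jobs {fast / slow:.0f}x faster than slow")  # 12x
```

A 12x difference in annual churn is the whole argument: same technology, radically different stress on the systems that absorb displaced workers.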

Nobody knows the speed. Not the researchers, not the economists. That’s the honest uncertainty.


The 75% muddle doesn’t sound dramatic. That might be why it’s the most likely. History rarely ends — it grinds forward, unevenly, with real costs and real progress mixed together.

The closing advice Opus offered feels right to me:

“Don’t bet on one scenario. Build optionality — flexible skills, real human connections, problems you understand deeply. And don’t let the existential weight paralyze you. Right now, builders still have enormous leverage. That window may narrow. Use it.”


This post is a lightly edited transcript of a conversation with Claude Opus 4.6 on February 18, 2026. The questions are mine. The answers are the model’s, kept as close to verbatim as possible for the key passages.


The conversation kept going. I pushed for something more actionable: is there anything we can actually do about it? Continue in Part 3: What Can We Actually Do About It?