There’s a thesis circulating at the highest levels of tech: the cost of producing software is approaching zero. It’s being said loudly, by people with the resources to act on it.
Klarna’s CEO Sebastian Siemiatkowski is the clearest proof that this is already happening. He shut down over 1,200 SaaS vendors and replaced much of that work with AI. He isn’t theorizing: he has already restructured the company around the assumption that the cost of writing code will reach zero.
Elon Musk is making a different but related bet. Where Siemiatkowski is focused on cost, Musk is focused on autonomy. Macrohard — his AI software project within xAI — is built on the premise that humans don’t need to be in the loop at all: software agents spawn other agents, write code, run tests, and ship. An AI software factory, end to end.
Marc Andreessen has framed it as a structural inevitability: AI drives the cost of software production toward zero the way competitive markets always do. Microsoft CEO Satya Nadella has been more specific about what that means for existing software: most SaaS products are “CRUD databases with a bunch of business logic” — thin wrappers that were viable because building software was expensive. When building is cheap, the wrapper isn’t worth much. The moat was always execution cost, not product differentiation.
The counterpoint comes from an unexpected voice. Jensen Huang — whose company supplies the chips that make all of this possible — called fears that AI would replace software “the most illogical thing in the world.” He was responding to investors selling enterprise software stocks on fears that AI agents would make software products obsolete. His reasoning: AI agents are software consumers, not software replacers. More AI means more demand for software, not less.
Huang is right. And that’s where the more interesting question begins.
When Something Gets Cheaper, You Use More of It
There is a consistent pattern in the history of technology: when a resource becomes cheaper, demand for it expands faster than the price falls. This is the Jevons paradox, and it has held everywhere from coal-fired steam engines to cloud compute.
Cheaper compute didn’t reduce the amount of software in the world. It created categories that couldn’t have existed before — video streaming, social networks, cloud infrastructure, real-time machine learning. Each drop in cost enabled a new layer of complexity that wasn’t previously feasible.
The same dynamic applies to software production itself. When it costs less to build, teams don’t ship fewer features and go home early. They build systems they couldn’t previously staff. Scope expands. Ambition expands. The kinds of products that were previously ruled out because they’d take too long or cost too much become viable.
The consequence of cheaper execution isn’t less software. It’s more complex software, built by smaller teams, moving faster than before.
The Constraint That Was Always Holding You Back
There’s a constraint that shaped almost every product decision for the last thirty years: you had a limited team.
Features took time. Systems had scope limits imposed by headcount. Some projects were ruled out at the whiteboard stage because “we don’t have anyone to build that.” Opportunity cost was everywhere — every feature you built was a feature someone else didn’t build.
That constraint is loosening fast.
When I built a production deployment orchestrator using Claude — a project I estimated would take six months — it took three days. That wasn’t a productivity improvement. It was a categorical change. It meant work that would have been deprioritized off the roadmap forever became a weekend project.
Multiply that across a company. Multiply it across an industry. The systems that were too expensive to staff — the platforms that required 50 engineers to be viable — are now accessible to a much smaller team. For the first time, a single person with sufficient judgment and domain knowledge can tackle problems that previously required an engineering organization.
The trillion-dollar one-person company isn’t a thought experiment. It’s an early-stage reality. The question is what it takes to be the one person.
The Software Factory, and Who’s Building It
Musk’s Macrohard framing is interesting precisely because it’s honest about what this looks like at scale: agents spawning agents, software factories running autonomously.
Major tech companies are building internal versions of this. The AI code generation tools are just the visible tip; underneath them are workflows, pipelines, and evaluation systems for directing and reviewing AI output at scale.
The race isn’t about who can write code fastest. It’s about who can direct, review, and validate AI output at the rate that AI can produce it. Production velocity is no longer the constraint. Judgment capacity is.
This is the interesting competitive dynamic: the bottleneck has shifted. For fifty years, the bottleneck was “can we build it?” Now the bottleneck is “can we tell if what we built is right?”
Two Kinds of Judgment
That question has two parts, and AI can only answer one of them.
Give it explicit criteria — a spec, a test suite, acceptance conditions — and it can validate output against them reliably. That part is automatable, and the tooling around it is getting better fast.
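As a minimal sketch of what “explicit criteria” can mean in practice, here is a hypothetical spec for a small pricing helper, written as tests that an AI-generated implementation has to pass. The function, its rules, and the test names are illustrative, not from any real codebase.

```python
import pytest

def apply_discount(price: float, percent: float) -> float:
    """AI-generated implementation under review (hypothetical example)."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)

# The acceptance conditions, written down. This half of judgment is automatable:
# any candidate implementation either passes these or it doesn't.
def test_normal_discount():
    assert apply_discount(100.0, 20) == 80.0

def test_zero_discount_changes_nothing():
    assert apply_discount(59.99, 0) == 59.99

def test_full_discount_is_free_not_negative():
    assert apply_discount(10.0, 100) == 0.0

def test_out_of_range_percent_is_rejected():
    with pytest.raises(ValueError):
        apply_discount(10.0, 150)
```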
What it can’t replicate is the judgment you haven’t written down yet. The “that’s not what I meant” that only surfaces when you see the wrong output. The felt sense of when something is technically correct but directionally wrong. Tacit knowledge — the kind that lives as intuition rather than documentation — is still yours to supply.
This is the ceiling on the full-autonomy vision. Agents can execute. They can validate against rules you’ve stated. But someone still has to set the rules, and setting them requires knowing what you want before you’ve seen it. The factory still needs a foreman. Not everyone in that role is actually qualified.
The Dividing Line Is Judgment
There are two kinds of people in this new environment.
The first can review the code, validate the output, and own the product. They understand what they asked for well enough to check whether they got it. They build test suites. They know the system well enough to ask “what happens when this fails?” before it fails. They treat AI output as a pull request, not as a finished product.
The second blindly trusts what the AI produces. They ship the first output that looks plausible. They don’t have enough context on the problem space to know when an answer is wrong. They’ve automated their own decision-making away.
The first group has a force multiplier. The second group has a liability multiplier — they’re shipping mistakes faster than before.
Instructions and tools tell an AI what to do, but they don’t ensure the output is actually correct. Guardrails that create feedback loops are what close that gap — not as a nice-to-have, but as the core discipline that separates AI-augmented development from vibe coding at scale.
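A rough sketch of what that feedback loop can look like, assuming a generic code-generation call and a couple of standard checks. `generate_code` and `apply_candidate` are placeholders for whatever model API and workflow you actually use, not a real library.

```python
import subprocess

MAX_ATTEMPTS = 3
CHECKS = [
    ["python", "-m", "pytest", "-q"],        # the spec, encoded as tests
    ["python", "-m", "ruff", "check", "."],  # style rules nobody reviews by hand anymore
]

def generate_code(prompt: str) -> str:
    """Placeholder for a call to whatever code-generation model you use."""
    raise NotImplementedError

def apply_candidate(source: str) -> None:
    """Placeholder: write the candidate change into the working tree."""
    raise NotImplementedError

def run_checks() -> list[str]:
    """Run every explicit criterion and collect the output of whatever failed."""
    failures = []
    for cmd in CHECKS:
        result = subprocess.run(cmd, capture_output=True, text=True)
        if result.returncode != 0:
            failures.append(result.stdout[-2000:])
    return failures

def generate_with_guardrails(prompt: str) -> str:
    """Generate, check, and feed the failures back until the output passes."""
    failures: list[str] = []
    for _ in range(MAX_ATTEMPTS):
        full_prompt = prompt if not failures else (
            prompt + "\n\nThe previous attempt failed these checks:\n" + "\n".join(failures)
        )
        candidate = generate_code(full_prompt)
        apply_candidate(candidate)
        failures = run_checks()
        if not failures:
            return candidate  # passed every rule someone wrote down
    raise RuntimeError("Out of attempts: escalate to a human reviewer.")
```

The loop only validates against the rules that exist. Everything the checks can’t express is still the reviewer’s problem, which is exactly the dividing line above.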
The uncomfortable version of this: many people who consider themselves technical will end up in the second category, because the skills that matter have shifted. Raw implementation speed matters less. System design, domain knowledge, and the ability to evaluate output matter more.
The Opportunity Is Real
The opportunity is a generation of software that couldn’t previously be staffed — platforms that required 50 engineers, systems ruled out at the whiteboard, products too complex to ship with a small team. That market is opening. It may beget the first trillion-dollar one-person company.
I’ve written separately about what’s left to sell when everyone can build software: distribution, domain expertise, and trust matter more as execution cost falls.
But the underlying prerequisite hasn’t changed: you have to be able to tell if what you built works. You have to understand the system well enough to catch the mistakes. You have to have the judgment to lead the team — even when the team is an AI.
The foreman has bad days. He gets tired, turns inconsistent, and forgets to enforce the rules he set last month. That is the real argument for writing things down: every time your judgment catches something the system missed, ask whether you can articulate why. The spec doesn’t forget. The test suite runs every time. Tacit knowledge encoded into explicit criteria doesn’t depend on your memory; it just runs. The foreman’s job shrinks as the rulebook grows. But the rulebook only grows because someone keeps catching things and writing them up.
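To make “writing it up” concrete: suppose review keeps catching AI-generated code that calls external services without a timeout. A hypothetical sketch of that instinct, encoded as a check that runs on every change (the rule and the regex are illustrative, not a real project’s policy):

```python
# A tacit review instinct ("never call an external service without a timeout"),
# written down as a check. Illustrative rule and regex, not a real project's policy.
import re
import sys
from pathlib import Path

CALL = re.compile(r"requests\.(get|post|put|delete|patch)\(([^)]*)\)")

def missing_timeouts(source: str) -> list[int]:
    """Return line numbers of requests.* calls that don't pass a timeout."""
    flagged = []
    for lineno, line in enumerate(source.splitlines(), start=1):
        for match in CALL.finditer(line):
            if "timeout" not in match.group(2):
                flagged.append(lineno)
    return flagged

if __name__ == "__main__":
    failed = False
    for path in Path(".").rglob("*.py"):
        for lineno in missing_timeouts(path.read_text()):
            print(f"{path}:{lineno}: requests call without a timeout")
            failed = True
    sys.exit(1 if failed else 0)
```

Run something like this in CI and that particular judgment no longer depends on the foreman having a good day.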
The execution barrier is gone. The judgment barrier doesn’t disappear — it becomes the work.
The Downstream Cascade
Software has always been the design layer for physical systems. When software production accelerates, the downstream effects don’t stay in software.
Cheaper and faster software generation means faster design iteration for physical products. Faster design iteration means faster manufacturing of physical things. And as robotics, automation, and AI-controlled hardware mature, software becomes the substrate for physical infrastructure — logistics networks, manufacturing plants, robots exploring space and mining other worlds.
Each layer enables the next. The acceleration of software production is upstream of the acceleration of everything else. When that layer gets cheaper and faster, what gets manufactured changes. What gets built in the physical world changes.
The cascade has already started. We’re just not far enough in to see where it ends.
