The Inflection Point Was November 2025
Key Takeaway: According to software engineer Simon Willison, November 2025 was the moment AI coding agents shifted from mostly functional to reliably operational. What follows is a period of rapid automation across knowledge work. The dark factory is not a future scenario. It's a present-tense trajectory.
Everyone is looking for the moment AI became real. Most people will date it wrong, pointing to ChatGPT's 2022 launch or GPT-4's release. Simon Willison, one of the most credible technical voices in the AI space, argues the actual inflection point was November 2025.
That's when AI coding agents stopped being mostly functional and became reliably operational. Not for all tasks. Not without supervision. But reliable enough to shift the economic calculation for software development permanently.
In a conversation on Lenny's Newsletter published April 2, 2026, Willison laid out where this leads. His term: dark factories.
What November 2025 Actually Changed
The shift in November 2025 was about error rates, not capability.
AI coding agents had been capable of writing code for two years. The problem was reliability: agents would complete 80% of a task correctly and fail in unpredictable ways on the remaining 20%. That failure rate made them useful for assistance but not for autonomous execution.
When error rates dropped below a critical threshold in November 2025, the calculus changed. Above some failure rate, reviewing every AI output costs more than the value the AI provides. Below that threshold, you can deploy agents with spot-check oversight rather than continuous review.
That's the inflection point. And it matters because the cost structure of software development changes permanently when you cross it.
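The threshold logic above can be made concrete with some arithmetic. This is an illustrative sketch, not anything from the interview; all dollar figures and rates are hypothetical assumptions chosen to show the shape of the break-even, not real measurements.

```python
def oversight_cost(failure_rate: float, review_cost: float,
                   failure_cost: float, spot_check_rate: float) -> float:
    """Expected per-task oversight cost under partial (spot-check) review.

    You pay review_cost on the fraction of outputs you check, and
    failure_cost on the failures that slip through unchecked.
    """
    missed_failures = failure_rate * (1 - spot_check_rate)
    return review_cost * spot_check_rate + failure_cost * missed_failures

# Full review of every output costs the review price regardless of quality:
full_review = oversight_cost(0.20, review_cost=10, failure_cost=200,
                             spot_check_rate=1.0)   # 10.0 per task

# At a 20% failure rate, spot-checking 1 in 5 outputs is a bad trade --
# too many failures ship:
risky = oversight_cost(0.20, 10, 200, 0.2)          # 34.0 per task

# At a 2% failure rate, the same spot-check regime beats full review:
cheap = oversight_cost(0.02, 10, 200, 0.2)          # 5.2 per task
```

Under these toy numbers, the cheapest oversight regime flips from continuous review to spot checks somewhere between the two failure rates. That flip, not any new capability, is what an inflection point in error rates buys.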
According to Willison, mid-career engineers bear the highest displacement risk from this shift. Not junior developers, who are often supervised anyway, and not senior engineers, whose architectural judgment is difficult to automate. The mid-level tier, skilled at execution but spending most of their time in territory that agents now cover reliably, faces a structural compression in demand.
The Lethal Trifecta and What It Means for Business
Willison introduced a concept that deserves wider attention outside engineering circles: the lethal trifecta.
Three conditions: private data access, untrusted content, and external communication. When an AI system has all three simultaneously, you have a security risk profile that existing governance frameworks weren't built to handle.
This isn't theoretical. Every business that deploys AI agents to process customer emails (untrusted content), access CRM or financial data (private data), and send responses (external communication) has the trifecta. Most of them don't know they have it.
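The trifecta is a simple conjunction, which makes it easy to audit for. The sketch below is a hypothetical illustration, not any real framework's API; the capability names are assumptions chosen to match the email-agent example above.

```python
# The three conditions Willison names. An agent is in the danger zone
# only when it holds all three at once.
LETHAL_TRIFECTA = {
    "private_data_access",      # e.g. CRM or financial records
    "untrusted_content",        # e.g. inbound customer email
    "external_communication",   # e.g. sending replies
}

def has_lethal_trifecta(capabilities: set) -> bool:
    """True when every trifecta condition is present in the agent's capabilities."""
    return LETHAL_TRIFECTA <= capabilities

# The email-triage agent from the example above holds all three:
email_agent = {"untrusted_content", "private_data_access",
               "external_communication"}
assert has_lethal_trifecta(email_agent)

# Removing any one leg breaks the trifecta -- e.g. a draft-only mode
# that never sends on its own:
draft_only = email_agent - {"external_communication"}
assert not has_lethal_trifecta(draft_only)
```

The point of the exercise: mitigation doesn't require making the agent less capable overall, only ensuring no single deployment holds all three legs simultaneously.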
The parallel Willison draws to the Challenger disaster is sharp. NASA had information indicating risk before the launch. The organizational structure prevented that information from reaching the people who needed it. A similar failure mode is building in enterprise AI deployments right now.
The Dark Factory
The concept of the dark factory, a manufacturing facility that operates without lights because no humans are present, has existed in industrial manufacturing for decades. Japanese automakers experimented with fully automated production lines in the 1980s.
What Willison describes is the knowledge work equivalent. Software that writes itself, reviews itself, tests itself, deploys itself. No human in the loop except at the goal-setting and monitoring layer.
This isn't arriving in 2030. At Anthropic, a system called CASH already runs AI growth experiments autonomously. At companies building with the current generation of coding agents, large portions of the development loop are already running without per-task human involvement.
At Madison AI, we're building toward exactly this architecture: a system where the marketing execution layer runs autonomously within defined parameters, with humans responsible for strategy and judgment.
The question for any business leader isn't whether this is coming. It's whether your operational architecture is being built to work with it, or whether you're assuming it won't reach you. I covered two signals on AI and work last week that pointed at this same trajectory, and wrote about AI agents running business operations as a near-term preview of what the dark factory looks like in practice.
The factories are going dark. The only decision is whether you're building with the lights off or waiting for someone else to figure it out first.
Based on Simon Willison's conversation with Lenny Rachitsky on Lenny's Newsletter, published April 2, 2026.
FAQ
What is a "dark factory" in the context of AI?
A dark factory is a production environment that operates fully autonomously without human presence. Applied to knowledge work and software, it refers to AI systems that write, review, test, and deploy code without per-task human involvement. The concept is already emerging in companies using advanced AI coding agents.
Why are mid-career engineers most at risk from AI automation?
Junior engineers are supervised and learning; their value is developmental. Senior engineers provide architectural judgment and decision-making that's difficult to automate. Mid-career engineers often spend the majority of their time executing well-defined tasks within known frameworks, which is precisely where current AI coding agents perform most reliably.
What is the "lethal trifecta" in AI security?
The lethal trifecta describes a high-risk configuration where an AI system simultaneously has access to private data, processes untrusted content (like user emails), and has the ability to communicate externally. This combination creates a vulnerability profile that existing security frameworks don't adequately address.
