
One unelected shift in Silicon Valley could quietly replace human judgment with “black box” code—and the security bill may land on everyday Americans.
Story Snapshot
- Software developer Simon Willison says “dark factories” are emerging, where AI agents write and ship code with no human code review.
- StrongDM has publicly described a working “software factory” approach built around rules that explicitly forbid humans from writing or reviewing code.
- Willison points to November 2025 as an inflection point when coding agents moved from “mostly works” to “actually works,” accelerating automation timelines.
- Security remains the biggest unresolved problem, with Willison warning prompt injection is not solved and could create catastrophic failures.
What “Dark Factory” Means: AI Builds the Software, Humans Step Aside
Simon Willison, the co-creator of Django, says the next major leap in AI-assisted programming is the “dark factory”—a workflow where AI coding agents operate autonomously with no human code review or intervention. He discussed the concept on Lenny Rachitsky’s podcast in late January and early February 2026, framing it as the logical end point of today’s “agentic engineering” trends. In this model, the process becomes a pipeline that turns specs into software with minimal human touch.
Willison ties the term to a broader automation analogy: just as robotics enabled “dark” manufacturing floors that don’t need lights, AI may enable software “factories” that don’t need constant human oversight. The framework is commonly credited to Dan Shapiro, who mapped AI programming onto five “automation levels” similar to autonomous vehicles. Level 5—the dark factory—implies the process itself changes so radically that it is no longer recognizable as traditional software development.
From “Spicy Autocomplete” to Full Autonomy After the 2025 Inflection Point
Willison argues the industry crossed a practical threshold around November 2025, when coding agents shifted from unreliable demos to tools that “actually work” often enough to reorganize teams around them. Earlier stages looked like improved autocomplete, then code generation with human review, then increasingly agent-driven loops where humans supervise outcomes rather than craft each line. That trajectory matters because it turns AI from a productivity boost into a management decision: who reviews, who approves, and what happens when nobody does.
In conservative terms, this is less about trendy tech and more about accountability. When humans stop reviewing code, responsibility moves upward to whoever designs the system, selects the tools, and sets the rules for deployment. For Americans already distrustful of opaque institutions, a “black box” development pipeline raises familiar concerns: decisions get automated, blame gets diffused, and regular people are told to accept the results because “the system” said so—even when the system can’t clearly explain itself.
StrongDM’s “Software Factory” Rules: No Human Writing, No Human Review
The dark factory idea gained credibility because it is not just theory. StrongDM published a public description of its “software factory” operations that match the pattern Willison described. The rules are blunt: code must not be written by humans, and code must not be reviewed by humans. StrongDM also uses an internal benchmark suggesting that if a team is not spending at least $1,000 per human engineer per day on tokens, it signals the process still has room to automate further.
Willison has emphasized he has spoken with at least one team implementing the pattern and described it as “fascinating,” adding that small teams—fewer than five people—are already trying it. The limited scale is important: it does not show broad adoption across the economy yet, and it does not provide long-term performance metrics. What it does show is a workable blueprint that other organizations can copy quickly, especially where cost-cutting pressure is high.
The Real Risk Isn’t Speed—It’s Security and the End of Human Oversight
Willison’s biggest warning is security. He has long highlighted prompt injection as a serious, unsolved problem, and in the dark factory model the consequences multiply because there is no human review safety net. If an AI agent can be tricked—through malicious inputs, compromised dependencies, or poorly constrained instructions—the output can become a vulnerability that ships at machine speed. Willison also warns of a “lethal trifecta” that could lead to an “AI Challenger disaster,” underscoring potential for systemic failure.
For readers who care about limited government and constitutional order, the takeaway is practical: software now runs everything from banking to utilities to federal systems, and automation without transparency invites abuse and mismanagement. It focuses on software engineering, not government procurement, so it does not prove federal adoption. But it clearly shows the direction of travel—toward less human accountability. Americans should demand clear audit trails, rigorous testing, and enforceable responsibility before “dark factory” logic becomes the default.
Sources:
https://simonwillison.net/entries/
https://podwise.ai/dashboard/episodes/7679901
https://www.lennysnewsletter.com/p/an-ai-state-of-the-union
https://plantis.ai/kb/articles/an-ai-state-of-the-union-weve-passed-the-inflection-point-da-8a45e903
https://simonwillison.net/2026/Feb/7/software-factory/
https://simonwillison.net/2026/Jan/28/the-five-levels/













