AI is not intelligence. It's coordination technology.
Stop thinking about AI as another software cycle
We did ourselves a disservice by calling this thing “artificial intelligence.” The name pushes us toward anthropomorphism and IQ-measuring contests. Is it smarter than a doctor? Can it pass the bar exam? Will it be conscious by 2027?
A group of scholars recently made a more useful suggestion. In a paper published in Science, Henry Farrell, Alison Gopnik, Cosma Shalizi, and James Evans argue that LLMs are better understood not as intelligence but as a new type of “cultural and social technology.” They’re in the same category as writing, printing, markets, and bureaucracies: systems that aggregate and reorganise human knowledge at scale.
The comparison to markets is particularly sharp. Hayek’s great insight was that no individual can know everything about supply and demand, but prices coordinate dispersed knowledge into something usable. LLMs do that for text, images, code, and ideas.
“Someone asking a bot for help writing a cover letter for a job application is really engaging in a technically mediated relationship with thousands of earlier job applicants and millions of other letter writers and RLHF workers.”1
That’s not thinking. It’s remixing. And that’s not a criticism: the impact could hardly be bigger.
What happens when you change how societies think
The printing press didn’t just make books cheaper. It unleashed an explosion of what today we would call ‘slop’. Recipes that had circulated unwritten among alchemists suddenly appeared in print, indistinguishable from those used by surgeons. Old texts came together in new combinations, and nobody knew what to trust.
Society had to invent bibliographies, journals, editors, peer review, and publishing houses to separate signal from noise. All of this was a prerequisite for the Enlightenment.
Perhaps GenAI is doing something analogous. It can produce text, images, and code faster than any filtering mechanism we’ve built can evaluate them. The result, as investor Yoni Rechtman has catalogued, is that existing systems are collapsing under volume: hiring pipelines break when applications become free to generate; outbound marketing channels get poisoned; security models fail because everything looks real.
These aren’t model-performance problems. They’re signal-to-noise problems. And they won’t be solved by making GPT-5 smarter.
There’s another dimension to this. Vibe-coding, instant prototyping, and AI-assisted development mean we’re building more software, faster, and with less thought than ever before. Most of it will have to be rewritten, governed, integrated, and maintained by humans.
The bottleneck isn’t generating code. It’s making code that actually works in production, connects to real systems, and doesn’t create a governance nightmare six months later.
What this means for adoption
If AI is a social technology, then the path to adoption doesn’t run through better benchmarks. It runs through trust. Someone has to sit with the 40-year veteran who won’t use the system, earn their confidence, and iterate until the tool fits their hands. Humans have to develop and propagate the new artefacts and norms (to avoid the loaded word ‘institutions’) that help us isolate the signal and benefit from this incredible explosion of capability.
This is what we’re building at 10xHumans: services that translate between what AI makes possible and what organisations can actually absorb. It’s training tailored to each role, wrapped in an AI strategy that augments the team, supported by change management for humans, by humans.
Next week: what this means for the picks & shovels investment thesis.
1. Farrell H, Gopnik A, Shalizi C, Evans J. Large AI models are cultural and social technologies. Science. 2025;387(6739):1153-1156. doi:10.1126/science.adt9819. PMID: 40080578.