Gutenberg’s Printing Press vs. Today’s AI: Why We Fear the Tools That Free Us
- Jefferies & Partners

- Dec 29, 2025
- 9 min read

When Johannes Gutenberg’s press first began to clatter in the mid-fifteenth century, it did not sound like progress to everyone. To many in power, it sounded like trouble.
For centuries before, written knowledge had moved slowly. Books were copied by hand, usually by monks in quiet rooms where time seemed to stretch. The scarcity of books gave them a kind of sacred weight. If you controlled the scriptoria and the scholars, you largely controlled what people could read, and therefore what they could think.
Then came a German goldsmith with an idea: metal type, ink, and a press that could turn out pages at a speed no scribe could match. Suddenly, the monopoly on copying was gone. Books could be produced in the hundreds, then thousands. Pamphlets could be printed quickly and distributed cheaply. The pace and scale of communication changed almost overnight.

It is easy, from our vantage point, to celebrate this as a straightforward victory for enlightenment. But it didn’t feel that way at the time. The printing press was greeted with a wave of anxiety that sounds remarkably like the way we talk about artificial intelligence in 2025.
Authorities worried that ordinary people might read scripture directly and “misinterpret” it, bypassing priests and established doctrine. Scholars feared that their carefully preserved knowledge would be drowned in an ocean of poorly translated, hastily printed texts. Rulers were unsettled by the idea that pamphlets critical of their decisions could circulate rapidly among the population, stoking unrest. Moralists complained that cheap printed matter would corrupt taste, spread obscenity, and debase the culture.
In other words: too much information, in the wrong hands, moving too fast.
Fast forward to today: we have swapped the ink for algorithms and the wooden press for server racks, and the script sounds almost identical. We worry that generative AI will flood the world with misinformation, bypass experts and institutions, cheapen culture with an endless stream of synthetic content, and be weaponised by people with bad intentions.
The fear is not irrational. It never was.
Printed pamphlets did fuel rebellions and religious conflicts. They helped spread slander, superstition, and propaganda. The press gave voice not only to reformers and scientists but also to charlatans and demagogues. It made it easier for good ideas and bad ideas to travel further and faster than ever before.
In the same way, AI makes it easier to do many things at scale: translate, summarise, generate, imitate. It makes it easier to help, but also easier to deceive. It can write helpful explanations of complex topics, but it can also fabricate convincing lies. It can design proteins, but it can also help design more sophisticated cyberattacks. It can help a lonely student learn in their own language, and it can help a scammer mimic a loved one’s voice.
It’s tempting, faced with that ambiguity, to locate the danger in the tool itself. To say: this hammer is evil; look how many skulls it could crack. But a hammer is a stupid, honest thing. It doesn’t know whether it’s driving nails into the frame of a house or swinging in a street brawl. It has no intentions. It only magnifies the force of the arm that wields it.
The printing press was a more complicated hammer, and AI is more complicated still. But the principle holds. These technologies do not come with values preinstalled. They are multipliers of human intent. They expand our reach. They sharpen both the surgeon’s scalpel and the con artist’s tricks.
Bad actors did not appear with the arrival of the press, and they have not appeared with the arrival of AI. Long before any of this, people lied, manipulated, exploited, and incited. They did it from pulpits and taverns, in letters and speeches, through rumour and rhetoric. When the press arrived, they simply found a more efficient way to do what they were already doing. The same is now happening with AI. It does not create malice; it accelerates whatever was already there.

Our anxieties are not only about truth and lies. They’re also about work. Here, too, history refuses to be quiet.
In a 1938 essay for MIT Technology Review titled “The Bogey of Technological Unemployment,” MIT president Karl T. Compton asked whether machines were friendly genies, serving every human need, or “Frankenstein” creations that might turn on their makers. Writing in the shadow of the Great Depression, with US unemployment around 20%, he understood why people were afraid.
Compton drew an important distinction. For the economy as a whole, he argued, technological unemployment was largely a myth: new machines lowered costs, opened new markets, and created entirely new industries. Over time, that meant more work, not less. But for individuals and communities, the story was very different. When a town’s mill closed or a craft was replaced by a new technique, the pain was real and immediate. Families couldn’t live on the promise that “jobs will come back in the long run.”
That tension, between long-run growth and short-run disruption, has never really left us. In the 1960s, the MIT economist Robert Solow tackled the same question. In an essay called “Problems That Don’t Worry Me,” he dismissed the idea that automation was about to cause catastrophic mass unemployment. Productivity, he noted, was improving, but not at revolutionary speed. Yet Solow, like Compton, acknowledged that specific kinds of work could suddenly become obsolete, with very real human costs.
Fast-forward to the early 2010s. Industrial robots had already hollowed out many manufacturing jobs; now AI and digital tools were nibbling at clerical and office work. MIT scholars Erik Brynjolfsson and Andrew McAfee argued that technology might be destroying jobs faster than it created new ones. Later, economist David Autor and colleagues pointed out that around 60% of jobs in 2018 were in occupations that hadn’t even existed in 1940, a reminder that innovation also invents new kinds of work, even as it abolishes old ones.
More recently, a Goldman Sachs analysis estimated that around two-thirds of US roles are exposed in some way to AI automation, but that most will be partially automated rather than replaced outright. AI becomes part of the workday, not a pink slip in a chat window.

So the pattern, across MIT Technology Review’s 125-year vantage point, is messy but clear. Technology upends tasks, reshapes sectors, and hurts specific workers in the short term. Over the long term, it tends to raise productivity, enable new industries, and change what “a good job” looks like. The risk is not an overnight, jobless future conjured by a Silicon Valley genie; the risk is unmanaged transition, where the benefits and the pain are distributed with brutal unevenness.
In other words, the “bogey” hasn’t gone away, but it has never been the whole story.
What actually changes: the scaffolding around the tool
History offers a useful answer, not in the form of a comforting guarantee, but as a pattern. The societies that lived through the printing revolution did not control the tool by smashing presses. Attempts to suppress printing entirely failed, or succeeded only briefly and brutally. What did work, over time, was something messier: building institutions and norms around the tool.
Schools and universities grew to teach literacy and critical thinking, so people could read more than one text, compare sources, and argue. Libraries and catalogues were created to organise growing forests of books. The profession of editing and publishing emerged, with reputations to defend and standards to uphold. Systems of citation and peer review developed. Laws evolved: about libel, sedition, obscenity, and later about copyright and press freedom.
None of this removed the possibility of harm, but it changed the balance. It made some uses of the press easier and more legitimate, and others harder and more costly. The same pages of type that had once amplified chaos now also amplified inquiry, debate, and reform. The technology didn’t become moral. The surrounding ecosystem did more of the ethical work.
We face a similar fork with AI. The tool is here. It will only become more capable. We can react with blanket fear and vague calls to “slow everything down,” or we can do the harder work of shaping the scaffolding around it: governance, incentives, education, and culture.
That doesn’t mean we shrug off the risks. It means we stop pretending that the risks are alien, imported into the world by lines of code, and acknowledge that they are recognisably human. The fear that a chatbot can convincingly lie is, at its root, the old fear that a persuasive person can convincingly lie. The difference is speed, reach, and the way we package authority.
Speed and reach matter. So regulation matters. Guardrails matter. But they should be built with the same kind of realism that we bring to other powerful tools.
We do not ban hammers because they can be used as weapons; we have laws against assault and murder. We do not outlaw cars because they can kill people; we set speed limits, require licenses, build traffic systems, and design crumple zones. We do not abolish the press because it can defame and incite; we have libel laws, media ethics codes, and in many places, protections for a free but responsible press.
With AI, something similar will be needed. Clear rules for high-risk domains like healthcare, finance, critical infrastructure, and elections. Transparency about how systems are trained and tested. Accountability for those who deploy them at scale. Better tools for detecting synthetic media and tracing the origins of content. And perhaps most importantly, a cultural expectation that using AI does not dissolve human responsibility. “The model suggested it” cannot be allowed to replace “I chose to act on this.”
Literacy for a machine age
At the same time, we need the equivalent of literacy: not just digital skills, but a broad, everyday understanding of what AI is and isn’t. In the early print era, people gradually learned to ask: Who wrote this? Who printed it? Who paid for it? What’s their interest? Today the questions are parallel: Who trained this system? On what data? Who benefits from the way it frames the answer?
If those habits of questioning become widespread, AI becomes less spooky. It becomes one more part of an environment that citizens know how to navigate, like newspapers, television, or social media. And just as literacy made the press more powerful in good ways, AI literacy can tilt the balance toward uses that genuinely serve people rather than manipulate them.
There is also a quieter, more hopeful symmetry between the printing press and AI that is easy to miss in the noise. The press did not simply create chaos. It also made possible the modern scientific enterprise, the rapid spread of technical know-how, the preservation and sharing of literature, philosophy, and law. Movements for abolition, suffrage, labour rights, and democracy all depended on printed words that could travel cheaply, crossing borders and social classes.
We rarely think of those as “printing outcomes,” but they are. The same technology that printed inflammatory pamphlets also printed the arguments that dismantled slavery, that defended human rights, that explained germ theory and public health. The same ink that carried lies carried counter-arguments, evidence, and solidarities.
AI has a similar double potential. It can generate targeted propaganda, but it can also help fact-check it. It can produce cheap disinformation, but it can also translate scientific papers, accelerate drug discovery, and help millions of people understand complex decisions in their own language. It can be used to automate scams, but also to detect fraud, monitor for cyberattacks, and build systems that are more robust than any we’ve had before.
What we choose to build
At Jefferies & Partners, we think about AI this way: not as an inevitability to fear, but as a management challenge to design for. We’re a management solutions firm, not a lab, so our question is always, “How do leaders turn this from an abstract risk into a concrete advantage, for their people, their customers, and their communities?”
That means favouring augmentation over simple replacement. Using AI to expand what skilled people can do, not just to thin out headcount. It means treating “human in the loop” not as a safety slogan but as a moral commitment: someone, somewhere, owns the consequences.
The Gutenberg press did not make Europe wiser on its own. It made wisdom more possible and stupidity more visible. AI will not make us more ethical or more foolish by itself. It will make the consequences of our ethics and our foolishness larger, faster, and harder to ignore.
So perhaps the most honest position is neither optimism nor pessimism, but a kind of clear-eyed courage. We do not need to fear the hammer. We need to look at our hands.
Will we build the institutions, norms, and habits that make good uses of AI easier and bad uses riskier? Will we invest in the slow, unglamorous work of education, governance, and cultural adaptation? Will we demand, as Compton did in 1938, that management’s ultimate motive be not “quick profits” but service to the public, this time in a world where algorithms sit alongside people in every workflow?
A hammer can be a weapon or a tool. A press can be a poisoner of minds or a midwife of revolutions. AI can be an engine of manipulation or a scaffold for understanding. The choice is not in the metal, the type, or the code.
The choice is in us.

