AI is Killing the Web as We Know It
The incentives for creating quality content are diminishing by the day.

The Economist, “AI is killing the web. Can anything save it?”
tl;dr version:
The article exposes a fundamental shift in how people navigate the internet—AI is delivering answers directly, fundamentally altering the web’s economic ecosystem.
The unraveling of the “open web” bargain
The traditional web model depends on websites offering free content in exchange for search-driven traffic and advertising revenue. But now, AI-driven query tools (like ChatGPT, Perplexity, Google’s AI‑Mode and OpenAI’s upcoming “Operator” browser) provide users with synthesized answers instead of links. This means users increasingly never visit the original sources, choking off traffic to content sites.
Similarweb data shows a ~15 % drop in global search-generated human traffic over the last year. The hardest hit: reference sites (−15 %), health (−31 %), education, and science platforms. Publishers like Dotdash Meredith—already receiving over 60 % of their traffic from Google—have seen that figure plummet to the mid‑30s.
Stack Overflow’s CEO warns that AI “is basically choking off traffic to most content sites,” explaining a noticeable decline in community contributions. Even Wikipedia warns that AI-generated summaries without citations are “blocking pathways” for people to discover and contribute.
New models to monetize AI-generated value
In response, content creators are negotiating licensing deals or suing AI firms. News Corp has deals with OpenAI; The New York Times works with Amazon and is litigating against OpenAI; Reddit struck a $60 m/year licensing deal with Google. However, legal rulings in California have so far sided with AI firms, affirming their fair-use defense.
Moreover, the majority of small websites are too small to negotiate with big AI platforms, and unilateral blocking of bots merely ensures total obscurity.
That has driven new technical and economic innovations:
- Cloudflare offers bot-control services where site owners choose the conditions under which AI gathers their content, including pay‑per‑crawl options.
- Tollbit enables sites to impose “bot paywalls,” varying fees depending on the content, with early uptake among publishers like AP and Newsweek.
- ProRata (via its Gist.ai engine) redistributes ad revenue from AI answers back to source sites, providing an exemplar for a long-term model.
Despite these efforts, scaling up remains challenging—antitrust law limits collective action, and judges have yet to rule in favor of widespread licensing frameworks.
A broader and perhaps necessary evolution
Google maintains optimism: as search becomes more AI‑centric, the volume of content indexed has grown by 45 % in two years. More sites are being “read” by AI, even if by bot rather than human eyes. Google claims outbound clicks haven’t dramatically dropped, citing other factors like social media and podcasts.
The article concludes with a realistic outlook: the open web’s traditional ad‑and‑traffic‑driven model is under existential threat. Content creators must adapt—employing licensing agreements, exploring bot-led paywalls or revenue sharing, and diversifying with newsletters, apps, events, audio/video offerings and enterprise products.
Ultimately, while AI may represent the most significant disruption to date, The Economist suggests it’s a pivot point, not the web’s death. The future internet may coalesce around new “agentic” protocols, collective licensing, revenue-sharing infrastructure, and open standards—balancing creator incentives with AI‑driven efficiency.
In short, AI-driven search is reshaping the internet’s economic model. While content creators scramble to monetize their work via deals, bot paywalls, or revenue sharing, the survival of the open web hinges on whether these new systems can scale. Without them, we risk losing serendipity, diversity, and the core value of human‑driven content—even as AI offers unprecedented access and efficiency.
The above is from ChatGPT, with the most basic prompt imaginable: a request for a 500-word summary. It’s pretty good–and considerably better than it would have been when I first started playing with the LLM a year and change ago. It also recommends the FT article “We need a new deal for the web” and another from Business Insider titled “Thanks to ChatGPT, the pure internet is gone. Did anyone save a copy?”
It’s noteworthy that neither the FT nor the BI piece is paywalled—but the Economist article is.
Another prompt, “Provide the key points of those three articles in bullet form,” produces this:
1. The Economist – “AI is killing the web. Can anything save it?”
- The traditional web model relies on free content attracting traffic via search engines, monetized through ads.
- AI tools like ChatGPT, Perplexity, and Google’s SGE answer queries directly, bypassing source websites.
- This has led to a significant drop in web traffic, especially for reference, health, and education sites.
- Content creators are losing incentives to produce content, which threatens the sustainability of the open web.
- Some publishers are striking licensing deals with AI firms (e.g., News Corp, Reddit), but many smaller sites lack leverage.
- Tools like Tollbit, Cloudflare’s bot-control, and ProRata aim to monetize or control AI access to content.
- Lawsuits challenging AI use of scraped data have largely failed so far, with courts siding with AI companies.
- Google insists AI hasn’t greatly reduced clicks and says more content is being indexed than ever before.
- AI companies may need to share value with creators to preserve a healthy content ecosystem.
- The web may shift to a “post-search” world relying on bots, agents, and new compensation models.
2. Financial Times – “We need a new deal for the web”
- The web’s original promise of openness and access is being undercut by the rise of AI and platform dominance.
- AI threatens to extract value from the web without fairly compensating creators.
- Large tech firms have built monopolistic platforms that siphon value but contribute little original content.
- The FT calls for a new “deal” to protect journalism, knowledge, and public discourse from being hollowed out.
- This deal should include legal reform, revenue-sharing models, and enforceable transparency for AI systems.
- Content producers need stronger collective bargaining power to negotiate with AI firms.
- Regulation should require AI systems to attribute and compensate sources.
- The current path could lead to a digital world dominated by derivative content with no incentives to create new work.
- Europe’s efforts (e.g., the EU AI Act) are a step forward but need to go further.
- Preserving a rich, open, and diverse web requires rebuilding the economic model around trust and value creation.
3. Business Insider – “Thanks to ChatGPT, the pure internet is gone. Did anyone save a copy?”
- AI tools are rapidly consuming and rewriting internet content, altering its original structure and accessibility.
- This creates a risk of “data collapse” where AI models recycle AI-generated content, degrading quality over time.
- Some technologists are racing to preserve “clean” versions of the internet, such as through the “Low Background Steel” project.
- There’s growing concern about the provenance of data used by AI—what’s real versus AI-generated.
- Cloudflare and others are helping sites manage AI bots and control scraping, but it’s an uphill battle.
- The internet is becoming more polluted by synthetic content, reducing discoverability of original thought.
- Critics warn that without intervention, AI will erode the “signal” of genuine human expression.
- Some archivists are trying to preserve a version of the web that predates AI dominance.
- Others call for labeling systems or watermarks to distinguish AI output from human-generated content.
- The article paints a sobering picture: the age of a “pure,” decentralized web is fading, and we may not be able to reverse it.
Again, pretty good and done in seconds.
It then offers, “Let me know if you’d like a comparison or synthesis of these.”
The comparison:
Shared Concerns
- AI is Disrupting the Web’s Economic Model
All three articles agree that AI tools (like ChatGPT, Perplexity, and Google’s SGE) are undermining the traditional web structure by intercepting user queries and providing answers directly, thereby bypassing source websites and cutting off revenue streams for content creators.
- Loss of Incentives for Content Creation
They emphasize that creators—journalists, academics, developers—are losing motivation to produce quality content, knowing they might not receive traffic, attribution, or compensation. This threatens the sustainability and quality of the internet.
- Urgency for New Legal and Economic Frameworks
Each article suggests the current legal and economic systems are ill-equipped to protect content creators from exploitation by AI. They call for regulatory intervention, licensing systems, or new technical standards to preserve the open web.
Differences in Emphasis
| Theme | The Economist | Financial Times | Business Insider |
| --- | --- | --- | --- |
| Focus | Web traffic decline & monetization crisis | Policy and rights-based call for reform | Data purity, archiving, and existential risk |
| Tone | Analytical, structural | Normative, advocacy-driven | Alarmist, preservationist |
| Proposed Solution | Licensing, bot paywalls, revenue sharing via new tech | Legal regulation, collective bargaining, enforceable transparency | Archiving the web, labeling AI content, preserving “pure” data |
| Threat Framed As | Economic unsustainability for creators | Collapse of public discourse and media independence | Irreversible degradation of internet quality and provenance |
Unique Contributions
- The Economist introduces the idea of “bot paywalls” and revenue-sharing mechanisms (e.g., Tollbit, Cloudflare, ProRata) as experimental models for creators to monetize interactions with AI.
- The Financial Times frames the problem as a political and moral crisis, urging governments and regulators to enforce accountability, transparency, and fairness from AI platforms—especially to protect journalism and democratic discourse.
- Business Insider raises the more philosophical and archival issue of the web’s transformation: the internet as a record of human knowledge is being contaminated by synthetic content, possibly leading to a “data collapse” where future AI only feeds on its own outputs.
Points of Convergence
- AI scraping content without consent is an urgent issue.
- Compensation models and/or regulation are needed to ensure human creators are not driven out of existence.
- AI firms are benefiting disproportionately from a content ecosystem they did not create but now dominate.
- Without systemic change, the result could be a less diverse, less trustworthy, and less useful web.
Conclusion
These three articles complement each other to paint a comprehensive picture: the web is undergoing a structural transformation due to generative AI. While The Economist focuses on the economic fallout, the FT highlights the moral and democratic implications, and Business Insider warns of a technological degradation of data integrity. All converge on the need for urgent innovation, regulation, and collaboration to save the web from being hollowed out by the very AI tools that depend on it.
The synthesis:
AI Is Reshaping the Internet’s Core Structure
The rise of AI tools that deliver synthesized answers—such as ChatGPT, Perplexity, and Google’s AI Overviews—is fundamentally changing how users engage with information online. Rather than directing users to source websites, these systems increasingly serve as gatekeepers, offering final answers without requiring users to click through. This shift undermines the basic economic model that has sustained the open web for decades: free content in exchange for ad-supported traffic.
The Consequences: Economic, Epistemic, and Existential
This disruption is multi-dimensional:
- Economic: As AI displaces traditional search engines, web traffic is collapsing, especially for knowledge-intensive sites—reference, health, education, science. As creators lose visibility and ad revenue, they are scaling back or shutting down. Licensing deals with major AI firms (e.g., Reddit, News Corp) offer some relief, but smaller sites are left behind, unable to negotiate or monetize their content access.
- Epistemic: The Financial Times emphasizes that this process erodes democratic discourse and journalism, turning content creation into a one-sided extraction game that benefits tech platforms but hollows out the knowledge economy. Without reliable revenue, trusted institutions may fade, replaced by opaque AI outputs lacking attribution, transparency, or accountability.
- Existential: Business Insider introduces the notion of “data collapse,” where AIs are increasingly trained on AI-generated content, creating a feedback loop of synthetic information. The original, human-authored internet is being polluted—its provenance unclear, its trustworthiness diminished. Some technologists now treat the internet as a cultural artifact that must be preserved before it’s overwritten by machine-made content.
Emerging Responses, But Fragmented
Solutions are emerging, but are still nascent, fragmented, and underpowered:
- Technical fixes: New tools like bot paywalls (Tollbit), bot access control (Cloudflare), and revenue-sharing systems (ProRata) allow websites to charge or regulate AI access. But adoption is low, and many creators fear that blocking AI simply ensures invisibility.
- Legal and regulatory reform: The Financial Times argues that governments must intervene, creating frameworks that enforce attribution, mandate compensation, and support collective bargaining by content creators. Yet courts have largely sided with AI firms, reinforcing the doctrine of “fair use” in scraping content.
- Cultural and archival efforts: Business Insider highlights efforts to preserve a “clean” internet—archiving web content untouched by AI, and proposing labeling systems to help users distinguish human-generated versus AI-synthesized content.
A Crossroads for the Web
Together, the three articles suggest that we are at a critical juncture. The web is moving from an open, human-centric ecosystem to a closed loop of machine-to-machine content creation and consumption. If unchecked, this transformation could undermine the internet’s diversity, authenticity, and value as a public good.
Yet it is not inevitable. A combination of technical innovation, legal reform, and cultural awareness could help build a new web economy—one where AI works with, not against, human creators. To get there, platforms must start sharing the value they extract, regulators must recognize the stakes, and society must decide what kind of digital future it wants to preserve.
In short: AI is not just changing the web; it’s threatening to consume it. Whether the internet remains a vibrant space of human knowledge or becomes a flattened mirror of machine outputs will depend on choices we make now—about economics, policy, and ethics.
So, aside from making paywalls essentially irrelevant—making monetizing web content still harder—it makes creating (generating?) content easier than ever before.
Producing quality, original content is hard. The Economist, FT, New York Times, Washington Post, or another elite outlet may well pay seasoned reporters six-figure salaries and provide extensive budgets for research and travel. For years, aggregation sites have paid staffers peanuts to crank out stories, often under more clickbait headlines, that simply regurgitate the reporting of others. The more scrupulous ones acknowledge their sources, but most just pretend that they’re the ones who got the quotes.
But generative AI—and I’m just using the free version of ChatGPT and haven’t spent a lot of time playing with it—can now do that in seconds. One suspects we’ll get to the point where even the prestige outlets rely on it more heavily. Heck, I’m already seeing that happening in academia—and not just with the students.
The nature of my academic training is such that I prefer to at least feed in source materials that I trust. But most will simply use ChatGPT and competitors the way most of us have used Google over the years. We’ve already seen the erosion of the distinction that was drilled into me as far back as high school between primary, secondary, and tertiary sources and between authoritative and non-authoritative ones. That will inevitably escalate rapidly in this new environment.
And, of course, if people can’t make a living creating quality content, they won’t create it.
Very interesting. Thanks.
I refuse to buy from Amazon or EBay, instead searching out source vendors and manufacturers.
I guess it’s important to apply that tactic to searches as well.
At Amazon KDP, many ‘authors’ are trying to get rich with AI created books. Discerning readers aren’t fooled by AI written books but this garbage makes it more difficult for us authors who write original content.
There’s lots of AI-created content at Youtube. Much of it is horribly wrong or, as my wife says, ‘an empty can is noisy’: an over-15-minute video about some minor sports decision.
AI is ruining YouTube, where my screen time has been cut by probably 40%. It’s Dumpster diving now, digging through crap hoping to find an un-eaten donut. I wish they had a filter to weed it out, but their useless algorithm keeps shoveling more garbage at me. I now avoid anything that doesn’t show me an actual human creator in the thumbnail.
As for search, Google had already fucked that up and now it’s almost useless. It used to be kind of fun and now it’s just. . . bleh.
Someone needs to offer an AI-free search engine, and we need AI filters because I have yet to come across a single piece of AI-generated content that isn’t just shit.
Next up, an AI that consumes all the AI generated content so you don’t have to.
Douglas Adams saw the need for this decades ago.
I don’t use AI. I’ve blocked the AI search results in Chrome. I never use ChatGPT or similar tools. I don’t dork around with generative AI for any reason.
And my life is a happy, happy place. I don’t feel as though my days are diminished by one electron.
There are great uses for AI, particularly in the sciences. But the use cases that tech companies are trying to ram down our throats? Useless garbage that nobody asked for.
@Kingdaddy:
How did you block AI search?
NM got it.
No I don’t got it, it came right back.
Hello from Norway!
Google has gotten so bad at basic searches for links and info that I’m turning more and more to ChatGPT.
Like any tool, it has to be used correctly, it’s more useful for some things and bad at others, and is not a substitute for your own reasoning.
Personally, I’d rather deal with an LLM than stupid SEO optimized clickbait.
@Andy:
AI search is like being in a class where the teacher just tells you all the answers. I don’t want to be told, I want to learn.
@Michael Reynolds:
If you ask it to give you an answer to something, then yes, you’re asking it to be the teacher and give you an answer. But you don’t have to ask it that. And you shouldn’t trust it any more than you trust Google search, even when it was better.
If you have a PC, you can uninstall Copilot. If you ask ChatGPT, it will give detailed instructions 😉
FWIW, I’m getting positive results on Google by adding “-noai” to the search query thus:
How many ai can chat on the head of a pin -noai
The AI overview doesn’t appear when I do this.
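For anyone who would rather script that trick than retype it, here is a minimal sketch that just appends “-noai” to a query and builds a search URL. Note the hedge: suppressing the AI Overview this way is the behavior observed in the comment above, not a documented Google feature, and it could stop working at any time.

```python
from urllib.parse import urlencode

def no_ai_search_url(query):
    """Build a Google search URL with "-noai" appended to the query.
    Appending a negative term that matches nothing is the reported
    workaround for hiding the AI Overview; it is an observation,
    not a documented feature, and may break without notice."""
    return "https://www.google.com/search?" + urlencode({"q": query + " -noai"})

# Example: open this URL in a browser (or pass it to webbrowser.open).
url = no_ai_search_url("how to fix a running toilet")
```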
It would work better if you had an AI that remembers you want your search without any AI input or summaries 😀
@Andy:
Unless it’s a math question I don’t want just an answer even when I think I just want an answer. I learn a lot by stumbling upon things I didn’t know I wanted.
@Kathy:
Oh, cool. Thanks.
@Kingdaddy: I have not yet used it myself (save passively via Google–I need to turn that off because I do not find the summaries all that useful). (Although I do use Grammarly as an editing tool, which counts, I think).
I have a friend who uses it to draft certain kinds of correspondence using materials he provides. I also know of at least two instances of people trying to use it to do their jobs, which is bad because in both cases, the persons using the AI tools don’t know enough about their jobs to know if they are getting crap or not.
I can see using it for summarization and the like and other things that would save time.
I think that the tools are coming, and at some point, it will be necessary to learn to use them.
@Michael Reynolds:
If you want serendipity then an LLM probably can’t give you that. But between the algorithm and SEO optimization, Google sucks at it, at least in my experience.
You mentioned Youtube earlier. I’ve also found it’s become a lot worse at judging the kinds of things I’d be interested in. Is that AI or the algorithm just sucking? I’m not sure what’s going on there, except it’s also a Google property and seems tied to the general enshittification of Google.
Same with Spotify for music.
@Kathy:
Enter, Digital Prion Disease, aka “Model Collapse”.
What I would really love is for a couple of these billionaire fartsniffers to turn their wealth management over to their favorite AI and have that AI hallucinate and accidentally wipe them out.
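The “model collapse” dynamic mentioned above is easy to see in a toy experiment: fit a simple model to some data, generate synthetic data from the fit, refit on only the synthetic data, and repeat. This is a deliberately crude sketch (a single Gaussian standing in for an LLM), not how production models are trained, but it shows the characteristic loss of diversity across generations.

```python
import random
import statistics

def resample_generations(data, generations, seed=0):
    """Toy illustration of model collapse: fit a Gaussian to the data,
    sample a new dataset from the fit, refit, and repeat. Because each
    generation trains only on the previous generation's synthetic output,
    the fitted spread tends to drift toward zero over time."""
    rng = random.Random(seed)
    mu, sigma = statistics.mean(data), statistics.stdev(data)
    spread_history = [sigma]
    for _ in range(generations):
        synthetic = [rng.gauss(mu, sigma) for _ in range(len(data))]
        mu, sigma = statistics.mean(synthetic), statistics.stdev(synthetic)
        spread_history.append(sigma)
    return spread_history

# Start from a wide "human-written" distribution and recycle it 2000 times;
# the recorded spread shrinks markedly by the final generation.
history = resample_generations([float(i) for i in range(100)], 2000)
```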
@Michael Reynolds:
Not all the crap on youtube is AI created either. Today an airplane pilot came out with a less-than-10-minute video with three glaring mistakes. The content provider doubled down on one of his mistakes, making him look more unprofessional.
@Andy:
For Youtube, I think it might be a both situation. I basically only use YouTube to watch recorded live DJ sets. More specifically, basically only House or Techno. YouTube used to be (as late as a year ago) really good about suggesting sets or DJs that I hadn’t seen or heard of. It also used to be very straightforward for finding specific DJs or festival sets. Now, even if I know exactly who I’m looking for, it’s buried. Especially on my phone.
Couple of days ago I was looking for a group, “deBasement”. Took me two tries to get a search that turned them up. Ok, it’s a weird name, sorta, but even the search that turned them up put them near the bottom under a TON of unrelated crap.
Also, on Spotify, anyone notice that even if you turn the “ai” shuffle off, every once in a while, Spotify gets a stroke and just starts playing songs that aren’t even on the playlist?
As someone who is concerned that AI will cause major issues in our society, not necessarily from a Terminator perspective but from a major upheaval in jobs, etc., I do have to admit that it can be a useful tool. I develop software for a living, and the joke years ago in this industry was that everyone just googles how to do something that one guy figured out. There was still a person in the loop typing or pasting what were snippets, at least. These days, there are serious pushes to integrate AI at higher and higher levels of the design phase. You can apparently now enter an english-language request for something, and it will generate a pretty reasonable first cut at a working program. If it’s not quite right, you can even iterate until it is. Scary, considering how new this all is and how fast it’s moving.
@Steven L. Taylor:
I have spent my professional life avoiding and ignoring editorial input, I’m not letting AI get involved.
The trouble with most current “AI” is that it’s based on scraping the internet.
And as is well known, the internet is full of crap.
Saw one of our students whose assignment on international law in WW2 nearly ended up on track for failure after they discovered references to US trade with Nazi Germany during WW2.
The problem being that if you dig a little deeper, the two main references were to secondary sources with glaring alarm signals: one author was obsessed with Errol Flynn being a Nazi agent, another with Wall Street having financed the Bolsheviks.
And when either referred to primary sources, a little more digging soon showed that they were often being used in ways that were, at minimum, misleading by omission.
Another example essay plan, on the use of railway land grants to fund colleges, iirc, referred to the work of a certain “Dr Buzzkill”. lol
Apparently, most “AI” models use human filtration workers to spot and remove the more obviously obscene, insane, fictional or fanatical junk.
But these were likely not obviously so to a poorly paid and non-expert reviewer.
Also, the current main “AI” seems largely incapable of generating internalised rule-sets based on what is “known” or even “specified”.
Thus the relative ease with which rules on generating “inappropriate” content can be subverted.
My little game of generating a story about the radioactive werewolves of Chernobyl, for instance. 🙂
The point of which is, it would require a rather stupid human being not to realise that somebody was playing Bugs Bunny games with the rules.
Or the evidenced incompetence of most at playing chess, compared either to an actual rules-based chess program, or to an average human chess player.
I am hardly an expert in any of this; but quite a number of people who are, in various related fields, and who are not employed by AI companies, seem to think that the current “word associative based” models are going to be a dead-end.
Highly useful within their limits, but prone to falling flat on their faces in edge-cases.
Speaking of AI in Google search…
I shared a story yesterday in the Signal chat. Not a plug. But it concerns someone else’s personal life as much as mine. I didn’t share any identifiable details, but I didn’t think it was a good idea to put it on the open internet with the amount of detail I provided.
Anyway, I had to figure out how to call EMS for someone in a different state. The AI summary led me down the wrong path.
It may have worked if a couple other variables were different. But had the situation been more time sensitive, it may have caused a bad outcome.
I remained calm, and thought through the options to try. But I imagine that someone who panicked may not have been able to figure it out quickly enough.
Ultimately, I solved the problem with traditional searches. If I ever find myself in that situation again, I would go that route first.
@Michael Reynolds: What I do with YouTube is that I have, and seek out, a set of content providers that add value with an authentic voice, and pay the most attention to them. I think you crave more variety than I do, so this approach probably works less well for you.
On a new topic, yes, I will have to wade through AI-generated garbage, which has visuals with amusing misspellings. And voiceovers that don’t take that long for me to understand are text to speech, and maybe also ai-written. Throw those out, what’s left? Some good stuff.
@Beth:
I’ve had similar issues searching on YT.
On a whim and not knowing any modern House groups, I asked chatgpt:
and got this with links:
I don’t know this group at all, I’m curious if that is an accurate answer?
Re AI being great, terrible, meh… so much depends on what you ask/expect it to do. And who the “you” is that does the asking/expecting.
Here’s a recent paper published in Philosophy & Technology that some might find interesting.
ChatGPT is Bullshit (Partly) Because People are Bullshitters
@Beth:
There’s something like poetic justice in LLMs choking on their hallucinations.
These companies could pay to use books, TV, movies, plays, and have people allow them to record conversations in all kinds of settings (work, school, leisure, etc.). But, alas, that would cost them money.
Alternately, with so much garbage out there, people who can afford to will be more likely to pay for known good content, just to save themselves the hassle of sifting through crap. “Known” being a key factor.
This hasn’t worked out for news in the past, as free sources of misinformation have flourished, but that’s hand crafted, dynamic misinformation which has been carefully tailored to both activate fears and tell an overarching narrative that keeps you coming back to find out what’s new with the fight against the globalist pedophile ring (arrests coming soon!).
Instructions on how to fix your toilet, on the other hand, kind of need to be right.
And long-form content (like news, there’s always a new episode of news) lives or dies by a parasocial relationship with the characters (Alex Jones, Charlie Kirk, Chris Hayes…). The LLMs just can’t meet that need.
I expect product managers at YouTube are trying to figure out how to market a YouTube subscription that explicitly banishes AI content without saying “we will deliberately make our product worse for everyone who doesn’t pay.” (They are struggling against ad blockers, and desperately want some cudgel to beat people who aren’t paying customers).
If you destroy the commons, you create an opportunity to sell a replacement to those who can afford it.
@Gustopher:
Very interesting idea. A YouTube for people who don’t want crap. I would gladly pay for a better, more curated YouTube where I can have my people – Beato, Conan, Josh Johnson, Ryan George, various history guys, various science-y guys – without the crap.
@Steven L. Taylor:
In the legal community, I’ve personally heard/read about dozens of lawyers who’ve submitted pleadings with bogus cites and case law provided by AI.
The consequences are entertaining for the outside viewers.
ETA, yes, I am THAT luddite of a Luddite
@Michael Reynolds:
You can search thematically, or at random, to amass channels you find useful and can subscribe to. Then, on the website’s left-hand side, click on “Subscriptions.” Then all your feed will be stuff you’re interested in.
TL;DR: you have to curate it yourself.