AI crawler wars threaten to make the web more closed for everyone

These measures still offer immediate protection. After all, AI companies can’t use what they can’t obtain, regardless of how courts rule on copyright and fair use. But the effect is that large web publishers, forums, and sites are often raising the drawbridge to all crawlers, even those that pose no threat. This is true even after they ink lucrative deals with AI companies that want to preserve exclusivity over that data. Ultimately, the web is being subdivided into territories where fewer crawlers are welcome.

How we stand to lose out

As this cat-and-mouse game accelerates, big players tend to outlast little ones. Large websites and publishers will defend their content in court or negotiate contracts. And massive tech companies can afford to license large data sets or create powerful crawlers to circumvent restrictions. But small creators, such as visual artists, YouTube educators, or bloggers, may feel they have only two options: hide their content behind logins and paywalls, or take it offline entirely. For real users, this makes it harder to access news articles, see content from their favorite creators, and navigate the web without hitting logins, subscription demands, and CAPTCHAs at every step of the way.

Perhaps more concerning is the way large, exclusive contracts with AI companies are subdividing the web. Each deal raises the website’s incentive to remain exclusive and block anyone else from accessing the data—competitor or not. This will likely lead to further concentration of power in the hands of fewer AI developers and data publishers. A future where only large companies can license or crawl critical web data would suppress competition and fail to serve real users or many of the copyright holders.

Put simply, following this path will shrink the biodiversity of the web. Crawlers from academic researchers, journalists, and non-AI applications may increasingly be denied open access. Unless we can nurture an ecosystem with different rules for different data uses, we may end up with strict borders across the web, exacting a price on openness and transparency. 

While this path is not easily avoided, defenders of the open internet can insist on laws, policies, and technical infrastructure that explicitly protect noncompeting uses of web data from exclusive contracts while still protecting data creators and publishers. These rights are not at odds. We have so much to lose or gain from the fight to get data access right across the internet. As websites look for ways to adapt, we mustn’t sacrifice the open web on the altar of commercial AI.

Shayne Longpre is a PhD candidate at MIT, where his research focuses on the intersection of AI and policy. He leads the Data Provenance Initiative.

