The DMCA Was Built to Stop DVD Piracy. Google Wants to Use It Against Scrapers
How a 12-page complaint is trying to turn every CAPTCHA into a federal copyright perimeter
On December 19, 2025, Google filed a lawsuit against SerpApi in the Northern District of California. The case number is 25-10826, and the complaint is 12 pages long. Twelve pages that could reshape how the entire scraping industry operates.
We are not talking about a cease-and-desist letter or a Terms of Service dispute. Google sent SerpApi no communication of any kind before filing: no cease-and-desist, no attempt to resolve its concerns directly. SerpApi told us this was highly unusual, and that had Google reached out, it might have learned that the claims lack merit.
Google is invoking the Digital Millennium Copyright Act, specifically Section 1201, the anti-circumvention provision. The same statute originally designed to prevent people from cracking DVD encryption is now being pointed at a SERP scraping API.
Before proceeding, let me thank NetNut, the platinum partner of the month. They have prepared a juicy offer for you: up to 1 TB of web unblocker for free.
We reached out to both Google and SerpApi for comment on this case. Google did not respond. SerpApi did, and we will include their statements throughout this article where relevant.
Let us break down what happened, why it matters, and what it could mean for anyone who scrapes the web for a living.
The Facts
Google’s complaint tells a straightforward story. SerpApi, founded in 2017 by Julien Khaleghy, operates a paid API that sends automated queries to Google Search and returns the results as structured JSON. Google estimates that SerpApi sends hundreds of millions of artificial search requests per day, and that this volume has increased by as much as 25,000% over the past two years.
In January 2025, Google deployed a technological protection measure called SearchGuard. SearchGuard works by sending JavaScript challenges to incoming search queries. For regular browser users, the challenge is invisible: the browser runs the JavaScript, sends back the expected response, and the search results load normally. For automated systems, the challenge is a wall. Bots that cannot execute JavaScript or that fail behavioral checks get blocked.
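SearchGuard's internals are not public, but the complaint's description matches a familiar pattern: the server issues a nonce plus a small script, a real browser executes the script and returns a derived token, and access is granted only if the token checks out. Here is a minimal sketch of that generic pattern; every name and the challenge function itself are hypothetical illustrations, not Google's actual implementation:

```python
import hashlib
import secrets

# Hypothetical sketch of a JavaScript-challenge gate, NOT SearchGuard itself.
# The server sends a nonce and a script; a real browser runs the script and
# returns the derived token with its next request.

def issue_challenge() -> dict:
    """Server side: generate a fresh nonce the client must transform."""
    nonce = secrets.token_hex(16)
    return {"nonce": nonce, "script": "sha256(nonce + user_agent)"}

def solve_challenge(nonce: str, user_agent: str) -> str:
    """Client side: what the served JavaScript would compute in a browser."""
    return hashlib.sha256((nonce + user_agent).encode()).hexdigest()

def verify(nonce: str, user_agent: str, token: str) -> bool:
    """Server side: grant access only if the returned token matches."""
    return secrets.compare_digest(token, solve_challenge(nonce, user_agent))

challenge = issue_challenge()
token = solve_challenge(challenge["nonce"], "Mozilla/5.0")
assert verify(challenge["nonce"], "Mozilla/5.0", token)    # real browser passes
assert not verify(challenge["nonce"], "curl/8.0", token)   # mismatched client fails
```

The sketch also shows why the legal fight is interesting: any client sophisticated enough to execute the script passes the gate too, which is exactly the behavior the complaint labels circumvention.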
According to Google’s complaint, SerpApi’s response to SearchGuard was to build circumvention mechanisms. The complaint alleges that SerpApi creates “fake browsers using a multitude of IP addresses that Google sees as normal users,” misrepresents device and location information when solving challenges, and syndicates authorization tokens from legitimate requests to unauthorized machines around the world. Google also alleges that SerpApi uses automated means to bypass CAPTCHAs that SearchGuard deploys as a secondary verification layer. SerpApi disputes these factual allegations.
The complaint cites SerpApi’s own blog posts, where the company reportedly described SearchGuard as making “web scraping more difficult” but claimed to be “fortunate to be minimally impacted” because its services had “already pre-solved Google’s JavaScript challenge.”
The Legal Theory
This is where it gets interesting for the scraping industry, because Google chose not to sue under the Computer Fraud and Abuse Act (CFAA). That would have been the traditional route. Instead, Google went with the DMCA.
The context matters. The CFAA path has been significantly narrowed by the hiQ Labs v. LinkedIn case. In that landmark decision, the Ninth Circuit held that scraping publicly available data does not violate the CFAA, and warned against allowing companies to create “information monopolies.” The Supreme Court vacated and remanded the case under its Van Buren ruling, but on remand, the Ninth Circuit reaffirmed its original position.
After hiQ, the CFAA is a much weaker weapon against scraping of publicly visible content. Google needed a different legal framework. Section 1201 of the DMCA provides one.
Section 1201 has two relevant provisions. The first, Section 1201(a)(1)(A), prohibits the act of circumventing a technological measure that effectively controls access to a copyrighted work. The second, Section 1201(a)(2), prohibits trafficking in technology designed to circumvent such measures. Google’s complaint invokes both.
The argument chain goes like this: Google’s search results contain copyrighted content, specifically images in Knowledge Panels licensed from third parties, merchant-supplied product images in Google Shopping, and licensed content from Google Maps. SearchGuard is a technological measure that controls access to these search results pages (and therefore to the copyrighted works within them). SerpApi circumvents SearchGuard. Therefore, SerpApi violates Section 1201.
Each act of circumvention carries statutory damages of between $200 and $2,500. Google alleges billions of individual circumventions. Do the math, and the potential damages exceed what SerpApi could ever pay. Google itself notes in the complaint that SerpApi “reportedly earns a few million dollars in annual revenue, but already faces liability that is orders of magnitude higher and growing.”
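The back-of-the-envelope arithmetic makes the asymmetry obvious. Taking one billion acts as a conservative floor for "billions" (the exact count is Google's allegation, not an established figure):

```python
# Statutory damages range per act of circumvention, per the complaint.
PER_ACT_MIN, PER_ACT_MAX = 200, 2_500

# Google alleges "billions" of circumventions; use 1 billion as a floor.
acts = 1_000_000_000

low = acts * PER_ACT_MIN    # $200 billion
high = acts * PER_ACT_MAX   # $2.5 trillion

print(f"${low:,} to ${high:,}")  # prints "$200,000,000,000 to $2,500,000,000,000"
```

Even the low end is five orders of magnitude beyond SerpApi's reported annual revenue, which is the point Google itself concedes in the complaint.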
SerpApi’s Position
When we reached out to SerpApi, they were clear about their stance. On the fundamental legality of what they do, SerpApi told us: “We embrace the term ‘scraping,’ and we practice it legally and transparently. SerpApi accesses publicly visible search results, the same ones available to any browser, and delivers clean, structured JSON back to our customers. We’ve operated this way since 2017, serving developers, researchers, and businesses who need reliable access to public information at scale.”
On the legal boundaries of automated access to search results, their position is equally direct: “The law on this is clear, and we’re prepared to defend that position in court. Scraping is legal, and we stand behind our products and customers. Our API replicates real-time searches with no login, no bypass of any paywall, and no access to anything that isn’t already available to anyone with a browser. U.S. courts have upheld this repeatedly; hiQ Labs v. LinkedIn is a key precedent. The data Google surfaces lives on the open web. Google didn’t create it.”
In February 2026, SerpApi filed a motion to dismiss. Their arguments include the assertion that the DMCA is a copyright protection statute, not a website protection statute, and that Google is improperly trying to use it to control access to public portions of its website. They also argue that mimicking browser behavior to access publicly available pages is not the same as cracking encryption or disabling authentication, and that any ambiguity in the definition of "circumvention" must be given its narrowest reasonable reading, citing the "First Amendment interest in maintaining accessibility of the Internet as an open forum."
SerpApi also pointed out what they see as an absurdity in Google’s theory. If statutory damages were calculated at scale, the total “would exceed U.S. GDP.” Congress, they argue, never intended Section 1201 to be used this way.
On the DMCA claim specifically, SerpApi told us: “The DMCA’s anti-circumvention provision was designed to protect copyrighted works, full stop. Google is not protecting access to copyrighted works. Google is improperly attempting to use the DMCA to limit access to the public portions of its website. We believe that the law is on our side.”
The Hypocrisy Argument
SerpApi is not shy about making this point. In a blog post about the lawsuit, they argue that Google’s case threatens access to public data on the open internet and this resonates widely in the scraping community. As they told us: “Google indexed the web without anyone’s permission. That’s how search works. Now it’s trying to pull up the ladder behind it, prohibiting the practices that it used, and still uses today, to build its business empire. That’s why SerpApi is standing up to Google. Not just to protect our business, but to protect legal competition and open access to public information on the internet.”
Google Search operates by crawling, indexing, and presenting content from billions of websites. Many of those website owners never explicitly consented to being indexed. Google’s position has always been that robots.txt provides the mechanism for opting out, and that the default state of the open web is crawlable. Now Google is arguing that its own search results should be exempt from the same logic.
The irony is not lost on legal commentators either. Above the Law described the case as Google “pulling up the ladder after climbing it.” Eric Goldman’s blog published an extensive guest analysis arguing that Google’s DMCA strategy represents an attempt to relitigate hiQ Labs through a different statutory framework.
Why This Matters Beyond SerpApi
If Google’s legal theory prevails, the implications extend far beyond one API company. The core question is whether deploying an anti-bot system on a publicly accessible website is enough to invoke federal copyright law against anyone who bypasses it.
Think about what that means in practice. Every CAPTCHA, every JavaScript challenge, every behavioral analysis system deployed on a public website could potentially become a “technological protection measure” under Section 1201. Any scraper that solves a CAPTCHA, executes JavaScript to render a page, or rotates IP addresses to avoid detection could be committing a federal offense.
This is not hypothetical. The legal theory applies to any website that hosts copyrighted content (which is almost all of them) and deploys some form of bot detection (which is increasingly all of them).
Eric Goldman’s blog highlighted this exact concern. The guest analysis by Kieran McCarthy warns that accepting Google’s theory would allow any website deploying anti-bot technology to invoke federal law against circumvention, “transforming speed bumps and CAPTCHAs into federally enforceable copyright perimeters.”
The Electronic Frontier Foundation has also weighed in. Staff attorney Tori Noble stated that “the right to scrape publicly available information keeps the Internet free and open,” cautioning that overly broad DMCA interpretations undermine innovation and research.
SerpApi made a similar point when we asked about the impact on consumers: “Scraping-powered services benefit all kinds of consumers who use the web every day. Scraping helps to maintain the free and open flow of information across the internet, ultimately encouraging things like price transparency, competition, and informed decision-making, all to benefit consumers. Expanding the DMCA as Google has suggested would only benefit the largest tech incumbents and hinder transparency and healthy competition.”
The Emerging Legal Pattern
Google’s lawsuit does not exist in isolation. In October 2025, Reddit filed a 41-page complaint against SerpApi, Perplexity AI, Oxylabs, and AWMProxy in the Southern District of New York. The complaint is far more aggressive than Google’s, both in tone and in scope: six legal counts including three separate DMCA claims, unfair competition, unjust enrichment, and civil conspiracy.
Reddit’s framing is vivid. It describes the defendants as “similar to would-be bank robbers, who, knowing they cannot get into the bank vault, break into the armored truck carrying the cash instead.” AWMProxy is characterized as “a former Russian botnet.” Perplexity is compared to “a North Korean hacker.” The language is clearly designed to make scrapers look like criminals.
The underlying theory is similar to Google’s. Reddit has signed licensing deals with both Google and OpenAI to grant them programmatic access to its data. Companies that want Reddit content at scale are expected to pay for it. But when scrapers circumvent SearchGuard to harvest Google’s search results, they also harvest Reddit content without paying a cent. According to data Reddit obtained through a subpoena to Google, the three scraping defendants accessed almost three billion Google SERPs containing Reddit content in just two weeks during July 2025. SerpApi alone accounted for over 1.8 billion of those page accesses. Like Google, Reddit did not send SerpApi any communication before filing suit. SerpApi disputes these figures and the other factual allegations in Reddit’s complaint, and has filed a motion to dismiss in that case as well.
Reddit also produced a piece of evidence that reads like a detective novel. It created a hidden “test post” that could only be crawled by Google’s search engine and was not otherwise accessible anywhere on the internet. Within hours, the contents of that post appeared in Perplexity’s “answer engine.” The only way Perplexity could have obtained that content was through scraping Google’s search results. Reddit calls this technique the equivalent of “marked bills” in a bank robbery investigation.
The Reddit complaint also reveals a detail that connects directly to our industry: after Reddit sent a cease-and-desist letter to Perplexity in May 2024, Perplexity’s citations to Reddit content did not decrease. They increased forty-fold.
And in December 2025, in Ziff Davis v. OpenAI, a federal judge in the Southern District of New York ruled that robots.txt files do not “effectively control access” under Section 1201. Judge Sidney Stein compared robots.txt to a “keep off the grass” sign that “relies on readers to decide to comply rather than enforcing any kind of access control itself.” The ruling is important because it sets a baseline: passive, voluntary measures are not enough to trigger DMCA protection.
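The “keep off the grass” point is easy to demonstrate in code. Python’s standard-library robots.txt parser will tell a crawler what a site asks for, but nothing in the protocol prevents a client from fetching a disallowed URL anyway. A sketch using a made-up robots.txt:

```python
from urllib.robotparser import RobotFileParser

# A hypothetical robots.txt asking all crawlers to stay out of /private/.
robots_txt = """\
User-agent: *
Disallow: /private/
"""

parser = RobotFileParser()
parser.parse(robots_txt.splitlines())

# The parser only *reports* the site's wishes; honoring them is entirely up
# to the client. A misbehaving crawler can skip this check and fetch anyway.
print(parser.can_fetch("MyBot", "https://example.com/private/page"))  # False
print(parser.can_fetch("MyBot", "https://example.com/public/page"))   # True
```

Compliance is voluntary by design, which is precisely why the Ziff Davis court held that robots.txt does not “effectively control access” under Section 1201.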
But SearchGuard is not robots.txt. It is an active system that serves JavaScript challenges, performs behavioral analysis, deploys CAPTCHAs, and makes real-time decisions about whether to grant access. Whether this kind of system meets the “effectively controls access” standard is the open legal question. The answer will likely set the direction for the entire industry.
Legal commentators have identified what they call the “DMCA 1201 scraping strategy”: platforms deploy technological protection measures specifically to create legal standing under Section 1201, then sue when those measures are circumvented. The sequence is intentional. Deploy, document, sue. Whether courts view this as legitimate copyright protection or as strategic rent-seeking will determine the outcome.
There is also a relevant doctrinal debate. The Lexmark case in the Sixth Circuit introduced the “front door/back door” argument: if a house’s front door is unlocked, putting a lock on the back door does not mean the house is “access-controlled.” Applied here: if anyone with a regular browser can access Google Search results, does deploying SearchGuard against automated systems meaningfully “control access” to the copyrighted works within those results?
The AI Angle
There is one more layer worth noting. As Search Engine Land reported, OpenAI used SerpApi to scrape Google Search results for ChatGPT responses on current events, after Google declined to provide direct access to its search index. SerpApi listed OpenAI as a customer on its website as recently as May 2024 before removing the listing. Other reported customers include Meta, Apple, and Perplexity.
This context matters because Google already has a massive structural advantage in the AI race when it comes to fresh web data. Cloudflare CEO Matthew Prince put numbers on it: “For every one page that OpenAI sees, Google is seeing 3.2 pages.” Against Microsoft, the ratio is 4.8 to 1. The reason is simple. Publishers cannot block Googlebot without disappearing from search results. So Google gets access to the web at a scale that no competitor can match, and it can use that data not just for search but also for training and running its AI products.
In this context, suing companies that make it easier for competitors to scrape Google’s search results is not just about protecting copyrighted images in Knowledge Panels. It is also a defense of a competitive advantage. If OpenAI or any other AI company can get structured search data through SerpApi, they partially close the gap that Google’s crawler monopoly creates. Shutting down that channel through litigation serves Google’s position in the AI race, even if the complaint is framed purely in terms of copyright protection.
What Happens Next
The case is still in its early stages. SerpApi filed its motion to dismiss on February 20, 2026. According to the court docket, the initial case management conference before Judge Yvonne Gonzalez Rogers is scheduled for March 30, 2026, and a hearing on the motion to dismiss is set for May 19, 2026.
If the motion to dismiss fails and the case proceeds to discovery and trial, it will force courts to answer questions that have been left open since hiQ. Is a JavaScript challenge a “technological protection measure” under the DMCA? Can anti-bot systems on publicly accessible websites invoke federal anti-circumvention law? Does the DMCA protect the act of accessing a public webpage, or only the copyrighted works behind genuine access controls like encryption and authentication?
For the scraping industry, the stakes are high. A ruling in Google’s favor would give any website with copyrighted content and a bot-detection system a federal cause of action against scrapers. A ruling in SerpApi’s favor would confirm that the DMCA was not designed to protect public webpages from automated access, regardless of the technical measures deployed.
We will follow the case closely. Whatever happens, the days of operating in a legal gray area are coming to an end. The courts will have to draw a line, and that line will define the rules for the next decade of web scraping.
*Disclaimer: We are not lawyers. This article represents our analysis of publicly available court filings and legal commentary. Consult legal counsel for advice specific to your situation.*



This case feels like a dangerous direction for the open web.
If bypassing anti-bot systems on publicly accessible pages becomes a DMCA issue, it would effectively allow large platforms to wrap public information in technical barriers and then enforce them through copyright law. That starts to look less like infrastructure protection and more like control over the marketplace of information, pushing the internet toward digital feudalism where a few dominant platforms decide who is allowed to build on top of publicly visible data.
The internet moves forward through continuous competition. Expanding legal barriers around public data risks strengthening platforms that already have enormous structural advantages.
At Kameleo we see this dynamic every day. We constantly compete against evolving anti-bot systems and support companies building SERP APIs and similar data access services with a reliable stealth browser stack. Several of our customers operate in exactly this space, and we’re glad to help them keep building.