The quiet shutdown of Google Search results pages with 100 listings didn’t go unnoticed. Even though Google neither announced nor commented on ending support for the &num=100 parameter, the change was spotted almost immediately, because an enormous number of SEO tools, parsers, and specialists of all kinds interact with the search engine.
So, how do we live and work with this now? Can you still increase the number of search results in Google to 100, and if so, how? We’ll take a detailed look at the issue below.
Why the Disappearance of &num=100 Matters
Many SEO specialists dubbed the removal of the &num=100 parameter in Google search results a “Googlopocalypse.” The point is that now, instead of a top-100 result set, the search engine always shows no more than 10 links. This can’t help but impact specialized SEO services and parsers that used to collect data from the SERP at scale.
Data collection now becomes significantly more complicated and slower – literally by a factor of ten. Previously, you could get all 100 results in a single request (on one page). Now you have to send 10 consecutive requests to Google and parse 10 pages to get the same 100 results.
In fact, other values of the parameter don’t work either – num=30, num=50, and the rest. Google SERP scraping is now only possible in batches of 10 results.
Specialized services that offered API interfaces for SERP scraping announced price increases almost immediately, which is hardly surprising and entirely logical.
What Exactly Changed and When: A Brief Timeline

Starting around September 10-11, 2025, quiet A/B testing began. In some browsers and locations, the Google Search num=100 operator still worked, while in others it didn’t. Among the more or less authoritative sources, we found only a post on X by a user named SEOwner, who noticed the issue almost immediately and initiated the discussion. This was their moment of fame 😉
On September 14, the change affected all search users across all regions and all languages. Most likely, the test results satisfied both the development team and the company leadership. Authoritative news outlets and specialized SEO resources wrote about that soon after.
The problem is truly significant and, in many ways, resembles an apocalypse, at least for everyone who works heavily with SERP scraping.
It’s worth noting that the &num=100 parameter was never officially documented anywhere by the search engine, but it was impossible to miss. The reason is simple: after you enter a query in the search box, all related operators appear inside the URL of the results page – you can see them there and even adjust them manually (if you want).
It’s also important to note that some large websites observed a significant drop in traffic in their webmaster consoles after this change. The direct connection is obvious: if a site isn’t in the top 10, practically nobody reaches it when browsing the results. And SEO scrapers definitely accounted for a significant share of searches and clicks.
You can understand Google, too:
- Most users don’t scroll below the top 5, or even the top 7. Nobody likes long lists. And it takes a lot of time to review large amounts of information from all the suggested sites and pages.
- On mobile devices – which account for the lion’s share of traffic – reviewing the top 100 is extremely difficult and inconvenient.
- The fewer results there are on a page, the more attention goes to additional SERP elements: ads, related services, widgets, and so on. And Google makes money primarily from ads.
- And right now, Google is actively shifting focus toward ready-made answers and its AI assistant. Accordingly, &num=100 impedes that process.
So, Google search with 100 results is now history…
Why &num=100 Was So Convenient
- Time savings. If a user added the num=100 parameter to the end of the URL, even manual SERP review became much faster and easier. They only had to do it once, and then they could browse a large number of search results without extra clicks or page changes.
- A major speed boost for parsing. Automated collection of Google Search’s top 100 results required just one request to the search service. That meant minimal server/PC resources and very little time needed to gather the results.
- A more complete and comprehensive view of Google Search. One hundred results covered virtually every need: competitor research, tracking your own rankings, parsing keyword occurrences, and more.
- It was genuinely simple and practical. Many SEO specialists, SMM managers, and other professionals regularly used &num=100.
How This Affected Typical Parsing Scenarios
SERP scraping has become more complicated and far less straightforward than it used to be. Before, you could follow a clear, linear progression from num=1 to num=100, there was a dedicated “Results per page” selector, and the output behaved predictably. Now, the number of results shown can vary depending on the type of login/session and the query itself. A large part of the SERP is taken up by different modules such as People Also Ask, videos, maps, AI overviews, and more. Many of these load dynamically as you interact with them. As a result, rankings have become “fluid,” and positions can shift.
Higher load on scrapers, and on Google Search itself. To collect the top 100 results in Google Search now, you need to query it 10 times instead of once. It’s no surprise that CAPTCHA prompts and the costs of solving/bypassing them have increased significantly. Read also: How to bypass CAPTCHA – a list of reliable tools.
Gathering rankings for competitors and your own sites now takes more time.
Previously, the top 100 was essentially “flat” – you could see all domains in one unified list. Now you have to additionally check whether the results and target sites repeat from page to page. You also have to keep in mind that some competitors may appear in organic results, while others show up in extra SERP blocks: ads, videos, maps, AI search elements, People Also Ask, and so on.
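Since results can now repeat from page to page, it makes sense to de-duplicate as you merge the per-page batches. Below is a minimal sketch of that step; the per-result dictionary shape (`title`, `url`) is an assumption for illustration, not Google's format:

```python
# De-duplicate organic results collected page by page,
# keeping the first occurrence of each URL.
from urllib.parse import urlsplit

def dedupe_results(pages):
    """Merge per-page result lists into one list without URL duplicates."""
    seen = set()
    merged = []
    for page in pages:
        for result in page:
            # Normalize the URL so http/https and trailing-slash
            # variants of the same page count as one entry
            parts = urlsplit(result["url"])
            key = (parts.netloc.lower(), parts.path.rstrip("/"))
            if key in seen:
                continue
            seen.add(key)
            merged.append(result)
    return merged
```

The same normalization idea (comparing by host + path rather than by the raw string) also helps when the SERP mixes organic links with links from extra blocks.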
The long “tail” is effectively hidden. SERP scraping has become more expensive at the unit-cost level, and this couldn’t help but affect subscription prices for specialized services.
Read also: How to Scrape Google People Also Ask.
How Scripts Work Now Without &num=100

Many parser developers asked the same question: how can you get 100 Google search results now? We’re no exception. Below are the options that remain for collecting a top-100 set without the &num=100 parameter.
One thing is certain: there is no direct replacement for &num=100 anymore. But that’s not a reason to give up. Google SERP parsing is still possible, it’s just more complex now.
Pagination in Batches of 10 Using the &start= Parameter
If you click through the pagination in Google’s search results, you’ll notice that the URL contains a special parameter: &start=10, &start=20, &start=30, and so on.
In other words, when you go to the next page, Google returns the next batch of 10 results. Rather than “remembering” where it stopped, the search engine simply specifies the starting position relative to the first result (which is effectively zero-based, like array indexing).
Technically, you can request results starting from almost any number, for example, &start=99.
In your parsing scripts, then, you can build a loop that increments the start value in steps of 10. After 10 iterations, you’ll have URLs for the desired top 100 positions.
Here’s what it may look like in Python parser code:
```python
query = "your search query"  # Replace with your value
max_pages = 10  # how many pages to collect, each with 10 results

print("List of resulting Google URLs:\n")

for page in range(max_pages):
    start = page * 10
    params = f"q={query}&start={start}&hl=ru"  # hl specifies the language
    url = f"https://www.google.com/search?{params}"
    # Your parser logic would go here.
    # For now, we just print the resulting URLs to the console.
    print(f"Page {page+1:2d}: {url}")
```
This is the simplest and most logical approach, at least as long as Google continues to support the &start= parameter, and it doesn’t get removed the way &num=100 did.
Clicks on Pagination Pages (When Automating in Headless Browsers)
A bit further down, we’ll talk about the challenges of Google SERP scraping. Here we’ll only mention that the number of results on a single page can be less than 10. For example, if the user isn’t logged in, Google may return 5 items per page. On top of that, the page markup has become much more complex, and the pages load a large amount of JavaScript.
So, to make a parser behave as closely as possible to a real user, it’s advisable to use anti-detect browsers or headless browsers. In that case, you can literally “click” the required elements on the page. At the same time, it also makes sense to “humanize” cursor movement, scrolling patterns, natural delays, and so on. Common libraries for connecting headless browsers include Playwright, Puppeteer, Selenium, Nodriver, etc.
Example implementation that clicks links in the pagination block:
```python
import random
import time

from playwright.sync_api import sync_playwright


def human_sleep(a=0.6, b=1.8):
    """Random delay"""
    time.sleep(random.uniform(a, b))


def human_scroll(page):
    """Natural page scrolling"""
    scroll_height = page.evaluate("document.body.scrollHeight")
    current = 0
    while current < scroll_height:
        step = random.randint(200, 500)
        current += step
        page.mouse.wheel(0, step)
        human_sleep(0.2, 0.6)


def extract_results(page):
    """
    Extract organic results from the page.
    Return a list of dictionaries.
    """
    results = []
    items = page.locator("div#search div.g")
    # You can replace this container selector with your own,
    # if the current one becomes outdated.
    count = items.count()
    for i in range(count):
        item = items.nth(i)
        link = item.locator("a").first
        title = item.locator("h3").first
        if link.count() == 0 or title.count() == 0:
            continue
        results.append({
            "title": title.inner_text(),       # Extract the title here
            "url": link.get_attribute("href")  # Extract the URL
        })
    return results


def click_next_page(page):
    """
    A real click on the "Next" pagination button.
    Returns False if the button doesn't exist.
    """
    next_button = page.locator("a#pnnext")  # This selector may change; update if needed
    if next_button.count() == 0:
        return False
    # Small pause before clicking
    human_sleep(1.0, 2.5)
    next_button.click()
    return True


def parse_google_top_100(query):
    collected = []
    with sync_playwright() as p:
        browser = p.chromium.launch(
            headless=True,
            args=["--disable-blink-features=AutomationControlled"]
        )
        context = browser.new_context(
            viewport={"width": 1366, "height": 768},
            user_agent=(
                "Mozilla/5.0 (Windows NT 10.0; Win64; x64) "
                "AppleWebKit/537.36 (KHTML, like Gecko) "
                "Chrome/143.0.0.0 Safari/537.36"
            )
        )
        page = context.new_page()
        page.goto(
            "https://www.google.com/search?q=" + query.replace(" ", "+"),
            wait_until="domcontentloaded"
        )
        human_sleep(2, 4)
        page_number = 1
        while len(collected) < 100:
            print(f"Parsing page {page_number}")
            human_scroll(page)
            results = extract_results(page)
            for r in results:
                if len(collected) >= 100:
                    break
                collected.append(r)
            print(f"Total found: {len(collected)}")
            if not click_next_page(page):
                print("Pagination is over")
                break
            page_number += 1
            page.wait_for_load_state("domcontentloaded")
            human_sleep(2, 4)
        browser.close()
    return collected


if __name__ == "__main__":
    data = parse_google_top_100("example search query")
    print("\nFinal list:")
    for i, item in enumerate(data, start=1):
        print(f"{i}. {item['title']} — {item['url']}")
```
This script uses a Python + Playwright setup. Chromium is launched as a headless browser. The parser assumes that there may be fewer than 10 results per page, so it explicitly counts results and only stops once it reaches 100. Page transitions are implemented via a real click on the pagination link. “Humanized” delays and scrolling are included as well.
Don’t forget to install the required libraries and browser binaries:
```shell
pip install playwright
playwright install chromium
```
Also, make sure you run through proxies (ideally residential or mobile ones) with rotation, if possible, either on a short time interval or on every request.
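If you manage your own proxy pool, per-request rotation can be as simple as cycling through the endpoints. Here’s a minimal sketch; the proxy addresses and credentials are placeholders, and the resulting dictionary is in the shape Playwright’s `launch(proxy=...)` argument expects:

```python
# Round-robin proxy rotation for the Playwright script above.
# The endpoints below are placeholders - substitute your own pool.
import itertools

PROXY_POOL = [
    {"server": "http://proxy1.example.com:8000", "username": "user", "password": "pass"},
    {"server": "http://proxy2.example.com:8000", "username": "user", "password": "pass"},
]
_cycle = itertools.cycle(PROXY_POOL)

def next_proxy():
    """Return the next proxy config from the pool, wrapping around at the end."""
    return next(_cycle)

# Usage inside the Playwright script (sketch):
# browser = p.chromium.launch(headless=True, proxy=next_proxy())
```

For rotation on every request rather than every browser launch, call `next_proxy()` each time you create a new browser or context.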
Custom Search JSON API
This was Google’s official API interface for developers building solutions based on programmable custom search. Access to this API is no longer available for new integrations. All current clients must migrate to the Vertex AI Search API by January 1, 2027.
As the name suggests, the Custom Search JSON API returns search results in a structured format, as a JSON response that follows the OpenSearch 1.1 specification.
There’s no point in providing a script example, because Google Custom Search is only available to those who connected to it earlier.
Link to the official developer documentation.
Access to the API is paid. Only the first 100 requests per day are free.
Alternative Services that Provide a Similar SERP-Scraping API for Google

There are plenty of providers on the market, but it’s important to remember that these services are middlemen. They don’t guarantee 100% relevant results. Technically, they are either pre-built databases that were parsed in advance or systems that scrape Google for your specific query.
Each service has its own API syntax, limits, technical capabilities, and pricing. Examples include SerpAPI, DataForSEO, Zenserp, and others.
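As a purely illustrative sketch, a call to such a provider usually boils down to building a request URL with your API key, query, and desired result count. Every name below (the endpoint, the parameter names) is hypothetical – check your provider’s documentation for the real syntax:

```python
# Hypothetical SERP-API request builder. The endpoint and parameter
# names are assumptions for illustration, not a real provider's API.
from urllib.parse import urlencode

def build_serp_request(api_key, query, num=100):
    """Build the request URL for a (hypothetical) SERP API provider."""
    base = "https://api.serp-provider.example/search"
    params = {"api_key": api_key, "q": query, "num": num, "engine": "google"}
    return f"{base}?{urlencode(params)}"
```

The response is typically JSON with organic results, ads, and other SERP blocks already separated, so you pay the provider instead of maintaining your own headless-browser and proxy setup.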
These providers were hit the hardest once Google removed support for num=100.
Conclusion and Recommendations
Google SERP scraping has become more difficult, and it’s not only because &num=100 was removed. Today, Google Search is essentially a full-fledged web application with a large amount of JavaScript. As a result, accessing the final HTML requires special solutions and approaches, such as headless browsers, realistic browser fingerprints and user profiles, as well as high-quality proxies to bypass CAPTCHAs and temporary blocks.
That’s exactly what our service provides on the proxy side. Froxy offers over 10 million residential and mobile IPs with automatic rotation. Connecting to parsers takes literally one line of code, and all other management features are handled through a convenient dashboard or via API.

