Many website security systems, including Web Application Firewalls (WAFs), block requests from unwanted users based on their IP addresses. So far, no more effective protection method has been found, and most likely none will be unless a total system of physical user verification is introduced.
As long as IP addresses remain in use, proxy servers will remain relevant. They can forward requests to target websites on behalf of the user, effectively replacing the real IP address of the user or scraper with their own.
The larger the volume of data you collect, the more critical proxy rotation becomes, since a single proxy node may not be able to handle a high request load. This article explains how to set up a rotating IP proxy at the right moment under the right conditions, as well as what issues may arise during rotation and how to resolve them.
It should come as no surprise that rotating IP proxies is an essential requirement for any scraper, so let's break down why.
Benefits of IP/proxy rotation:
Technical issues you may encounter when working through rotating proxies:
Each of these problems has solutions or workarounds, but they all require deeper technical expertise as well as extra time, effort, and usually budget.
That’s why proxy services that handle IP rotation, quality monitoring, and uptime management have become so widespread. Many even allow selection by specific city and ISP, offering a full “infrastructure-as-a-service” model. One example of such a provider is Froxy.
The most basic, straightforward approach when working with proxy lists is randomized selection. But there are many caveats: the proxy list may be updated periodically (you'll want to move it to a separate file or database), proxy quality may be inconsistent (you'll need mandatory checks, at least for reachability and ping), proxies may need to be integrated with different libraries (connection methods vary), or special protocols may be required (e.g., SOCKS5 instead of standard HTTP; see the sketch below), and so on.
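If you do need SOCKS5, the requests library supports it through the PySocks extra (pip install requests[socks]). A minimal sketch, with placeholder host, port, and credentials:

import requests

# "socks5h" also resolves DNS through the proxy; plain "socks5" resolves locally
# (host, port, and credentials below are placeholders)
proxy_url = "socks5h://user:password@proxy.example.com:1080"
proxies = {"http": proxy_url, "https": proxy_url}

# Verify the exit IP through the SOCKS5 proxy
response = requests.get("https://httpbin.org/ip", proxies=proxies, timeout=10)
print(response.json()["origin"])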
Managed proxy providers may also expose an API and special endpoints for forced IP rotation. This is a different model from rotating raw proxies inside your scraping code; a rough sketch of such a call appears below.
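For illustration only, here is what a forced-rotation call might look like. The endpoint URL, path, and auth scheme are hypothetical placeholders; real providers document their own rotation APIs:

import requests

# Hypothetical rotation endpoint; consult your provider's documentation
# for the real URL and authentication scheme
ROTATE_URL = "https://api.proxy-provider.example/v1/ports/12345/rotate"
API_KEY = "your-api-key"  # placeholder credential

resp = requests.post(ROTATE_URL, headers={"Authorization": f"Bearer {API_KEY}"}, timeout=10)
print("Rotation requested" if resp.ok else f"Rotation failed: {resp.status_code}")

Enough theory: let's move to practice and show concrete examples of how to rotate IPs.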
A very simple example is choosing a random value from a list. Assume the target site is scraped using the requests library (Python):
import requests, random

# Proxy entries may include credentials in the form user:password@host:port
proxy_list = ['ip1:port1', 'ip2:port2', 'ip3:port3', 'user:password@203.0.113.2:8080']
proxy = random.choice(proxy_list)
# Set both http and https keys so HTTPS requests also go through the proxy
proxies = {'http': f'http://{proxy}', 'https': f'http://{proxy}'}
response = requests.get('https://httpbin.org/ip', proxies=proxies)
print(response.text)
Instead of a random choice, you can iterate over the list in a loop:
import requests
from itertools import cycle, islice

proxy_list = ['ip1:port1', 'ip2:port2', 'ip3:port3']
# cycle() repeats the list endlessly, so cap the number of iterations with islice()
for proxy in islice(cycle(proxy_list), 10):
    proxies = {'http': f'http://{proxy}', 'https': f'http://{proxy}'}
    response = requests.get('https://httpbin.org/ip', proxies=proxies)
    print(f"Proxy: {proxy}, Response: {response.json()['origin']}")
The script below first checks the proxies, then scrapes through the working ones via Playwright (don't forget to create and fill the all_proxies.txt proxy file):
import requests, time
from playwright.sync_api import sync_playwright

# Ping threshold in milliseconds
PING_LIMIT = 100

# Test URL for proxy verification
TEST_URL = "https://httpbin.org/ip"

def check_proxies():
    """Checks proxies from all_proxies.txt and saves those responding faster than PING_LIMIT to good_proxies.txt."""
    good = []
    with open("all_proxies.txt", "r", encoding="utf-8") as f:
        proxies = [line.strip() for line in f if line.strip() and not line.startswith("#")]
    print(f"[INFO] Found {len(proxies)} proxies to check...")
    for proxy in proxies:
        proxies_dict = {"http": proxy, "https": proxy}
        start = time.perf_counter()
        try:
            r = requests.get(TEST_URL, proxies=proxies_dict, timeout=5)
            latency = (time.perf_counter() - start) * 1000
            if r.ok and latency < PING_LIMIT:
                good.append(proxy)
                print(f"[OK] {proxy} — {latency:.0f} ms")
            else:
                print(f"[BAD] {proxy} — too slow ({latency:.0f} ms)")
        except Exception:
            print(f"[ERR] {proxy} — unavailable")
    # Save suitable proxies
    with open("good_proxies.txt", "w", encoding="utf-8") as f:
        for p in good:
            f.write(p + "\n")
    print(f"[DONE] Saved {len(good)} working proxies.")

def parse_with_proxies(url):
    """Opens a page via Playwright, trying each proxy from good_proxies.txt in turn on connection errors."""
    try:
        with open("good_proxies.txt", "r", encoding="utf-8") as f:
            proxies = [line.strip() for line in f if line.strip()]
    except FileNotFoundError:
        print("[ERROR] The file good_proxies.txt was not found. Run check_proxies() first.")
        return
    if not proxies:
        print("[ERROR] There are no suitable proxies for scraping.")
        return
    print(f"[INFO] Loaded {len(proxies)} working proxies")
    # Try each proxy at most once; with itertools.cycle() the loop would never end
    # and the for-else branch below would never run
    for proxy in proxies:
        print(f"[TRY] Loading {url} via {proxy}")
        try:
            with sync_playwright() as p:
                browser = p.chromium.launch(
                    headless=True,
                    # for authenticated proxies, pass "username"/"password" keys alongside "server"
                    proxy={"server": proxy}
                )
                page = browser.new_page()
                page.goto(url, timeout=15000)
                print("[SUCCESS] Page loaded successfully.")
                print("Title:", page.title())
                browser.close()
                break  # if the page opened — exit the loop
        except Exception as e:
            print(f"[FAIL] Error with {proxy}: {e}")
            continue
    else:
        print("[STOP] All proxies from good_proxies.txt exhausted, scraping stopped.")

if __name__ == "__main__":
    # Step 1. Check proxies and create good_proxies.txt
    check_proxies()
    # Step 2. Example usage of the scraper
    parse_with_proxies("https://example.com/")
With a managed rotating-proxy infrastructure, you simply create an account and purchase proxy traffic. In the dashboard, you create a new port or filter where you specify: the location from which proxies will be selected, the ISP (optional), the rotation logic (replace exit IPs every N minutes/seconds, maximum session hold time, or rotate IP on every new request).
The service will return connection parameters — effectively an entry point into the proxy network. See the BackConnect proxy overview for more details on the mechanism.
A minimal example of launching a headless browser with Playwright through a Froxy proxy:
from playwright.sync_api import sync_playwright

with sync_playwright() as p:
    browser = p.chromium.launch(proxy={
        'server': 'http://proxy.froxy.com:9000',
        'username': 'user',  # fill in your credentials
        'password': 'pass'   # fill in your credentials
    })
    page = browser.new_page()
    page.goto('https://httpbin.org/ip')
    print(page.content())
    browser.close()
Even if the proxy IPs are rotated on the provider side (Froxy), you do not need to change the connection settings in your script.
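To see provider-side rotation in action, here is a minimal sketch with requests that sends several requests through the same entry point (the gateway address and credentials are the same placeholders as in the Playwright example above). If per-request rotation is enabled, the printed exit IP changes even though the connection settings never do:

import requests

# One fixed entry point; exit IPs rotate on the provider side
proxy = "http://user:pass@proxy.froxy.com:9000"
proxies = {"http": proxy, "https": proxy}

for i in range(3):
    r = requests.get("https://httpbin.org/ip", proxies=proxies, timeout=10)
    # With per-request rotation, each response shows a different origin IP
    print(f"Request {i + 1}: {r.json()['origin']}")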
Note that providers may impose technical limits. For example, time-based automatic rotation intervals may range from 90 seconds to 1 hour. There may also be limits on concurrent connections and on the number of devices allowed in a whitelist.
Websites can track not only IP addresses but also sessions (via cookies), HTTP headers, and other fingerprinting signals. Therefore, when rotating IP addresses, it’s important to synchronize sessions correctly — especially if you interact with a target site after authenticating into an account or after passing validation (see also Cloudflare bypass techniques).
Example of “sticky” sessions (binding a session to a single proxy):
import requests

def create_session_with_proxy(proxy):
    s = requests.Session()
    s.proxies.update({"http": proxy, "https": proxy})
    s.headers.update({"User-Agent": "Mozilla/5.0 (Windows NT 10.0) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/138.0.0.0 Safari/537.36"})
    return s

proxy = "http://user:pass@1.2.3.4:8000"
sess = create_session_with_proxy(proxy)

# Login — cookies will be stored in sess.cookies
login_resp = sess.post("https://example.com/login", data={"user": "u", "pass": "p"})

# Continue actions in the same session and through the same proxy
resp = sess.get("https://example.com/protected")
Example of synchronizing an IP switch with clearing/updating cookies:
# When you change the proxy:
sess.close()
sess = create_session_with_proxy(new_proxy)  # a clean session, new cookie jar
If you use a proxy service with rotating IP addresses (such as Froxy), it’s advisable to create several distinct ports/filters configured with maximum session hold (sticky sessions) or with long rotation intervals. Exporting a list of ports gives you the practical equivalent of static proxies that you can rotate under your own rules — for example, switching ports together with clearing or renewing the session.
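As a sketch (the port addresses below are placeholders), rotating under your own rules could look like cycling through such sticky ports and creating a clean session whenever a block is detected:

import requests
from itertools import cycle

# Placeholder list of sticky ports/filters exported from the provider dashboard
ports = [
    "http://user:pass@proxy.example.com:9001",
    "http://user:pass@proxy.example.com:9002",
    "http://user:pass@proxy.example.com:9003",
]
port_cycle = cycle(ports)

def fresh_session(proxy):
    s = requests.Session()
    s.proxies.update({"http": proxy, "https": proxy})
    return s

sess = fresh_session(next(port_cycle))
for url in ["https://example.com/page1", "https://example.com/page2"]:
    resp = sess.get(url, timeout=10)
    if resp.status_code in (403, 429):  # likely blocked
        # Switch ports together with renewing the session (clean cookie jar)
        sess.close()
        sess = fresh_session(next(port_cycle))
        resp = sess.get(url, timeout=10)
    print(url, resp.status_code)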
Here are some recommendations presented as actionable tips; whether to follow them is up to each scraper developer:
Rotating IPs and proxies can solve many problems when scraping target sites. But they’re not a silver bullet. For a scraper to run continuously without stops or bans, you need a comprehensive, thoughtful approach. In some cases, you should keep sessions and proxies sticky while also emulating fingerprints and human behavior; in other cases, it’s useful to rotate IPs on every request.
Build your rotation strategy based on the protection mechanisms you observe on the target site. If those protection policies are unknown, start simple and progressively increase complexity — the most advanced anti-blocking techniques require more compute and more sophisticated scripts.
Above all else, proxy quality matters. For highly trusted rotating proxies with precise targeting, consider Froxy, which offers over 10 million residential and mobile IPs with fine-grained location options.