
node-fetch + HTTP Proxy: Full Example for Scraping Projects

Learn how to set up node-fetch with an HTTP proxy for web scraping. A complete working example, proxy server setup tips, and common pitfalls to avoid.

Team Froxy 22 Jul 2025 7 min read

Modern JavaScript projects typically run on remote servers using the Node.js environment. Whether you're exchanging data between scripts (usually via an API) or scraping external websites, developers need a simple and reliable way to send HTTP requests. While there are many libraries available for this purpose, node-fetch has become one of the most popular thanks to its simplicity and its familiar, browser-like Fetch API syntax.

In this article, we’ll explore what node-fetch is, why it’s convenient to use, what limitations it has, and how to use node-fetch with proxy servers, complete with practical examples.

What Is node-fetch and Why Do Developers Use It?

node-fetch is a Node.js library that mimics the default Fetch API interface found in browsers for JavaScript code. It is distributed as a separate npm module that requires installation; since 2022, Node.js version 18 and later has also shipped a native global fetch() with the same interface, so the familiar syntax is available there out of the box as well.

Developers turn to node-fetch as a lightweight and straightforward alternative to heavier third-party HTTP clients for working with HTTP requests and responses. It's commonly used to exchange data with remote web services or to build an API for your own application, for instance when transferring data or files between external clients and your server. It can also be used to communicate with CLI tools.

One of the key benefits of node-fetch is its out-of-the-box support for asynchronous operations and Promises. But let’s take a closer look at everything step by step.

node-fetch Basics

Simply put, node-fetch is an HTTP client that allows you to send requests and receive responses from a server, such as HTML, JSON, files, and more. Beyond the package itself, no additional software or libraries are required to get started.

When you call node-fetch from your JavaScript scripts, the library:

  1. Sends an HTTP request
  2. Retrieves the raw HTML (similar to viewing a page’s source code in a text editor)
  3. Returns the HTML content as plain text

By default, node-fetch sends a GET request unless you specify otherwise. You can easily change the request method to POST, PUT, DELETE, or any other HTTP verb.

It’s important to note that node-fetch does not load or execute any scripts used for rendering in traditional browsers. It doesn’t build a DOM structure, display content, or interact with any kind of browser, headless or otherwise.

In fact, the fetch() function in browsers serves a similar purpose; both act as native HTTP clients.

node-fetch is particularly convenient for exchanging structured data with remote servers or websites, such as JSON or XML, since the responses can be easily parsed and processed. Combined with a proxy, it enables reliable, low-overhead data extraction from many web sources.
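For instance, parsing a JSON response takes just one extra call. A minimal sketch, using httpbin.org purely as an illustrative endpoint:

// Fetch JSON from a test endpoint and parse it into a JavaScript object
import fetch from 'node-fetch';

const response = await fetch('https://httpbin.org/json');
const data = await response.json(); // parses the JSON body
console.log(data);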

A Simple Example of Using node-fetch Asynchronously

The node-fetch library is distributed via npm, so you'll first need to install it (on Node.js 18+, you can alternatively use the built-in global fetch() without installing anything):

npm install node-fetch

Create a file named node-fetch.js and add the code below. Note that the import syntax used in these examples requires your project to run in ESM mode: either rename the file to node-fetch.mjs or set "type": "module" in your package.json.
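A minimal package.json enabling ESM might look like this (the project name is just a placeholder):

{
  "name": "node-fetch-demo",
  "type": "module",
  "dependencies": {
    "node-fetch": "^3.0.0"
  }
}

With that in place, here's the script itself: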

// Import the node-fetch library into your project
import fetch from 'node-fetch';

// Send a GET request using fetch(); the method is GET by default, so no need to specify it
const response = await fetch('https://my-target.site.com/posts/1');

// Convert the server's response to plain text
const body = await response.text();

// Output the response to the console
console.log(body);

To run your script, enter this in the terminal:

node node-fetch.js

Here’s what a POST request might look like, for example, to submit a blog post:

// Import the node-fetch library into your project
import fetch from 'node-fetch';

// Explicitly specify the request method (POST) and the body content
const response = await fetch('https://my-target.site.com/post', {
  method: 'POST',
  body: 'My first post from node-fetch'
});

// Optionally parse the server's response as JSON
const data = await response.json();

// Output the result to the console
console.log(data);

The node-fetch library also handles errors and HTTP headers well, making it useful for debugging and working with custom request metadata.
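For example, here's a small sketch of checking the response status and reading a header (the endpoint is again purely illustrative):

import fetch from 'node-fetch';

const res = await fetch('https://httpbin.org/get');

// res.ok is true for any 2xx status code
if (!res.ok) {
  throw new Error(`HTTP error: ${res.status}`);
}

console.log(res.status);                      // e.g. 200
console.log(res.headers.get('content-type')); // read a single response header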

node-fetch for Web Scraping: Pros and Limitations


node-fetch is well-suited for basic web scraping tasks. It serves as a lightweight and convenient tool for sending HTTP requests and retrieving the raw HTML of web pages. Paired with a proxy setup, it becomes a simple yet effective solution for bypassing basic blocking mechanisms.

Advantages of using node-fetch for scraping:

  • It’s a single lightweight dependency, and on Node.js 18+ the same Fetch API is available natively with no installation at all.
  • The syntax closely mirrors the browser’s Fetch API. You don’t have to learn any unfamiliar commands, operators, or parameter formats.
  • It has a minimal resource footprint.
  • It allows quick access to the raw HTML of target pages and easy exchange of structured data like JSON or XML.
  • It supports debugging and works with HTTP headers and specific metadata (status codes, cookies, redirects, encoding, user agent, etc.).
  • Native support for Promises and asynchronous operations.
  • Availability of extensions and plugins (e.g., for compression handling, request counters, and more).

Limitations of node-fetch:

  • Due to how Node.js is designed, node-fetch doesn’t support caching or certain connection properties like keepalive, destination, integrity, mode, type, and others.
  • It only works with absolute URLs; relative URLs are not supported.
  • It can’t render web pages; it always works with the raw source code only. For modern websites and single-page apps, this can be a serious limitation, and you can’t route node-fetch traffic through a headless browser either. To understand the contrast, check out materials on scraping with Puppeteer.
  • There are no built-in functions for extracting specific content. You’ll need a third-party parser to process the server’s response (see the sketch after this list).
  • Out of the box, node-fetch does not support proxy servers.
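As a quick illustration of the parsing point above, here's a hedged sketch that pairs node-fetch with the cheerio parser (a separate package, installed via npm install cheerio; the target URL is just an example):

// Fetch a page and extract its <title> with cheerio
import fetch from 'node-fetch';
import * as cheerio from 'cheerio';

const response = await fetch('https://example.com');
const html = await response.text();

const $ = cheerio.load(html);   // parse the raw HTML into a queryable structure
console.log($('title').text()); // extract the page title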

Why Web Scraping Often Requires Proxies

Proxies aren’t just a helpful add-on for scraping; they're a critical necessity. Many modern websites are designed to detect and block automated traffic in order to reduce server load and protect sensitive content. One of the first things these sites analyze is the user's IP address and the number of requests coming from it. A high volume of requests from the same IP is easy to detect, and that address can quickly be blocked, temporarily or permanently, depending on the IP type.

Rotating proxies (residential or, even better, mobile) are the most effective solution for bypassing such blocks. Without proxies, scraping can slow down dramatically or even come to a complete halt at any time. Proxies not only help avoid blocks but also allow you to send multiple requests in parallel, significantly speeding up the scraping process; combined with the lightweight node-fetch, this approach becomes both scalable and resource-efficient. By rotating IP addresses and selecting proxy servers from specific geographic locations, you can also bypass geo-restrictions and access regional content libraries.

Of course, proxies alone aren’t enough to defeat all anti-scraping measures, but they are the foundational element on which other evasion strategies are built.
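To make the parallelism point concrete, here's a rough sketch that fans requests out across several proxies at once with Promise.all (the proxy URLs are hypothetical placeholders; the agent setup is explained in detail below):

// Send several requests concurrently, each through its own proxy
import fetch from 'node-fetch';
import { HttpsProxyAgent } from 'https-proxy-agent';

// Hypothetical proxy addresses; replace with your own
const proxies = [
  'http://user:pass@proxy1.example.com:8080',
  'http://user:pass@proxy2.example.com:8080'
];

const results = await Promise.all(
  proxies.map((proxy) =>
    fetch('https://httpbin.org/ip', { agent: new HttpsProxyAgent(proxy) })
      .then((res) => res.text())
  )
);
console.log(results); // one response body per proxy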


node-fetch Doesn’t Support Proxies Natively — Here’s Why

The node-fetch library is deliberately designed to be minimalist: it doesn’t include built-in proxy support in order to stay lightweight and easy to use.

When sending requests, node-fetch relies on the host machine’s network settings. Technically, this means that if you want to route requests through a proxy server, you can do so by configuring your operating system’s network settings.

Similarly, the browser’s Fetch API doesn’t have native proxy support either; fetch() relies on the browser or system-level proxy configurations.

Given that node-fetch aims to mirror the browser Fetch API as closely as possible, this design decision is more than reasonable.

But it's not exactly convenient...

For the record, Node.js itself doesn’t offer any built-in mechanisms for proxy support, either. We'll explain how to work around this limitation in the next section.

Using node-fetch with HTTP Proxies


To make node-fetch work with a proxy, you’ll need to explicitly supply a custom connection agent within the Node.js environment, matching the protocol your proxy uses: HTTP, HTTPS, or SOCKS.

This is done using the agent parameter, and it requires additional proxy agent packages, such as:

  • https-proxy-agent
  • http-proxy-agent
  • socks-proxy-agent (if you need SOCKS support)
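If you don't have these packages yet, install whichever ones you need, for example:

npm install http-proxy-agent https-proxy-agent socks-proxy-agent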

Example: Sending a Request via HTTP Proxy with https-proxy-agent

// Import node-fetch and https-proxy-agent
import fetch from 'node-fetch';
import { HttpsProxyAgent } from 'https-proxy-agent';

// Define the actual address of your proxy
const proxyAgent = new HttpsProxyAgent('http://YOURPROXYLOGIN:PSSWRD@YOUR-PROXYHOST:PORT');

// Use an async IIFE to send the request
(async () => {
  // Send a request to the target site
  const response = await fetch('https://target-site.com', {
    // Use the proxy as the connection agent
    agent: proxyAgent
  });

  // Read the response as plain text
  const text = await response.text();

  // Log the result to the console
  console.log(text);
})();

Note: When using node-fetch with a proxy, everything works through an “agent.” There’s no need to bloat your code; each library handles a specific task. You can easily swap out the agent for a different one (http, https, socks, or even a custom implementation) as needed. Also note that recent versions of https-proxy-agent (v7+) expose HttpsProxyAgent as a named export, as shown above.

Example: Rotating HTTP Proxies in a Scraper Script

// Import node-fetch and https-proxy-agent
import fetch from 'node-fetch';
import { HttpsProxyAgent } from 'https-proxy-agent';

// List of proxies (replace with your actual proxy addresses)
let proxyList = [
  'http://user1:pass1@proxy1.example.com:8080',
  'http://user2:pass2@proxy2.example.com:8080',
  'http://user3:pass3@proxy3.example.com:8080'
];

// Function to select a random proxy from the list
function getRandomProxy() {
  const index = Math.floor(Math.random() * proxyList.length);
  return proxyList[index];
}

// Function to remove a bad (non-working) proxy from the list
function removeBadProxy(badProxy) {
  proxyList = proxyList.filter(proxy => proxy !== badProxy);
  console.log(`Proxy removed: ${badProxy}`);
}

// Function to send a request using a rotating proxy
async function scrapeWithProxy(url) {
  if (proxyList.length === 0) {
    console.log('No available proxies left');
    return;
  }
  const proxy = getRandomProxy();
  const agent = new HttpsProxyAgent(proxy);
  try {
    // Abort the request after 10 seconds (AbortSignal.timeout requires Node.js 17.3+)
    const res = await fetch(url, { agent, signal: AbortSignal.timeout(10000) });
    if (!res.ok) {
      throw new Error(`HTTP error: ${res.status}`);
    }
    const text = await res.text();
    console.log(`Response from proxy ${proxy.substring(0, 30)}...: received ${text.length} characters`);
  } catch (err) {
    console.error(`Error with proxy ${proxy.substring(0, 30)}...:`, err.message);
    removeBadProxy(proxy);
  }
}

// Example list of target URLs (replace with your own)
const urls = [
  'https://httpbin.org/ip',
  'https://httpbin.org/headers',
  'https://httpbin.org/user-agent'
];

// Run scraping for each URL in the list
(async () => {
  for (const url of urls) {
    if (proxyList.length === 0) {
      console.log('Stopped: all proxies are unavailable');
      break;
    }
    await scrapeWithProxy(url);
  }
})();

With each connection failure, such as a timeout or network error, the failing proxy is automatically removed from the list. If all proxies are exhausted, the script stops gracefully. Detailed messages are logged to the console showing which proxy was removed and why. The timeout helps prevent the script from hanging on unresponsive connections.

Example: Using socks-proxy-agent to Access Blocked Resources

Unlike higher-level HTTP/HTTPS proxies, SOCKS proxies operate below the application layer (at the session layer of the OSI model) and don’t touch HTTP headers at all. This improves privacy: the proxy adds no telltale headers, such as Via or X-Forwarded-For, that would reveal a request was routed through a proxy node.

Here’s an example of a node-fetch with proxy script using a SOCKS5 proxy (if you haven’t installed the required library yet, run npm install socks-proxy-agent first):

// Import node-fetch and socks-proxy-agent
import fetch from 'node-fetch';
import { SocksProxyAgent } from 'socks-proxy-agent';

// Example of a SOCKS5 proxy (replace with your own if needed)
const socksProxy = 'socks5h://127.0.0.1:9050'; // Example uses a local Tor proxy

// Create a SOCKS proxy agent
const agent = new SocksProxyAgent(socksProxy);

// Target URL you want to access (might be blocked without a proxy)
const targetUrl = 'https://httpbin.org/ip';

// Define the asynchronous request logic
async function fetchThroughSocks() {
  try {
    // Send the request through the SOCKS agent; abort after 10 seconds (Node.js 17.3+)
    const res = await fetch(targetUrl, { agent, signal: AbortSignal.timeout(10000) });
    if (!res.ok) {
      throw new Error(`HTTP error: ${res.status}`);
    }
    // Read the response as plain text
    const data = await res.text();
    // Print the result to the console
    console.log(`Response received: ${data}`);
  } catch (err) {
    // Output error message if request fails
    console.error(`Error: ${err.message}`);
  }
}

// Run the function
fetchThroughSocks();

Conclusion and Recommendation


The easiest way to use node-fetch with a proxy is to configure a custom connection agent. Depending on your needs, the agent can operate over HTTP, HTTPS, or SOCKS protocols; we’ve shown practical script examples for each of these options above.

The node-fetch library is essentially a minimal HTTP client, making it possible to exchange requests and data between your own scripts or with external web applications without resorting to heavy browser-based solutions or headless browsers combined with APIs. However, since node-fetch does not render page content like a browser, it’s best suited for simpler scraping and integration tasks.

The main challenge isn’t configuring the agent — it’s the quality of the proxies themselves. The performance and reliability of your script will largely depend on your proxy provider’s IP reputation, network stability, and global coverage.

Choose Froxy as your proxy provider, and you won’t be disappointed: over 10 million IPs, presence in 200+ locations, a mix of datacenter, residential, and mobile proxies, and automatic rotation. 
