However, the Python library ecosystem constantly evolves, particularly with the rise of asynchronous programming using asyncio. This shift has opened doors for new libraries designed to leverage non-blocking I/O for enhanced performance, especially in I/O-bound applications.
That’s where the HTTPX library comes in, a relative newcomer that bills itself as a “next generation HTTP client for Python,” offering both synchronous and asynchronous APIs, along with support for modern web features such as HTTP/2.
What is Requests?
For those new to Python or in need of a refresher, Requests is a simple and elegant HTTP library for Python, created by Kenneth Reitz almost fifteen years ago. Its main goal is to make HTTP requests easy and human-friendly. You want to send some data? Make a GET or POST request? Handle headers, cookies, or authentication? Requests makes these tasks intuitive.
Its synchronous nature means that when you make a request, your program waits for the response before moving on. This is fine for many applications, but for tasks requiring numerous concurrent HTTP calls (such as web scraping or interacting with multiple microservices), this blocking behaviour can become a significant bottleneck.
What is HTTPX?
According to its official documentation, HTTPX is a,
“…fully featured HTTP client for Python 3, which provides sync and async APIs, and support for both HTTP/1.1 and HTTP/2.”
It was developed by Encode (the team behind Starlette, Uvicorn and Django REST Framework).
Some of HTTPX’s selling points include,
- Async Support: Native async/await syntax for non-blocking operations.
- HTTP/2 Support: Unlike Requests (which primarily supports HTTP/1.1 out of the box), HTTPX can speak HTTP/2, potentially offering performance benefits like multiplexing.
- Requests-like API: It aims to provide a familiar API for those accustomed to Requests, easing the transition.
- Transport API: A more advanced feature allowing custom transport behaviour, useful for testing or specific network configurations.
The claims for HTTPX are intriguing. A Requests-compatible API with the power of async/await, and potential performance gains. But is it the heir apparent, capable of unseating the reigning champion, or is it a niche tool for specific async use cases? There’s only one way to find out. Let’s put them both to the test.
Setting up a Development Environment
Before we start coding, we should set up a separate development environment for each project we work on. I’m using conda, but feel free to use whatever method suits you.
# Create our test environment (Python 3.7+ is recommended for async features)
# And activate it
(base) $ conda create -n httpx_test python=3.11 -y
(base) $ conda activate httpx_test
Now that our environment is active, let’s install the necessary libraries:
(httpx_test) $ pip install requests 'httpx[http2]' aiohttp uvicorn fastapi jupyter nest_asyncio
I’m using Jupyter for my code, so if you’re following along, type jupyter notebook into your command prompt. A Jupyter notebook should open in your browser. If that doesn’t happen automatically, you’ll likely see a screenful of information after the jupyter notebook command. Near the bottom, you will find a URL that you should copy and paste into your browser to launch the notebook.
Your URL will be different to mine, but it should look something like this:
http://127.0.0.1:8888/tree?token=3b9f7bd07b6966b41b68e2350721b2d0b6f388d248cc69da
Comparing HTTPX and Requests’ Performance
To compare performance, we’ll run a series of HTTP GET requests using both libraries and time them. We’ll examine synchronous operations first, then look into the asynchronous capabilities.
For our target, we’ll use httpbin.org, a fantastic service for testing HTTP requests. Think of it as a testing and debugging tool for developers who are building or working with software that makes HTTP requests (like web clients, API clients, scrapers, etc.). Instead of having to set up your own web server to see what your HTTP requests look like or to test how your client handles different server responses, you can send your requests to a test server at httpbin.org. It has a variety of endpoints that are designed to return specific types of responses, allowing you to inspect and verify your client’s behaviour.
Local FastAPI Server Setup
Let’s create a simple FastAPI app to serve as our async endpoint. Save this as test_server.py:
# test_server.py
from fastapi import FastAPI
import asyncio

app = FastAPI()

@app.get("/fast")
async def read_fast():
    return {"message": "Hello from FastAPI!"}

@app.get("/slow")
async def read_slow():
    await asyncio.sleep(0.1)  # Simulate some I/O-bound work
    return {"message": "Hello slowly from FastAPI!"}
Start this server in a separate terminal window by typing this command.
uvicorn test_server:app --reload --host 127.0.0.1 --port 8000
We’ve set up everything we need to. Let’s get started with our code examples.
Example 1 — Simple Synchronous GET Request
Let’s start with a basic scenario: fetching a simple JSON response 20 times sequentially.
import requests
import httpx
import time
import nest_asyncio

nest_asyncio.apply()

URL = "https://httpbin.org/get"
NUM_REQUESTS = 20

# --- Requests ---
start_time_requests = time.perf_counter()
for _ in range(NUM_REQUESTS):
    response = requests.get(URL)
    assert response.status_code == 200
end_time_requests = time.perf_counter()
time_requests = end_time_requests - start_time_requests
print(f"Execution time (Requests, Sync): {time_requests:.4f} seconds")

# --- HTTPX (Sync Client) ---
start_time_httpx_sync = time.perf_counter()
with httpx.Client() as client:  # Using a client session is good practice
    for _ in range(NUM_REQUESTS):
        response = client.get(URL)
        assert response.status_code == 200
end_time_httpx_sync = time.perf_counter()
time_httpx_sync = end_time_httpx_sync - start_time_httpx_sync
print(f"Execution time (HTTPX, Sync): {time_httpx_sync:.4f} seconds")
The output:
Execution time (Requests, Sync): 22.6370 seconds
Execution time (HTTPX, Sync): 11.4099 seconds
That’s a decent uplift for HTTPX over Requests already: it’s almost twice as fast at synchronous retrieval in our test. To be fair, part of that gap likely comes from connection reuse, since the HTTPX code shares one httpx.Client() while each requests.get() call opens a fresh connection; using requests.Session() would narrow the difference.
Example 2 — Simple Asynchronous GET Request (Single Request) using HTTPX
Now, let’s test HTTPX’s asynchronous capabilities by making a single request to the local FastAPI server that we set up earlier.
import httpx
import asyncio
import time

LOCAL_URL_FAST = "http://127.0.0.1:8000/fast"

async def fetch_with_httpx_async_single():
    async with httpx.AsyncClient() as client:
        response = await client.get(LOCAL_URL_FAST)
        assert response.status_code == 200

start_time_httpx_async = time.perf_counter()
asyncio.run(fetch_with_httpx_async_single())
end_time_httpx_async = time.perf_counter()
time_httpx_async_val = end_time_httpx_async - start_time_httpx_async
print(f"Execution time (HTTPX, Async Single): {time_httpx_async_val:.4f} seconds")
The output:
Execution time (HTTPX, Async Single): 0.0319 seconds
This is quick, as expected for a local request. This test primarily verifies that the async machinery works. The real test for async comes with concurrency.
Example 3 — Concurrent Asynchronous GET Requests
This is where HTTPX’s async capabilities should truly shine over Requests. We’ll make 100 requests to our /slow endpoint concurrently.
import httpx
import asyncio
import time
import requests

LOCAL_URL_SLOW = "http://127.0.0.1:8000/slow"  # 0.1s delay
NUM_CONCURRENT_REQUESTS = 100

# --- HTTPX (Async Client, Concurrent) ---
async def fetch_one_httpx(client, url):
    response = await client.get(url)
    return response.status_code

async def main_httpx_concurrent():
    async with httpx.AsyncClient() as client:
        tasks = [fetch_one_httpx(client, LOCAL_URL_SLOW) for _ in range(NUM_CONCURRENT_REQUESTS)]
        results = await asyncio.gather(*tasks)
    for status_code in results:
        assert status_code == 200

start_time_httpx_concurrent = time.perf_counter()
asyncio.run(main_httpx_concurrent())
end_time_httpx_concurrent = time.perf_counter()
time_httpx_concurrent_val = end_time_httpx_concurrent - start_time_httpx_concurrent
print(f"Execution time (HTTPX, Async Concurrent to /slow): {time_httpx_concurrent_val:.4f} seconds")

# --- For Comparison: Requests (Sync, Sequential to /slow) ---
# This will be slow, demonstrating the problem async solves
start_time_requests_sequential_slow = time.perf_counter()
for _ in range(NUM_CONCURRENT_REQUESTS):
    response = requests.get(LOCAL_URL_SLOW)
    assert response.status_code == 200
end_time_requests_sequential_slow = time.perf_counter()
time_requests_sequential_slow_val = end_time_requests_sequential_slow - start_time_requests_sequential_slow
print(f"Execution time (Requests, Sync Sequential to /slow): {time_requests_sequential_slow_val:.4f} seconds")
Typical output:
Execution time (HTTPX, Async Concurrent to /slow): 0.1881 seconds
Execution time (Requests, Sync Sequential to /slow): 10.1785 seconds
Now this is not too shabby! HTTPX, leveraging asyncio.gather, completed 100 requests (each with a 0.1s simulated delay) in under 0.2 seconds. Because the tasks are I/O-bound, asyncio can switch between them while they wait for the server’s response. The total time is roughly the time of the longest individual request, plus a small amount of overhead for managing concurrency.
In contrast, the synchronous Requests code took over 10 seconds (100 requests * 0.1s/request = 10 seconds, plus overhead). This demonstrates the power of asynchronous operations for I/O-bound tasks. HTTPX isn’t just “faster” in an absolute sense; it enables a fundamentally more efficient way of handling concurrent I/O.
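To see that this arithmetic holds independent of any HTTP library, here’s a stdlib-only sketch where asyncio.sleep stands in for the waiting each request does; the numbers (100 tasks, 0.1s each) mirror the test above.

```python
import asyncio
import time

# Stand-in for one HTTP call: "waiting on the server" is just a 0.1s sleep.
async def simulated_request(i: int) -> int:
    await asyncio.sleep(0.1)
    return i

async def main() -> list[int]:
    # Schedule all 100 coroutines at once; the event loop overlaps their waits.
    return await asyncio.gather(*(simulated_request(i) for i in range(100)))

start = time.perf_counter()
results = asyncio.run(main())
elapsed = time.perf_counter() - start
print(f"{len(results)} tasks in {elapsed:.2f} seconds")  # roughly 0.1s, not 10s
```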
What about HTTP/2?
HTTPX supports HTTP/2 if the server also supports it and the h2 library is installed (pip install 'httpx[http2]'). HTTP/2 offers benefits such as multiplexing (sending multiple requests over a single connection) and header compression.
import httpx
import asyncio
import time

# A public server that supports HTTP/2
HTTP2_URL = "https://github.com"
# HTTP2_URL = "https://www.cloudflare.com"  # Another option
NUM_HTTP2_REQUESTS = 20

async def fetch_http2_info():
    async with httpx.AsyncClient(http2=True) as client:  # Enable HTTP/2
        for _ in range(NUM_HTTP2_REQUESTS):
            response = await client.get(HTTP2_URL)
            # print(f"HTTP Version: {response.http_version}, Status: {response.status_code}")
            assert response.status_code == 200
            assert response.http_version in ["HTTP/2", "HTTP/2.0"]  # Check that HTTP/2 was used

start_time = time.perf_counter()
asyncio.run(fetch_http2_info())
end_time = time.perf_counter()
print(f"Execution time (HTTPX, Async with HTTP/2): {end_time - start_time:.4f} seconds for {NUM_HTTP2_REQUESTS} requests.")
The output:
Execution time (HTTPX, Async with HTTP/2): 0.7927 seconds for 20 requests.
While this test confirms HTTP/2 usage, quantifying its speed benefits over HTTP/1.1 in a simple script can be a bit tricky. HTTP/2’s advantages often become more apparent in complex scenarios with many small resources or on high-latency connections. For many common API interactions, the difference might not be dramatic unless the server is specifically optimised to leverage HTTP/2 features heavily. However, having this capability is a significant forward-looking feature.
Beyond Raw Speed
Performance isn’t everything. Developer experience, features, and ease of use are crucial, so let’s look at some of these in our comparison of the two libraries.
Async/Await Support
HTTPX. Native first-class support. This is its most significant differentiator.
REQUESTS. Purely synchronous. To get async behaviour with a Requests-like feel, you’d typically look to libraries like aiohttp (which has a different API) or use Requests within a thread pool executor (which adds complexity and isn’t true asyncio).
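For completeness, the thread-pool workaround looks roughly like this (with time.sleep standing in for a blocking requests.get() call): threads overlap the waiting, but you pay for thread management and get none of asyncio’s structured concurrency.

```python
from concurrent.futures import ThreadPoolExecutor
import time

# Stand-in for a blocking requests.get() call that takes 0.1s.
def blocking_fetch(i: int) -> int:
    time.sleep(0.1)
    return i

start = time.perf_counter()
with ThreadPoolExecutor(max_workers=20) as pool:
    # 20 workers run the 20 blocking calls in parallel threads.
    results = list(pool.map(blocking_fetch, range(20)))
elapsed = time.perf_counter() - start
print(f"{len(results)} calls in {elapsed:.2f} seconds")  # roughly 0.1s instead of ~2s
```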
HTTP/2 Support
HTTPX. We already mentioned this, but to recap, this functionality is built in.
REQUESTS. No native HTTP/2 support. Third-party adapters exist, but aren’t as integrated.
API Design & Ease of Use
HTTPX. Intentionally designed to be very similar to Requests. If you’re familiar with Requests, HTTPX will feel familiar. Here’s a quick code comparison.
# Requests
r = requests.get('https://example.com', params={'key': 'value'})

# HTTPX (sync)
r = httpx.get('https://example.com', params={'key': 'value'})

# HTTPX (async)
async with httpx.AsyncClient() as client:
    r = await client.get('https://example.com', params={'key': 'value'})
REQUESTS. The gold standard for simplicity in synchronous HTTP calls.
Client Sessions / Connection Pooling
Both libraries strongly encourage using client sessions (requests.Session() and httpx.Client() / httpx.AsyncClient()) for performance benefits, such as connection pooling and cookie persistence. The usage for both is very similar.
Dependency Footprint
REQUESTS. Relatively lightweight (charset_normalizer, idna, urllib3, certifi).
HTTPX. Has a few more core dependencies (httpcore, sniffio, anyio, certifi, idna), and h11 for HTTP/1.1. If you add h2 for HTTP/2, that’s another. This is understandable given its broader feature set.
Maturity & Community
REQUESTS. Extremely mature, massive community, battle-tested over a decade.
HTTPX. Younger but actively developed by a reputable team (Encode) and is gaining traction rapidly. It’s considered stable and production-ready.
When to Choose HTTPX? When to Stick with Requests?
With all that said, how do you choose between the two? Here’s what I think.
Choose HTTPX if …
- You need asynchronous operations. This is the primary reason. If your application involves many I/O-bound HTTP calls, HTTPX with asyncio will offer significant performance improvements and better resource utilisation.
- You need HTTP/2 support. If you’re interacting with servers that leverage HTTP/2 for performance, HTTPX provides this out of the box.
- You’re starting a new project and want to future-proof it. HTTPX’s modern design and async capabilities make it a strong choice for new applications.
- You want a single library for both sync and async HTTP calls. This can simplify your dependency management and codebase if you have mixed needs.
- You need advanced features. Like its Transport API for fine-grained control over request dispatch, or for testing.
Stick with Requests if …
- Your application is purely synchronous and has simple HTTP needs. If Requests is already doing the job well and you don’t face I/O bottlenecks, there might be no compelling reason to switch.
- You’re working on a legacy codebase heavily reliant on Requests. Migrating might not be worth the effort unless you specifically need HTTPX’s features.
- Minimising dependencies is critical. Requests has a slightly smaller footprint.
- The learning curve for asyncio is a barrier for your team. While HTTPX offers a sync API, its main power lies in its async capabilities.
Summary
My investigation reveals that HTTPX is a very competent library. While it doesn’t magically make single, synchronous HTTP calls drastically faster than Requests (network latency is still king there), its true power comes to the fore in asynchronous apps. When making numerous concurrent I/O-bound calls, HTTPX offers substantial performance gains and a more efficient way to structure code, as demonstrated in our concurrent test.
Many claim that HTTPX is “better”, but it depends on the context. If “better” means having native async/await support, HTTP/2 capabilities, and a modern API that also caters to synchronous needs, then yes, HTTPX arguably holds an edge for new development. Requests remains an excellent, reliable library for synchronous tasks, and its simplicity is still its greatest strength.
For concurrent asynchronous operations, the effective throughput when using HTTPX can be an order of magnitude greater than that of sequential synchronous Requests, which is a game-changer.
If you are a Python developer handling HTTP calls, particularly in modern web applications, microservices, or data-intensive tasks, then HTTPX is not merely a library to observe: it is a library to begin using. The transition from Requests is smooth for synchronous code, and its overall feature set and async prowess make it a compelling choice for the future of Python HTTP clients.