How does redirect caching work?

For our website, we have a _redirects file with ~25000 entries.

Our site has quite a slow TTFB.

Does Netlify’s edge cache _redirect lookups, or does it search the whole file on every request?

We recommend under 10k redirects. Under 1k is a best practice.

That many redirects will ABSOLUTELY impact your service time on EVERY asset. It’s not about what’s cached or not - it’s about our need to parse 25k redirects on every request to see if it matches. We do cache where we can, but parsing/matching is the time drain.

I looked at your redirects and I don’t see any easy wins. I was hoping to see a pattern like:

/a/b -> /x/b
/a/c -> /x/c

…that I could advise you to collapse. Of course, I didn’t analyze all 25k redirects, so you might want to make sure there’s nothing you can optimize further.
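As a sketch of what that collapsing looks like (the paths here are made up for illustration, not taken from your file), pairs of literal rules that share a prefix can often become a single splat rule in `_redirects`:

```
# Hypothetical: two literal rules…
/a/b  /x/b  301
/a/c  /x/c  301

# …can collapse into one splat rule
/a/*  /x/:splat  301
```

One splat rule like this is parsed and matched once, instead of once per literal path, which is where the savings come from.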

If you can’t do that, perhaps you can split the site into a few different ones, using a workflow like this?

While that workflow talks about multiple repos feeding one site, you can also use proxying for one repo serving multiple sites - and if you could “balance” the redirects across them in some way, that would help speed things up on each site.

Another potential optimization is using client-side redirects instead of the thousands of 301’s you have in your redirects file. If you’re doing 200 (rewrite/proxy) redirects, that isn’t an option - but for 301’s, you could use client-side JavaScript to do the redirect. It will slow down those requests a bit - two pageloads instead of one - but if you scope that implementation to the more rarely used paths, the many (all site accesses) see improved performance, while the few (who visit the rare pages) are impacted a slight bit (though maybe with no net loss, since the slow TTFB goes away).
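A minimal sketch of that client-side approach, assuming you move the rare legacy paths into a lookup table served from a catch-all page (all paths below are hypothetical, for illustration only):

```javascript
// Hypothetical legacy-path table: these entries would replace
// rarely-used 301 rules removed from the _redirects file.
const legacyRedirects = {
  "/old/pricing": "/pricing",
  "/2014/archive": "/blog/archive",
};

// Resolve a legacy path to its new destination, or null if none matches.
function resolveRedirect(pathname) {
  return legacyRedirects[pathname] || null;
}

// In the browser, run on page load (e.g. from a custom 404 page):
if (typeof window !== "undefined") {
  const target = resolveRedirect(window.location.pathname);
  if (target) {
    // replace() avoids adding the legacy URL to back-button history
    window.location.replace(target);
  }
}
```

Note this loses the SEO signal of a true 301 response, so it fits best on paths where search ranking no longer matters.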

Hey! Thanks so much for your response and looking into it, I really appreciate it.

Do you have any idea what the impact is, in terms of ms? I find the TTFB to be extremely slow on our site (seems to be around 1.5-2s at the moment).

Have you considered caching 301s at the edge? That seems like a sensible improvement that will really boost Netlify’s redirect performance.

I could also put Cloudflare in front of it; Cloudflare will cache 301s. Do let me know your thoughts.

I’ve heard “10k redirects can add hundreds of ms”. All cases will differ, so there is no exact metric.

As I said, we do some caching, and if you have a redirect like:

/blog/* 301!

…that gets cached, and requests matching /blog/* will be quick - but that doesn’t help every other asset, like logo.png. And “just cache 301s” doesn’t fix the problem either, for instance:

  • we have to check for files that AREN’T covered by a redirect, over and over again. We can’t necessarily cache all your RULES at the CDN edge when you have this many rules! We want to use the cache for FILES, not rules, so there’s much more room in the cache for files - and 25k redirects is way more than we want people to have, so we aren’t optimizing for that use case.
  • Also, we have dozens of CDN nodes, and EACH has its own independent cache - so one node might have everything cached while the other nodes don’t.

If you want to send an x-nf-request-id for a slow request that you’d like me to look into, I’ll be happy to advise if there are any easy wins - sometimes, you just have DNS misconfigured, and it isn’t even about redirects! Talking about this in the abstract isn’t going to help further, I think :slight_smile:

This article describes what I’m looking for:

If you proxy through Cloudflare to Netlify, I won’t be able to help much further, though. Here’s why:

Please make sure any x-nf-request-id you send is from a direct-to-Netlify connection :slight_smile:


Thanks Chris, this is super useful.

I’ve done a fair bit of testing in the last day, and what I’ve discovered is that the warm-cache performance seems to be fine.

The first request for a page can take up to 3 seconds, averaging around 1500ms. I’m not sure this is really related to the redirects; it may just be a cold cache.

Subsequent requests to this page from the same location have a TTFB of under 100ms.

I would suggest perhaps adding a line in the docs about how the redirects work and the intended use cases. Sites like ours have a lot of legacy, and 301s need to last indefinitely.

I’ve done some research and the client-side redirect seems like a good way to go; thank you for suggesting it, we’ll build that in in due course.

Would you mind just checking this request:

x-nf-request-id: d87cec20-8d78-4098-bc44-836985e86a93-12136997

It’s for:

Here’s my timing (screenshot of request timing; tested on Gigabit fibre from the UK):

Hi, @chrism2671. I do show this response was slow. It also occurred during the following service incident:

If you see this slow time to first byte (TTFB) behavior again, please let us know.

Here’s another (the first I tried):

x-nf-request-id: 4b395587-7501-4090-a905-daad1b7d9c35-56850187

for this page:

[Screenshot from 2020-08-31 15-24-10, showing ‘waiting 2.08s’ in the bottom left]


A lot of this may have to do with the fact that the requests are going to our load balancer. Since you have an A record pointed at it, and you’re not in the US and thus not geographically close to the load balancer, your requests may take a little longer than desired (in addition to the non-edge caching discussed before).

You may want to consider revising your DNS setup to make use of a CNAME entry for the site (with external DNS) or Netlify DNS.
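As a sketch of the difference (the hostnames and IP below are placeholders, not values from this thread):

```
; Preferred: CNAME to your Netlify subdomain, so DNS resolves
; to a nearby CDN edge node
www.example.com.  IN  CNAME  example-site.netlify.app.

; Slower: an A record pinned to the load balancer's single IP,
; so every uncached request travels to that one location
; www.example.com.  IN  A  203.0.113.10
```

Apex domains generally can’t carry a CNAME, which is one reason Netlify DNS (or a DNS provider with CNAME-flattening/ALIAS support) is suggested for this setup.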

@Pie Hi Scott! Thanks for taking a look.

Our main domain has a CNAME. Are you seeing something else?

Hi, @chrism2671, I do show that was uncached content and that it took an extremely long time to send.

I don’t have a fix for this slow TTFB issue. There is an open issue tracking this and we will update this community topic to notify you if/when the issue is known to be resolved.

I can also confirm that the domain is using a CNAME correctly, that it is the primary custom domain for the site, and that the response for that x-nf-request-id came from a CDN node in the EU.

If there are other questions about this, please let us know.

Apologies, yeah – I checked the config on the apex domain, thinking that was the primary. D’oh!

To echo Luke’s point, we’ll loop back on any developments in the open issue.