Netlify lambda functions sometimes return responses from previous deployments

Hi,

I recently noticed that Netlify lambda functions sometimes return responses from previous deployments.

To test this issue, I committed a version.js function that outputs an integer version number, then bumped the version from 1 to 2 to 3 across several commits.

For example, when visiting https://eager-booth-dc4b16.netlify.com/.netlify/functions/version, it should output ‘version: 3’.

I noticed that it outputs ‘version: 3’ right after my commit. But after several minutes, it outputs ‘version: 2’ when I visit the link again. Then, after a couple of minutes, it outputs ‘version: 3’ again.

It seems that some other users reported similar issues in the past year:


It seems that the problem still exists, and it looks like some edge nodes still point to a previous version of the lambda function after the latest deployment.

I have used Netlify for some time. The other functions work very well and I really enjoy the platform. I hope this issue can be solved.

Thank you.

Hi, @dps. Yes, I do believe that same issue is affecting this site’s functions.

Are you still getting the previous version when you test? If so, please let us know and we can get that corrected for this site.

We also have an open issue tracking this behavior. The issue and this community topic are now cross-linked for tracking. We’ll update you here if/when the issue is known to be resolved.

Again, please let us know if you still see the previous version when you test, and we can resolve the issue with this site’s function by making a change here.

Hi luke,

I’m facing the issue described in this thread too. I was searching for the issue and stumbled onto this thread. I often get responses from older deployments/versions of my Netlify functions. Has this been fixed yet?

I have this same issue too. When I use my browser, it’s hitting the old function; when using curl, it gets the newer version.

I verified that the browser is not using its cache and is indeed hitting the server.

If it helps: 128.199.185.38:443 is returning the old version, and 157.230.37.202:443 is returning the newer version.

I further verified this by forcing curl to hit those servers separately, and confirmed that behavior.

hi @abejith and @rajatjindal - i have added your information to the issue we are working on.

I’ll let you know as soon as we have an update for you - I know this is a frustrating bug to encounter and deal with. I’m sorry for the inconvenience.

Hi @perry, I am still facing this issue too.

I have checked the ARN (of the AWS Lambda function) for a recently deployed Netlify function and saw two different ARNs when requesting from different devices, or from the same device at different times. I guess one ARN is the latest version and the other is the previous version.

I have faced this type of issue several times in the past few months. These issues seem to resolve themselves after several minutes or hours.

yeah this seems to happen way too frequently. I’ve been getting responses from my function that I deployed over 12 hours ago. even tried sleeping over it, but no help. :slight_smile:

not writing this in complaint mode (because Netlify just deserves love :heart:), but this is resulting in a lot of productivity loss (not to mention a lot of build minutes spent trying to debug/reproduce).

noted, @rajatjindal and @dps. thanks for chiming in - as mentioned, i’ll report back here as soon as we have an update.

can’t wait to get back to using Netlify functions.

@perry please sign me up for updates as well. @luke, it seems this is something you are able to fix on your side; if you could apply the fix to the site smplrspace-dev, I’d be grateful. Thank you.

Also, is there any workaround until this is fixed? Like pushing again to force a new deployment, or something similar? What should we do if we get this on production sites?

I am facing the same issue: I am getting responses from old builds. Sometimes it works, but sometimes it doesn’t.

hey there, we’re super pleased to report that after some effort, we think we tracked down and remediated the source of this problem system-wide - we have rolled out a fix :tada:

If you keep seeing this issue, please let us know!