6:09:19 PM: Build script success
6:09:19 PM: Starting to deploy site from 'public/'
6:09:21 PM: Creating deploy tree asynchronously
6:09:28 PM: 6078 new files to upload
6:09:28 PM: 0 new functions to upload
6:11:00 PM: System Error
6:39:11 PM: Build script success
6:39:11 PM: Starting to deploy site from 'public/'
6:39:13 PM: Creating deploy tree asynchronously
6:39:18 PM: 6078 new files to upload
6:39:18 PM: 0 new functions to upload
6:40:56 PM: System Error
6:47:58 PM: Build script success
6:47:58 PM: Starting to deploy site from 'public/'
6:47:59 PM: Creating deploy tree asynchronously
6:48:04 PM: 6078 new files to upload
6:48:04 PM: 0 new functions to upload
6:49:39 PM: System Error
Well, that’s a bummer - we were feeling hopeful we had addressed the issue. We’ll take another look at this next week with the team - it seems possible that this error only occurs when a large number of files is being uploaded.
To my knowledge there is no explicit limit, but there may very well be practical limits.
We have been working on this part of the build process, and also on the logging, for a little while now (you can see how the logged error changed from “System Error” to “Build exited unexpectedly”).
There is another person here in the forums who is also trying to upload a similarly high number of files (in the tens of thousands) at the same time:
(Ignore the first half of the thread, up until we changed his build timeout.)
My best guess is that something about these large volumes of files is causing problems, which is why I need to go figure out who to talk to about it (and then find time to troubleshoot).
One question I do have: what kind of deployment workflow are you using? Are you just pushing to the repo and then deploying via CI?
I wonder if it would work better if you used the CLI; it is worth a try if you have the inclination to experiment a bit.
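For reference, a minimal sketch of deploying a prebuilt folder with the Netlify CLI - this assumes the CLI is installed and you have already authenticated and linked the site:

```shell
# Install and authenticate once (hypothetical one-time setup):
#   npm install -g netlify-cli
#   netlify login
#   netlify link

# Deploy the already-built 'public' directory straight to production,
# bypassing the CI build step entirely.
netlify deploy --dir=public --prod
```

Running `netlify deploy` without `--prod` first gives you a draft deploy you can inspect before promoting.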
I know this is super frustrating, but I promise that we are actively looking at it; it is just proving more complicated to debug than initially expected. Before we can roll out a fix for these unique cases, we need to test it and make sure it is safe to use at scale for 800,000+ people worldwide…
7:57:26 PM: Skipping functions preparation step: no functions directory set
7:57:26 PM: Caching artifacts
7:57:26 PM: Started saving node modules
7:57:26 PM: Finished saving node modules
7:57:26 PM: Started saving pip cache
7:57:26 PM: Finished saving pip cache
7:57:26 PM: Started saving emacs cask dependencies
7:57:26 PM: Finished saving emacs cask dependencies
7:57:26 PM: Started saving maven dependencies
7:57:26 PM: Finished saving maven dependencies
7:57:26 PM: Started saving boot dependencies
7:57:26 PM: Finished saving boot dependencies
7:57:26 PM: Started saving go dependencies
7:57:26 PM: Finished saving go dependencies
7:57:30 PM: Build script success
7:57:30 PM: Starting to deploy site from 'public'
7:57:33 PM: Creating deploy tree asynchronously
7:57:35 PM: Creating deploy upload records
7:58:31 PM: 6403 new files to upload
7:58:31 PM: 0 new functions to upload
8:00:06 PM: Build exited unexpectedly
mkdir -p public && node --max-old-space-size=8192 ./download.js && mkdir -p public && node --max-old-space-size=8192 ./structure.js
I am running two Node.js scripts: the first downloads structured XML from the API, and the second parses this XML into JSON files. Because of this, I have 40,000+ JSON files.
Howdy, and sorry for the long-standing deploy troubles you’re seeing! I spent a while looking into this via our internal logs, and the reason for that error is still not obvious to me; as you can see from the logs, the build completes successfully and the problem is in uploading.
So, we have scheduled time with our build system experts tomorrow to get their advice as to how we can get your builds to function reliably.
Hi there, and thanks for your patience while we worked with our build team to understand the failure. Unfortunately, all we found is that the deploy fails due to running out of memory, which is not expected or usual (I’ve never seen it before, and I’ve debugged well north of 20,000 builds in the past 4 years).
It’s interesting because your site doesn’t seem much bigger than tons of sites that work well:
6:39:18 PM: 6078 new files to upload
is well within the range of sites we intend to support. Can you confirm that’s as many files as your build generates, rather than a very small (<10%) subset? If it’s a huge deploy, I could try to talk you through sharding your site or your uploads to try to get a successful deploy; but if it isn’t, we may not be able to create a fix in the near term, in which case we’d refund as much of any past payments you’ve made as we can. I don’t see any charges on your account at present, but if you’ve had some, I would love to hear about it.
But first, I’d love to hear how many files your build produces, and how much space they take up on disk when you build locally, just to get a sense of the scale!
Thanks in advance for your help in troubleshooting.