
Stalling issue with downloading online lists


Recommended Posts


Probably the same as the issue you referenced in https://www.cach.ly/support/index.php?/topic/2007-download-full-cache-data-of-more-than-50-entries-stalls/&do=findComment&comment=10544, Nic:

Reasonably frequently (but irregularly), when I try to download one of my online lists for offline use (with "full cache data"), the request stalls on the last remaining chunk (network request) of 50 after all the other chunks have completed. Which chunk stalls varies -- today's example is a list with 226 cache entries that consistently (over several attempts within a few minutes of each other) stalls on the "50-100" chunk, long after all the others have disappeared as completed. When I cancel the download, the resulting list contains 176 entries, which confirms that only this second of five chunks failed. Over the last several months (and years), the same list has usually downloaded without trouble, with occasional bouts of this same problem, while the list itself changed only slightly over time (a cache or two added or removed, maybe some personal info added to a cache or two).
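For reference, the 50-entry chunking described above can be sketched as follows. This is a hypothetical helper, not Cachly's actual code; it just shows how the observed numbers line up:

```python
def chunk_ranges(total, size=50):
    """Split `total` list entries into request ranges of at most `size`."""
    return [(start, min(start + size, total)) for start in range(0, total, size)]

# A 226-cache list yields five chunks; a stall on the second ("50-100")
# leaves 226 - 50 = 176 caches downloaded, matching the observation above.
print(chunk_ranges(226))
# → [(0, 50), (50, 100), (100, 150), (150, 200), (200, 226)]
```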

I'm using Cachly 7.1.1(1) on iOS 17.1.2, on an iPhone 12 mini. It doesn't matter whether I use my home wifi or a mobile data connection. I'm a GC.com premium user.

To help with debugging this, I created smaller lists with 40-50 of these 226 caches each. Every one of these lists downloaded without incident.

After this, I deleted the original list online and created a new list on GC.com (new ID, but I reused the old name), adding the caches from the smaller lists one batch at a time and downloading the growing list into Cachly after each batch. With two of the smaller lists combined (96 caches), it downloaded fine. With the third added (136 caches), it still downloaded fine. But after adding the fourth (40 additional caches), it stalled again, on the "1-50 caches" network request.

After a long wait, I cancelled the download. I deleted the offline versions of the individual chunks and redownloaded the likely "offender" as a small list -- no problem. Then I redownloaded the 186-cache list: it stalled again on the last chunk.

OK, so then I started narrowing the smaller list down from 40 to 20 caches and so forth. I got to the point where it was the mini-list of GC99VAV, GC3B0TC and GC7EFVJ that changed a list from fine to broken when added as #147 to #149. But adding only GC3B0TC and GC7EFVJ to the previous list of 146 other caches did not break it, nor did adding only GC99VAV, or GC99VAV and GC3B0TC. As part of lists of up to 148 caches, each of these was fine. But once I added any one of them as the 149th cache to the list, it broke. (And of course, just as with the bigger lists used before, downloading a list of only these three caches was no problem at all.)

So, here I am, thoroughly confused. I'll send you the debug file in a minute, Nic. I need to do something else now after a couple of hours of debugging Cachly, but if I can, I'll also try to narrow down the "other side" of this problem -- the 146 seemingly fine caches -- in the coming days or weeks, and test more thoroughly whether the position (being added as the 149th cache to a list) could really be significant. Sounds super-weird to me.

Is this helpful? Happy to take instructions for adding further useful info.


Philipp / MrDosinger


On 12/9/2023 at 2:36 AM, MrDosinger said:

Is this helpful? Happy to take instructions for adding further useful info.

This is helpful, and I have seen this issue reported many times over the last year. The issue is that this is an API bug where one of the requests returns a timeout error. I have reported this to HQ many times, but they say they cannot replicate it.


9 hours ago, Nic Hubbard said:

The issue is that this is an API bug where one of the requests returns a timeout error. I have reported this to HQ many times, but they say they cannot replicate it.

Thank you! So, what's next? Is there any hope of getting HQ to work on this? With my pared-down problem list BMCYGKP, the problem has so far appeared every time I've tried downloading it -- maybe it's replicable after all? If not, is there a way for Cachly to catch the timeout from the API and break the offending chunk down further until only one or two caches are missing, in which case Cachly could conclude the download with a message so the user knows which cache(s) are missing but can otherwise happily use their list?
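The fallback suggested here -- splitting a timed-out chunk in half repeatedly until the offending cache(s) are isolated -- could look roughly like this. This is only a sketch of the idea, not Cachly's implementation; `fetch_chunk` is a hypothetical stand-in for the actual API call:

```python
def isolate_failures(codes, fetch_chunk):
    """Download `codes` via `fetch_chunk`; on timeout, recursively halve the
    chunk. Returns (downloaded_codes, failed_codes) so the app can finish the
    list and tell the user exactly which cache(s) are missing."""
    try:
        return fetch_chunk(codes), []
    except TimeoutError:
        if len(codes) == 1:
            return [], list(codes)  # single offender isolated: report it
        mid = len(codes) // 2
        ok_left, bad_left = isolate_failures(codes[:mid], fetch_chunk)
        ok_right, bad_right = isolate_failures(codes[mid:], fetch_chunk)
        return ok_left + ok_right, bad_left + bad_right


# Simulated example: pretend any request containing GC99VAV times out.
def fake_fetch(codes):
    if "GC99VAV" in codes:
        raise TimeoutError
    return list(codes)

print(isolate_failures(["GC3B0TC", "GC7EFVJ", "GC99VAV"], fake_fetch))
# → (['GC3B0TC', 'GC7EFVJ'], ['GC99VAV'])
```

The cost is extra requests only for failing chunks, so well-behaved lists download exactly as they do today.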


