
Saturday, 28 February 2026

Feed Cache Goes Live

After several days of testing, the feed cache has now gone live on one page. Requesting the Walk Collections page is now very fast; that page needs to query every post in the feed. The old code took time requesting each of the feed URIs in turn, whereas the new code simply requests the cached feed file from the GitHub CDN.

There is also a new page, originally used for testing, that pulls back a random selection of walks.

Other pages may benefit from this too, although where a label returns only a limited set of results, querying the live feed directly is generally efficient enough.

The only issue found during the test period was that the CDN appeared either to go down or to lose the data. This resolved itself, so it is not certain what caused it; it was a one-off. In such cases the code falls back to requesting the live feed.
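The fallback can be sketched along these lines. This is a minimal illustration, not the site's actual code: the two fetcher functions are injected so the logic stands alone, and the assumption that the payload exposes an entries array is mine.

```javascript
// Sketch of the cache-with-fallback idea. loadFeed() tries the cached
// copy first and falls back to the live Blogger feed if the CDN request
// fails or returns unexpected data. The fetchers are injected so the
// logic can be exercised without a network.
async function loadFeed(fetchCached, fetchLive) {
  try {
    const cached = await fetchCached();
    // Treat an empty or malformed payload as a cache failure too.
    if (cached && Array.isArray(cached.entries)) return cached;
    throw new Error('cache returned unexpected data');
  } catch (err) {
    // One-off CDN outages land here: fall back to the live feed.
    return fetchLive();
  }
}
```

In production, fetchCached would request the cache file from the GitHub CDN and fetchLive would run the existing feed-request routine.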

Wednesday, 25 February 2026

Site Search Response Times and Employing a Cached Feed

Currently, the search page routines on Griffmonsters Walks use a script to pull back all posts from the feed as JSON, which can then be interrogated. This has to be performed recursively, because each feed request returns only a limited number of entries. To return all entries, the script counts the entries in the returned data and uses that count to make another feed request with the appropriate start-index value, recursing over the feed requests until a request returns no entries.

https://griffmonster-walks.blogspot.com/feeds/posts/summary?alt=json&start-index=38
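The recursion described above can be sketched as follows. It assumes each response exposes its entries under feed.entry, as Blogger's JSON feeds do; fetchPage is injected so the pagination logic is shown independently of the network.

```javascript
// Sketch of the recursive feed walk. fetchPage(startIndex) is expected
// to return the parsed JSON for one feed request, e.g.
//   /feeds/posts/summary?alt=json&start-index=<startIndex>
// We keep requesting, advancing start-index by the number of entries
// received, until a page comes back empty.
async function fetchAllEntries(fetchPage, startIndex = 1, collected = []) {
  const page = await fetchPage(startIndex);
  const entries = (page.feed && page.feed.entry) || [];
  if (entries.length === 0) return collected; // no more entries: done
  return fetchAllEntries(fetchPage, startIndex + entries.length,
                         collected.concat(entries));
}
```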

This works well, albeit the user has to wait a few seconds until all items have been retrieved. As the number of walks on the site increases, this wait will get progressively worse, and it has been in the back of my mind for some time now. In recent days I have settled on a proposed solution: a cache of the feed data. After some research, the approach I am currently testing is a cache generated on GitHub, using a simple Node.js script that runs a routine similar to the one currently on the search pages. Combined with a GitHub workflow that can regenerate the cache file, this provides a cached copy of the full feed, available via a URI request to GitHub. To keep the size down, the cache is filtered to only the components required by the search routines.
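The filtering step might look something like this. The particular fields kept here (title, link, labels, summary) are an assumption about what the search routines need, not a statement of what the actual cache contains; the $t/link/category shapes follow Blogger's JSON feed format.

```javascript
// Sketch of slimming a Blogger feed entry down to just the fields a
// search routine might need. The field choice is illustrative.
function slimEntry(entry) {
  // Blogger marks the post's public URL with rel="alternate".
  const alternate = (entry.link || []).find(l => l.rel === 'alternate');
  return {
    title: entry.title ? entry.title.$t : '',
    url: alternate ? alternate.href : '',
    labels: (entry.category || []).map(c => c.term),
    summary: entry.summary ? entry.summary.$t : ''
  };
}
```

The cache-building script would map slimEntry over every entry returned by the full feed walk and write the result out as a single JSON file.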

Initial tests are looking promising, although in some cases there was a 10+ second delay in retrieving the cached data. After some more investigation and tweaks, plus taking advantage of the GitHub CDN, this now seems lightning fast compared to the present routines. There is more testing to do, but a running test page is currently looking good, and the promise of more responsive search pages is on the horizon.

To make this complete, there is already a daily cache refresh in the GitHub workflow, plus a routine in my local Ant workflow that can be fired when deploying new or updated posts; this triggers a cache update and a CDN purge of the data.
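A workflow along these lines would cover both the daily refresh and the deploy-time trigger. This is a sketch of the idea only: the workflow name, file paths, script name, and dispatch event type are all hypothetical, not the site's actual configuration.

```yaml
# Sketch: regenerate the cached feed daily, or on demand when the
# local deploy process fires a repository_dispatch event.
name: refresh-feed-cache
on:
  schedule:
    - cron: '0 3 * * *'         # daily refresh
  repository_dispatch:
    types: [refresh-cache]      # fired by the local deploy routine
jobs:
  build-cache:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-node@v4
      - run: node build-cache.js   # hypothetical cache-building script
      - run: |
          git config user.name github-actions
          git config user.email github-actions@github.com
          git add feed-cache.json
          git commit -m "refresh feed cache" || true
          git push
```

The repository_dispatch event is what a local deploy step would invoke via the GitHub API to force a refresh outside the daily schedule.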

I will keep you updated on progress.