Saturday, 28 February 2026

Feed Cache Goes Live

After several days of testing, the feed cache has now gone live on one page. The Walk Collections page, which needs to query every post in the feed, now loads very quickly. The old code took time to request all of the feed URIs, whereas the new code simply requests the cached feed file from the GitHub CDN.

There is also a new page, originally used for testing, that pulls back a random selection of walks.

Other pages may benefit from this too, although where a label has only a limited number of results, querying the feed directly generally works efficiently enough.

The only issue found during the test period was that the CDN appeared either to go down or to lose the data. This resolved itself, so it is not certain what caused it; it was a one-off. In such cases the code falls back to requesting the live feed.
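The fallback can be sketched roughly as below. This is illustrative rather than the site's actual code: the function names are invented, and the fetch function is passed in purely to make the sketch self-contained.

```javascript
// Sketch: try the cached feed on the GitHub CDN first; if the request
// fails, or the cache comes back empty, fall back to the live feed.
async function loadFeed(cacheUrl, liveFeedFn, fetchFn = fetch) {
  try {
    const res = await fetchFn(cacheUrl);
    if (res.ok) {
      const data = await res.json();
      // An empty or missing cache counts as a miss
      if (Array.isArray(data) && data.length > 0) return data;
    }
  } catch (err) {
    // CDN down or data lost -- fall through to the live feed
  }
  return liveFeedFn(); // the existing recursive live-feed retrieval
}
```

The key point is that a CDN outage degrades gracefully back to the old behaviour rather than breaking the page.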

Wednesday, 25 February 2026

Site Search Response Times and Employing a Cached Feed

Currently, the search page routines on Griffmonsters Walks use a script to pull back all posts from the feed as JSON, which can then be interrogated. This has to be performed recursively, as each feed request only returns a limited number of entries, for whatever reason. Therefore, to retrieve every entry, the script counts the entries in the returned data and uses that count to make another feed request with the appropriate start-index value, recursing over the feed requests until a request returns no entries.

https://griffmonster-walks.blogspot.com/feeds/posts/summary?alt=json&start-index=38
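The recursion described above can be sketched as follows. The entry array location (`feed.entry`) matches Blogger's JSON feed shape; the function name and the injectable fetch parameter are illustrative only.

```javascript
// Sketch of the recursive feed retrieval: keep requesting pages, bumping
// start-index by the number of entries received, until a page is empty.
async function fetchAllEntries(baseUrl, fetchFn = fetch) {
  const all = [];
  let startIndex = 1;
  while (true) {
    const res = await fetchFn(`${baseUrl}?alt=json&start-index=${startIndex}`);
    const data = await res.json();
    // Blogger's JSON feed puts entries in feed.entry (absent when empty)
    const entries = (data.feed && data.feed.entry) || [];
    if (entries.length === 0) break; // a feed with no entries ends the recursion
    all.push(...entries);
    startIndex += entries.length;    // next request starts after what we already have
  }
  return all;
}
```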

This works well, albeit the user has to wait a few seconds until all items have been retrieved. As the number of walks on the site increases, this wait will get progressively worse, and it has been in the back of my mind for some time now. In recent days I have arrived at a proposed solution: a cache of the feed data. After some research, the direction I am currently testing is a cache generated on GitHub, using a simple Node.js script that runs much the same routine as the search pages currently do. Combined with a GitHub workflow that can regenerate the cache file, this makes a cached copy of the full feed available via a single URI request to GitHub. To keep the size down, the cache is filtered to only the components required by the search routines.

Initial tests are looking promising, although in some cases there was a 10+ second delay in retrieving the cached data. With some more investigation and tweaks, plus taking advantage of the GitHub CDN, this now seems lightning fast compared to the present routines. There is more testing to do, but a running test page is currently looking good, and the promise of more responsive search pages is on the horizon.

To make this complete, there is already a daily cache refresh in the GitHub workflow, plus a routine in my local Ant workflow that can be fired when deploying new or updated posts; this triggers a cache update and a CDN purge of the data.

I will keep you updated on progress.

Friday, 13 February 2026

Site Updates and New Tools

It is only February, and—like buses you wait ages for—the Griffmonster websites have seen two major updates within six weeks of each other.

The site was overdue for a significant refresh. Work commitments over recent years meant that maintenance and improvements were left unattended for longer than intended. Hopefully, things will now settle into a steadier rhythm, with only incremental updates in the months ahead.

The first round of updates at the start of the year focused primarily on under-the-hood improvements. The most recent changes concentrate on the front end, improving usability for new visitors and bringing a more consistent design across the site.

What Has Changed

  • A new home page – Less cluttered, fully responsive, and designed to provide clear access to all key areas of the site. Font-based icons are now used to reinforce a consistent visual theme.
  • Unified search pages – Lists and walks have been combined into single search interfaces, with results displayed simultaneously on both map and list views.
  • New search functionality, including:
    • A location search that defaults to the user’s location (where available), while also allowing place name and postcode searches.
    • A collection search, accessible from collection pages and linked references within walk pages. (These correspond to Blogger labels, though “collection” is a clearer description.) Some collections remain hidden, such as archived pages and deprecated news items previously used before Facebook became the primary news channel.
    • Adjustments to the text/keyword search widget so that results align more closely with the rest of the site. This still uses Blogger’s built-in functionality and therefore remains paginated, whereas other searches display all results on a single map and/or list view by querying site feeds.
  • A flexible layout that optimises screen space, particularly on larger displays where search results are now presented in columns.
  • Expanded top menu navigation, making it easier to move between key sections when using mobile devices.
  • A new GPX Route Creator, described in more detail below.
  • Greater prominence given to the Hiking Time Calculator, a utility I use regularly—particularly when planning routes involving public transport. After calibrating it against several previously completed walks to determine an accurate average walking speed when hiking with regular companions, it has become a very practical planning tool.
  • Similar structural updates have also been applied to the Rhodes site, although these are less extensive due to the smaller number of walks currently listed there.

Overall, these changes should make the site easier to navigate, more intuitive for new visitors, and more efficient when searching for walks tailored to specific requirements.

GPX Route Creator

The new tool can be found on the GPX Route Creator page.

For many years, all walks have begun with route research, followed by plotting on an online mapping utility and exporting the result as a GPX file. Over time, many different tools have been used, with GPS Visualizer being one of the most consistent since GPX files became widely used alongside traditional OS maps. For the Rhodes walks in particular, GPX tools were invaluable when reliable mapping was not available.

It has long been an ambition to develop an in-house GPX creator—not necessarily to replace established tools entirely, but as both a learning exercise and a practical addition to the site.

An earlier alpha version was built using customised open-source code and quietly hosted on the site. While functional, it was cluttered, somewhat temperamental, and difficult to use on mobile devices. It never quite achieved the simplicity that was intended.

That version has now been retired and replaced with a new build created from scratch. Although still considered a beta version, it is significantly cleaner and easier to use.

Key Improvements

  • Use of the full page width, with sidebars, header, and footer removed to maximise map space.
  • A full-screen map option for all devices.
  • A collapsible and draggable control panel for route creation, including undo, clear, and export functions.
  • A snap-to-path toggle to assist with accurate route plotting—particularly helpful when working on a mobile device.

Initial testing has been very positive, and we have already used it to create numerous new routes in place of external tools.

One limitation is the absence of elevation data within exported GPX files. Elevation support was explored using the Open-Meteo Elevation API, but request rate limits frequently resulted in HTTP 429 responses. As a result, this feature was removed. Elevation data can still be added afterwards using external tools such as GPS Visualizer.
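For reference, the approach that was tried would look something like the sketch below: batch the route points into one Open-Meteo Elevation API request, and back off and retry when the service answers HTTP 429. The endpoint and its `latitude`/`longitude` query parameters match Open-Meteo's documented API, but the retry counts and delays are guesses, and this is exactly the part that proved unreliable in practice.

```javascript
// Sketch: look up elevations for a batch of route points via the
// Open-Meteo Elevation API, retrying with backoff on HTTP 429.
async function fetchElevations(points, fetchFn = fetch, sleepMs = 1000) {
  const lat = points.map(p => p.lat).join(',');
  const lon = points.map(p => p.lon).join(',');
  const url = `https://api.open-meteo.com/v1/elevation?latitude=${lat}&longitude=${lon}`;
  for (let attempt = 0; attempt < 3; attempt++) {
    const res = await fetchFn(url);
    if (res.status === 429) {
      // Rate limited: wait progressively longer, then retry
      await new Promise(r => setTimeout(r, sleepMs * (attempt + 1)));
      continue;
    }
    const data = await res.json();
    return data.elevation; // one value per input point
  }
  throw new Error('Open-Meteo rate limit persisted after retries');
}
```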

Overall, we are pleased with what has been achieved so far and would welcome any feedback from those who use it.

Looking Ahead: Undocumented Routes

The broader future direction of the site is also under consideration—not its existence, but its purpose.

When the site began in 2010, it functioned primarily as a personal blog documenting completed walks, with the hope that others might find the information useful. Over time, the emphasis has shifted. The core objective now is to record validated walks and provide GPX downloads that others can confidently use. While a personal narrative element remains, it is no longer the primary focus.

During the planning process for new walks, multiple routes are often developed, though only some are ultimately undertaken. Over the years, this has resulted in a substantial collection of researched routes that have not been personally walked but do exist as fully developed GPX files.

The question now is whether these should be made publicly available as undocumented routes.

In the UK, where routes often follow established Public Rights of Way protected by legislation, there is a reasonable basis for sharing such routes even if they have not been personally validated on the ground.

This is not the case for the sister site, Great Rhodes Walks, where Greek paths and tracks do not carry the same legal certainty. A route cannot be assumed to be passable unless it has been physically walked, and even then access conditions can change without formal notice.

For the UK site, releasing a limited number of clearly labelled undocumented routes may be worthwhile as a trial. The response will help determine whether this becomes a more permanent feature.

We shall see.


Monday, 19 January 2026

To Markdown or not to Markdown

Griffmonster Walks has always relied upon XML workflows, where the raw data is composed either in walkML, our very own flavour of XML dedicated to describing routes and trails, or, more recently, as an HTML extension to the GPX metadata.

In either case it requires manually adding the markup to the authored content, which can be time consuming. Even walkML can take custom HTML sections in addition to the native XML format. This has historically been one of the tedious parts of the entire workflow, and great use has been made of copying and pasting existing data to provide a template framework.

In addition, over the years it has become apparent that there are no totally free or open-source options for this kind of authoring, and we have relied upon Notepad++ with the XML plugin as our authoring tool. This is the mainstay of all development and authoring for Griffmonsters Walks.

More recently there has been consideration of employing Markdown to author the content and then converting it to the required XML/HTML. Markdown is a lot easier to write than hand-coded HTML/XML, and with many years of experience of authoring in Markdown, this seems a possible way forward. A similar project was undertaken some years ago during my employment days, but for whatever reason it was abandoned. Unfortunately I was not directly involved in that project and so was never party to why it was shelved.

So, it is time to look at this from the viewpoint of Griffmonsters Walks. Thus far, two options have presented themselves:

  1. An online solution, StackEdit (https://stackedit.io/app#), with HTML export
  2. An offline solution using Notepad++ with the Markdown Viewer plugin, which also has a native HTML export

The idea is to author the data in Markdown, export to HTML, and use XSLT to adjust the data into the HTML code required for the walk data. This sounds fairly simple to undertake, providing we use a few simple rules in the Markdown to define the various sections of the data.

Further to this, an initial investigation has revealed more. One issue with most of the HTML export routines is that the output is not structured. This could be overcome in a subsequent XSLT, but I would rather start with properly structured data. There are many, many Markdown editors, but I am favouring Notepad++ on account of it being my go-to tool for authoring and code development.

Another tool that has been found is Pandoc (https://github.com/jgm/pandoc), which can return structured HTML. It is a command-line tool, which I can integrate into an Ant workflow.

This will be another little project to keep me out of trouble!

Postscript

This was supposed to be a little investigation, but it has gone so well that it has turned into a whole new workflow. So here we go; this is what we have done:

  1. Looked around at the options. Notepad++ was the most familiar and easiest to use for authoring in Markdown, though it doesn't really matter, as any Markdown editor can be employed so long as the output is consistent
  2. Used Pandoc to convert Markdown to HTML. This runs a lot better than expected, with switches to provide a template to output into, adjust white space and, most importantly, produce structured HTML markup. It even, by default, marks up images exactly how we mark up images within the blog, using the figure and figcaption elements
  3. Used a simple XSLT to adjust the Pandoc output to the HTML required by the current pipelines; this basically adjusts identifiers and classes
  4. Added another XSLT to merge the HTML into the GPX metadata extension
  5. Integrated this into an Ant workflow, which now enables authoring, hitting the button, and seeing the end result churned out ready to publish

The only caveats are that it requires adherence to the h1 headings that drive it, which is not a big issue, as a Markdown template will be a sufficient starting point, and that the file-naming convention needs to be followed. Currently the metadata still sits in the GPX file, although this too could be added to the Markdown; it is simply a matter of filling in fields in the GPX.

This has taken just a single day to both author a sample document and develop the workflow. I really never expected that. This will speed up blog post generation no end in the future. I think I deserve a drink for the effort!

Friday, 9 January 2026

Walks Publishing Workflow

The workflow for the data published to both the Griffmonster Walks and Rhodes Walks sites follows an XML pipeline.

This uses:

  1. Source data, which can either be Griffmonster Walks' own walkML or GPX data with HTML-extension metadata
  2. An added step to insert QR code images and print-ready map images. This step also runs several clean-up jobs to iron out invalid data
  3. Transformation to the blog HTML ready for publishing; this is held in a preload folder
  4. A publish folder, from which specific items can be published. This may not be totally necessary, as the script has an allowance for not publishing items that have been published within the last 90 hours
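The publish guard in step 4 amounts to a simple timestamp check, which might be sketched as below. The 90-hour window comes from the note above; the field name `lastPublished` is illustrative, not the pipeline's actual data shape.

```javascript
// Sketch of the publish guard: skip an item if it was already published
// within the last 90 hours.
const WINDOW_MS = 90 * 60 * 60 * 1000; // 90 hours in milliseconds

function shouldPublish(item, now = Date.now()) {
  if (!item.lastPublished) return true; // never published before
  return now - new Date(item.lastPublished).getTime() > WINDOW_MS;
}
```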