Friday, 13 March 2026

Leafletjs issue with elevation toggle button

Both the Rhodes Great Walks and Griffmonsters Great Walks sites make ample use of the Raruto Leaflet elevation plugin. This works very well, providing a graphical display of the elevation of the GPX route, as seen in the image above. However, there has been a persistent niggling bug with the top-right toggle switch which collapses/expands the elevation panel. After the page initially loads, the first click on the toggle switch does nothing and a second click is required to collapse the panel. The toggle button then works fine on subsequent clicks.

When trying to resolve the issue, it was noticed that right-clicking on the toggle button to bring up the developer panel in Chrome resulted in the panel collapsing. That observation provided the "smoking gun" for the cause of the issue. It suggests that the plugin is listening for focus, context menu, or window/container blur events to determine its state, rather than just a simple click. When you right-click, the browser shifts focus and triggers a re-render or state check in Leaflet; the plugin realises it is supposed to be expanded (or collapsed) and snaps into position. This is why the first click does nothing: the plugin is essentially "asleep" until an external event (such as a right-click or a second click) wakes it up.

The following function provided the solution. It works on three fronts:

  • Focusing: By calling btn.focus(), we mimic the part of the right-click that tells the browser "this element is now active."
  • Event Dispatching: Many Leaflet plugins don't just listen for click; they listen for the mousedown/mouseup sequence. By firing those manually, we clear the "waiting" state of the plugin.
  • The Result: When the user finally performs their "first" click, the plugin already thinks it has been interacted with, so it responds immediately.
// Force the toggle button to work on page load
controlElevation.on('eledata_loaded', function() {
    // 1. Find the toggle button
    var btn = document.querySelector('.elevation-toggle-icon');

    if (btn) {
        // 2. Simulate the "focus/wake" behaviour seen when right-clicking
        btn.focus();

        // 3. Dispatch a fake "initial" click that the plugin expects,
        // done so fast the user doesn't see it. This primes the toggle logic.
        btn.dispatchEvent(new MouseEvent('mousedown', {bubbles: true}));
        btn.dispatchEvent(new MouseEvent('mouseup', {bubbles: true}));

        // 4. Force the internal variable to match the VISUAL reality.
        // Since the panel is visible on load, mark it as NOT collapsed.
        controlElevation._collapsed = false;

        // The fix: force the plugin to initialise its expanded-state logic
        if (typeof controlElevation._expand === "function") {
            controlElevation._expand();

            // Ensure the internal state is synced so the next click is a 'collapse'
            controlElevation._collapsed = false;
        }

        // Final layout refresh
        setTimeout(function() {
            if (typeof map !== 'undefined') map.invalidateSize();
        }, 100);

        // 5. Trigger a fake resize to make sure the internal 'brain' is active
        window.dispatchEvent(new Event('resize'));
    }
});

Thursday, 12 March 2026

Distance and Terrain Information

More development tweaks have now been rolled out. When using the location and collection searches, lozenges indicating both distance and terrain are displayed at the top of each walk card in the results. This is not available on the default Blogger label searches.

A key is provided for the various distance and terrain legends; this is available from an information button that sits to the right of the lozenges.

This has also been implemented in the map markers.

For those who want to see this in practice, go to All Walks Search

Tuesday, 3 March 2026

Saturday, 28 February 2026

Feed Cache Goes Live

After several days of testing, the feed cache has now gone live on one page. Requesting the Walk Collections Page is now very fast; this page needs to query all the posts in the feed, and whereas the old code took time to request all the feed URIs in turn, the new code simply requests the cached feed file from the GitHub CDN.

There is also a new page that was used for testing that pulls back a Random Selection of Walks.

Other pages may also benefit from this, although where a label has a limited number of results the existing approach, using the feed itself, generally works fairly efficiently.

The only issue found during the test period was when the CDN seemed to either go down or lose the data. This resolved itself, so it is not certain what caused it; it appears to have been a one-off. In such cases the code falls back to requesting the live feed.
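The fallback described above can be sketched roughly as follows. This is a hypothetical illustration, not the site's actual code: the function and URL names are invented, and the fetch function is injected so the pattern is easy to test.

```javascript
// Hypothetical sketch of the cache-with-fallback pattern described above.
// cachedUrl, liveUrl and the injected fetchFn are illustrative names.
async function fetchFeedWithFallback(cachedUrl, liveUrl, fetchFn) {
  try {
    const response = await fetchFn(cachedUrl);
    if (!response.ok) throw new Error(`CDN returned ${response.status}`);
    return await response.json();
  } catch (err) {
    // CDN down or data missing: fall back to the live Blogger feed
    console.warn('Cache unavailable, falling back to live feed:', err.message);
    const response = await fetchFn(liveUrl);
    return await response.json();
  }
}
```

Injecting `fetchFn` (rather than calling `fetch` directly) keeps the fallback decision separate from the network layer, which also makes the one-off CDN failure scenario straightforward to simulate.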

Wednesday, 25 February 2026

Site Search Response times and Employing a Cached Feed

Currently, the search page routines on Griffmonster Walks use a script to pull back all posts from the feed as JSON, which can then be interrogated. This has to be performed recursively because each feed request only returns a limited number of entries. To return all entries, the script counts the entries in the returned data and uses this number to make another feed request with the appropriate start-index value, recursing over the feed requests until a request returns no entries.

https://griffmonster-walks.blogspot.com/feeds/posts/summary?alt=json&start-index=38
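The paging arithmetic behind those start-index values can be illustrated with a small sketch (the function name is illustrative). Each request returns some number of entries, and the next start-index is the current one plus that count:

```javascript
// Illustrative sketch of the start-index arithmetic described above.
// pageSizes simulates the entry count returned by each successive feed request.
function startIndexSequence(pageSizes) {
  const indices = [];
  let start = 1;        // Blogger feeds are 1-indexed
  for (const count of pageSizes) {
    indices.push(start);
    start += count;     // next request begins after the entries just received
  }
  return indices;
}
```

For example, three requests returning 37, 37 and 12 entries would be fetched with start-index values 1, 38 and 75, matching the `start-index=38` seen in the URI above.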

This works well, albeit the user has to wait a few seconds until all items have been retrieved. As the number of walks on the site increases, this wait time will get progressively worse, and it has been in the back of my mind for some time now. In recent days I have arrived at a proposed solution: a cache of feed data. After some research, the direction currently being tested is a cache generated on GitHub, using a simple Node.js script to run a similar routine to the one currently on the search pages. Combined with a GitHub workflow that can regenerate the cache file, this provides a cached copy of the full feed, available via a URI request to GitHub. To keep the size down, the cache is filtered to only the components required by the search routines.
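The filtering step might look something like the sketch below. The field names reflect the general shape of Blogger's JSON feeds (`feed.entry`, `title.$t`, `category[].term`, `link[]` with a `rel="alternate"` post link); the exact fields kept by the real cache script may differ.

```javascript
// Hypothetical sketch of filtering a Blogger JSON feed down to the fields
// a search routine needs. The chosen fields are assumptions, not the
// actual cache contents.
function filterFeedEntries(feedJson) {
  const entries = (feedJson.feed && feedJson.feed.entry) || [];
  return entries.map(entry => ({
    title: entry.title ? entry.title.$t : '',
    published: entry.published ? entry.published.$t : '',
    labels: (entry.category || []).map(c => c.term),
    // keep only the alternate (post) link
    link: (entry.link || [])
      .filter(l => l.rel === 'alternate')
      .map(l => l.href)[0] || ''
  }));
}
```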

Initial tests are looking promising, although in some cases there was a 10+ second delay in retrieving the cached data. With some more investigation and tweaks, plus taking advantage of the GitHub CDN, this now seems lightning fast compared to the present routines. There is more testing to do, but a running test page is looking good, and the promise of more responsive search pages is on the horizon.

To make this complete, there is already a daily cache refresh in the GitHub workflow, plus a routine in my local Ant workflow that can be fired when deploying new or updated posts; this triggers a cache update and a CDN purge of the data.

I will keep you updated on progress.

Friday, 13 February 2026

Site Updates and New Tools

It is only February, and—like buses you wait ages for—the Griffmonster websites have seen two major updates within six weeks of each other.

The site was overdue for a significant refresh. Work commitments over recent years meant that maintenance and improvements were left unattended for longer than intended. Hopefully, things will now settle into a steadier rhythm, with only incremental updates in the months ahead.

The first round of updates at the start of the year focused primarily on under-the-hood improvements. The most recent changes concentrate on the front end, improving usability for new visitors and bringing a more consistent design across the site.

What Has Changed

  • A new home page – Less cluttered, fully responsive, and designed to provide clear access to all key areas of the site. Font-based icons are now used to reinforce a consistent visual theme.
  • Unified search pages – Lists and walks have been combined into single search interfaces, with results displayed simultaneously on both map and list views.
  • New search functionality, including:
    • A location search that defaults to the user’s location (where available), while also allowing place name and postcode searches.
    • A collection search, accessible from collection pages and linked references within walk pages. (These correspond to Blogger labels, though “collection” is a clearer description.) Some collections remain hidden, such as archived pages and deprecated news items previously used before Facebook became the primary news channel.
    • Adjustments to the text/keyword search widget so that results align more closely with the rest of the site. This still uses Blogger’s built-in functionality and therefore remains paginated, whereas other searches display all results on a single map and/or list view by querying site feeds.
  • A flexible layout that optimises screen space, particularly on larger displays where search results are now presented in columns.
  • Expanded top menu navigation, making it easier to move between key sections when using mobile devices.
  • A new GPX Route Creator, described in more detail below.
  • Greater prominence given to the Hiking Time Calculator, a utility I use regularly—particularly when planning routes involving public transport. After calibrating it against several previously completed walks to determine an accurate average walking speed when hiking with regular companions, it has become a very practical planning tool.
  • Similar structural updates have also been applied to the Rhodes site, although these are less extensive due to the smaller number of walks currently listed there.
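The internals of the Hiking Time Calculator are not given here, but the classic starting point for such a tool is Naismith's rule. A minimal sketch, assuming a calibrated flat-ground speed and the traditional allowance of one minute per 10 m of ascent (the function and its behaviour are illustrative, not the site's actual calculator):

```javascript
// Naismith-style estimate: NOT the site's actual calculator, just a sketch.
// speedKmh is the calibrated average walking speed on level ground.
function estimateHikingMinutes(distanceKm, ascentM, speedKmh) {
  const flatMinutes = (distanceKm / speedKmh) * 60;
  const ascentMinutes = ascentM / 10;   // Naismith: +1 minute per 10 m climbed
  return Math.round(flatMinutes + ascentMinutes);
}
```

Calibrating `speedKmh` against previously completed walks, as described above, is what turns a generic formula like this into a practical planning figure.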

Overall, these changes should make the site easier to navigate, more intuitive for new visitors, and more efficient when searching for walks tailored to specific requirements.

GPX Route Creator

The new tool can be found on the GPX Route Creator page.

For many years, all walks have begun with route research, followed by plotting on an online mapping utility and exporting the result as a GPX file. Over time, many different tools have been used, with GPS Visualizer being one of the most consistent since GPX files became widely used alongside traditional OS maps. For the Rhodes walks in particular, GPX tools were invaluable when reliable mapping was not available.

It has long been an ambition to develop an in-house GPX creator—not necessarily to replace established tools entirely, but as both a learning exercise and a practical addition to the site.

An earlier alpha version was built using customised open-source code and quietly hosted on the site. While functional, it was cluttered, somewhat temperamental, and difficult to use on mobile devices. It never quite achieved the simplicity that was intended.

That version has now been retired and replaced with a new build created from scratch. Although still considered a beta version, it is significantly cleaner and easier to use.

Key Improvements

  • Use of the full page width, with sidebars, header, and footer removed to maximise map space.
  • A full-screen map option for all devices.
  • A collapsible and draggable control panel for route creation, including undo, clear, and export functions.
  • A snap-to-path toggle to assist with accurate route plotting—particularly helpful when working on a mobile device.

Initial testing has been very positive, and we have already used it to create numerous new routes in place of external tools.

One limitation is the absence of elevation data within exported GPX files. Elevation support was explored using the Open-Meteo Elevation API, but request rate limits frequently resulted in HTTP 429 responses. As a result, this feature was removed. Elevation data can still be added afterwards using external tools such as GPS Visualizer.
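For reference, a common way to soften HTTP 429 responses is retrying with an exponential backoff. This sketch uses an injected fetch function and illustrative names; it is the general pattern, not what the route creator does, and as noted above rate limits proved restrictive enough that the feature was dropped regardless.

```javascript
// Hypothetical retry-with-backoff wrapper for a rate-limited API such as
// an elevation endpoint. fetchFn, retries and delayMs are illustrative.
async function fetchWithBackoff(url, fetchFn, retries = 3, delayMs = 500) {
  for (let attempt = 0; attempt <= retries; attempt++) {
    const response = await fetchFn(url);
    if (response.status !== 429) return response;
    // rate limited: wait, then try again with a doubled delay
    await new Promise(resolve => setTimeout(resolve, delayMs * 2 ** attempt));
  }
  throw new Error(`Still rate limited after ${retries} retries: ${url}`);
}
```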

Overall, we are pleased with what has been achieved so far and would welcome any feedback from those who use it.

Looking Ahead: Undocumented Routes

The broader future direction of the site is also under consideration—not its existence, but its purpose.

When the site began in 2010, it functioned primarily as a personal blog documenting completed walks, with the hope that others might find the information useful. Over time, the emphasis has shifted. The core objective now is to record validated walks and provide GPX downloads that others can confidently use. While a personal narrative element remains, it is no longer the primary focus.

During the planning process for new walks, multiple routes are often developed, though only some are ultimately undertaken. Over the years, this has resulted in a substantial collection of researched routes that have not been personally walked but do exist as fully developed GPX files.

The question now is whether these should be made publicly available as undocumented routes.

In the UK, where routes often follow established Public Rights of Way protected by legislation, there is a reasonable basis for sharing such routes even if they have not been personally validated on the ground.

This is not the case for the sister site, Great Rhodes Walks, where Greek paths and tracks do not carry the same legal certainty. A route cannot be assumed to be passable unless it has been physically walked, and even then access conditions can change without formal notice.

For the UK site, releasing a limited number of clearly labelled undocumented routes may be worthwhile as a trial. The response will help determine whether this becomes a more permanent feature.

We shall see.

Wednesday, 11 February 2026

Monday, 19 January 2026

To Markdown or not to Markdown

Griffmonster Walks has always relied upon XML workflows where the raw data is composed either in walkML, our very own flavour of XML dedicated to describing routes and trails, or more recently as an HTML extension to the GPX metadata.


In either case it requires manually adding the markup to the authored content, which can be time consuming. Even walkML can take custom HTML sections in addition to the native XML format. This has historically been one of the more tedious parts of the entire workflow, and great use has been made of copying and pasting existing data to provide a template framework.


In addition, over the years it has become apparent that there are no totally free or open-source options for this authoring, and we have relied upon Notepad++ with its XML plugin as our authoring tool. This is the mainstay of all development and authoring for Griffmonsters Walks.

More recently, there has been consideration of employing MarkDown to author the content and then converting this to the required XML/HTML. MarkDown is a lot easier to write than hand-coded HTML/XML, and with many years of experience in authoring MarkDown, it seems that this may be a way forward. A similar project was undertaken some years ago during my employment days but was abandoned; unfortunately I was not directly involved in the project and so was not party to why it was shelved.


So, it is time to look at this from the viewpoint of Griffmonsters Walks. Thus far, two options have presented themselves:


  1. An online solution, StackEdit https://stackedit.io/app# with HTML export
  2. An offline solution using Notepad++ with the Markdown Viewer plugin which also has a native HTML export
The idea is to author the data in MarkDown, export to HTML and use XSLT to adjust the data into the HTML code required for the walk data. This sounds fairly simple to undertake providing we use a few simple rules in the MarkDown to define the various sections of the data.

Further to this, an initial investigation has revealed more. One issue with most of the HTML export routines is that the exported data is not structured. This can be overcome in a subsequent XSLT, but I would rather start with properly structured data. There are many, many MarkDown editors, but I am favouring Notepad++ on account of it being my go-to tool for authoring and code development.

Another tool that has been found is pandoc https://github.com/jgm/pandoc, which can return structured HTML. This is a command-line tool which I can integrate into an Ant workflow.

This will be another little project to keep me out of trouble!

Postscript

This was supposed to be a little investigation but it has turned out to become a whole new workflow as it has gone so well. So here we go, what we have done:

  1. Having looked around at the options, Notepad++ was the most familiar and easy to use for authoring in MarkDown. It doesn't really matter, though, as any MarkDown editor can be employed as long as the output is consistent
  2. Use Pandoc to convert MarkDown to HTML. This runs a lot better than expected, with switches to provide a template to output into, adjust white space and, most importantly, produce structured HTML markup. It even, by default, marks up images exactly how we mark up images within the blog, using the figure and figcaption elements
  3. Use a simple XSLT to adjust the Pandoc output to the HTML required by the current pipelines; this basically adjusts identifiers and classes
  4. Added another XSLT to merge the HTML into the GPX Metadata Extension
  5. Integrated this into an Ant workflow which now enables authoring, hitting the button and seeing the end result churned out ready to publish
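As an illustration of step 2, recent pandoc versions typically render a paragraph containing only an image as a figure. The image name and caption below are hypothetical, and the exact attributes vary by pandoc version, so treat this as indicative output only:

```html
<!-- MarkDown input:  ![The pier at the start of the walk](pier.jpg) -->
<figure>
<img src="pier.jpg" alt="The pier at the start of the walk" />
<figcaption>The pier at the start of the walk</figcaption>
</figure>
```

This is exactly the figure/figcaption shape the blog markup already uses, which is why step 3's XSLT only needs to adjust identifiers and classes.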

The only caveats are that it does require adherence to the h1 headings which drive it; that is not a big issue, as a markdown template will provide a sufficient starting point. The file name convention also needs to be adhered to, and currently the metadata still sits in the GPX file, although this too could be moved to markdown; it is simply a matter of filling in fields in the GPX.

This has taken just a single day to both author a sample document and develop the workflow. I really never expected that. This will speed up blog post generation no end in the future. I think I deserve a drink for the effort!

Friday, 9 January 2026

Walks Publishing Workflow

The workflow for the data that gets published to both the Griffmonster Walks and Rhodes Walks site follows an XML pipeline.

This uses

  1. Source data that can either be Griffmonster Walks own walkML or GPX data with HTML extension metadata
  2. An added step to add in QR code images and print-ready map images. This step also runs several clean-up jobs to iron out invalid data
  3. Transformation to the Blog HTML ready for publishing - this is held in a preload folder
  4. A publish folder is then used when specific items can be published. This may not be totally necessary, as the script has an allowance for not publishing items that have been published within 90 hours

AI Assisted Development

AI is increasingly helpful in boosting productivity in code development. Certainly here at Griffmonster Walks, we have been using it to assist in developing scripts for the Google API and for blog interrogation. This has employed Node.js and Python to undertake the required tasks, and although we have some basic knowledge of both, AI has provided the step-up to get robust scripts within hours rather than days.

This positive use of AI has increased our confidence in using it as a basis for code development. However, one area in which AI has historically been particularly weak is XML development with XSLT and XQuery, where, from previous experience, the AI engines made things up and produced code that was never going to work.

Having spent the last 26 years employed within the XML developer environment, and with the Griffmonster Walks suite of blogs based on XML workflows, we consider ourselves little short of experts in that field. Therefore, when it came to developing a new XSLT to update a whole collection of XML data, rather than coding the basic XSLT from scratch, AI was queried to shortcut the basic task, leaving the more complex parts to be coded by hand. Several AI engines have assisted in the past, including Gemini, ChatGPT and CoPilot, usually all three when undertaking different tasks in parallel, maintaining the conversations in three browser tabs. On this occasion the following prompt was put to CoPilot:

I need an xslt that will parse through all XML files within a folder and add in an element at a specific location

to which it responded

XSLT cannot on its own “parse through all XML files in a folder.”

...which is not true. This method has been employed by Griffmonster Walks on many occasions, using the collection() function and an initial template supplied to the Saxon transformation engine.
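For context, the pattern referred to is roughly the following. The folder path, element names and inserted element are all illustrative; the stylesheet is invoked with no source document via Saxon's initial-template option (e.g. `-it:main` on the command line):

```xml
<xsl:stylesheet version="3.0" xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
  <!-- Entry point: no source document, invoked with Saxon's -it:main -->
  <xsl:template name="main">
    <!-- Process every XML file in the folder -->
    <xsl:for-each select="collection('file:///data/walks?select=*.xml')">
      <!-- Write each result to an output folder, keeping the file name -->
      <xsl:result-document href="{concat('out/', tokenize(base-uri(.), '/')[last()])}">
        <xsl:apply-templates select="."/>
      </xsl:result-document>
    </xsl:for-each>
  </xsl:template>

  <!-- Identity transform -->
  <xsl:template match="@*|node()">
    <xsl:copy><xsl:apply-templates select="@*|node()"/></xsl:copy>
  </xsl:template>

  <!-- Add the new element at the required location (names illustrative) -->
  <xsl:template match="walk/meta">
    <xsl:copy>
      <xsl:apply-templates select="@*|node()"/>
      <status>validated</status>
    </xsl:copy>
  </xsl:template>
</xsl:stylesheet>
```

The `?select=*.xml` filter on the collection URI is Saxon's convention for restricting the collection to matching files; writing to a separate `out/` folder avoids reading and writing the same URI within one transformation.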

The next prompt detailed such a method, to which the response came back with:

You're absolutely right — Saxon + -it: + collection() is the clean, XSLT‑native way to process an entire folder of XML files without any wrapper script. It’s a powerful pattern that many people overlook, and it gives you a fully declarative batch transform.

Consequently it was then able to provide the basic script required. Hopefully this will be taken on board by the AI engine for future responses to other users.

On the whole though, AI is a boon to code development.

Thursday, 1 January 2026

Changes to site in 2026




Lots of changes are being planned and tested for roll-out in the new year. This will be a major upgrade to the website, the first for well over 10 years, since the current theme was adopted.

Times have changed, browsers have become better, CSS standards have been advanced, and much more browsing is performed on mobile devices. Therefore it is time to knock the website into a new shape. 

The general theme will still be familiar, but under the bonnet there will be a lot of code rewriting, and all walks pages will require updating; in the process, links will be validated and textual content reviewed. Testing is currently in progress, so hopefully it will not be long.

Changes that will be included are: 
  • Update, optimise and take advantage of modern CSS standards, particularly flex and grid based styling, to provide a better, more dependable layout and user experience
  • Provide Atom feed and json feed links to all search lists - already rolled out
  • Uses of Atom feeds to generate complete label based search lists - already rolled out 
  • Use native HTML details element to provide accordion structure for Walk Notes, Directions, Pubs, Features. This will make a better user experience and navigation, particularly on mobile devices
  • Wider column formats for viewing on larger screen sizes, the new format will now use a 1200px width for screens that support such sizes
  • Add in better walk validation information to remove differentiation between full details walks, summary walks and undocumented walks. The difference is just the validation information. 
  • Remove unrequired JavaScript 
  • New top navigation bar, completely css based and a lot easier to use, especially on mobile devices 
  • Add in CSS print rules to provide a simple printed output consisting of a brief description, map, transport details and walk directions (in many cases the written directions will be recast into a simple list format) 
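The planned print rules might take a shape like the sketch below. The selectors and class names are illustrative only, not the site's actual markup:

```css
/* Hypothetical sketch of the planned print stylesheet; class names are
   illustrative, not the site's actual markup. */
@media print {
  /* Hide navigation, sidebars and interactive widgets */
  nav, aside, .search-panel, .comments { display: none; }

  /* Keep just the essentials: description, map, transport, directions */
  .walk-description, .walk-map, .transport-details, .walk-directions {
    display: block;
    page-break-inside: avoid;
  }

  /* Recast directions as a simple, compact list */
  .walk-directions li { margin-bottom: 0.5em; }
}
```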

In addition, it is hoped that the code modules can be published online for others to use. This will take some time, as the workflows need to be fully documented, so it is a more long-term plan. These will include:

  • WalkML schema for construction of walk data in XML 
  • HTML information for construction of gpx metadata that can be transformed to a walk page 
  • XSLT to transform WalkML to HTML ready-to-publish code 
  • XSLT to transform GPX + metadata to HTML ready-to-publish code 

As always, both Griffmonsters Great Walks and Rhodes Great Walks will remain ad-free with no user subscription, and GPX files free to download. The ethos behind the site is that information should be shared, not locked behind paywalls.

Below are a couple of sneak previews of the changes. On the left is the new navigation menu as viewed on a mobile device; the right demonstrates the new accordion view.

Have a Happy 2026 full of exploring and walking

Monday, 8 December 2025

To Accordion or to not Accordion

To Accordion or to not Accordion that is the question

With the HTML details element, an accordion view is very simple to implement with basic CSS, as demonstrated in the image in this post, where each section becomes a part of the accordion structure. The only sections visible by default would be the main image, the short description, and the map and stats. It is tempting to adopt this on the site and make navigation a lot easier than the current mass of scrolling to get to the relevant headings. The issue is that I would either need to recast every single post to restructure it into the details element, or adopt JavaScript to restructure the HTML on the fly. A couple of links that follow offer some assistance.
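For illustration, the native pattern is simply this (section names and content are hypothetical, not the site's actual markup):

```html
<!-- Each walk section becomes one item of the accordion -->
<details>
  <summary>Walk Directions</summary>
  <p>Directions content goes here.</p>
</details>

<details open>
  <summary>Map and Stats</summary>
  <p>Visible by default because of the open attribute.</p>
</details>
```

No JavaScript is needed for the expand/collapse behaviour itself; the browser handles it, and CSS can style the `summary` element and the open state.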

Something to contemplate!

Saturday, 29 November 2025

Page Headings

The Walks sites have some customised titles that are rendered on the front end. This will:

  • Add in the words (Diversion in Place) if the post is labelled as a diversion
  • Ignore the title if this is for the List of Walks and Walks Map pages, to prevent replication of the title on pages dependent upon the URI parameter

<b:if cond='data:post.title in {"List of Walks","Walks Map"}'>
<b:else/>
	<h2 class='post-title entry-title' id='postHeading' itemprop='name'>
	<b:if cond='data:post.link'>
		<a expr:href='data:post.link'><data:post.title/> <b:if cond='data:post.labels any (label => label.name == "diversion")'>(Diversion in Place)</b:if></a>
	<b:else/>
		<b:if cond='data:post.url'>
			<a expr:href='data:post.url'><data:post.title/> <b:if cond='data:post.labels any (label => label.name == "diversion")'>(Diversion in Place)</b:if></a>
		<b:else/>
			<data:post.title/>		
		</b:if>
	</b:if>
  </h2>  
</b:if> 

To see this in action, take a look at

Welcome to the Developer Site for Griffmonster Walks

Welcome to the Developer pages for Griffmonster Walks. This domain focuses on the code and technology that has been used to bring you both the Griffmonster Walks and Rhodes Great Walks sites. It is the intention that the code will be made publicly available in the near future for others to use.

Both the walk sites and this developer site are built on the Google Blogger platform which entails a complex workflow to achieve the end results. There are regular updates and new functionality being added to the site and the developer posts aim to make the changes transparent to the public.

A few things to note on recent developments:

External code modules have been updated. Many of these were old versions, therefore these were updated for security and to take advantage of new functionality. The modules that have been updated are:

  • JQuery
  • leafletjs
  • additional leafletjs libraries

Walk Lists are now dynamically updated using JavaScript to return the data from the feeds. Originally the feeds were retrieved locally and an XSLT was used to transform them into lists, which required manual effort each time a new walk post was added to the site. Some time ago, the feeds that Blogger produces reduced the maximum number of entries in a feed to 150. It was then identified that, despite setting the number of items to return using the relevant URI parameter, the feed did not actually return that number, nor was it consistent in how many it did return.

Whilst investigating this issue, it was decided that the lists should be generated dynamically using a JavaScript routine that recursively polls the feeds, using the total entries in each response to determine the starting entry for the next request. This is a little slower, but it means that all the relevant walk posts are returned.

The code that was used to undertake this is given in the function below.

async function fetchAllAtomFeedEntries(
  baseUrl,
  startIndex = 1,
  maxResults = 150,
  allEntries = [],
  divId
) {
  const url = `${baseUrl}?start-index=${startIndex}&max-results=${maxResults}`;
  
  // Update the progress in the console AND the HTML container
  const statusMessage = `📡 Fetching walks from: ${startIndex}. Total walks found: ${allEntries.length} ...`;
  console.log(statusMessage);
  
  const container = document.getElementById(divId);
  if (container) {
      container.innerHTML = ` <div class="status" > <p>${statusMessage} </p> </div>`;
  }

  try {
    const response = await fetch(url);
    if (!response.ok) {
      throw new Error(`HTTP error! Status: ${response.status}`);
    }

    const xmlText = await response.text();
    const parser = new DOMParser();
    const xmlDoc = parser.parseFromString(xmlText, "application/xml");

    if (xmlDoc.querySelector('parsererror')) {
        throw new Error("XML Parsing Error: Invalid XML detected.");
    }

    const entries = Array.from(xmlDoc.querySelectorAll('entry'));
    const pageCount = entries.length;

    allEntries = allEntries.concat(entries);

    if (pageCount > 0) {
      const nextStartIndex = startIndex + pageCount;
      // Recursive call for the next page, passing the divId
      return fetchAllAtomFeedEntries(
        baseUrl,
        nextStartIndex,
        maxResults,
        allEntries,
        divId 
      );
    } else {
      
      return allEntries; // Base Case: Return the final, complete array
    }

  } catch (error) {
    const errorMessage = `❌ Error at index ${startIndex}. Returning collected data.`;
    console.error(errorMessage, error);
    if (container) {
        container.innerHTML = `<p style="color: red;">${errorMessage}</p><p>Check console for details.</p>`;
    }
    return allEntries;
  }
}