Latest Posts

Monday, 19 January 2026

To Markdown or not to Markdown

Griffmonster Walks has always relied upon XML workflows in which the raw data is composed either in walkML, our very own flavour of XML dedicated to describing routes and trails, or more recently as an HTML extension to the GPX metadata.

In either case the markup has to be added manually to the authored content, which can be time consuming. Even walkML can take custom HTML sections in addition to the native XML format. This has historically been one of the tedious parts of the entire workflow, and great use has been made of copying and pasting existing data to provide a template framework.

In addition, over the years it has become apparent that there are no totally free or open source options for undertaking the authoring, so we have relied upon Notepad++ with the XML plugin as our authoring tool. This is the mainstay of all development and authoring for Griffmonster Walks.

In more recent times there has been consideration of employing Markdown to author the content and then converting it to the required XML/HTML. Markdown is a lot easier to write than hand-coded HTML/XML, and with many years of experience of authoring in Markdown, this seems a promising way forward. A similar project was undertaken some years ago during my employment days but was abandoned for whatever reason; unfortunately I was not directly involved in the project and so was never party to why it was shelved.

So, it is time to look at this from the viewpoint of Griffmonster Walks. Thus far, two options have presented themselves:

  1. An online solution: StackEdit (https://stackedit.io/app#), which has HTML export
  2. An offline solution: Notepad++ with the Markdown Viewer plugin, which also has a native HTML export

The idea is to author the data in Markdown, export it to HTML, and use XSLT to adjust the data into the HTML code required for the walk data. This sounds fairly simple to undertake, provided we use a few simple rules in the Markdown to define the various sections of the data.

Further to this, an initial investigation has revealed more. One issue with most of the HTML export routines is that the output is not structured. This could be overcome in a subsequent XSLT, but I would rather start with properly structured data. There are many, many Markdown editors, but I am favouring Notepad++ on account of it being my go-to tool for authoring and code development.

Another tool that has been found is Pandoc (https://github.com/jgm/pandoc), which can return structured HTML. This is a command line tool which I can integrate into an Ant workflow.
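
As a sketch of what that integration might look like, the following hypothetical Ant target shells out to Pandoc; the target name, file names and template here are illustrative rather than the final workflow:

<target name="markdown-to-html">
  <!-- Convert the authored Markdown to structured HTML5 via Pandoc -->
  <exec executable="pandoc" failonerror="true">
    <arg value="walk.md"/>
    <arg value="--from=markdown"/>
    <arg value="--to=html5"/>
    <arg value="--standalone"/>
    <arg value="--template=walk-template.html"/>
    <arg value="--wrap=none"/>
    <arg value="--output=walk.html"/>
  </exec>
</target>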

This will be another little project to keep me out of trouble!

Postscript

This was supposed to be a little investigation, but it has gone so well that it has turned into a whole new workflow. So here we go; this is what we have done:

  1. Having looked around at the options, we found Notepad++ the most familiar and easiest to use for authoring in Markdown. It doesn't really matter, though, as any Markdown editor can be employed as long as the output is consistent
  2. Use Pandoc to convert the Markdown to HTML. This runs a lot better than expected, with switches to provide a template to output into, to adjust whitespace and, most importantly, to produce structured HTML markup. It even, by default, marks up images exactly how we mark up images within the blog, using the figure and figcaption elements
  3. Use a simple XSLT to adjust the Pandoc output to the HTML required by the current pipelines; this basically adjusts identifiers and classes (a sketch follows this list)
  4. Added another XSLT to merge the HTML into the GPX metadata extension
  5. Integrated this into an Ant workflow, which now enables authoring, hitting the button and seeing the end result churned out ready to publish
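
To give a flavour of the XSLT in step 3, a minimal sketch is shown below. It is just an identity transform with a couple of overrides; the identifier and class names are invented for the example, not the pipeline's real values:

<xsl:stylesheet version="3.0"
    xmlns:xsl="http://www.w3.org/1999/XSL/Transform">

  <!-- Identity transform: copy everything through unchanged -->
  <xsl:mode on-no-match="shallow-copy"/>

  <!-- Rewrite a Pandoc-generated identifier to the one the pipeline expects -->
  <xsl:template match="@id[. = 'walk-notes']">
    <xsl:attribute name="id" select="'walkNotes'"/>
  </xsl:template>

  <!-- Stamp the blog's own class onto every figure element -->
  <xsl:template match="figure">
    <figure class="walk-figure">
      <xsl:apply-templates select="@* except @class, node()"/>
    </figure>
  </xsl:template>

</xsl:stylesheet>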

The only caveats are that it does require adherence to the h1 headings which drive it, which is not a big issue as a Markdown template will be a sufficient starting point (an illustrative one is shown below), and that the file name convention needs to be adhered to. Currently the metadata still sits in the GPX file; this too could be added to the Markdown, although it is simply a matter of filling in fields in the GPX.
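
Purely as an illustration of such a template, with invented heading names (the real ones are whatever the XSLT keys on):

# Walk Notes

Background and notes for the walk go here.

# Directions

1. Leave the car park and head north along the sea wall.
2. At the sluice, turn inland onto the permissive path.

# Pubs

Notes on refreshment stops along the route.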

This has taken just a single day both to author a sample document and to develop the workflow. I really never expected that. This will speed up blog post generation no end in the future. I think I deserve a drink for the effort!

Friday, 9 January 2026

Walks Publishing Workflow

The workflow for the data that gets published to both the Griffmonster Walks and Rhodes Walks sites follows an XML pipeline.

This uses:

  1. Source data, which can be either Griffmonster Walks' own walkML or GPX data with HTML extension metadata
  2. An added step to add in QR code images and print-ready map images. This step also runs several clean-up jobs to iron out invalid data
  3. Transformation to the blog HTML ready for publishing; this is held in a preload folder
  4. A publish folder, which is then used when specific items can be published. This may not be totally necessary, as the script has an allowance for not publishing items that have been published within 90 hours
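
Expressed as an Ant target chain, and purely as an illustrative sketch with invented target and folder names, the pipeline looks something like this:

<project name="walks-pipeline" default="publish">

  <!-- Step 2: add QR codes and print-ready maps, and run the clean-up jobs -->
  <target name="enrich">
    <!-- exec and xslt tasks for the QR code, map and clean-up steps sit here -->
  </target>

  <!-- Step 3: transform the walkML / GPX+HTML sources into the preload folder -->
  <target name="preload" depends="enrich">
    <xslt basedir="source" destdir="preload" style="walk-to-html.xsl"/>
  </target>

  <!-- Step 4: copy approved items from preload into the publish folder -->
  <target name="publish" depends="preload">
    <copy todir="publish">
      <fileset dir="preload" includes="*.html"/>
    </copy>
  </target>

</project>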

AI Assisted Development

AI is increasingly helpful in boosting productivity in code development. Certainly here at Griffmonster Walks, we have been using it to assist in developing scripts for the Google API and for blog interrogation. This has employed Node.js and Python to undertake the required tasks, and although we have some basic knowledge of both, AI has provided the step-up needed to get robust scripts within hours rather than days.

This positive use of AI has increased our confidence in using it as a basis for code development. However, one area in which AI has historically been particularly weak is XML development, specifically XSLT and XQuery, where, from previous experience, the AI engines made things up and produced code that was never going to work.

Having spent the last 26 years employed within the XML developer environment, and with the Griffmonster Walks suite of blogs based on XML workflows, we consider ourselves little short of experts in that field. Therefore, when it came to developing a new XSLT to update a whole collection of XML data, rather than coding the basic XSLT from scratch, AI was queried to shortcut the basic task, leaving the more complex parts to be coded by hand. Several AI engines have assisted in the past, including Gemini, ChatGPT and Copilot; usually all three are used when undertaking several different tasks in parallel, with the conversations maintained in three browser tabs. On this occasion the following prompt was put to Copilot:

I need an xslt that will parse through all XML files within a folder and add in an element at a specific location

to which it responded

XSLT cannot on its own “parse through all XML files in a folder.”

...which is not true. This method has been employed by Griffmonster Walks on many occasions, using the collection() function and an Initial Template supplied to the Saxon transformation engine.

The next prompt detailed such a method, to which the response came back with:

You're absolutely right — Saxon + -it: + collection() is the clean, XSLT‑native way to process an entire folder of XML files without any wrapper script. It’s a powerful pattern that many people overlook, and it gives you a fully declarative batch transform.

Consequently it was then able to provide the basic script required. Hopefully this will be taken on board by the AI engine for future responses to other users.
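
For anyone unfamiliar with the pattern, a minimal sketch follows; the folder and element names are invented for the example. The stylesheet is invoked with Saxon's -it option so that it starts at the named initial template rather than at a single source document:

<xsl:stylesheet version="3.0"
    xmlns:xsl="http://www.w3.org/1999/XSL/Transform">

  <!-- Identity copy for everything not matched below -->
  <xsl:mode on-no-match="shallow-copy"/>

  <!-- Entry point: run as  java -jar saxon-he.jar -it -xsl:add-element.xsl -->
  <xsl:template name="xsl:initial-template">
    <!-- collection() pulls in every XML file in the folder -->
    <xsl:for-each select="collection('walks?select=*.xml')">
      <!-- Write each transformed document back out under its own file name -->
      <xsl:result-document href="out/{tokenize(base-uri(.), '/')[last()]}">
        <xsl:apply-templates select="."/>
      </xsl:result-document>
    </xsl:for-each>
  </xsl:template>

  <!-- Add the new element at the required location (names illustrative) -->
  <xsl:template match="walk/stats">
    <xsl:copy>
      <xsl:apply-templates select="@*, node()"/>
      <ascent>0</ascent>
    </xsl:copy>
  </xsl:template>

</xsl:stylesheet>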

On the whole though, AI is a boon to code development.

Thursday, 1 January 2026

Changes to site in 2026

Lots of changes are being planned and tested for roll-out in the new year. This will be a major upgrade to the website, the first since the current theme was adopted well over 10 years ago.

Times have changed: browsers have become better, CSS standards have advanced, and much more browsing is performed on mobile devices. Therefore it is time to knock the website into a new shape.

The general theme will still be familiar, but under the bonnet there will be a lot of code rewriting, plus all walk pages will require updating; in the process links will be validated and textual content reviewed. Testing is currently in progress, so hopefully it will not be long.

Changes that will be included are:
  • Update, optimise and take advantage of modern CSS standards, particularly the use of flex- and grid-based styling to provide a better, more dependable layout and user experience
  • Provide Atom feed and JSON feed links for all search lists - already rolled out
  • Use of Atom feeds to generate complete label-based search lists - already rolled out
  • Use the native HTML details element to provide an accordion structure for Walk Notes, Directions, Pubs and Features. This will make for a better user experience and navigation, particularly on mobile devices (a sketch follows this list)
  • Wider column formats for viewing on larger screens; the new format will use a 1200px width for screens that support such sizes
  • Add in better walk validation information to remove the differentiation between full-detail walks, summary walks and undocumented walks. The difference is just the validation information.
  • Remove unrequired JavaScript
  • New top navigation bar, completely CSS based and a lot easier to use, especially on mobile devices
  • Add in CSS print rules to provide a simple printed output consisting of a brief description, map, transport details and walk directions (in many cases the written directions will be recast into a simple list format)
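
As a quick sketch of the details-based accordion mentioned above (simplified markup, not the final template):

<details open>
  <summary>Walk Notes</summary>
  <p>Notes and background for the walk.</p>
</details>

<details>
  <summary>Directions</summary>
  <ol>
    <li>Leave the car park and head north along the sea wall.</li>
  </ol>
</details>

The open attribute keeps a section expanded by default; every other section collapses to its summary heading until tapped.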

In addition, it is hoped that the code modules can be published online for others to use. This will take some time, as the workflows need to be fully documented, so it is a more long-term plan. These will include:

  • WalkML schema for construction of walk data in XML
  • HTML information for construction of GPX metadata that can be transformed to a walk page
  • XSLT to transform WalkML to ready-to-publish HTML code
  • XSLT to transform GPX + metadata to ready-to-publish HTML code

As always, both Griffmonsters Great Walks and Rhodes Great Walks will remain ad free with no user subscription, and GPX files will be free to download. The ethos behind the sites is that information should be shared, not locked behind paywalls.

Below are a couple of sneak previews of the changes. On the left is the new navigation menu as viewed on a mobile device; similarly, the right demonstrates the new accordion view.

Have a Happy 2026 full of exploring and walking

Monday, 8 December 2025

To Accordion or not to Accordion

To Accordion or not to Accordion, that is the question

The HTML details element makes an accordion view very simple to implement with basic CSS, as demonstrated in the image to this post, where each section becomes a part of the accordion structure. The only sections visible by default would be the main image, the short description, and the map and stats. It is tempting to adopt this for the site and make navigation a lot easier than the current mass of scrolling to get to the relevant headings. The issue is that I would either need to recast every single post to restructure it into the details element, or adopt JavaScript to restructure the HTML on-the-fly (a rough sketch of the latter follows). A couple of links that follow offer some kind of assistance.
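
As a rough sketch of the on-the-fly option, assuming purely for the example that each section starts with an h3 heading and that the post body carries the post-body class, something along these lines would wrap each section in a details element:

// Wrap each h3-led section of the post body in a details/summary pair.
// The h3 delimiter and the .post-body selector are assumptions for this sketch.
function accordionise(postBody) {
  for (const heading of Array.from(postBody.querySelectorAll('h3'))) {
    const details = document.createElement('details');
    const summary = document.createElement('summary');
    summary.textContent = heading.textContent;
    details.appendChild(summary);
    // Move every sibling up to the next h3 inside the new details element
    let node = heading.nextSibling;
    while (node && node.nodeName !== 'H3') {
      const next = node.nextSibling;
      details.appendChild(node);
      node = next;
    }
    heading.replaceWith(details);
  }
}

accordionise(document.querySelector('.post-body'));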

Something to contemplate!

Saturday, 29 November 2025

Page Headings

The Walks sites have some customised titles that are rendered on the front end. This will:

  • Add in the words (Diversion in Place) if the post is labelled as a diversion
  • Ignore the title if it is for the List of Walks or Walks Map page, to prevent replication of the title on a page that is dependent upon the URI parameter

<!-- Suppress the heading entirely on the List of Walks and Walks Map pages -->
<b:if cond='data:post.title in {"List of Walks","Walks Map"}'>
<b:else/>
	<h2 class='post-title entry-title' id='postHeading' itemprop='name'>
	<!-- Prefer the post link, fall back to the post URL, then the plain title -->
	<b:if cond='data:post.link'>
		<a expr:href='data:post.link'><data:post.title/> <b:if cond='data:post.labels any (label => label.name == "diversion")'>(Diversion in Place)</b:if></a>
	<b:else/>
		<b:if cond='data:post.url'>
			<a expr:href='data:post.url'><data:post.title/> <b:if cond='data:post.labels any (label => label.name == "diversion")'>(Diversion in Place)</b:if></a>
		<b:else/>
			<data:post.title/>
		</b:if>
	</b:if>
  </h2>
</b:if>

To see this in action, take a look at

Welcome to the Developer Site for Griffmonster Walks

Welcome to the Developer pages for Griffmonster Walks. This domain will focus on the code and technology that has been used to bring you both the Griffmonster Walks and Rhodes Great Walks sites. It is the intention that the code will be made publicly available in the near future for others to use.

Both the walk sites and this developer site are built on the Google Blogger platform which entails a complex workflow to achieve the end results. There are regular updates and new functionality being added to the site and the developer posts aim to make the changes transparent to the public.

A few things to note on recent developments:

External code modules have been updated. Many of these were old versions, so they have been updated for security and to take advantage of new functionality. The modules that have been updated are:

  • jQuery
  • leafletjs
  • additional leafletjs libraries

Walk lists are now dynamically updated using JavaScript to return the data from the feeds. Originally the feeds were retrieved locally and an XSLT was used to transform them into lists; this required manual effort each time a new walk post was added to the site. Some time ago Blogger reduced the maximum number of entries in a feed to 150. It was then identified that, despite setting the number of items to return using the relevant URI parameter, a feed did not actually return that number, nor was it consistent in how many it did return.

Whilst investigating this issue, it was decided that the lists should be generated dynamically using a JavaScript routine that recursively polls the feeds, using the total number of entries in each response to determine the starting entry for the next request. This is a little slower, but it means that all the relevant walk posts get returned.

The code that was used to undertake this is given in the function below.

async function fetchAllAtomFeedEntries(
  baseUrl,
  startIndex = 1,
  maxResults = 150,
  allEntries = [],
  divId
) {
  const url = `${baseUrl}?start-index=${startIndex}&max-results=${maxResults}`;
  
  // Update the progress in the console AND the HTML container
  const statusMessage = `📡 Fetching walks from: ${startIndex}. Total walks found: ${allEntries.length} ...`;
  console.log(statusMessage);
  
  const container = document.getElementById(divId);
  if (container) {
      container.innerHTML = `<div class="status"><p>${statusMessage}</p></div>`;
  }

  try {
    const response = await fetch(url);
    if (!response.ok) {
      throw new Error(`HTTP error! Status: ${response.status}`);
    }

    const xmlText = await response.text();
    const parser = new DOMParser();
    const xmlDoc = parser.parseFromString(xmlText, "application/xml");

    if (xmlDoc.querySelector('parsererror')) {
        throw new Error("XML Parsing Error: Invalid XML detected.");
    }

    const entries = Array.from(xmlDoc.querySelectorAll('entry'));
    const pageCount = entries.length;

    allEntries = allEntries.concat(entries);

    if (pageCount > 0) {
      const nextStartIndex = startIndex + pageCount;
      // Recursive call for the next page, passing the divId
      return fetchAllAtomFeedEntries(
        baseUrl,
        nextStartIndex,
        maxResults,
        allEntries,
        divId 
      );
    } else {
      // Base case: no more entries, so return the final, complete array
      return allEntries;
    }

  } catch (error) {
    const errorMessage = `❌ Error at index ${startIndex}. Returning collected data.`;
    console.error(errorMessage, error);
    if (container) {
        container.innerHTML = `<p style="color: red;">${errorMessage}</p><p>Check console for details.</p>`;
    }
    return allEntries;
  }
}
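
Calling the function is then a one-liner; the feed URL and element id here are placeholders for whichever blog and status container are being targeted:

fetchAllAtomFeedEntries('https://example.blogspot.com/feeds/posts/default', 1, 150, [], 'walk-list')
  .then(entries => console.log(`Fetched ${entries.length} walk entries in total`));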