Blog

Blogging on programming and life in general.

  • I've been using the gatsby-plugin-smoothscroll plugin in the majority of my GatsbyJS builds to provide a nice smooth scrolling effect to an HTML element on a page. Unfortunately, it lacked the capability to apply an offset to the scroll position, which is useful when a site has a fixed header or navigation.

    I decided to take the gatsby-plugin-smoothscroll plugin and simplify it so that it no longer depends on a smooth-scrolling polyfill, as this behaviour is native to most modern browsers. The plugin simply contains a helper function that can be added to any onClick event, with or without an offset parameter.
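
    Under the hood, such a helper only needs the browser's native scrollTo API. The following is a minimal sketch of what that might look like; it is an assumption for illustration, not the plugin's actual source:

    // Minimal sketch of a native smooth-scroll helper with an optional offset.
    // NOTE: illustrative only - not the plugin's actual implementation.
    const smoothScrollTo = (selector, offset = 0) => {
      const element = document.querySelector(selector);
      if (!element) return;

      // Calculate the element's absolute position, subtract the offset
      // (e.g. the height of a fixed header), then scroll natively.
      const top = element.getBoundingClientRect().top + window.scrollY - offset;
      window.scrollTo({ top, behavior: "smooth" });
    };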

    Usage

    The plugin contains a smoothScrollTo helper function that can be imported onto the page:

    // This could be in your `pages/index.js` file.
    
    import smoothScrollTo from "gatsby-plugin-smoothscroll-offset";
    

    The smoothScrollTo function can then be used within an onClick event handler:

    {/* Without offset */}
    <button onClick={() => smoothScrollTo("#some-id")}>My link without offset</button>

    {/* With offset of 80px */}
    <button onClick={() => smoothScrollTo("#some-id", 80)}>My link with offset</button>
    

    Demo

    A demonstration of the plugin in use can be found by navigating to my Blog Archive page and clicking on any of the category links.

    Prior to this plugin, the category list header would be covered by the sticky navigation.

    Smooth Scrolling without Offset

    Now that an offset of 80px can be set, the category list header is now visible.

    Smooth Scrolling with Offset

    Links

  • I woke up yesterday morning to a serendipitous discovery that all my stock positions had successfully been transferred from Freetrade to Trading 212. There really is nothing more rewarding than seeing all investments under one stockbroker with a nice five-figure number staring back at you.

    Since I started investing in stocks at the start of 2022, the only stockbroker app available to me was Freetrade, and it made my introduction to investing in hand-picked stocks very straightforward. But as my portfolio grew, so did my requirements, and when Trading 212 opened its doors to new sign-ups (after I had spent a very long time on the waiting list), I decided to see if the grass was truly greener on the other side... and it was.

    Trading 212 had what Freetrade didn't:

    • An active community of like-minded users sharing their views and insights on each stock.
    • 5.2% interest on held cash (5.17% as of today).
    • Introduction of a Cash ISA.
    • Ability to view stock graphs in detailed view with the ability to annotate specific trendlines.
    • Free use of a Stocks and Shares ISA.
    • Lower FX rates.
    • Fractional shares on ETFs.

    Unfortunately for Freetrade, I just couldn't see a future where they could provide the features I needed in addition to free use of the service. I was being charged £60 per year for the privilege of a Stocks and Shares ISA - free on Trading 212.

    Even though I explored Trading 212 when it became available last year, I decided to only start investing at the beginning of the 2024 tax year to avoid any ISA-related tax implications from paying into two Stocks and Shares (S&S) ISAs. This is now a moot point, as you are able to invest in two different S&S ISAs in the same tax year as long as you do not exceed the yearly £20k limit.

    Planning The Move

    I am currently seven months into using Trading 212 for investing, but it wasn't until October that I felt I was in a position to transfer all my stock holdings from Freetrade. Why such a long wait?

    The wait was primarily due to not really understanding the correct route to transferring my portfolio without eating into my current year's tax-free allowance, whilst retaining the average stock price per holding. I also had concerns over the large sum of money to transfer, and it's something that shouldn't be taken lightly.

    I am hoping this post will provide some clarity through my experience in transferring my portfolio to Trading 212, even if it is tailored more towards what I experienced in moving away from Freetrade.

    In-Specie Transfer

    In-specie wasn't a term I was familiar with prior to researching how I could move my stock portfolio to another platform.

    'In specie' is a Latin term meaning 'in the actual form'. Transferring an asset 'in specie' means to transfer the ownership of that asset from one person/company/entity to another person/company/entity in its current form, that is without the need to convert the asset to cash.

    Before in-specie transfers, the only way to move from one stockbroker to another was to sell all your holdings for cash and then reinvest within the new brokerage. The main disadvantages of doing this are:

    • Time out of the market creating more risk to price fluctuations.
    • Potential loss due to the difference between the sell and buy prices.
    • Additional brokerage fees when repurchasing the same assets with a new provider.
    • Loss of tax efficiency if you have a large portfolio that might wipe out or exceed the yearly tax-free allocation.
    • Missed dividend payouts.
    • Taking losses on selling stocks that haven't made a profit.

    I've noticed over the last couple of years that in-specie transfers have become more widely supported amongst the smaller stockbrokers (the ones you and I are more likely to use), such as Freetrade, Trading 212 and InvestEngine, which makes moving from one platform to another a much simpler process.

    Even though the process has become simpler, it is still time-consuming, as transfer completion can take anywhere between four and six weeks depending on the coordination between both stock platforms.

    My In-Specie Transfer Timeline

    My own in-specie transfer took a little longer than I had hoped - around six weeks, with the key milestones dated below.

    12/10/24

    Initiated the transfer process in Trading 212 by selecting the stocks I wanted to transfer. You can select specific stocks or your whole portfolio. I selected all my holdings and specified the average stock price, as I wanted to retain my positions.

    23/10/24

    Freetrade emailed to confirm a transfer request had been received and to ask that my portfolio be put in order to allow the process to move smoothly, which entailed:

    • Paying a £17 fee for each US holding in my account.
    • Rounding up any fractional shares, as shares in their fractional state cannot be transferred. For one of my stock holdings, I decided to purchase slightly more and round up the total value rather than sell down, as this particular stock is in the negative.

    12/11/24

    Three weeks had passed and I hadn't heard anything from either party. I contacted Trading 212 support to report a delay in the transfer and to ask if any reason could be provided. I didn't get a reply, but the next day things started ticking along. Maybe this gave them the 'kick' they needed?

    13/11/24

    Trading 212 completed arrangements with Freetrade and was now in a position to start the actual transfer, which would take place over a two-week period.

    21/11/24

    I woke up to find all stocks had been transferred whilst maintaining my average stock price. There is still one minor job awaiting completion: transfer of a small amount of cash. The most important job had been done and I could now rest easy.

    Next steps

    Once the small amount of cash has been transferred, I plan on cancelling my yearly Freetrade Standard plan, which expires in June 2025. By the time the transfer has completed, I will have six months left on my subscription that I can get refunded (minus a £5 admin fee).

  • When developing custom forms in Umbraco using ASP.NET Core’s Tag Helpers and DataAnnotations, I noticed that display names and validation messages weren’t being rendered for any of the fields.

    [Required(ErrorMessage = "The 'First Name' field is required.")]
    [Display(Name = "First Name")]
    public string? FirstName { get; set; }
    
    [Required(ErrorMessage = "The 'Last Name' field is required.")]
    [Display(Name = "Last Name")]
    public string? LastName { get; set; }
    
    [Required(ErrorMessage = "The 'Email Address' field is required.")]
    [Display(Name = "Email Address")]
    public string? EmailAddress { get; set; }
    
    

    This was quite an odd issue that (if I'm honest!) took me quite some time to resolve as I followed my usual approach to building forms — an approach I’ve used many times in Umbraco without any issues. The only difference in this instance was that I was using an Umbraco form wrapper.

    @using (Html.BeginUmbracoForm<ContactFormController>("Submit"))
    {
        <fieldset>
            <!-- Form fields here -->
        </fieldset>
    }
    

    I must have been living under a rock, as I have never come across this in my years of working with Umbraco. It could be down to the fact that the forms I have developed in the past didn't rely so heavily on .NET's DataAnnotation attributes.

    The only solution available to remedy this problem was to install a NuGet package (currently in beta), kindly created by Dryfort.com, which resolves the display name and validation attributes for in-form rendering.

    The NuGet package works in Umbraco 10 onwards. I've personally used it in version 13 without any problems. Until there is an official Umbraco fix, this does the job nicely and comes highly recommended if you encounter similar issues.

  • As someone who specializes in integrations, I’m hardly ever surprised when I come across yet another CRM platform I’ve never heard of. It feels like there are almost as many CRMs out there as stars in the night sky — okay, maybe that's a bit of an exaggeration, but you get the idea.

    I was introduced to another platform while working on a small integration project: Nexudus. Nexudus is a comprehensive system designed specifically for managing coworking spaces, shared workspaces and flexible offices, whilst incorporating the features you’d expect from a customer relationship management platform.

    For one part of this integration, newsletter subscribers needed to be stored in Nexudus from a statically-generated site built on Astro and hosted on Netlify. The only way to pass subscriber data to Nexudus is through their API platform, which presented an opportunity to build this integration using Netlify serverless functions.

    The Newsletter Subscriber API documentation provides a good starting point for sending through subscriber details and assigning them to specific newsletter groups. However, one issue arose during integration whereby the endpoint would error if a user was already subscribed within Nexudus, even if the subscription was for a different group.

    It would seem that existing subscribers require a separate update process in Nexudus, as using the Add Newsletter API endpoint alone does not take changes to subscription groups into consideration. It would be more straightforward if the Mailchimp API approach were taken, whereby the same user email address can be assigned to multiple mailing lists through a single API endpoint.

    When developing the Netlify serverless function, I added steps that allow existing subscribers to be added to new subscription groups through the following process (sketched in the example after this list):

    1. Look up the subscriber by email address.
    2. If a subscriber is not found, a new record is created.
    3. If a subscriber is found, update the existing record by its ID, passing through any changed values.
    4. For an updated record, the new group ID will need to be sent along with the group IDs the user is already assigned to.
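
    A simplified sketch of that flow as a Netlify function is shown below. The Nexudus endpoint paths, field names and response shape used here are placeholders to illustrate the logic rather than the documented API, and the sketch assumes a Node 18+ runtime where fetch is available:

    // Hypothetical Netlify function sketching the create-or-update flow above.
    // The Nexudus endpoints, field names and response shape are assumptions.
    const BASE_URL = "https://example-nexudus-api"; // placeholder base URL

    exports.handler = async (event) => {
      const { email, groupId } = JSON.parse(event.body || "{}");
      const headers = {
        Authorization: `Basic ${process.env.NEXUDUS_API_TOKEN}`, // assumed env var
        "Content-Type": "application/json",
      };

      // 1. Look up the subscriber by email address.
      const lookupResponse = await fetch(
        `${BASE_URL}/newslettersubscribers?email=${encodeURIComponent(email)}`,
        { headers }
      );
      const existing = (await lookupResponse.json()).Records?.[0];

      if (!existing) {
        // 2. No subscriber found - create a new record in the requested group.
        await fetch(`${BASE_URL}/newslettersubscribers`, {
          method: "POST",
          headers,
          body: JSON.stringify({ Email: email, GroupIds: [groupId] }),
        });
      } else {
        // 3 & 4. Subscriber found - update the record by its ID, merging the new
        // group ID with the group IDs the user is already assigned to.
        const groupIds = [...new Set([...(existing.GroupIds || []), groupId])];
        await fetch(`${BASE_URL}/newslettersubscribers/${existing.Id}`, {
          method: "PUT",
          headers,
          body: JSON.stringify({ ...existing, GroupIds: groupIds }),
        });
      }

      return { statusCode: 200, body: JSON.stringify({ success: true }) };
    };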

    A GitHub repository has been created containing the aforementioned functionality, which can be found here: nexudus-netlify-functions. I may add other Nexudus API endpoints that I have been working on to this repo going forward.

  • In a world filled with technological innovation that fulfils the majority of one's needs, things can sometimes end up feeling all too sterile, especially around the creative-led tasks that should invoke something more visceral.

    It’s only a matter of time before many of us start to feel a void from relying on multifunctional devices that have become deeply intertwined with every part of our lives. Loosening even a small thread of this technological dependence can bring a profound sense of focus.

    One aspect I felt I had to change was my approach to writing as I started getting the feeling that the process was becoming all too sterile and monotonous. I had the urge to go back to a more tactile method of publishing content by starting the process with good old-fashioned pen and paper.

    One thing that became noticeably apparent when returning to this method of curating content is that the real world is far less forgiving, requiring the brain to relearn how to organise thoughts for long-form writing. In the early stages of drafting blog posts by hand, my pages were cluttered with crossed-out sentences and scribbled words. It became evident that I was really reliant on the forgiving nature of writing apps where blocks of text could easily be moved around.

    However, with each blog post I wrote by hand, my brain has managed to think further ahead, where it previously lacked forethought and I regularly experienced writer's block. The posts I've published throughout September have all been curated by first compiling a basic outline, which is then expanded into a longer form on paper. This is probably how I managed to increase my output during the month. I can only attribute this to the lack of visual distractions, creating a more kinesthetic environment for thoughts to gestate.

    My approach to writing has changed over the years I have been blogging, and I am reminded of how I used to assimilate ideas from a post I wrote back in 2015: Pen + Paper = Productivity. It is here that I said something profound which has since been lost on me:

    Paper has no fixed structure that you are forced to conform to, which makes processing your own thoughts very easy. Unfortunately, software for note-taking has not advanced nearly as fast. It's still all too linear and fixed.

    It's been nine years since that post was written, and while technology has advanced to the point of offering the convenience of writing on tablets, which I've done for a while using my own Apple iPad and Apple Pencil, it simply doesn't compare, no matter how much we try to mimic the experience with "paperlike" screen protectors.

    Even though technology helps us accomplish things faster, it comes at the cost of not being in the moment. Sometimes, the journey is more meaningful than the destination, and we don’t always need to rely on technology simply because it’s there.

    Does going back to basics make the publishing process longer? Surprisingly, not as much as you’d think. I was pleasantly surprised to discover that after everything is written down on paper, the final steps are mostly mechanical — typing it up on my laptop, running a spell and grammar check, adding an image, and finally hitting the publish button.

    When handwriting long-form content, the process needs to be as easy and frictionless as possible by investing in a good quality writing instrument. To quote Emmert Wolf: An artist is only as good as his tools. Using a better pen has encouraged me to write more, especially compared to the fatigue I felt with a Bic Crystal, which I find more suited to casual note-taking.

    Conclusion

    Who knows, maybe this new approach will even improve the overall legibility of my handwriting - it really has deteriorated since I left school, most likely the result of many years of programming. I don't think I will stop relying on my wife to write birthday and greeting cards anytime soon.

    I’d forgotten just how satisfying the experience of handwriting blog posts can be. It’s a bit like channelling the spirit of Bob Ross, layering words like brushstrokes that gradually form paragraphs into passages. When you're done, you can sit back and admire the canvas of carefully crafted marks you’ve created.

  • At times there is a need to get a list of files that have been updated. This could be for the following reasons:

    • Audit compliance to maintain records of application changes.
    • Backup verification to confirm the right files were backed up.
    • Verification of changed files to confirm which files were added, modified, or deleted during an update.
    • Security checks to ensure that there have been no unauthorised or suspicious files changed or installed through hacking.
    • Troubleshooting issues after a new application release, where seeing a list of changed files can help identify the source of problems.

    Based on the information I found online, I put together a PowerShell script that was flexible enough to meet the needs of the above scenarios, as I encountered one of them this week. I'll let you guess the scenario I faced.

    At its core, the following PowerShell script uses the Get-ChildItem command to list all files recursively across all sub-folders, ordered by creation date descending, with the addition of a handful of optional parameters.

    Get-ChildItem -Path C:\My-Path -Recurse -Include *.png | 
    			Select -Last 5 CreationTime,LastWriteTime,FullName | 
    			Sort-Object -Property CreationTime -Descending | 
    			Export-Csv "file-list.csv"
    

    Breakdown of the parameters used:

    Parameter/Object | Detail | Is Optional
    -Path | The folder path where files need to be listed. | No
    -Recurse | Gets files from the path and its subdirectories. | Yes
    -Include | Filters the file output by a path element or pattern. This only works when the -Recurse parameter is present. | Yes
    Select | Sets the maximum output (-Last) and the list of fields to be listed. | Yes
    Sort-Object | Specifies the field and sort order. | Yes
    Export-Csv | Exports the list of files to a CSV file. | Yes

    If the files need to be sorted by last modified date, the Sort-Object property needs to be set to "LastWriteTime".

    When the script is run, you'll see the results rendered in the following way:

    CreationTime        LastWriteTime       FullName
    ------------        -------------       --------
    25/05/2023 20:33:44 25/05/2023 20:33:44 X:\Downloads\synology\Screenshot 2023-05-25 at 20.33.38.png
    16/05/2023 14:18:21 16/05/2023 14:18:21 X:\Downloads\synology\Screenshot 2023-05-16 at 14.18.15.png
    

    Further Information

  • I've been working with custom functionality for registering and authenticating external site users in Umbraco 13 using its Members feature.

    A custom Member Type was created so I could add field properties to specifically store all member registration data. This consisted of Textbox, Textarea and Dropdown fields.

    Getting values for fields in code is very straightforward, but I encountered issues when dealing with fields that consist of preset values, such as a Dropdown list of titles (Mr/Mrs/Ms, etc.).

    Based on the Umbraco documentation for working with a Dropdown field, I should be able to get the selected value through this one line of code:

    @if (Model.HasValue("title"))
    {
        <p>@(Model.Value<string>("title"))</p>
    }
    

    When working with custom properties from a Member Type, the approach is different. GetValue() is the only accessor available to us to output a value - something we are already accustomed to when working in Umbraco.

    IMember? member = memberService.GetByEmail("johndoe@gmail.com");
    string title = member.Properties["title"].GetValue()?.ToString(); // Output: "[\"Mr\"]"
    

    However, the value is returned as a serialized array. This is also the case when using the typed GetValue() accessor on the property:

    IMember? member = memberService.GetByEmail("johndoe@gmail.com");
    string title = member.GetValue<string>("title"); // Output: "[\"Mr\"]"
    

    Umbraco 13 - Dropdown Value From Custom Member Type Property

    The only way to get around this was to create a custom extension method to deserialize the string array so the value alone could be output:

    using Newtonsoft.Json;
    using Umbraco.Cms.Core.Models;
    
    public static class MemberPropertyExtensions
    {
        /// <summary>
        /// Gets the selected value of a Dropdown property.
        /// </summary>
        /// <param name="property"></param>
        /// <returns></returns>
        public static string? GetSelectedDropdownValue(this IProperty property)
        {
            if (property == null)
                return string.Empty;
    
            string? value = property?.GetValue()?.ToString();
    
            if (string.IsNullOrEmpty(value))
                return string.Empty;
    
            string[]? propertyArray = JsonConvert.DeserializeObject<string[]>(value);
    
            return propertyArray?.FirstOrDefault();
        }
    }
    

    It's a simple but effective solution. Now our original code can be updated by adding our newly created GetSelectedDropdownValue() method to the property:

    IMember? member = memberService.GetByEmail("johndoe@gmail.com");
    string title = member.Properties["title"].GetSelectedDropdownValue();
    

    Useful Information

  • I've had a Spotify music membership for as long as I can remember. Many other subscriptions held throughout my life have come and gone, but Spotify has stood the test of time.

    A seed of doubt was planted when Spotify began raising the prices of their plans multiple times over a short period of time, beginning in April 2021. Even then, I was relatively unconcerned; it was annoying, but I felt content knowing there were no better music providers that could compete with what Spotify provided. Spotify made music very accessible to me in every way.

    During the first price hike, I trialled Apple Music during a brief period of insanity only to quickly come running back to the safety of Spotify.

    The penny dropped in May 2024, during the third price hike, when I began to question whether my Spotify usage was worth paying £11.99 per month. Even though I listen to music, I occasionally go through periods where I only listen to podcasts, which are freely available online and on podcasting platforms.

    First Steps To Considering YouTube Music As A Viable Replacement

    Before making any hasty decisions, I audited all the subscriptions both my wife and I use to see if there was any possibility of making cost savings... Just like a Conservative party government imposing austerity measures, except my actions wouldn't lead to a Liz Truss-level economic crisis.

    Then I discovered my wife's YouTube Premium subscription, which she had purchased through the Apple App Store for an absurdly high price. A word to the wise: never buy subscriptions through Apple's App Store, because Apple charges a commission on top. My wife was paying around £18 per month compared to £12.99 if purchased directly from the YouTube website.

    I digress...

    This was enough to get me thinking about upgrading to the Family tier that included:

    • Ad-free videos
    • Offline downloads
    • YouTube Music
    • Add up to 5 members to the subscription

    All this for £19.99 per month. At this price, we would be making savings by moving away from our individual YouTube and Spotify plans. I was already sold on ad-free videos (those advertisements are so annoying!) and if I could be persuaded to subscribe to YouTube Music, this would end up being a very cost-effective option.

    The writing was on the wall. My Spotify days were numbered. I looked into what was involved (if possible) in migrating all my playlists over to YouTube Music.

    Requirements and Initial Thoughts of YouTube Music

    Prior to carrying out any form of migration, I opted for a 30-day free trial of YouTube Music, as I wanted to see if it met as many of my key requirements as possible.

    Requirement | Requirement Met?
    Availability of all songs from artists I listen to, including the obscure ones | Yes
    Podcasts | Big yes
    Native macOS app | Room for improvement
    Ability to cast music to the speakers on my network | Yes
    Quality new music suggestions | Yes

    Overall, YouTube Music met the majority of my requirements. As expected, it does take a little while to familiarise oneself with the interface, but there are similarities when compared with Spotify.

    YouTube Music - The Extension of YouTube

    YouTube Music is really an extension of YouTube in how it is able to pull in specific YouTube content, whether that is music videos, podcasts or shows. All the audio-related content in video form that you would normally view on YouTube is encompassed here. In most cases this is an advantage; the only aspect where the lines between music and video get blurred is in the auto-generated "Liked music" playlist.

    You may find the "Liked music" playlist is already prefilled with videos you have liked on YouTube. If YouTube Music deems a liked video to be music, it will also be shown here, which isn't necessarily accurate. For example, it automatically listed a Breaking Bad parody video I liked nine years ago. If you prefer your randomly liked videos to stay solely in YouTube, you have to manually disable the "Show your liked music from YouTube" feature in the settings.

    The Music Catalog and New Music Recommendations

    The music catalog size is on par with Spotify and there hasn't been a time when a track wasn't available. In fact, there were three or four tracks in my Spotify playlist that were no longer accessible, but this was not the case on YouTube Music, which was a surprise.

    When I am searching for new music, I have found the recommendation algorithm far better than Spotify's, and after a couple of weeks of using YouTube Music it had compiled some really good personalised mixes - something that will only get better in time. Due to its link with YouTube, I was also recommended more live performances, remixes and cover tracks.

    What surprised me the most is a feature I didn't even think I needed: the Offline Mixtape. There are times when I don't actually know what tracks I want to listen to when on the road, and the Offline Mixtape compiles a list of tracks consisting of a combination of my liked songs and similar tracks for added variation, all automatically synchronised to my devices.

    Podcasts

    I didn't have any issues finding the podcasts I listen to on Spotify. There is an added benefit of playing a podcast as audio or video (if the podcast offers this format), which is a nice touch. I was also recommended new types of podcasts I would never have been exposed to, based on what I listen to. I am sure (and correct me if I am wrong) Spotify didn't make recommendations as visible as what I am seeing in YouTube Music, where podcasts are categorised. For example, the categories offered to me are: Wealth, Finances, Health, Mysteries, etc.

    Lack of Native Desktop App

    The lack of a native desktop app detracts from my otherwise glowing review of YouTube Music. I was surprised to find that there isn't one, given that this is the norm among other music providers.

    Chrome allows you to install it as a Progressive Web App, which is better than nothing, but it just doesn't feel integrated enough. I keep accidentally closing the YouTube Music app on macOS by clicking the "close" button when all I want to do is hide the window.

    It can also be laggy at times, especially when Chromecasting to a smart speaker. When I change tracks, my speaker takes a few seconds to catch up.

    Overall, it's good but not great. It doesn't have the same polish as the Spotify app, but it's definitely manageable. The lack of a native desktop app has not dissuaded me from using it, and if needed, I can always use the YouTube Music app on my Pixel or iPad.

    The Migration

    After a satisfactory trial period using YouTube Music, I looked for ways to move all my Spotify playlists. There are many options through online services and software that can aid the migration process, which can be used for free (sometimes with limitations) or at a cost.

    After carrying out some research on the various options available to me, I opted for a free CLI tool built in Python: spotify_to_ytmusic. It received a lot of positive feedback in a Reddit post, where users reported migrating thousands of songs spanning multiple playlists with ease. The only disadvantage of free options that provide unlimited migration is that they aren't necessarily straightforward for the average user, and some technical acumen is required.

    The installation, setup and familiarising yourself with the CLI commands of the spotify_to_ytmusic application is the only part that takes some time. But once you have generated API keys in both Spotify and Google and followed the instructions detailed in the GitHub repo, the migration process itself doesn't take long at all.

    Conclusion

    When I told one of my coworkers that I had switched to YouTube Music, I received a sceptical look and was asked to confirm I was of sound mind. This exemplifies how we have simply accepted Spotify as the only acceptable music platform, blind to the alternatives.

    YouTube Premium, which includes YouTube Music in one package, is an extremely good deal. Not only can you watch YouTube videos ad-free, but you also get a music library comparable to Spotify at a similar price.

    If you have been questioning whether YouTube Music is worth a try, question no more and make the move.

  • The Google Maps Distance Matrix API gives us the capability to calculate travel distance and time between multiple locations across different modes of transportation, such as driving, walking or cycling. This is just one of the many APIs Google provides to allow us to get the most out of location and route-related data.

    I needed to use the Google Distance Matrix API (GDMA) to calculate the distance of multiple points of interest (destinations) from one single origin. The dataset of destinations consisted of sixty to one hundred rows of data containing the following:

    • Title
    • Latitude
    • Longitude

    This dataset would need to be passed to the GDMA as destinations in order to get information on how far each item was from the origin. One thing that came to light during integration was that the API is limited to outputting distance data for 25 destinations per request.

    The limit posed by the GDMA would be fine for the majority of use cases, but in my case it posed a small problem, as I needed to pass the whole dataset of destinations to ensure all points of interest were ordered by the shortest distance.

    The only way I could get around the limits posed by the GDMA was to batch my requests 25 destinations at a time. The dataset I would be passing would never exceed 100 items, so I was fairly confident this would be an adequate approach. However, I cannot be 100% certain what the implications of such an approach would be if you were dealing with thousands of destinations.

    The code below demonstrates a small sample-set of destination data that will be used to calculate distance from a single origin.

    /*
    	Initialise the application functionality.
    */
    const initialise = () => {
    	const destinationData = [
                        {
                          title: "Wimbledon",
                          lat: 51.4273717,
                          long: -0.2444923,
                        },
                        {
                          title: "Westfields Shopping Centre",
                          lat: 51.5067724,
                          long: -0.2289425,
                        },
                        {
                          title: "Sky Garden",
                          lat: 51.3586154,
                          long: -0.9027887,
                        }
                      ];
                      
    	getDistanceFromOrigin("51.7504091", "-1.2888729", destinationData);
    }
    
    /*
    	Processes a list of destinations and outputs distances closest to the origin.
    */
    const getDistanceFromOrigin = (originLat, originLong, destinationData) => {
      const usersMarker = new google.maps.LatLng(originLat, originLong);
      let distanceInfo = [];
      
      if (destinationData.length > 0) {
      	// Segregate dealer locations into batches.
        const destinationBatches = chunkArray(destinationData, 25);
    
        // Make a call to Google Maps in batches.
        const googleMapsRequestPromises = destinationBatches.map(batch => googleMapsDistanceMatrixRequest(usersMarker, batch));
    
        // Iterate through all the aynchronous promises returned by Google Maps batch requests.
        Promise.all(googleMapsRequestPromises).then(responses => {
          const elements = responses.flatMap(item => item.rows).flatMap(item => item.elements);
    
          // Set the distance for each dealer in the dealers data
          elements.map(({ distance, status }, index) => {
            if (status === "OK") {
              destinationData[index].distance = distance.text;
              destinationData[index].distance_value = distance.value;
            }
          });
          
          renderTabularData(destinationData.sort((a, b) => (a.distance_value > b.distance_value ? 1 : -1)));
        })
        .catch(error => {
          console.error("Error calculating distances:", error);
        });
      }
    }
    
    /*
    	Outputs tabular data of distances.
    */
    const renderTabularData = (destinationData) => {
    	let tableHtml = "";
      
        tableHtml = `<table>
                        <tr>
                            <th>No.</th>
                            <th>Destination Name</th>
                            <th>Distance</th>
                        </tr>`;
    
      if (destinationData.length === 0) {
            tableHtml += `<tr>
                            <td colspan="3">No data</td>
                        </tr>`;
      }
      else {
            destinationData.map((item, index) => {
      		        tableHtml += `<tr>
                                    <td>${index+1}</td>
                                    <td>${item.title}</td>
                                    <td>${item.distance}</td>
                                </tr>`;
                });
      }
      
      tableHtml += `</table>`;
      
      document.getElementById("js-destinations").innerHTML = tableHtml;
    }
    
    /*
    	Queries Google API Distance Matrix to get distance information.
    */
    const googleMapsDistanceMatrixRequest = (usersMarker, destinationBatch) => {
      const distanceService = new google.maps.DistanceMatrixService();
      let destinationsLatLong = [];
      
      if (destinationBatch.length === 0) {
      	return;
      }
      
      destinationBatch.map((item, index) => {
        destinationsLatLong.push({
          lat: parseFloat(item.lat),
          lng: parseFloat(item.long),
        });
      });
      
      const request = 
            {
              origins: [usersMarker],
              destinations: destinationsLatLong,
              travelMode: "DRIVING",
            };
    
      return new Promise((resolve, reject) => {
        distanceService.getDistanceMatrix(request, (response, status) => {
          if (status === "OK") {
            resolve(response);
          } 
          else {
            reject(new Error(`Unable to retrieve distances: ${status}`));
          }
        });
      });
    };
    
    /*
    	Takes an array and resizes to specified size.
    */
    const chunkArray = (array, chunkSize) => {
      const chunks = [];
    
      for (let i = 0; i < array.length; i += chunkSize) {
        chunks.push(array.slice(i, i + chunkSize));
      }
    
      return chunks;
    }
    
    /*
    	Load Google Map Distance Data.
    */
    initialise();
    

    getDistanceFromOrigin() and googleMapsDistanceMatrixRequest() are the key functions that take the list of destinations, batch them into chunks of 25 and return a tabular list of data. This code could be expanded further to render each destination as a pin on an embedded Google Map, since we already have the latitude and longitude points.
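
    As a rough illustration of that expansion, the sketch below plots the origin and destinations as markers. The "js-map" container element and the renderDestinationPins() function are assumptions for this example and are not part of the demo:

    // Hypothetical expansion: render the origin and destinations as map pins.
    // Assumes an element with id "js-map" exists on the page.
    const renderDestinationPins = (originLat, originLong, destinationData) => {
      const origin = { lat: parseFloat(originLat), lng: parseFloat(originLong) };

      const map = new google.maps.Map(document.getElementById("js-map"), {
        center: origin,
        zoom: 9,
      });

      // Mark the origin.
      new google.maps.Marker({ position: origin, map, title: "Origin" });

      // Add a pin per destination, including the calculated distance when available.
      destinationData.forEach((item) => {
        new google.maps.Marker({
          position: { lat: parseFloat(item.lat), lng: parseFloat(item.long) },
          map,
          title: item.distance ? `${item.title} (${item.distance})` : item.title,
        });
      });
    };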

    The full working demo can be found via the following link: https://jsfiddle.net/sbhomra/ns2yhfju/. To run this demo, a Google Maps API key needs to be provided, which you will be prompted to enter on load.

  • The Silent Blogger

    Everyone has different reasons for blogging. It could be for professional development, knowledge exchange, documenting a personal journey, or just as a form of self-expression. My motive for blogging includes a small portion of each of these reasons, with one major difference: you have to find me.

    I don't go out of my way to promote this small portion of the internet web-sphere that I own. In the past, I experimented with syndicating articles to more prominent blogging media platforms and communities, but it didn't fulfil my expectations or bring any further benefits.

    I've observed that my demeanour mirrors my approach to blogging, in that I don't feel the need to go to excessive lengths to broadcast my accomplishments or the problems I've solved. This could be due to my age, as I am more comfortable just being myself. I have nothing to prove to anyone.

    The 13th-century poet Rumi once said:

    In silence, there is eloquence. Stop weaving and see how the pattern improves.

    This quote implies that silence is the source of clarity that allows thoughts to develop naturally and emerge.

    Ever since I stopped the pursuit of recognition and the somewhat futile attempt to force my written words onto others, the natural order has allowed this blog to grow organically. Those who have found me through keyword searches have brought better interaction and monetisation (through my Buy Me A Coffee page). Fortunately, since I've made an effort to make this blog as SEO-friendly as possible, my posts appear to perform fairly well across search engines.

    No longer do I stress over feeling the need to write blog posts using the "carrot and stick" approach just to garner more readership. I've found I benefit from blogging about the things that interest me. It's quality over quantity.

    If you have got this far in this very random admission of silent blogging, you're probably thinking: So what's your point?

    I suppose what I'm trying to say is that it's okay to blog without the expectation of having to promote every single post to the world in hope of some recognition. Previously, this was my way of thinking, and I've since realised that I was blogging (for the most part) for the wrong reasons. In a post written in 2019, I described my pursuit to be in the same league as the great bloggers I idolised:

    I look at my blogging heroes like Scott Hanselman, Troy Hunt, Mosh Hamedani and Iris Classon (to name a few) and at times ponder if I will have the ability to churn out great posts on a regular basis with such ease and critical acclaim as they do.

    I've learnt not to be so hard on myself and to lessen my expectations. When you trade your expectations for appreciation, your whole world changes; even though a sense of achievement feels great, it's far more important to enjoy what you're doing (roughly paraphrasing Tony Robbins here).

    This new perspective has reaffirmed my belief that I have always enjoyed blogging, but being a silent blogger provides a sense of freedom.