Blog

Categorised by 'Software & Applications'.

  • At times there is a need to get a list of files that have been updated. This could be for the following reasons:

    • Audit compliance to maintain records of application changes.
    • Backup verification to confirm the right files were backed up.
    • Verification of changed files to confirm which files were added, modified, or deleted during an update.
    • Security checks to ensure that no unauthorised or suspicious files have been changed or installed through hacking.
    • Troubleshooting issues after a new application release, where a list of changed files can help identify the source of problems.

    Based on the information I found online, I put together a PowerShell script that was flexible enough to meet the needs of the above scenarios, as I encountered one of them this week. I'll let you guess the scenario I faced.

    At its core, the following PowerShell script uses the Get-ChildItem command to list all files recursively across all sub-folders, ordered by creation date descending, with the addition of a handful of optional parameters.

    Get-ChildItem -Path C:\My-Path -Recurse -Include *.png |
    			Sort-Object -Property CreationTime -Descending |
    			Select-Object -First 5 CreationTime,LastWriteTime,FullName |
    			Export-Csv "file-list.csv"
    

    Breakdown of the parameters used:

    Parameter/Object | Detail | Is Optional
    ---------------- | ------ | -----------
    -Path | The folder path where files need to be listed. | No
    -Recurse | Get files from the path and its subdirectories. | Yes
    -Include | Filter the file output using a path element or pattern. This only works when the -Recurse parameter is present. | Yes
    Select-Object | Limit the output (-First) and specify the fields to be listed. | Yes
    Sort-Object | Specify the field and sort order. | Yes
    Export-Csv | Export the list of files to a CSV file. | Yes

    If the files need to be sorted by last modified date, the Sort-Object property needs to be set to "LastWriteTime".
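
    For reference, here is a minimal sketch of the same pipeline adjusted to sort by last modified date instead (it assumes the same example path and filter as above):

    Get-ChildItem -Path C:\My-Path -Recurse -Include *.png |
    			Sort-Object -Property LastWriteTime -Descending |
    			Select-Object -First 5 CreationTime,LastWriteTime,FullName |
    			Export-Csv "file-list.csv"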

    When the script is run, you'll see the results rendered in the following way:

    CreationTime        LastWriteTime       FullName
    ------------        -------------       --------
    25/05/2023 20:33:44 25/05/2023 20:33:44 X:\Downloads\synology\Screenshot 2023-05-25 at 20.33.38.png
    16/05/2023 14:18:21 16/05/2023 14:18:21 X:\Downloads\synology\Screenshot 2023-05-16 at 14.18.15.png
    

    Further Information

  • I've had a Spotify music membership for as long as I can remember. Many other subscriptions held throughout my life have come and gone, but Spotify has stood the test of time.

    A seed of doubt was planted when Spotify began raising the prices of their plans multiple times over a short period of time, beginning in April 2021. Even then, I was relatively unconcerned; it was annoying, but I felt content knowing there were no better music providers that could compete with what Spotify provided. Spotify made music very accessible to me in every way.

    During the first price hike, I trialled Apple Music during a brief period of insanity only to quickly come running back to the safety of Spotify.

    The penny dropped in May 2024, during the third price hike, when I began to question whether my Spotify usage was worth paying £11.99 per month. Even though I listen to music, I occasionally go through periods where I only listen to podcasts, which are freely available online and on podcasting platforms.

    First Steps To Considering YouTube Music As A Viable Replacement

    Before making any hasty decisions, I audited all the subscriptions both my wife and I use to see if there was any possibility of making cost savings... Just like a Conservative party government imposing austerity measures, except my actions wouldn't lead to a Liz Truss-level economic crisis.

    Then I discovered my wife's YouTube Premium subscription, which she had purchased through the Apple App Store for an absurdly high price. A word to the wise: never buy subscriptions through Apple's App Store, because Apple charges a commission on top. My wife was paying around £18 per month compared to £12.99 if purchased directly from the YouTube website.

    I digress...

    This was enough to get me thinking about upgrading to the Family tier that included:

    • Ad-free videos
    • Offline downloads
    • YouTube Music
    • Add up to 5 members to the subscription

    All this costing £19.99 per month. At this price, we would be making savings if we moved away from our individual YouTube and Spotify plans. I was already sold on ad-free videos (those advertisements are so annoying!) and if I could be persuaded to subscribe to YouTube Music, this would end up being a very cost-effective option.

    The writing was on the wall. My Spotify days were numbered. I looked into what was involved (if possible) in migrating all my playlists over to YouTube Music.

    Requirements and Initial Thoughts of YouTube Music

    Prior to carrying out any form of migration, I opted for a 30-day free trial of YouTube Music, as I wanted to see if it met as many of my key requirements as possible.

    Requirement | Requirement Met?
    ----------- | ----------------
    Availability of all songs from artists I listen to, including the obscure ones | Yes
    Podcasts | Big yes
    Native MacOS app | Room for improvement
    Ability to cast music to my speakers on my network | Yes
    Quality new music suggestions | Yes

    Overall, YouTube Music met the majority of my requirements. As expected, it does take a little while to familiarise oneself with the interface, but there are similarities when compared with Spotify.

    YouTube Music - The Extension of YouTube

    YouTube Music is really an extension of YouTube in how it is able to pull in specific YouTube content, whether that is music videos, podcasts or shows. All the audio-related content in video form that you would normally view on YouTube is encompassed here. In most cases this is seen as an advantage; however, the one aspect where the lines between music and video get blurred is the auto-generated "Liked music" playlist.

    You may find the "Liked music" playlist is already prefilled with videos you have liked on YouTube. If YouTube Music deems a liked video to be music, it will also be shown here, which isn't necessarily accurate. For example, it automatically listed a Breaking Bad parody video I liked 9 years ago. If you prefer your randomly liked videos to stay solely in YouTube, you have to manually disable the "Show your liked music from YouTube" feature in the settings.

    The Music Catalog and New Music Recommendations

    The music catalog size is on par with Spotify's, and there hasn't been a time when a track wasn't available. In fact, there were 3-4 tracks in my Spotify playlist that were no longer accessible, but this was not the case on YouTube Music, which was a surprise.

    When I am on the search for new music, I found the recommendation algorithm far better than Spotify's, and after a couple of weeks of using YouTube Music it had compiled some really good personalised mixes - something that will only get better in time. Due to its link with YouTube, I was also recommended more live performances, remixes and cover tracks.

    What surprised me the most is a feature I didn't even think I needed: the Offline Mixtape. There are times when I don't actually know what tracks I want to listen to when on the road, and the Offline Mixtape compiles a list consisting of a combination of my liked songs and similar tracks for added variation - all automatically synchronised to my devices.

    Podcasts

    I didn't have any issues finding the podcasts I listen to on Spotify on YouTube Music. There is the added benefit of playing a podcast as audio or video (if the podcast offers this format), which is a nice touch. I was also recommended new types of podcasts that I would never have been exposed to, based on what I listen to. I am sure (and correct me if I am wrong) Spotify didn't make recommendations as visible as what I am seeing in YouTube Music, where podcasts are categorised - for example, the categories offered to me are Wealth, Finances, Health, Mysteries, etc.

    Lack of Native Desktop App

    The lack of a native desktop app detracts from my otherwise glowing review of YouTube Music. I was surprised to find that there isn't one, given that this is the norm among other music providers.

    Chrome allows you to install it as a Progressive Web App, which is better than nothing, but it just doesn't feel integrated enough. I keep accidentally closing the YouTube Music app on MacOS by clicking the "close" button when all I want to do is hide the window.

    It can also be laggy at times, especially when Chromecasting to a smart speaker. When I change tracks, my speaker takes a few seconds to catch up.

    Overall, it's good but not great. It doesn't have the same polish as the Spotify app, but it's definitely manageable. The lack of a native desktop app has not dissuaded me from using it; if needed, I can always use the YouTube Music app on my Pixel or iPad.

    The Migration

    After a satisfactory trial period using YouTube Music, I looked for ways to move all my Spotify playlists. There are many options through online services and software that can aid the migration process, which can be used for free (sometimes with limitations) or at a cost.

    After carrying out some research on the various options available to me, I opted for a free CLI tool built in Python: spotify_to_ytmusic. It received a lot of positive feedback on a Reddit post, where users were able to migrate thousands of songs spanning multiple playlists with ease. The only disadvantage with free options that provide unlimited migration is that they aren't necessarily straightforward for the average user, and some technical acumen is required.

    The installation, setup and familiarising yourself with the CLI commands of the spotify_to_ytmusic application is the only part that takes some time. But once you have generated API keys in both Spotify and Google and followed the instructions detailed in the GitHub repo, the migration process itself doesn't take long at all.

    Conclusion

    When I told one of my coworkers that I had switched to YouTube Music, I received a sceptical look and a question as to whether I was of sane mind. This exemplifies how we have simply come to treat Spotify as the only music platform worth using, blinded to alternatives.

    YouTube Premium, which includes YouTube Music in one package, is an extremely good deal. Not only can you watch YouTube videos ad-free, but you also get a music library comparable to Spotify at a similar price.

    If you have been questioning whether YouTube Music is worth a try, question no more and make the move.

  • While manually importing data into a Google Sheet to complete the boring chore of data restructuring, I wondered if there was any way that the initial import might be automated. After all, it would be much more efficient to link directly to an external platform to populate a spreadsheet.

    Google App Scripts provides a UrlFetchApp service giving us the ability to make HTTP POST and GET requests against an API endpoint. The following code demonstrates a simple API request to a HubSpot endpoint that will return values from a Country field by performing a GET request with an authorization header.

    function run() {
      apiFetch();
    }
    
    function apiFetch() {
      // API Endpoint options, including header options.
      var apiOptions = {
         "async": true,
         "crossDomain": true,
         "method" : "GET",
         "headers" : {
           "Authorization" : "Bearer xxx-xxx-xxxxxxx-xxxxxx-xxxxx",
           "cache-control": "no-cache"
         }
       };
    
      // Fetch contents from API endpoint.
      const apiResponse = UrlFetchApp.fetch("https://api.hubapi.com/crm/v3/properties/contact/country?archived=false", apiOptions);
      
      // Parse response as a JSON object.
      const data = JSON.parse(apiResponse.getContentText());
    
      // Populate "Sheet1" with data from API.
      if (data !== null && data.options.length > 0) {
          // Select the sheet.
          const activeSheet = SpreadsheetApp.getActiveSpreadsheet();
          const sheet = activeSheet.getSheetByName("Sheet1");
    
          for (let i = 0; i < data.options.length; i++) {
            const row = data.options[i];
    
            // Add value cells in Google sheet.
            sheet.getRange(i+1, 1).setValue(row.label);
          }
      }
    }
    

    When this script is run, a request is made to the API endpoint to return a JSON response containing a list of countries that will populate the active spreadsheet.
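
    For context, the response from this HubSpot properties endpoint is a JSON object describing the "country" property, with its selectable values held in an options array. The script reads the label of each entry in that array. An abridged, illustrative shape (the real payload contains more fields) looks roughly like this:

    {
      "name": "country",
      "label": "Country",
      "type": "enumeration",
      "options": [
        { "label": "United Kingdom", "value": "gb" },
        { "label": "United States", "value": "us" }
      ]
    }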

    The Google App Script official documentation provides even more advanced options for configuring the UrlFetchApp service to ensure you are not limited in how you make your API requests.
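
    As a rough sketch of what those options look like in practice, a POST request with a JSON payload could be configured as follows (the endpoint, token and payload below are placeholders):

    function apiPost() {
      var postOptions = {
        "method": "post",
        "contentType": "application/json",
        "headers": {
          "Authorization": "Bearer xxx-xxx-xxxxxxx-xxxxxx-xxxxx"
        },
        // Serialise the request body to JSON.
        "payload": JSON.stringify({ "name": "example" }),
        // Return the response body even when the API responds with an error status code.
        "muteHttpExceptions": true
      };

      var response = UrlFetchApp.fetch("https://api.example.com/endpoint", postOptions);
      Logger.log(response.getResponseCode());
    }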

    In such little code, we have managed to populate a Google Sheet from an external platform. I can see this being useful in a wide variety of use cases to make a Google Sheet more intelligent and reduce manual data entry.

    In the future, I'd be very interested in trying out some AI-related integrations using the ChatGPT API. If I manage to think of an interesting use case, I'd definitely write a follow-up blog post.

  • I've been delving further into the world of Google App Scripts and finding it my go-to when I have to carry out any form of data manipulation. I haven't needed to develop a custom C#-based import tool to handle the sanitisation and restructuring of data since learning the Google App Script approach.

    In this post, I will be discussing how to search for a value within a Google Sheet and return all the columns of the row in which the searched value resides. As an example, let's take a few columns from a dataset of ISO-3166 Country and Region codes as provided by this CSV file and place them in a Google Sheet named "Country Data".

    The "Country Data" sheet should have the following structure:

    name | alpha-2 | alpha-3 | country-code
    ---- | ------- | ------- | ------------
    Australia | AU | AUS | 036
    Austria | AT | AUT | 040
    Azerbaijan | AZ | AZE | 031
    United Kingdom of Great Britain and Northern Ireland | GB | GBR | 826
    United States of America | US | USA | 840

    App Script 1: Returning A Single Row Value

    Our script will be retrieving the two-letter country code by the country name - in this case "Australia". To do this, the following will be carried out:

    1. Perform a search on the "Country Data" sheet using the findAll() function.
    2. The getRow() function will return a single row containing all country information.
    3. A combination of the getLastColumn() and getRange() functions will output values from the row.

    function run() {
      var twoLetterIsoCode = getCountryTwoLetterIsoCode("Australia"); 
    }
    
    function getCountryTwoLetterIsoCode(countryName) {
      var activeSheet = SpreadsheetApp.getActiveSpreadsheet();
      var countryDataSheet = activeSheet.getSheetByName('Country Data');
    
      // Find text within sheet.
      var textSearch = countryDataSheet.createTextFinder(countryName).findAll();
    
      if (textSearch.length > 0) {
        // Get single row from search result.
        var row = textSearch[0].getRow();    
        // Get the last column so we can use for the row range.
        var rowLastColumn = countryDataSheet.getLastColumn();
        // Get all values for the row.
        var rowValues = countryDataSheet.getRange(row, 1, 1, rowLastColumn).getValues();
    
        return rowValues[0][1]; // Two-letter ISO code from the second column.
      }
      else {
        return "";
      }
    }
    

    When the script is run, the twoLetterIsoCode variable will contain the two-letter ISO code: "AU".

    App Script 2: Returning Multiple Row Matches

    If we had a dataset that contained multiple matches based on a search term, the script from the first example can be modified using the same fundamental functions. In this case, all we need to do is use a for loop and pass all row values to an array.

    The getCountryTwoLetterIsoCode() function will look something like this:

    function getCountryTwoLetterIsoCode(countryName) {
      var activeSheet = SpreadsheetApp.getActiveSpreadsheet();
      var countryDataSheet = activeSheet.getSheetByName('Country Data');
    
      // Find text within sheet.
      var textSearch = countryDataSheet.createTextFinder(countryName).findAll();
    
      // Array to store all matched rows.
      var searchRows = [];
    
      if (textSearch.length > 0) {
        // Loop through matches.
        for (var i=0; i < textSearch.length; i++) {
          var row = textSearch[i].getRow();  
          // Get the last column so we can use for the row range.
          var rowLastColumn = countryDataSheet.getLastColumn();
          // Get all values for the row.
          var rowValues = countryDataSheet.getRange(row, 1, 1, rowLastColumn).getValues(); 
    
          searchRows.push(rowValues);
        }
      }
    
      return searchRows;
    }
    

    The searchRows array will contain a collection of matched rows along with their column data. To produce a similar output to the first App Script example - the two-letter country code - the function can be called in the following way:

    // Get first match.
    var matchedCountryData = getCountryTwoLetterIsoCode("Australia")[0];
    
    // Get the second column value (alpha-2).
    var twoLetterIsoCode = matchedCountryData[0][1];
    

    Conclusion

    Both examples have demonstrated different ways of returning the row values for a search term. The two key lines of code that allow us to do this are:

    // Get the last column so we can use for the row range.
    var rowLastColumn = countryDataSheet.getLastColumn();
    
    // Get all values for the row.
    var rowValues = countryDataSheet.getRange(row, 1, 1, rowLastColumn).getValues();
    
  • Whenever there is a need to restructure an Excel spreadsheet to an acceptable form to be used for a SaaS platform or custom application, my first inclination is to build something in C# to get the spreadsheet into a form I require.

    This week I felt adventurous and decided to break the mundane job of formatting a spreadsheet using an approach I've been reading up on for some time but just never got a chance to apply in a real-world scenario - Google App Scripts.

    What Is A Google App Script?

    Released in 2009, Google App Scripts is a cloud-based platform that allows you to automate tasks across Google Workspace products such as Drive, Docs, Sheets, Calendar, Gmail, etc. You could think of App Scripts as similar to writing a macro in Microsoft Office. They both can automate repeatable tasks and extend the standard features of the application.

    The great thing about Google App Script development is being able to use popular web languages (HTML/CSS/JavaScript) to build something custom. Refreshing when compared to the more archaic option of using VBA in Microsoft Office.

    Some really impressive things can be achieved using App Scripts within the Google ecosystem.

    Google Sheets App Script

    The Google App Script I wrote fulfils the job of taking the contents of cells in a row from one spreadsheet to be copied into another. The aim is to carry out automated field mapping, where the script would iterate through each row from the source spreadsheet and create a new row in the target spreadsheet where the cell value would be placed in a different column.

    This example will demonstrate a very simple approach where the source spreadsheet will contain five columns where each row contains numbers in ascending order to then be copied to the target spreadsheet in descending order.

    Before we add the script, we need to create two spreadsheets:

    • Source sheet: Source - Numbers Ascending
    • Target sheet: Destination - Numbers Descending

    The source sheet should mirror the same structure as the screenshot (below) illustrates.

    Google Sheet - Source

    The target sheet just needs to contain the column headers.

    The App Script can be created by:

    1. Navigating to Extensions > App Scripts from the toolbar. This will open a new tab presenting an interface to manage our scripts.
    2. In the "Files" area, press the "+" and select "Script".
    3. Name the script file: "export-cells-demo.gs".

    Add the following code:

    // Initialiser.
    function run() {
      sendDataToDestinationSpreadSheet();
    }
    
    // Copies values from a source spreadsheet to the target spreadsheet.
    function sendDataToDestinationSpreadSheet() {
      var activeSheet = SpreadsheetApp.getActiveSpreadsheet();
    
      // Get source spreadsheet by its name.
      var sourceSheet = activeSheet.getSheetByName('Source - Numbers Ascending');
    
      // Select the source spreadsheet cells.
      var sourceColumnRange = sourceSheet.getRange('A:E');
      var sourceColumnValues = sourceColumnRange.getValues();
    
      // Get target spreadsheet by its name..
      var targetSheet = activeSheet.getSheetByName('Destination - Numbers Descending');
    
      // Iterate through all rows from the source sheet.
      // Start index at 1 to ignore the column header.
      for(var i = 1; i < sourceColumnValues.length; i++) {
        // Get the cell value for the row.
        var column1 = sourceColumnValues[i][0];
        var column2 = sourceColumnValues[i][1];
        var column3 = sourceColumnValues[i][2];
        var column4 = sourceColumnValues[i][3];
        var column5 = sourceColumnValues[i][4];
        
        // Use getRange() to get the value position by declaring the row and column number.
        // Use setValue() to copy the value into target spreadsheet column.
        targetSheet.getRange(i+1, 1).setValue(column5);
        targetSheet.getRange(i+1, 2).setValue(column4);
        targetSheet.getRange(i+1, 3).setValue(column3);
        targetSheet.getRange(i+1, 4).setValue(column2);
        targetSheet.getRange(i+1, 5).setValue(column1);
      }
    }
    

    The majority of this script should be self-explanatory with the aid of comments. The only part that requires further explanation is where the values in the target sheet are set, as this is where we insert the numbers for each row in descending order:

    ...
    ...
    targetSheet.getRange(i+1, 1).setValue(column5);
    targetSheet.getRange(i+1, 2).setValue(column4);
    targetSheet.getRange(i+1, 3).setValue(column3);
    targetSheet.getRange(i+1, 4).setValue(column2);
    targetSheet.getRange(i+1, 5).setValue(column1);
    ...
    ...
    

    As used here, the getRange function accepts two parameters: the row number and the column number. In this case, the row number is acquired from the for loop index, as we're using the same row position in both source and target sheets. However, we want to change the position of the columns in order to display the numbers in descending order. To do this, I set the first column in the target sheet to contain the value of the last column from the source sheet and carried on from there.

    All that needs to be done now is to run the script by selecting our "run()" function from the App Scripts toolbar and pressing the "Run" button.

    The target spreadsheet should now contain the numbered values for each row in descending order.

    Google Sheet - Target

    Voila! You've just created your first Google App Script in Google Sheets with simple field mapping.

    Conclusion

    Creating my first Google App Script in a real-world scenario to carry out some data manipulation has opened my eyes to the possibilities of what can be achieved without investing additional time developing something like a Console App to do the very same thing.

    There is a slight learning curve involved to understand the key functions required to carry out certain tasks, but this is easily resolved with a bit of Googling and reading through the documentation.

    My journey into Google App Scripts has only just begun and I look forward to seeing what else it has to offer!

  • NOTE: This post began as a demonstration of how quickly I can create and publish blog entries on the go using Working Copy Git Client on my iPad while in India. I got 90% of the way through this post, but I didn't have time to finish it, utterly defeating the point of the post. Anyway, without further ado…

    I thought this would be the most opportune moment to try out the Working Copy Git Client app on my iPad to see if I’m able to update my blog on the go. I’m also using this time as a small test to myself to see whether I’m able to focus and write on the fly anywhere. The last time I did anything similar was on my return from Bali, where I wrote my experiences offline using Evernote on my phone to then add to my website later.

    This time, I wanted to try something different as I have a few hours to kill. So I'm writing this post sitting in the seating area of Heathrow Airport waiting to board my flight. If everything goes to plan, this very post should auto-publish on commit to my website hosted on Netlify.

    I’m off to India to experience one of the most major milestones in my life - getting married Indian style! I’d like to class this as part deux of “getting married” after performing our English ceremony during the back end of Covid restrictions last year. So this seems a great time to put the Working Copy app and my iPad writing flow to the test, where my beloved Macbook Pro is nowhere in sight to aid the publishing of this post.

    As I write this post, it looks like my core writing eco-system will remain unchanged:

    1. Write up the post in Evernote.
    2. Spelling, phrasing and grammar check using Quillbot (previously Grammarly).
    3. Add post in markdown format to my Gatsby website project.
    4. Commit website updates.

    The true test will come when I write a post that contains more than a handful of images. It's something that doesn't happen very often - only when it comes to holidays. But I can see cropping, compressing and positioning images for a blog post on an iPad being a little fiddly, especially on a 10-inch iPad Air.

    So how does Working Copy fare in my very first piece of "on-the-go" writing?

    Initial Setup

    Getting up and running couldn't have been easier. After installation, I logged into my Bitbucket account, cloned my website repo and was ready to go. For a site housing approximately 294 posts, it has a relatively small footprint (images included): 54MB. I am so glad I decided to make the switch to GatsbyJS static-site generation and move away from the traditional server-side application connected to a database.

    Everything you'd expect from a git client is present, such as:

    • Clone
    • Pull
    • Push
    • Fetch
    • Merge
    • Branch Creation

    Editor

    The editor is as I would expect it to be: simple, clear and concise, with basic syntax highlighting - exactly what I would expect from an app built for a tablet device. If you think this can be a replacement for your traditional coding IDE, you'd be mistaken. Anyway, why would you want to do full-on coding on a tablet device?

    Working Copy Editor

    I like how easily navigable the interface is. There is little to no learning curve when using it for the first time.

    Native Integration with iOS Files

    Now, this is where I feel I'm in familiar territory: being able to drag and drop files, such as images and text files, from iOS's Files layer into my Git repo - just as if I were working on a laptop. I've also seen other users write their posts outside of Working Copy using the iA Writer or Pretext editor apps before dropping the text file into their repo.

    Working with Images

    I've never found image manipulation on the iPad that easy, hence why I like to use my Macbook Pro for the final finessing of a post before publishing - there I have the tools I need to resize and compress images. Most photo apps on the iPad solely pull in images housed in Apple Photos. Even though I have an iPad and a Macbook Pro, I don't like being locked into a single ecosystem, especially when the majority of apps work well across different platforms.

    Image Size is an app that allowed me to crop and resize photos stored in iOS Files by simply stating the dimensions I require. Added bonus: the app is free!

    Once the image is resized, I can carry out compression using TinyPNG and lastly simply perform a drag and drop into the Working Copy app.

    Conclusion

    I always wanted to have the ability to update my website on the go on a tablet device and Working Copy makes this very easy. When you add the iOS File System and Image Size app, you have everything for your writing needs.

    I never thought that I'd be comfortable publishing posts directly to my website from an iPad. Overall, I found the writing experience to be very efficient as I encountered fewer distractions when compared to working on my Macbook Pro.

    If there were anything I could change, it would be the size of my iPad. I did find Working Copy took up a lot of screen real estate, especially when I had multiple windows open. The 12-inch iPad Pro looks very tempting.

    Did I manage to write and submit this post directly from my iPad? Yes.
    Would I do it again? Yes!

  • I'm writing this post simply for the purpose of reminding myself of how I built my first shell script... I'm not sure if admitting this is a positive or negative thing. Writing shell scripts has never truly been a part of my day-to-day work, but I understood the core concepts and what they are able to do.

    I decided to write this shell script to perform a rather easy task: cycle through all CSS and JS files in a directory and replace a string within each file with the value entered into the shell script. The main use of this script, in my case, is to update path references in these files.

    #!/bin/bash

    # Script Name: CSS/JS Path Replacer #
    # Description: Iterates through all directories to replace strings found in CSS and JS files. #
    #               Place this bash script in the directory where the files reside. #
    #               For example: /resources. #
    
    # Take the search string.
    read -p "Enter the search string: " search
    
    # Take the replace string.
    read -p "Enter the replace string: " replace
    
    # Enable recursive globbing so the ** patterns match files in nested directories.
    shopt -s globstar
    
    FILES="./**/*.min.css ./**/*.css ./**/*.min.js ./**/*.js"
    
    for f in $FILES
    do
    	echo "Processing $f file..."
    
    	# Escape forward slashes in both values before handing them to sed.
    	sed -i "s/${search//\//\\/}/${replace//\//\\/}/g" "$f"
    done
    
    $SHELL
    

    To carry out a replace within each file, I'm using the sed command:

    sed -i "s/old-value/new-value/g" filename.txt
    

    This command tells sed to find all occurrences of "old-value" and replace them with "new-value" in a file called "filename.txt". As the main purpose of the script is to update path references within CSS and JS files, I had to escape forward slashes in the "search" and "replace" variables when passed to the sed command.
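
    To illustrate, with the example values used in the run below, the parameter expansion ${search//\//\\/} escapes each forward slash, so the command sed actually receives for each file is equivalent to:

    sed -i "s/\/surinder-v1\/resources/\/surinder-v2\/resources/g" ./css/site.css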

    Bash script in use:

    Enter the search string: /surinder-v1/resources
    Enter the replace string: /surinder-v2/resources
    Processing ./css/site.min.css file...
    Processing ./css/site.css file...
    Processing ./css/site.min.css file...
    Processing ./css/styleguide.css file...
    Processing ./js/global.min.js file...
    Processing ./js/global.min.js file...
    

    Once the script has processed all the files, any reference to "/surinder-v1/resources" will be replaced with "/surinder-v2/resources".

    Further useful information about using the "sed" command can be found in the following post: How to use sed to find and replace text in files in Linux / Unix shell.

  • I’ll be the first to admit that I very rarely (if at all!) assign a nice pretty share image to any post that gets shared on social networks. Maybe it’s because I hardly post what I write to social media in the first place! :-) Nevertheless, this isn’t the right attitude. If I am really going to do this, then the whole process needs to be quick and render a share image that sets the tone and will hopefully entice a potential reader to click on my post.

    I started delving into how my favourite developer site, dev.to, manages to create these really simple text-based share images dynamically. They have a pretty good setup, as they’ve somehow managed to generate a share image that neatly contains relevant post-related information, such as:

    • Post title
    • Date
    • Author
    • Related Tech Stack Icons

    For those who are as nosey as I am and want to know how dev.to achieves this, they have kindly written the following post - How dev.to dynamically generates social images.

    Since my website is built using the Gatsby framework, I prefer to use a local process to dynamically generate a social image without the need to rely on another third-party service. What's the point in using a third-party service to do everything for you when it’s more fun to build something yourself!

    I had envisaged implementing a process that will allow me to pass in the URL of my blog posts to a script, which in turn will render a social image containing basic information about a blog post.

    Intro Into Puppeteer

    Whilst doing some Googling, one tool kept cropping up in different forms and uses - Puppeteer. Puppeteer is a Node.js library maintained by Google Chrome’s development team and enables us to control any Chrome Dev-Tools based browser through scripts. These scripts can programmatically execute a variety of actions that you would generally do in a browser.

    To give you a bit of an insight into the actions Puppeteer can carry out, check out this GitHub repo. Here you can see Puppeteer being used for testing, scraping and automating tasks on web pages. It’s a very useful tool. The part I spent most of my time understanding was its webpage screenshot feature.

    To use Puppeteer, you will first need to install the library package, of which two options are available:

    • Puppeteer Core
    • Puppeteer

    Puppeteer Core is the lighter-weight package that can interact with any DevTools-based browser you already have installed.

    npm install puppeteer-core
    

    Then there is the full package, which also installs the most recent version of Chromium within the node_modules directory of your project.

    npm install puppeteer
    

    I opted for the full package just to ensure I have the most compatible version of Chromium for running Puppeteer.

    Puppeteer Webpage Screenshot Script

    Now that we have Puppeteer installed, I wrote a script and added it to the root of my Gatsby site. The script carries out the following:

    • Accepts a single argument containing the URL of a webpage. This will be the page containing information about my blog post in a share format - all will become clear in the next section.
    • Screenshot a cropped version of the webpage - in this case 840px x 420px, the exact size of my share image.
    • Use the page name in the URL as the image file name.
    • Store the screenshot in my "Social Share" media directory.

    const puppeteer = require('puppeteer');
    
    // If an argument is not provided containing a website URL, end the task.
    if (process.argv.length !== 3) {
      console.log("Please provide a single argument containing a website URL.");
      return;
    }
    
    const pageUrl = process.argv[2];
    
    const options = {
      path: `./static/media/Blog/Social Share/${pageUrl.substring(pageUrl.lastIndexOf('/') + 1)}.jpg`,
      fullPage: false,
      clip: {
        x: 0,
        y: 0,
        width: 840,
        height: 420
      }
    };
    
    (async () => {
      const browser = await puppeteer.launch({ headless: false });
      const page = await browser.newPage();
      await page.setViewport({ width: 1280, height: 800, deviceScaleFactor: 1.5 });
      await page.goto(pageUrl);
      await page.screenshot(options);
      await browser.close();
    })();
    

    The script can be run as so:

    node puppeteer-screenshot.js http://localhost:8000/socialcard/Blog/2020/07/25/Using-Instagram-API-To-Output-Profile-Photos-In-ASPNET-2020-Edition
    

    I made an addition to my Gatsby project that generated a social share page for every blog post where the URL path was prefixed with /socialcard. These share pages will only be generated when in development mode.
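
    As a rough sketch of that addition (the template path, slug field and GraphQL query below are assumptions based on my setup and will differ per project), the social card pages are registered in gatsby-node.js and guarded so they only exist in development mode:

    exports.createPages = async ({ graphql, actions }) => {
      const { createPage } = actions;
    
      // Only generate the /socialcard pages when running "gatsby develop".
      if (process.env.NODE_ENV !== "development") {
        return;
      }
    
      // Hypothetical query - node and field names will vary per project.
      const result = await graphql(`
        {
          allMarkdownRemark {
            nodes {
              fields {
                slug
              }
            }
          }
        }
      `);
    
      result.data.allMarkdownRemark.nodes.forEach((node) => {
        createPage({
          path: `/socialcard${node.fields.slug}`,
          component: require.resolve("./src/templates/social-card.js"),
          context: { slug: node.fields.slug },
        });
      });
    };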

    Social Share Page

    Now that we have our Puppeteer script, all that needs to be accomplished is to create a nice looking visual for Puppeteer to convert into an image. I wanted some form of automation where blog post information was automatically populated.

    I’m starting off with a very simple layout taking some inspiration from dev.to and outputting the following information:

    • Title
    • Date
    • Tags
    • Read time

    Working with HTML and CSS isn’t exactly my forte. Luckily for me, I just needed to do enough to make the share image look presentable.

    Social Card Page

    You can view the HTML and CSS on JSFiddle. Feel free to update and make it better! If you do make any improvements, update the JSFiddle and let me know!

    Next Steps

    I plan on adding some additional functionality allowing a blog post teaser image (if one is added) to be used as a background and make things look a little more interesting. At the moment the share image is very plain. As you can tell, I keep things really simple as design isn’t my strongest area. :-)

    If all goes to plan, when I share this post to Twitter you should see my newly generated share image.

  • The first thing that came into my head when testing the waters to start the process of moving over to Gatsby was my blog post content. If I could get my content in a form a Gatsby site accepts then that's half the battle won right there, the theory being it will simplify the build process.

    I opted to go down the local storage route, where Gatsby would serve markdown files for my blog post content. Everything else, such as the homepage, archive, about and contact pages, can be static. I am hoping this isn’t something I will live to regret, but I like the idea of my content being nicely preserved in source control, where I have full ownership without relying on a third-party platform.

    My site is currently built on the .NET framework using Kentico CMS. Exporting data is relatively straightforward, but as I transition away from a content-managed approach, I need to ensure all fields used within my blog posts are transformed appropriately into the core building blocks of my markdown files.

    A markdown file can carry additional field information about my post that can be declared at the start of the file, wrapped by triple dashes at the start and end of the block. This is called frontmatter.

    Here is a snippet of one of my blog posts exported to a markdown file:

    ---
    title: "Maldives and Vilamendhoo Island Resort"
    summary: "At Vilamendhoo Island Resort you are surrounded by serene beauty wherever you look. Judging by the serendipitous chain of events where the stars aligned, going to the Maldives has been a long time in the coming - I just didn’t know it."
    date: "2019-09-21T14:51:37Z"
    draft: false
    slug: "/Maldives-and-Vilamendhoo-Island-Resort"
    disqusId: "b08afeae-a825-446f-b448-8a9cae16f37a"
    teaserImage: "/media/Blog/Travel/VilamendhooSunset.jpg"
    socialImage: "/media/Blog/Travel/VilamendhooShoreline.jpg"
    categories: ["Surinders-Log"]
    tags: ["holiday", "maldives"]
    ---
    
    Writing about my holiday has started to become a bit of a tradition (for those that are worthy of such time and effort!) which seem to start when I went to [Bali last year](/Blog/2018/07/06/My-Time-At-Melia-Bali-Hotel). 
    I find it's a way to pass the time in airports and flights when making the return journey home. So here's another one...
    

    Everything looks well structured, and from the way I have formatted the date, category and tags fields, it should be quite accommodating for the needs of future posts. I made the decision to keep the slug value void of any directory structure to give me the flexibility to create a URL structure dynamically.

    Kentico Blog Posts to Markdown Exporter

    The quickest way to get the content out was to create a console app to carry out the following:

    1. Loop through all blog posts in post date descending.
    2. Update all images paths used as a teaser and within the content.
    3. Convert rich text into markdown.
    4. Construct frontmatter key-value fields.
    5. Output to a text file in the following naming convention: “yyyy-MM-dd---Post-Title.md”.

    Tasks 2 and 3 will require the most effort…

    When I first started using Kentico, all references to images were made directly via the file path, and as I got more familiar with Kentico, this was changed to use permanent URLs. Using permanent URLs caused the link to an image to change from "/Surinder/media/Surinder/myimage.jpg" to "/getmedia/27b68146-9f25-49c4-aced-ba378f33b4df/myimage.jpg?width=500". I needed to create additional checks to find these URLs and transform them into a new path.

    Finding a good .NET markdown converter is imperative. Without this, there is a high chance the rich text content would not be translated to a satisfactory standard, resulting in some form of manual intervention to carry out corrections. Combing through 250 posts manually isn’t my idea of fun! :-)

    I found the ReverseMarkdown .NET library offered enough options to deal with rich text to Markdown conversion. I could set the conversion process to pass through HTML that couldn’t be transformed, thus preserving content.

    Code

    using CMS.DataEngine;
    using CMS.DocumentEngine;
    using CMS.Helpers;
    using CMS.MediaLibrary;
    using Export.BlogPosts.Models;
    using ReverseMarkdown;
    using System;
    using System.Collections.Generic;
    using System.Configuration;
    using System.IO;
    using System.Linq;
    using System.Text;
    using System.Text.RegularExpressions;
    
    namespace Export.BlogPosts
    {
        class Program
        {
            public const string SiteName = "SurinderBhomra";
            public const string MarkdownFilesOutputPath = @"C:\Temp\BlogPosts\";
            public const string NewMediaBaseFolder = "/media";
            public const string CloudImageServiceUrl = "https://xxxx.cloudimg.io";
    
            static void Main(string[] args)
            {
                CMSApplication.Init();
    
                List<BlogPost> blogPosts = GetBlogPosts();
    
                if (blogPosts.Any())
                {
                    foreach (BlogPost bp in blogPosts)
                    {
                        bool isMDFileGenerated = CreateMDFile(bp);
    
                        Console.WriteLine($"{bp.PostDate:yyyy-MM-dd} - {bp.Title} - {(isMDFileGenerated ? "EXPORTED" : "FAILED")}");
                    }
    
                    Console.ReadLine();
                }
            }
    
            /// <summary>
            /// Retrieve all blog posts from Kentico.
            /// </summary>
            /// <returns></returns>
            private static List<BlogPost> GetBlogPosts()
            {
                List<BlogPost> posts = new List<BlogPost>();
    
                InfoDataSet<TreeNode> query = DocumentHelper.GetDocuments()
                                                   .OnSite(SiteName)
                                                   .Types("SurinderBhomra.BlogPost")
                                                   .Path("/Blog", PathTypeEnum.Children)
                                                   .Culture("en-GB")
                                                   .CombineWithDefaultCulture()
                                                   .NestingLevel(-1)
                                                   .Published()
                                                   .OrderBy("BlogPostDate DESC")
                                                   .TypedResult;
    
                if (!DataHelper.DataSourceIsEmpty(query))
                {
                    foreach (TreeNode blogPost in query)
                    {
                        posts.Add(new BlogPost
                        {
                            Guid = blogPost.NodeGUID.ToString(),
                            Title = blogPost.GetStringValue("BlogPostTitle", string.Empty),
                            Summary = blogPost.GetStringValue("BlogPostSummary", string.Empty),
                            Body = RichTextToMarkdown(blogPost.GetStringValue("BlogPostBody", string.Empty)),
                            PostDate = blogPost.GetDateTimeValue("BlogPostDate", DateTime.MinValue),
                            Slug = blogPost.NodeAlias,
                            DisqusId = blogPost.NodeGUID.ToString(),
                            Categories = blogPost.Categories.DisplayNames.Select(c => c.Value.ToString()).ToList(),
                            Tags = blogPost.DocumentTags.Replace("\"", string.Empty).Split(',').Select(t => t.Trim(' ')).Where(t => !string.IsNullOrEmpty(t)).ToList(),
                            SocialImage = GetMediaFilePath(blogPost.GetStringValue("ShareImageUrl", string.Empty)),
                            TeaserImage = GetMediaFilePath(blogPost.GetStringValue("BlogPostTeaser", string.Empty))
                        });
                    }
                }
    
                return posts;
            }
    
            /// <summary>
            /// Creates the markdown content based on Blog Post data.
            /// </summary>
            /// <param name="bp"></param>
            /// <returns></returns>
            private static string GenerateMDContent(BlogPost bp)
            {
                StringBuilder mdBuilder = new StringBuilder();
    
                #region Post Attributes
    
                mdBuilder.Append($"---{Environment.NewLine}");
                mdBuilder.Append($"title: \"{bp.Title.Replace("\"", "\\\"")}\"{Environment.NewLine}");
                mdBuilder.Append($"summary: \"{HTMLHelper.HTMLDecode(bp.Summary).Replace("\"", "\\\"")}\"{Environment.NewLine}");
                mdBuilder.Append($"date: \"{bp.PostDate.ToString("yyyy-MM-ddTHH:mm:ssZ")}\"{Environment.NewLine}");
                mdBuilder.Append($"draft: {bp.IsDraft.ToString().ToLower()}{Environment.NewLine}");
                mdBuilder.Append($"slug: \"/{bp.Slug}\"{Environment.NewLine}");
                mdBuilder.Append($"disqusId: \"{bp.DisqusId}\"{Environment.NewLine}");
                mdBuilder.Append($"teaserImage: \"{bp.TeaserImage}\"{Environment.NewLine}");
                mdBuilder.Append($"socialImage: \"{bp.SocialImage}\"{Environment.NewLine}");
    
                #region Categories
    
                if (bp.Categories?.Count > 0)
                {
                    CommaDelimitedStringCollection categoriesCommaDelimited = new CommaDelimitedStringCollection();
    
                    foreach (string categoryName in bp.Categories)
                        categoriesCommaDelimited.Add($"\"{categoryName}\"");
    
                    mdBuilder.Append($"categories: [{categoriesCommaDelimited.ToString()}]{Environment.NewLine}");
                }
    
                #endregion
    
                #region Tags
    
                if (bp.Tags?.Count > 0)
                {
                    CommaDelimitedStringCollection tagsCommaDelimited = new CommaDelimitedStringCollection();
    
                    foreach (string tagName in bp.Tags)
                        tagsCommaDelimited.Add($"\"{tagName}\"");
    
                    mdBuilder.Append($"tags: [{tagsCommaDelimited.ToString()}]{Environment.NewLine}");
                }
    
                #endregion
    
                mdBuilder.Append($"---{Environment.NewLine}{Environment.NewLine}");
    
                #endregion
    
                // Add blog post body content.
                mdBuilder.Append(bp.Body);
    
                return mdBuilder.ToString();
            }
    
            /// <summary>
            /// Creates files with a .md extension.
            /// </summary>
            /// <param name="bp"></param>
            /// <returns></returns>
            private static bool CreateMDFile(BlogPost bp)
            {
                string markdownContents = GenerateMDContent(bp);
    
                if (string.IsNullOrEmpty(markdownContents))
                    return false;
    
                string fileName = $"{bp.PostDate:yyyy-MM-dd}---{bp.Slug}.md";
                File.WriteAllText($@"{MarkdownFilesOutputPath}{fileName}", markdownContents);
    
                if (File.Exists($@"{MarkdownFilesOutputPath}{fileName}"))
                    return true;
    
                return false;
            }
    
            /// <summary>
            /// Gets the full relative path of an file based on its Permanent URL ID. 
            /// </summary>
            /// <param name="filePath"></param>
            /// <returns></returns>
            private static string GetMediaFilePath(string filePath)
            {
                if (filePath.Contains("getmedia"))
                {
                    // Get GUID from file path.
                    Match regexFileMatch = Regex.Match(filePath, @"(\{){0,1}[0-9a-fA-F]{8}\-[0-9a-fA-F]{4}\-[0-9a-fA-F]{4}\-[0-9a-fA-F]{4}\-[0-9a-fA-F]{12}(\}){0,1}");
    
                    if (regexFileMatch.Success)
                    {
                        MediaFileInfo mediaFile = MediaFileInfoProvider.GetMediaFileInfo(Guid.Parse(regexFileMatch.Value), SiteName);
    
                        if (mediaFile != null)
                            return $"{NewMediaBaseFolder}/{mediaFile.FilePath}";
                    }
                }
    
                // Return the file path and remove the base file path.
                return filePath.Replace("/SurinderBhomra/media/Surinder", NewMediaBaseFolder);
            }
    
            /// <summary>
            /// Convert parsed rich text value to markdown.
            /// </summary>
            /// <param name="richText"></param>
            /// <returns></returns>
            public static string RichTextToMarkdown(string richText)
            {
                if (!string.IsNullOrEmpty(richText))
                {
                    #region Loop through all images and correct the path
    
                    // Clean up tilda's.
                    richText = richText.Replace("~/", "/");
    
                    #region Transform Image Url's Using Width Parameter
    
                    Regex regexFileUrlWidth = new Regex(@"\/getmedia\/(\{{0,1}[0-9a-fA-F]{8}\-[0-9a-fA-F]{4}\-[0-9a-fA-F]{4}\-[0-9a-fA-F]{4}\-[0-9a-fA-F]{12}\}{0,1})\/([\w,\s-]+\.[A-Za-z]{3})(\?width=([0-9]*))", RegexOptions.Multiline | RegexOptions.IgnoreCase);
    
                    foreach (Match fileUrl in regexFileUrlWidth.Matches(richText))
                    {
                        string width = fileUrl.Groups[4] != null ? fileUrl.Groups[4].Value : string.Empty;
                        string newMediaUrl = $"{CloudImageServiceUrl}/width/{width}/n/https://www.surinderbhomra.com{GetMediaFilePath(ClearQueryStrings(fileUrl.Value))}";
    
                        if (newMediaUrl != string.Empty)
                            richText = richText.Replace(fileUrl.Value, newMediaUrl);
                    }
    
                    #endregion
    
                    #region Transform Generic File Url's
    
                    Regex regexGenericFileUrl = new Regex(@"\/getmedia\/(\{{0,1}[0-9a-fA-F]{8}\-[0-9a-fA-F]{4}\-[0-9a-fA-F]{4}\-[0-9a-fA-F]{4}\-[0-9a-fA-F]{12}\}{0,1})\/([\w,\s-]+\.[A-Za-z]{3})", RegexOptions.Multiline | RegexOptions.IgnoreCase);
    
                    foreach (Match fileUrl in regexGenericFileUrl.Matches(richText))
                    {
                        // Construct media URL required by image hosting company - CloudImage. 
                        string newMediaUrl = $"{CloudImageServiceUrl}/cdno/n/n/https://www.surinderbhomra.com{GetMediaFilePath(ClearQueryStrings(fileUrl.Value))}";
    
                        if (newMediaUrl != string.Empty)
                            richText = richText.Replace(fileUrl.Value, newMediaUrl);
                    }
    
                    #endregion
    
                    #endregion
    
                    Config config = new Config
                    {
                        UnknownTags = Config.UnknownTagsOption.PassThrough, // Include the unknown tag completely in the result (default as well)
                        GithubFlavored = true, // generate GitHub flavoured markdown, supported for BR, PRE and table tags
                        RemoveComments = true, // will ignore all comments
                        SmartHrefHandling = true // remove markdown output for links where appropriate
                    };
    
                    Converter markdownConverter = new Converter(config);
    
                    return markdownConverter.Convert(richText).Replace(@"[!\", @"[!").Replace(@"\]", @"]");
                }
    
                return string.Empty;
            }
    
            /// <summary>
            /// Returns media url without query string values.
            /// </summary>
            /// <param name="mediaUrl"></param>
            /// <returns></returns>
            private static string ClearQueryStrings(string mediaUrl)
            {
                if (mediaUrl == null)
                    return string.Empty;
    
                if (mediaUrl.Contains("?"))
                    mediaUrl = mediaUrl.Split('?').ToList()[0];
    
                return mediaUrl.Replace("~", string.Empty);
            }
        }
    }
    

    There is a lot going on here, so let's do a quick breakdown:

    1. GetBlogPosts(): Get all blog posts from Kentico and parse them to a “BlogPost” class object containing all the fields we want to export.
    2. GetMediaFilePath(): Takes the image path and carries out all the transformations required to change it to a new file path. This method is used in the GetBlogPosts() and RichTextToMarkdown() methods.
    3. RichTextToMarkdown(): Takes rich text and goes through a transformation process to relink images in a format that will be accepted by my image hosting provider - Cloud Image. In addition, this is where ReverseMarkdown is used to finally convert to markdown.
    4. CreateMDFile(): Creates the .md file based on the blog posts found in Kentico.

  • My Fujifilm X100F camera only comes out of hibernation when I go on holiday. Most of the time, I fail to ensure my camera settings are correct before I take the very first snap. This happened on my last holiday to Loch Lomond.

    When it came to the job of carrying out some image processing from RAW to JPEG, I noticed all of my photos' EXIF dates were incorrect. I am such a stickler for correct EXIF information, including geolocation wherever possible. EXIF information is so useful for cataloguing when consumed by photo applications, whether on my Synology or uploaded to Google Photos.

    Due to the high number of photos with incorrect date stamps, I needed a tool that will automate the correction process. After a bit of Googling, I found an application called exiftool by Phil Harvey that allows the EXIF date/time stamp to be modified using a method in the documentation called “Shift”.

    The exiftool has no GUI (graphical user interface) and will need to be run in Terminal on a Mac or command line for Windows users. The command to use is relatively simple and the only complex thing you will have to do is calculate how many days, months, years, hours, minutes and seconds you need to add or subtract.

    In my case, the calculation was a matter of subtracting 3 days from all the photos and the command to do this looks like the following:

    exiftool -AllDates-='0:0:3 0:0:0' -m /Volumes/LochLomond
    

    Let's break down the command to get a better understanding of what each part does.

    • exiftool: Runs the application. You have to ensure that your Terminal/command line is run in the same directory where exiftool is housed.
    • AllDates: Modifies all dates in a photo.
    • -='0:0:3 0:0:0': Subtracts 3 days from the photos' EXIF date. If you wanted to add 3 days, use "+=" instead. The date/time format is presented as "<year>:<month>:<day> <hours>:<minute>:<second>" - see the example after this list.
    • -m: Ignore minor errors and warnings (as stated in the documentation).
    • /Volumes/LochLomond: The location where all the photos reside.
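
    For instance, if the dates instead needed to be pushed forward by 3 days and 2 hours, the same command (same folder assumed) would look like this:

    exiftool -AllDates+='0:0:3 2:0:0' -m /Volumes/LochLomond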

    When making mass changes to files, it’s always recommended to ensure you have a backup of all photos to fall back on if you accidentally mess up the EXIF update.