Blog

Blogging on programming and life in general.

  • I’ll be the first to admit that I very rarely (if at all!) assign a nice pretty share image to any post that gets shared on social networks. Maybe it’s because I hardly post what I write to social media in the first place! :-) Nevertheless, this isn’t the right attitude. If I am really going to do this, then the whole process needs to be quick and render a share image that sets the tone and will hopefully entice a potential reader to click on my post.

    I started delving into how my favourite developer site, dev.to, manages to create these really simple text-based share images dynamically. They have a pretty good setup, as they’ve somehow managed to generate a share image that perfectly contains relevant post-related information, such as:

    • Post title
    • Date
    • Author
    • Related Tech Stack Icons

    For those who are as nosey as I am and want to know how dev.to undertakes such functionality, they have kindly written the following post - How dev.to dynamically generates social images.

    Since my website is built using the Gatsby framework, I prefer to use a local process to dynamically generate a social image without the need to rely on another third-party service. What's the point in using a third-party service to do everything for you when it’s more fun to build something yourself!

    I had envisaged implementing a process that would allow me to pass the URL of one of my blog posts to a script, which in turn would render a social image containing basic information about that post.

    Intro Into Puppeteer

    Whilst doing some Googling, one tool kept cropping up in different forms and uses - Puppeteer. Puppeteer is a Node.js library maintained by Google Chrome’s development team that enables us to control any Chrome DevTools-based browser through scripts. These scripts can programmatically execute a variety of actions that you would generally do manually in a browser.

    To give you a bit of an insight into the actions Puppeteer can carry out, check out this Github repo. Here you can see Puppeteer is a very useful tool for testing, scraping and automating tasks on web pages. The part I spent most of my time understanding was its webpage screenshot feature.

    To use Puppeteer, you will first need to install the library package in which two options are available:

    • Puppeteer Core
    • Puppeteer

    Puppeteer Core is the lighter-weight package that can interact with any DevTools-based browser you already have installed.

    npm install puppeteer-core
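
    One thing to note is that Puppeteer Core doesn’t download a browser for you, so when launching you must point it at an existing Chrome/Chromium install - a minimal sketch, where the executable path is purely illustrative:

    const puppeteer = require('puppeteer-core');
    
    (async () => {
      // puppeteer-core does not bundle Chromium, so supply the path to a
      // browser you already have installed on your machine.
      const browser = await puppeteer.launch({
        executablePath: '/usr/bin/google-chrome',
      });
      await browser.close();
    })();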
    

    Alternatively, there is the full package that also installs the most recent version of Chromium within the node_modules directory of your project.

    npm install puppeteer
    

    I opted for the full package just to ensure I have the most compatible version of Chromium for running Puppeteer.

    Puppeteer Webpage Screenshot Script

    Now that we have Puppeteer installed, I wrote a script and added it to the root of my Gatsby site. The script carries out the following:

    • Accepts a single argument containing the URL of a webpage. This will be the page containing information about my blog post in a share format - all will become clear in the next section.
    • Screenshots a cropped version of the webpage - in this case, 840px x 420px, the exact size of my share image.
    • Uses the page name in the URL as the image file name.
    • Stores the screenshot in my "Social Share" media directory.
    const puppeteer = require('puppeteer');
    
    // If an argument is not provided containing a website URL, end the task.
    if (process.argv.length !== 3) {
      console.log("Please provide a single argument containing a website URL.");
      return;
    }
    
    const pageUrl = process.argv[2];
    
    // Name the image after the last segment of the URL and crop the
    // screenshot to the exact dimensions of the share image.
    const options = {
      path: `./static/media/Blog/Social Share/${pageUrl.substring(pageUrl.lastIndexOf('/') + 1)}.jpg`,
      fullPage: false,
      clip: {
        x: 0,
        y: 0,
        width: 840,
        height: 420
      }
    };
    
    (async () => {
      const browser = await puppeteer.launch({ headless: false });
      const page = await browser.newPage();
      // A higher deviceScaleFactor results in a sharper screenshot.
      await page.setViewport({ width: 1280, height: 800, deviceScaleFactor: 1.5 });
      await page.goto(pageUrl);
      await page.screenshot(options);
      await browser.close();
    })();
    

    The script can be run like so:

    node puppeteer-screenshot.js http://localhost:8000/socialcard/Blog/2020/07/25/Using-Instagram-API-To-Output-Profile-Photos-In-ASPNET-2020-Edition
    

    I made an addition to my Gatsby project that generated a social share page for every blog post where the URL path was prefixed with /socialcard. These share pages will only be generated when in development mode.
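
    To give an idea of how that could be wired up, below is a minimal sketch of the gatsby-node.js page creation - the GraphQL fields, slug structure and template path are illustrative and will differ depending on your own schema:

    // gatsby-node.js - a rough sketch; adjust the query and template to your schema.
    const path = require('path');
    
    exports.createPages = async ({ graphql, actions }) => {
      const { createPage } = actions;
    
      // The /socialcard pages exist purely for Puppeteer to screenshot,
      // so only generate them during local development.
      if (process.env.NODE_ENV !== 'development') {
        return;
      }
    
      const result = await graphql(`
        {
          allMarkdownRemark {
            edges {
              node {
                fields {
                  slug
                }
              }
            }
          }
        }
      `);
    
      result.data.allMarkdownRemark.edges.forEach(({ node }) => {
        createPage({
          path: `/socialcard${node.fields.slug}`,
          component: path.resolve('./src/templates/social-card.js'),
          context: { slug: node.fields.slug },
        });
      });
    };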

    Social Share Page

    Now that we have our Puppeteer script, all that needs to be accomplished is to create a nice looking visual for Puppeteer to convert into an image. I wanted the blog post information to be populated automatically.

    I’m starting off with a very simple layout taking some inspiration from dev.to and outputting the following information:

    • Title
    • Date
    • Tags
    • Read time

    Working with HTML and CSS isn’t exactly my forte. Luckily for me, I just needed to do enough to make the share image look presentable.

    Social Card Page

    You can view the HTML and CSS on JSFiddle. Feel free to update and make it better! If you do make any improvements, update the JSFiddle and let me know!

    Next Steps

    I plan on adding some additional functionality allowing a blog post teaser image (if one is added) to be used as a background and make things look a little more interesting. At the moment the share image is very plain. As you can tell, I keep things really simple as design isn’t my strongest area. :-)

    If all goes to plan, when I share this post to Twitter you should see my newly generated share image.

  • I thought I should write a quick post for those who may also experience a lost internet connection on a Sony TV (running Android OS), as my parents encountered a couple of weeks ago. My parents have had their TV for a few years now and never experienced connection issues... unless the Internet was truly down.

    The TV is connected to the internet modem via an ethernet cable. Even though the network status was marked as "connected", there was no Internet connection. Other household devices were connected to the Internet successfully, which confirmed the issue lay solely with the Sony TV. Luckily, after a lot of fumbling around the settings and a lot of Googling, there was in fact a simple solution requiring no technical knowledge.

    The issue occurs when the date-time is incorrect. This needs to be corrected by carrying out the following steps:

    1. Press the Home button on the remote control.
    2. Select Settings (cog icon) found on the top right.
    3. Go to System Settings and select Date and Time.
    4. In Date and Time, set Automatic date & time to Use network time.
    5. Carry out a hard reboot by pressing and holding down the Power button on the remote.

    I’d also recommend checking if there are any OS updates at the same time just to see if Sony has released any fixes for the issue. At the time of writing, it doesn’t look like this issue has been resolved. I can confirm I checked for any outstanding updates to which there were none.

    Even now I don’t understand how the date-time on the TV shifted out of sync. This shouldn’t happen again as we have now set the date-time to be set automatically via the network.

    So why are there Internet issues if the date-time isn't correct on a device? Ensuring the correct time on a device is more important than you might think. If the clock on a device diverges too far from the correct time (more than a few hours), requests made by the Operating System and applications that depend on Internet-based services and authorisation will be rejected - not least because SSL certificates are only considered valid within a specific time window. As a result, some applications will stop functioning or, on a wider scale, the connection to the Internet is dropped in its entirety.

  • Don’t you love the good ol’ days, back when querying APIs provided by social platforms was a straightforward process and it felt like there were fewer barriers to entry? This thought came to mind when I had to use Instagram’s API feed again recently to output a list of photos for one of my personal projects.

    The last time I carried out any interaction with Instagram's API was back in 2013, when it was a much simpler affair. All that was required was to generate an access token that never expired, which could then be used against any API endpoint to get data back from Instagram.

    Reading Jamie Maguire's really useful three-part blog post on tapping into Instagram's API gave me a good foundation into how the API has changed since I last used it. However, the example he used required the profile you wanted to interact with to be set up as a business user. Interacting with Instagram’s API requires a lot more effort if you (like most users) do not have a business profile.

    As far as I understand it, if you plan on making any interaction with Instagram's API as a non-business user, the process is:

    1. Create Facebook Developer App
    2. Authenticate application by logging into the Instagram account. This will then generate a short-lived token valid for 1 hour.
    3. Exchange the short-lived token for a long-lived token that is valid for 60 days. This token will need to be stored somewhere at the application level.
    4. Ensure the long-lived token is refreshed before it expires within the 60-day window.

    It is very important that we use the long-lived token and continually renew it by having some form of background process carry out this check, whether at application or Azure level. We can then use this token to make queries to Instagram API endpoints.

    In this post, I am going to perform a very simple integration to output a list of images from an Instagram profile. By demonstrating this, we should get a fair idea of how to interact with the other API endpoints. Personally, setting up the process to acquire an access token is the part that requires the most effort. So let's get to it!

    Create a Facebook Developer Application

    Since Facebook has taken over Instagram, naturally the application process starts within Facebook's developer site, which can be found at: https://developers.facebook.com/apps/. Once you have logged in using your Facebook credentials, the following steps will need to be carried out:

    • Select "Add New App". In the popup, enter the application name and contact email.
    • You will be presented with a list of products. Select "Instagram" by pressing the "Setup" button.
    • From the left-hand navigation, go to: Settings > Basic. Scroll to the bottom and click on the "Add platform" button. From the popup, select "Website".
    • Enter the website URL. For testing, this will be the URL we set up in the previous section. Ensure this URL is prefixed with https://. Click the "Save" button.
    • Again, from the left-hand navigation (under Products > Instagram), select: "Basic Display". At the bottom of the page, click the "Create New App" button.
    • Enter the following fields (based on our test domain):
      • Valid OAuth Redirect URIs: https://myinstagramapp.surinderbhomra.com/Instagram/Auth
      • Deauthorize Callback URL: https://myinstagramapp.surinderbhomra.com
      • Data Deletion Requests: https://myinstagramapp.surinderbhomra.com
    • Add an Instagram Test user. This can be the client's Instagram profile name.
    • Add instagram_graph_user_media permission, so we can read the profile images.
    • Click the "Save Changes" button.

    You will have noticed I have added website URLs for the OAuth Redirect, Deauthorize Callback and Data Deletion Request fields. As these fields are required, you can enter a dummy website URL for local development purposes. Just remember to change this when you move the application to the live domain. In the meantime, to utilise those dummy URLs, your local hosts file will need to be updated. Please refer to a post I wrote in 2012 for further information. It might be over 8 years old, but the process is the same even if the Facebook Developer interface has changed.

    Once the Developer Application has been set up, grab the App ID and App Secret to be used in our demo application.
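
    For reference, the helper classes that follow read the App ID, App Secret, site domain and redirect path from appSettings. A sketch of the keys used throughout this post - the values here are placeholders:

    <appSettings>
      <!-- Placeholders - use the App ID and App Secret from your Facebook Developer app. -->
      <add key="Instagram.AppID" value="your-app-id" />
      <add key="Instagram.ClientID" value="your-app-id" />
      <add key="Instagram.AppSecret" value="your-app-secret" />
      <add key="Site.Domain" value="https://myinstagramapp.surinderbhomra.com" />
      <add key="Instagram.AuthRedirectPath" value="/Instagram/Auth" />
    </appSettings>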

    Photo Feed Application

    The Photo Feed application will provide a very simple demonstration of the authentication process and interacting with the API to get back our media objects. From here, you will have the tools to improve and expand on this to delve deeper into other API endpoints Instagram has to offer.

    To start, we have two helper classes that the application will rely on:

    1. InstagramAuthProvider
    2. InstagramMediaProvider

    InstagramAuthProvider

    The InstagramAuthProvider carries out all authentication processes for acquiring both short and long-lived tokens.

    public class InstagramAuthProvider
    {
        #region Json Response Objects
    
        public class AuthenticateRequest
        {
            [JsonProperty("error_type")]
            public string ErrorType { get; set; }
    
            [JsonProperty("code")]
            public int StatusCode { get; set; }
    
            [JsonProperty("error_message")]
            public string ErrorMessage { get; set; }
    
            [JsonProperty("access_token")]
            public string AccessToken { get; set; }
    
            [JsonProperty("user_id")]
            public long UserId { get; set; }
        }
    
        public class LongLivedTokenRequest
        {
            [JsonProperty("access_token")]
            public string AccessToken { get; set; }
    
            [JsonProperty("token_type")]
            public string TokenType { get; set; }
    
            [JsonProperty("expires_in")]
            public long ExpiresInSeconds { get; set; }
        }
    
        #endregion
    
        /// <summary>
    /// Carries out the initial authentication step after the user has approved the app's link to their Instagram account.
        /// Returns a short-lived token valid for 1 hour.
        /// </summary>
        /// <param name="code"></param>
        /// <returns></returns>
        public static async Task<AuthenticateRequest> GetAccessTokenAsync(string code)
        {
            string authResponse = string.Empty;
    
            if (!string.IsNullOrEmpty(code))
            {
                Dictionary<string, string> parameters = new Dictionary<string, string>
                    {
                        { "client_id", ConfigurationManager.AppSettings["Instagram.ClientID"].ToString() },
                        { "client_secret", ConfigurationManager.AppSettings["Instagram.AppSecret"].ToString() },
                        { "grant_type", "authorization_code" },
                        { "redirect_uri", $"{ConfigurationManager.AppSettings[“Site.Domain"]}{ConfigurationManager.AppSettings["Instagram.AuthRedirectPath"]}" },
                        { "code", code }
                    };
    
                FormUrlEncodedContent encodedParameters = new FormUrlEncodedContent(parameters);
    
                HttpClient client = new HttpClient();
    
                HttpResponseMessage response = await client.PostAsync("https://api.instagram.com/oauth/access_token", encodedParameters);
                authResponse = await response.Content.ReadAsStringAsync();
            }
    
            return JsonConvert.DeserializeObject<AuthenticateRequest>(authResponse);
        }
    
        /// <summary>
    /// Exchanges a short-lived token for a long-lived token that is valid for 60 days.
        /// </summary>
        /// <param name="shortliveAccessToken"></param>
        /// <returns></returns>
        public static async Task<LongLivedTokenRequest> GetLongLifeTokenAsync(string shortliveAccessToken)
        {
            string authResponse = string.Empty;
    
            if (!string.IsNullOrEmpty(shortliveAccessToken))
            {
                HttpClient client = new HttpClient();
    
                HttpResponseMessage response = await client.GetAsync($"https://graph.instagram.com/access_token?client_secret={ConfigurationManager.AppSettings["Instagram.AppSecret"].ToString()}&grant_type=ig_exchange_token&access_token={shortliveAccessToken}");
                authResponse = await response.Content.ReadAsStringAsync();
            }
    
            return JsonConvert.DeserializeObject<LongLivedTokenRequest>(authResponse);
        }
    
        /// <summary>
        /// Refresh a long-lived Instagram User Access Token that is at least 24 hours old but has not expired.
        /// </summary>
        /// <param name="longLivedAccessToken"></param>
        /// <returns></returns>
        public static async Task<LongLivedTokenRequest> RefreshTokenAsync(string longLivedAccessToken)
        {
            string authResponse = string.Empty;
    
            if (!string.IsNullOrEmpty(longLivedAccessToken))
            {
                HttpClient client = new HttpClient();
    
                HttpResponseMessage response = await client.GetAsync($"https://graph.instagram.com/refresh_access_token?grant_type=ig_refresh_token&access_token={longLivedAccessToken}");
                authResponse = await response.Content.ReadAsStringAsync();
            }
    
            return JsonConvert.DeserializeObject<LongLivedTokenRequest>(authResponse);
        }
    }
    

    InstagramMediaProvider

    The InstagramMediaProvider returns media information based on the authentication token.

    public class InstagramMediaProvider
    {
        #region Json Response Objects
    
        public class MediaCollection
        {
            [JsonProperty("data")]
            public List<MediaInfo> Data { get; set; }
        }
    
        public class MediaInfo
        {
            [JsonProperty("id")]
            public string Id { get; set; }
    
            [JsonProperty("caption")]
            public string Caption { get; set; }
    
            [JsonProperty("permalink")]
            public string InstagramUrl { get; set; }
    
            [JsonProperty("media_type")]
            public string Type { get; set; }
    
            [JsonProperty("thumbnail_url")]
            public string VideoThumbnailUrl { get; set; }
    
            [JsonProperty("media_url")]
            public string Url { get; set; }
        }
    
        #endregion
    
        private string _accessToken;
    
        public InstagramMediaProvider(string accessToken)
        {
            _accessToken = accessToken;
        }
    
        /// <summary>
        /// Gets list of all user images.
        /// </summary>
        /// <returns></returns>
        public async Task<List<MediaInfo>> GetUserMedia()
        {
            var mediaInfo = await GetAllMediaAsync();
    
        if (mediaInfo?.Data?.Count > 0)
                return mediaInfo.Data;
            else
                return new List<MediaInfo>();
        }
    
        /// <summary>
    /// Returns a collection of all media items for the authenticated user.
        /// </summary>
        /// <returns></returns>
        private async Task<MediaCollection> GetAllMediaAsync()
        {
            string mediaResponse = string.Empty;
    
            HttpClient client = new HttpClient();
    
            HttpResponseMessage response = await client.GetAsync($"https://graph.instagram.com/me/media?fields=id,media_type,media_url,thumbnail_url,permalink,caption,timestamp&access_token={_accessToken}");
    
            mediaResponse = await response.Content.ReadAsStringAsync();
    
            if (response.StatusCode != HttpStatusCode.OK)
                return null;
    
            return JsonConvert.DeserializeObject<MediaCollection>(mediaResponse);
        }
    }
    

    Authorisation and Authentication

    The authorisation and authentication functionality will be performed in the InstagramController.

    public class InstagramController : Controller
    {
         /// <summary>
         /// Authorises application with Instagram.
         /// </summary>
         /// <returns></returns>
        public ActionResult Authorise()
        {
            return Redirect($"https://www.instagram.com/oauth/authorize?client_id={ConfigurationManager.AppSettings["Instagram.AppID"].ToString()}&redirect_uri={ConfigurationManager.AppSettings[“Site.Domain"]}{ConfigurationManager.AppSettings["Instagram.AuthRedirectPath"]}&scope=user_profile,user_media&response_type=code");
        }
    
        /// <summary>
        /// Makes authentication request to create access token.
        /// </summary>
        /// <param name="code"></param>
        /// <returns></returns>
        public async Task<ActionResult> Auth(string code)
        {
            InstagramAuthProvider.AuthenticateRequest instaAuth = await InstagramAuthProvider.GetAccessTokenAsync(code);
    
            if (!string.IsNullOrEmpty(instaAuth?.AccessToken))
            {
                InstagramAuthProvider.LongLivedTokenRequest longTokenRequest = await InstagramAuthProvider.GetLongLifeTokenAsync(instaAuth.AccessToken);
    
                if (!string.IsNullOrEmpty(longTokenRequest?.AccessToken))
            {
                // Storing the long-lived token in a session for demo purposes.
                // Store the token in a more permanent place, such as a database.
                Session["InstagramAccessToken"] = longTokenRequest.AccessToken;
    
                    return Redirect("/");
                }
            }
    
            return Content("Error authenticating.");
        }
    }
    

    Authorise

    Before we can get any of our tokens, the first step is to get authorisation from Instagram against our web application. So somewhere in the application (preferably not publicly visible), we will need an area that will kick this off. In this case, navigating to /Instagram/Authorise will cause the Authorise action in the controller to be fired.

    All the Authorise action does is take you to Instagram's login page and send over the App ID and Redirect Path. Remember, the Redirect Path needs to be exactly as you've set it in your Facebook Developer Application. Once you have successfully logged in, you’ll be redirected back to the application.

    NOTE: I’ll be honest here and say I am not sure if there is a better way to acquire the access token as it seems very odd to me that you have to log in to Instagram first. If any of you know of a better way, please leave a comment.

    Auth

    If the authorise process was successful, you will be redirected back to the application and given an authorisation code. The authorisation code is then passed to the InstagramAuthProvider.GetAccessTokenAsync() method to be exchanged for our first access token - a short-lived token valid for 1 hour.

    The last step of the process is to send the short-lived access token to the InstagramAuthProvider.GetLongLifeTokenAsync() method, which will carry out a final exchange to retrieve our long-lived token valid for 60 days. It is this token we need to store somewhere so we can use it against any Instagram API endpoint.

    In my example, the long-lived token is stored in a Session for demonstration purposes. In a real-world application, we would want to store this in a database and have a scheduled task in place that calls the InstagramAuthProvider.RefreshTokenAsync() method every 59 days so the long-lived token is renewed for another 60 days.
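
    A minimal sketch of what such a scheduled job could look like - TokenStore here is a hypothetical persistence layer standing in for your own database access:

    /// <summary>
    /// Hypothetical job run on a schedule (e.g. daily) to keep the
    /// long-lived token fresh ahead of its 60-day expiry.
    /// </summary>
    public class InstagramTokenRefreshJob
    {
        public async Task RunAsync()
        {
            // TokenStore is a stand-in for your own persistence layer.
            string currentToken = TokenStore.GetLongLivedToken();
    
            InstagramAuthProvider.LongLivedTokenRequest refreshed =
                await InstagramAuthProvider.RefreshTokenAsync(currentToken);
    
            if (!string.IsNullOrEmpty(refreshed?.AccessToken))
                TokenStore.SaveLongLivedToken(refreshed.AccessToken);
        }
    }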

    Photo Feed

    Now we come onto the easy part - using the long-lived access token to return a list of photos.

    public class InstagramMediaController : Controller
    {
        /// <summary>
        /// Outputs all user images from Instagram profile.
        /// </summary>
        /// <returns></returns>
        [OutputCache(Duration = 60)]
        public PartialViewResult PhotoGallery()
        {
            if (Session["InstagramAccessToken"] != null)
            {
                InstagramMediaProvider instaMedia = new InstagramMediaProvider(Session["InstagramAccessToken"].ToString());
    
                // MVC 5 child actions cannot be async, so the media call is executed synchronously here.
                return PartialView("_PhotoGallery", Task.Run(() => instaMedia.GetUserMedia()).Result);
            }
    
            return PartialView("_PhotoGallery", new List<InstagramMediaProvider.MediaInfo>());
        }
    }
    

    This controller contains a single piece of functionality - a PhotoGallery partial view. The PhotoGallery partial view uses the InstagramMediaProvider.GetUserMedia() method to return a collection of profile photos.
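
    The partial view itself isn't shown in this post, but a minimal sketch of what _PhotoGallery.cshtml could look like - the markup is purely illustrative:

    @model List<InstagramMediaProvider.MediaInfo>
    
    <div class="photo-gallery">
        @foreach (InstagramMediaProvider.MediaInfo media in Model)
        {
            <a href="@media.InstagramUrl" target="_blank" rel="noopener noreferrer">
                @* Videos expose a separate thumbnail URL. *@
                <img src="@(media.Type == "VIDEO" ? media.VideoThumbnailUrl : media.Url)" alt="@media.Caption" />
            </a>
        }
    </div>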

    Final Thoughts

    Before carrying out any Instagram integration, think about what you are trying to achieve. Look through the documentation to ensure the APIs you require are available, as some permissions and features may require business or individual verification to access live data.

    Also, you may (or may not) have noticed whilst going through the Facebook Developer Application setup that I never submitted the application for review. I would advise you to always submit your application to Facebook, as you might otherwise find your access is revoked. For personal purposes where you're only outputting your own Instagram profile information, you could just leave it in "In development" mode. But again, I do not advise this, especially when working with Business accounts.

  • Ever since I’ve been forced to work from home over the last 3 months, I noticed in the first few weeks of the coronavirus lockdown that my network performance was subpar. Not ideal when a stable internet connection is one's only gateway to the outside world and enables you to work from home.

    My current network setup is quite simple and consists of:

    • ISP router - set to modem mode
    • Billion 7800DXL wireless router
    • Synology 4-bay NAS
    • Wireless Access Point
    • Wireless Security Camera

    The bottleneck out of the whole setup is the Billion router. It’s not a basic router by any means and it has served me well since I last upgraded my network back in 2014. After being in use 24/7 over the last 6 years, signs of wear were starting to show. My internet connection would just randomly drop or wind to a halt. Carrying out a factory reset did not resolve the connection stability. The next step was to check for the latest firmware, but it seems that Billion has a really short firmware release cycle - not something you’d expect for a router costing just under £200. The last firmware was released in 2015, which I had already installed.

    I had to make a decision: either waste time faffing around with the Billion router or look for a replacement. I decided on the latter.

    UniFi Dream Machine Router

    UniFi Dream Machine Boxed

    I knew straight away what router I wanted to purchase - the UniFi Dream Machine by Ubiquiti. I was sold on the name alone!

    Ubiquiti are known for making high-quality network solutions that are suitable for consumers and businesses alike. You can start off with a small setup based on your infrastructure needs, knowing at a later point (if needed) you have the option to purchase additional hardware and upscale your network. From the reviews I’ve read online, the company really makes nice hardware that seamlessly integrates with one another as part of the Ubiquiti eco-system.

    I was so tempted to go overkill on my new network set up just so I could do some additional tinkering, but the Dream Machine Router provides all the functionality I need and more.

    Form Factor

    The Dream Machine isn’t like any other router I’ve purchased previously, where the form factor has been a boring horizontal slab with two or three antennae poking out - a piece of hardware I would always hide away in my cabinet. The Dream Machine is a nice looking piece of kit and even comes across as very Apple-esque. It can stand proud and upright in full view for all to see. Plus it has a really cool blue ring light.

    It’s definitely heavier than any of my previous routers, which isn’t surprising given the amount of tech crammed into this oversized mint tic tac, containing:

    • ARM Cortex Processor
    • Cooling Fan
    • 4 Port Gigabit Switch
    • Integrated Wireless Antenna 2.4/5GHz

    Setup

    I love the fact that from the moment you take the router out of the box and connect it to the mains, you can literally get online with everything set up in no longer than 10 minutes, all from within the UniFi Network Controller mobile app. I don’t think I’ve ever been so excited to go through setting up a network device before. The mobile app makes things so simple that even my parents wouldn’t have a problem carrying out the setup steps. I think I spent more time thinking about what I should call my wireless network. :-)

    I won’t go into too much detail here on the setup steps, but they consist of the following:

    • Find and connect to the device via Bluetooth.
    • Create a UI.com account, or login using your existing credentials.
    • Set auto-optimise settings.
    • Setup Wi-Fi network.
    • Set a firmware update schedule.
    • Perform network speed test to dial in with the speeds provided by your ISP.

    Once those steps are carried out, you just need to let the device go through its configuration process. Once complete, you can join the network wirelessly.

    If in the future, you decide to expand your network with additional UniFi devices, the setup process will be the same.

    “Prosumer” Configuration/Monitoring

    For a device that costs around £300, don’t think for one second the UniFi mobile app is the only route to making configuration changes. The mobile app is a protective bubble for the standard consumer who just wants a secure and reliable wireless setup without being too exposed to the inner workings. If I didn’t have a home NAS and didn’t feel the need to control how certain wireless access points could connect to devices, the mobile app would have more than sufficed.

    UniFi Network Controller App Home Screen

    Phew! The UniFi Network Controller App is telling me "Everything is great!".

    I get a real kick out of seeing the vast array of network analytics and seeing how my internet usage has increased since working from home. You have at your disposal overall statistics on hardware performance, internet speed, threat maps, and device and application usage, to name a few.

    UniFi Traffic Stats

    If, like me, you require more control over your network, this can be done by logging into the web interface, which is just as intuitive as the mobile app. Here I was able to configure port forwarding, network groups, the firewall and a guest network. Trust me, there are a tonne more configuration options you can change really easily, making you feel like a network pro!

    UniFi Web Interface

    Security

    In addition to wanting a more stable and reliable network device, security also played a big factor in the reason why I purchased a Ubiquiti device.

    Unlike all the routers I’ve had in the past that probably only received 2-3 updates in their lifetime, Ubiquiti has turned that on its head. Just looking at their software release page, it’s a hive of activity. Up-to-date firmware enhances the longevity of the device by fixing any possible vulnerabilities as well as ensuring the device continues to function at its optimum level. To ensure you are always running the most up-to-date firmware, Ubiquiti have made the process very easy - the device will automatically install newly released firmware based on your set schedule.

    There is also an option to enable Threat Management (currently in beta) that will protect your network from attacks, malware and malicious activity. Does this feature slow down the incoming traffic? The answer is no. The device has a whopping 850Mbps throughput limit. Amazing!

    Conclusion

    I purchased the Ubiquiti Dream Machine a couple of weeks into the start of the Covid-19 lockdown and I’m happy to report my network is speedier than I could’ve hoped for. This is something you immediately notice when performing large file downloads/uploads. In fact, my parents log onto my Synology remotely and even they have noted an improvement.

    I’ll admit, even after a lot of research before making the purchase, I was still questioning whether spending such a large amount on a wireless router was worth it. But this concern was soon quashed knowing I have a piece of hardware that is more future-proof than what its competitors are currently offering and can later tie into a larger network architecture when needed.

    By buying this router you’ll be living the network dream!

  • I’ve been doing a little research into how I can make posts written in markdown more suited to my needs and decided to use this opportunity to develop my own Gatsby Markdown plugin. Ever since I moved to Gatsby, making my own Markdown plugin has been on my todo list, as I wanted to see how I could render slightly different HTML markup based on the requirements of my blog post content.

    As this is my first markdown plugin, I thought it best to start small and tackle a bugbear of mine - making external links automatically open in a new window. From what I have seen online, some suggest just adding an HTML anchor tag to the markdown, as you will then have the ability to apply all the attributes you’d want - including target. I’ll quote my previous post about aligning images in markdown on why I’m not keen on mixing HTML with markdown:

    HTML can be mingled alongside the markdown syntax... I wouldn't recommend this from a maintainability perspective. Markdown is platform-agnostic so your content is not tied to a specific platform. By adding HTML to markdown, you're instantly sacrificing the portability of your content.

    Setup

    We will need to create a local Gatsby plugin, which I’ve named gatsby-remark-auto-link-new-window. Ugly name... maybe you can come up with something more imaginative. :-)

    To register this to your Gatsby project, you will need to start off with the following:

    • Creating a plugin folder at the root of your project (if one hasn’t been created already).
    • Add a new folder based on the name of our plugin, in this case - /plugins/gatsby-remark-auto-link-new-window.
    • Every plugin consists of two files:
      • index.js - where the plugin code to carry out our functionality will reside.
      • package.json - contains the details of our plugin, such as name, description, dependencies etc. For the moment, this can just contain an empty JSON object {}. If we were to publish our plugin, this will need to be completed in its entirety (a minimal example follows this list).
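
    For reference, a minimal package.json for a local plugin might look like this (the description is purely illustrative):

    {
      "name": "gatsby-remark-auto-link-new-window",
      "version": "1.0.0",
      "description": "Opens external markdown links in a new window."
    }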

    Now that we have our bare-bones structure, we need to register our local plugin by adding a reference to the gatsby-config.js file. Since this is a plugin for transforming markdown, the reference will be added inside the gatsby-transformer-remark options:

    {
      // ...
      resolve: 'gatsby-transformer-remark',
      options: {
        plugins: [
          {
            resolve: 'gatsby-remark-embed-gist',
            options: {
              username: 'SurinderBhomra',
            },
          },
          {
            resolve: 'gatsby-remark-auto-link-new-window',
            options: {},
          },
          'gatsby-remark-prismjs',
        ],
      },
      // ...
    }
    

    For the moment, I’ve left the options field empty as we currently have no settings to pass to our plugin. This is something I will show in another blog post.

    To make sure we have registered our plugin with no errors, we need to run our build using the gatsby clean && gatsby develop command. This command will always need to be run after every change made to the plugin. By adding gatsby clean, we ensure the build clears out all the previously built files prior to the next build process.

    Rewriting Links In Markdown

    As the plugin is relatively straight-forward, let’s go straight into the code that will be added to our index.js file.

    const visit = require("unist-util-visit");
    
    module.exports = ({ markdownAST }) => {
      // Visit every link node within the markdown AST.
      visit(markdownAST, "link", (node) => {
        // Only links pointing at an absolute http(s) URL are transformed.
        if (node.url.startsWith("http")) {
          node.data = node.data || {};
          node.data.hProperties = node.data.hProperties || {};
          node.data.hProperties.target = "_blank";
          node.data.hProperties.rel = "noopener noreferrer";
        }
      });
      return markdownAST;
    };
    

    As you can see, I target all markdown link nodes and, depending on the contents of the url property, perform a custom transformation. If the url property starts with "http", the "target" and "rel" attributes are added using hProperties. hProperties refers to the HTML attributes of the targeted node.
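
    To illustrate, a markdown link such as [Example](https://example.com) would now render as:

    <a href="https://example.com" target="_blank" rel="noopener noreferrer">Example</a>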

    To see the changes take effect, we will need to re-run gatsby clean && gatsby develop.

    Now that we have understood the basics, we can beef up our plugin by adding some more functionality, such as plugin options. But that's for another post.

  • Aligning Images In Markdown

    Every post on this site has been written in markdown since successfully moving over to GatsbyJS. Overall, the transition has been painless and I have found that writing blog posts using the markdown syntax is a lot more efficient than using a conventional WYSIWYG editor. I never noticed until making the move to markdown how fiddly those editors were, as you sometimes needed to clean the generated markup at HTML level.

    Of course, all the efficiency of markdown does come at a minor cost in terms of flexibility. Out of the minor limitations, there was one I couldn't let pass. I needed to find a way to position images left, right and centre, as the majority of my previous posts have been formatted in this way. When going through the conversion process from HTML to markdown, all my posts were somewhat messed up and images were rendered at 100% width.

    HTML can be mingled alongside the markdown syntax, so I do have an option to use the image tag and append styling. I wouldn't recommend this from a maintainability perspective. Markdown is platform-agnostic so your content is not tied to a specific platform. By adding HTML to markdown, you're instantly sacrificing the portability of your content.

    I found a more suitable approach would be to handle the image positioning by appending a hash value to the end of the image URL - for example, #left, #right, or #center. At CSS level, we can target the src attribute of the image and position it, along with any additional styling, based on the hash value. Very neat!

    img[src*='#left'] {
      float: left;
      margin: 10px 10px 10px 0;
    }
    
    img[src*='#center'] {
      display: block;
      margin: 0 auto;
    }
    
    img[src*='#right'] {
      float: right;
      margin: 10px 0 10px 10px;
    }
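
    With those styles in place, positioning an image in a post is as simple as appending the fragment to the image path (the path here is illustrative):

    ![My image](/media/Blog/my-image.jpg#left)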
    

    Being someone who doesn’t delve into front-end coding techniques as much as I used to, I am amazed at the type of things you can do within CSS. I’ve obviously come late to the more advanced CSS selectors party.

  • I decided to write this post to act primarily as a reminder to myself for when I'm publishing an ASP.NET Core project ready for a production environment. Most of the ASP.NET Core projects I'm currently working on are based on pre-existing client or platform-based boilerplates, which vary in quality, and as a result, some key project settings are just not implemented.

    I will be covering the following areas:

    • Ensuring the correct environment variable is set for your publish profile.
    • Setting custom error pages.
    • Switching between development and production appsettings.json files.

    Setting Environment In Publish Profile

    After you have created the publish profile, update the .pubxml file (found under the "/Properties/PublishProfiles" directory within your project) and add an EnvironmentName variable:

    <PropertyGroup>
        <EnvironmentName>Production</EnvironmentName>
    </PropertyGroup>
    

    This variable is very much key to the whole operation. Without it, the project will be stuck in development mode and the sections listed below will not work.

    Setting Custom Error Pages

    We are only interested in seeing a custom error page when in production mode. To do this, we need to:

    1. Update the Startup.cs file to enable status code error pages.
    2. Create an error controller to render the custom error pages.

    Startup.cs

    To serve our custom error page, we need to declare the route using the app.UseStatusCodePagesWithReExecute() method. This method includes a placeholder, {0}, which will be replaced with the status code integer - 404, 500, etc. We can then render different views depending on the error code returned. For example:

    • /Error/404
    • /Error/500
    public void Configure(IApplicationBuilder app, IWebHostEnvironment env)
    {
        // Render full blown exception only if in development mode.
        if (env.IsDevelopment())
        {
            app.UseDeveloperExceptionPage();
        }
        else
        {
            app.UseStatusCodePagesWithReExecute("/Error/{0}");
            app.UseHsts();
        }
    }
    

    Error Controller

    Based on the status code returned, different views can be rendered.

    public class ErrorController : Controller
    {
        [ResponseCache(Duration = 0, Location = ResponseCacheLocation.None, NoStore = true)]
        [Route("/Error/{statusCode}")]
        public ViewResult Status(int statusCode)
        {
            if (statusCode == StatusCodes.Status404NotFound)
            {
                return View("~/Views/Error/NotFound.cshtml");
            }
            else
            {
                return View("~/Views/Error/GeneralError.cshtml");
            }
        }
    }
    

    Web.config

    Being a .NET Core project, there is one area that is easily overlooked as we're so focused on the Startup.cs and appsettings.json files - the web.config. We need to ensure the environment variable is set here too by adding the following:

    <environmentVariables>
        ...
        <environmentVariable name="ASPNETCORE_ENVIRONMENT" value="Production" />
        ...
    </environmentVariables>
    

    If the "ASPNETCORE_ENVIRONMENT" value isn't set correctly at this point, this will cause issues/inconsistencies globally.

    Switching To appsettings.production.json

    You've probably noticed that your ASP.NET Core project contains three appsettings.json files - one for each environment:

    • appsettings.json
    • appsettings.development.json
    • appsettings.production.json
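
    Settings in the environment-specific file override matching keys in the base appsettings.json when that environment is active. As a purely illustrative example, appsettings.production.json only needs to contain the keys that differ from the defaults:

    {
      "ConnectionStrings": {
        "Default": "Server=prod-server;Database=MyApp;Trusted_Connection=True;"
      }
    }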

    If your ASP.NET Core project version is less than 3.0, you can switch between each appsettings.json file by adding the following code to your Startup.cs file:

    public Startup(IHostingEnvironment env)
    {
        // Build the configuration in the Startup constructor so the correct
        // appsettings file is loaded for the current environment.
        IConfigurationBuilder configBuilder = new ConfigurationBuilder()
            .SetBasePath(env.ContentRootPath)
            .AddJsonFile("appsettings.json", optional: true, reloadOnChange: true)
            .AddJsonFile($"appsettings.{env.EnvironmentName}.json", optional: true)
            .AddEnvironmentVariables();
    
        Configuration = configBuilder.Build();
    }
    

    However, if running on ASP.NET Core 3.0+, this is handled for you by the Host.CreateDefaultBuilder(args) method found in the Program.cs file.

    public class Program
    {
        public static void Main(string[] args)
        {
            CreateHostBuilder(args).Build().Run();
        }
    
        public static IHostBuilder CreateHostBuilder(string[] args) =>
            Host.CreateDefaultBuilder(args)
                .UseContentRoot(Directory.GetCurrentDirectory())
                .ConfigureWebHostDefaults(webBuilder =>
                {
                    webBuilder.UseStartup<Startup>();
                });
    }
    

    The CreateDefaultBuilder performs the following environment-related tasks (to name a few):

    • Sets the content root to the path returned by Directory.GetCurrentDirectory().
    • Loads host configuration from environment variables prefixed with ASPNETCORE_ (for example, ASPNETCORE_ENVIRONMENT).
    • Loads application configuration settings in the following order, starting from appsettings.json and then appsettings.{Environment}.json.

    As you can see, from ASP.NET Core 3.0 onwards, quite a lot is being done for you from such a minimal amount of code.

  • I’ve had this blog since 2007, when I had a bright idea to make a small mark on the internet. Back then, there weren’t as many platforms to easily and freely contribute one's technical thoughts in writing as there are now. All you really had were forums and a handful of other sites in the likes of 4GuysFromRolla.com, C# Corner and LearnASP.com (to name a few that come to mind). Now, I could be wrong about the accuracy of this opening statement, as 2007 was a long time ago - back in much simpler times when I was a Junior Web Developer.

    We have now come to a point where we’re spoilt for choice. There are multiple mediums where you have the freedom to publish your own technical writing in a more public and accessible way, the main ones being:

    • Medium
    • Dev.to
    • Hashnode.com
    • LinkedIn Articles

    At present, I post some of my writing to Medium, whether that is a clone of my own blog content or new material specifically for the platform. However, I’m now rethinking where I should be publishing my content, as I am seeing more of my fellow developers on Twitter posting content to dev.to when they previously used Medium.

    I really like dev.to and found its approach to curating content for the developer community refreshing - it makes for very addictive reading and you can really see the passion in the writers. Should I start cross-posting there as well for more exposure? How does this affect things from an SEO standpoint when I have the same post on my blog as well as on Medium and dev.to? All I know is that anything I cross-post from my blog to Medium gets ranked higher in Google search results, which is to be expected.

    If I’m being honest with myself, I like posting content where I’m another small cog within a wider community, as there is a higher chance of like-minded individuals gaining access to your content and, in the process, getting involved by commenting and imparting their knowledge on your written piece. You can’t help but feel rewarded when your article gets a like, clap or comment, which in return makes you want to do the same for other contributors. This doesn’t really happen on a personal website.

    When you are posting on another platform, you don’t have to worry about technical issues or hosting. The only thing you need to do is write! But you have to remember that you’re writing on a platform that is not your own, and any future changes will be out of your control.

    As great as these other writing platforms are, you are restricted in really seeing the developer's personality, which is something that speaks volumes when viewing their personal website. There, they present their content in their own unique way and, most importantly, write freely about things that may otherwise not be within the parameters of a third-party platform.

    Final Thoughts

    As I’ve noted a shift in the number of technical posts being published to dev.to, I will more than likely do the same and cross-post any relevant content from my personal site. You can’t help but feel it’s the best place to get exposure for programming-related content. Having said this, I still feel there is space for me to also cross-post to Medium. But what I won’t do is cross-post the same content to both - that feels counter-intuitive. Use the most appropriate platform that has the highest chance of targeting the readers based on the subject matter at hand.

    I don’t think I could ever stop writing on my own site, as I like the freedom of expression - no strings attached. I can write about whatever I want, and if there happens to be a post I’d like to also publish to the likes of either Medium or dev.to, I have the flexibility to do that as well.

  • If you’re seeing this post, then this means I have fully made the transition to a static-generated website architecture using GatsbyJS. I started this process late December last year and then began taking it seriously in the new year. It’s been a learning process getting to grips with a new framework, as well as a big jump for me and my site.

    Why has it been a big jump?

    Everything is static. I have downsized my website footprint massively. All 250+ blog posts have been migrated into markdown files, so from now on I will be writing in markdown and (with the help of Netlify) pushing new content with a simple git commit. Until now, I have always had a website that used server-side frameworks that stored all my posts in a database. It’s quite scary moving to a framework that feels quite unnatural to how I would normally build sites, and the word “static”, when used in relation to a website, reminds me of a bygone era.

    Process of Moving To Netlify

    I was pleasantly surprised by how easy the transition to Netlify was. There is a vast amount of resources available that make for good reading before making the switch to live. After linking my website's Bitbucket repository to a Netlify site, the only things left to do to make it live were the following:

    • Upload a _redirects file, listing out any redirects you require Netlify to handle. For GatsbyJS sites, this will need to be added to the /static directory.
    • Setup Environment variables to allow the application to easily switch between development and production states. For example, my robots.txt is set to be indexable only when in production mode (see the sketch after this list).
    • Add CNAME records to your existing domain that point to your Netlify domain. For example, surindersite.netlify.com.
    • Issue a free Let’s Encrypt SSL certificate, which is easily done within the account Domain settings.
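
    As an illustration of the environment switching mentioned above, here is roughly how the robots.txt behaviour can be configured, assuming the gatsby-plugin-robots-txt plugin is being used (the CONTEXT environment variable is provided by Netlify per deploy):

    // gatsby-config.js - a rough sketch assuming gatsby-plugin-robots-txt.
    {
      resolve: 'gatsby-plugin-robots-txt',
      options: {
        resolveEnv: () => process.env.CONTEXT || 'development',
        env: {
          // Allow indexing only on production deploys.
          production: {
            policy: [{ userAgent: '*', allow: '/' }],
          },
          development: {
            policy: [{ userAgent: '*', disallow: ['/'] }],
          },
        },
      },
    }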

    Post live, the only thing that stumped me was that the Netlify domain didn’t automatically redirect to my custom domain. This is something I thought Netlify would handle automatically once the domain records were updated. To get around this, an explicit domain 301 redirect needs to be added to your _redirects file.

    # Domain Redirect
    https://surinderbhomra.netlify.com/*     https://www.surinderbhomra.com/:splat    301!
    

    New Publishing Process

    Before making the switchover, I had to carry out some practice runs on how I would be updating my website just to be sure I could live with the new way of adding content. The process is now the following:

    1. Use the “content/posts” branch to add a new blog post.
    2. Create a new .md file named using the date and slug. In my case, all my markdown files are named like "2010-04-02---My-New-Post.md".
    3. Ensure all categories and tags in the markdown frontmatter are named correctly. This is an important step to ensure no unnecessary new categories or tags are created.
    4. Add any images used in the post to the site. The images should reference Imagekit.io.
    5. Check over the post locally.
    6. Push to the master branch and let Netlify carry out the rest.

    Out of all the steps, I have only found steps 3 and 4 to require a little effort when compared to using a CMS platform, as previously, I could select from a predefined list of categories and upload images directly. Not a deal-breaker.

    Next Steps

    I had a tight deadline to ensure I made the move to Netlify before my current hosting renewed for another year, and I still have quite a bit of improvement to make. Have you seen my Google Lighthouse score!?! It’s shockingly bad due to using the same HTML markup and CSS from my old site. I focused my efforts on cramming in all the functionality to mimic how my site used to work and on keeping build times on Netlify low.

    First thing on the list - rebuild website templates from the ground up.

  • Yesterday, I was frantically trying to debug why some documents weren’t getting returned when using the DocumentHelper.GetDocuments() method. Normally when this happens, I need to delve deeper to see what SQL Kentico is generating via the API in order to get a little more information on where the querying could be going wrong. To do this, I perform a little “hacky” approach (purely for debugging) whereby I break the SQL within the API by inserting a random character within the OrderBy or Where condition parameters.

    Voila! I can then see the SQL in the error page.

    But it was only yesterday that I was shown a much more elegant solution: simply add a GetFullQueryText() call to your GetDocuments() query, which then returns the SQL with all the parameters populated for you:

    string debugQuery = DocumentHelper.GetDocuments()
                                      .OnSite(SiteContext.CurrentSiteName)
                                      .Types(DocumentTypeHelper.GetClassNames(TreeProvider.ALL_CLASSNAMES))
                                      .Path("/", PathTypeEnum.Children)
                                      .Culture(LocalizationContext.PreferredCultureCode)
                                      .OrderBy("NodeLevel ASC", "NodeOrder ASC")
                                      .GetFullQueryText();
    

    I can’t believe I did not know this after so many years working on Kentico! How embarrassing...