Blog

Tagged by 'gatsbyjs'

  • I've been using the gatsby-plugin-smoothscroll plugin in the majority of my GatsbyJS builds to provide a nice smooth scrolling effect to an HTML element on a page. Unfortunately, it lacked the ability to offset the scroll-to position, which is useful when a site has a fixed header or navigation.

    I decided to take the gatsby-plugin-smoothscroll plugin and simplify it so that it would not require a dependency on polyfilled smooth scrolling as this is native to most modern browsers. The plugin just contains a helper function that can be added to any onClick event with or without an offset parameter.
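
    To give an idea of what such a helper involves, below is a minimal sketch of a smoothScrollTo function that uses the browser's native scrolling - not necessarily the plugin's exact source, but the same underlying idea:

    // A minimal sketch - not necessarily the plugin's exact implementation.
    const smoothScrollTo = (selector, offset = 0) => {
      const element = document.querySelector(selector);
      if (!element) return;

      // Work out the element's position in the document and subtract the offset,
      // which stops a fixed header from covering the scroll target.
      const top = element.getBoundingClientRect().top + window.pageYOffset - offset;
      window.scrollTo({ top, behavior: "smooth" });
    };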

    Usage

    The plugin contains a smoothScrollTo helper function that can be imported onto the page:

    // This could be in your `pages/index.js` file.
    
    import smoothScrollTo from "gatsby-plugin-smoothscroll-offset";
    

    The smoothScrollTo function can then be used within an onClick event handler:

    {/* Without offset */}
    <button onClick={() => smoothScrollTo("#some-id")}>My link without offset</button>

    {/* With offset of 80px */}
    <button onClick={() => smoothScrollTo("#some-id", 80)}>My link with offset</button>
    

    Demo

    A demonstration of the plugin in use can be found by navigating to my Blog Archive page and clicking on any of the category links.

    Prior to this plugin, the category list header would be covered by the sticky navigation.

    Smooth Scrolling without Offset

    Now that an offset of 80px can be set, the category list header is visible.

    Smooth Scrolling with Offset

    Links

  • Disqus is a popular commenting system used on many websites and seems to be the go-to for bloggers who want to add some form of engagement from their readers. I’ve used Disqus ever since I had a blog and never experienced any problems even throughout the different iterations this site has gone through over the years.

    No complaints here, for a free service that encompasses both function and form.

    Ever since I redeveloped my site in July, I've attempted to make page performance a top priority and put pages on a strict diet by removing unnecessary third-party plugins. Even though I was fully aware that Disqus adds bloat, I just assumed it was a necessary evil. However, I felt I really had to do something after reading a blog post by Victor Zhou on the reasons why he decided to move away from Disqus. The reasons are pretty damning.

    Disqus increases both the page load requests and weight. I can confirm these findings myself. On average, Disqus was adding around 1.6MB-2MB of additional bloat to my blog pages. This was the case even when a blog post had no comments. As a result, the following Google Lighthouse scores took a little beating:

    • Performance: 82/100
    • Best Practices: 75/100

    Pretty bad when you take into consideration that most of the pages on my site consist of text and nothing overly complex.

    Migrating to another commenting provider as Victor Zhou had done could be an option. There are other options I've noticed my fellow bloggers use, such as:

    Each one of these options has its pros and cons, whether from a pricing or feature standpoint. I decided to remain with Disqus for the moment, as migrating comments is another task I don't currently have time to perform. I would be content to keep Disqus if I could find a way to negate the page bloat by lazy-loading it on demand.

    I've seen other Disqus users going down the lazy-loading approach within their builds, but couldn't find anything specifically for a Gatsby JS site. Luckily, the solution is quite straightforward and requires minimal code changes.

    Code

    The GatsbyJS gatsby-plugin-disqus plugin makes it easy to integrate Disqus functionality. All that needs to be done is to add the following component to the page:

    // ...
    let disqusConfig = {
        url: `${site.siteMetadata.siteUrl + postUrl}`,
        identifier: postId,
        title: postTitle,
    }
    // ...
    <Disqus config={disqusConfig} />
    // ...
    

    The only way to add lazyload functionality to this plugin is by controlling when it should be rendered. I decided to render Disqus through a simple button click.

    import React, { useState } from 'react';
    import { Disqus } from 'gatsby-plugin-disqus';
    
    const DisqusComponent = ({ postId, postTitle, postUrl }) => {
        const [disqusIsVisible, setDisqusVisibility] = useState(false);
    
        // Set Disqus visibility state on click.
        const showCommentsClick = event => {
          setDisqusVisibility(true);
        };
    
        let disqusConfig = {
            url: postUrl,
            identifier: postId,
            title: postTitle,
        }
    
        return (
          <>
            {!disqusIsVisible && (
              <div>
                <button onClick={showCommentsClick}>Load Comments</button>
              </div>
            )}
            {disqusIsVisible && (
              <Disqus config={disqusConfig} />
            )}
          </>
        )
    }
    
    export default DisqusComponent;
    

    The code above is an excerpt from a React component I place within a blog post page. React state is used to set the visibility via the showCommentsClick() onClick function. When this event is fired, two things happen:

    1. The "Load Comments" button disappears.
    2. Disqus comments are rendered.

    We can confirm the lazy-loading capability works by looking at the "Network" tab in Chrome Developer Tools. You probably won't notice the page speed improvement from delaying the load of Disqus, but within the "Network" tab you'll see a lower number of requests on page load.

    Disqus Lazy-Loading Demo

    Conclusion

    Changing the way Disqus loads on a webpage may come across as a little pedantic and an insignificant performance improvement. I believe that where performance savings can be made, they should be made. Since rolling out the Disqus update across all pages based on the approach discussed here, the Google Lighthouse scores have increased to:

    • Performance: 95/100
    • Best Practices: 92/100

    For the first time, my website has a Google Lighthouse score ranging between 95 and 100 across all testing criteria.

    Conclusion - Part 2

    As I neared the end of writing this post, I just happened to come across another Disqus plugin - disqus-react - that another blogger, Janosh, wrote about. This is the officially supported React plugin written by the Disqus team and contains lazy-load functionality:

    All Disqus components are lazy-loaded, meaning they won’t negatively impact the load times of your posts (unless someone is specifically following a link to one of the comments in which case it takes a little longer than on a purely static site).

    Could this really be true? Was this post written in vain?

    Janosh had stated he is using this plugin on his website and out of curiosity, I decided to download the disqus-react git repo and run the code examples locally to see how Disqus gets loaded onto the page.

    I ran Google Lighthouse and checked Chrome's "Network" tab, and after running numerous tests, no lazy-loading functionality was present. I could see Disqus JS files and avatar images being served on page load. I even bulked up the blog post body content to ensure Disqus was nowhere in view - maybe the component would only load when in view? This made no difference.

    Unless anyone else can provide any further insight, I will be sticking to my current implementation.

  • ActiveCampaign is a comprehensive marketing tool that helps businesses automate their email marketing strategies and create targeted campaigns. If the tracking code is used, visitors can be tracked to understand how they interact with your content and curate targeted email campaigns for them.

    I recently registered for a free account to test the waters in converting readers of my blog posts into subscribers, building a list of contacts I can email when I have published new content. For this website, I thought I'd create a Contact Form that allows a user to submit a query as well as being added to a mailing list in the process.

    ActiveCampaign provides all the tools to easily create a form, along with multiple integration options, such as:

    • Simple JavaScript embed
    • Full embed with generated HTML and CSS
    • Link to form
    • WordPress
    • Facebook

    As great as these out-of-the-box options are, we have no control over how our form should look or function within our website. For my use, the Contact Form should use custom markup, styling, validation and a custom submission process.

    Step 1: Creating A Form

    The first step is to create our form within ActiveCampaign using the form builder. This can be found by navigating to Website > Forms section. When the "Create a form" button is clicked, a popup will appear that will give us options on the type of form we would like to create. Select "Inline Form" and the contact list you would like the form to send the registrations to.

    My form is built up based on the following fields:

    • Full Name (Standard Field)
    • Email
    • Description (Account Field)

    ActiveCampaign Form Builder

    As we will be creating a custom-built form later, we don't need to worry about anything from a copy perspective, such as the heading, field labels or placeholder text.

    Next, we need to click on the "Integrate" button on the top right and then the "Save and exit" button. We are skipping the form integration step as this is of no use to us.

    Step 2: Key Areas of An ActiveCampaign Form

    There are two key areas of an ActiveCampaign form we will need to acquire for our custom form to function:

    1. Post URL
    2. Form Fields

    To get this information, we need to view the HTML code of our ActiveCampaign Contact form. This can be done by going back to the forms section (Website > Forms section) and selecting "Preview", which will open up our form in a new window to view.

    ActiveCampaign Form Preview

    In the preview window, open up your browser Web Inspector and inspect the form markup. Web Inspector has to be used rather than the conventional "View Page Source" as the form is rendered client-side.

    ActiveCampaign Form Code

    Post URL

    The <form /> tag contains a POST action (highlighted in red) that is in the following format: https://myaccount.activehosted.com/proc.php. This URL will be needed for our custom-built form to allow us to send values to ActiveCampaign.

    Form Fields

    An ActiveCampaign form consists of hidden fields (highlighted in green) and traditional input fields (highlighted in purple) based on the structure of the form we created. We need to take note of the attribute names and values when we make requests from our custom form.
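
    To make that concrete, below is a hedged illustration of the kind of markup you would see in the Web Inspector. The hidden field names are the same ones that reappear in the custom form later in this post; the values and visible fields shown here are placeholders based on my form, and the real rendered markup will contain more fields and attributes:

    <form method="POST" action="https://myaccount.activehosted.com/proc.php">
        <!-- Hidden fields identifying the account, form and subscription action -->
        <input type="hidden" name="u" value="4" />
        <input type="hidden" name="f" value="4" />
        <input type="hidden" name="act" value="sub" />
        <!-- Traditional input fields based on the form structure created in the builder -->
        <input type="text" name="fullname" placeholder="Full Name" />
        <input type="email" name="email" placeholder="Email" />
        <textarea name="ca[1][v]" placeholder="Description"></textarea>
        <button type="submit">Submit</button>
    </form>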

    Step 3: Build Custom Form

    Now that we have the key building blocks for what an ActiveCampaign form uses, we can get to the good part and delve straight into the code.

    import React, { useState } from 'react';
    import { useForm } from "react-hook-form";
    
    export function App(props) {
      const { register, handleSubmit, formState: { errors } } = useForm();
        const [state, setState] = useState({
            isSubmitted: false,
            isError: false
          });    
    
        const onSubmit = (data) => {
            const formData = new FormData();
    
            // Hidden field key/values.
            formData.append("u", "4");
            formData.append("f", "4");
            formData.append("s", "s");
            formData.append("c", "0");
            formData.append("m", "0");
            formData.append("act", "sub");
            formData.append("v", "2");
            formData.append("or", "c0c3bf12af7fa3ad55cceb047972db9");
    
            // Form field key/values.
            formData.append("fullname", data.fullname);
            formData.append("email", data.email);
            formData.append("ca[1][v]", data.contactmessage);
            
            // Pass FormData values into a POST request to ActiveCampaign.
            // Mark form submission successful, otherwise return error state. 
            fetch('https://myaccount.activehosted.com/proc.php', {
                method: 'POST',
                body: formData,
                mode: 'no-cors',
            })
            .then(response => {
                setState({
                    isSubmitted: true,
                });
            })
            .catch(err => {
                setState({
                    isError: true,
                });
            });
        }
    
      return (
        <div>
            {!state.isSubmitted ? 
                <form onSubmit={handleSubmit(onSubmit)}>
                    <fieldset>
                        <legend>Contact</legend>
                        <div>
                            <div>
                                <div>
                                    <label htmlFor="fullname">Name</label>
                                    <input id="fullname" name="fullname" placeholder="Type your name" className={errors.fullname ? "c-form__textbox error" : "c-form__textbox"} {...register("fullname", { required: true })} />
                                    {errors.fullname && <div className="validation--error"><p>Please enter your name</p></div>}
                                </div>
                            </div>
                            <div>
                                <div>
                                    <label htmlFor="email">Email</label>
                                    <input id="email" name="email" placeholder="Email" className={errors.contactemail ? "c-form__textbox error" : "c-form__textbox"} {...register("email", { required: true, pattern: /^[a-z0-9._%+-]+@[a-z0-9.-]+\.[a-z]{2,4}$/ })} />
                                    {errors.email && <div className="validation--error"><p>Please enter a valid email</p></div>}
                                </div>
                            </div>
                            <div>
                                <div>
                                    <label htmlFor="contactmessage">Message</label>
                                    <textarea id="contactmessage" name="contactmessage" placeholder="Message" className={errors.contactmessage ? "c-form__textarea error" : "c-form__textarea"} {...register("contactmessage", { required: true })}></textarea>
                                    {errors.contactmessage && <div className="validation--error"><p>Please enter your message</p></div>}
                                </div>
                            </div>
                            <div>
                                <input type="submit" value="Submit" />
                            </div>
                        </div>
                    </fieldset>
                    {state.isError ? <p>Unfortunately, your submission could not be sent. Please try again later.</p> : null}    
                </form>
                : <p>Thank you for your message. I will be in touch shortly.</p>}
        </div>
      );
    }
    

    The form uses FormData to store all hidden field and text input values. You'll notice the exact same naming conventions as those we saw when viewing the source code of the ActiveCampaign form.

    All fields need to be filled in, and a package called react-hook-form is used to perform validation and output error messages for any field left empty. If an error is encountered on form submission, an error message is displayed; otherwise, the form is replaced with a success message.

    Demo

    ActiveCampaign Custom Form Demo

    We will see Obi-Wan Kenobi's entry added to ActiveCampaign's Contact list for our test submission.

    ActiveCampaign Contact List

    Conclusion

    In this post, we have demonstrated how a form is created within ActiveCampaign and looked at the key areas the created form consists of in order to develop a custom implementation using GatsbyJS or React.

    Now all I need to do is work on the front-end HTML markup and add this functionality to my own Contact page.

  • As I have been delving deeper into adding more functionality to my Gatsby site within the Netlify eco-system, it only seemed natural that I should install the CLI to make development faster and easier, and to test builds locally before releasing them to my Netlify site. There have been times when I have added a new feature to my site only to find it breaks during the build process, eating up those precious build minutes.

    One thing I found missing from the Netlify CLI documentation was the steps for running a site locally - in my case, a Gatsby JS site. The first time I ran the netlify dev command, I was greeted by an empty browser window served under http://localhost:8888.

    There were a couple of steps I was missing to test my site within a locally run Netlify setup.

    1) Build Site

    The Gatsby site needs to be compiled so all HTML, CSS and JavaScript files are generated as physical files on your machine. When the following command is run, all files will be generated within the /public folder of your project:

    gatsby build
    

    The build command creates a version of your site with production-ready optimisations by packaging up your site's configuration and data and creating all the static HTML pages. Unlike the serve command, the build command does not let you view the site once the build has completed - only files are generated, which is exactly what we need.

    2) Run Netlify Dev Command From Build Directory

    Now that we have a built version of the site generated locally within the /public folder, we need to run the Netlify Dev command against this directory by running the following:

    netlify dev -dir public
    

    As you can see, the dir flag is used to run our site from where the compiled site files reside. I originally had the misconception that the Netlify Dev command would build my Gatsby site as well, when in fact it does not.
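
    To avoid forgetting the build step, the two commands can simply be chained together - purely a convenience rather than a Netlify requirement:

    gatsby build && netlify dev -dir public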

    Conclusion

    If you have a site hosted by Netlify, using the CLI is highly recommended as it provides that extra step in ensuring any updates can be tested prior to deployment. My site uses Netlify features such as redirects and plugins, which I can now test locally instead of going down the previously inefficient route of:

    1. Deploying changes to Netlify.
    2. Waiting for the build process to complete.
    3. Testing changes within the preview site.
    4. If all is good, publishing the site; if not, resolving the error and deploying again.

    This endless cycle of development hell is now avoided thanks to the safety net the Netlify CLI provides.

    Further Reading

  • If you haven't noticed (and I hope you have), back in June I finally released an update to my website to make it more pleasing to the eye. This has been a long time coming after being on the back-burner for a few years.

    Embarrassingly, I’ve always stated in my many year-in-review posts that I planned on redeveloping this site over the coming year, but it never came to fruition. This is partly down to time and deciding to make content a priority. If I’m honest, it’s mostly down to lacking the skills and patience to carry out the front-end development work.

    Thankfully, I managed to knuckle down and learnt enough HTML and CSS to get the site to where it currently stands, with the help of Tailwind CSS and an open-source base template acting as a good starting point for a novice front-end developer.

    Tailwind CSS

    Very early on, I knew the only hope I had of giving this site a new look was to use a front-end framework like Tailwind CSS, which requires a minimal learning curve to produce quick results. It’s definitely not a front-end framework to be sniffed at, as more than 260,000 developers have used it for their design systems. So it’s a framework that is here to stay - a worthwhile investment to learn.

    Tailwind CSS is predominantly a CSS framework consisting of predefined classes to build websites directly within the markup without having to write a single line of custom CSS.
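
    As a purely illustrative example (these utility classes are standard Tailwind, not taken from this site's markup), a styled button can be produced without leaving the markup:

    <button className="px-4 py-2 rounded bg-blue-600 text-white font-semibold hover:bg-blue-700">
      Subscribe
    </button>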

    As you’re styling directly within the markup, at first glance it can be overwhelming, especially where multiple classes need to be declared on a single HTML block. A vast difference compared to the cleanliness of the builds carried out by the very skilful team where I work.

    It’s a small trade-off in an otherwise solid framework that gives substantial benefits in productivity, primarily because Tailwind CSS classes aren’t very specific and give a high level of customisability without you having to concoct your own CSS styles.

    Even though there are many utility classes to get acquainted with, once you have an understanding of the core concepts, front-end builds become less of an uphill battle. Through rebuilding my site, I quite quickly became familiar with creating different layouts based on viewport size and modifying margins and padding.

    I found it to be a very modular and component-driven framework, helping avoid repetition. There are UI kits on the market that give good examples of the power of Tailwind CSS that you can use to help speed up development:

    Using Tailwind CSS took away my fear of front-end development without having to think about Bootstrap, BEM, SASS mix-ins, custom utility classes, purge processing, etc.

    Base Template

    I gave myself a 3-week target (not full-time) to get the new site released and this couldn't have been done without getting a head start from a base theme. I found an open-source template built by Timothy Lin on the Tailwind Awesome website that suited my key requirements:

    • Clean
    • Simple
    • Elegant
    • Maintainable
    • Easily customisable

    Another developer by the name of Leo developed a variation of this already great template that I felt met my requirements to a tee.

    Even though the template code-base was developed in Next.js, this did not matter as I could easily migrate the Tailwind markup into my Gatsby JS project. Getting Tailwind set up initially for Gatsby took a little tinkering to get right and to ensure the generated CSS footprint was kept relatively small.
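
    For anyone attempting the same, the part that keeps the CSS footprint small is pointing Tailwind at your source files so only the classes you actually use end up in the generated CSS. A minimal sketch of tailwind.config.js is shown below - depending on the Tailwind version, the option is named purge (v2) or content (v3):

    // tailwind.config.js (minimal sketch)
    module.exports = {
      content: ["./src/**/*.{js,jsx,ts,tsx}"],
      theme: {
        extend: {},
      },
      plugins: [],
    };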

    As you can see from the new site build, I was able to make further modifications to suit my requirements. This in itself is a testament to the original template build quality and the power of Tailwind CSS.

    Improvements

    As well as changing the look of my site, I thought it would be an opportune time to make a few other small enhancements.

    Google Ads

    Removing Google Ads had been at the forefront of my mind ever since I moved over to Netlify to host my website. Previously, the ads were a way to contribute to the yearly hosting cost. Now, this is no longer relevant (as I'm on Netlify's free hosting plan), especially when weighing a meagre monetary return against improving the overall look and load times of the site.

    In its place, I have a Buy Me A Coffee profile for those who would like to support the content I write.

    Updated Version of Gatsby JS

    It seemed natural to upgrade the version of Gatsby JS from version 2 to 4 during the reworking of my site to keep up-to-date with the latest changes and remove any deprecated code.

    Upgrading from version 2 to 4 took a little longer than I'd hoped as other elements required updating such as Node and NPM packages. This resulted in a lot of breaking changes within my code-base that I had to rectify.

    The process was arduous but worth doing, as I found site build times in Netlify reduced significantly.

    Gatsby Build Caching

    I briefly spoke about improved Netlify build times (above) due to efficiencies in code changes relating to upgrading to Gatsby 4. There is one more string to my bow to aid further build efficiency: installing the netlify-plugin-gatsby-cache plugin within Netlify - a one-click install.
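
    For anyone who prefers file-based configuration over the one-click install, the same plugin can also be declared in netlify.toml:

    [[plugins]]
      package = "netlify-plugin-gatsby-cache"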

    I highly recommend that everyone who has a Gatsby site on Netlify installs this plugin, as it instantly reduces build times. For a website like my own that houses over 300 posts, the build minutes do start to add up.

    Features Yet To Be Implemented

    Even though the new version of my site is live, there are features I still plan on implementing.

    Algolia Site Search

    As part of getting a new version of my site released in such a short period, I had to focus on the core areas and everything else was secondary. One of the features that didn’t make the cut was the site search using Algolia.

    I do plan on reinstating the site search feature at some point, as I found it helpful for searching through my older posts and, surprisingly (based on the stats), visitors to the site also made use of it.

    Short-Form Content

    I like the idea of posting smaller pieces of content that don't have to result in very lengthy written blog posts. I'm not sure what I will call this new section; only two names come to mind: "Short-form" or "Bytesize". It could consist of the following types of content:

    • Small, concise code snippets.
    • Links to content I found online that could be useful in certain technical use-cases.
    • Book recommendations.
    • Quotes.
    • Thoughts on news articles - John Gruber style!

    At one point, I wrote blog posts I categorised as Quick Tips, which to this date consists of a mere four posts that I never added to. I think the naming of this category wasn't quite right.

    I see this section functioning in a similar fashion to Marco Heine's Today I Learned.

    My Bookmarks

    I like the idea of having a single page with a bunch of links to useful sites I keep going back to. They could be sites that you have never come across before, which is all the more reason to share them.

    Closing Thoughts

    I normally find a full-site rebuild quite trying at times. This time was different and there were two reasons for this.

    Firstly, the site was already built in Gatsby JS, so the rebuild involved minimal code changes, even when taking into consideration the changes needed to update to version 4. Secondly, using Tailwind CSS as a front-end framework was a very rewarding experience, especially when page builds came to fruition in such a quick turnaround.

    I hope you find the new design more aesthetically pleasing and that it makes reading through blog posts a more enjoyable experience.

  • I’ve recently updated my website from the ground up (something I will write in greater detail in a future post) and when it came to releasing all changes to Netlify, I was greeted by the following error in the build log:

    7:39:29 PM: $ gatsby build
    7:39:30 PM: error Gatsby requires Node.js 14.15.0 or higher (you have v12.18.0).
    7:39:30 PM: Upgrade Node to the latest stable release: https://gatsby.dev/upgrading-node-js
    

    Based on the error, it appears that the Node version being used for the build is older than what Gatsby requires. In fact, I was surprised to discover just how old the Node version installed on my own machine was too. So I updated Node on my local environment, as well as all of the NPM packages for my website.

    I now needed to ensure my website hosted in Netlify was using the same versions.

    The quickest way to update the Node and NPM versions is to add the following environment variables to your site's build settings in Netlify:

    NODE_VERSION = "14.15.0"
    NPM_VERSION = "8.5.5"
    

    You can also set the Node and NPM versions by adding a netlify.toml file to the root of your website project before committing your build to Netlify:

    [build.environment]
        NODE_VERSION = "14.15.0"
        NPM_VERSION = "8.5.5" 
    
  • I created a simple GatsbyJS pagination component that works in a similar way to my earlier ASP.NET Core version, where the user is able to paginate through a list using the standard "Previous" and "Next" links as well as by selecting individual page numbers.

    Like the ASP.NET Core version, I have tried to make this pagination component very portable, so there shouldn't be any issues in adding this straight into your project. Plug and play!

    import * as React from 'react'
    import { Link } from 'gatsby'
    import PropTypes from 'prop-types'
    
    // Create URL path for numeric pagination
    const getPageNumberPath = (currentIndex, basePath) => {
      if (currentIndex === 1) {
        return basePath
      }
      
      return `${basePath}/page-${(currentIndex)}`
    }
    
    // Create an object array of pagination numbers. 
    // The number of page numbers to render is set to 5.
    const getPaginationGroup = (basePath, currentPage, pageCount, noOfPagesNos = 5) => {
        let startPage = currentPage;
    
        if (startPage === 1 || startPage === 2 || pageCount < noOfPagesNos)
            startPage = 1;
        else
            startPage -= 2;
    
        let maxPage = startPage + noOfPagesNos;
    
        if (pageCount < maxPage) {
            maxPage = pageCount + 1
        }
    
        if (maxPage - startPage !== noOfPagesNos && maxPage > noOfPagesNos) {
            startPage = maxPage - noOfPagesNos;
        }
    
        let paginationInfo = [];
    
        for (let i = startPage; i < maxPage; i++) {        
            paginationInfo.push({
                number: i,
                url: getPageNumberPath(i, basePath),
                isCurrent: currentPage === i
            });
        }
    
        return paginationInfo;
    };
    
    export const Pagination = ({ pageInfo, basePath }) => {
        if (!pageInfo) 
            return null
    
        const { currentPage, pageCount } = pageInfo
    
        // Create URL path for previous and next buttons
        const prevPagePath = currentPage === 2 ? basePath : `${basePath}/page-${(currentPage - 1)}`
        const nextPagePath = `${basePath}/page-${(currentPage + 1)}`
        
        if (pageCount > 1) { 
            return (
                    <ol>
                        {currentPage > 1 ? 
                            <li>
                                <Link to={prevPagePath}>
                                    Go to previous page
                                </Link>
                            </li> : null}       
                        {getPaginationGroup(basePath, currentPage, pageCount).map((item, i) => {
                            return (
                                <li key={i}>
                                    <Link to={item.url} className={`${item.isCurrent ?  "is-current" : ""}`}>
                                        Go to page {item.number}
                                    </Link>
                                </li>
                            )
                        })}
                        {currentPage !== pageCount ?
                            <li>
                                <Link to={nextPagePath}>
                                    Go to next page
                                </Link>
                            </li> : null}
                    </ol>
            )
        }
        else {
            return null
        }
      }
    
    Pagination.propTypes = {
        pageInfo: PropTypes.object,
        basePath: PropTypes.string
    }
    
    export default Pagination;
    

    This component requires just two parameters:

    1. pageInfo: A page context object created when Gatsby generates the site pages. The object should contain two properties: the current page that is being viewed (currentPage) and the total number of pages (pageCount).
    2. basePath: The parent URL of where the pagination component will reside. For example, if your listing page is "/customers", this will be the base path. The pagination component will then prefix this to construct URLs in the format of "/customers/page-2". A usage sketch is shown below.
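
    Below is a hedged usage sketch. It assumes your gatsby-node.js already passes currentPage and pageCount into the page context when creating the listing pages; the component and template paths are purely illustrative:

    // src/templates/customers.js (illustrative)
    import * as React from 'react'
    import Pagination from '../components/pagination'
    
    const CustomersPage = ({ pageContext }) => (
      <>
        {/* ...render the list of items for this page... */}
        <Pagination pageInfo={pageContext} basePath="/customers" />
      </>
    )
    
    export default CustomersPage
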
  • You probably haven't noticed (and you'd be forgiven if this is the case!) that my site now has the ability to search through posts. This is a strange turn of events for me, as I decided to remove search capability from my site many years ago because I didn't feel it added any benefit for the user. This became evident from Google Analytics stats, where searches never hit high enough numbers to warrant having it. The numbers don't lie!

    So what caused this turnaround?

    I've noticed that I regularly refer back through posts to refresh myself on things I've done in the past and to find solutions to issues I know I've previously written about. Having a search would make trawling through my few hundred posts a lot easier. So this is more of a personal requirement than a commercial one. But there is an exciting aspect to this as well - experimenting with Algolia. Using Algolia search and integrating it with GatsbyJS is something I've been meaning to look into for a long time.

    The thought of having the good ol' magnifying glass back in the navigation makes me nostalgic!

    Note: In this post, I won't be covering the basic Algolia setup or the plugins that need to be installed, as there is already a great wealth of information online. Check out my "Useful Links" section at the end of the post.

    Basic Setup

    Integrating Algolia into GatsbyJS was relatively straightforward due to the wealth of information that others have already written, as well as the plugins themselves. The plugins make light work of rendering search results quickly, allowing enough customisation of the HTML markup for easy implementation within any site. By default, the plugins contain the following components:

    • InstantSearch
    • SearchBox
    • Hits

    import algoliasearch from 'algoliasearch/lite';
    import PropTypes from 'prop-types';
    import { Link } from 'gatsby';
    import { InstantSearch, Hits, Highlight, SearchBox } from 'react-instantsearch-dom';
    import React from 'react';
    
    // Get API keys from the environment file.
    const appId = process.env.GATSBY_ALGOLIA_APP_ID;
    const searchKey = process.env.GATSBY_ALGOLIA_SEARCH_KEY;
    const searchClient = algoliasearch(appId, searchKey);
    
    const SearchPage = () => (
      <InstantSearch
        searchClient={searchClient}
        indexName={process.env.GATSBY_ALGOLIA_INDEX_NAME}
      >
        <SearchBox />
        <Hits hitComponent={Hit} />
      </InstantSearch>
    );
    
    function Hit(props) {
      return (
        <article className="hentry post">
          <h3 className="entry-title">
            <Link to={props.hit.fields.slug}>
              <Highlight attribute="title" hit={props.hit} tagName="mark" />
            </Link>
          </h3>
          <div className="entry-meta">
            <span className="read-time">{props.hit.fields.readingTime.text}</span>
          </div>
          <p className="entry-content">
            <Highlight hit={props.hit} attribute="summary" tagName="mark" />
          </p>
        </article>
      );
    }
    
    Hit.propTypes = {
      hit: PropTypes.object.isRequired,
    };
    
    export default SearchPage;
    

    InstantSearch is the core component that directly interacts with Algolia's API. It takes two properties: "searchClient", created from the Application ID and Search Key acquired from the Algolia account setup, and "indexName". This component contains two child components: SearchBox, the search textbox, and Hits, which displays results from the search query.

    It is the Hits component where we can customise the HTML with our own markup by using its "hitComponent" attribute. In my case, I created a function to generate HTML that accesses the properties from the search index. What's really cool here is that we can also highlight our search term wherever it occurs in the results by using the Highlight component (also provided by the Algolia plugin) and adding a "tagName" attribute.

    Removing The SearchBox Component

    The standard implementation may not suit all scenarios, as you may want a search term to be sent to the InstantSearch component differently. For example, it could come from a custom search textbox or (as in my case) be read from a query-string parameter. It wasn't until I started delving further into the standard setup that I realised you cannot just remove the SearchBox component and pass a value in directly - but there is a workaround.

    I have expanded upon the code snippet above to demonstrate how my search page works...

    import algoliasearch from 'algoliasearch/lite';
    import PropTypes from 'prop-types';
    import { Link } from 'gatsby';
    import { InstantSearch, Hits, Highlight, connectSearchBox } from 'react-instantsearch-dom';
    import Layout from "../components/global/layout";
    import React, { Component } from "react";
    
    // Get API keys from the environment file.
    const appId = process.env.GATSBY_ALGOLIA_APP_ID;
    const searchKey = process.env.GATSBY_ALGOLIA_SEARCH_KEY;
    const searchClient = algoliasearch(appId, searchKey);
    const VirtualSearchBox = connectSearchBox(() => <span />);
    
    class SearchPage extends Component { 
      state = {
        searchState: {
          query: '',
        },
      };
    
      componentDidMount() {   
        // Get "term" query string parameter value.
        let search = window.location.search;
        let params = new URLSearchParams(search);
        let searchTerm = params.get("term");
    
        // Send the query string value to a "searchState" object used by Algolia.
        this.setState(state => ({
          searchState: {
            ...state.searchState,
            query: searchTerm,
          },
        }));
     }
    
      render() {
          // Default "instantSearch" HTML to prompt user to enter a search term.
          var instantSearch = null;
          
          // If there is a search term, utilise Algolia's instant search.
          if (this.state.searchState.query) {
            instantSearch = <div className="entry-content">
                              <h2>You've searched for "{this.state.searchState.query}".</h2>
                              <div className="post-list archives-list">
                              <InstantSearch
                                  searchClient={searchClient}
                                  indexName={process.env.GATSBY_ALGOLIA_INDEX_NAME}
                                  searchState={this.state.searchState}
                                >
                                  <VirtualSearchBox />
                                  <Hits hitComponent={Hit} />
                                </InstantSearch>  
                              </div>
                            </div>
          }
          else {
            instantSearch = <div className="entry-content">
                              <h2>You haven't entered a search term.</h2>
                              <p>Carry out a search by clicking the <em>magnifying glass</em> in the navigation.</p>
                            </div>
          }
    
          return (
            <Layout>
              <header className="page-header">
                <h1>Search</h1>
                <p>Search the knowledge-base...</p>
              </header>
              <div id="primary" className="content-area">
                <div id="content" className="site-content" role="main">
                    <div className="layout-fixed">
                        <article className="page hentry">
                          {instantSearch}
                        </article>
                    </div>
                </div>
              </div>
          </Layout>
        )
      }
    }
    
    function Hit(props) {
      return (
        <article className="hentry post">
          <h3 className="entry-title">
            <Link to={props.hit.fields.slug}>
              <Highlight attribute="title" hit={props.hit} tagName="mark" />
            </Link>
          </h3>
          <div className="entry-meta">
            <span className="read-time">{props.hit.fields.readingTime.text}</span>
          </div>
          <p className="entry-content">
            <Highlight hit={props.hit} attribute="summary" tagName="mark" />
          </p>
        </article>
      );
    }
    
    Hit.propTypes = {
      hit: PropTypes.object.isRequired,
    };
    
    export default SearchPage
    

    My code reads a query-string value and passes it to a "searchState" object. The searchState object is created by React InstantSearch internally, and every widget inside the library has its own way of updating it. It contains parameters on the type of search that should be performed, such as query, sorting and pagination, to name a few. All we're interested in doing is updating the query parameter of this object with our search term.

    If the query parameter of the "searchState" object contains a value, the search results are rendered; otherwise, a message is displayed stating that a search term is required.

    One thing to notice is that the SearchBox has been replaced with a VirtualSearchBox, which uses the search box connector to create a virtual widget - in our case, an empty span tag. This links the InstantSearch component with the query. Having some form of search box component is compulsory.

    Conclusion

    I prefer not to use the out-of-the-box search box component as I can potentially save requests to Algolia's API, since searches aren't being made on the fly as a user types a search term - which is the plugin's default behaviour.

    Passing a search term through a query-string may come across as a little backwards, especially when it's rather nice to see search results change before your eyes as you type letter-by-letter. However, the on-the-fly approach misses one key element: tracking in Google Analytics. Even though I will primarily be the person making the most use of my site search, it'll be interesting to see who else uses it and what search keywords are used.

    Useful Links

  • I’ll be the first to admit that I very rarely (if at all!) assign a nice pretty share image to any post that gets shared on social networks. Maybe it’s because I hardly post what I write to social media in the first place! :-) Nevertheless, this isn’t the right attitude. If I am really going to do this, then the whole process needs to be quick and render a share image that sets the tone and will hopefully entice a potential reader to click on my post.

    I started delving into how my favourite developer site, dev.to, manages to create these really simple text-based share images dynamically. They have a pretty good setup, as they’ve somehow managed to generate a share image that contains relevant post-related information perfectly, such as:

    • Post title
    • Date
    • Author
    • Related Tech Stack Icons

    For those who are as nosey as I am and want to know how dev.to undertakes such functionality, they have kindly written the following post - How dev.to dynamically generates social images.

    Since my website is built using the Gatsby framework, I prefer to use a local process to dynamically generate a social image without the need to rely on another third-party service. What's the point in using a third-party service to do everything for you when it’s more fun to build something yourself!

    I had envisaged implementing a process that will allow me to pass in the URL of my blog posts to a script, which in turn will render a social image containing basic information about a blog post.

    Intro Into Puppeteer

    Whilst doing some Googling, one tool kept cropping up in different forms and uses - Puppeteer. Puppeteer is a Node.js library maintained by Google Chrome’s development team and enables us to control any Chrome Dev-Tools based browser through scripts. These scripts can programmatically execute a variety of actions that you would generally do in a browser.

    To give you a bit of an insight into the actions Puppeteer can carry out, check out this Github repo. Here you can see Puppeteer is a tool for testing, scraping and automating tasks on web pages - a very useful tool. The part I spent most of my time understanding was its webpage screenshot feature.

    To use Puppeteer, you will first need to install the library package, of which two options are available:

    • Puppeteer Core
    • Puppeteer

    Puppeteer Core is the lighter-weight package that can interact with any DevTools-based browser you already have installed.

    npm install puppeteer-core
    

    Then there is the full package, which also installs the most recent version of Chromium within the node_modules directory of your project.

    npm install puppeteer
    

    I opted for the full package just to ensure I have the most compatible version of Chromium for running Puppeteer.

    Puppeteer Webpage Screenshot Script

    Now that we have Puppeteer installed, I wrote a script and added it to the root of my Gatsby site. The script carries out the following:

    • Accepts a single argument containing the URL of a webpage. This will be the page containing information about my blog post in a share format - all will become clear in the next section.
    • Takes a cropped screenshot of an area of the webpage - in this case, 840px x 420px, the exact size of my share image.
    • Uses the page name in the URL as the image file name.
    • Stores the screenshot in my "Social Share" media directory.

    const puppeteer = require('puppeteer');
    
    // If an argument is not provided containing a website URL, end the task.
    if (process.argv.length !== 3) {
      console.log("Please provide a single argument containing a website URL.");
      return;
    }
    
    const pageUrl = process.argv[2];
    
    const options = {
        path: `./static/media/Blog/Social Share/${pageUrl.substring(pageUrl.lastIndexOf('/') + 1)}.jpg`,
        fullPage: false,
        clip: {
          x: 0,
          y: 0,
          width: 840,
          height: 420
        }
    };
    
    (async () => {
      const browser = await puppeteer.launch({ headless: false });
      const page = await browser.newPage();
      await page.setViewport({ width: 1280, height: 800, deviceScaleFactor: 1.5 });
      await page.goto(pageUrl);
      await page.screenshot(options);
      await browser.close();
    })();
    

    The script can be run as so:

    node puppeteer-screenshot.js http://localhost:8000/socialcard/Blog/2020/07/25/Using-Instagram-API-To-Output-Profile-Photos-In-ASPNET-2020-Edition
    

    I made an addition to my Gatsby project that generates a social share page for every blog post, where the URL path is prefixed with /socialcard. These share pages are only generated when in development mode.
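
    As a rough sketch of that addition (the GraphQL query, slug field and template path are assumptions for illustration - your own gatsby-node.js will differ), the social card pages can be created like any other Gatsby page, guarded by a development-mode check:

    // gatsby-node.js (sketch)
    exports.createPages = async ({ graphql, actions }) => {
      const { createPage } = actions;
    
      // Only generate the social card pages when running in development mode.
      if (process.env.NODE_ENV !== "development") return;
    
      const result = await graphql(`
        {
          allMarkdownRemark {
            nodes {
              fields {
                slug
              }
            }
          }
        }
      `);
    
      result.data.allMarkdownRemark.nodes.forEach(({ fields }) => {
        createPage({
          path: `/socialcard${fields.slug}`,
          component: require.resolve("./src/templates/social-card.js"),
          context: { slug: fields.slug },
        });
      });
    };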

    Social Share Page

    Now that we have our Puppeteer script, all that needs to be accomplished is to create a nice looking visual for Puppeteer to convert into an image. I wanted some form of automation where blog post information was automatically populated.

    I’m starting off with a very simple layout taking some inspiration from dev.to and outputting the following information:

    • Title
    • Date
    • Tags
    • Read time

    Working with HTML and CSS isn’t exactly my forte. Luckily for me, I just needed to do enough to make the share image look presentable.

    Social Card Page

    You can view the HTML and CSS on JSFiddle. Feel free to update and make it better! If you do make any improvements, update the JSFiddle and let me know!

    Next Steps

    I plan on adding some additional functionality allowing a blog post teaser image (if one is added) to be used as a background to make things look a little more interesting. At the moment, the share image is very plain. As you can tell, I keep things really simple as design isn’t my strongest area. :-)

    If all goes to plan, when I share this post to Twitter you should see my newly generated share image.

  • Aligning Images In Markdown

    Every post on this site has been written in markdown since successfully moving over to GatsbyJS. Overall, the transition has been painless and I have found that writing blog posts using the markdown syntax is a lot more efficient than using a conventional WYSIWYG editor. I never noticed until making the move to markdown how fiddly those editors were, as you sometimes needed to clean the generated markup at HTML level.

    Of course, all the efficiency of markdown does come at a minor cost in terms of flexibility. Out of the minor limitations, there was one I couldn't let pass: I needed to find a way to position images left, right and centre, as the majority of my previous posts had been formatted in this way. When going through the conversion process from HTML to markdown, all my posts were somewhat messed up and images were rendered at 100% width.

    HTML can be mingled alongside the markdown syntax, so I do have the option of using the image tag and appending styling. I wouldn't recommend this from a maintainability perspective. Markdown is platform-agnostic, so your content is not tied to a specific platform. By adding HTML to markdown, you're instantly sacrificing the portability of your content.

    I found a more suitable approach would be to handle the image positioning by appending a hash value to the end of the image URL - for example, #left, #right or #center. At CSS level, we can target the src attribute of the image and position it, along with any additional styling, based on the hash value. Very neat!

    img[src*='#left'] {
      float: left;
      margin: 10px 10px 10px 0;
    }
    
    img[src*='#center'] {
      display: block;
      margin: 0 auto;
    }
    
    img[src*='#right'] {
      float: right;
      margin: 10px 0 10px 10px;
    }
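
    An image can then be positioned in a post using standard markdown syntax by appending the hash to its URL (the image path below is purely illustrative):

    ![Example screenshot](/media/Blog/example-screenshot.png#right)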
    

    Being someone who doesn’t delve into front-end coding techniques as much as I used to, I am amazed at the type of things you can do within CSS. I’ve obviously come late to the more advanced CSS selectors party.