Blog

Blogging on programming and life in general.

  • One of the first steps in integrating Apple Pay is to check the domain against the Developer Account. For each merchant ID you've registered, you'll need to upload a domain-verification file. This involves placing the verification file at the following path for your domain:

    https://[DOMAIN_NAME]/.well-known/apple-developer-merchantid-domain-association
    

    As you can see, the "apple-developer-merchantid-domain-association" file does not have an extension, which causes issues in IIS, as it won't serve extensionless files by default. From what I've read online, adding an "application/octet-stream" MIME type to your site should resolve the issue:

    IIS Mime Type - Octet Stream
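
    For reference, a MIME mapping along these lines would be added to the site's web.config (a sketch of that suggested approach, not a config I've verified):

    <system.webServer>
    	<staticContent>
    		<!-- Treat the extensionless verification file as a binary stream. -->
    		<mimeMap fileExtension="." mimeType="application/octet-stream" />
    	</staticContent>
    </system.webServer>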

    In my case, this didn't work. Plus, I didn't like the idea of adding a MIME type purely for the purpose of accepting extensionless paths. Instead, I decided to go down the URL Rewriting route, where I would add the "apple-developer-merchantid-domain-association" file with a ".txt" extension to the "/.well-known" directory and then rewrite this path within the application's web.config file.

    <rewrite>
    	<rules>
    		<rule name="Apple Pay" stopProcessing="true">
    		  <match url=".well-known/apple-developer-merchantid-domain-association" />
    		  <action type="Rewrite" url=".well-known/apple-developer-merchantid-domain-association.txt" appendQueryString="false" />
    		</rule>
    	</rules>
    </rewrite>
    

    Through this rewrite rule, the request path is changed internally, while the URL of the request displayed in the address bar (without the extension) stays the same. Now Apple can verify the site.

  • I've owned my UniFi Dream Machine router for a little over two years, and I'm still getting accustomed to the wide array of configuration options available in the device admin settings. My usual rule of thumb is to only fiddle with the settings if absolutely necessary.

    Today was the day when I needed to change one setting on my router so that my download and upload speeds were not limited. Embarrassingly, I had been criticising Virgin Media, my internet service provider (ISP), for not keeping their half of the bargain in supplying me with the broadband speed promised, only to discover that it was my Dream Machine all along. Very unexpected.

    In the UniFi Network settings, look out for an option called "Smart Queues", where the download and upload speed limits can be increased or disabled entirely.

    UniFi Smart Queue Setting

    What is "Smart Queues" and why would we need it? "Smart Queues" helps decongest networks with lots of clients and constant load. When enabled it will reduce the maximum throughput in order to minimise latency over the network when the connection is at full capacity. Low latency is important for voice/video calls and fast-paced online multiplayer gaming. The following StackOverflow post adds further clarity on the subject:

    Most routers and modems have a design flaw called "bufferbloat"; when your Internet connection gets fully loaded (congested), they mismanage their queues of packets waiting to be sent, and let the queue grow out of control, which just adds latency with no benefit. SQM is the fix for bufferbloat.

    SQM is only tangentially related to QoS. Traditional QoS schemes prioritize some kinds of traffic over others, so when there is congestion, the lower-priority traffic gets slammed with congestion-related latency, and the high-priority traffic hopefully skates by without problems. In contrast, SQM tries to keep the latency low on all traffic even in the face of congestion, without prioritizing one kind of traffic over another.

    I decided to disable "Smart Queues", as there isn't enough network traffic in my household to warrant any form of QoS consideration. The setting can be found by logging into the router and navigating to the Network section > Settings > Internet > WAN Networks > Advanced.

    Once disabled, the difference in internet speed is like night and day.

    Before:

    Internet Speed - Before

    After:

    Internet Speed - After

  • I've worked on numerous projects that required the user to upload a single photo or a collection of photos that they could then manipulate in some manner, whether it was adding filtering effects or morphing their face for a TV show promotion.

    In all of these projects, the user's uploaded photo must be kept for a specific amount of time - long enough for the user to manipulate their image. The question that had always arisen, from a GDPR as well as a development perspective, was: how long should the users' uploaded photos be stored?

    Previously, these photos were stored in the cloud in a temporary blob storage container, with an hourly task that removed images older than 6 hours. This also ensured that the storage container remained small in size, lowering usage costs.
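
    As an illustration of that cleanup task, here is a sketch in JavaScript using the @azure/storage-blob package (the container name, connection string and 6-hour cutoff are placeholders; the original task wasn't necessarily written this way):

    const { BlobServiceClient } = require("@azure/storage-blob");
    
    // Delete blobs older than 6 hours from the temporary uploads container.
    async function cleanUpUploads(connectionString) {
      const service = BlobServiceClient.fromConnectionString(connectionString);
      const container = service.getContainerClient("temp-uploads");
      const cutoff = new Date(Date.now() - 6 * 60 * 60 * 1000);
    
      for await (const blob of container.listBlobsFlat()) {
        if (blob.properties.lastModified < cutoff) {
          await container.deleteBlob(blob.name);
        }
      }
    }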

    Then one day, it hit me... What if a user's uploaded photos could be stored locally through their own browser before any form of manipulation? Enter local storage...

    What Is Local Storage?

    Local storage allows data to be stored in the browser as key/value pairs. This data does not have a set expiration date and is not cleared when the browser is closed. Only string values can be stored in local storage - this won't be a problem, as we'll see in this post when we store a collection of images along with some data for each.
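
    At its simplest, the API looks like this (throwaway key and value for illustration):

    // Values persist across browser sessions until explicitly removed.
    localStorage.setItem("greeting", "Hello!");
    console.log(localStorage.getItem("greeting")); // "Hello!"
    localStorage.removeItem("greeting");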

    Example: Storing Collection of Photos

    The premise of this example is to allow the user to upload a collection of photos. On successful upload, their photo will be rendered, and they will have the ability to remove a photo from the collection. Adding and removing a photo will also cause the browser's localStorage to be updated.

    Screenshot: Storing Images in Local Storage

    A live demo of this page can be found on my JSFiddle account: https://jsfiddle.net/sbhomra/bts3xo5n/.

    Code

    HTML

    <div>
      <h1>
        Example: Storing Images in Local Storage
      </h1>
      <input id="image-upload" type="file" />
      <ul id="image-collection">    
      </ul>
    </div>
    

    JavaScript

    const fileUploadLimit = 1048576; // 1MB in bytes. Formula: 1MB = 1 * 1024 * 1024.
    const localStorageKey = "images";
    let imageData = [];
    
    // Render image in HTML by adding to the unordered list.
    function renderImage(imageObj, $imageCollection) {
      if (imageObj.file_base64.length) {
        $imageCollection.append("<li><img src=\"data:image/png;base64," + imageObj.file_base64 + "\"  width=\"200\" /><br />" + imageObj.name + "<br /><a href=\"#\" data-timestamp=\"" + imageObj.timestamp + "\" class=\"btn-delete\">Remove</a></li>");
      }
    }
    
    // Add image to local storage.
    function addImage(imageObj) {
      imageData.push(imageObj);
      localStorage.setItem(localStorageKey, JSON.stringify(imageData));
    }
    
    // Remove image from local storage by timestamp.
    function removeImage(timestamp) {
      // Remove item by the timestamp.
      imageData = imageData.filter(img => img.timestamp !== timestamp);
    
      // Update local storage.
      localStorage.setItem(localStorageKey, JSON.stringify(imageData));
    }
    
    // Read image data stored in local storage.
    function getImages($imageCollection) {
      const localStorageData = localStorage.getItem(localStorageKey);
    
      if (localStorageData !== null) {
        imageData = JSON.parse(localStorageData);
    
        for (let i = 0; i < imageData.length; i++) {
          renderImage(imageData[i], $imageCollection);
        }
      }
    }
    
    // Delete button action to fire off deletion. Unbind any existing
    // handler first so repeat calls don't attach duplicate handlers.
    function deleteImageAction() {
      $(".btn-delete").off("click").on("click", function(e) {
        e.preventDefault();
    
        removeImage($(this).data("timestamp"));
    
        // Remove the HTML markup for this image.
        $(this).parent().remove();
      });
    }
    
    // Upload action to fire off file upload automatically.
    function uploadChangeAction($upload, $imageCollection) {
      $upload.on("change", function(e) {
        e.preventDefault();
    
        // Ensure validation message is removed (if one is present).
        $upload.next("p").remove();
    
        const file = e.target.files[0];
    
        if (file.size <= fileUploadLimit) {
          const reader = new FileReader();
    
          reader.onloadend = () => {
            const base64String = reader.result
              .replace('data:', '')
              .replace(/^.+,/, '');
    
            // Create an object containing image information.
            let imageObj = {
              name: "image-" + ($imageCollection.find("li").length + 1),
              timestamp: Date.now(),
              file_base64: base64String.toString()
            };
    
            // Render the image and add it to local storage.
            renderImage(imageObj, $imageCollection);
            addImage(imageObj);
    
            deleteImageAction();
    
            // Clear upload element.
            $upload.val("");
          };
    
          reader.readAsDataURL(file);
        } else {
          $upload.after("<p>File too large</p>");
        }
      });
    }
    
    // Initialise.
    $(document).ready(function() {
      getImages($("#image-collection"));
    
      // Set action events.
      uploadChangeAction($("#image-upload"), $("#image-collection"));
      deleteImageAction();
    });
    

    The key functions to look at are:

    • addImage()
    • removeImage()
    • getImages()

    Each of these functions stores the uploaded photos as an array of objects, where each photo consists of a name, a timestamp and a base64 string. One common piece of functionality used across these functions is the use of JSON methods to help us store our collection of photos in local storage:

    • JSON.stringify() - to convert an array to a string.
    • JSON.parse() - to convert a JSON string into an object array for manipulation.

    When saving or retrieving your saved value from local storage, a unique identifier needs to be set through a "key". In my example, I've set the following global variable that is referenced whenever I need to use the "localStorage" methods.

    const localStorageKey = "images";
    

    When saving to localStorage, we will have to stringify our array of objects:

    localStorage.setItem(localStorageKey, JSON.stringify(imageData));
    

    Retrieving our array requires us to convert the value from a string back into an object:

    imageData = JSON.parse(localStorage.getItem(localStorageKey))
    

    After we've uploaded some images, we can see what's stored by opening the browser's developer tools. In Firefox, navigate to the "Storage" tab and select your site; if using Chrome, go to the "Application" tab and click on "Local Storage".

    Browser Developer Tools Displaying localStorage Values

    Storage Limits

    The maximum amount of data that can be stored varies depending on the browser, currently ranging between 2MB and 10MB per site.
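
    Exceeding the limit causes localStorage.setItem() to throw, so it's worth wrapping saves defensively. A minimal sketch:

    // Attempt a save, reporting failure if the browser's quota is exceeded.
    function trySaveImages(key, data) {
      try {
        localStorage.setItem(key, JSON.stringify(data));
        return true;
      } catch (e) {
        // Most modern browsers throw a "QuotaExceededError" here.
        console.warn("Could not save to local storage:", e.name);
        return false;
      }
    }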

    When I decided to use local storage to store user photos, I was concerned about exceeding storage limits, so I set an upload limit of 1MB per photo. When I get the chance to use my code in a real-world scenario, I intend to use Hermite Resize to implement some image compression and resizing techniques.

  • C# Variable Type: To 'var', Or Not To 'var'

    This post has been in the works for some time in order to express my annoyance whenever I see every variable in a C# project declared with "var". This is simply laziness... I guess typing three letters is simpler than typing the actual variable type. Even so, I believe that this, among other things, contributes to readability issues.

    I am not completely opposed to its use, as I use it in places where I deem it acceptable and where it has no effect on readability. My biggest gripe is when an entire project is littered solely with this form of variable declaration. Whenever I come across a project like this, I have to run a Visual Studio tool that automatically changes variables to use an explicit type instead.

    Some may argue that readability is unaffected because you can obtain type information simply by hovering over any variable in Visual Studio. A perfectly valid point. However, things become more obfuscated when quickly glancing at code in Notepad, on GitHub.com, or during code reviews.

    From what I've seen on forums, there are various points of view, some of which I agree with, and I wanted to share my thoughts on them.

    Opinion 1: Saves Time Through Less Keystrokes

    List<ContactInfoProvider> contacts = new List<ContactInfoProvider>();
    // vs.
    var contacts = new List<ContactInfoProvider>();
    

    This argument assumes a developer is using an IDE with no IntelliSense capability. In practice this shouldn't be a problem, since IntelliSense will quickly resolve the class and types in use. According to some developers I work with, declaring long type names is messy and adds unnecessary extra noise.

    I don't mind this approach since the type in use is evident.

    Opinion 2: Readability Is Not An Issue

    To expand upon the point I made in my intro about code being obfuscated, let's take the following snippet of code:

    ...
    var contactInfo = ContactInfoProvider.GetContactInfo(token);
    ...
    

    I'm unable to see immediately what type is being returned unless I delve into the ContactInfoProvider class and view the method. "var" should never be used where the type is not obvious, because it makes the code harder to read at a glance.

    Opinion 3: Use "var" On Basic Variable Types

    var maxRecords = 100;
    var price = 1.1;
    var successMessage = "Surinder thinks this is wrong!";
    var isSuccess = false;
    
    // vs.
    
    int maxRecords = 100;
    decimal price = 1.1m;
    string successMessage = "Surinder thinks this is correct!";
    bool isSuccess = true;
    

    I did state earlier that the use of "var" is acceptable where the type used is clearly visible. Basic variable types are an exception to that rule, as using "var" for them is completely unnecessary.

    Opinion 4: Encourages Descriptive Variable Names

    var productItem = new ProductInfo();
    // vs.
    ProductInfo pi = new ProductInfo();
    

    This is where I disagree. Descriptive variable names should always be used, regardless of whether the variable is declared explicitly or implicitly. In my time as a developer, I've seen some dreadful variable names in both instances.

    Conclusion

    There is no right or wrong way to use "var," and it is entirely up to personal preference. I only use "var" when I'm writing quick test code, playing with LINQ, or when I'm not sure what the outcome of a method is when dealing with external API libraries.

    Even though I relish learning new code notations and refactoring, I will continue to use "var" very sparingly (if at all!) in my own code. The widespread use of it just doesn't sit right with me. Explicit typing is the clearest approach, so a developer knows exactly what a variable holds the moment it is typed.

    We should always aim to make our code as clear as possible. It's difficult enough to understand explicitly typed variables that aren't self-descriptive without adding "var" to the mix.

    Forrest Gump - That's All I Have To Say About That
  • At the start of 2021, I began looking into making my money work a little harder, primarily by having a better saving strategy in place as well as entering the world of investments.

    Plum - Basic Route To Entry

    Plum was my first foray into managing my money outside of a banking environment. Plum is a savings app that connects to your bank account and cards to analyse your spending and work out how much money it can transfer from your account into its "saving pockets". The amount transferred can be adjusted through multiple options available within the app.

    Once the money is transferred, you can either leave it within Plum as a separate pot of money to keep aside for a rainy day, or go a step further and invest it in funds. It was through this app that I got my first exposure to investing in stocks, and I started giving it more serious thought after previously being quite reticent.

    As great as the wide variety of fund portfolios offered by Plum is, I found them a little restrictive and wanted to venture into making my own decisions. I invested in two funds:

    1. Tech Giants: Investing in technology shares like Facebook, Apple and Google.
    2. Balanced Bundle: With 60% shares and 40% bonds, this fund offers a balanced combination of shares and bonds.

    I ended up making quite a nice return from those funds alone, but I felt I wanted more control. For example, the Tech Giants portfolio had a relatively small percentage of equity in FAANG companies.

    Freetrade - More Control

    I haven't entirely replaced Plum with Freetrade, as I believe it still has its uses. Even though I withdrew all the money from Plum's investment funds to re-invest in my stocks in Freetrade, I still use Plum to sneakily put money aside automatically based on my preferences.

    Freetrade wasn't my first option for carrying out investments - it was Trading 212. Unfortunately, Trading 212 is not accepting any further sign-ups "due to unprecedented demand", and I have been on their waiting list for over a year. Trading 212 seemed to have a greater variety of shares and a range of investment types, including CFDs, gold and crypto.

    I can't grumble about Freetrade, as it has allowed me to invest in the majority of the areas I require, which is good for someone who is finding their feet in the investment game. Transferring funds is seamless.

    Investment Strategy

    My portfolio consists of S&P 500 and individual stocks in FAANG (and a few other) companies, where my monthly investment ratio is an 80/20 split:

    • S&P 500 - 80%
    • FAANG - 20% spread over multiple stocks

    I plan on revising this ratio to a 70/30 split later in the year, once all wedding expenses are done and I can afford to gamble a bit more. As you can see, I am focusing on the S&P 500 for the moment as I feel it's safer and overall less volatile.

    For anyone new to investing, an S&P 500 index fund is the safest place to start, as you're investing in 500 of the largest publicly-traded US companies, and it is considered to be the best overall measurement of US stock market performance. The main benefit is that a decline in some sectors might be offset by gains in others.

    Investments:

    1. Vanguard S&P 500 (VUAG)
    2. Microsoft (MSFT)
    3. Amazon (AMZN)
    4. Dell (DELL)
    5. Apple (AAPL)
    6. Google (GOOGL)
    7. AMC Entertainment (AMC)

    I see investing in an S&P 500 index fund as putting things in place for long-term wealth, not short-term wins... Short-term wins I'm hoping to achieve through specific stock investments. AMC was a bit of a wildcard investment, as the stocks were so cheap and I just kept buying through the dip. It is only now that I've managed to see a 56% return.

    Conclusion

    I've come late to the stock game, as I know of people who made quite a good return from stock purchases during Covid... Who would have thought Covid would be an opportunity where more money could be made???

    Based on my current strategy, I know full well that I won't be getting a high enough return to supplement my current income, and that's just down to playing it safe. I'm still kicking myself for not starting sooner - even if it was just solely investing in an S&P 500 fund. On average, its yearly return is around 10% and pretty consistent.

    If I had been quicker off the mark and invested £200 a month over the last 10 years, this would equate to roughly £41,500, with around £17,500 of that being interest. Probably best not to focus on time lost and just focus on this point forward.
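
    As a back-of-the-envelope check, those figures come from the standard future-value formula. A quick sketch, assuming a steady 10% annual return compounded monthly (real returns are lumpier, and contribution timing nudges the totals):

    // Future value of a fixed monthly contribution, compounded monthly.
    function futureValue(monthly, annualRate, years) {
      const r = annualRate / 12; // monthly rate
      const n = years * 12;      // number of contributions
      return monthly * ((Math.pow(1 + r, n) - 1) / r);
    }
    
    const total = futureValue(200, 0.10, 10); // ≈ £41,000
    const contributed = 200 * 12 * 10;        // £24,000
    console.log(total - contributed);         // ≈ £17,000 of interest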

    My goal by the end of the year is to increase my portfolio, and if I'm able to get some form of short-term reward, that's a bonus! I plan on writing a follow-up post at the end of the year to report what worked and what didn't.

    Disclaimer: I am not in any way a financial advisor, and this post is just a write-up of my thoughts.

  • I created a simple GatsbyJS pagination component that works in a similar way to my earlier ASP.NET Core version, where the user is able to paginate through a list using the standard "Previous" and "Next" links, as well as by selecting individual page numbers.

    Like the ASP.NET Core version, I have tried to make this pagination component very portable, so there shouldn't be any issues in adding this straight into your project. Plug and play!

    import * as React from 'react'
    import { Link } from 'gatsby'
    import PropTypes from 'prop-types'
    
    // Create URL path for numeric pagination
    const getPageNumberPath = (currentIndex, basePath) => {
      if (currentIndex === 1) {
        return basePath
      }
      
      return `${basePath}/page-${(currentIndex)}`
    }
    
    // Create an object array of pagination numbers. 
    // The number of page numbers to render is set to 5.
    const getPaginationGroup = (basePath, currentPage, pageCount, noOfPagesNos = 5) => {
        let startPage = currentPage;
    
        if (startPage === 1 || startPage === 2 || pageCount < noOfPagesNos)
            startPage = 1;
        else
            startPage -= 2;
    
        let maxPage = startPage + noOfPagesNos;
    
        if (pageCount < maxPage) {
            maxPage = pageCount + 1
        }
    
        if (maxPage - startPage !== noOfPagesNos && maxPage > noOfPagesNos) {
            startPage = maxPage - noOfPagesNos;
        }
    
        let paginationInfo = [];
    
        for (let i = startPage; i < maxPage; i++) {        
            paginationInfo.push({
                number: i,
                url: getPageNumberPath(i, basePath),
                isCurrent: currentPage === i
            });
        }
    
        return paginationInfo;
    };
    
    export const Pagination = ({ pageInfo, basePath }) => {
        if (!pageInfo) 
            return null
    
        const { currentPage, pageCount } = pageInfo
    
        // Create URL path for previous and next buttons
        const prevPagePath = currentPage === 2 ? basePath : `${basePath}/page-${(currentPage - 1)}`
        const nextPagePath = `${basePath}/page-${(currentPage + 1)}`
        
        if (pageCount > 1) { 
            return (
                    <ol>
                        {currentPage > 1 ? 
                            <li>
                                <Link to={prevPagePath}>
                                    Go to previous page
                                </Link>
                            </li> : null}       
                        {getPaginationGroup(basePath, currentPage, pageCount).map((item, i) => {
                            return (
                                <li key={i}>
                                    <Link to={item.url} className={`${item.isCurrent ?  "is-current" : ""}`}>
                                        Go to page {item.number}
                                    </Link>
                                </li>
                            )
                        })}
                        {currentPage !== pageCount ?
                            <li>
                                <Link to={nextPagePath}>
                                    Go to next page
                                </Link>
                            </li> : null}
                    </ol>
            )
        }
        else {
            return null
        }
    }
    
    Pagination.propTypes = {
        pageInfo: PropTypes.object,
        basePath: PropTypes.string
    }
    
    export default Pagination;
    

    This component requires just two parameters:

    1. pageInfo: A page context object created when Gatsby generates the site pages. The object should contain two properties: the current page that is being viewed (currentPage) and the total number of pages (pageCount).
    2. basePath: The parent URL where the pagination component will reside. For example, if your listing page is "/customers", this will be the base path. The pagination component will then prefix this to construct URLs in the format "/customers/page-2", as shown in the usage sketch below.
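
    As a minimal usage sketch, a listing template might wire the component up as follows (assuming your gatsby-node.js createPages call passes currentPage and pageCount into the page context; the file paths and template name are illustrative):

    import * as React from 'react'
    import Pagination from '../components/pagination'
    
    // Example listing template. Gatsby supplies pageContext from the
    // context object given to createPage() in gatsby-node.js.
    const CustomerListing = ({ pageContext }) => (
      <div>
        {/* ...render the current page of customers here... */}
        <Pagination
          pageInfo={{
            currentPage: pageContext.currentPage,
            pageCount: pageContext.pageCount,
          }}
          basePath="/customers"
        />
      </div>
    )
    
    export default CustomerListing
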
  • A Mother Tongue Lost

    I've always been unashamedly honest about not having the capability of speaking my mother tongue - Punjabi. I was brought up in an Indian household where those older than I would freely converse with one another in the home-grown language handed down to them. So Punjabi isn't a completely foreign language to me and I'm capable of getting the gist of what is being said... especially Indian family gossip!

    Throughout my life, I have always been spoken to in English by family around me and speaking fluently in Punjabi wasn't something that was ever forced upon me. In some ways, I'm grateful for this. If I look back at the mindset of my younger self, I more than likely would have rejected the notion of speaking a different language as I felt inadequate when it came to anything to do with learning. I knew from a very young age I was someone who wasn't naturally gifted in the art of learning - not from the lack of trying.

    Sometimes it's easier to reject our weaknesses instead of confronting them.

    I can't help thinking how different things could have been if my primary language of birth was Punjabi and English as secondary...

    Would I feel less ostracised from my own culture?
    Would I have taken more of a liking to Indian films and music?
    Would I be more aware of key festivals and take an active part in them?

    These thoughts have always been a part of me, and it is only now, after many years of keeping them locked tight within, that they've come to the surface and stare me in the face each day. The driving force behind these thoughts is being married to someone born in India, where my cultural inadequacies that were once hidden in the dark now shine brightly for her to see.

    The time has come where change needs to happen, where I no longer hide behind humour (a sign of weakness) when it comes to not knowing my cultural heritage or language. Why hasn't this cultural yearning come sooner? Well, I guess you never yearn for the thing you've never felt was lost.

    Through my wife's influence, I think my "Indian meter" is increasing day by day, and I have been introduced to things that are new to me, which can be both interesting and scary at the same time. I've even been watching a lot of Indian films and, to my surprise, rather enjoyed them.

    I think it's through her that I worry less about not knowing certain aspects of my culture, whereas in the past I only ever relied on my Mum to keep me in the loop. I previously said the following in my Preserving Digital Memories post:

    I came to the sudden realisation as we all sat down to my mum belting out her prayers that this will not last forever and it dawned on me that these are truly special moments. Being an Indian who is culturally inept in all the senses and cannot speak his native tongue, I would be comforted knowing that I'll have photos and video to look back on many years to come.

    I am comforted knowing that the little Indian culture I have left will (as a minimum) be retained, if not grow through both their influences.

    The path that lies ahead won't be easy, especially the part where I have to get to grips with learning Punjabi, as well as taking a more invested interest in special days of the year and the reasons behind them.

    I can take comfort in knowing my journey has already begun, and I have learnt a lot from watching Indian TV shows alone. Even though the TV shows are in Hindi, it's a start. For some reason, I seem to have the ability to remember bad words and phrases with such ease - to my wife's displeasure.

    At the core, I will always be British but somewhere along the way, I will always be an Indian at heart too.

  • When building React applications, my components would consist of a mixture of React Function and Class components. As much as I love to use React Function components due to their simple, lightweight and concise code construction, I had a misconception they were limited in features.

    React Function components are generally classed as "stateless", where I would go as far as passing in props (if required) and returning the rendered output in HTML. So whenever there was a need for some form of interaction where state was required, a Class component would be used.

    It was only recently that I was made aware of React Hooks, which allow React Function components to have state. Let's take a look at a simple example where we are increasing and decreasing a number.

    import React, { useState } from 'react';
    
    const CounterApp = () => {
      const [state, setState] = useState({
        count: 0
      });  
    
      const incrementCount = () => {
        setState({
          count: state.count + 1,
        });
      }
    
      const decrementCount = () => {
        setState({
          count: state.count - 1,
        });
      }
    
      return (
        <div>
          <h1>{state.count}</h1>
    
          <button onClick={incrementCount}>Increment</button>
          <button onClick={decrementCount}>Decrement</button>
        </div>
      );
    };
    
    export default CounterApp;
    

    In the code above we have accomplished two things:

    1. Storing our counter in state.
    2. Creating click handlers to update the count.

    The useState() hook accepts an initial state object, much like the state we are accustomed to initialising in a Class component. The similarity doesn't stop there: you'll also notice the same approach when setting and getting a state property. One difference worth noting is that, unlike this.setState() in a Class component, the setter returned by useState() replaces the state rather than merging it.
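
    So if the state object held more than one property, a safe update would spread the previous state first. A small sketch (the extra step property is purely illustrative):

    const [state, setState] = useState({ count: 0, step: 1 });
    
    const incrementCount = () => {
      // Spread the previous state so "step" isn't dropped, since the
      // useState() setter replaces the state rather than merging it.
      setState(prev => ({ ...prev, count: prev.count + prev.step }));
    };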

    Multiple states can also be declared, and the array destructuring syntax lets us give different names to the state variables we declare by calling useState. Really cool!

    const [age, setAge] = useState(42);
    const [fruit, setFruit] = useState('banana');
    const [todos, setTodos] = useState([{ text: 'Learn Hooks' }]);
    

    Even though this post has concentrated on state, I've found React Function components offer similar features to Class components. This is how I will be building my components going forward.

  • I've owned a NAS in various forms for many years - something that started as re-using an old PC attached to a home network and progressed to purchasing a mini NAS box with RAID support, all based on the Windows Server operating system. The main focus of these early iterations was purely to serve files and media.

    In 2015, I decided to take a gamble and invest in a Synology system after hearing rave reviews from users. Opting for the DS416play seemed like a safe option from a features, pricing and (most importantly) expandability point of view.

    After the move to Synology, everything I had beforehand felt so archaic and over-engineered. To this very day, my DS416play is chugging along housing around 4.5TB of data consisting of a combination of documents, videos, pictures and laptop image backups. All four hard drive slots have been filled providing a total of 8TB of storage space.

    Being a piece of hardware that is on 24/7 and acts as an extension of my laptop storage (via Synology Drive and MountainDuck), I'm pleased to see that after 7 years of constant use it's still ticking along. But I feel I'm due an upgrade this year, as the hardware has recently been playing up and starting to feel a little sluggish, which is only resolved via a restart. This is to be expected from a server running on an Intel Celeron processor and 1GB of RAM.

    So is having your own dedicated NAS a worthwhile investment? - Yes.

    Some of the positive and negative points listed below revolve around the ownership of a Synology NAS, as this is the type of NAS I've had the most experience with.

    Positives

    You're not a slave to a storage provider's terms, where they can change their offering or pricing structure at will. Just take a look back at what Google did with their photo service.

    Easy to set up, requiring little knowledge to get up and running (based on using a Synology). I'm no network wizard by any means, and Synology allowed me to get set up and use all the features with basic settings initially, tinkering with the more advanced areas as necessary.

    Access for many users without any additional costs. I've created accounts for my parents, sister and wife so they can access all that the server has to offer, such as photos, movies and documents. Giving access via the Synology apps is very easy.

    Cost of ownership will decrease over time after your initial setup costs (detailed in the negatives section). Provided you don't run into any hardware issues - unlikely, in my experience, as my own NAS has been issue-free since purchase while running 24/7 - no reinvestment is required. The only area where you may need to invest is additional drives for more storage, but the cost of these is nominal.

    Always accessible 24/7, locally and remotely, to myself and all the users I have set up. There isn't a day that goes by where my NAS isn't used. Most of the time I use the NAS storage as an extension of my laptop, which has very little hard-drive space, through the MountainDuck application.

    Solid eco-system consisting of the applications you need, accessible on mobile and tablet devices. This is where Synology is steps ahead of its competitors, as they invest in software as well as hardware. The core applications are Synology Drive, DS Video and Synology Photos.

    Backups can be easily configured, both on-site and off-site, through RAID configuration and the CloudSync application, which backs up to one of the many well-known cloud providers.

    Negatives

    Initial setup costs can be quite high at the outset, as you need to purchase not only an adequate NAS server but also multiple hard-disk drives that fit into your long-term expansion plan. You need to ask yourself:

    • How much data do you currently need to store?
    • What's the estimated rate of storage consumption over the next few years?
    • Does the NAS have enough hard-disk drive slots for your needs?

    If I could go back and start my Synology NAS journey again, I'd invest more in purchasing larger hard disks.

    Synology Drive On-demand Sync is a non-existent feature on macOS, which makes it difficult to store large files without taking up your own workstation's disk space. I, and many other macOS users, have been waiting very patiently for this key feature that is already fully functional on Windows. MountainDuck is a workaround, but it annoyingly takes you out of the otherwise solid Synology eco-system.

    Repairability can be somewhat restrictive depending on the model purchased. The majority of the components, such as the CPU, RAM and PSU, are soldered directly onto the motherboard, and if one piece were to fail, you are left with an oversized paperweight. Only the more expensive models allow you to replace or upgrade the RAM.

    Conclusion

    In my view, a NAS is a very worthy investment, even by today's standards. You are spoilt for choice - there is a NAS for everyone, whether you're looking for something basic or advanced. The amount of choice now available proves just how popular they have become.

    If you want true freedom and ownership over your data and don't mind a little bit of setup to get there, a NAS would be right up your street. You'll find even more uses if the NAS you've purchased has well-developed applications that might save you from purchasing another subscription to an online service, helping achieve a quicker return on investment from the original cost of the hardware. For example, through Synology I've found the following replacements for common services:

    • Google Photos > Synology Photos
    • Google Drive/OneDrive > Synology Drive
    • Evernote > Note Station
    • Nest Security Camera > Surveillance Station

    I for one am fully invested and looking out for my next upgrade, depending on what happens first: the hardware dies, or all storage capacity is used and more drive slots are required. The Synology DS1621+ seems to be right up my street.

  • Decision To Cross-post To Medium

    As much as I'd like to retain content on a single web presence, I find there are some posts that don't get much traction. The majority of my posts contain technical content that Google seems to pick up very easily due to relatable key phrases and words that developers in my industry search for.

    However, posts that are less technical in nature and (in my opinion) more thought-provoking lack page views due to the organic nature of how they're written. I believe these posts are more suited to being shared on Medium.

    I get more satisfaction in the posts that speak from my experiences and thought processes, most of which you will find in the Random Thoughts and Surinder's Log categories.

    I've already shared a handful of posts on Medium in the past - my last post was published in October 2018. I still plan on making this site the first place where content is published, and then cross-posting to Medium as I see fit. Check out my "Technical Blogging: Where Should I Be Writing?" post, which details my thoughts on the very subject of cross-posting.

    Feel free to check out my Medium profile here.