Blog

Blogging on programming and life in general.

  • Year In Review - 2018

    At the end of 2017, I made a New Year's resolution: make more of an active effort to blog, not only on my own website but also by doing a little writing elsewhere to try something a little different. Generally, I fail to stick to my resolutions, but this year was different, and I have to give myself a pat on the back for the number of posts I managed to crank out over the year.

    Even though I have blogged for over 11 years, I found that setting myself a New Year's resolution to write more has increased both my overall confidence in writing and my enjoyment of it. I now find myself using writing as a release to organise my thought process, especially when trying to grasp new concepts. If what I write helps others, that's a bonus!

    I highly recommend that everyone write!

    Popular Posts of The Year

    This year I have written 24 posts (including this one), ranging from technical to personal entries. I've delved into my Google Analytics and picked a handful of gems that I believe have, in my own small way, made an impact:

    One of the most common crawl messages everyone experiences when viewing their Search Console. There isn't an exact science to resolving this issue, so I decided to investigate a way to reduce the number of crawl errors by looking into the type of links Google was ignoring and submitting these links into a new sitemap.

    A PaginationHelper class that renders numbered pagination and can easily be reused across any list of data when called inside a controller.

    A nice helper method to easily allow partial views to be rendered as strings outside the controller context in .NET Core.

    A PowerShell script to clear out old IIS logs that can be run manually or on a schedule.

    Originally written for Syndicut's Medium channel, I write about my experiences of using a headless CMS, past to present - Kentico Cloud. When posted to Medium, I was very happy with the responses from the Kentico community as well as the stats. Since posting, it has become one of the most actively viewed posts on the Syndicut Medium channel.

    This year I started learning and developing my first app using React Native. I have much more to write about the subject next year. This post detailed a bug (now rectified in a newer version of the React Native framework) that occurs when an API endpoint returns an empty response.

    I've been using Cloudflare's CDN infrastructure to improve load times and security for my website. I developed a C# method that uses Cloudflare's API to clear cache from the CDN at code level.

    A completely out of the ordinary post - a post about travel! Writing about my time at Melia Bali is the highlight of all the posts written this year. I was testing the waters for a different writing style and hoping to produce more in a similar format. It definitely got the creative juices flowing!

    Site Refresh

    Throughout the 11 years of owning and running this site, I have only ever refreshed the look around three times. It's always been a low priority in my eyes, but in October I released a new look for my site. It's not going to win any prizes, but it should make my posts easier on the eye, with some added flair and professionalism. I've also worked very hard on implementing additional optimisations for SEO purposes.

    Guest Writing

    I made a conscious choice this year to write in other places outside the comfort of my own website. When writing outside your personal blog, the stakes are higher and things are a little more challenging, as you need to tailor your content to a potentially different audience.

    Out of the 24 posts I've written this year, 4 of them are what I categorised as "Guest Writing"... I need to come up with a better name.

    So far, I've written one post for C# Corner and the rest on Syndicut's Medium channel. This is an area I wish to grow in. I am actually in the middle of writing a piece (nothing to do with coding and more to do with fiction) for the film/entertainment website Den of Geek. What attracted me to write for Den of Geek is not just their content, but their award-winning mental health campaign.

    Statistics

    Comparing this year's statistics to last year's, I have seen a 25% increase in page views and users. Bounce rate is something I still need to work on - it has only decreased by 4%. Google Search Console statistics are looking promising - average page position has improved by 5 pages and total clicks have increased by 110%.

    Syndicut

    2018 marks eight and a half years working at Syndicut, and this year, like always, has been filled with many challenging and exciting projects. If it wasn't for the diverse range of projects, which required me to do some interesting research, I don't think this blog would be filled with the content it has today. It's been a year filled with headless CMSs, consulting, blog writing, e-commerce, Alexa, ASP.NET Core, Kentico 12 and more!

    Bring On 2019!

    Bring on 2019 and some key areas I wish to focus on and grow in...

    Public Speaking

    If I had to choose one thing I need to do next year, it is to present a worthy technical subject at one of .NET Oxford's Lightning Talks. For those who do not know .NET Oxford - it's a great meetup for .NET developers discussing anything and everything in the Microsoft development industry.

    Every so often, they have lightning talks where developers can present their own subject matter. Even though I am familiar with presenting in front of clients through work, a technical conference is a completely different kettle of fish! After all, what you present needs to be useful to the people in the same industry.

    There was something Jon Skeet said at one of the .NET Oxford meetups in October that resonated with me. He said something along the lines of: everyone should do a public talk and take part in sharing knowledge. Now if Jon Skeet says that, who am I to argue!? I just need to ponder the subject I wish to talk about.

    Consulting

    I would like to do some more consulting work. Syndicut has given me the opportunity to do that, providing our ongoing expertise to help one of our clients adopt a headless CMS (Kentico Cloud) into their current infrastructure. It was a surprise to me how much I've enjoyed consulting, and this is an area I wish to continue to grow in, in addition to what I do best - code!

    Continue To Be A Social Network Hermit

    I found it refreshing not to invest too much time (if any) in getting distracted by other people's lives on social networking sites like Facebook and Instagram. I will continue to invest that time in my own life instead, and I feel much more productive for doing so.

    I do not claim to be a martyr in this area; after all, I am an active Twitter user and this will not change. For me, Twitter houses a collection of ideas from users in an industry I am very passionate about.

    Other Bits

    I really need a coffee table and desk! It's been a long time coming. Maybe 2019 will be the year I actually get one! :-)


    Surinder signing off!

  • Today I was working with a colleague on a form that is nested inside an UpdatePanel. I'm not really a fan of UpdatePanels, but I have to admit they do save time, and for this form I had to make an exception as there was a lot of dynamic loading of web controls going on.

    One part of the form required the use of a FileUpload control. Unfortunately, the FileUpload control is not supported when working alongside AJAX and requires a full page postback. If you can move the FileUpload control and the corresponding button that carries out the upload outside the UpdatePanel without breaking your form structure, that would be the most suitable approach. If not, then keep reading!

    The most well-known approach is to use a PostBackTrigger - part of the UpdatePanel trigger options. When the trigger is set on the button that actions the file upload, that button is allowed to carry out a full page postback.

    <asp:UpdatePanel ID="FormUpdatePanel" runat="server">
        <ContentTemplate>
            <div>
                <asp:FileUpload ID="EntryFile1Upload" runat="server" />
                <br />
                <asp:Button ID="UploadButton" OnClick="UploadButton_Click" runat="server" Text="Upload File" />
            </div>
        </ContentTemplate>
        <Triggers>
            <asp:PostBackTrigger ControlID="UploadButton" />
        </Triggers>
    </asp:UpdatePanel>
    

    If your upload functionality still fails to work after adding the trigger, then you might be missing the enctype attribute on the form tag. This is something I've overlooked in the past, as some of the CMSs I work with add this attribute automatically. You can set the attribute at page or user control level on Page Load by simply adding the following line of code:

    this.Page.Form.Enctype = "multipart/form-data";
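
    As a side note, because this form loads web controls dynamically, the trigger can also be registered from code-behind rather than declaratively. A minimal sketch, assuming a ScriptManager is present on the page and UploadButton is the button from the markup above:

    protected void Page_Load(object sender, EventArgs e)
    {
        // Code-behind equivalent of the PostBackTrigger declared in the markup:
        // registers the upload button as a control that causes a full postback.
        ScriptManager.GetCurrent(this.Page).RegisterPostBackControl(UploadButton);
    }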
    
  • This month I've been writing some blog posts on why I decided to start using Cloudflare's services for my website and on utilising its API to purge cached files from the Cloudflare CDN on demand. Before reading further, I highly suggest perusing those posts just to put everything into context: my reasoning for using Cloudflare, as well as the C# code that interacts with the API, which I will be referencing later on within this very post.

    My initial Cloudflare integration revolves around serving media files more efficiently through a CDN and having the ability to refresh these files automatically as updates are made within the Kentico CMS. Cloudflare's CDN services cache your content across their large global network, moving static files closer to your visitors.

    Based on the Page Rules I configured within the Cloudflare dashboard, I am caching all media library files served through the /getmedia/ URL path in the Cloudflare CDN. The same file will be served through the CDN until the set cache limit has expired. We need to implement functionality that adds some automation to the Kentico platform to purge the cache of a specific media library file when it is updated.

    Add A Global Event

    I created an event handler for the updating of media library files, as I wanted to get details of the file being updated by leveraging the MediaFileInfo class and its Update.After event.

    protected override void OnInit()
    {
        base.OnInit();
    
        MediaFileInfo.TYPEINFO.Events.Update.After += Update_After;
    }
    
    private void Update_After(object sender, ObjectEventArgs e)
    {
        MediaFileInfo fileInfo = e.Object as MediaFileInfo;
    
        GlobalEventFunctions.PurgeMediaCache(fileInfo);
    }
    

    PurgeMediaCache() Method

    The event above calls a GlobalEventFunctions.PurgeMediaCache() method that passes the information about the changed file ready for purging. The file URL passed to the CloudflareCacheHelper.PurgeSelectedFiles() method needs to be exact and take into consideration how your instance of Kentico is serving media files. If Permanent URLs are being used, the /getmedia/ URL needs to be constructed from:

    • Current domain
    • File GUID
    • File Name
    • File Extension

    Otherwise, we can just get the file path as normal to where the media file resides.

    public class GlobalEventFunctions
    {
        /// <summary>
        /// Purges a file from the Cloudflare cache.
        /// </summary>
        /// <param name="fileInfo"></param>
        public static void PurgeMediaCache(MediaFileInfo fileInfo)
        {
            bool permanentURLEnabled = SettingsKeyInfoProvider.GetBoolValue($"{SiteContext.CurrentSiteName}.CMSMediaUsePermanentURLs");
            string filePath = string.Empty;
                
            if (permanentURLEnabled)
                filePath = $"{GetCurrentDomain()}/getmedia/{fileInfo.FileGUID.ToString()}/{fileInfo.FileName}{fileInfo.FileExtension}";
            else
                filePath = $"{GetCurrentDomain()}/{fileInfo.FilePath}";
    
            try
            {
                // Get code from: https://www.surinderbhomra.com/Blog/Post/2018/11/11/Cloudflare-API-Purge-Files-By-URL-In-C
                CloudflareCacheHelper cloudflareHelper = new CloudflareCacheHelper();
    
                cloudflareHelper.PurgeSelectedFiles(new List<string> { filePath });
            }
            catch (Exception ex)
            {
                EventLogProvider.LogException("Cloudflare Purge File Cache", "CLOUDFLARE_PURGE", ex, SiteContext.CurrentSiteID, $"Purge File: {filePath}");
            }
        }
    
        /// <summary>
        /// Get domain from current http context.
        /// </summary>
        /// <returns></returns>
        private static string GetCurrentDomain()
        {
            return $"{HttpContext.Current.Request.Url.Scheme}{Uri.SchemeDelimiter}{HttpContext.Current.Request.Url.Host}{(!HttpContext.Current.Request.Url.IsDefaultPort ? $":{HttpContext.Current.Request.Url.Port}" : null)}";
        }
    }
    

    We need not consider any other scenarios, such as insert or deletion. If a file is inserted, there is nothing to purge as it's a new file that will be cached directly in the CDN on first request, and when it comes to deletion we can just wait for the cache to expire.

    What's Next?

    The integration I have detailed so far is just scratching the surface of what Cloudflare has to offer, and I will investigate pushing more content over to the CDN. One area in particular I am looking into is carrying out full page caching. You might be thinking: why even bother, as Kentico already has pretty good caching mechanisms in place?

    Well, Cloudflare has a really neat feature called "Always Online", where a cached version of a page is served if, on the off chance, the site happens to go down or requires a reboot to install key security updates. But implementing this feature requires strict Page Rules to be set up within the Cloudflare dashboard to ensure the general workings of Kentico are not affected.

  • Earlier this week I wrote about the reasons why I decided to use Cloudflare for my website. I've been working on utilising Cloudflare's API to purge the cache on demand when files need to be updated within the CDN. To do this, I decided to write a method that primarily uses one API endpoint - /purge_cache. This endpoint allows a maximum of 30 URLs to be purged at one time, which is flexible enough to fit the majority of day-to-day use cases.

    To communicate with the API, we need to provide three pieces of information:

    1. Account Email Address
    2. Zone ID
    3. API Key

    The last two pieces of information can be found within the dashboard of your Cloudflare account.

    Code - CloudflareCacheHelper Class

    The CloudflareCacheHelper class consists of a single method, PurgeSelectedFiles(), and the following class objects used for serializing requests to and deserializing responses from the API:

    • CloudflareFileInfo
    • CloudflareZone
    • CloudflareResultInfo
    • CloudflareResponse

    Not all the properties within each of the class objects are being used at the moment based on the requests I am making. But the CloudflareCacheHelper class will be updated with more methods as I delve further into Cloudflare's functionality.

    public class CloudflareCacheHelper
    {
        public string _userEmail;
        public string _apiKey;
        public string _zoneId;
    
        private readonly string ApiEndpoint = "https://api.cloudflare.com/client/v4";
    
        /// <summary>
        /// By default the Cloudflare API values will be taken from the Web.Config.
        /// </summary>
        public CloudflareCacheHelper()
        {
            _apiKey = ConfigurationManager.AppSettings["Cloudflare.ApiKey"];
            _userEmail = ConfigurationManager.AppSettings["Cloudflare.UserEmail"];
            _zoneId = ConfigurationManager.AppSettings["Cloudflare.ZoneId"];
        }
    
        /// <summary>
        /// Set the Cloudflare API values explicitly.
        /// </summary>
        /// <param name="userEmail"></param>
        /// <param name="apiKey"></param>
        /// <param name="zoneId"></param>
        public CloudflareCacheHelper(string userEmail, string apiKey, string zoneId)
        {
            _userEmail = userEmail;
            _apiKey = apiKey;
            _zoneId = zoneId;
        }
            
        /// <summary>
        /// A collection of file paths (max of 30) will be accepted for purging cache.
        /// </summary>
        /// <param name="filePaths"></param>
        /// <returns>Boolean value on success or failure.</returns>
        public bool PurgeSelectedFiles(List<string> filePaths)
        {
            CloudflareResponse purgeResponse = null;
    
            if (filePaths?.Count > 0)
            {
                try
                {
                    HttpWebRequest purgeRequest = WebRequest.CreateHttp($"{ApiEndpoint}/zones/{_zoneId}/purge_cache");
                    purgeRequest.Method = "POST";
                    purgeRequest.ContentType = "application/json";
                    purgeRequest.Headers.Add("X-Auth-Email", _userEmail);
                    purgeRequest.Headers.Add("X-Auth-Key", _apiKey);
    
                    #region Create List Of Files For Submission In The Structure The Request Requires
    
                    CloudflareFileInfo fileInfo = new CloudflareFileInfo
                    {
                        Files = filePaths
                    };
    
                    byte[] data = Encoding.ASCII.GetBytes(JsonConvert.SerializeObject(fileInfo));
    
                    purgeRequest.ContentLength = data.Length;
    
                    using (Stream fileStream = purgeRequest.GetRequestStream())
                    {
                        fileStream.Write(data, 0, data.Length);
                        fileStream.Flush();
                    }
    
                    #endregion
    
                    using (WebResponse response = purgeRequest.GetResponse())
                    {
                        using (StreamReader purgeStream = new StreamReader(response.GetResponseStream()))
                        {
                            string responseJson = purgeStream.ReadToEnd();
    
                            if (!string.IsNullOrEmpty(responseJson))
                                purgeResponse = JsonConvert.DeserializeObject<CloudflareResponse>(responseJson);
                        }
                    }
                }
                catch (Exception)
                {
                    // Rethrow, preserving the original stack trace.
                    throw;
                }

                return purgeResponse != null && purgeResponse.Success;
            }
    
            return false;
        }
    
        #region Cloudflare Class Objects
    
        public class CloudflareFileInfo
        {
            [JsonProperty("files")]
            public List<string> Files { get; set; }
        }
    
        public class CloudflareZone
        {
            [JsonProperty("id")]
            public string Id { get; set; }
    
            [JsonProperty("type")]
            public string Type { get; set; }
    
            [JsonProperty("name")]
            public string Name { get; set; }
    
            [JsonProperty("content")]
            public string Content { get; set; }
    
            [JsonProperty("proxiable")]
            public bool Proxiable { get; set; }
    
            [JsonProperty("proxied")]
            public bool Proxied { get; set; }
    
            [JsonProperty("ttl")]
            public int Ttl { get; set; }
    
            [JsonProperty("priority")]
            public int Priority { get; set; }
    
            [JsonProperty("locked")]
            public bool Locked { get; set; }
    
            [JsonProperty("zone_id")]
            public string ZoneId { get; set; }
    
            [JsonProperty("zone_name")]
            public string ZoneName { get; set; }
    
            [JsonProperty("modified_on")]
            public DateTime ModifiedOn { get; set; }
    
            [JsonProperty("created_on")]
            public DateTime CreatedOn { get; set; }
        }
    
        public class CloudflareResultInfo
        {
            [JsonProperty("page")]
            public int Page { get; set; }
    
            [JsonProperty("per_page")]
            public int PerPage { get; set; }
    
            [JsonProperty("count")]
            public int Count { get; set; }
    
            [JsonProperty("total_count")]
            public int TotalCount { get; set; }
        }
    
        public class CloudflareResponse
        {
            [JsonProperty("result")]
            public CloudflareZone Result { get; set; }
    
            [JsonProperty("success")]
            public bool Success { get; set; }
    
            [JsonProperty("errors")]
            public IList<object> Errors { get; set; }
    
            [JsonProperty("messages")]
            public IList<object> Messages { get; set; }
    
            [JsonProperty("result_info")]
            public CloudflareResultInfo ResultInfo { get; set; }
        }
    
        #endregion
    }
    

    Example - Purging Cache of Two Files

    A string collection of URLs can be passed into the method, allowing the cache of a batch of files to be purged in a single request. If all goes well, the success response should be true.

    CloudflareCacheHelper cloudflareCache = new CloudflareCacheHelper();
    
    bool isSuccess = cloudflareCache.PurgeSelectedFiles(new List<string> {
                                        "https://www.surinderbhomra.com/getmedia/7907d934-805f-4bd3-86e7-a6b2027b4ba6/CloudflareResponseMISS.png",
                                        "https://www.surinderbhomra.com/getmedia/89679ffc-ca2f-4c47-8d41-34a6efdf7bb8/CloudflareResponseHIT.png"
                                    });
    

    Rate Limits

    The Cloudflare API sets a maximum of 1,200 requests in a five-minute period. Cache-Tag purging has a lower rate limit of up to 2,000 purge API calls in every 24-hour period. You may purge up to 30 tags in one API call.
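
    Since a single purge request accepts no more than 30 URLs, a larger list can be split into batches before being handed to PurgeSelectedFiles(). A minimal sketch of how this could be done, using the CloudflareCacheHelper class above (requires System.Linq for Skip/Take):

    // Illustrative only: purges a large list of URLs in batches of 30
    // to stay within the limit of the /purge_cache endpoint.
    public static bool PurgeFilesInBatches(List<string> filePaths)
    {
        CloudflareCacheHelper cloudflareCache = new CloudflareCacheHelper();
        bool allSucceeded = true;

        for (int i = 0; i < filePaths.Count; i += 30)
        {
            List<string> batch = filePaths.Skip(i).Take(30).ToList();

            allSucceeded &= cloudflareCache.PurgeSelectedFiles(batch);
        }

        return allSucceeded;
    }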

  • A couple of days ago my website got absolutely hammered by a wave of constant SQL injection attacks from the same IP over a period of a couple of hours.

    I only noticed this whilst perusing the Event Log within the Kentico administration interface. I don't normally check my own error logs as regularly as I should, but since my site has recently gone through a bit of a revamp (which I'm still yet to post about), I wanted to ensure I hadn't broken anything.

    To be honest, I am flattered that someone would think this site is worth the time and energy of trying to hack. Trust me, it ain't worth it.

    Even though Kentico has handled these attacks well, as a precaution I wanted to implement an additional layer of security before any further untoward activity reaches my site. Being on a shared hosting platform, my options are limited and my hands are tied when it comes to putting in infrastructure that doesn't cost the world.

    Enter Cloudflare

    I have always had some form of awareness of the Cloudflare content delivery network, but I had never put the service into practice. At one point I was looking into utilising Cloudflare to manage all my site media files over their CDN. But Cloudflare isn't just a CDN; it's able to offer much more:

    • Analytics - monitor traffic as well as caching ratio and more!
    • Firewall - manage access by IP, country, or query rules.
    • Rate limiting - protect your site or API from malicious traffic by blocking client IP addresses that hit a URL pattern and exceed a threshold you define.
    • Page rules - allow caching to be triggered by a number of rules targeting specific areas of a site.

    The great thing is that these options are part of the free plan, even though there are restrictions on the number of settings you are able to put in place. The security options alone were enough of a reason to try out Cloudflare, and for my needs the free plan seemed to tick all the boxes. I am just scratching the surface of what Cloudflare has to offer and have already got a few of the features working alongside Kentico.

    How I'm Using Cloudflare With Kentico

    As security was my main concern, one of the first things I did was add some rules through the firewall to block suspicious traffic. Naturally, the next step was to take advantage of the CDN capabilities. I wouldn't recommend full site caching unless you have some pretty strict page rules in place, as this has a chance of causing issues with the Kentico admin interface. By default, Cloudflare doesn't cache HTML pages, which is a good thing as it gives us a plain canvas to target the areas we want to cache.

    Being on the free plan, I only had three page rules at my disposal and made the decision to cache the following parts of my site:

    | Area | Regex Rule | Settings |
    | ---- | ---------- | -------- |
    | Media Library Files (using Permanent URLs) | *surinderbhomra.com/getmedia/* | Cache Level: Cache Everything; Edge Cache TTL: a month; Browser Cache TTL: 16 days |
    | Site CSS, JavaScript and Images | *surinderbhomra.com/resources/* | Cache Level: Cache Everything; Edge Cache TTL: a month; Browser Cache TTL: 16 days |
    | ScriptResource.axd / WebResource.axd | *surinderbhomra.com/*.axd | Cache Level: Cache Everything; Edge Cache TTL: 7 days; Browser Cache TTL: 7 days |

    Two cache types are used:

    1. Edge Cache TTL - controls how long Cloudflare's edge servers will cache a resource before requesting a fresh copy from your server.
    2. Browser Cache TTL - the time for which Cloudflare instructs a visitor's browser to cache a resource. Until this time expires, the browser will load the resource from its local cache, speeding up the request.

    I am caching my assets for a considerable amount of time, which begs the question: how will changes to files purge the cache in Cloudflare? Luckily, Cloudflare has quite a nice API, which gives me the ability to purge everything or individual files (a maximum of 30 in one request).

    Purge Media Library File Cache

    I am in the middle of testing a GlobalEvent handler that will carry out the following steps when files are inserted or updated:

    • Get path of the file.
    • Check if the site is using Permanent URLs, as the URL to the file will be constructed differently if enabled.
    • Convert the relative path to an absolute path.
    • Pass the absolute file path to Cloudflare using the Purge Files by URL API endpoint.

    Once I have carried out some further tests, I will be posting this code in a follow-up post.

    Purge Site File Cache

    Attempting to purge the cache for site files such as JavaScript, CSS and images is a little more tricky, as I need to keep an eye on which files have changed. The easiest thing to do would be to write some code that iterates through all the files in the /resources folder and purges everything from the CDN. Not the most elegant solution, so I still need to ponder the correct implementation. If anyone has any better suggestions, let me know.

    How Do I Know Cloudflare Is Caching My Site Files?

    It does take a little time for Cloudflare to cache all the files on a site. It all depends on the pages being viewed, since a page needs to be loaded in order to submit its contents to the cache. On first load, the CF-Cache-Status response header will return "MISS", which means the content has not been served from Cloudflare.

    Cloudflare Response - MISS

    However, when you go back to the page and re-check the page headers, the CF-Cache-Status should return "HIT". If this is not the case, check your Page Rules within the Cloudflare dashboard.

    Cloudflare Response - HIT
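
    If you'd rather not keep checking the headers manually through the browser's developer tools, the same header can be read in code. A small sketch, where the URL is purely an example:

    // Requests a cached asset and writes out the CF-Cache-Status response header,
    // which reports HIT or MISS depending on whether Cloudflare served it from cache.
    HttpWebRequest request = WebRequest.CreateHttp("https://www.surinderbhomra.com/resources/css/site.css");

    using (HttpWebResponse response = (HttpWebResponse)request.GetResponse())
    {
        Console.WriteLine(response.Headers["CF-Cache-Status"]);
    }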

    Is Cloudflare Worth It?

    Quick answer - Yes!

    Setup is very straightforward. All that is required is a change to your domain to point your DNS to the nameservers Cloudflare assigns to you. There is no downtime in doing this. As a result, overall site performance has improved and page speed tests have fared much better.

    To give you better insight, and for transparency, here are some statistics straight from my Cloudflare portal over a 24-hour period:

    Requests Through Cloudflare
    (Understandably, the number of cached items served through Cloudflare is low due to only caching specific areas)

    Cloudflare Performance
    (My basic shared hosting should now be performing better as fewer requests are being served via the origin server)

    Cloudflare Detected Threats
    (Blocked threats - the reason why I decided to give Cloudflare a try in the first place)

    I still have to carry out a lot more research in using Cloudflare to its full potential and will use my website as a test bed to see what I can achieve. My end goal is to make my website quicker and more secure!

  • Autoplaying HTML5 Video In Chrome

    Whilst working on the new look for my website, I wanted to replace areas where I previously used low-grade animated GIFs with the more modern HTML5 video. Currently, the only place I use HTML5 video is on my 404 page, as a light-hearted reference to one of the many memorable quotes that only fans of the early Star Trek films will understand. These are the films I still hold in very high regard, something the recent "Kelvin timeline" films are missing. Anyway, back to the post in hand...

    Based on Chrome's new policies introduced in April 2018, I was always under the impression that as long as the video is muted, the autoplay functionality won't be hindered in any way. But for the life of me, my HTML5 video did not autoplay, even though everything worked as intended in other browsers such as Firefox.

    You can work around Chrome's restrictions through JavaScript.

    Code

    The HTML is as simple as adding your HTML5 video.

    <video id="my-video" autoplay muted loop playsinline>
         <source src="/enterprise-destruction.mp4" type="video/mp4" />
    </video>
    

    All we need to do is target our video and tell it to play automatically. I have added a timeout to the script just to ensure the video has enough time to render on the page before our script can do its thing.

    var myVideo = $("#my-video")[0]; // [0] returns the native DOM element, as muted/play are not available on the jQuery object
    
    setTimeout(function () {
        myVideo.muted = true;
        myVideo.play();
    }, 100);
    

    It's worth noting that I don't generally write much about front-end approaches (excluding JavaScript) as I am first and foremost a backend developer. So this might not be the most ideal solution, and I appreciate any feedback.

  • Kentico - Call 404 Page From Code

    There will be times when you want to direct a user to a 404 page based on certain conditions in your code. For example, when dealing with pages that use wildcard URLs, you might want to redirect the user to a 404 page if the value of that wildcard parameter returns no data.

    In my blog I have two wildcard parameters that allow the user to filter my posts by either category or tag. At code level, if no blog posts are returned based on the category or tag value, I have two choices:

    1. Display a "no results" message
    2. Redirect to a 404 page

    As you can tell by the title of this post, I wanted to go for the latter.

    From a Kentico perspective, wildcard parameters in a URL aren't what I'd call "proper" pages, and the CMS routing engine won't send you to a 404 page as you'd expect. So we need to carry out the redirect at code level ourselves based on the conditions we provide. As far as I'm aware, Kentico doesn't have a method in code to do this, so I settled for a workaround suggested by Sébastien Gumy in the following DevNet post.

    I made some minor changes to the code and placed it in a helper method for use throughout my project with ease:

    using CMS.Helpers;
    using CMS.PortalEngine;
    using CMS.URLRewritingEngine;
    
    namespace Site.Common.Kentico
    {
        public class PortalContextHelper
        {
            /// <summary>
            /// Redirect page to 404.
            /// </summary>
            public static void SendToPageNotFound()
            {
                PortalContext.Clear();
                CMSHttpContext.Current.Response.StatusCode = 404;
                URLRewriter.RewriteUrl(RequestStatusEnum.PageNotFound, string.Empty, ExcludedSystemEnum.Unknown);
            }
        }
    }
    

    The SendToPageNotFound() method can then be used in the following way:

    #region Get Querystring Parameters
    
    string tag = QueryHelper.GetString("Tag", string.Empty);
    string category = QueryHelper.GetString("Category", string.Empty);
    int pageNo = QueryHelper.GetInteger("PageNo", 1);
    
    #endregion
    
    int tagId = TagLogic.GetTagIdFromQuerystring(tag);
    
    // A custom method to get back blog posts based on parameters.
    BlogListing postData = BlogLogic.GetBlogPosts(CurrentDocument.NodeAliasPath, category, tagId, (pageNo - 1));
    
    if (postData.BlogPosts != null)
    {
        BlogListing.DataSource = postData.BlogPosts;
        BlogListing.DataBind();
    }
    else
    {
        // Send to 404 page.
        PortalContextHelper.SendToPageNotFound();
    }
    

    Please note: This has only been tested in Kentico 10.

  • The Journey To Kentico Cloud

    From working at Syndicut, I have had both the opportunity and the pleasure of working with many different platforms. The most exciting development for me over the years has been the shift in how content management systems are being decoupled from the very applications they push content to. I blogged about this many years ago when I first used Prismic, which at the time seemed the most viable option, even though there were pros and cons.

    I always felt the cons were more related to the restrictions of what the platform offered and not the architecture itself. I write my thoughts on the journey of how [at Syndicut] we've used headless CMSs in the past, through to now using Kentico Cloud. Kentico Cloud is indeed a very promising headless CMS platform. It hasn't been in the market that long compared to its competitors, but it embodies something more:

    • Proactive development team who take an active step towards bugs and improvements.
    • A wide variety of boilerplate templates to accommodate different code frameworks.
    • Boilerplate templates updated regularly.
    • A clear roadmap to the features developers can expect and release deadlines.
    • Accessible and quick to respond support team.

    Some highlights to take away from the post:

    The common misconception we get from clients is that, on the surface, a headless CMS can appear restrictive compared to platforms they have previously used. However, that couldn't be further from the truth. Once there is an understanding of how data can be given a hierarchy, categories, relationships and workflow, they seem to run with curating content fairly quickly.

    For agile projects where there is a need to manage content for multiple channels, or for creating tagged content hubs for digital marketing purposes, Kentico Cloud is the best option.

    Headless CMS is a ticket to freedom for those who wish to take it. Why waste time worrying about hardware infrastructure, security and platform updates when you can invest that time in purely building your application and content?

    As a business or developer, you might be hesitant to make the change. When I first read about decoupled architecture, I too had some hesitation, as a lot of faith is invested in the platform's scalability and features. But services like Kentico Cloud, which are pushing the boundaries with every release, are changing our perception for the better of what we should expect from a headless CMS.

    Take a read here: https://medium.com/syndicut/our-headless-cms-journey-to-kentico-cloud-b26c4eb39ed7

  • I wrote a post a couple of years ago regarding my observations on developing a Kentico site using MVC in version 9. Ever since Kentico 9, there has been a shift in how MVC applications are developed, which has pretty much stood the test of time, as we've seen in the releases since then. The biggest change is that the CMS itself is purely used to manage content, while the presentation layer is a separate MVC application connected through a Web Farm interface.

    The one thing I missed when working on sites in Kentico MVC is the ability to get values from the current document, as you could do in Kentico 8 MVC builds:

    public ActionResult Article()
    {
        TreeNode page = DocumentContext.CurrentDocument;
    
        // You might want to do something complex with the TreeNode here...
    
        return View(page);
    }
    

    In Kentico 11, the approach is to use wrapper classes generated using the Code Generator feature the Kentico platform offers from inside your page type. The Kentico documentation describes this approach quite aptly:

    The page type code generators allow you to generate both properties and providers for your custom page types. The generated properties represent the page type fields. You can use the providers to work with published or latest versions of a specific page type.

    You can then use these generated classes inside your controllers to retrieve page type data.
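
    As a rough illustration only (assuming a page type called Article and the Article class and ArticleProvider generated for it), retrieving data inside a controller could look something along these lines:

    public ActionResult Articles()
    {
        // Query published Article pages on the current site via the generated provider.
        List<Article> articles = ArticleProvider.GetArticles()
                                                .OnCurrentSite()
                                                .Published()
                                                .TopN(10)
                                                .ToList();

        return View(articles);
    }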

    Custom Route Constraint

    In order to take a similar approach to getting the current document, just like in Kentico 8, we'll need to modify our MVC project and add a custom route constraint called CmsUrlConstraint. The custom route constraint will call the DocumentHelper.GetDocument() method and return a TreeNode object based on the Node Alias path.

    CmsUrlConstraint

    Every page type your MVC website consists of will need to be listed in this route constraint, which will in turn direct the incoming request to a controller action and store the Kentico page information in the HttpContext if there is a match. To keep things simple, the route constraint contains the following pages:

    • Home
    • Blog
    • Blog Month
    • Blog Post
    public static class RouteConstraintExtension
    {
        /// <summary>
        /// Set a new route.
        /// </summary>
        /// <param name="values"></param>
        /// <param name="controller"></param>
        /// <param name="action"></param>
        /// <returns></returns>
        public static RouteValueDictionary SetRoute(this RouteValueDictionary values, string controller, string action)
        {
            values["controller"] = controller;
            values["action"] = action;
    
            return values;
        }
    
        #region CMS Url Contraint
    
        public class CmsUrlConstraint : IRouteConstraint
        {
            /// <summary>
            /// Check for a CMS page for the current route.
            /// </summary>
            /// <param name="httpContext"></param>
            /// <param name="route"></param>
            /// <param name="parameterName"></param>
            /// <param name="values"></param>
            /// <param name="routeDirection"></param>
            /// <returns></returns>
            public bool Match(HttpContextBase httpContext, Route route, string parameterName, RouteValueDictionary values, RouteDirection routeDirection)
            {
                string pageUrl = values[parameterName] == null ? "/Home" : $"/{values[parameterName].ToString()}";
    
                // Check if the page is being viewed in preview.
                bool previewEnabled = HttpContext.Current.Kentico().Preview().Enabled;
    
                // Ignore the site resource directory containing Image, CSS and JS assets to save call to Kentico API.
                if (pageUrl.StartsWith("/resources"))
                    return false;
    
                // Get page from Kentico by alias path in its published or unpublished state.
                // PageLogic.GetByNodeAliasPath() method carries out the document lookup by Node Alias Path.
                TreeNode page = PageLogic.GetByNodeAliasPath(pageUrl, previewEnabled);
    
                if (page != null)
                {
                    // Store current page in HttpContext.
                    httpContext.Items["CmsPage"] = page;
    
                    #region Map MVC Routes
    
                    // Set the routing depending on the page type.
                    if (page.ClassName == "CMS.Home")
                        values.SetRoute("Home", "Index");
    
                    if (page.ClassName == "CMS.Blog" ||  page.ClassName == "CMS.BlogMonth")
                        values.SetRoute("Blog", "Index");
    
                    if (page.ClassName == "CMS.BlogPost")
                        values.SetRoute("Blog", "Post");
    
                    #endregion
    
                    if (values["controller"].ToString() != "Page")
                        return true;
                }
    
                return false;
            }
        }
    
        #endregion
    }
    

    To ensure page data is returned from Kentico in an optimum way, I have a PageLogic.GetByNodeAliasPath() method that ensures cache dependencies are used if the page is not in preview mode.
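
    The PageLogic.GetByNodeAliasPath() implementation isn't included in this post, but a minimal sketch of the idea (look the page up by alias path on the current site, bypass caching in preview mode, otherwise cache the published version against the node) might look like this, using CMS.DocumentEngine, CMS.Helpers and System.Linq:

    public class PageLogic
    {
        public static TreeNode GetByNodeAliasPath(string nodeAliasPath, bool previewEnabled)
        {
            // In preview mode, always return the latest (possibly unpublished) version uncached.
            if (previewEnabled)
            {
                return DocumentHelper.GetDocuments()
                                     .Path(nodeAliasPath)
                                     .OnCurrentSite()
                                     .LatestVersion()
                                     .TopN(1)
                                     .FirstOrDefault();
            }

            // Otherwise cache the published page with a dependency on the node,
            // so the cached entry clears automatically when the page is updated.
            return CacheHelper.Cache(cacheSettings =>
            {
                TreeNode page = DocumentHelper.GetDocuments()
                                              .Path(nodeAliasPath)
                                              .OnCurrentSite()
                                              .Published()
                                              .TopN(1)
                                              .FirstOrDefault();

                if (cacheSettings.Cached && page != null)
                    cacheSettings.CacheDependency = CacheHelper.GetCacheDependency($"nodeid|{page.NodeID}");

                return page;
            },
            new CacheSettings(60, "PageLogic|GetByNodeAliasPath", nodeAliasPath));
        }
    }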

    Apply Route Constraint To RouteConfig

    public class RouteConfig
    {
        public static void RegisterRoutes(RouteCollection routes)
        {
            ...
    
            // Maps routes to Kentico HTTP handlers.
            // Must be first, since some Kentico URLs may be matched by the default ASP.NET MVC routes,
            // which can result in pages being displayed without images.
            routes.Kentico().MapRoutes();
    
            // Custom MVC constraint validation to check for a CMS template, otherwise fallback to default MVC routing.
            routes.MapRoute(
                name: "CmsRoute",
                url: "{*url}",
                defaults: new { controller = "HttpErrors", action = "NotFound" },
                constraints: new { url = new CmsUrlConstraint() }
            );
    
            ...
        }
    }
    

    Usage In Controller

    Now that we have created our route constraint and applied it to our RouteConfig, we can now enjoy the fruits of our labour by getting back the document TreeNode from HttpContext. The code sample below demonstrates getting some values for our Home controller.

    public class HomeController : Controller
    {
        public ActionResult Index()
        {
            TreeNode currentPage = HttpContext.Items["CmsPage"] as TreeNode;
    
            if (currentPage != null)
            {
                HomeViewModel homeModel = new HomeViewModel
                {
                    Title = currentPage.GetStringValue("Title", string.Empty),
                    Introduction = currentPage.GetStringValue("Introduction", string.Empty)
                };
    
                return View(homeModel);
            }
    
            return HttpNotFound();
        }
    }
    

    Conclusion

    There is no right or wrong in terms of the approach you as a Kentico developer use when getting data out from your page types. Depending on the scale of the Kentico site I am working on, I interchange between the approach I detail in this post and Kentico's documented approach.

  • ASP.NET Core - Get Page Title By URL

    To make it easy for a client to add related links to pages like a blog post or article, I like implementing some form of automation so there is one less thing to content manage. For a Kentico Cloud project, I took this very approach. I created a UrlHelper class that will carry out the following:

    • Take in an absolute URL.
    • Read the markup of the page.
    • Select the title tag using Regex.
    • Remove the site name prefix from the title text.
    using Microsoft.Extensions.Caching.Memory;
    using MyProject.Models.Site;
    using System;
    using System.IO;
    using System.Linq;
    using System.Net;
    using System.Text.RegularExpressions;
    
    namespace MyProject.Helpers
    {
        public class UrlHelper
        {
            private static IMemoryCache _cache;
    
            public UrlHelper(IMemoryCache memCache)
            {
                _cache = memCache;
            }
    
            /// <summary>
            /// Returns the title and URL of the link directly from a page.
            /// </summary>
            /// <param name="url"></param>
            /// <returns></returns>
            public PageLink GetPageTitleFromUrl(string url)
            {
                if (!string.IsNullOrEmpty(url))
                {
                    if (_cache.TryGetValue(url, out PageLink page))
                    {
                        return page;
                    }
                    else
                    {
                        using (WebClient client = new WebClient())
                        {
                            try
                            {
                                Stream stream = client.OpenRead(url);
                                StreamReader streamReader = new StreamReader(stream, System.Text.Encoding.GetEncoding("UTF-8"));
    
                                // Get contents of the page.
                                string pageHtml = streamReader.ReadToEnd();
    
                                if (!string.IsNullOrEmpty(pageHtml))
                                {
                                    // Get the title.
                                    string title = Regex.Match(pageHtml, @"\<title\b[^>]*\>\s*(?<Title>[\s\S]*?)\</title\>", RegexOptions.IgnoreCase).Groups["Title"].Value;
    
                                    if (!string.IsNullOrEmpty(title))
                                    {
                                        if (title.Contains("|"))
                                            title = title.Split("|").First();
                                        else if (title.Contains(":"))
                                            title = title.Split(":").First();
    
                                        PageLink pageLink = new PageLink
                                        {
                                            PageName = title,
                                            PageUrl = url
                                        };
    
                                        _cache.Set(url, pageLink, DateTimeOffset.Now.AddHours(12));
    
                                        page = pageLink;
                                    }
                                }
    
                                // Cleanup.
                                stream.Flush();
                                stream.Close();
                                client.Dispose();
                            }
                            catch (WebException)
                            {
                                // Rethrow, preserving the original stack trace.
                                throw;
                            }
                        }
                    }
    
                    return page;
                }
                else
                {
                    return null;
                }
            }
        }
    }
    

    The method returns a PageLink object:

    namespace MyProject.Models.Site
    {
        public class PageLink
        {
            public string PageName { get; set; }
            public string PageUrl { get; set; }
        }
    }
    

    From an efficiency standpoint, I cache the result for 12 hours, as going through the process of reading the markup of a page can be quite expensive if there is a lot of HTML.
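
    A quick usage sketch, assuming IMemoryCache has been registered in Startup via services.AddMemoryCache() and that the controller name is purely illustrative:

    using Microsoft.AspNetCore.Mvc;
    using Microsoft.Extensions.Caching.Memory;
    using MyProject.Helpers;
    using MyProject.Models.Site;

    public class RelatedLinksController : Controller
    {
        private readonly UrlHelper _urlHelper;

        public RelatedLinksController(IMemoryCache memoryCache)
        {
            // UrlHelper caches each resolved page title for 12 hours.
            _urlHelper = new UrlHelper(memoryCache);
        }

        public IActionResult Index()
        {
            PageLink relatedLink = _urlHelper.GetPageTitleFromUrl("https://www.surinderbhomra.com/");

            return View(relatedLink);
        }
    }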