Blog

Blogging on programming and life in general.

  • I had around 2000 webpage URLs listed in a text file that needed to be turned into a simple Google sitemap.

    I decided to create a quick Google Sitemap generator console application fit for purpose. The program iterates through each line of a text file and writes each URL out through an XmlTextWriter to produce the required XML format.

    Feel free to copy and make modifications to the code below.

    Code:

    using System;
    using System.Collections.Generic;
    using System.Linq;
    using System.Text;
    using System.IO;
    using System.Xml;
    
    namespace GoogleSitemapGenerator
    {
        class Program
        {
            static void Main(string[] args)
            {
                string textFileLocation = String.Empty;
    
                if (args != null && args.Length > 0)
                {
                    textFileLocation = args[0];
                }
    
                if (!String.IsNullOrEmpty(textFileLocation))
                {
                    string fullSitemapPath = String.Format("{0}sitemap.xml", GetCurrentFileDirectory(textFileLocation));
    
                    //Read the text file and write out the sitemap, disposing of the reader and writer when done
                    using (StreamReader sr = File.OpenText(textFileLocation))
                    using (XmlTextWriter xmlWriter = new XmlTextWriter(fullSitemapPath, Encoding.UTF8))
                    {
                        xmlWriter.WriteStartDocument();
                        xmlWriter.WriteStartElement("urlset");
                        xmlWriter.WriteAttributeString("xmlns", "http://www.sitemaps.org/schemas/sitemap/0.9");
    
                        while (!sr.EndOfStream)
                        {
                            string currentLine = sr.ReadLine();
    
                            if (!String.IsNullOrEmpty(currentLine))
                            {
                                xmlWriter.WriteStartElement("url");
                                xmlWriter.WriteElementString("loc", currentLine);
                                xmlWriter.WriteElementString("lastmod", DateTime.Now.ToString("yyyy-MM-dd"));
                                //xmlWriter.WriteElementString("changefreq", "weekly");
                                //xmlWriter.WriteElementString("priority", "1.0");
    
                                xmlWriter.WriteEndElement();
                            }
                        }
    
                        xmlWriter.WriteEndElement();
                        xmlWriter.WriteEndDocument();
                        xmlWriter.Flush();
    
                        if (File.Exists(fullSitemapPath))
                            Console.Write("Sitemap successfully created at: {0}", fullSitemapPath);
                        else
                            Console.Write("Sitemap has not been generated. Please check your text file for any problems.");
    
                    }
                }
                else
                {
                    Console.Write("Please enter the full path to where the text file is situated.");
                }
            }
    
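        // Returns the directory portion of the supplied file path, including the trailing backslash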
            static string GetCurrentFileDirectory(string path)
            {
                string[] pathArr = path.Split('\\');
    
                string newPath = String.Empty;
    
                for (int i = 0; i < pathArr.Length - 1; i++)
                {
                    newPath += pathArr[i] + "\\";
                }
    
                return newPath;
            }
        }
    }
    

    I will be uploading the console application project, including the executable, shortly.
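
    In the meantime, here’s a quick illustration of how it works: pass the full path of the text file as the first argument when running the executable (e.g. GoogleSitemapGenerator.exe C:\urls.txt, where both the executable name and path are purely examples) and a sitemap.xml will be written to the same folder. Given a text file containing one URL per line, the output should look roughly like the following (indented here for readability; XmlTextWriter writes it without extra whitespace unless Formatting.Indented is set, and the lastmod dates will be whatever day the tool is run):

    <?xml version="1.0" encoding="utf-8"?>
    <urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
      <url>
        <loc>http://www.example.com/page-one</loc>
        <lastmod>2012-08-01</lastmod>
      </url>
      <url>
        <loc>http://www.example.com/page-two</loc>
        <lastmod>2012-08-01</lastmod>
      </url>
    </urlset>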

  • I came across this really interesting tweet on my Twitter timeline today:

    Read about why we’re deleting our Facebook page: facebook.com/limitedpressin… — Limited Run (@limitedrun) July 30, 2012

    Limited Run posted on their Facebook profile stating that they would be deleting their account due to the amount Facebook was charging for clicks on their advertising. Here’s the interesting part: for around 80% of the clicks Facebook charged Limited Run for, JavaScript wasn’t enabled. And if the person clicking the ad doesn’t have JavaScript, it’s very difficult for an analytics service to verify the click. Only 1-2% of people going to their site have JavaScript disabled, nowhere near the 80% suggested by the clicks coming from Facebook.

    Interesting stuff.

    Before Limited Run takes down their Facebook profile, I’ve attached a screenshot of their post below:

    Limited Pressing Facebook Post

    Reading this post today reminded me of a news article I read on “virtual likes” and how advertising through Facebook doesn’t necessarily mean you’ll be any better off. It all comes down to the level of engagement users have with a profile page. If users are just liking the page and not interacting with your posts or general content, those likes are worth nothing. Some companies are wising up to how effective Facebook’s advertising strategy really is.

    Limited Run isn’t the first to ditch Facebook ads. General Motors pulled away from Facebook advertising earlier this year because the ads Facebook produces do not have the visual impact needed to justify the cost.

    I think certain aspects of Facebook are a joke, filled mostly with people looking for attention; it is not an effective marketing tool.

  • Facebook Connect

    If I need to log in and authenticate a Facebook user on my ASP.NET website, I either use Facebook Connect's JavaScript library or SocialAuth.NET. Even though these two methods are sufficient for the purpose, I don't think they are the most ideal or efficient way.

    The Facebook Connect JavaScript library is quite basic and doesn't have the flexibility required for full .NET integration through FormsAuthentication, whereas SocialAuth.NET provides full .NET integration, with all authentication done server-side and minimal development required.

    I'd say if you are looking for a straight-forward way to integrate social site authentication, SocialAuth.NET is the way to go. Its API can communicate with other social sites such as Twitter, LinkedIn and Gmail.

    Recently, I found a better and more efficient way to authenticate Facebook users on my site using Graph API and Hammock.

    Hammock is a REST library for C#/.NET that greatly simplifies consuming and wrapping RESTful services. This allows us to embrace the social site's core technology instead of relying on varied SDKs or APIs. There are many community-driven frameworks and APIs readily available on the Internet, but they can really cause problems if they evolve too quickly or haven't been thoroughly tested.

    Suddenelfilio has written a useful blog post on connecting to Facebook using Hammock. You will see from his example that you can interact with Facebook any way you want.

    The same principle could also be applied to other website APIs that expose REST-based services, such as Twitter.
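
    To give a flavour of what that looks like, here is a rough sketch of calling the Graph API through Hammock. The type and member names (RestClient, RestRequest, Authority, Path, Request) are based on my own recollection of the library, and the access token is a placeholder you would obtain through Facebook's OAuth flow, so treat it as a starting point rather than finished code:

    using System;
    using Hammock;
    
    class FacebookGraphSketch
    {
        static void Main()
        {
            // Point Hammock's REST client at the Graph API
            var client = new RestClient
            {
                Authority = "https://graph.facebook.com"
            };
    
            // Request the current user's profile. "ACCESS_TOKEN" is a placeholder for a
            // token obtained via Facebook's OAuth 2.0 flow.
            var request = new RestRequest
            {
                Path = "me"
            };
            request.AddParameter("access_token", "ACCESS_TOKEN");
    
            var response = client.Request(request);
    
            // The raw JSON describing the authenticated user
            Console.WriteLine(response.Content);
        }
    }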

  • I always found writing code to read an RSS feed within my .NET application very time-consuming and long-winded. My RSS code was always a combination of using WebRequest, WebResponse, Stream, XmlDocument, XmlNodeList and XmlNode. That’s a lot of classes just to read an RSS feed.

    Yesterday, I stumbled on an interesting piece of code on my favourite programming site StackOverflow.com, where someone asked how to parse an RSS feed in ASP.NET. The answer was surprisingly simple. RSS feeds can now be consumed using the System.ServiceModel.Syndication namespace in .NET 3.5 SP1. All you need is two lines of code:

    var reader = XmlReader.Create("http://mysite.com/feeds/serializedFeed.xml");
    var feed = SyndicationFeed.Load(reader);
    

    Here’s a full example of how we can iterate through the items in the SyndicationFeed class:

    public static List<BlogPost> Get(string rssFeedUrl)
    {
        var reader = XmlReader.Create(rssFeedUrl);
        var feed = SyndicationFeed.Load(reader);
    
        List<BlogPost> postList = new List<BlogPost>();
    
        //Loop through all items in the SyndicationFeed
        foreach (var i in feed.Items)
        {
            BlogPost bp = new BlogPost();
            bp.Title = i.Title.Text;
            bp.Body = i.Summary.Text;
            bp.Url = i.Links[0].Uri.OriginalString;
            postList.Add(bp);
        }
    
        return postList;
    }
    

    That’s too simple, especially when compared to the 70 lines of code I normally use to do the exact same thing.
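
    For completeness, the Get method above needs the System.ServiceModel.Syndication and System.Xml namespaces, plus a BlogPost class to hold each item. The shape below is just my own minimal assumption of what BlogPost might look like - adjust it to whatever model you already use:

    using System.Collections.Generic;
    using System.ServiceModel.Syndication; // Available from .NET 3.5 SP1
    using System.Xml;
    
    // A minimal POCO to hold the details of each feed item
    public class BlogPost
    {
        public string Title { get; set; }
        public string Body { get; set; }
        public string Url { get; set; }
    }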

  • Location HTTP

    Ever since I decided to expand my online presence, I thought the best step would be to have a better domain name. My current domain name is around twenty-nine characters in length. Ouch! So I was determined to find another name that was shorter and easier to remember.

    When the “.me” top level domain (TLD) came out, I snapped up “surinder.me”, partly because all other domains with my first name were gone (you know who you are!) and partly because the “.me” extension seemed to fulfil what I wanted my website to focus on. ME! Having said that, I would have loved to get a “.com” domain, but I guess that’s what happens when you enter the online world so late.

    I was ready to move all my content over to “surinder.me” until one of my techy friends told me that things are still undecided when it comes to “.me” TLDs in general. Originally, the “.me” extension was assigned to Montenegro’s locale only, but it has fast gained traction over the years due to its simplicity and the wide range of possible domain names. Even companies such as Microsoft, Facebook, Wordpress and Samsung rushed to register their “.me” domains. Hence the reason I decided to get one.

    Companies seem to be using “.me” extensions for either URL shortening services or redirects to partner sites with “.com” extensions. It doesn’t fill me with much confidence when “.me” extensions are used this way. Google’s software engineer, Matt Cutts, wrote a reassuring post on his Google+ profile earlier this year, stating:

    “…regardless of the top-level domain (TLD). Google will attempt to rank new TLDs appropriately, but I don't expect a new TLD to get any kind of initial preference over .com…If you want to register an entirely new TLD for other reasons, that's your choice, but you shouldn't register a TLD in the mistaken belief that you'll get some sort of boost in search engine rankings.”

    This should put all my “.me” fears to rest…right? Well, it’s nice to know Google won’t penalise a site based on its extension. In the world of the web, a search-optimised site is king (as it should be). It’s also nice that Google have given “.me” (as a country extension) global status given the nature of how it’s been used of late. But if you check Google’s Geotargetable Domains article, the text in brackets worries me.

    Google’s Webmaster Tools Geotargetable Domains

    I get the feeling you can’t go wrong with a “.com” domain, providing you can find something meaningful to your cause. Steps are being made in the right direction for gccTLDs. For example, Webmaster Tools gives you the option to geographically target your “.me” site. However, I can’t find anything concrete to alleviate my concerns in the long run.

    So where does this leave me? Well, we’ll just have to find out if my future domain contains a .me extension.

  • The Ridiculous Price of A Domain

    I’ve been looking for a suitable replacement domain name for a while now and have even made purchases of names that have some reference to my own. Since I’m not having much luck with new registrations, I decided to snoop around for domains that are up for sale. Lo and behold, I found my ideal second-hand “.com” domain: surinder.com. However, there’s a catch…

    Currently, the sale price of “surinder.com” is £5,000! Whaaaa!!!!!?????

    Ridiculous price for surinder.com

    I know Surinder is a really cool name and downright popular with the ladies, but seriously, £5,000. Even I wouldn’t have the audacity to sell my domain for that much (offers will be accepted though :-) ).

    When reading numerous articles on how domain names are valued, the figure seems to revolve around the sum of the domain’s generic value and the value of its traffic. So it’s not exactly clear-cut. I highly recommend reading this post on “How To Value a Domain Name”; it has some really useful information.

  • HTTrack - Website Copier

    One of my colleagues pointed me to a really useful tool called HTTrack, which can download a website from the internet to a local directory by simply copying and pasting the site URL of your choice.

    I found this tool helpful when working on an existing site where I was unable to 100% recreate the files and directory structure within my local development environment. For example, you could be developing a new micro-site for an existing online site you may not currently have in your local environment.

    HTTrack recursively rebuilds all directories, images and other files from the server on your computer. What’s even better, HTTrack preserves the original site’s relative link structure.

  • Microsoft’s command-line shell and scripting language, PowerShell, has been out for quite a few years now and I thought today would be the day I started using it. I needed to write a script that would move n number of files from one directory to another. This job seemed a perfect fit for PowerShell.

    #Get 'n' number of files
    $FileLimit = 10 
    
    #Destination for files
    $DropDirectory = "C:\Drop\"
    
    $PickupDirectory = Get-ChildItem -Path "C:\Pickup\"
    
    $Counter = 0
    foreach ($file in $PickupDirectory)
    {
        if ($Counter -ne $FileLimit)
        {
            $Destination = $DropDirectory+$file.Name
    
            Write-Host $file.FullName #Output file fullname to screen
            Write-Host $Destination   #Output Full Destination path to screen
            
            Move-Item $file.FullName -destination $Destination
            $Counter++
        }  
    }
    

    From the get-go, I was really impressed with the flexibility of the scripting language. This is where the standard command line falls short: it is sufficient for simple tasks, but not so much for complex jobs.

    As you can see from the code above, I can implement complex operations with support for variables, conditional statements, loops (while, do, for and foreach), and that’s just the start. I don’t know why I hadn’t used PowerShell sooner. If I didn’t have the option to use PowerShell, I would have probably created a C# service or executable to do the exact same thing. Time saver!
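
    For comparison, here’s roughly what that C# equivalent might look like (the paths and file limit are illustrative, mirroring the PowerShell script above):

    using System;
    using System.IO;
    using System.Linq;
    
    class MoveFiles
    {
        static void Main()
        {
            const int fileLimit = 10;                      // Number of files to move
            const string pickupDirectory = @"C:\Pickup\";  // Source path (illustrative)
            const string dropDirectory = @"C:\Drop\";      // Destination path (illustrative)
    
            // Take the first 'fileLimit' files from the pickup directory and move them
            foreach (string file in Directory.GetFiles(pickupDirectory).Take(fileLimit))
            {
                string destination = Path.Combine(dropDirectory, Path.GetFileName(file));
    
                Console.WriteLine("{0} -> {1}", file, destination);
                File.Move(file, destination);
            }
        }
    }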

    Since PowerShell is built on the .NET Framework, Windows PowerShell helps control and automate the administration of the operating system and applications that run on Windows. So if you are a C# programmer, you should feel comfortable in writing PowerShell scripts. All you need to be aware of is syntax differences when declaring variables and keywords.

    To end with, I will quote an amusing forum post I found when researching the difference between good ol’ Command Prompt and PowerShell:

    “PowerShell has a default blue background and Command Prompt has a default black background.”

  • Google has always impressed me with the quality of their API libraries, allowing us to interface with their products in a somewhat straight-forward manner. In the past, I’ve used a couple of Google’s APIs for embedding YouTube videos or a Google Checkout merchant within my own sites. What makes life even easier is that the APIs are available in my native programming framework: .NET.

    Google were quite slow in launching an official API upon Google Plus’s initial release, and even though unofficial APIs were available, I thought it would be best to wait until an official release was made. I’ve been playing around with Google’s .NET API for a couple of weeks now and have only just had the time to blog about it.

    I am hoping to make this beginners guide a three part series:

    1. Profile Data
    2. User Posts
    3. User’s +1’s

    So let’s get to it!

    Today, I shall be showing you the basic API principles to get you started in retrieving data from your own Google+ profile.

    Prerequisites

    Before we can start thinking about coding our page to retrieve profile information, it’s a requirement to register your application by going to: https://code.google.com/apis/console. Providing you already have an account with Google (and who hasn’t?), this shouldn’t be a problem. If you don’t see the page (below), a new API Project needs to be created.

    Google Plus API - Console

    Only the Client ID, Client Secret and API Key will be used in our code, allowing us to carry out API requests from our custom application.

    Next, download the Google Plus .NET Client. My own preference is to use the Binary release containing the compiled .NET Google Client API and the DLLs for all supported services.

    Building A Custom Profile Page

    1. Create a new Visual Studio Web Application.

    2. Unzip the Binary Zip file containing all Google service DLLs. Find and reference the following DLLs in your project:

    • Google.Apis.dll
    • Google.Apis.Authentication.OAuth2.dll
    • Google.Apis.Plus.v1.dll
    3. Copy and paste the following front-end HTML code:
    <h2>
        About Me
    </h2>
    <br />
    <table>
        <tr>
            <td valign="top">
                <asp:Image ID="ProfileImage" runat="server"></asp:Image>
            </td>
            <td valign="top">
                <strong>Name:</strong> <asp:Label ID="DisplayName" runat="server"></asp:Label>
                <br /><br />
                <strong>About Me:</strong> <asp:Label ID="AboutMe" runat="server"></asp:Label>
                <br />
                <strong>Gender:</strong> <asp:Label ID="Gender" runat="server"></asp:Label>
                <br /><br />
                <strong>Education/Employment:</strong> <asp:Literal ID="Work" runat="server"></asp:Literal>
            </td>
        </tr>
        <tr>
            <td colspan="2" valign="middle">
                <asp:HyperLink ID="GotoProfileButton" runat="server">Go to my Google+ profile</asp:HyperLink>
            </td>
        </tr>
    </table>
    
    4. Copy and paste the following C# code:
    using System;
    using System.Collections.Generic;
    using System.Linq;
    using System.Web;
    using System.Web.UI;
    using System.Web.UI.WebControls;
    using System.Text;
    using Google.Apis.Authentication.OAuth2.DotNetOpenAuth;
    using Google.Apis.Authentication.OAuth2;
    using Google.Apis.Plus.v1;
    using Google.Apis.Plus.v1.Data;
    
    namespace GooglePlusAPITest
    {
        public partial class About : System.Web.UI.Page
        {
            private string ProfileID = "100405991313749888253"; // My public Profile ID
            private string GoogleIdentifier = "<GoogleIdentifier>";
            private string GoogleSecret = "<GoogleSecret>";
            private string GoogleKey = "<GoogleKey>";
    
            protected void Page_Load(object sender, EventArgs e)
            {
                if (!Page.IsPostBack)
                    GetGooglePlusProfile();
            }
    
            private void GetGooglePlusProfile()
            {
                var provider = new NativeApplicationClient(GoogleAuthenticationServer.Description);
                provider.ClientIdentifier = GoogleIdentifier;
                provider.ClientSecret = GoogleSecret;
    
                var service = new PlusService();
                service.Key = GoogleKey;
    
                var profile = service.People.Get(ProfileID).Fetch();
    
                // Profile Name
                DisplayName.Text = profile.DisplayName;
    
                //About me
                AboutMe.Text = profile.AboutMe;
    
                //Gender
                Gender.Text = profile.Gender;
    
                // Profile Image
                ProfileImage.ImageUrl = profile.Image.Url;
    
                // Education/Employment
                StringBuilder workHTML = new StringBuilder();
    
                workHTML.Append("<ul>");
    
                foreach (Person.OrganizationsData work in profile.Organizations.ToList())
                {
                    workHTML.AppendFormat("<li>{0} ({1})</li>", work.Title, work.Name);
                }
    
                workHTML.Append("</ul>");
    
                Work.Text = workHTML.ToString();
    
                //Link to Google+ profile
                GotoProfileButton.NavigateUrl = profile.Url;            
            }
        }
    }
    

    Once completed, the page should resemble something like this:

    Google Plus Profile Page

    I think you can all agree this example was pretty straight-forward. We are simply using the people.get method which translates into the following HTTP request:

    https://www.googleapis.com/plus/v1/people/100405991313749888253?key=APIKey
    

    Unless you really want to display my profile information on your site (who wouldn’t!), you can keep the code as it is. But you have the flexibility to change the “ProfileID” variable to an ID of your own choice. To find your Profile ID, read: How Do I Find My Google Plus User ID?.

  • Android Rooted

    …if you want a true Android experience.

    When thinking of all the Android smartphones I’ve purchased in the past, they all inherit a common bad trait: each user interface is different, causing a somewhat inconsistent experience when moving between Android handsets. This isn’t an issue for the bog-standard phone user. However, when a phone is promoted as running Android, you expect there to be no difference.

    Currently, we have the following variations of Android:

    • SenseUI
    • TouchWiz
    • MotoBlur
    • LG TouchMax

    We can all agree the first version of Android was not exactly a pretty sight. Thus, phone manufacturers took it upon themselves from day one to push Android to the limit. In most cases they got it right but there were others who fell short.

    Android 4.0 was supposed to be a turning point and instil the original vision Google intended for their operating system: great design, functionality and innovation. Even though Android 4.0 is widely available, it will sadly never see the light of day. Well, that is, in its true form. A good example of this is the recent Android 4.0 update released for Samsung Galaxy S2 owners. You’d be forgiven for not noticing any change after upgrading due to the TouchWiz interface.

    The only option is to root. This is the “root” I plan on taking in the future (you see what I did there!). Android has a great modding community who, like me, care what version of Android runs on their phone. Generally, you will find that an unadulterated version of Android runs far better than its custom-skinned counterparts. It just seems a shame the lengths we have to go to in order to get the version of Android of our choosing.

    So it looks like custom UIs are here to stay. Obviously handset makers think they can offer something Android cannot, and this will not be changing anytime soon. To me, simply using a stock version of Android seems like a win-win situation for both the end user and handset makers. Handset makers could spend less time porting upcoming versions and more time and money innovating their hardware. Just leave software development to the professionals.

    For a platform that has so much potential, this has to be the main sticking point. Phil Nickinson’s article, “Dear Molly Rants: Let's talk about Android 'fragmentation' ..." sums it all up quite nicely:

    Google is constantly updating the Android code. Anyone can go get it. Problem is the smartphone…Android itself is not the problem here. It can, however, be the solution.