In this short introduction to C#, I tried to do something different by writing a program that scrapes a website to report whether it is showing content that we want to see today. Scraping means having an automated program retrieve a website's content in order to extract some useful information from it. The example that we will build checks whether our favourite website (e.g. gametrailers.com) is posting something about our favourite game (e.g. Final Fantasy) and any of its modern versions and updates. If so, we can then visit the Gametrailers website safe in the knowledge that we will see something about Final Fantasy.

First, open Visual Studio and create a new console application. I named mine keywordCheck, but you are free to choose your own name.

This will create a standard program class containing a Main method that will be executed every time that we run the program. It is currently empty, so let us fix that.
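If you have never seen it before, the generated file looks roughly like this (the exact template varies slightly between Visual Studio versions, and I have trimmed the default using directives):

using System;

namespace keywordCheck
{
    class Program
    {
        // This method runs every time the program is started.
        static void Main(string[] args)
        {
        }
    }
}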

Since we will be using the system's web client library to connect to and fetch the required page, let us add a reference to that at the top of our class:

using System.Net;

Now let us try to fetch the page that we require, using the following code:

static void Main(string[] args)
{
    var client = new WebClient();
    var url = "http://www.gametrailers.com";
    Console.Write(client.DownloadString(url));
    Console.ReadKey();
}

Here, we first initialise a new instance of a web client and assign it to the client variable. Then we set the url variable to the address of the page we want, and finally we instruct the client to fetch that url for us and output the page’s HTML to the console. When we run the program, we can confirm that we are indeed fetching the page.

That’s great, but we’re still not quite there yet. Let us add a new variable to hold the keywords. Then we can check whether these keywords appear in the downloaded page. If the website includes the text that we are looking for, we display a confirmation message:

var client = new WebClient();
var url = "http://www.gametrailers.com";
var keywords = "final fantasy";
var pageContent = client.DownloadString(url);
if (pageContent.IndexOf(keywords, StringComparison.OrdinalIgnoreCase) >= 0)
{
    Console.WriteLine(url + " are talking about " + keywords + " today.");
}
Console.ReadKey();

The IndexOf method returns the zero-based position in the page where the keywords were found, or -1 if they are not there. We also pass StringComparison.OrdinalIgnoreCase so that the comparison ignores case and we still find the keywords even if they appear with different capitalisation. The if statement then displays a message whenever IndexOf returns zero or greater.
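As a quick standalone illustration of these return values (the strings below are made up for the example):

var text = "Hello World";
// Found at zero-based position 6, even though the case differs.
Console.WriteLine(text.IndexOf("world", StringComparison.OrdinalIgnoreCase)); // prints 6
// Not found at all, so -1 is returned.
Console.WriteLine(text.IndexOf("goodbye", StringComparison.OrdinalIgnoreCase)); // prints -1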

To finish off this tutorial, we will also display a snippet of the text where the keywords are included in the fetched website. Nothing big and fancy, but it will give us a general idea of what the page’s content is.

static void Main(string[] args)
{
    var client = new WebClient();
    var url = "http://www.gametrailers.com";
    var keywords = "final fantasy";
    var pageContent = client.DownloadString(url);
    // Store the position of the keywords (-1 if they are not found).
    var keywordLocation = pageContent.IndexOf(keywords, StringComparison.OrdinalIgnoreCase);
    if (keywordLocation >= 0)
    {
        Console.WriteLine(url + " are talking about " + keywords + " today.");
        // Show the 100 characters starting where the keywords were found.
        Console.WriteLine("\nSnippet:\n" + pageContent.Substring(keywordLocation, 100));
    }
    Console.ReadKey();
}

And here is the result:

Next time we will see how to improve upon this code, for example by adding command line parameters or a GUI. A full version of the code is also available on github here, and part 2 of the tutorial here.
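As a small taste of the command line idea, here is a minimal sketch of how the url and keywords could be read from the program's arguments instead of being hard coded (this is my own rough take, not the code from part 2):

// Rough sketch: "keywordCheck.exe <url> <keywords>" overrides the built-in defaults.
var url = args.Length > 0 ? args[0] : "http://www.gametrailers.com";
var keywords = args.Length > 1 ? args[1] : "final fantasy";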

After the recent release of Internet Explorer 11, you may have noticed that you cannot log in to your ASP.Net website with this browser if you are using forms authentication with cookies.

You may also have noticed that the session id is being stored in the url (some additional characters are being added to the site’s url seemingly out of nowhere), instead of a cookie when browsing the site only with IE11.

This happens because the cookieless parameter is not specified explicitly, so it defaults to UseDeviceProfile, which relies on browser detection; older browser definitions do not recognise IE11's new user agent string and so treat it as a browser that does not support cookies. To solve this issue, the parameter has to be forced so that all browsers use cookies to store the session id. Here is an example of the required change:

<authentication mode="Forms">
    <forms cookieless="UseCookies" loginUrl="Account/Login" timeout="2880" />
</authentication>

If your web hosting company provides web servers that support only MVC3 (cough, godaddy), and you are dying to use bundling and minification, well here is a solution. This is a simple addition that will make your website work a bit faster. Bundling is the process of joining a number of files into a single one, so that, for example, the browser can get the site's css in a single request and use the download slots that would have gone to multiple css files for something else. Minification is the process of removing whitespace and comments to make the bundled files smaller. Minification works with css and javascript files, and in the latter case it also shortens the variable names. The result is a bit unreadable to the human eye, but the smaller file size makes for quicker page loading.

So here is how to do it:

First you need to get the required library files that were added to MVC4, which are System.Web.Optimization.dll and WebGrease.dll. If you aren't sure where you might find them, start a new MVC4 solution and copy the files from the generated bin folder to the MVC3 project's own bin folder.

Next, create a folder named App_Start and inside it create a new class named BundleConfig.cs. The skeleton for this new class is as follows:

using System.Web.Optimization;

namespace same_as_your_main_project
{
    public class BundleConfig
    {
        public static void RegisterBundles(BundleCollection bundles)
        {

        }
    }
}

Inside the RegisterBundles method, we will then add the necessary bundle information.

bundles.Add(new StyleBundle("~/Content/css").Include(
    "~/Content/style.css",
    "~/Content/reset.css",
    "~/Content/960.css"));

In the above example, I have added 3 css files named style, reset and 960.css and bundled them in a virtual file named ~/Content/css, which is how we will refer to them on the web page itself. Javascript files are bundled in a similar way, simply by replacing StyleBundle with ScriptBundle.
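For example, a script bundle could look like this (the file names below are placeholders for whatever javascript files your project actually uses):

bundles.Add(new ScriptBundle("~/Scripts/js").Include(
    "~/Scripts/jquery-1.9.1.js",
    "~/Scripts/site.js"));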

Now these bundles need to be initialised every time that the application is started. To do this, open the Global.asax.cs file and add this line somewhere in the Application_Start() method:

BundleConfig.RegisterBundles(BundleTable.Bundles);

The last thing to do is to add a link for the virtual bundle inside the page. Easy, just add this line of code:

<link rel="stylesheet" type="text/css" href="@BundleTable.Bundles.ResolveBundleUrl("~/Content/css")" />

Note that the name we are passing as the parameter (~/Content/css) is the same as the one we added earlier, which is what makes the link work. Hurrah, we have a working bundling and minification solution on MVC3. The neat thing is that the generated file will also instruct the browser to cache it for one year, resulting in additional speed gains for repeated visits!

From a true story...

Here I was, putting the finishing touches to my new website design, when I saw that the images on the homepage were taking a bit of time to load, so some kind of loader was necessary. After further thought, I went for a Sinclair Spectrum style loader (for those too young to remember, the Spectrum loader consisted of coloured bars in the screen's border that moved according to the data received from the cassette deck).

In this short tutorial I will show you how you can use the same effect on your web page.

First, we need an image that is going to be used for the loader. I grabbed mine by running a Spectrum emulator in record mode, then using a gif editor to crop part of the border. For simplicity, here is the resulting image that you need to include in your project:

spectrum loader

Then we need some css in order to show the loader:

body.spec
{
    background-image: url('Images/specload2.gif');
}

and apply it with jQuery once the HTML document is ready:

$(document).ready(function () {
    $("body").addClass("spec");
});

Note: you will need to add a reference to jQuery for this. So if there is no jQuery in your page yet, add this line to the page's HTML head section:

<script src="http://code.jquery.com/jquery-1.9.1.min.js"></script>

This will have the effect of showing the loader while the rest of the page elements are loading. The last thing to do is to remove the loader once all the elements on the page have been loaded (again by using jQuery).

$(window).load(function () {
    $("body").removeClass("spec");
});

Web Design Ledger (WDL)

If you need ideas on the visual design of your current project, then this is the site to visit. There’s new content every day, with articles mostly listing the best tools, libraries, freebies, tips and other designers’ examples.

What’s good about WDL is that it does not force you to read long articles, offering quick access to the content that you want. So be sure to spend five minutes each day on this website, and you could find the pearl that will turn your design from good to great.

Smashing Magazine

I used to like the old Smashing site more, as it provided more tutorials and less talk about designers’ existentialism. However it’s still a great source today, and it’s where all the hip web designers go and meet.

If you want to stay on the forefront of web design, then you need this site.

MSDN Magazine

Not strictly for web developers only, this website contains a lot of information about all things Microsoft and beyond. Especially if you are a developer for sites hosted on Windows servers, MSDN magazine is an essential read.

Inside a monthly issue, the MSDN site has a number of articles with something for everyone: how to use the languages (C#, VB, ASP.NET, C++, and other lesser known ones), tutorials, lessons on how the languages work under the bonnet, interoperating with other languages, a fun project each month, and a look at what Microsoft is planning to add to your favourite language in the near future.

If you are coding with Microsoft technologies, this is your essential stop every morning during coffee.

Jon Galloway’s blog

Jon Galloway’s blog may be a bit too technically detailed for some, but for me it is a goldmine of information. Jon knows a thing or two about ASP.Net MVC and I have found the code in these pages (as well as in his MVC book) invaluable whilst I was working on my projects.

Free Computer Books, Programming eBooks and IT Books

This website is the mother lode of free ebooks, and it is totally legit. The website authors have done the job of looking for free books for you. These free books can be previews of upcoming books (where you have the possibility to try them and provide corrections), full versions of books which the authors have kindly agreed to make available for free under some offer, or books that have been released under the GNU or a similar licence.

With a new ebook almost every day (not counting some occasional updates with over 100 books), you have plenty of information to sink your teeth into.

Think Quarterly

Think Quarterly is an online magazine from Google about the future of the Internet.

What? I thought you already clicked the link and went away to Think’s website, but now I see you’re still not convinced. Well, in Google’s own words, this magazine does not even try to teach you what has already been done, but provides bleeding edge information about how the next round of technological innovations will be shaped. If that is not important to help you make your website ready for the next step, I don’t know what is.

Note & Point

Unlike WDL, Note & Point requires you to dedicate some time to read its contents. Having said that, you do have time to consume the content between posts. The name says it all (once you know that Note refers to keynote presentations, and Point refers to Powerpoint), here you will find a bunch of presentations.

The problem with this kind of content is that sometimes a presentation without a presenter does not make sense at all, as you do not get the explanation behind the slides. However, there are always snippets of new information that you haven't heard of yet, or a different way to approach something than you usually do.

Freepik

Freepik is a search engine for free images that you can use on your website (and banners, presentations, magazines and advertising) without requiring any sort of permission for most of them. The images can be bitmaps, vectors or Photoshop files, so you have some choice. And you cannot beat free (as in beer).
