Software Development Articles

In this article, I will show you how easy it is to set up authentication with Azure Active Directory for an ASP.Net web app. No coding required!

First off, if you don’t have an Azure account, you can start a free trial here.

Before we start, let's find out which domain names we have available for our active directory users. We can find these under Azure Active Directory > Domain names.

Now we can create an admin user. Go to Azure Active Directory > Users and groups > All users, and finally click on New user:

Then choose a name and a username for this user. The username needs to include the domain name that we got earlier. Set the role for this user as Global administrator, make a note of the auto-generated temporary password and click the create button.

In Visual Studio, create a new web application project, choose a name for your project and click OK.

In the next step, choose .Net Core 2.0 MVC and change the authentication to Work or School Accounts, Cloud – Single Organization, fill in your domain name as before, and check the Read directory data option.

Then you need to log in with the global administrator account that you created in AAD. You will be asked to change the temporary password the first time you log in.

The project overview window will be shown as soon as the creation process is ready. Click on the Publish option.

Then create a new Azure App Service and click the publish button.

You will be asked to select some details for your new app. For the app service plan, I decided to host it on the free tier for development. Note that in other environments you will need to choose the right machine for the job. However, any service plan other than a free one will start eating into your monthly limit as soon as you create it.

The website will take a few moments to be deployed directly to Azure, but before we check that let’s try running the solution locally through Visual Studio. The familiar Microsoft AAD login is shown:

After logging in, you should see the account details displayed at the top of the web page:

Now, if we want this to work on Azure as well, we must make one final change. The app registration that was created automatically was configured to work only with your local debugging environment. So let us fix that.

Go back to the Azure portal, then look for app registrations to choose your app:

Click settings, then Reply URLs:

Here we can see that the active directory app is only registered to accept login requests from the localhost url, so we also need to add the url of the Azure web app. You can find this url in the overview of the App Services module.

That's it. All you need to do now is to add more users to the directory.

When testing Android applications during development, we have the option of running the app on a real device or on an emulator. Testing the application on a real device before releasing it to the app store is always wise, because an emulator is never 100% compatible with a real Android phone. But say you have a team of several developers, where buying a device for each one of them would be expensive. In that case an emulator can be used during development, with the app tested on a shared device after certain milestones are reached, for example.

Fortunately the Android emulator that comes with Android Studio is a very powerful one and is rather faithful to what's out there on the market. To set it up, start by running this command on the console:

>android avd

By the way, avd stands for Android Virtual Device, which you should see in the application that you just launched. If you do not have Android Studio set up, you can refer to the previous post for information on how to install and set it up.

We have no devices set up yet, so let us click the Create button to set up one.

In the screenshot above, I am trying to create a device that is close to a real Samsung Galaxy S5 Mini. Most of the things in the window are quite self-explanatory. However, there is the target option (3rd option from top) that is not so obvious and is somewhat important. If you are testing a standalone application that does not interact or require any other applications on the virtual device, then I guess that any option is fine. But if you need to have your application interact with any Google service, or if you need to install a third-party app from the app store then you need to choose one of the Google APIs options. This will allow you to add a Google account so that you can download stuff from the app store. For example, in one case I needed to have my app interact with the Facebook app, so I needed to download it from the play store first.

After clicking the OK button, the virtual device should be created within a few seconds. You can now start the emulator by clicking the Start button. Another way to start the emulator, without having to go through the AVD manager each time, is to launch it from the command prompt:

>emulator -avd <imagename>

where <imagename> is the name that we gave to the virtual device in the previous step (i.e. Samsung_Galaxy in my case). The first time it may take longer than usual to run the emulator, but it always takes some time. So don’t fret if you see nothing happening in the emulator window, apart from some shiny android text. Google are kind enough to allow you some time for making a cup of coffee before going to the next step.

As for me, I'll just leave you with a command with which we can test the emulator using the app from the previous post. Just open the command prompt, find the cordova project that we built a couple of weeks ago, and enter the following command:

>cordova emulate android

This should build the cordova project, transfer it to the emulator and run it. Magic!

If you are installing cordova for the first time, here is a checklist of things that need to be done to install it on Windows (I tested this with 7 and 8), as well as prepare it for building android apps. Let's dive right in.

  1. Install nodejs. This will be used to run the cordova commands from the CLI.

  2. Install Java JDK for compiling the android code.

  3. Install Apache Ant, used for building the required packages.

  4. Install Android Studio with Android SDK. Important for getting the necessary android components. This will also allow us to debug the solution on an android emulator and device eventually.

  5. Install git. This may be required when installing some of the cordova plugins, so it makes sense to prepare it as well.

  6. Download and install OpenSSL. Get the 64-bit version. OpenSSL will be used to read the app's key hash when it comes to releasing your app later on.

  7. In order for all the installed software to work, the correct environment variables need to be set up. This is done by opening the control panel and typing 'advanced system settings' in the 'Search Control Panel' field:

    Click the 'view advanced system settings' option. Then click the 'environment variables' button:

    Then we need to add some new variables. The values in the table below are the default paths where each of the downloaded packages was installed; if you chose a different folder, you will need to change the path values accordingly. For 32-bit machines in particular, you will need to remove the (x86) from the Program Files path, and the Java version (jdk1.8.0_11) is also bound to change by the time this is published.

    Add the following variables:

    ANDROID_HOME              C:\Program Files (x86)\Android\android-studio\sdk
    ANDROID_PLATFORM_TOOLS    C:\Program Files (x86)\Android\android-studio\sdk\platform-tools
    ANDROID_SDK_ROOT          C:\Program Files (x86)\Android\android-studio\sdk
    ANDROID_TOOLS             C:\Program Files (x86)\Android\android-studio\sdk\tools
    JAVA_HOME                 C:\Program Files\Java\jdk1.8.0_11
  8. Next, still in the environment settings, select the Path variable and click the edit button to add the following paths to the Path variable: C:\Program Files\nodejs\;C:\ant\bin;C:\Program Files (x86)\Android\android-studio\sdk;C:\Program Files (x86)\Android\android-studio\sdk\platform-tools;C:\Program Files (x86)\Android\android-studio\sdk\tools;C:\OpenSSL\bin

  9. Now we are ready to dive into the command prompt, so go ahead and open it. Type

    >ant -f fetch.xml -Ddest=system

    and press Enter. This makes sure that ant fetches the latest dependency information. If the command does not work, i.e. ant cannot be found, recheck the environment variables above. It is absolutely important that those variables are set properly at this point.

  10. The last thing to install is cordova itself. Using the npm package manager, run the following command:

    >npm install -g cordova
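Before moving on, it can save some head-scratching to double-check the variables from step 7. Here is a small Python sketch (illustrative only; the helper name is my own) that reports which of them are still missing from the current environment:

```python
import os

def missing_vars(env, required):
    """Return the names from required that are not present in env."""
    return [name for name in required if name not in env]

# Check the current process environment for the variables from step 7.
needed = ["ANDROID_HOME", "ANDROID_PLATFORM_TOOLS",
          "ANDROID_SDK_ROOT", "ANDROID_TOOLS", "JAVA_HOME"]
print(missing_vars(dict(os.environ), needed))
```

If this prints an empty list, the variables are in place; anything listed still needs to be set.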

That should be everything, but at this point it would be nice to see something working on an actual android device. Luckily the default cordova application just does that. With cordova installed, let's run some more commands in the same command prompt window.

  • First create a new project. We need to give it a name, a unique identifier, and choose a folder to store it in:

    >cordova create FOLDERNAME com.yoururl.appname AppTitle

  • Change directory to the newly created folder:

    >cd FOLDERNAME
  • Tell cordova that we want to support the android platform:

    >cordova platform add android

  • With the device connected to your computer run:

    >cordova run android

  • Alternatively we can use the android emulator to test, but we haven't set that one up yet. That is for another day, but if you happen to have an emulator running, type:

    >cordova emulate android

If everything went well, you should see the image at the top of this page on your device.

Since we are writing a program that makes use of the public infrastructure that is the Internet, it makes sense to play fair and make our programs behave properly, so that we can avoid clashes with webmasters or, even worse, retaliation.

Webmasters may not want their site to be spidered by other computers, and they have a way to say so clearly: the robots.txt file. Inside a robots.txt file, a simple text file stating one or more rules, a webmaster can declare whether scraping is allowed at all, limit scraping to only some files and folders, or allow only some specific spiders.
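For example, a minimal robots.txt (a made-up illustration, not taken from any particular site) might look like this:

```
User-agent: *
Disallow: /private/

User-agent: BadBot
Disallow: /
```

The first rule blocks every spider from the /private/ folder, while the second blocks the spider calling itself BadBot from the entire site.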

Our plan for playing fair is to read the site's robots file and stay well clear of any content we are not allowed to read. The wisest thing to do here is to stand on the shoulders of giants, since other people have already solved this problem, and luckily some of them have shared their efforts for everyone else to use. This means we do not have to reinvent the wheel, and will only write a few lines of code to read the robots.txt file and make our program compliant.

What we are going to do is to use the package manager inside Visual Studio in order to add and reuse the existing code from the RobotsTxt author Çağdaş Tekin. So first open the solution and from the Project menu choose the item that says ‘Manage NuGet Packages’. A new window will open. Choose the online option on the left and in the ‘Search Online’ box enter RobotsTxt and press enter. A list with the relevant packages will be shown:

Install the one that says ‘RobotsTxt: A robots.txt parser for .Net’. When prompted to select the project to install to, just click OK since at the moment we only have one project. The package will be installed, and a green circle with a checkmark will confirm this. We can close the package manager window and get back to our task at hand.

We will now add a new function called CheckRobots, that uses RobotsTxt to determine if we are allowed to spider the page that the user requested. Add the following code at the bottom of Form1.cs:

private bool CheckRobots(string url)
{
    var robotsFileLocation = new Uri(url).GetLeftPart(
        UriPartial.Authority) + "/robots.txt";
    var robotsFileContent = client.DownloadString(robotsFileLocation);
    Robots robots = Robots.Load(robotsFileContent);
    return robots.IsPathAllowed("keywordChecker", url);
}

This function gets a url as a string parameter and returns a boolean value to signify if this url can be accessed by a robot or not. In the function’s first line, we concatenate the robots.txt filename to the base url and store it in the robotsFileLocation variable. This new url is where we are going to look for the robots file and download it. Then we download the actual robots.txt file from the website and store it in robotsFileContent, and next we ask the RobotsTxt module to scan the file and tell us if we are allowed to read the page or not. This is done in two steps. First we load the file with Robots.Load and then return the result with robots.IsPathAllowed.
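The same check can be sketched in Python with the standard library's urllib.robotparser, independent of the RobotsTxt package used above (the rules string here is just an illustration; the "keywordChecker" agent name mirrors the C# code):

```python
from urllib.robotparser import RobotFileParser

def check_robots(robots_txt, url, agent="keywordChecker"):
    """Return True if agent may fetch url according to robots_txt."""
    parser = RobotFileParser()
    parser.parse(robots_txt.splitlines())  # parse the rules line by line
    return parser.can_fetch(agent, url)

rules = """User-agent: *
Disallow: /private/
"""
print(check_robots(rules, "https://example.com/page.html"))       # allowed
print(check_robots(rules, "https://example.com/private/x.html"))  # blocked
```

The two steps are the same as in the C# version: load the rules first, then ask whether a particular url is allowed for our user agent.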

This should work on sites that explicitly state their conditions with the robots.txt file. Of course, on the web we can encounter a whole lot of other situations that can cause an exception when reading the file. In such cases, if the robots.txt file does not exist or is unreadable, we will assume that we are pretty much allowed to read anything on the particular website. To do this, we will wrap our code inside a try-catch statement like so:

private bool CheckRobots(string url)
{
    try
    {
        var robotsFileLocation = new Uri(url).GetLeftPart(
            UriPartial.Authority) + "/robots.txt";
        var robotsFileContent = client.DownloadString(robotsFileLocation);
        Robots robots = Robots.Load(robotsFileContent);
        return robots.IsPathAllowed("keywordChecker", url);
    }
    catch
    {
        return true;
    }
}

An exception is a condition inside our running code where the program's flow deviates from the ideal outcome. For example, in our case we try to read the robots.txt file but for some reason cannot, which causes an exception to be thrown. We did not use any exception handlers so far, but it is wise to use them in your code. A good programmer always tries to think about what unexpected situations can arise while the end user is working with the program, and adds code so that exceptions do not blow up in the face of users.

If we expect that some part of our code can cause an exception, we embed this code inside a try block. Then the code that we want to run once an exception is triggered is added inside a catch block. In our case, if we cannot read the file, an exception is thrown and when we catch it the function returns the value true to signify that the url is allowed. Another code structure that we are not using here is the finally block. The finally block is used to run any code regardless if there is an exception or not, and is normally used to add some cleanup code for any opened streams or dispose of any unwanted objects. But you do not need to know this right now, so let’s move on.
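The same try/catch/finally idea can be sketched in Python, where the keywords are try/except/finally (the function and file names here are hypothetical, purely for illustration):

```python
def read_or_default(path, default="(no data)"):
    """Return the file's contents, or default if it cannot be read."""
    try:
        f = open(path)            # may throw if the file is missing
    except OSError:
        # the catch block: recover gracefully instead of crashing
        return default
    try:
        return f.read()
    finally:
        # the finally block: clean up the stream no matter what happened
        f.close()

print(read_or_default("/no/such/dir/robots.txt"))  # (no data)
```

Just like our CheckRobots function, the caller never sees the exception; it only sees a sensible fallback value.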

Actually, the last thing we need to do is call the new CheckRobots function from our button click event, wrapping the code from the previous iteration inside an if statement so that it runs only if we are allowed; otherwise we display a message to the user that the site is not allowed to be spidered.

private void btnCheck_Click(object sender, EventArgs e)
{
    client = new WebClient();
    var url = txtUrl.Text;
    url = !string.IsNullOrEmpty(url) && Uri.IsWellFormedUriString(url,
        UriKind.Absolute) ? url : "";
    var keywords = txtKeywords.Text;
    keywords = !string.IsNullOrEmpty(keywords) ? keywords :
        "final fantasy";

    if (CheckRobots(url))
    {
        var pageContent = client.DownloadString(url);
        var keywordLocation = pageContent.IndexOf(keywords,
            StringComparison.OrdinalIgnoreCase);
        StringBuilder sb = new StringBuilder();
        if (keywordLocation >= 0)
        {
            var pageIds = Regex.Matches(pageContent, @"id=""\s*?\S*?""");
            string matchedId = closestId(keywordLocation, pageIds);
            string idTag = matchedId.Substring(4, matchedId.Length - 5);
            brwPreview.Navigate(url + "#" + idTag);
            sb.AppendFormat("{0} are talking about {1} today.", url,
                keywords);
            sb.Append("\n\nSnippet:\n" + pageContent.Substring(
                keywordLocation, 100));
            sb.AppendFormat("\n\nClosest id: {0}", idTag);
        }
        else
        {
            sb.Append("Keyword not found!");
        }
        lblResult.Text = sb.ToString();
    }
    else
    {
        lblResult.Text = "Blocked by robots.txt!";
    }
}

We can observe that the program is becoming more modular now. What this means is that the program is split into different modules, each with a specific function that can be reused in this and any future programs. You may have noticed that we are already reaping the benefits of modularity by re-using a function developed by someone else, but we will expand more on modularity in the next installment.

Once again, the full source code for this tutorial is available at GitHub.

Last time we left off with a GUI for our keyword checker program. What would be the next logical building block to add now that we have our checker with a nice preview pane?

Well, the preview pane is not too handy if it does not show the part that we are looking for, so let us improve that today. We will use regular expressions to achieve this. In short, a regular expression acts as a text search, but instead of searching for one specific keyword, it uses a pattern that matches a whole set of text results. So our first piece of code is to add a using directive for System.Text.RegularExpressions at the top of our form code.

But first, let’s diverge a bit to describe how we are going to scroll the preview pane to our desired location. Since the web page is in HTML format, we can find an HTML element on the page that we can just scroll to. We just need the id of that element. Thankfully, with regular expressions we can get a list of all the ids inside the page, and then scroll to the one that is closest to our content. Simple.

The regex (a shorter term for regular expression) that we are going to use to find all matches of element ids is this: @"id=""\s*?\S*?"""

And we will use it as follows:

var pageIds = Regex.Matches(pageContent, @"id=""\s*?\S*?""");

This will give us the page ids we wanted and conveniently store them as a list of matches in our pageIds variable. Now we also need a private function that will give us the closest element to our content. A function is a piece of code that does a specific task; we usually create a function for each simple task we need, so that we can use it in different parts of our program without having to rewrite the same code over and over. It could also be used by other programs, if it weren't for that private adjective I've used (in technical terms called an access modifier). The private access modifier limits the use of the function to within the same class, in our case the program's form. We are happy with that, so let's move on.

Here’s our function:

private string closestId(int keywordLocation,
    MatchCollection matchingIds)
{
    int? closestId = null;
    string closestIdName = null;
    foreach (Match id in matchingIds)
    {
        if (closestId != null)
        {
            int idDistance = Math.Abs(id.Index - keywordLocation);
            if (idDistance < closestId.Value)
            {
                closestId = idDistance;
                closestIdName = id.Value;
            }
        }
        else
        {
            closestId = Math.Abs(id.Index - keywordLocation);
            closestIdName = id.Value;
        }
    }
    return closestIdName;
}

The function, which I named closestId, will take two parameters. The first one is the index of our original keyword search (which is described in the first part of the tutorial), and the second parameter is the list of regex matches. What is important is that this list of matches contains the id and index of each match. What this function does is to iterate through the list of matches in order to find the closest one to our keywordLocation. The distance between each match and the keyword is calculated with the absolute distance function called Math.Abs (now that is a handy public function!). Every time that a new minimum distance is found, we store the value of this distance until we find a better one, whereby it will replace the current minimum. Initially the value of the closest distance is null, so the first match in the list will always be set as the closest in the first iteration. Once the loop ends, we just return the name of the closest id that we found. The function would then be called from the main function like this:

string matchedId = closestId(keywordLocation, pageIds);
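For comparison, the same nearest-match search can be sketched in a few lines of Python, using the same id regex (the sample HTML string is made up for illustration):

```python
import re

def closest_id(keyword_location, page_content):
    """Return the id="..." match nearest to keyword_location, or None."""
    best_name = None
    best_distance = None
    for m in re.finditer(r'id="\s*?\S*?"', page_content):
        distance = abs(m.start() - keyword_location)
        if best_distance is None or distance < best_distance:
            best_distance = distance
            best_name = m.group()
    return best_name

html = '<p id="intro">hello</p><p id="news">latest news</p>'
print(closest_id(html.index("latest"), html))  # id="news"
```

The logic is identical: track the smallest distance seen so far, and let the first match seed the minimum.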

Actually, we just need the id of the element without the id= part, so let’s go ahead and strip it off:

string idTag = matchedId.Substring(4, matchedId.Length - 5);

This last piece of code can also go inside the closestId function, so feel free to put it there. The last piece of the puzzle is to navigate to the page as we did before, but by adding the id to the url (prefixed with a hash sign) we get the nice effect of scrolling to the element with this id into view.

brwPreview.Navigate(url + "#" + idTag);

This method is not guaranteed to work 100% of the time, as some websites may not have any id attributes, or the id of the closest element may not be that close to our content, but it's a start. I also increased the size of the window from the previous tutorial so that we have more space for the preview pane. The full source code for this tutorial is available on GitHub. Here is a sample screenshot.