Whenever I’m working on any project nowadays, I always emphasize the need to automate whatever can be automated from start to end. Keeping up with the latest processes is equally important. However, when it comes to my own site, which I only have time to update occasionally, I never thought about applying these practices. I just change the code and publish from Visual Studio.

However, with all the easily accessible functionality that Azure DevOps offers, I have decided that it is time to start running this website the way it deserves.

The first thing I will do is migrate the solution from TFVC to a git repository. This brings its own advantages, mainly revolving around being able to deliver changes faster, with greater quality and less risk. I will use the Azure DevOps import functionality, but before that I will delete the old MVC5 project, which was excluded from the solution but still in source control.

To start the import process itself, from the repos section click on the name of the current repo and then choose Import repository.

Importing from a different repo

Then I need to specify where the source code is and choose a name for my new git repo:

Setting the Import repo options

There is a friendly warning that switching can be disruptive. Although both are source control solutions, TFVC and git are quite different, and getting used to one after working exclusively with the other takes time. If you are planning to switch a big team, make sure that all the developers are able to use git with confidence. Start with a smaller, non-critical project if need be.

I also decided not to migrate the history because Microsoft doesn't recommend it, and since the import only covers the last 180 days I would lose most of the history anyway. Once satisfied, I clicked on the Import button, and 11 seconds later I was looking at the new repo.

Repo migration in progress

One small issue was that when I opened the solution in Visual Studio, there were a bunch of compiled files marked as changes:

Unwanted pending changes

This is because I needed to add the .gitignore file manually. Since the solution was migrated from TFVC there was only a .tfignore file, so I deleted it and added a .gitignore (and .gitattributes) instead. These files tell our tooling which files should not go into source control. We only need the source code there: build output and packages are generated on the build server at build time, so there is no need to litter the git repo with them. The result is that we are left with only two pending changes:

These are the files we actually want to push

If you need one, the Visual Studio .gitignore file is available directly from GitHub.
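To give an idea of what it contains, here is a heavily trimmed excerpt of the kind of entries found in the Visual Studio .gitignore template (the real file is much longer):

```
# Build output
[Bb]in/
[Oo]bj/

# Visual Studio cache and user-specific files
.vs/
*.user

# NuGet packages
packages/
```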

And I also copied the .gitattributes file from one which I had earlier:

# Set default behavior to automatically normalize line endings.
* text=auto

That’s how easy it was in my case. Next time I will talk about deployment. In the meantime, if you need help understanding git, I recommend taking a look at the Learn Git Branching site.

This post was initially going to be about refactoring a mail wrapper class that I came across, but there was a slight detour so today I will just write about adding a test for this class.

Here is a reproduction of the original code with some hardcoded text changed to protect the innocent:

public static class MailWrapper
{
    public static Task sendMessage(String subj, String content)
    {
        var msg = new SendGridMessage();

        var mailUser = "some@email.com"; // ConfigurationManager.AppSettings["emailFrom"];
        msg.SetFrom(new EmailAddress(mailUser, "site mailer"));

        var recipients = new List<EmailAddress>
        {
            new EmailAddress("some.other@email.com", "Recipient")
        };
        msg.AddTos(recipients);

        msg.SetSubject(subj);
        msg.AddContent(MimeType.Html, content);

        var apiKey = "someApiKey"; // ConfigurationManager.AppSettings["SENDGRID_APIKEY"];
        var client = new SendGridClient(apiKey);

        return client.SendEmailAsync(msg);
    }
}

This method is then called from an MVC controller:

public async Task<ActionResult> Index(ContactModel cm)
{
    if (ModelState.IsValid)
    {
        var msg = "Username: " + cm.UserName +
            "<br /> Email: " + cm.Email +
            "<br /> Message: " + cm.Message;
        await MailWrapper.sendMessage("Message from the contact form", msg);
        return View("MailSent");
    }
    return View();
}

There are some issues with this code:

  • Hardcoded (and sensitive) strings
  • The MailWrapper class is in the same project as the MVC code
  • No test code was written
  • The code can be cleaner and more readable
  • The return statement of the sendMessage method appears to return a Response object, but the method signature just returns Task

There is also one good thing which is very important, and that is that the controller is not coupled directly with SendGrid. In this case, if we want to change the third-party library in the future, then there is only one place where this needs to be modified.

So I’ll start first by creating a test. When looking at the code, there doesn’t seem to be enough of an argument at the moment to write a unit test because what the code is mostly doing is calling a third-party library. So in this case, we’ll go with an integration test. This will ensure that when running the tests after a code change or a third-party library update, we will know that sending emails still works.

public class MailWrapperTest
{
    [Fact]
    public async Task Can_Send_A_Message()
    {
        var response = await MailWrapper.sendMessage(
            "This is a test",
            "Feel free to delete the message as soon as you read it.");
    }
}

I have also decided to change the signature of the sendMessage method so that we can now also check the response:

public static Task<Response> sendMessage(String subj, String content)

There is a little side effect when running the test: a mail will actually be sent to the selected recipient. In a future version, we can also check whether this email was successfully received.
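With the signature now returning the SendGrid response, the test body can be extended to assert on the status code. This is just a sketch of what that assertion could look like; the SendGrid v3 API returns 202 Accepted when a message has been queued for delivery:

```csharp
// Inside the Can_Send_A_Message test, after the call:
var response = await MailWrapper.sendMessage(
    "This is a test",
    "Feel free to delete the message as soon as you read it.");

// SendGrid's Response exposes the HTTP status code of the API call;
// 202 Accepted means the message was queued successfully.
Assert.Equal(HttpStatusCode.Accepted, response.StatusCode);
```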

It’s always a good thing to get free recommendations from experts, and Azure Advisor is one such good thing.

When clicking on the Advisor tab, you will be presented with a dashboard. Give it some time to load, and you will be shown your personalised recommendations split into four categories.

An overview of Advisor recommendations

The categories are availability, security, performance, and cost. One example of each, starting with availability, is enabling soft delete of blobs on storage accounts so that any accidental deletion can be recovered. An example of security is enabling secure transfer, again on storage accounts, to force all requests to go over HTTPS. For performance, Advisor can suggest optimal cache sizes and database optimisations. Finally, for cost, the tool can detect underutilised VMs and propose switching them off or scaling them down.

Often there is minimal friction in implementing the recommended fix: Advisor provides details about the tip and a list of steps, and sometimes it will happily apply the change for you if you decide to go for it. Let’s try one of the suggestions; I will go for high availability.

high availability summary

Here I am being told that creating an Azure Service Health alert will notify me of any server issues that affect my resources. This has a low impact because it will not result in any changes to the resources themselves. Let’s drill down further into the recommendation.

recommendation details

Here I have a short description of the benefits of rolling out this change, and a link to more detailed information. I also have the option to postpone this recommendation for a day, week, month, or quarter, or just dismiss it so that I don’t get notified about this suggestion again. Most importantly, I have the option to create an Azure service health alert.

I’ll just fill in the required details so I get notified of service interruptions concerning any resources related to my website. The events that can trigger an alert are service issue, planned maintenance and health advisories. I can choose to be notified for any combination of them.

creating an alert rule

Then it’s just a matter of clicking the create button and we're good to go.

This post is the conclusion to a series. Make sure you also read Part 1 and Part 2.

It’s time to concentrate on the implementation of our Outlet class. We will add a new private property where we set the time zones of each company.

private readonly string[] timezones =
{
    "Pacific Standard Time",
    "Central Standard Time",
};

In practice this information would be fetched from a data store, and you would do well to hide it behind an interface so it can be mocked. I will leave that part as an exercise for you.
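As a sketch of what that exercise could produce (the names ITimeZoneRepository and InMemoryTimeZoneRepository are hypothetical, not part of the series' code):

```csharp
// Hypothetical abstraction over the time zone data store,
// so the real store can be swapped for a mock in tests.
public interface ITimeZoneRepository
{
    string GetTimeZoneId(long outletId);
}

// A trivial in-memory implementation mirroring the array above.
public class InMemoryTimeZoneRepository : ITimeZoneRepository
{
    private readonly string[] timezones =
    {
        "Pacific Standard Time",
        "Central Standard Time",
    };

    public string GetTimeZoneId(long outletId) => timezones[outletId];
}
```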

The code for fetching the current date and time is now as follows.

public string GetLocalDateTime(long id)
{
    var tz = TimeZoneInfo.FindSystemTimeZoneById(timezones[id]);
    var currentDate = dateTimeWrapper.Now();
    var offset = tz.GetUtcOffset(currentDate);
    var outletDate = currentDate.AddMinutes(offset.TotalMinutes);

    return outletDate.ToString("M/d/yyyy HH:mm");
}

So this method first looks up the selected company’s time zone through FindSystemTimeZoneById, and then reads the date and time from the wrapper we created in part 2. We continue by getting the time zone offset for this particular date by means of GetUtcOffset. This step is required because the offset differs between winter and summer due to daylight saving. The final step is to add this offset to the date (note also that it is in minutes because some time zones vary by half-hour steps) and return it as a string.
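As an aside, the offset-and-add steps can also be collapsed into a single framework call. A quick illustration (note that "Pacific Standard Time" is the Windows-style id; on Linux you may need the IANA equivalent such as "America/Los_Angeles"):

```csharp
using System;

// Convert a UTC instant to Pacific time in one call;
// ConvertTimeFromUtc applies the DST-aware offset itself.
var utc = new DateTime(2019, 4, 28, 17, 0, 0, DateTimeKind.Utc);
var tz = TimeZoneInfo.FindSystemTimeZoneById("Pacific Standard Time");
var local = TimeZoneInfo.ConvertTimeFromUtc(utc, tz);
Console.WriteLine(local.ToString("M/d/yyyy HH:mm")); // 4/28/2019 10:00
```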

We did the change with confidence, and can now look at the test results:

All the tests are passing

We have now observed how code that interacts with the hosting system should be written. Whilst using the operating system’s resources is essential in a software program, having indirect access to those resources ensures that your code will be easier to test independently of any environment that it will run on.

Now that you know the basic concept, I have to reveal that there is a library that already does this. This popular library is NodaTime, which the authors describe as a better date and time API for .NET. NodaTime exposes everything through the IClock interface and even provides a mocking solution in the form of FakeClock.
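For completeness, here is a minimal sketch of FakeClock in use, assuming the NodaTime and NodaTime.Testing NuGet packages are installed:

```csharp
using NodaTime;
using NodaTime.Testing;

// FakeClock pins the current instant, so any code that depends on
// IClock becomes fully deterministic under test.
var clock = new FakeClock(Instant.FromUtc(2019, 4, 28, 17, 0));
Instant now = clock.GetCurrentInstant();
```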

Back in part 1 I jokingly closed with a test passing, but I was cheating a bit by building a method that returns a hardcoded string containing the expected result. This is perfectly acceptable in the initial phase to make a test pass, if only to be sure that the rest of the scaffolding was done correctly.

While driving home that day in the style of Papa Pig, an idea started forming. What if we wrote another test to check a second case? Back in the office the next day, we add a test case for the second fictional outlet. We need to change the test method into a parametrised test method:

[Theory]
[InlineData(0, "4/28/2019 10:00")]
[InlineData(1, "4/28/2019 12:00")]
public void Outlet_can_show_the_time_according_to_its_local_timezone(
    int outletId, string expectedResult)
{
    var result = sut.GetLocalDateTime(outletId);

    Assert.Equal(expectedResult, result);
}

In this way we may use the same test for multiple values. If we run the tests only one is passing, so we have a problem once more.

Only one of the two tests is passing

Let’s solve the problem properly this time. What we need is another layer of indirection. The Outlet class was directly dependent on the System.DateTime class, and with our change we only made it dependent on the DateTimeWrapper class. We still cannot control the behaviour of the date within our test. We need to add an interface that specifies the wrapper’s behaviour, and then we will be able to mock it during test execution. Visual Studio has a nice refactoring tool that allows us to quickly generate the interface:

Extracting the DateTimeWrapper Interface

It even allows us to choose the name and which methods to include in the interface:

We also have some control when extracting the interface

To be honest, I'm not sure if this feature is standard in all versions of Visual Studio, but here is the generated code anyway:

using System;

namespace TimeZoneHelper
{
    public interface IDateTimeWrapper
    {
        DateTime Now();
    }
}

That’s it, and the DateTimeWrapper class now also implements the IDateTimeWrapper interface. Let’s use it! In the Outlet class, change the field type to IDateTimeWrapper:

public class Outlet
{
    private readonly IDateTimeWrapper dateTimeWrapper;

    public Outlet(IDateTimeWrapper dateTimeWrapper)
    {
        this.dateTimeWrapper = dateTimeWrapper;
    }

    public string GetLocalDateTime(long id)
    {
        return dateTimeWrapper.Now().ToString("M/d/yyyy HH:mm");
    }
}

Now any object that implements IDateTimeWrapper can be used with the Outlet class. Note that the choice for initialising the required wrapper class will be handled by the dependency injection middleware, but we won’t go into it here as we will just focus on the test.

In the test class, we will also use the interface, but on top of that we will mock it as well. It’s not that we want to make fun of it, we just want to be able to control what happens during the test. We will now use NSubstitute, which makes the task of creating mocks easier. Let us change the code inside the test constructor:

public OutletTest()
{
    var wrapper = Substitute.For<IDateTimeWrapper>();

    sut = new Outlet(wrapper);
}

We are still not specifying the behaviour of the wrapper, so both tests are failing. Let’s add this behaviour, so that we can specify what the date is every time we run the test. With NSubstitute again, the code is nice and readable:

public OutletTest()
{
    var wrapper = Substitute.For<IDateTimeWrapper>();

    wrapper.Now().Returns(new DateTime(2019, 4, 28, 17, 00, 00, DateTimeKind.Utc));

    sut = new Outlet(wrapper);
}

So we have specified that every time we run the test, the UTC time is always April 28th 2019 at 17:00. We have gone through all this trouble but still the tests fail. That is because we did not implement any logic in the Outlet class, but now we have done enough groundwork to let us do that efficiently. The main point I wanted to show you was about removing hard dependencies to system objects and now we have achieved that. See you in part 3!