I have set up Azure build and deployment pipelines a number of times from scratch before, so this time I’m going to trigger it from Visual Studio which should be more straightforward.

The first step of the process is to right-click on the solution name and choose the ‘Configure Continuous Delivery to Azure…’ option.

Configure Continuous Delivery menu in Visual Studio 2019

This opens a window where you pick the location of the source code, the subscription, and the app service. For the app service, you can either create a new plan or, as in my case, just choose an existing one which is already running.

Choosing source repo and target web app

The output window informs you of the current status of the pipeline creation. It actually took less than a minute for me to have everything up and running.

Pipeline configuration in progress

Let’s head over to Azure DevOps and see the results. Clicking on the blue rocket icon reveals that there is both a pipeline (named a build in previous versions of Azure DevOps) and a release. Both have been triggered automatically, and we can see that the outcome was successful.

Build Pipeline

Release pipeline

Let’s dig a little deeper by clicking the edit button on the pipeline first.

Build pipeline individual tasks

Here we can see that everything has been set up as expected, and the process is intelligent enough to create the right tasks depending on the project type. In this case, we have a sequence of steps that restores the required NuGet packages, builds and tests the solution, and then packages the build output as an artifact. This artifact is then picked up by the release pipeline and deployed to the required environment.

Release pipeline visualisation

Moving to the release pipeline we can observe two things. First, the little lightning icon on the Artifacts step brings us to the trigger configuration. Here it is already set up to trigger a release whenever a new build from the master branch is available.

Continuous deployment trigger

The second thing is the warning icon in the dev deployment stage. Clicking it reveals the details, this time of an issue: we need to choose a deployment agent. Microsoft supplies various agents hosted on Azure, and we have to pick one depending on our solution type.

agent pool selection

One thing to remember is that these pipelines are free to use for up to 30 hours each month. If you need more than that, you have to take out your credit card and pay for the extra consumption.

Well, that was easy, wasn’t it? I usually choose different names for the various steps, so I will go through and change them, but here we can see how powerful the template is. Next time I will also show you how to add additional stages.

Whenever I’m working on a project nowadays, I always emphasize the need to automate whatever can be automated, end to end. Keeping up with the latest processes is equally important. However, when it comes to my own site, which I only have time to update occasionally, I never thought about applying these practices. I just change the code and publish from Visual Studio.

However, now that all this functionality is easily accessible through Azure DevOps, I have decided it is time to start running this website the way it deserves.

The first thing I will do is migrate the solution from TFVC to a git repository. This brings its own advantages, which mainly revolve around being able to deliver changes faster, with greater quality and less risk. I will use the Azure DevOps import functionality, but before that, I will delete the old MVC5 project, which was excluded from the solution but still in source control.

To start the import process itself, go to the Repos section, click on the name of the current repo, and choose Import repository.

Importing from a different repo

Then I need to specify where the source code is and choose a name for my new git repo:

Setting the Import repo options

There is a friendly warning that switching can be disruptive. Although both are source control solutions, TFVC and git are quite different, and adjusting to one after working exclusively with the other takes time. If you are planning to switch a big team, make sure that all the developers are able to use git with confidence. Start with a smaller, non-critical project if need be.

I also decided not to migrate the history because Microsoft doesn't recommend it, and since the imported history only goes back 180 days, I would lose most of it anyway. Once satisfied, I clicked on the Import button and 11 seconds later I was looking at the new repo.

Repo migration in progress

One small issue was that when I opened the solution in Visual Studio, there were a bunch of compiled files marked as changes:

Unwanted pending changes

This is because I needed to add the .gitignore file manually. Since the solution was migrated from TFVC there was only a .tfignore file, so I deleted it and added a .gitignore (and .gitattributes) instead. These files tell our tooling which files should not go into source control: we only need the source code there, since build output and packages are generated automatically on the build server at build time, and there is no need to litter the git repo with them. The result is that we are down to just two pending changes:

These are the files we actually want to push

If you need it, the .gitignore file is available directly from GitHub.
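For reference, the Visual Studio template on GitHub contains entries along these lines (a small excerpt, shown here only to illustrate the kind of files being excluded):

```
# Build results
[Dd]ebug/
[Rr]elease/
[Bb]in/
[Oo]bj/

# User-specific files
*.user
*.suo

# NuGet packages
packages/
```

The patterns use `[Bb]`-style character classes so they match regardless of the casing a project happens to use.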

I also copied the .gitattributes file from an earlier project:

###############################################################################
# Set default behavior to automatically normalize line endings.
###############################################################################
* text=auto

That’s how easy it was in my case. Next time I will talk about deployment. In the meantime, if you need help understanding git, I recommend taking a look at the Learn Git Branching site.

This post was initially going to be about refactoring a mail wrapper class that I came across, but there was a slight detour so today I will just write about adding a test for this class.

Here is a reproduction of the original code with some hardcoded text changed to protect the innocent:

public static class MailWrapper
{
    public static Task sendMessage(String subj, String content)
    {
        var msg = new SendGridMessage();

        var mailUser = "some@email.com"; // ConfigurationManager.AppSettings["emailFrom"];
        msg.SetFrom(new EmailAddress(mailUser, "site mailer"));

        var recipients = new List<EmailAddress>
        {
            new EmailAddress("some.other@email.com", "Recipient")
        };
        msg.AddTos(recipients);

        msg.SetSubject(subj);

        msg.AddContent(MimeType.Html, content);

        var apiKey = "someApiKey"; // ConfigurationManager.AppSettings["SENDGRID_APIKEY"];
        var client = new SendGridClient(apiKey);

        return client.SendEmailAsync(msg);
    }
}

This method is then called from an MVC controller:

[HttpPost]
public async Task<ActionResult> Index(ContactModel cm)
{
    if (ModelState.IsValid)
    {
        var msg = "Username: " + cm.UserName +
            "<br /> Email: " + cm.Email +
            "<br /> Message: " + cm.Message;
        await MailWrapper.sendMessage("Message from the contact form", msg);
        return View("MailSent");
    }
    return View();
}

There are some issues with this code:

  • Hardcoded (and sensitive) strings
  • The MailWrapper class is in the same project as the MVC code
  • No test code was written
  • The code can be cleaner and more readable
  • The return statement of the sendMessage method actually produces a Task<Response>, but the method signature declares just Task, so callers cannot inspect the response

There is also one very important good point: the controller is not coupled directly to SendGrid. If we want to change the third-party library in the future, there is only one place that needs to be modified.
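As a sketch of where this decoupling could eventually go (note that the IMailSender and SendGridMailSender names are hypothetical, invented for this illustration and not part of the original code), the wrapper could sit behind an interface so that only a single implementation class knows about SendGrid:

```csharp
using System.Threading.Tasks;

// Hypothetical abstraction: the controller would depend on this interface
// rather than on a concrete mail class.
public interface IMailSender
{
    Task SendMessageAsync(string subject, string content);
}

// Only this implementation knows about SendGrid; swapping providers later
// means writing a new class that implements the same interface.
public class SendGridMailSender : IMailSender
{
    public Task SendMessageAsync(string subject, string content)
    {
        // The body of the existing MailWrapper.sendMessage would move here.
        return Task.CompletedTask; // placeholder for the real SendGrid call
    }
}
```

An added benefit is that the controller can then be unit tested with a fake IMailSender, without sending any real email.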

So I’ll start by creating a test. Looking at the code, there doesn’t seem to be enough of an argument at the moment for a unit test, because the code is mostly calling a third-party library. So in this case, we’ll go with an integration test. This ensures that when running the tests after a code change or a third-party library update, we will know that sending emails still works.

public class MailWrapperTest
{
    [Fact]
    public async Task Can_Send_A_Message()
    {
        var response = await MailWrapper.sendMessage(
            "This is a test", 
            "Feel free to delete the message as soon as you read it.");

        response.StatusCode.Should().Be(HttpStatusCode.Accepted);
    }
}

I have also decided to change the signature of the sendMessage method so that we can now also check the response:

public static Task<Response> sendMessage(String subj, String content)

There is a small side effect when running this test: an email is actually sent to the selected recipient. In a future version, we could also check that this email was successfully received.

It’s always a good thing to get free recommendations from experts, and Azure Advisor is one such good thing.

When clicking on the Advisor tab, you will be presented with a dashboard. Give it some time to load, and you will be shown your personalised recommendations split into four categories.

An overview of Advisor recommendations

The categories are availability, security, performance, and cost. One example of each: for availability, enabling soft delete of blobs on storage accounts so that any accidental deletion can be recovered; for security, enabling secure transfer, again on storage accounts, to force all requests over HTTPS. For performance, Advisor can suggest optimal cache sizes and database optimisations. Finally, for cost, the tool can detect underutilised VMs and propose switching them off or scaling them down.

There is usually minimal friction in implementing the recommended fix: Advisor provides details about the tip and a list of steps, and sometimes it will happily apply the change for you if you decide to go for it. Let’s try one of the suggestions; I will go for high availability.

high availability summary

Here I am being told that creating an Azure Service Health alert will notify me of any server issues that affect my resources. This has a low impact because it will not result in any changes to the resources themselves. Let’s drill down further into the recommendation.

recommendation details

Here I have a short description of the benefits of rolling out this change, and a link to more detailed information. I also have the option to postpone this recommendation for a day, week, month, or quarter, or to dismiss it so that I am not notified about this suggestion again. Most importantly, I have the option to create an Azure service health alert.

I’ll just fill in the required details so I get notified of service interruptions concerning any resources related to my website. The events that can trigger an alert are service issues, planned maintenance, and health advisories, and I can choose to be notified for any combination of them.

creating an alert rule

Then it’s just a matter of clicking the create button and we're good to go.

This post is the conclusion to a series. Make sure you also read Part 1 and Part 2.

It’s time to concentrate on the implementation of our Outlet class. We will add a new private field holding the time zone of each company.

private readonly string[] timezones =
{
    "Pacific Standard Time",
    "Central Standard Time",
};

In practice this information would be fetched from a data store, and you would do well to put that access behind an interface so it can be mocked. I will leave this part as an exercise for you.

The code for fetching the current date and time is now as follows.

public string GetLocalDateTime(long id)
{
    var tz = TimeZoneInfo.FindSystemTimeZoneById(timezones[id]);
    var currentDate = dateTimeWrapper.Now();
    var offset = tz.GetUtcOffset(currentDate);
    var outletDate = currentDate.AddMinutes(offset.TotalMinutes);

    return outletDate.ToString("M/d/yyyy HH:mm");
}

So this method first looks up the selected company’s time zone through FindSystemTimeZoneById, and then reads the current date and time from the wrapper we created in part 2. We continue by getting the time zone offset for this particular date by means of GetUtcOffset. This step is required because the offset differs between winter and summer due to daylight saving. The final step is to add this offset to the date (note that it is applied in minutes because some time zones vary by half-hour steps) and return it as a string.
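To see why GetUtcOffset needs the date and not just the zone, here is a small standalone sketch comparing the offset for the same zone at a winter and a summer date (the specific dates are arbitrary examples; note that the Windows id "Pacific Standard Time" may need the IANA id "America/Los_Angeles" on Linux or macOS, depending on your .NET version):

```csharp
using System;

class OffsetDemo
{
    static void Main()
    {
        var tz = TimeZoneInfo.FindSystemTimeZoneById("Pacific Standard Time");

        // Two arbitrary UTC instants, one in winter and one in summer.
        var winter = new DateTime(2021, 1, 15, 12, 0, 0, DateTimeKind.Utc);
        var summer = new DateTime(2021, 7, 15, 12, 0, 0, DateTimeKind.Utc);

        // The offsets differ by an hour because of daylight saving time.
        Console.WriteLine(tz.GetUtcOffset(winter)); // -08:00:00
        Console.WriteLine(tz.GetUtcOffset(summer)); // -07:00:00
    }
}
```

Passing a fixed offset instead of asking the zone for the offset at a given date is a classic source of off-by-one-hour bugs around the DST transitions.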

We did the change with confidence, and can now look at the test results:

All the tests are passing

We have now seen how code that interacts with the host system should be written. While using the operating system’s resources is essential in most programs, accessing those resources indirectly ensures that your code is easier to test, independently of the environment it runs on.

Now that you know the basic concept, I have to reveal that there is a library that already does this. This popular library is NodaTime, which its authors describe as a better date and time API for .NET. NodaTime exposes everything through the IClock interface and even provides a mocking solution in the form of FakeClock.
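To give a flavour of how that looks, here is a small sketch using NodaTime's IClock and FakeClock (the Stamper class is a hypothetical example, invented here; the NodaTime calls themselves are real API):

```csharp
using NodaTime;
using NodaTime.Testing;

// Hypothetical example class: it depends on IClock instead of DateTime.Now,
// so tests can fully control the current time.
public class Stamper
{
    private readonly IClock clock;
    public Stamper(IClock clock) => this.clock = clock;

    public Instant Now() => clock.GetCurrentInstant();
}

public static class Demo
{
    public static void Main()
    {
        // In production you would pass SystemClock.Instance.
        // In a test, FakeClock provides a fixed, controllable time.
        var clock = FakeClock.FromUtc(2021, 6, 1);
        var stamper = new Stamper(clock);

        var before = stamper.Now();
        clock.Advance(Duration.FromHours(2)); // move time forward deterministically
        var after = stamper.Now();
        // 'after' is exactly two hours later than 'before'.
    }
}
```

This is essentially the same dependency-inversion idea as our dateTimeWrapper, just packaged in a mature library.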