Last time, we left off by triggering the NuKeeper build pipeline that we had just created. Today, we will add a schedule to the pipeline configuration so that it runs regularly, and we can get on with the rest of our work safe in the knowledge that automation is taking care of the updates.

To start, click on the pipelines icon and choose Edit from the context menu of the existing pipeline.

Menu to edit the build pipeline

Choose the Triggers tab and then click on the Add button to create the schedule. Enter the day of the week and the time when you wish the build to run. Make sure to uncheck the option to ‘Only schedule builds if the source or pipeline has changed’, because otherwise NuKeeper will only check for updates whenever you also change the source code.

Adding a scheduled trigger
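As an aside, if your pipeline lives in YAML rather than the classic editor, the same schedule can be expressed as a cron trigger. Here is a minimal sketch, where the cron expression (every Monday at 06:00 UTC) and the branch name are just examples; note that always: true is the YAML equivalent of unchecking that option:

    schedules:
    - cron: '0 6 * * 1'          # every Monday at 06:00 UTC
      displayName: Weekly NuKeeper run
      branches:
        include:
        - master
      always: true               # run even if the source has not changed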

Once happy with the setup, save your changes.

Save the build pipeline

And enter a comment which describes the modification.

Describing what was done in this change

You’re all set now. NuKeeper will check for NuGet package updates according to the schedule. However, we can go one step further. Let’s modify the branch policy so that reviewers are added automatically when NuKeeper creates a pull request. Click on the icons as shown below:

Edit the branch policies

And add the code reviewers as required. You can include individual users or groups of users; in the latter case, any group member will be able to approve the update. When a pull request is created, all the reviewers listed here will receive a notification. We can also specify a file path, since we know that NuKeeper will only modify the .csproj files.

Include the reviewers automatically

Lastly, we can also talk about configuration. The full details of the configuration options are described on the NuKeeper site, but let’s pick out a few of the most popular ones. These configuration options can be added to your pipeline as arguments in the NuKeeper task.

Azure DevOps NuKeeper configuration

By default, the only argument is -m 3, which means that the task will update up to three packages at any one time. We can also set the minimum age of the packages that we want to update (with the -a argument), which defaults to seven days if not specified. This way, we wait for a package to be tried and tested in the wild before we automatically attempt to add it to our solution. This value can also be specified in hours, or disabled completely so that any new package is picked up immediately.

Another popular argument is the -c argument (which stands for change). Here we can specify the level of the version changes we want NuKeeper to include: patch, minor, or major. Major version changes can potentially include breaking changes, and some teams may choose to apply these updates manually, or otherwise plan for them.

If we know that certain packages contain breaking changes, we can use the exclude argument (-e) so that NuKeeper will leave them alone. The consolidate argument (-n) will combine all the updates into a single pull request; by default, the task creates a pull request for each package. The useprerelease argument (no shortcut for this one) specifies whether we want to include alpha or beta package releases, and the nice thing is that by default it will only update to a newer pre-release version if you are already using a pre-release version of a package. I will let you explore the rest of the options, but the ones above will already give you a big advantage and a lot of flexibility.
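To make this concrete, here is a sketch of how a few of these options might be combined in the task. The task name NuKeeper@0 and its single arguments input are assumptions based on how the extension is typically invoked from YAML, and the values (five packages, a three-day age, minor changes only, one consolidated pull request) are purely illustrative:

    steps:
    - task: NuKeeper@0
      displayName: Update NuGet packages
      inputs:
        arguments: '-m 5 -a 3d -c minor -n'

In the classic editor, the same string simply goes into the Arguments field of the task.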

NuKeeper is a tool that automatically updates the third-party NuGet packages inside your solution to their latest versions. Today we will create a pipeline which will be set up specifically to run this tool. NuKeeper will then scan the solution, and if there are updates it will go ahead and create a new feature branch, apply the changes, and present them as a pull request for you to approve. Isn’t that a handy buddy?

So we will start by clicking the New Pipeline button within the Pipelines section of Azure DevOps. At the moment, the Azure DevOps YAML editor does not yet provide the full experience for installing and choosing a new task (which we need for NuKeeper), so we will use the classic editor instead.

Given the choice, we will go for the classic build pipeline editor

Select Azure Repos Git as the source control type, and find the relevant repository where the source code is stored. Obviously, if your source code uses a different source control type, it has to be configured accordingly.

Choosing the location of our solution

Next, I’m offered the option to create a pipeline from a template, but I will start with an empty job instead. This creates a simple job with no tasks, which is fine since we only need to add one specific task. Make sure to change the name to indicate that this pipeline will be used for updating third-party libraries inside the solution. As for the agent that the pipeline will run on, I’m happy enough to use the windows-2019 hosted agent to run this job for me.

An empty build pipeline is created

In the case of NuKeeper, the job is very easy to configure. It just needs one task, which we add by clicking on the + button:

Adding a task to the build pipeline

The Add tasks section will show up. Here we can search directly for ‘nukeeper’ to get the result we want from the marketplace. The first time a task from the marketplace is used within your organisation, it has to be approved by an administrator; click on the ‘Get it free’ button to do that. When we want to use the NuKeeper task again in any future pipelines, it will already be approved and the Add button will be shown directly.

Find the NuKeeper task

Clicking the button will open the NuKeeper page within the Visual Studio Marketplace site, where we need to click the ‘Get it free’ button once again.

NuKeeper on the Visual Studio Marketplace

And one more button click to install the extension.

Marketplace extension needs to be approved by the organisation the first time it is used

One thing to note is that if you are not the administrator of your organisation, you can still click the ‘Get it free’ button. At that point, an email request will be sent to the administrator, who will then need to accept it and complete the installation step.

After a few seconds, you should see that the installation was completed successfully.

NuKeeper extension added to organisation

When I got back to my build pipeline, I still needed to refresh the page so that I could actually select the NuKeeper task I had just installed. To do this, save the pipeline first to keep the existing configuration, and then reload the whole page.

Saving the build pipeline

You will be asked to choose the folder where the pipeline will be saved. The default will do for now, but if you have a project with multiple build pipelines, organising them into specific folders becomes important.

Choosing the destination folder for the build pipeline save

Now we can see that it is possible to add the task to the job, so go ahead and click the Add button.

Add NuKeeper task to pipeline

Once added, the NuKeeper task will be shown inside the job. NuKeeper has some configuration options, but we will stick with the default settings for now, meaning that NuKeeper will update up to three packages at a time. I will explain the configuration options in a future post.

NuKeeper task added
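For reference, once the extension has been approved for the organisation, the task can also be used from a YAML pipeline. A minimal sketch, assuming the extension publishes the task as NuKeeper@0, would be just this (with no arguments, the defaults described above apply):

    steps:
    - task: NuKeeper@0
      displayName: NuKeeper package updates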

So now it’s time to Save & queue the job we just configured. A new window will be shown, where we will Save and run. Note the comment in the screenshot below, which explains the changes that are being saved.

Save and run the build pipeline

Once the build is run, you can click on the job name to follow the progress of the build pipeline.

Progress of the build run

Around a minute later we get a summary of the build run.

Summary shows that everything is fine

Everything looks OK at first glance; all those green checkmarks must surely indicate that it is so. However, digging deeper, we can see that there is a problem:

Details view shows that there are problems

NuKeeper has identified that there are packages to update, but the updates failed, apparently with ‘LibGit2SharpException : request failed with status code: 403’. This is due to some permissions that we need to assign to the build service. We need to go to the Project settings menu, then Repositories, and then choose the relevant repository that NuKeeper should be allowed to modify.

Navigating to the repository settings from the project settings

The NuKeeper task, running as the project collection build service, needs to be able to create a new branch and contribute code changes to it. Note the required permissions marked with a green checkmark below.

Changing repository settings

Let’s run the build pipeline again. Now we can see that the updates are successful!

Running the build pipeline again

And there are also pull requests for us to approve!

Pull requests from NuKeeper are available

In the screenshot below, we can see a detailed report of the changes included in the pull request. We can also see that the pull request has triggered a build, which we had set up in a previous post about setting up pull requests in Azure DevOps. This build will confirm that the changes NuKeeper has made still compile and that the unit tests all pass. With this peace of mind, we can then go ahead and approve the pull request.

NuKeeper pull request details

Next time, we will look at the different configuration options that NuKeeper provides. We will also schedule the task to run regularly and so keep our solution always up to date with minimal effort.

SonarQube is a static code analyser which can detect bugs, vulnerabilities, and code smells, as well as duplicate code. SonarQube is free and open source and can run on several platforms in the cloud, but you can also install it on your local network, as I will show you here. For me, it’s an essential tool that also monitors the history of your code repo and displays graphs indicating how a project’s code quality has evolved over its lifetime.

We’ll start by downloading the required files. First, there’s the SonarQube server itself, which is available from https://www.sonarqube.org/downloads/. I decided to go with version 7.9, which is the long term support version. SonarQube is a Java program, so we need Java 11.0.5 (also LTS). Download and install from https://www.oracle.com/technetwork/java/javase/downloads/jdk11-downloads-5066655.html if it’s not already on your system.

When using a SQL Server connection, SonarQube requires the appropriate JDBC driver to be available on your system. Download it from https://www.microsoft.com/en-us/download/details.aspx?id=55539, then extract and copy sqljdbc_auth.dll from the auth folder into the windows\system32 folder.

Next, we will create a database where the analysis data will be saved. The important thing for SonarQube is to use a collation option which is both case-sensitive and accent-sensitive, for example, Latin1_General_100_CS_AS:

Creating the SonarQube database inside SQL Server
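In T-SQL, this step boils down to a one-liner; the database name sonar below matches the name we will use in the connection string later:

    CREATE DATABASE sonar COLLATE Latin1_General_100_CS_AS;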

We also need to adjust SQL Server’s network configuration so that SonarQube can connect to it later. Run SQL Server Configuration Manager and, in the TCP/IP properties window inside the protocols section, disable dynamic ports and choose a TCP port number (1433 is the default).

Configuring SQL Server network settings

Now extract the downloaded SonarQube files into a new folder. This will result in a number of files and folders. Open the sonar.properties file inside the conf folder to set the database connection. SonarQube has a default value for all configuration settings, so initially, all configuration lines in sonar.properties are commented out. SonarQube is set to use its own internal database by default, so for SQL Server we have to find the relevant line and uncomment it. This setting can be found under the section ‘Microsoft SQLServer 2014/2016/2017 and SQL Azure’. The value should read

sonar.jdbc.url=jdbc:sqlserver://localhost:1433;databaseName=sonar;integratedSecurity=true

With this done, we can test that everything has been set up properly. Inside the bin\windows-x86-64 folder, there is a batch file named StartSonar.bat. If we run it and do not get any errors in the command line, we can open a browser window and point it at http://localhost:9000, which is the default SonarQube port.

The SonarQube Start Page

As you can see, this shows us the SonarQube welcome screen, just waiting for us to set up something for it to analyse. We will do this in the next update. For now, if you have managed to get this far, you may also run InstallNTService.bat. This will install the SonarQube Windows service, ensuring that SonarQube always starts automatically when you restart your computer or server. You will need to run the batch command as an administrator, and also set up the service to log on as a user who has access to the database.
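Put together, the service installation from an elevated command prompt might look like the following sketch. I’m assuming the SonarQube files were extracted to C:\sonarqube and that the service gets registered under the name SonarQube (check the Services console to confirm); the account name and password are placeholders:

    cd C:\sonarqube\bin\windows-x86-64
    InstallNTService.bat
    REM Assumption: the service is registered under the name "SonarQube".
    REM Point it at an account that has access to the sonar database:
    sc config "SonarQube" obj= ".\sonarservice" password= "********"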

So last time, I set up a build and a two-stage deployment pipeline, which is all well and good. However, today I want to look at how we can leverage continuous integration and add a branch policy. This will allow us to be selective about what code we accept into our master branch and eventually put into production.

To access the branch policy for the master branch, navigate through Repos > Branches, then choose Branch policies from the context menu:

The branch policy menu

Here we have several options to protect against code accidentally making its way into the master branch and thus triggering the release pipeline. For example, I always set the minimum number of reviewers to at least one. Remember, having the code read by an additional pair of eyes avoids silly mistakes and may bring up questions that you didn’t think about during development.

Another option I always set is to check for linked work items. Linking work items to a commit is always good practice, as it helps anyone understand the code’s history and see the relation between code changes and functional requests. Once selected, this option can be enforced as either required or optional; in the latter case, it will just show a friendly warning.

Build validation is used to trigger a build whenever there is a pull request. This ensures that the code compiles and passes the tests. To set this up, click on the + Add Build Policy button. This opens a new window where we can select the build pipeline to be triggered, and as before, this policy can be set as required or optional. I always set this one as required, since I never want code that breaks the build to be included in the master branch.

Setting up a build policy

Next, we can also add code reviewers automatically. This is helpful when there are specific people who have to approve the change. Adding them here means they will be set as reviewers when a pull request is created, and they will get a notification. With some simple rules, I have set up a branch policy that suits me and keeps me from inadvertently causing issues.
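Incidentally, if you would rather script these policies than click through the UI, the Azure DevOps CLI extension can create them as well. Here is a rough sketch based on the az repos policy command group; the repository GUID and pipeline ID are placeholders, the chosen values mirror the settings above, and it is worth double-checking the flags against your CLI version:

    az repos policy approver-count create --branch master --repository-id <repo-guid> --blocking true --enabled true --minimum-approver-count 1 --creator-vote-counts false --allow-downvotes false --reset-on-source-push true

    az repos policy work-item-linking create --branch master --repository-id <repo-guid> --blocking false --enabled true

    az repos policy build create --branch master --repository-id <repo-guid> --build-definition-id <pipeline-id> --display-name "PR validation" --blocking true --enabled true --manual-queue-only false --queue-on-source-update-only true --valid-duration 720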

Following up from the previous post, where we created a CI/CD pipeline directly from Visual Studio 2019, I will now show you how to add another stage to the pipeline. This new production stage will be added as the last stage and will only be triggered after someone approves it by pushing THE button.

Let’s go to our Azure DevOps release pipeline again. Hover over the previously auto-generated stage, and a plus button will be shown. Hovering over that one as well will reveal that this is in fact the Add button.

Add stage button

However, what I want in this case is for the production stage to repeat what is in the dev stage, with the only difference being the Web App where the artifact will be deployed. So what I will do is clone the dev stage and then modify it. That’s what the Clone button is for. Go on, click it to create a copy of the dev stage.

Clone stage button

We can now modify the production stage. We just need to change the name of the stage as well as the App Service name in the deploy task:

Choosing the app service to deploy to

However, in order to be able to choose the required web app, we first need to authorise it. The process that automatically generated the pipeline also created a service principal on the web app itself, so we just need to copy this to the other web app from the access control settings, as shown below:

Authorise the app service principal

The production deployment stage is now in place, but since it was cloned from the previous stage, it will be triggered automatically after each build. We will change it so that it only deploys to production after being approved. The lightning icon takes you to the trigger configuration.

Button to access the pre-deployment conditions

It’s just a matter of switching pre-deployment approvals to on and selecting the people who can approve. I will leave the approval policy settings alone for my personal website, but they may come in handy when pushing code to production and other people are involved.

Setting up pre-deployment approval

So there it is: a two-stage pipeline where the Dev stage is automatically triggered on each build, and the result is optionally pushed to Prod after approval.

Two-stage pipeline with Dev and Prod stages
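The classic release editor is what we used here, but for completeness, the same shape can also be sketched as a multi-stage YAML pipeline, where an approval check on the prod environment plays the role of the pre-deployment approval. In this sketch, the service connection, app names, and package path are all placeholders:

    stages:
    - stage: Dev
      jobs:
      - deployment: DeployDev
        pool:
          vmImage: windows-2019
        environment: dev
        strategy:
          runOnce:
            deploy:
              steps:
              - task: AzureWebApp@1
                inputs:
                  azureSubscription: my-service-connection   # placeholder
                  appName: my-webapp-dev                     # placeholder
                  package: $(Pipeline.Workspace)/drop/*.zip  # placeholder
    - stage: Prod
      dependsOn: Dev
      jobs:
      - deployment: DeployProd
        pool:
          vmImage: windows-2019
        environment: prod   # configure an Approvals check on this environment
        strategy:
          runOnce:
            deploy:
              steps:
              - task: AzureWebApp@1
                inputs:
                  azureSubscription: my-service-connection
                  appName: my-webapp-prod
                  package: $(Pipeline.Workspace)/drop/*.zip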
