NuKeeper is a tool that automatically updates the third-party NuGet packages inside your solution to their latest versions. Today we will create a pipeline set up specifically to run this tool. NuKeeper will scan the solution, and if there are updates it will go ahead and create a new feature branch, apply the changes, and present them as a pull request for you to approve. Isn’t that a handy buddy?

So we will start by clicking the New Pipeline button within the Pipelines section of Azure DevOps. At the moment, the Azure DevOps YAML editor does not yet provide the full experience for installing and selecting a marketplace task (which we need for NuKeeper), so we will use the classic editor instead.

Given the choice, we will go for the classic build pipeline editor

Select Azure Repos Git as the source control type, and find the relevant repository where the source code is stored. Obviously, if your source code lives in a different source control system, this step has to be configured accordingly.

Choosing the location of our solution

Next, I’m offered the option to create a pipeline from a template, but I will start with an empty job instead. This creates a simple job with no tasks, which is fine since we only need to add one specific task. Make sure to change the name to indicate that this pipeline will be used for updating third-party libraries inside the solution. As for the agent that the pipeline will run on, I’m happy enough to use the windows-2019 hosted agent for this job.

An empty build pipeline is created

In the case of NuKeeper, the job is very easy to configure. It just needs one task, which is added by clicking on the + button:

Adding a task to the build pipeline

The Add tasks section will show up. Here we can search directly for ‘nukeeper’ to get the result we want from the marketplace. The first time a task from the marketplace is used within your organisation, it has to be approved by an administrator. Click on the ‘Get it free’ button to do that. If we want to use the NuKeeper task again in any future pipelines, it will already be approved and the Add button will be shown directly.

Find the NuKeeper task

Clicking the button will open the NuKeeper page within the Visual Studio Marketplace site, where we need to click the ‘Get it free’ button once again.

NuKeeper on the Visual Studio Marketplace

And one more button click to install the extension.

Marketplace extension needs to be approved by the organisation the first time it is used

One thing to note is that if you are not the administrator of your organisation, you can still click the ‘Get it free’ button. At that point, an email request will be sent to the administrator, who will then need to approve the request and perform the installation.

After a few seconds, you should see that the installation was completed successfully.

NuKeeper extension added to organisation

When I got back to my build pipeline, I still needed to refresh the page so that I could actually select the NuKeeper task I had just installed. To do this, save the pipeline first to keep the existing configuration, and then reload the whole page.

Saving the build pipeline

You will be asked to choose the folder where the pipeline will be saved. The default will do for now, but if you have a project with multiple build pipelines, organising them into specific folders becomes important.

Choosing the destination folder for the build pipeline save

Now we can see that it is possible to add the task to the job, so go ahead and click the Add button.

Add NuKeeper task to pipeline

Once added, the NuKeeper task will be shown inside the job. NuKeeper has some configuration options, but we will stick with the default settings for now: by default, NuKeeper will update up to three packages at a time. I will explain the configuration options in a future post.

NuKeeper task added
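For reference, once the extension is installed, the same job can also be expressed as a YAML pipeline, even though the classic editor makes the task easier to discover. Here is a minimal sketch, assuming the NuKeeper@0 task name that the extension provides:

# Minimal YAML sketch of the same job (task name assumed from the extension)
pool:
  vmImage: 'windows-2019'

steps:
- task: NuKeeper@0   # defaults apply: up to three package updates per run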

So now it’s time to Save & queue the job we just configured. A new window will be shown, where we will Save and run. Note the comment in the screenshot below, which explains the changes that are being saved.

Save and run the build pipeline

Once the build is running, you can click on the job name to follow the progress of the build pipeline.

Progress of the build run

Around a minute later we get a summary of the build run.

Summary shows that everything is fine

Everything looks OK at first glance; all those green checkmarks must surely indicate that it is so. However, digging deeper we can see that there is a problem:

Details view shows that there are problems

NuKeeper has identified that there are packages to update, but the updates failed, apparently with a LibGit2SharpException : request failed with status code: 403. This is due to some permissions that we need to grant to the build service. To fix it, go to the Project settings menu, then Repositories, and choose the relevant repository that we need to allow NuKeeper to modify.

Navigating to the repository settings from the project settings

The NuKeeper task runs as the project collection build service, which needs to be able to create a new branch and contribute code changes to it. Note the required permissions marked with a green checkmark below.

Changing repository settings

Let’s run the build pipeline again. Now we can see that the updates are successful!

Running the build pipeline again

And there are also pull requests for us to approve!

Pull requests from NuKeeper are available

In the screenshot below, we can see a detailed report of the changes included in the pull request. We can also see that the pull request has triggered a build, which we set up in a previous post about setting up pull requests in Azure DevOps. This build confirms that the changes NuKeeper has made still compile and that the unit tests all pass. With this peace of mind, we can go ahead and approve the pull request.

NuKeeper pull request details

Next time we will see the different configuration options that NuKeeper provides. We will also schedule the task to run regularly and so keep our solution always up to date with minimal effort.

SonarQube is a static code analyser which can detect bugs, vulnerabilities, code smells, and duplicate code. SonarQube is free and open source; it can run on several platforms in the cloud, but you can also install it on your local network, as I will show you here. For me, it’s an essential tool that also tracks the history of your code repo and displays graphs showing how a project’s code quality has evolved over its lifetime.

We’ll start by downloading the required files. First, there’s the SonarQube server itself, which is available from https://www.sonarqube.org/downloads/. I decided to go with version 7.9, which is the long term support version. SonarQube is a Java program, so we need Java 11.0.5 (also LTS). Download and install from https://www.oracle.com/technetwork/java/javase/downloads/jdk11-downloads-5066655.html if it’s not already on your system.

When using a SQL Server connection, SonarQube requires the appropriate JDBC driver to be available on your system. Download it from https://www.microsoft.com/en-us/download/details.aspx?id=55539, then extract and copy sqljdbc_auth.dll from the auth folder into the windows\system32 folder.

Next, we will create a database where the analysis data will be saved. The important thing for SonarQube is to use a collation option which is both case-sensitive and accent-sensitive, for example, Latin1_General_100_CS_AS:

Creating the SonarQube database inside SQL Server
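For reference, here is the equivalent T-SQL, a minimal sketch assuming the database name sonar that we will use later in the connection string:

-- case-sensitive, accent-sensitive collation as required by SonarQube
CREATE DATABASE sonar COLLATE Latin1_General_100_CS_AS;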

We also need to adjust SQL Server’s network configuration so that SonarQube can connect to it later. Run SQL Server Configuration Manager, open the TCP/IP properties window inside the protocols section, disable dynamic ports, and choose a static TCP port number (1433 is the default).

Configuring SQL Server network settings

Now extract the downloaded SonarQube files into a new folder; this will produce a number of files and folders. Open the sonar.properties file inside the conf folder to set the database connection. SonarQube has a default value for every configuration setting, so initially all configuration lines in sonar.properties are commented out. SonarQube uses its own internal database by default, so for SQL Server we have to find the relevant line and uncomment it (since we are using integrated Windows authentication, the sonar.jdbc.username and sonar.jdbc.password lines stay commented out). The setting can be found under the section ‘Microsoft SQLServer 2014/2016/2017 and SQL Azure’. The value should read:

sonar.jdbc.url=jdbc:sqlserver://localhost:1433;databaseName=sonar;integratedSecurity=true

With this done, we can test that everything has been set up properly. Inside the bin\windows-x86-64 folder, there is a batch file named StartSonar.bat. If we run it and no errors appear in the command line, we can open a browser window and point it at http://localhost:9000, the default SonarQube port.

The SonarQube Start Page

As you can see, this shows us the SonarQube welcome screen, just waiting for us to set up something for it to analyse. We will do this in the next post. For now, if you have managed to get this far, you may also run InstallNTService.bat. This installs the SonarQube Windows service, ensuring that SonarQube always starts automatically when you restart your computer or server. You will need to run the batch file as an administrator, and also set up the service to log on as a user who has access to the database.
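As a recap, the commands involved look something like this, run from an elevated command prompt inside the extracted SonarQube folder. The last line is only a sketch: the service name and log-on account are assumptions, and you can configure the same thing through services.msc instead.

cd bin\windows-x86-64
:: run interactively first to verify the setup
StartSonar.bat
:: install SonarQube as a Windows service (requires an elevated prompt)
InstallNTService.bat
:: set the log-on account; service name "SonarQube" and account are assumptions
sc config "SonarQube" obj= ".\sonar_svc" password= "<password>"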

Last time, I set up a build and a two-stage deployment pipeline, which is all well and good. However, today I want to look at how we can leverage continuous integration and add a branch policy. This will allow us to be selective about what code we accept into our master branch and eventually put into production.

To access the branch policy for the master branch, navigate through Repos > Branches, then choose branch policies from the context menu:

The branch policy menu

Here we have several options to protect against code accidentally reaching the master branch and thus triggering the release pipeline. For example, I always set the minimum number of reviewers to at least one. Remember, having the code read by an additional pair of eyes avoids silly mistakes and may bring up questions that you didn’t think about during development.

Another option I always set is to check for linked work items. Linking work items to a commit is always good practice: it helps anyone understand the code’s history and see the relation between code changes and functional requests. Once selected, this option can be enforced as either required or optional; in the latter case, it will just show a friendly warning.

Build validation is used to trigger a build whenever there is a pull request. This ensures that the code compiles and passes the tests. To set this up, click on the + Add Build Policy button. This will open a new window where we can select the build pipeline to be triggered, and as before, this policy can be set as required or optional. I always set this one as required, since I never want code that breaks the build to be included in the master branch.

Setting up a build policy

Next, we can also add code reviewers automatically. This is helpful when there are specific people who have to approve the change. Setting them here means they are added as reviewers whenever a pull request is created, and they will get a notification. With some simple rules, I have set up a branch policy that suits me and keeps me from inadvertently causing issues.

Following up from the previous post where we created a CI/CD pipeline directly from Visual Studio 2019, I will now show you how to add another stage to the pipeline. This new production stage will be added as the last stage and will only be triggered after someone approves it by pushing THE button.

Let’s go to our Azure DevOps release pipeline again. Hover over the previously auto-generated stage, and a plus button will be shown. Hovering over that one as well will reveal that this is in fact the Add button.

Add stage button

However, what I want in this case is for the production stage to repeat what is in the dev stage, the only difference being the Web App that the artifact will be deployed to. So what I will do is clone the dev stage and then modify it. That’s what the Clone button is for. Go on, click it to create a copy of the dev stage.

Clone stage button

We can now modify the production stage. We just need to change the name of the stage as well as the App Service name in the deploy task:

Choosing the app service to deploy to

However, in order to be able to choose the required web app, we first need to authorise it. The process that automatically generated the pipeline also created a service principal with access to the original web app, so we just need to grant the same principal access to the other web app from its access control settings, as shown below:

Authorise the app service principal
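If you prefer the command line to the portal, the same authorisation can be sketched with the Azure CLI. All values below are placeholders, and the Contributor role is an assumption based on what the auto-generated pipeline set up on the first web app:

az role assignment create ^
  --assignee <service-principal-id> ^
  --role Contributor ^
  --scope /subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.Web/sites/<production-web-app>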

The production deployment stage is now in place, but since it was cloned from the previous stage, it will still be triggered automatically after each build. We will change it so that it only deploys to production after it is approved. The lightning icon takes you to the trigger configuration.

Button to access the pre-deployment conditions

It’s just a matter of switching pre-deployment approvals on and selecting the people who can approve. I will leave the approval policy settings alone for my personal website, but they may come in handy when pushing code to production involves other people.

Setting up pre-deployment approval

So there it is: a two-stage pipeline where the Dev stage is triggered automatically on each build, and the Prod stage runs only after approval.

Two-stage pipeline with Dev and Prod stages
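For completeness, the same shape can also be expressed as a YAML pipeline. This is only a minimal sketch, not what the wizard generated: the service connection, app names, and environments are placeholders, and the approval itself is configured as a check on the prod environment in the Azure DevOps UI rather than in the YAML.

stages:
- stage: Dev
  jobs:
  - deployment: DeployDev
    environment: dev   # no checks: deploys after every successful build
    pool:
      vmImage: 'windows-2019'
    strategy:
      runOnce:
        deploy:
          steps:
          - task: AzureWebApp@1
            inputs:
              azureSubscription: '<service-connection>'   # placeholder
              appName: '<dev-web-app>'                    # placeholder
              package: '$(Pipeline.Workspace)/drop/*.zip'

- stage: Prod
  dependsOn: Dev
  jobs:
  - deployment: DeployProd
    environment: prod   # pre-deployment approval is a check on this environment
    pool:
      vmImage: 'windows-2019'
    strategy:
      runOnce:
        deploy:
          steps:
          - task: AzureWebApp@1
            inputs:
              azureSubscription: '<service-connection>'
              appName: '<prod-web-app>'
              package: '$(Pipeline.Workspace)/drop/*.zip'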

I have set up Azure build and deployment pipelines from scratch a number of times before, so this time I’m going to generate them from Visual Studio, which should be more straightforward.

The first step of the process is to right-click on the solution name and choose the ‘Configure Continuous Delivery to Azure…’ option.

Configure Continuous Delivery menu in Visual Studio 2019

This will open a window where you need to pick the location of the source code, the subscription, and the app service. For the app service, you can either create a new plan or, as in my case, choose an existing one that is already running.

Choosing source repo and target web app

The output window informs you of the current status of the pipeline creation. It actually took less than a minute for me to have everything up and running.

Pipeline configuration in progress

Let’s head over to Azure DevOps and see the results. Clicking on the blue rocket icon reveals that there is both a pipeline (named a build in previous versions of Azure DevOps) and a release. Both have been triggered automatically, and we can see that the outcome was successful.

Build Pipeline

Release pipeline

Let’s dig a little deeper by clicking the edit button on the pipeline first.

Build pipeline individual tasks

Here we can see that everything has been set up as expected, and the process is intelligent enough to create the right tasks depending on the project type. In this case, we have a process made up of a number of steps that restore the required NuGet packages, build and test the solution, and then package the build output as an artifact. This artifact is then picked up by the release pipeline and deployed to the required environment.
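As a rough YAML equivalent of what the generated classic pipeline contains, the steps look something like the sketch below; the task versions and file patterns are assumptions rather than a copy of the generated definition:

steps:
- task: NuGetToolInstaller@1     # make nuget.exe available on the agent
- task: NuGetCommand@2           # restore the solution's packages
  inputs:
    restoreSolution: '**/*.sln'
- task: VSBuild@1                # build the solution
  inputs:
    solution: '**/*.sln'
    configuration: 'Release'
- task: VSTest@2                 # run the unit tests
- task: PublishBuildArtifacts@1  # publish the build output as an artifact
  inputs:
    PathtoPublish: '$(Build.ArtifactStagingDirectory)'
    ArtifactName: 'drop'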

Release pipeline visualisation

Moving to the release pipeline, we can observe two things. First, the little lightning icon on the Artifacts step brings us to the trigger configuration. Here, it is already set up to trigger a release whenever a new build from the master branch is available.

Continuous deployment trigger

The second thing is the warning icon in the dev deployment stage. Clicking it reveals the details of the issue: we need to choose a deployment agent. Microsoft supplies various agents hosted on Azure, and we have to choose one depending on our solution type.

Agent pool selection

One thing to remember is that these pipelines are free to use for up to 30 hours each month. If you need more than that, you have to take out your credit card and pay for the extra consumption.

Well, that was easy, wasn’t it? I usually choose different names for the various steps, so I will go back and rename them, but here we can see how powerful the template is. Next time, I will also show you how to add additional stages.
