SonarQube is a static code analyser that can detect bugs, vulnerabilities, code smells, and duplicate code. SonarQube is free and open source and runs on several cloud platforms, but you can also install it on your local network, as I will show you here. For me, it’s an essential tool that also monitors the history of your code repo and displays graphs indicating how a project's code quality has evolved over its lifetime.

We’ll start by downloading the required files. First, there’s the SonarQube server itself, which is available from https://www.sonarqube.org/downloads/. I decided to go with version 7.9, which is the long-term support (LTS) version. SonarQube is a Java program, so we also need Java 11.0.5 (also an LTS release). Download and install it from https://www.oracle.com/technetwork/java/javase/downloads/jdk11-downloads-5066655.html if it’s not already on your system.

When using a SQL Server connection, SonarQube requires the appropriate JDBC driver to be available on your system. Download it from https://www.microsoft.com/en-us/download/details.aspx?id=55539, then extract and copy sqljdbc_auth.dll from the auth folder into the Windows\System32 folder.

Next, we will create a database where the analysis data will be saved. The important thing for SonarQube is to use a collation that is both case-sensitive and accent-sensitive, for example, Latin1_General_100_CS_AS:

Creating the SonarQube database inside SQL Server
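The screenshot above boils down to a single T-SQL statement (I’m assuming the database is called sonar here, to match the connection string used later — adjust the name if yours differs):

```sql
-- Create the SonarQube database with a case- and accent-sensitive collation
CREATE DATABASE sonar COLLATE Latin1_General_100_CS_AS;
```

You can run this from SQL Server Management Studio or sqlcmd instead of clicking through the UI.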

We also need to adjust SQL Server’s network configuration so that SonarQube can connect to it later. Run SQL Server Configuration Manager and, in the TCP/IP properties window under the protocols section, disable dynamic ports and set a static TCP port number (1433 by default).

Configuring SQL Server network settings

Now extract the downloaded SonarQube files into a new folder. This will result in a number of files and folders. Open the sonar.properties file inside the conf folder to set up the database connection. SonarQube has a default value for every configuration setting, so initially all the configuration lines in sonar.properties are commented out. By default, SonarQube uses its own internal database, so for SQL Server we have to find the relevant line and uncomment it. This setting can be found under the section ‘Microsoft SQLServer 2014/2016/2017 and SQL Azure’, and the value should read

sonar.jdbc.url=jdbc:sqlserver://localhost:1433;databaseName=sonar;integratedSecurity=true

With this done, we can test that everything has been set up properly. Inside the bin\windows-x86-64 folder, there is a batch file named StartSonar.bat. If running it produces no errors in the command line, we can open a browser window and point it at http://localhost:9000, which is the default SonarQube port.
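For reference, assuming SonarQube was extracted to C:\sonarqube (adjust the path to wherever you put it), the test boils down to:

```batch
cd C:\sonarqube\bin\windows-x86-64
StartSonar.bat
```

Leave the console window open — when run this way, closing it stops the server.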

The SonarQube Start Page

As you can see, this shows us the SonarQube welcome screen, just waiting for us to set up something for it to analyse. We will do this in the next update. For now, if you managed to get this far, you can also run InstallNTService.bat. This installs SonarQube as a Windows service, ensuring that it starts automatically whenever you restart your computer or server. You will need to run the batch file as an administrator, and also set up the service to log on as a user who has access to the database.
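The bin\windows-x86-64 folder also contains companion scripts for the service (script names as shipped with SonarQube 7.9), so a typical sequence from an elevated command prompt looks like this:

```batch
REM run as administrator from inside bin\windows-x86-64
InstallNTService.bat
StartNTService.bat
```

StopNTService.bat and UninstallNTService.bat are there too, should you need to reverse the process.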

Last time, I set up a build and a two-stage deployment pipeline, which is all well and good. However, today I want to look at how we can take continuous integration further and add a branch policy. This will allow us to be selective about what code we accept into our master branch and eventually put into production.

To access the branch policy for the master branch, navigate through Repos > Branches, then choose branch policies from the context menu:

The branch policy menu

Here we have several options to protect against accidentally pushing code to the master branch and thus triggering the release pipeline. For example, I always set the minimum number of reviewers to at least one. Remember, having the code read by an additional pair of eyes avoids silly mistakes and may bring up questions that you didn’t think about during development.

Another option I always set is the check for linked work items. Linking work items to a commit is good practice: it helps anyone understand the code’s history and see the relation between code changes and functional requests. This option can be made either required or optional; in the latter case, a pull request without linked work items just shows a friendly warning.

Build validation is used to trigger a build whenever there is a pull request. This ensures that the code compiles and passes the tests. To set it up, click the + Add Build Policy button. This opens a new window where we can select the build pipeline to be triggered, and as before, this policy can be set as required or optional. I always set this one as required, since I never want code that breaks the build to end up in the master branch.

Setting up a build policy

Next, we can also add code reviewers automatically. This is helpful when there are specific people who have to approve the change. Setting them automatically will add them as reviewers when a pull request is created and they will get a notification. With some simple rules, I have set up a branch policy that suits me and keeps me from inadvertently causing issues.

Following up from the previous post where we created a CI/CD pipeline directly from Visual Studio 2019, I will now show you how to add another stage to the pipeline. This new production stage will be added as the last stage and will only be triggered after someone approves it by pushing THE button.

Let’s go to our Azure DevOps release pipeline again. Hover over the previously auto-generated stage and a plus button appears; hovering over that in turn reveals that it is, in fact, the Add button.

Add stage button

However, what I want in this case is for the production stage to repeat what is in the dev stage with the only difference being the Web App where the artifact will be deployed. So what I will do is to clone the dev stage and then modify it. That’s what the Clone button is for. Go on, click it to create a copy of the dev stage.

Clone stage button

We can now modify the production stage. We just need to change the name of the stage as well as the App Service name in the deploy task:

Choosing the app service to deploy to

However, in order to be able to choose the required web app, we first need to authorise it. The process that auto-generated the pipeline also created a service principal on the web app itself, so we just need to grant it access to the other web app from its access control settings, as shown below:

Authorise the app service principal

The production deployment stage is now in place, but since it was cloned from the previous stage, it will be triggered automatically after each build. We will change it so that it will only deploy to production after it is approved. The lightning icon takes you to the trigger configuration.

Button to access the pre-deployment conditions

It’s just a matter of switching pre-deployment approvals to on and selecting the people who can approve. I will leave the approval policy settings alone for my personal website, but they may come in handy when pushing code to production and other people are involved.

Setting up pre-deployment approval

So there it is, a two-stage pipeline where the Dev stage is automatically triggered on each build, and optionally pushed to Prod after approval.

Two-stage pipeline with Dev and Prod stages

I have set up Azure build and deployment pipelines a number of times from scratch before, so this time I’m going to trigger it from Visual Studio which should be more straightforward.

The first step of the process is to right-click on the solution name and choose the ‘Configure Continuous Delivery to Azure…’ option.

Configure Continuous Delivery menu in Visual Studio 2019

This opens a window from where you need to pick the location of the source code, the subscription, and the app service. For the app service, you can either create a new plan or, as in my case, choose an existing one that is already running.

Choosing source repo and target web app

The output window informs you of the current status of the pipeline creation. It actually took less than a minute for me to have everything up and running.

Pipeline configuration in progress

Let’s head over to Azure DevOps and see the results. Clicking on the blue rocket icon reveals that there are both a pipeline (named ‘build’ in previous versions of Azure DevOps) and a release. Both have been triggered automatically, and we can see that the outcome was successful.

Build Pipeline

Release pipeline

Let’s dig a little bit deeper, by clicking the edit button on the pipeline first.

Build pipeline individual tasks

Here we can see that everything has been set up as expected, and the process is intelligent enough to create the right tasks depending on the project type. In this case, we have a process made up of a number of steps that restore the required NuGet packages, build and test the solution, and then package the build output as an artifact. This artifact is then picked up by the release pipeline and deployed to the required environment.

Release pipeline visualisation

Moving to the release pipeline, we can observe two things. First, the little lightning icon on the Artifacts step brings us to the trigger configuration. Here it is already set up to trigger a release whenever a new build from the master branch is available.

Continuous deployment trigger

The second thing is the warning icon on the dev deployment stage. Clicking it reveals the details of the issue: we need to choose a deployment agent. Microsoft supplies various agents hosted on Azure, and we have to choose one depending on our solution type.

agent pool selection

One thing to remember is that these hosted pipelines are free to use for up to 30 hours each month. If you need more than that, you have to take out your credit card and pay for the extra consumption.

Well, that was easy, wasn’t it? I usually choose different names for the various steps, so I will go around and change them, but here we can see how powerful the template is. Next time I will also show you how to add additional stages.

Whenever I’m working on any project nowadays, I always emphasize the need to automate whatever can be automated from start to end. Keeping up with the latest processes is equally important. However, when it comes to my own site, which I only have time to update occasionally, I never thought about applying these practices. I just change the code and publish from Visual Studio.

However, now, with all the easily accessible functionality of Azure DevOps, I have decided it is time to start running this website as it deserves.

The first thing I will do is migrate the solution from TFVC to a git repository. This brings its own advantages, which mainly revolve around being able to deliver faster, with greater quality and less risk. I will use the Azure DevOps import functionality, but before that, I will delete the old MVC5 project, which was excluded from the solution but still in source control.

To start the import process itself, from the repos section click on the name of the current repo and then choose Import repository.

Importing from a different repo

Then I need to specify where the source code is and choose a name for my new git repo:

Setting the Import repo options

There is a friendly warning that switching can be disruptive. Although both are source control solutions, TFVC and git are quite different, and switching from one to the other takes some getting used to. If you are planning to switch a big team, make sure that all the developers are able to use git with confidence; start with a smaller, non-critical project if need be.

I also decided not to migrate the history, because Microsoft doesn't recommend it, and since the import only brings over the last 180 days of history, I would lose most of it anyway. Once satisfied, I clicked the Import button, and 11 seconds later I was looking at the new repo.

Repo migration in progress

One small issue was that when I opened the solution in Visual Studio, there were a bunch of compiled files marked as changes:

Unwanted pending changes

This is because I needed to add the .gitignore file manually. Since the solution was migrated from TFVC, there was only a .tfignore file, so I deleted it and added a .gitignore (and a .gitattributes) instead. These files tell our tooling which files should not go into source control: we only need the source code there, since build files and packages are regenerated on the build server at build time, so there is no need to litter the git repo with them. The result is that we are left with only two pending changes:
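For illustration, the Visual Studio .gitignore covers the files seen above with entries along these lines (a simplified excerpt, not the full file):

```gitignore
# Visual Studio cache/settings
.vs/
# build output
[Bb]in/
[Oo]bj/
# NuGet packages folder
packages/
```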

These are the files we actually want to push

If you need it, the Visual Studio .gitignore file is available directly from GitHub.

And I also copied the .gitattributes file from one which I had earlier:

###############################################################################
# Set default behavior to automatically normalize line endings.
###############################################################################
* text=auto
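If you want to confirm the attribute is actually being applied, git can tell you directly. Here is a quick throwaway-repo check (the file name Program.cs is just an example; check-attr works on any path):

```shell
# create a throwaway repo containing only the .gitattributes above
git init -q attr-demo
cd attr-demo
printf '* text=auto\n' > .gitattributes
# ask git which value of the 'text' attribute applies to a path
git check-attr text Program.cs
# → Program.cs: text: auto
```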

That’s how easy it was in my case. Next time I will talk about deployment. In the meantime, if you need help understanding git, I recommend taking a look at the Learn Git Branching site.