Why you should always build an Azure based VM in a Resource Group

This blog post is based on conversations I witnessed between MVPs around commissioning and more importantly de-commissioning virtual machines (VMs) hosted in Azure. Don’t worry – this blog post is not going to divulge NDA material….

…my MVP award is way too valuable to me to squander it!!

It comes down to this simple fact – you might still pay for resource usage in Azure even after you have deleted your VM….

Why?

Because if you just delete your VM – you will leave behind the resources that were associated with it.
Let’s look at how we spin up a VM:

Create VM
Creating a VM in Azure – using the Add button

I’m not going to go into all the different configurations I’ll choose – this post is about what is left behind when I delete the VM and why I’ll pay money unnecessarily.

Until I witnessed the discussion between MVPs (quite a vehement discussion BTW, but that’s not for this blog post…) I had always plonked all my VMs in one Resource Group. You know, to be tidy and concise…

Anyway – our VM is being deployed – this is the bit I love:

VM being created
I love the smell of new VM

……creating resources means I get to try stuff out. This particular VM is for a DEMO I am going to do at a client site – we’re going to build SQL Server using Desired State Configuration (DSC) – in fact we’re going to build 20 SQL Server instances – and then create 10 Availability Groups – all from Source Control and all scripted. That is definitely for another blog post.

It’s gonna be awesome!!

So in the time it takes for me to make a coffee I have a new VM:

deployment done
3 minutes 48 seconds – seriously that is why you need to host stuff in Azure

Let’s look at the resources in our Resource group:

ResourceGroup
There is more than just our VM in here

So I’ve done the DEMO of SQL Server DSC (blog post coming) and now we want to get rid of the VM.

This is what I used to do:

Go into Virtual Machines and hit Delete

delete VM
See ya VM..

Within about 5 minutes our VM is gone – so let’s look in the Resource Group:

Leftover resources
There are quite a few things leftover…

So the argument was that this was “designed” by Microsoft to get money out of people – you know, by keeping the resources that are left behind.

🙄

Yeah ok….

The actual story is that Azure allows you to share resources like network configs etc. between machines and doesn’t actually know whether you want to keep these things – so it lets you manage the resources within your resource group piecemeal.

The best quote I saw was:

“All of these things are how Cloud works. Nothing to do with Azure. The same is true on AWS. Everything is treated as single resources that can be “combined” to many different things and I don’t want anybody to just go and delete anything unless I explicitly tell them to.”

This means that yes – if you manage your VM from a VM perspective and not all the resources associated with it – you will get charged for things lying around. It’s not a completely free cloud platform – for obvious reasons.

So – if you want to remove your VM in its entirety then place it in a Resource Group and remove the Resource Group. Thus you remove all resources associated with that VM and do not get charged unnecessarily.
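You can do this clean-up from PowerShell too. Here’s a minimal sketch using the AzureRM module – the resource group name is made up for this example, and remember that Remove-AzureRmResourceGroup deletes everything inside the group, so check what is in there first:

# List what is still sitting in the resource group (the name is just an example)
Get-AzureRmResource | Where-Object ResourceGroupName -eq 'DSC-Demo-RG' | Select-Object Name, ResourceType
# Happy that it is all disposable? Remove the whole group and everything in it
Remove-AzureRmResourceGroup -Name 'DSC-Demo-RG' -Force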

There might be times where you want to host other associated things within a resource group – my advice: label them and put tags on them so that you know what belongs to what if you need to remove things.
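Tags can be set in the portal, or from PowerShell. A rough sketch with made-up resource names – adjust the resource type and tag values to whatever convention suits you:

# Tag the NIC so we know which VM it belonged to (names and types here are examples only)
Set-AzureRmResource -ResourceGroupName 'DSC-Demo-RG' -ResourceName 'DemoVM-nic' `
    -ResourceType 'Microsoft.Network/networkInterfaces' `
    -Tag @{ BelongsTo = 'DemoVM'; Purpose = 'DSC demo' } -Force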

Easy as.

Yip.


How to sync user logins across SQL Server instances – dbatools is brilliant

This blog post is about how brilliant dbatools is – in this case, for syncing user logins between SQL Server instances.

Background:

Whenever I do my “DevOps and the DBA” talk I dedicate a minute or two talking about dbatools.

You can find the tools here:

https://dbatools.io/

Simply download and install it on your server – or on a machine that has visibility of the instances you want to connect to.
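If the machine can see the PowerShell Gallery, the install is a one-liner (add -Scope CurrentUser if you don’t have admin rights on the box):

# Installs dbatools from the PowerShell Gallery
Install-Module dbatools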

When building new servers the most important thing after restoring and securing the database is syncing up the users. This is especially important for Availability Groups, as SQL Authenticated users require the SIDs to be the same on each replica.

In the past I had some very long-winded code that would do the sync – it was a mixture of T-SQL and PowerShell. It worked but you know – it was cumbersome.

So I effectively practiced what I preached recently and used the Copy-DbaLogin command – actually I looked at what it did first by running:

Get-Help Copy-DbaLogin -Detailed

For what I needed to do – I needed both the SQL Authenticated users AND all Windows users/groups – so I just ran this:

Copy-DbaLogin -Source OLDSERVER -Destination NEWServer

Which results in the following example:

copydbaloginresult
This is brilliant
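If you don’t want every single login copied across, Copy-DbaLogin can be narrowed down – a quick sketch below (the login names are made up, and it’s worth confirming the parameters against Get-Help for the version you have installed):

# Copy only the logins you name, or everything except the ones you exclude
Copy-DbaLogin -Source OLDSERVER -Destination NEWServer -Login AppUser1, AppUser2
Copy-DbaLogin -Source OLDSERVER -Destination NEWServer -ExcludeLogin OldServiceAccount
# Not sure what it will do? -WhatIf shows you without changing anything
Copy-DbaLogin -Source OLDSERVER -Destination NEWServer -WhatIf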

The upshot of all this is that syncing users between SQL Server instances has never been easier and means I can throw away my terribly written script.

Yip.

 

How to change the TFS Agent _work folder

This blog post is about how to change the default work folder _work that TFS agents use when building a solution.

Background:

I’m now a consultant – which is awesome – it means I get to visit clients and make a difference for them.

One particular client had installed TFS and their remote build agent was installed in C:\TFSAgent.

Methodology:

By default when installing the TFS agent you can choose the work folder (_work), and normally this goes under the root directory of wherever you install the agent. So in this example they had the agent work folder at:

C:\TFSAgent\_work

Which was fine – until the builds were kicking off regularly (thanks to good Continuous Integration practices they were doing builds almost hourly) and C:\ started running out of space.

So a D:\ was added to the server.

But how do you change the work folder to D:\TFSAgent\_work?

A lot of posts on the internet are saying just remove the old agent and install it again. That to me seems a bit drastic.

If you’ve read my previous blog post on changing agent settings – you will know about the hidden file .agent

Agent_Settings
The .agent file is our friend for changing settings

Except the settings file is set out in JSON.

Which caught me out – I made the change with plain single backslashes (D:\TFSAgent\_work) and the agent was not happy at all.

So to change the default _work folder to be D:\TFSAgent\_work you need to:

1. Stop the agent service

2. Open the .agent file which will look something like this:

{
  "agentId": 10,
  "agentName": "BUILDAGENT",
  "poolId": 3,
  "serverUrl": "https://YourtfsURL.something.local/tfs/",
  "workFolder": "_work"
}

3. Edit it like this:

"workFolder": "D:\\TFSAgent\\_work"

Note the double backslashes – backslashes have to be escaped in JSON.

4. Start the agent service, kick off a build and watch TFS populate the new directory with what it needs.
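If you have a few agents to move, the same four steps can be scripted. This is a rough sketch only – the agent folder and the vstsagent* service name pattern are assumptions about this particular environment:

$agentRoot = 'C:\TFSAgent'
$settings = Join-Path $agentRoot '.agent'
Get-Service 'vstsagent*' | Stop-Service                       # 1. stop the agent service
$json = Get-Content $settings -Raw | ConvertFrom-Json         # 2. read the hidden .agent file
$json.workFolder = 'D:\TFSAgent\_work'                        # 3. ConvertTo-Json escapes the backslashes for us
$json | ConvertTo-Json | Set-Content $settings -Force         #    -Force because .agent is a hidden file
Get-Service 'vstsagent*' | Start-Service                      # 4. start the agent service again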

That really is it – but hopefully reading this post will save you time and energy by NOT having to reinstall your agent.

Yip.

Use Boot Diagnostics to see what your Azure VM is doing at boot time

This blog post is about how to diagnose what your Azure VM is doing while it is booting.

I have a DEMO VM that is hosted in Azure – it is where I have Visual Studio, Redgate tools as well as all my DEMO systems and presentations hosted for when I speak. That way when I go to a venue to speak I only need (at worst) internet connectivity – and I have a phone with fantastic internet if the venue doesn’t.

What I do is keep the VM up to date in terms of Windows patches, and I make a point, 2 days out from an event, of making sure all outstanding patches are installed.

Hint: this might tell you where this post is headed.

So 2 days out from speaking in Spokane (DevOPs & the DBA) I made sure to start up my VM to check things were good. The only complicating factor was that this was a day before I was to give a session to TheDevOpsLab – so I thought best to get this out of the way and practice my database unit test session that was going to be recorded.

So I went into the Azure Portal and down to Virtual Machines and clicked “start”:

start VM
Let’s start up the VM and get started
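(You can do the same from PowerShell if the portal isn’t handy – a one-line sketch with the AzureRM module and made-up names:)

Start-AzureRmVM -ResourceGroupName 'Demos' -Name 'DemoVM'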

Normally this start up process would take about 3-5 minutes whilst things started up.

However after 10 minutes I still could not connect. After 15 minutes I started to become worried. So I clicked on the Virtual Machine in the Azure Portal to see what usage was happening.

Things were happening alright:

The keen eye will note that is 2 hours worth of activity…..

Yip – my VM was busy doing heaps of stuff for 2 hours and the whole time I could NOT log onto it via RDP. Which is when I discovered “Boot Diagnostics” in the Azure Portal for Virtual Machines. It allows us to see the console of the VM.

Simply click on your VM and choose “Boot Diagnostics”:

Boot Diagnostics
Let’s see what the VM is doing

Which gave me an insight into what my VM was spending so much time doing:

windows update
Ugh…  Windows Updates.

So I waited for 2 hours whilst my VM applied these patches (to be fair it was a very big update).

The good thing was I could monitor the progress via Boot Diagnostics.
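As an aside, you can also pull the boot diagnostics output down with PowerShell rather than refreshing the portal – a sketch with the AzureRM module, where the names and local path are just examples:

# Downloads the console screenshot for a Windows VM to a local folder
Get-AzureRmVMBootDiagnosticsData -ResourceGroupName 'Demos' -Name 'DemoVM' -Windows -LocalPath 'C:\Temp'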

BTW – in normal running this is the console of my VM:

Normal Console
You should consider visiting New Zealand

Which is the view out my front gate. If you have never considered coming to New Zealand – hopefully the console of my VM hosted in Azure will help you decide.

Or consider submitting to SQL Saturday South Island:

http://www.sqlsaturday.com/712/EventHome.aspx

We’re still open until 26th April 2018 (I extended it today) and to be honest, if it’s past that date and you are an international speaker – hit me up on Twitter (@theHybridDBA) if you want to speak. We’re relaxed as in New Zealand and love people who like to give their time to the community.

I’ll even give up my own speaking slot.

Yip.

Authentication issues with GitHub: An error occurred while sending the request —> System.Net.WebException: The request was aborted: Could not create SSL/TLS secure channel.

This blog post is about an error that you may receive if you’re using git commands in PowerShell and authenticating against GitHub.

My GitHub account uses 2 Factor Authentication so I thought it might be that – however I could use GitHub Desktop fine.

I was cloning a new project for a client:

git clone https://github.com/Client/Documentation.git

and I got a prompt for my username/password, but when I entered both I got an error. I knew my username and password were OK and I knew 2FA was working too.

The error (taken from event logs) was:

System.Net.Http.HttpRequestException: An error occurred while sending the request. —> System.Net.WebException: The request was aborted: Could not create SSL/TLS secure channel.

So I looked at my Git for Windows version:

PS C:\Yip> git version
git version 2.14.2.windows.1
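Before upgrading, a rough way to check the TLS side of things from PowerShell – purely a sanity check of the machine, since it exercises the current PowerShell session rather than Git’s credential manager itself:

# What protocols does this .NET session offer by default?
[Net.ServicePointManager]::SecurityProtocol
# Force TLS 1.2 and confirm github.com is happy to talk over it
[Net.ServicePointManager]::SecurityProtocol = [Net.SecurityProtocolType]::Tls12
(Invoke-WebRequest https://api.github.com -UseBasicParsing).StatusCode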

So I decided to upgrade – I had been meaning to do this for a while, and older Git for Windows releases (or rather the credential manager they bundle) can struggle to negotiate TLS 1.2, which GitHub now requires.

So I downloaded the latest version for Windows – 2.16.2.

I then ran the same command – which prompted me for my GitHub username/password. I entered them, it asked me for my 2 Factor code, I put that in and hooray!! — it worked.

I successfully cloned the repo and can now do work for a client who has stuff stored in GitHub.

Yip.

Redgate SQL Test utility isn’t showing up in SSMS

I had recently installed the SQL Toolbelt from Redgate at a client site on a laptop they’d supplied me.

(Fantastic product that SQL Toolbelt BTW.)

Things were going swimmingly – in fact thanks to SQL Compare, SQL Source Control and Team Foundation Server I had implemented an automated Continuous Delivery pipeline for the client’s databases and applications.

The next thing I wanted to do was start implementing unit testing for both the applications and the database. DEV were going to do the application side (this client is a start-up, so understandably they had little to no unit tests) and I’d do the database side.

Except in SSMS I couldn’t find SQL Test…??

I knew to click on the icons to bring down other utilities but it wasn’t here either:

Redgate SSMS Where is SQL TEST
Not here either…. what have I done wrong??

So as a windows user I naturally looked in the Windows Apps:

Redgate No SQLTEST in apps
Hmmm…. nothing here either

At this point I decided it had to be my machine, as I did a Google search and looked on forums and no one else seemed to have experienced this.

So I uninstalled everything – SSMS and the Toolbelt.

Reinstalled everything.

And got this:

It was while clicking around like a madman I found this:

Phew.

And of course now I can do this:

Redgate Now in my happy place
Let’s start unit testing!!

 

So if you have recently installed Redgate SQL Toolbelt and can’t find SQL Test – hopefully this blog post will help you.

By the way I do think there was something wrong with the laptop the client gave me as now when I right click I get the ability to run tests:

Redgate SQL Test context menu
This was definitely not there before the uninstall/reinstall fiasco

So now I can start my unit tests – the good news is the DEV team have started theirs and are really getting behind it. I think they’ve got on with it to stop me talking about unit tests…!!

We now have 4 minutes of unit tests per checked-in build but that is definitely something I’ll respond to with:

Yip.

Automation and cleaner code/data are the key to future success

Doing DevOPs for your Database? You need to start here…

This month’s T-SQL Tuesday #100 is hosted by Adam Machanic (B | T) who started T-SQL Tuesday 8 years ago and has invited the community to look forward 8 years at what will be T-SQL Tuesday #200…

 

Before I do that though – I want to think about what I was doing 8 years ago. At the time I was working with Object Orientated databases and the company I worked for had just implemented extensive unit testing across our technology stack. I use that word because we had both the database and application code going through unit tests, integration tests and regression tests. Most of which was automated.

It was around this time that I was starting to do more things in SQL Server – namely SQL Server 2005. Funny how 7 years later I was still doing some things in SQL Server 2005 – but that is for another blog post…

It was when I came across to the SQL Server world that I realised that 3 things were different:

  1. Database build/code definitions were not in source control
  2. Unit testing was almost non-existent
  3. Automated testing or deployment was also not a common thing

Even now in 2018 I find that when speaking at conferences and I ask the questions:

  1. Who here uses source control – 60% of hands go up
  2. Who here puts their database into source control – 50% of those 60% of hands go down…..
  3. Who here does automated testing across their databases – another 50% of those hands go down

I generally get ~30 people in my sessions so if at best 5 people out of 30 are doing this – we need to change this.

Yet…

For years people have been discussing getting our database code into source control, and unit testing has been discussed at length over the past 8 years too.

So what does this mean for the next 8 years?

I’m on a personal crusade to bring automated testing and DevOPs philosophies to your database…

In fact I am hoping that DevOPs doesn’t exist as a term in 8 years’ time – that we as an industry will have matured and will just call it “common sense“.

Let’s talk about automation – as whilst people might not see the value in testing our most precious resource – data – they will hopefully see the light in that the more we can automate the better our jobs will be.

But won’t it kill the DBA off I hear some of you ask.

No.

It won’t.

In 2026 the role of the DBA will have morphed; in fact all Data Professionals will have an appreciation and understanding of how to automate the delivery of value to the end user. I choose my words carefully – too often we as technical people think about 1s and 0s, but in fact it is how we deliver value to our clients that dictates our success. In terms of customer experience we need to ensure we are delivering higher value than our competitors.

Many people are worried that automation will put them out of a job. This won’t happen, and in fact there will never be a shortage of work to do in a successful company. Rather, people are freed up from mindless drudge-work to focus on higher value activities.

This also has the benefit of improving quality, since we humans are at our most error-prone when performing mindless tasks.

Utilising the features in SQL Server Data Tools (SSDT) Version 2026 – which does both State based AND Migration based deployments to databases automatically – means that we as DBAs can start focusing on activities that bring value to our clients. Databases have been touted as self-tuning for decades but I simply don’t see it happening, and we need DBAs who can tune queries and, more importantly, understand how automation can make their lives better.

Data Science is big time in 2026 – the past 8 years have seen a massive jump in its usage. This is where Data Professionals have a massive part to play – cleaning up the underlying data – who knows, maybe through automated cleansing jobs that self-tune themselves… automatically.

See where I’m going with this? The growth of data we will see over the next 8 years means we need to be more vigilant in the ways that we interact with the data. Using automation and more testing means that data scientists can focus on what they’re good at – doing stuff with data – rather than spending (up to) 60% of their time trying to cleanse it.

I am not scared of automation putting me out of a job; I’m more scared that over the next 8 years people won’t embrace it enough to help with the delivery of value to the end user.

Yip.