SQL Server services will not start automatically after server reboots

This blog post is about a situation where, after Windows patching, the SQL Server and SQL Server Agent services would not start automatically – but would start manually.

At a client site they had recently patched the servers – things went swimmingly – except that the SQL Server services would not start automatically.

[Screenshot: whilst set to Automatic – the services would not start]

Manually starting the services by right-clicking and choosing Start worked…

It is SQL Server 2017 with CU12 applied.

We tried a few things:

Making the services Automatic (Delayed)

Increasing the service pipeline timeout from 30 seconds to 60 seconds (see the registry sketch below)

But nothing worked.
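For reference, the timeout we changed is the ServicesPipeTimeout registry value – a minimal sketch of that change (the value is in milliseconds, and a reboot is needed for it to take effect):

reg add "HKLM\SYSTEM\CurrentControlSet\Control" /v ServicesPipeTimeout /t REG_DWORD /d 60000 /f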

Looking at the event logs I (eventually) found this:

Event ID 7000:

The MSSQLSERVER service failed to start due to the following error:
The account name is invalid or does not exist, or the password is invalid for the account name specified.

Which is bizarre – the service account had been in use for months – but after each reboot the services had to be manually started. GPO and other things had been blamed, but no one could actually find out why.

Just for a laugh – I decided to “rename” the service account.

From svcSQL@<domain> to <domain>\svcSQL
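I made the change through the Services console – but the same edit can be scripted. A sketch, assuming the default instance (so the service name is MSSQLSERVER) and a placeholder password:

sc.exe config MSSQLSERVER obj= "<domain>\svcSQL" password= "<password>"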

Rebooted and it worked.

Just to make sure it wasn’t a false positive I left the SQL Server Agent service how it was – and it did not start.

[Screenshot: <domain>\<service account> works – but why…?]

As to why this is the case – I’m not sure. To date I had always set up my SQL Server services with <domain>\<service account>, so I had never stumbled across this – until now.

But I don’t care too much – it means that the client can reboot their server and know SQL Server will come back…

Yip.


Setting up CentOS to run docker for SQL Server

This blog post is about running SQL Server in a docker container on the CentOS Linux distribution.

This was both an experiment and a chance for me to re-learn CentOS – years ago I had done a fair bit with CentOS, back when I was a true Operations Consultant. Up until last weekend I had been using Ubuntu – as it is really easy to use.

All SQL Server container images on docker are built on the Ubuntu image, so I wondered – can I run one on a CentOS Linux server running docker?

BTW – the obvious answer is “yes, of course, because it is just a container”.

But I wanted to try it out and see what differences there were.

The first main difference for me so far – remembering to run sudo when I want to elevate my permissions. This is a good thing – I did many bad things in Ubuntu that I should have thought about first – that is why sudo is a good thing for people like me…!!

So if you are thinking of running docker on CentOS – here is the setup I used.
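First, add the Docker CE repository if it is not already configured – these are Docker’s standard CentOS repository steps at the time of writing:

sudo yum install -y yum-utils
sudo yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo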

Install the latest version of Docker CE:

sudo yum install docker-ce

If prompted to accept the GPG key, verify that the fingerprint matches

060A 61C5 1B55 8A7F 742B 77AA C52F EB6B 621E 9F35

and if so, accept it.

Start docker for the first time:

sudo systemctl start docker

To make it start every time the Linux server boots:

sudo systemctl enable docker
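To verify the installation is working, the standard check is to run the hello-world container:

sudo docker run hello-world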

Download the latest SQL Server image:

sudo docker pull microsoft/mssql-server-linux:latest

Run up our container and assign a non-default port – in this case I am swapping out port 1433 for port 56669:

sudo docker run -e 'ACCEPT_EULA=Y' -e 'MSSQL_SA_PASSWORD=<YourStrong!Passw0rd>' -e 'MSSQL_PID=Developer' -p 56669:1433 --name SQLServer-Docker-DEV -d microsoft/mssql-server-linux:latest

Port mapping is specified via -p <host port>:<container port>.

Therefore in my example above -p 56669:1433 means that port 56669 on the host (CentOS VM) is mapped to port 1433 in the container. I am not a fan of using the default port 1433 – mainly as I like to stipulate different ports for different SQL Server instances (QA vs UAT vs PROD). This means that all connection strings I create from outside the container will need to include port 56669. How do I remember this number? That’s where configuration management comes in – another blog post I think…
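As a quick sketch of what that looks like – assuming sqlcmd is installed on the machine you are connecting from, and <CentOS VM IP> is a placeholder:

sqlcmd -S <CentOS VM IP>,56669 -U SA -P '<YourStrong!Passw0rd>'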

I also want to change the password to something more secure (and less published on the internet) – and this also lets me show you how to access the SQL tools which are built into the image:

How to use SQL Server tools in the container

sudo docker exec -it SQLServer-Docker-DEV "bash"

Once inside the container, connect locally with sqlcmd. Note that sqlcmd is not in the path by default, so you have to specify the full path.

/opt/mssql-tools/bin/sqlcmd -S localhost -U SA -P '<YourStrong!Passw0rd>'

How to change the SA password in the container

Once we have connected with sqlcmd we can then use our native T-SQL to change the password:
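A minimal example – the new password here is just a placeholder:

ALTER LOGIN SA WITH PASSWORD = '<YourNewStrong!Passw0rd>';
GO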

[Screenshot: using sqlcmd to change our sa password and then testing it out]

Now we can go to the Azure portal and allow access to this port:

[Screenshot: allowing network access to the SQL Server port from certain IP addresses in the Azure portal]

My preference is to tie it right down to the hosts that I know will access the SQL Server instance running in this container on this CentOS VM hosted in Azure.

Kinda reminds me of the dream layers of the movie ‘Inception’.

In the next blog post – let’s connect to this VM using SQL Server Management Studio on our PC.

Yip.


This is a company that practises what they preach

How good is a company at engaging with their user base? Obviously when things are going great it’s wins all around for the company – and hopefully for the people who have invested their money in that company’s products…

When things are not going well – I feel that is how you find out what a company is really like and, more importantly, what its culture actually is.

The company I am talking about in this blog post is Redgate. I’ve known Redgate products for the best part of 3 years. Back when I started looking at how I could integrate database deployments with application deployments – specifically around folding in Continuous Delivery principles and processes – one company was leading the rest of the pack.

Redgate.

I like Redgate tools – they make what can be difficult and cumbersome in other technology stacks simple and easy. I like that – mainly because there are enough hard things to do in computers and getting a database into source control (and easily deployable) can be right up there in terms of hair pulling.

I especially like the culture of Redgate – I’ve yet to meet anyone that works there who is not engaging, passionate and knowledgeable about all things DevOps. It’s awesome to deal with them, they are really friendly, and even the scariest of DBAs is a guy who is fanatical about making stuff go better for the community and industry. Heck, they just hired Kendra Little, so they must be both a cool company and an awesome company.

(Side note: it was Kendra’s awesome blog posts and videos that helped me become a better Data Professional and yeah – I’m a big fan of what she does for the community. Plus, she’s as humorous as she is intelligent.)

So what about when things go wrong with a company?

That’s normally when you find out how they really are.

If you’re a customer of Redgate you might have got an email from Tony Payne – their Chief Operating Officer – so the thing that occurred was fairly important, and I like the fact that it came “from the top”. The issue was a security vulnerability in SQL Monitor – which is a fairly important thing to fix and get right.

I knew about this security vulnerability as I had contacted some of the people I know and trust at Redgate and said “hey, I have noticed something and I think you need to look at SQL Monitor”. I use SQL Monitor and I recommend SQL Monitor to my clients – it is a fantastic tool and the way it integrates with other tooling really fits in with the ethos of DevOps.

(Another side note: because I’m the Hybrid DBA – I don’t just do databases, I also do stuff like pen testing, configuring web nodes with PowerShell and/or writing Ansible to make stuff go… so sometimes I don’t just use tools, I like to break tools.)

Now Redgate could have just said “cool, thanks for bringing this to our attention” and left it at that. And whilst that sounds like no company would do that – trust me there are companies that do exactly that.

Redgate responded back to me immediately, and the product manager kept me in the loop on both how seriously they took this and, more importantly, what they would do about it. They went to work immediately to fix it and brought in outside consultants to audit that things were fixed.

As per the website above detailing what they did – they did all the right things.

What that article doesn’t tell you is they did something that Agile brings to the table – because they practise Agile they were able to React and Adapt to something that was not forecast as part of sprint planning.

Redgate preaches all things DevOps – but could they practise it? The long answer is yes. Yes they can, and for me that is a fantastic measure of a company.

People are fairly intelligent and can tell if you’re wanting to pull the wool over their eyes (you can fool some of the people some of the time…). If they had just swept this under the rug then yeah – their whole brand about delivering value quicker and keeping your data safe would be a fallacy.

But it’s not – they understand how important your data is, they know how to deliver value quicker because – they did.

I’m really impressed by how they handled this situation – I was very lucky to be an insider on what they were doing and I really appreciate that they let me in on that. It made me feel valued and reinforced why I love both their products and the culture they have within their company.

Nice one team!!

Want to make your database deployments way safer and easier? You should check them out:

https://www.red-gate.com/solutions/overview

Yip.


Connecting to SQL Server 2017 using Visual Studio 2017 and getting “An incompatible SQL Server version detected”

This blog post details the error you may get when using Visual Studio 2017 and you cannot connect to SQL Server 2017 using Test Explorer or SQL Server Object Explorer.

TL;DR – upgrade Visual Studio from the base version…

I had built a new DEMO machine for showcasing Visual Studio, namely SQL Server Data Tools as well as vendor tools that help people “do DevOps” stuff with databases.

I cloned my repository from my Azure Repo (read up on them here: https://azure.microsoft.com/en-us/services/devops/repos/) and kicked off a BUILD in Azure DevOps (read more here: https://azure.microsoft.com/en-us/services/devops/) – formerly known as Visual Studio Team Services (VSTS) – and everything was going great.

I then wrote some unit tests for some stored procedures and went to build my solution in Visual Studio – which worked, then publish it to my local SQL Server 2017 instance – which failed.

The error was “Ensure you have the same name in master or <database>” which is a weird error.

Thing was – I had used Test Connection – which worked!!

[Screenshot: don’t believe that things are OK… while Test Connection worked – nothing else did]

But the actual publish failed – couldn’t connect…. (what???)

So I went to use SQL Server Object Explorer in Visual Studio to try and connect and got the following error:

[Screenshot: I couldn’t add my instance here – but could ‘Test Connection’ in other parts of VS2017 (!??!)]

I upgraded my SQL Server 2017 instance to CU11 – mainly as my version was the base version:

[Screenshot: in my defense I did only install this today…]
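If you want to check exactly which build an instance is on, a quick query does it:

SELECT @@VERSION;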

But of course on reflection (reflecting whilst I waited for CU11 to upgrade my instance) – I could connect to SQL Server 2016 instances, but couldn’t connect to my Azure SQL instances…

…..what the???

Which then made me check the version of Visual Studio 2017.

It was version 15.0 – and lo and behold there was an update waiting in the notifications area telling me that an upgrade to 15.8.7 was ready to be installed.

<Groan>

So I upgraded……

[Screenshot: at least some things are consistent in Visual Studio 🤣]

…….and things now work:

[Screenshot: this now works – hooray]

[Screenshot: don’t worry – I did a Build first before I published!!]

….and I can now (after a 2-hour hiatus of upgrades) go back to writing unit tests and running them against instances.


Yip.

Getting error “unable to parse remote unpack status” when attempting to push to Azure Repo in Azure DevOps

This blog post is about a situation where I went to push my latest local changes to my git repository up in Azure DevOps and got the weirdest error:

error: unable to parse remote unpack status: ng refs/heads/..

An error occurred while communicating with the remote host. The error code is 0x800703E5.

Resolution:

Short answer:

Upgrade Git For Windows.

Longer Answer:

It took me a while, but I realised it was related to the fact that the Git version (2.16.2) I was using on my Windows machine was old.

The only real giveaway was this line in the output:

remote:  Azure Repos

And something went off in my head “huh – I wonder if it is because I’m using an old version of Git for Windows”.

So I upgraded (the installer for Git for Windows is very easy) and the problem has gone away.
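If you want to check which version of Git you are running before (and after) upgrading:

git --version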

Yip.

Why you should always build an Azure based VM in a Resource Group

This blog post is based on conversations I witnessed between MVPs around commissioning and more importantly de-commissioning virtual machines (VMs) hosted in Azure. Don’t worry – this blog post is not going to divulge NDA material….

…my MVP award is way too valuable to me to squander it!!

It comes down to this simple fact – you might still pay for resource usage in Azure even after you have deleted your VM….

Why?

Because if you just delete your VM – you will leave behind resources that were associated with it.

Let’s look at how we spin up a VM:

[Screenshot: creating a VM in Azure – using the Add button]

I’m not going to go into all the different configurations I’ll choose – this post is about what is left behind when I delete the VM and why I’ll pay money unnecessarily.

Until I witnessed the discussion between MVPs (quite a vehement discussion BTW, but that’s not for this blog post…) I had always plonked all my VMs in one Resource Group. You know, to be tidy and concise…

Anyway – our VM is being deployed – this is the bit I love:

[Screenshot: I love the smell of new VM]

……creating resources means I get to try stuff out. This particular VM is for a DEMO I am going to do at a client site – we’re going to build SQL Server using Desired State Configuration (DSC) – in fact we’re going to build 20 SQL Server instances – and then create 10 Availability Groups – all from Source Control and all scripted. That is definitely for another blog post.

It’s gonna be awesome!!

So in the time it takes for me to make a coffee I have a new VM:

[Screenshot: 3 minutes 48 seconds – seriously, that is why you need to host stuff in Azure]

Let’s look at the resources in our Resource group:

[Screenshot: there is more than just our VM in here]

So I’ve done the DEMO of SQL Server DSC (blog post coming) and now we want to get rid of the VM.

This is what I used to do:

Go into Virtual Machines and hit Delete

[Screenshot: see ya VM…]

So within about 5 minutes our VM is gone – so let’s look in the Resource Group:

[Screenshot: there are quite a few things left over…]

So the argument was that this was “designed” by Microsoft to get money out of people – you know, by keeping the resources that are left behind.

🙄

Yeah ok….

The actual story is that Azure allows you to share resources like network configs etc., and doesn’t actually know whether you want to keep these things – so it allows you to manage the resources within your resource group piecemeal.

The best quote I saw was:

“All of these things are how Cloud works. Nothing to do with Azure. The same is true on AWS. Everything is treated as single resources that can be ‘combined’ to many different things and I don’t want anybody to just go and delete anything unless I explicitly tell them to.”

This means that yes – if you manage your VM from a VM perspective and not all the resources associated with it – you will get charged for things lying around. It’s not a completely free cloud platform – for obvious reasons.

So – if you want to remove your VM in its entirety then place it in a Resource Group and remove the Resource Group. Thus you remove all resources associated with that VM and do not get charged unnecessarily.
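A sketch of that using the Azure CLI – the resource group name here is hypothetical:

az group delete --name SQLDemo-RG --yes --no-wait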

There might be times where you want to host associated things within a resource group – my advice: label them and put tags on them so that you know what belongs to what if you need to remove things.
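Again a sketch with the Azure CLI – the group name and tag value are hypothetical:

az group update --name SQLDemo-RG --set tags.Project=SQLDemo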

Easy as.

Yip.

How to sync user logins across SQL Server instances – dbatools is brilliant

This blog post is about how brilliant dbatools are. In this case – for syncing user logins between SQL Server instances.

Background:

Whenever I do my “DevOps and the DBA” talk I dedicate a minute or two talking about dbatools.

You can find the tools here:

https://dbatools.io/

Simply download and install on your server – or on a machine that has visibility of what you want to connect to.
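At the time of writing the install is a PowerShell one-liner – assuming PowerShell 5 or later and access to the PowerShell Gallery:

Install-Module -Name dbatools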

When building new servers, the most important thing after restoring and securing the databases is syncing up the logins. This is especially important for Availability Groups, as SQL-authenticated logins require the SIDs to be the same across instances.

In the past I had some very long winded code that would do the sync – it was a mixture of TSQL and PowerShell. It worked but you know – it was cumbersome.

So I effectively practised what I preached recently and used the Copy-DbaLogin command – actually, I looked at what it did first by running:

Get-Help Copy-DbaLogin -Detailed

For what I needed to do – I needed both the SQL Authenticated users AND all Windows users/groups – so I just ran this:

Copy-DbaLogin -Source OLDSERVER -Destination NEWServer

Which results in the following example:

[Screenshot: this is brilliant]
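Side note: if you only want to bring across specific logins rather than everything, Copy-DbaLogin also takes a -Login filter – a sketch with a hypothetical login name:

Copy-DbaLogin -Source OLDSERVER -Destination NEWServer -Login svcSQL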

The upshot of all this is that syncing logins between SQL Server instances has never been easier – and it means I can throw away my terribly written script.

Yip.