DevOPs and Databases — the one thing you may be doing wrong. 

This blog post is part of the T-SQL Tuesday blog series – thanks to Grant Fritchey (t | b) for hosting this month’s T-SQL Tuesday event.

“T-SQL TUESDAY #091 – DATABASES AND DEVOPS”

OK, so this DevOPs thing – what is it?

I consider myself both a practitioner and preacher of the DevOPs thing. It is one of my favourite things to talk about and also do. Done right it makes everyone’s lives a lot better. They say you should never discuss politics and religion and at times I feel like the ‘D’ word falls into this category.

It polarises people – they either love it, hate it, or don’t know what it is (and human instinct is then to hate it).

At the company I work for I had a saying: “The first rule of DevOPs is: never say the word ‘DevOps’”. It stopped the silly and juvenile arguments.

We focused on Continuous Integration and Continuous Delivery – which are components of the DevOPs movement.

So yeah, DevOPs is not a thing – there is no big ‘MAKE DEVOPS HAPPEN’ button that you can push.

At its simplest it is about:

Tools, Process, People, Culture

The thing is you need to be aware of all 4 of these. If you concentrate on just one or maybe two – you’re going to fail at this ‘DevOPs’ thing.

So for this T-SQL Tuesday you’d think I would relish the chance to talk about all four things, or at least Tools – because databases, right?

Yeah, nah.

That’s only because we have Grant Fritchey hosting this event, and the company he works for – Redgate – makes some fantastic, game-changing tools in this space. If you want to learn/do Database Lifecycle Management (DLM), which brings DevOPs to the database, go look at the tools that Redgate have. Quite frankly they are game changers.

Now I love Tools, I’m a technical bloke and tools make a huge difference.

But…..you need process too.

In terms of Process – check out DLM Consultants. Alex Yates (t | b) is a guy who gets how DevOPs can make database deployments not only easy but boring.

I’ve yet to meet Alex, but he’s one of a few guys in this space (Steve Jones (t | b), Damian Brady (t | b), Donovan Brown (t | b)) who I am looking forward to having a beer with – blokes who really know their stuff around DevOPs.

(Honestly — if you’re reading this far – STOP – look these guys up — they are awesome, knowledgeable and want to help you make stuff go)

OK…

Tools and Process are vitally important. But they aren’t the full story here. In fact, to be honest, they aren’t the most important parts of this equation:

Tools, Process, People, Culture.

Here’s another saying I have:

The Tools are great, but the Process is wrong, because the People don’t get the Culture.

DevOps is not easy to implement or adopt, and it is even harder when it comes to transforming your company into an agile, business-delivery-focused machine. DevOps is not about having the best automation tools. It’s about people and process.

The thing that brings those two things together? The thing that helps break the old habits killing our ability to deploy changes more reliably and quickly to application and database alike?

Culture.

The thing DBAs and OPs hate.

They love Tools, they’re OK with Process, People are kinda meh, but Culture.

Culture……. !!!!

Yeah, nah.

This post is about Culture. How culture can help your database.

Now if you could just hold the hand of the person next to you while we sit in a circle and sing Kumbaya My Lord …..

Just kidding.

The culture of DevOPs is vitally important – in fact culture is the foundation of anything DevOPs, Continuous Integration, and/or Continuous Delivery related.

For too long the Database – the MOST important part of any application – has been ignored in terms of applying DevOPs methodologies to it.

For too long application developers have enjoyed being able to release quickly, automatically and reliably to their applications, whilst our databases broke because of banked up changes, poor source control, manual processes and lack of communication between DEV and DBAs.

That last bit is the killer of all things — communication. Without communication we won’t have collaboration. Without collaboration we will have two or more groups of people who don’t understand what the others do/need and we won’t be able to achieve our goal.

The goal, BTW, is fast, reliable, repeatable and boring deployments of changes to PROD systems. Whether those systems are databases or applications.

Historically there has been a power struggle between DEV and OPS (DBAs):

  • DEV want continuous change and fast enhancements, they are focused on meeting schedule targets.
  • OPS want stability and rigorous/controlled change, they are focused on meeting reliability targets.
  • To make it worse – both camps work in silos – with disparate tools and the only time they really communicate is when things go wrong…

See how this can go wrong…

The thing we need to change here is the Culture present in this situation – so that People can communicate and collaborate on the best Process that will use the Tools, to achieve the goal.

If we look at another way of describing DevOPs:

  • Culture – focus on people, embrace change and experimentation
  • Automation – Continuous Delivery, Infrastructure as Code (IaC)
  • Lean – focus on producing value for the end-user, small batch sizes
  • Measurement – measure everything, show the improvements to all
  • Sharing – open information sharing, collaboration and communication

The CALMS approach is in fact wholly dependent on Culture.

The culture becomes one of sharing: one where we work together on the automation, and one that embraces sharing/showing the metrics of our systems.

Our systems – this means we are ALL involved in the up-time, performance and changes associated with them.

So we need to embrace and implement a culture that is:

  • Highly Communicative – by breaking down the silos we can communicate quicker
  • People-centric – we want open minded people who have cross-pollination of skills
  • Based on problem solving – we all work together towards a common goal(s)
  • Focused on the End-User experience – the client pays us remember…!!
  • Empowered and self-sufficient – DEV can spin up their own resources (IaC)

By the way – DevOPs is not just about DEV and OPs (DBAs) – it is about a cultural philosophy for all members of our company that are involved in delivering business functionality to our end-user (clients).

This means the Functional Testers, Release Managers, Marketing, Sales Team, QA Engineers, Application DEV, Database DEV, Infrastructure Engineers, Report Writers, Operations Managers, DBAs, Business Analysts and even Consultants — all need to embrace the culture of DevOPs.

Because we are all involved in the goal. We all need to understand what the culture is and how we are a part of it.

The way I sold DevOPs to management at Jade Software was around reducing “Time in Lieu”, as we had many people up at night or working weekends trying to get horrid manual deploys to work. Our application deploys went from hours down to seconds and they were AUTOMATED. Our teams were at work the next day, instead of catching up on sleep, and instead of fixing things in PROD they were learning how to do higher-value activities. This is a good culture to have.

As I said before – for too long the Database side of the equation has been ignored in this process.  Databases are hard, if you do stuff wrong it is very, very bad. BUT if you start small, if you embrace the culture associated with automation, small changes, if you apply (flexible) standards and if you begin with PROD-like systems it is very achievable. You can realise the benefits of DevOPs with your databases.

Lastly — please….

Regardless of whether you are a DEV or a DBA, consider Test Driven Development (TDD) for any code you write. Rob Farley (t | b) has written a good post on how it fits into the “Databases and DevOPs” realm here.

There is a revolution happening, and the blog posts associated with this month’s T-SQL Tuesday event will hopefully show you why applying DevOPs principles to your database is a great thing.

Yip.


 

var Microsoft_Data_Platform_MVP = “Yip.”;

The 2nd June 2017 will always be something of a special day for me.

It was the day I was awarded my Microsoft MVP for Data Platform.

Warning: this isn’t a technical blog post, nor is it a complete “how to get an MVP” guide.

For me, being awarded the Microsoft MVP was immensely humbling – mainly because it involved people nominating me and, of course, the review of the contributions I had made in the community. The first bit was the big one for me – people nominated me because they thought I was worthy of an MVP award.

That blew me away.

And was daunting.

Daunting – because for the past 4 years I’ve been involved in the community, running conferences, running a User Group and then speaking about things that had baffled me but that I had worked out – so I thought that others might have struggled too…

..and might want to hear my battle/war stories.

I never once did any of my community things for recognition, I did it because for years I had been selfish – I had consumed many blog posts/forums/articles that got me out of a sticky situation.

So now was my time to give back. In 2014 I made a conscious decision to start presenting and really try and help/inspire at least one person every time I talked.

My tagline was “make stuff go” as that is what all of this is about – making stuff go in the best possible, most efficient manner so our work/life balance is “sweet as”.

And then I had my first MVP nomination last year.

Nomination:

I will always be grateful for the people who nominated me – a keyphrase of MVPs is “NDA” and for the purposes of this post I am being NDA about the people who nominated me.

You know who you are, I know who you are, heck — Microsoft know who you are!!

And I personally want to thank you for all making me speechless when I got the notification — speechless that you believed in what I was trying to do.

Here’s the thing – for my first nomination I didn’t fill in the ‘paperwork’ for 2 months as I didn’t feel worthy – especially as there was a person who I knew had been nominated, and I felt that in the finite world of MVP awards they deserved it far more than I did.

I did eventually fill it out, as someone told me that whilst that was honourable it was slightly dumb (their honest words, not mine). I actually submitted that first one after my Ignite presentation, because after the people who came up to ask questions and talk I felt I finally had something to share/give to the global community.

And also that date is/will be forever etched in my memory.

Finding a niche

In a community of brilliant people who can tune SQL Server, are masters of AGs and even know more than me about tempDB tuning – what did I have to offer that I knew could make a massive step change in our industry?

Well I had for years been working with application developers at Jade Software making stuff go. It involved things like Continuous Integration and Continuous Delivery – things associated with DevOPs.

So I decided why not talk/do/share about database deploys that could have some DevOPs brilliance applied to them. For years now Application DEVs have benefited from DevOPs – whereas “databases are too hard/important” was a common phrase I heard.

I had someone once tell me “Being nominated for MVP is the easy part, so someone took 2 minutes out of their day to think about you. Well done. The hard part is proving you’re worthy of becoming a MVP“. They were right.

MVP is about Community:

I consider myself lucky and spoilt because I got to hang out with some awesome MVPs in my neck of the woods.

I’ll take one step back and mention someone who wasn’t an MVP at the time (but now is) and that’s Steve Knutson.

I’ve known Steve for 23.5 years. We met at university and over that time he’s been a mate. He helped me out when I was getting my MCSE in 1999 (he sat/passed SQL Server 6.5 exams at the time!!).  We lost contact for about 7 years — but caught up again and Steve really is that nice guy you read about. He’s an extremely focused guy and willing to help others gain knowledge.

Martin Catherall (t | b) and Warwick Rudd (t | b | w) were two blokes who I made friends with (when I got up the courage to talk to a Microsoft Certified Master….) and who helped me so much over the past few years – not just with MVP related things but heaps of other things.

And that to me is what an MVP is about – someone who cares a heap about sharing their knowledge and mentoring those who don’t know things.

Combined with Rob Farley (t | b | w), one of the most insanely brilliant, intelligent, quick-witted people I’ve ever met, I had a triumvirate of MVP mentorship.

So I had three “local” MVPs, but there are 2 other people who enriched my life both at a personal level and at a mentor level.

Nagaraj (Raj) Venkatesan (t | b) and Melody Zacharias (t | b | w) – both of whom are like family to me. Their positivity and support when I needed advice, or just a “hey bro, how’s things”, was awesome, and I hope to be the same kind of mentor within the remote (global) community that you both were for me.

Now – here’s the thing – I actually had to prove that I was worth something – and that reads way worse than it sounded in my head when I wrote this……………..

Yes MVPs will help you out – but you also have to help yourself out. Life is not about spoon feeding. The guys were awesome to bounce ideas off for things I wanted to do in the community and I really would not be here without them.

Now of course there are others (a special mention to Reza Rad who I would love to emulate what he does in the Global Community) – but I’ve chosen Martin, Warwick and Rob because they were the guys I skyped, messaged and talked to the most and lads – I am so appreciative that you put up with me, that you consoled me and counselled me through the years.

The one thing I learned becoming a MVP:

Stay true to why you started doing all this.

If you’re just speaking/writing to try and rack up points or whatever to become an MVP – I’m sorry, but (in my opinion) that is not a good reason to become an MVP.

While I was going through the process I stopped thinking about MVP (where I could…) and stuck to what I’d been doing the past 3 years. I continued to hone my craft, to extend my reach in the GLOBAL community and most of all to find new ways to help people “make stuff go”.

I have seen people who want to become an MVP go slightly insane about it, or even go about things the wrong way. Like really wrong…

In the months gone by I decided to forget about the MVP and stick to first principles – sharing knowledge is why I got into this game. Being able to see after 60 minutes that I’ve given someone a eureka moment or an epiphany – that is the goal.

I would spend lunch hours talking to anyone who needed help with DevOPs, databases or Azure. Because I finally found I had knowledge and could help/mentor others which was such a great feeling of accomplishment.

Blogging for me was a new thing – again, I had been selfish over the years reading others’ work. Blogging allowed me to try stuff out, then write about what I’d done. Some of it was very simple stuff – but you know what – there is a place for simple technical blogs. I know because over the years I had done many searches for just simple things. So I am finding a balance – treating my technical blogging like a journey.

Create one database in Azure using one line of PowerShell — done

Creating many databases in Azure using one line of PowerShell — done

Creating a Continuous Delivery Pipeline in 59 minutes using PowerShell — coming soon

My blog post on VSTS hosted agents has got some good hits of late. That was a blog post that came out of my scrooge-like tendency to not spend money on build minutes!!

All of these things — speaking/mentoring others/writing — were my new first principles and I stuck to them. I did think about the MVP thing — I’m human, but I didn’t let it consume me.

Any consuming thoughts I would then put my energy purposely into writing/doing more content – to find better ways of delivering knowledge back to the community.

May 2017 was a massive month for me – I ran a conference, spoke 7 times and traveled to SQL Saturday Brisbane.

I was exhausted at the end of it, but so rewarded as I helped 2nd year ICT students, a Virtual Chapter, a User Group in Canada, a User Group in Christchurch, co-ran Code Camp Christchurch where I spoke twice (twice!!) and of course — the Brisbanites.

I was going to spend last weekend writing up some more content and plan for the upcoming DevOPs Boot Camp which I’m running.

But on Friday morning at 6:20am I checked my emails – which I do every morning – and there it was. I had been awarded the Microsoft MVP Award. For the first time in my life I was speechless. I’m man enough to admit I cried. Mostly because I didn’t think I was worthy enough to be an MVP, and yet other people had thought I was.

My only regret is that I can’t tell my mum – a lady who took in the strays and unfortunates of society and endeavored to help them. She instilled in me from a young age that we were so lucky with what we had and that we have to help those who aren’t so lucky.

It stuck with me over the years and is my first principle — to help people grow.

That is why 4 days in, I have come to the place where I know I am worthy of this MVP award.

I am also excited because I am now moving from the phase of achieving an MVP award to delivering like an MVP would.

Yip.

 

T-SQL Tuesday #90 — Bringing Continuous Delivery to a ‘brownfield’ database system


This blog post is part of T-SQL Tuesday #90 – Shipping Database Changes.

I have decided to write about a client’s SQL Server-based system that was having issues with deployment of code changes. The code changes were being deployed by a third party contractor.

The client engaged with the company I work for to see if we could help out as we were deploying to their other systems with zero issues in PROD.

The issues they were experiencing with the other contractor were:

  1. error prone manual deploys to PROD
  2. no UAT or QA systems so effectively deploying straight from DEV to PROD.
  3. little testing and what testing there was — manual and laborious
  4. no source control of database changes
  5. no central repository (or consistency) of configuration settings
  6. deploy outages that were 4 hours long

So rather than a greenfields project, where we could do anything and everything with Continuous Integration and Continuous Delivery, we were going to backfill these against a running PROD system – carefully.

So this is what is termed as ‘brownfield’ – a running legacy system…

We worked with the 3rd party contractor as they too were having issues – their DBAs were working from 3am until 7am and it wasn’t a great time.

Note: we were not replacing the 3rd party contractor – we were helping everyone involved. The particular client was very important to us and we were happy to help them out across their other systems we did not actively manage (yet).

The first step was to introduce Source Control and Continuous Integration – so all code that was being developed was pushed up to Source Control. For ease of use and to prove that this was a good thing we started with the application code.

Builds were kicked off in TeamCity and a consistent artifact was created; we also introduced automated testing.

We then retrofitted consistent environments to the application lifecycle – how’d we do the database side?

We used the Data Tier Application (DAC) model of creating an entity that contains all of the database and instance objects used by an application. So we generated a DAC out of the PROD system (of just the objects and schema definitions – NOT the data), stored it in Source Control and used sqlpackage to create 5 databases that were all standard (compared to PROD) – there is a sketch of that flow after the environment list below.

They were:

CI, QA, Integration, UAT, prePROD

CI – was there to test every build generated against.

QA — was there for nominated QA engineers to test functionality against

Integration – was there for Integration testing with certain backend/frontend systems

UAT – was there for User Acceptance Testing

prePROD – was there for a true copy of PROD

prePROD was a special case as this was a full replica of PROD but with scrambled data.
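To give a flavour of that dacpac flow (the server, database and file names below are illustrative only – not the client’s real ones), the extract/publish steps with sqlpackage look roughly like this:

# Extract a schema-only dacpac from PROD (no data is included by default)
sqlpackage.exe /Action:Extract /SourceServerName:"PRODSQL01" /SourceDatabaseName:"ClientDB" /TargetFile:"ClientDB.dacpac"

# Publish that dacpac to stand up a standard environment database, e.g. the CI one
sqlpackage.exe /Action:Publish /SourceFile:"ClientDB.dacpac" /TargetServerName:"CISQL01" /TargetDatabaseName:"ClientDB_CI"

Repeat the publish step against QA, Integration, UAT and prePROD and every environment starts from the same PROD-shaped schema.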

The application tier was standardised according to my OCD-like naming scheme – see my post here about why this was a good thing.

By making the 3rd party developers push all application & database code changes to source control, we then had a versioned state of what changes would be made.

This wasn’t a “hit the GO button” scenario – we had to try things out manually and carefully, and there were things that went bump. BUT they went bump in CI and Integration and even prePROD – and by following Continuous Delivery principles and applying them to the database, we have in the past two years never gone bump in PROD.

Now we couldn’t do all the things we wanted – this was not a greenfields project. So here is what we compromised on:

We had to use Invoke-Sqlcmd for deploying the changes against the databases.

[Image: SQLCMD script]

This was OK – because we were using a variablised script (which the 3rd party contractor had never done before, so we showed them how) and it was deployed consistently across all databases – any issues we caught early on.
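To make that concrete (the server, database and variable names here are illustrative, not the client’s), a variablised deploy step looked something along these lines:

# Deploy the same change script to any environment by swapping the variable values
Invoke-Sqlcmd -ServerInstance "CISQL01" -Database "ClientDB_CI" `
    -InputFile ".\DBScript\ChangeScript.sql" `
    -Variable "Environment=CI", "AppUser=ClientApp_CI"

Inside the .sql file the values are referenced as $(Environment) and $(AppUser), so the one script serves every environment in the lifecycle.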

Also our deployment process step:

[Image: process step]

was standard across all environments in our database/application deployment lifecycle.

Here is what the standardised packages from the Continuous Integration build looked like:

[Image: packages]

The best part was that ALL database changes in the DBScript package were in the same version as the associated application changes. In fact we also included any configuration changes in the repo – you know, Infrastructure as Code type stuff.

So let’s measure how the changes are against the initial issues:

  1. error prone manual deploys to PROD
    The deploys were automated and any errors were caught early
  2. no UAT or QA systems so effectively deploying straight from DEV to PROD.
    We had PROD-like environments that could give us feedback immediately and protect PROD from out-of-process changes.
  3. little testing and what testing there was — manual and laborious
    We introduced automated tests in the Continuous Integration build steps and automated the deploy.
  4. no source control of database changes
    All database changes were pushed up to source control and were versioned.
  5. no central repository (or consistency) of configuration settings
    All configuration settings were stored in a central repository and deployed out as part of Continuous Delivery processes. Everything was a variable and consistent across the whole application/database lifecycle.
  6. deploy outages that were 4 hours long
    Deploys were now around 5 minutes as everything was consistent and proven and we never rolled back once. We could if we wanted to – but didn’t need to.

The upshot of using Continuous Delivery for their ‘brownfield’ system was that database & application changes were consistent, reliable and automated.

The client was very happy, the 3rd party contractor was very happy too (as their time-in-lieu and angst were reduced), and the client approached us to manage more of their systems.

For more things like this — go read the other posts about this month’s T-SQL Tuesday

Yip.

 


Removing an App Service Plan in Azure

In my previous post I adhered to my O-OCD (Operational OCD) and standardised my App Service Plan name to fit in line with my Database, User and App Service naming standard.

Why do I standardise everything? Have a quick read here:

The 20 second rule (or why standards matter).

Here is what we have:

[Image: AppService_removal]

Now that I have standardised, I want to get rid of the historically (and nonsensically) named “CientPortalAppServicePlan”.

Now I could just click on it in the portal and hit “Delete”:

[Image: AppService_removal_2]

But where is the scripting fun in that?

Also I may one day need to do this for MANY App Service Plans, so let’s do this via PowerShell:

Login-AzureRmAccount
$resourcegroupname = "AzureDEMO_RG"
$location = "Australia East"
$AppServicePlanName = "CientPortalAppServicePlan"

Remove-AzureRmAppServicePlan -Name $AppServicePlanName `
    -ResourceGroupName $resourcegroupname

[Image: AppServicePlan_removal]

Click “Yes” to the pop-up and it is now gone and I can (now) relax.
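And if one day I really do need to remove MANY App Service Plans, a rough sketch along these lines would loop through them (the second plan name here is hypothetical):

$resourcegroupname = "AzureDEMO_RG"
$oldPlans = @("CientPortalAppServicePlan", "SomeOtherOldAppServicePlan")

foreach ($plan in $oldPlans) {
    # -Confirm:$false skips the confirmation prompt for each plan
    Remove-AzureRmAppServicePlan -Name $plan -ResourceGroupName $resourcegroupname -Confirm:$false
}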

Which means that I have a couple of Azure App Services and Azure SQL Databases — and they all have standardised names — which NOW means I can do some awesome Continuous Integration and Continuous Delivery/Deployment stuff.

Yip.

Creating an App Service in Azure that connects to an Azure SQL Database.

Using the methodology listed in my previous blog post on creating an Azure SQL Database we now have a Continuous Integration database (named CIAzureWebAppDEMO_DB).

Now that we have a database, it’s time to create the web app. This will be an App Service in the Azure portal.

For more background information have a look at Microsoft’s documents on this area:

https://docs.microsoft.com/en-us/azure/app-service-web/app-service-web-app-azure-resource-manager-powershell
  1. We’ll create an App Service Plan first (mainly as my old name was … old):
[Image: App_Service_reason]
This App Service Plan has a historical and nonsensical name, so it has to go…

So using PowerShell:

$resourcegroupname = "AzureDEMO_RG"
$location = "Australia East"
$AppServicePlanName = "AzureDEMO_AppServicePlan"

New-AzureRmAppServicePlan -Name $AppServicePlanName `
    -Location $location `
    -ResourceGroupName $resourcegroupname `
    -Tier Free

Next we’ll create our App Service:

$resourcegroupname = "AzureDEMO_RG"
$location = "Australia East"
$AppServiceName = "CIAzureWebAppDEMO"
$AppServicePlanName = "AzureDEMO_AppServicePlan"

New-AzureRmWebApp -Name $AppServiceName `
    -AppServicePlan $AppServicePlanName `
    -ResourceGroupName $resourcegroupname `
    -Location $location

We can now configure the App Service’s application settings using PowerShell or through the portal. All Azure Web Apps need configuration values, and database-based applications also need their database Connection String values configured.

Therefore because this app service connects to an Azure SQL database we need to assign it a connection string.

Portal

You can use the Azure portal to configure your newly created App Service:

[Image: Configure_AppService_DB_Cxn]

  1. Open the Azure Management Portal via https://portal.azure.com
  2. Navigate to the Web App within the portal.
  3. Under “All settings” open up the “Application settings” pane
  4. Scroll down to the “Connection strings” section
  5. Configure as necessary.

PowerShell:

I prefer to use PowerShell as I can feed this into my Continuous Delivery pipeline as I build applications/databases on the fly.

We will put the connection string details into a PowerShell hash table (with the password redacted here):

$resourcegroupname = "AzureDEMO_RG"
$location = "Australia East"
$AppServiceName = "CIAzureWebAppDEMO"
$AppServicePlanName = "AzureDEMO_AppServicePlan"

# Create Hash variable for Connection Strings
$hash = @{}

# Add a Connection String to the Hash by using a Hash for the Connection String details
$hash["defaultConnection"] = @{ Type = "SqlAzure"; Value = "Data Source=tcp:webdbsrv69.database.windows.net,1433;Initial Catalog=CIAzureWebAppDEMO_DB;User Id=CIAzureWebAppDEMO@webdbsrv69.database.windows.net;Password=<redacted>;" }

# Save Connection String to Azure Web App
Set-AzureRmWebApp -ResourceGroupName $resourcegroupname -Name $AppServiceName -ConnectionStrings $hash

So there we have it – an App Service created in a specific App Service Plan that can now connect to an Azure SQL Database. Using this methodology I could have configured anything/everything in the application settings.
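For example (the setting names below are purely illustrative), ordinary application settings can be pushed the same way with a hash table:

# Push plain key/value app settings to the same Web App
$appSettings = @{
    "Environment"   = "CI"
    "FeatureToggle" = "Off"
}
Set-AzureRmWebApp -ResourceGroupName $resourcegroupname -Name $AppServiceName -AppSettings $appSettings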

Now let’s get rid of that badly named App Service Plan in my next post.

Yip.

Removing an Azure SQL Database using PowerShell

In this post we will remove some databases located in Azure.

This relates to my last post, where I am cleaning up some badly named databases and replacing them with standardised database names.

So we have this situation:

[Image: database_create_1]

We want to get rid of the top two databases AzureWebAppFunctionalTestDB and AzureWebAppQADB.

So we’ll check we’re logged in:

PS > Get-AzureAccount

If not – then just login

PS > Login-AzureRmAccount

It is always a good idea to list the databases on your server:

PS > Get-AzureSqlDatabase -ServerName $servername

And now we’ll simply remove the database:

# Some variables for our resources ($servername was captured earlier from Get-AzureSqlDatabaseServer)
$resourcegroupname = "AzureDEMO_RG"
$databasename1 = "AzureWebAppQADB"
$databasename2 = "AzureWebAppFunctionalTestDB"

Remove-AzureSqlDatabase -ServerName $servername -DatabaseName $databasename1

Remove-AzureSqlDatabase -ServerName $servername -DatabaseName $databasename2

You will then be asked if you really want to remove the database:

[Image: database_remove]

Answer appropriately and voilà – we have removed our Azure SQL Databases via PowerShell.
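If you want to double-check, list the databases on the server again (same cmdlet as before) – only the standardised names should remain:

# Confirm the badly named databases are gone
Get-AzureSqlDatabase -ServerName $servername | Select-Object Name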

 

Yip.

Creating an Azure SQL Database via PowerShell

This post is about using the brilliance of PowerShell to script the creation of databases in Azure.

Background:

Apart from the obvious answer of “why not?”, the actual reason is one of standards – and how I did not adhere to my own standards, the ones I’ve been preaching for the past 17 years.

For background read my post on standards:

The 20 second rule (or why standards matter).

And you will see how badly I strayed…

Exhibit A:

[Image: the WHY of this presentation]

Yes… my application name and database name do not match my standard of “everything should be able to be tracked from web to database via a name”.

So of course, being a guy who does Data Platform stuff, we’ll create the database first, then the App Service, and then we’ll adjust our release in VSTS. These of course will be split over 2 or more blog posts.

Quick discussion on Tiers in Azure:

There are quite a few different pricing tiers available with Azure for a Microsoft Azure SQL Database.

This allows us to select the capacity metrics that are relevant for our application, without paying for more than we actually need. Switching between tiers is very easy. If we have short periods of high performance demand, we can scale our Azure SQL database up to meet those demands. Later on, after the demand diminishes, we can scale the database back down to a lower pricing tier, thus saving us costs.

For this database I’m going to use the Basic tier.
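As a rough illustration of how easy that scaling is (using my demo resource group, server and database names – yours will differ), moving an existing database up and back down is a single cmdlet each way:

# Scale up to Standard S2 for a busy period...
Set-AzureRmSqlDatabase -ResourceGroupName "AzureDEMO_RG" -ServerName "webdbsrv69" `
    -DatabaseName "CIAzureWebAppDEMO_DB" -Edition "Standard" -RequestedServiceObjectiveName "S2"

# ...and back down to Basic once the demand drops
Set-AzureRmSqlDatabase -ResourceGroupName "AzureDEMO_RG" -ServerName "webdbsrv69" `
    -DatabaseName "CIAzureWebAppDEMO_DB" -Edition "Basic" -RequestedServiceObjectiveName "Basic"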

PowerShell Scripts:

So in PowerShell we’ll log into our Azure subscription:

PS > Login-AzureRmAccount

Which will ask us to login.

We then see our subscription details:

Environment : AzureCloud
Account : moosh69
TenantId : [some hexadecimal numbers here]
SubscriptionId : [some more hexadecimal numbers here]
SubscriptionName : [Name of your subscription]
CurrentStorageAccount :

OK, for fun let’s see what databases we have using PowerShell.

Firstly we’ll find out our database server:

PS > Get-AzureSqlDatabaseServer

Which will list our database server and we’ll create a variable $servername using the output of the above.
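If there is only one server in the subscription, a minimal sketch to capture it looks like this:

# Grab the server name from the (single) server returned
$servername = (Get-AzureSqlDatabaseServer)[0].ServerName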

We now run this command to list the databases in that server:

Get-AzureSqlDatabase -ServerName $servername

Name : AzureWebAppQADB
CollationName : SQL_Latin1_General_CP1_CI_AS
Edition : Free
MaxSizeGB : 0
MaxSizeBytes : 33554432
ServiceObjectiveName : Free
ServiceObjectiveAssignmentStateDescription :
CreationDate : 07/01/2017 8:18:54 AM
RecoveryPeriodStartDate : 27/04/2017 8:29:07 PM

Name : AzureWebAppFunctionalTestDB
CollationName : SQL_Latin1_General_CP1_CI_AS
Edition : Basic
MaxSizeGB : 2
MaxSizeBytes : 2147483648
ServiceObjectiveName : Basic
ServiceObjectiveAssignmentStateDescription :
CreationDate : 08/01/2017 5:57:42 PM
RecoveryPeriodStartDate : 26/04/2017 8:07:56 PM

Name : master
CollationName : SQL_Latin1_General_CP1_CI_AS
Edition : System
MaxSizeGB : 30
MaxSizeBytes : 32212254720
ServiceObjectiveName : System0
ServiceObjectiveAssignmentStateDescription :
CreationDate : 02/01/2017 10:09:58 PM
RecoveryPeriodStartDate :

We’re not too interested in the MASTER database.

So we’re going to create 2 new databases (and eventually point our app services at them).

Here is the very simple code:

# Some variables for our resources
$resourcegroupname = "AzureDEMO_RG"
$databasename1 = "QAAzureWebAppDEMO_DB"
$databasename2 = "FTAzureWebAppDEMO_DB"

New-AzureRmSqlDatabase -ResourceGroupName $resourcegroupname `
    -ServerName $servername `
    -DatabaseName $databasename1 `
    -Edition "Basic"

New-AzureRmSqlDatabase -ResourceGroupName $resourcegroupname `
    -ServerName $servername `
    -DatabaseName $databasename2 `
    -Edition "Basic"

And just like that we’ve created our first databases in Azure using PowerShell in about 5 seconds.
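To double-check, we can list the databases on the server via the ARM cmdlet – the two new standardised names should now appear:

# List databases on the server and show name and edition
Get-AzureRmSqlDatabase -ResourceGroupName $resourcegroupname -ServerName $servername |
    Select-Object DatabaseName, Edition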

Yip.