var Microsoft_Data_Platform_MVP = "Yip.";

The 2nd June 2017 will always be something of a special day for me.

It was the day I was awarded my Microsoft MVP for Data Platform.

Warning: this isn’t a technical blog post, nor is it a complete “how to get an MVP” guide.

For me – being awarded the Microsoft MVP was immensely humbling – mainly because it involved people nominating me and, of course, a review of the contributions I had made to the community. The first bit was the big one for me – people nominated me because they thought I was worthy of an MVP award.

That blew me away.

And was daunting.

Daunting — because for the past 4 years I’ve been involved in the community, running conferences, running a User Group and then speaking about things that had baffled me but I had worked out – so I thought that others might have struggled too…

..and might want to hear my battle/war stories.

I never once did any of my community things for recognition; I did it because for years I had been selfish – I had consumed many blog posts/forums/articles that got me out of a sticky situation.

So now was my time to give back. In 2014 I made a conscious decision to start presenting and really try and help/inspire at least one person every time I talked.

My tagline was “make stuff go” as that is what all of this is about – making stuff go in the best possible, efficient manner so all of our work/life balance is “sweet as”.

And then I had my first MVP nomination last year.

Nomination:

I will always be grateful to the people who nominated me – a key phrase of MVPs is “NDA” and for the purposes of this post I am being NDA about the people who nominated me.

You know who you are, I know who you are, heck — Microsoft know who you are!!

And I personally want to thank you for all making me speechless when I got the notification — speechless that you believed in what I was trying to do.

Here’s the thing – for my first nomination I didn’t fill in the ‘paperwork’ for 2 months as I didn’t feel worthy – especially as there was a person who I knew had been nominated and I felt that in the finite world of MVP awards they deserved it way more than I did.

I did eventually fill it out, as someone told me that whilst that was honorable – it was slightly dumb (their honest words, not mine). I actually submitted that first one after my Ignite presentation, as after the people who came up and asked questions/talked to me I felt I finally had something to share/give to the global community.

And also that date is/will be forever etched in my memory.

Finding a niche

In a community of brilliant people who can tune SQL Server, are masters of AGs and even know more than me about tempDB tuning – what did I have to offer that I knew could make a massive step change in our industry?

Well I had for years been working with application developers at Jade Software making stuff go. It involved things like Continuous Integration and Continuous Delivery – things associated with DevOPs.

So I decided why not talk/do/share about database deploys that could have some DevOPs brilliance applied to them. For years now Application DEVs have benefited from DevOPs – whereas “databases are too hard/important” was a common phrase I heard.

I had someone once tell me “Being nominated for MVP is the easy part, so someone took 2 minutes out of their day to think about you. Well done. The hard part is proving you’re worthy of becoming an MVP”. They were right.

MVP is about Community:

I consider myself lucky and spoilt because I got to hang out with some awesome MVPs in my neck of the woods.

I’ll take one step back and mention someone who wasn’t an MVP at the time (but now is) and that’s Steve Knutson.

I’ve known Steve for 23.5 years. We met at university and over that time he’s been a mate. He helped me out when I was getting my MCSE in 1999 (he sat/passed SQL Server 6.5 exams at the time!!).  We lost contact for about 7 years — but caught up again and Steve really is that nice guy you read about. He’s an extremely focused guy and willing to help others gain knowledge.

Martin Catherall (t | b) and Warwick Rudd (t | b | w) were two blokes who I made friends with (when I got up the courage to talk to a Microsoft Certified Master….) and who helped me so much over the past few years – not just with MVP related things but heaps of other things.

And that to me is what an MVP is about – someone who cares a heap about sharing their knowledge and mentoring those who don’t yet know things.

Combined with Rob Farley (t | b | w), one of the most insanely brilliant, intelligent, quick-witted people I’ve ever met, I had a triumvirate of MVP mentorship.

So I had three “local” MVPs, but in fact I have 2 other people who enriched my life both at a personal level and at a mentor level.

Nagaraj (Raj) Venkatesan (t | b) and Melody Zacharias (t | b | w) – both of whom are like family to me. Their positivity and support when I needed advice or just a “hey bro, how’s things” was awesome, and I hope to be the same kind of mentor within the remote (global) community that you both were for me.

Now – here’s the thing – I actually had to prove that I was worth something – and that reads way worse than it sounded in my head when I wrote this……………..

Yes MVPs will help you out – but you also have to help yourself out. Life is not about spoon feeding. The guys were awesome to bounce ideas off for things I wanted to do in the community and I really would not be here without them.

Now of course there are others (a special mention to Reza Rad, whose work in the Global Community I would love to emulate) – but I’ve chosen Martin, Warwick and Rob because they were the guys I skyped, messaged and talked to the most and lads – I am so appreciative that you put up with me, that you consoled me and counselled me through the years.

The one thing I learned becoming an MVP:

Stay true to why you started doing all this.

If you’re just speaking/writing to try and rack up points or whatever to become an MVP – I’m sorry but (in my opinion) that is not a good reason to become an MVP.

While I was going through the process I stopped thinking about MVP (where I could…) and stuck to what I’d been doing the past 3 years. I continued to hone my craft, to extend my reach in the GLOBAL community and most of all to find new ways to help people “make stuff go”.

I have seen people who want to become an MVP go slightly insane about it or even go about things the wrong way. Like really wrong…

For me, in the months gone by, I decided to forget about the MVP and stick to first principles – sharing knowledge is why I got into this game. Being able to see after 60 minutes that I’ve given someone a eureka moment or epiphany — that is the goal.

I would spend lunch hours talking to anyone who needed help with DevOPs, databases or Azure. Because I finally found I had knowledge and could help/mentor others which was such a great feeling of accomplishment.

Blogging for me was a new thing – again I had been selfish over the years reading others’ work. Blogging allowed me to try stuff out, then write about what I’d done. Some of it was very simple stuff – but you know what – there is a place for simple technical blogs. I know because over the years I had done many searches for just simple things. So I am finding a balance – treating my technical blogging like a journey.

Create one database in Azure using one line of PowerShell — done

Creating many databases in Azure using one line of PowerShell — done

Creating a Continuous Delivery Pipeline in 59 minutes using PowerShell — coming soon

My blog post on VSTS hosted agents has got some good hits of late. That was a blog post that came out of my scrooge-like tendency to not spend money on build minutes!!

All of these things — speaking/mentoring others/writing — were my new first principles and I stuck to them. I did think about the MVP thing — I’m human, but I didn’t let it consume me.

Whenever those thoughts threatened to consume me, I would purposely put my energy into writing/doing more content – finding better ways of delivering knowledge back to the community.

May 2017 was a massive month for me – I ran a conference, spoke 7 times and traveled to SQL Saturday Brisbane.

I was exhausted at the end of it, but so rewarded as I helped 2nd year ICT students, a Virtual Chapter, a User Group in Canada, a User Group in Christchurch, co-ran Code Camp Christchurch where I spoke twice (twice!!) and of course — the Brisbanites.

I was going to spend last weekend writing up some more content and plan for the upcoming DevOPs Boot Camp which I’m running.

But Friday morning at 6:20am I checked my emails — which I do every morning — and there it was. I had been awarded the Microsoft MVP Award. For the first time in my life I was speechless. I’m man enough to admit I cried. Mostly because I didn’t think I was worthy enough to be an MVP and that other people had thought I was worthy enough.

My only regret is that I can’t tell my mum – a lady who took in the strays and unfortunates of society and endeavored to help them. She instilled in me from a young age that we were so lucky with what we had and that we have to help those who aren’t so lucky.

It stuck with me over the years and is my first principle — to help people grow.

That is why 4 days in, I have come to the place where I know I am worthy of this MVP award.

I am also excited because I am now moving from the phase of achieving an MVP award to delivering like an MVP would.

Yip.

 

T-SQL Tuesday #90 — Bringing Continuous Delivery to a ‘brownfield’ database system

tsql2sday150x150-1[1]

This blog post is part of T-SQL Tuesday #90 – Shipping Database Changes.

I have decided to write about a client’s SQL Server-based system that was having issues with deployment of code changes. The code changes were being deployed by a third party contractor.

The client engaged with the company I work for to see if we could help out as we were deploying to their other systems with zero issues in PROD.

The issues they were experiencing with the other contractor were:

  1. error prone manual deploys to PROD
  2. no UAT or QA systems so effectively deploying straight from DEV to PROD.
  3. little testing and what testing there was — manual and laborious
  4. no source control of database changes
  5. no central repository (or consistency) of configuration settings
  6. deploy outages that were 4 hours long

So rather than a greenfields project, where we could do anything and everything with Continuous Integration and Continuous Delivery, we were going to backfill these against a running PROD system – carefully.

So this is what is termed ‘brownfield’ – a running legacy system…

We worked with the 3rd party contractor as they too were having issues – their DBAs were working from 3am until 7am and it wasn’t a great time for anyone.

Note: we were not replacing the 3rd party contractor – we were helping everyone involved. The particular client was very important to us and we were happy to help them out across their other systems we did not actively manage (yet).

The first step was to introduce Source Control and Continuous Integration – so all code that was being developed was pushed up to Source Control. For ease of use and to prove that this was a good thing we started with the application code.

Builds were kicked off in TeamCity and a consistent artifact was created; we also introduced automated testing.

We then retrofitted consistent environments to the application lifecycle – so how did we do the database side?

We used the Data Tier Application (DAC) model of creating an entity that contains all of the database and instance objects used by an application. So we generated a DACPAC from the PROD system (just the objects and schema definitions – NOT the data), stored it in Source Control and used sqlpackage to create 5 databases that were all consistent with PROD (a rough sqlpackage sketch follows the environment list below).

They were:

CI, QA, Integration, UAT, prePROD

CI – was there to test every build generated against.

QA — was there for nominated QA engineers to test functionality against

Integration – was there for Integration testing with certain backend/frontend systems

UAT – was there for User Acceptance Testing

prePROD – was there for a true copy of PROD

prePROD was a special case as this was a full replica of PROD but with scrambled data.
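As a rough illustration of that sqlpackage step (a sketch only – the server names, database name and paths here are made up, not the client’s real ones), extracting the schema-only DACPAC from PROD and publishing it to stand up one of the lower environments looked something like this:

# Sketch only – paths, server and database names are hypothetical
$sqlpackage = "C:\Program Files\Microsoft SQL Server\130\DAC\bin\sqlpackage.exe"

# Extract the schema (objects only, no data) from PROD into a DACPAC
& $sqlpackage /Action:Extract `
/SourceServerName:"PRODSQL01" `
/SourceDatabaseName:"ClientApp" `
/TargetFile:"C:\Builds\ClientApp.dacpac"

# Publish the DACPAC to create/align a lower environment, e.g. CI
& $sqlpackage /Action:Publish `
/SourceFile:"C:\Builds\ClientApp.dacpac" `
/TargetServerName:"CISQL01" `
/TargetDatabaseName:"ClientApp_CI"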

The application tier was standardised according to my OCD-like naming scheme – see my post here about why this was a good thing.

By making the 3rd party developers push all application & database code changes to source control, we then had a versioned state of what changes would be made.

This wasn’t a “hit the GO button” scenario — we had to try things out manually and carefully and there were things that went bump. BUT they went bump in CI and Integration and even prePROD – and by following Continuous Delivery principles and applying them to the database, we have never gone bump in PROD in the past two years.

Now we couldn’t do all the things we wanted – this was not a greenfields project. So here is what we compromised on:

We had to use Invoke-Sqlcmd for deploying the changes against the databases.

SQLCMD Script

This was OK – because we were using a variablised script (which the 3rd party contractor had never done before, so we showed them how) and it was deployed consistently across all databases — any issues were caught early on.
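For illustration, a variablised deploy along those lines might look something like this (a sketch only – the server, database, script path and variable names here are hypothetical):

# Sketch only – server, database, path and variable names are hypothetical
$deployVars = @(
"Environment=UAT",
"AppUser=ClientAppUser"
)

Invoke-Sqlcmd -ServerInstance "UATSQL01" `
-Database "ClientApp_UAT" `
-InputFile "C:\Deploy\DBScript.sql" `
-Variable $deployVars `
-QueryTimeout 600

Inside the script the values are referenced with SQLCMD syntax – e.g. $(Environment) – so the same artifact deploys unchanged to every environment.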

Also our deployment process step:

process step

was standard across all environments in our database/application deployment lifecycle.

Here is what the standardised packages from the Continuous Integration build looked like:

Packages

The best part was that ALL database changes in the DBScript package were in the same version as the associated application changes. In fact we included any configuration changes in the repo as well – you know, Infrastructure as Code type stuff.

So let’s measure the changes against the initial issues:

  1. error prone manual deploys to PROD
    The deploys were automated and any errors were caught early
  2. no UAT or QA systems so effectively deploying straight from DEV to PROD.
    We had PROD-like environments that could give us feedback immediately and protect PROD from out-of-process changes.
  3. little testing and what testing there was — manual and laborious
    We introduced automated tests in the Continuous Integration build steps and automated the deploy.
  4. no source control of database changes
    All database changes were pushed up to source control and were versioned.
  5. no central repository (or consistency) of configuration settings
    All configuration settings were stored in a central repository and deployed out as part of the Continuous Delivery processes. Everything was a variable and consistent across the whole application/database lifecycle.
  6. deploy outages that were 4 hours long
    Deploys were now around 5 minutes as everything was consistent and proven and we never rolled back once. We could if we wanted to – but didn’t need to.

The upshot of using Continuous Delivery for their ‘brownfield’ system was that database & application changes were consistent, reliable and automated.

The client was very happy, the 3rd party contractor was very happy too (as their time-in-lieu and angst were reduced) and the client approached us to manage more of their systems.

For more things like this — go read the other posts about this month’s T-SQL Tuesday.

Yip.

 


Removing an App Service Plan in Azure

In my previous post I adhered to my O-OCD (Operational OCD) and standardised my App Service Plan name to fit in line with my Database, User and App Service naming standard.

Why do I standardise everything? Have a quick read here:

The 20 second rule (or why standards matter).

Here is what we have:

AppService_removal

Now that I have standardised – I want to get rid of the historically (and nonsensically) named “CientPortalAppServicePlan”.

Now I could just click on it in the portal and hit “Delete”:

AppService_removal_2

But where is the scripting fun in that?

Also I may one day need to do this for MANY App Service Plans, so let’s do this via PowerShell:

Login-AzureRmAccount
$resourcegroupname = "AzureDEMO_RG"
$location = "Australia East"
$AppServicePlanName = "CientPortalAppServicePlan"

Remove-AzureRmAppServicePlan `
-Name $AppServicePlanName `
-ResourceGroupName $resourcegroupname

AppServicePlan_removal

Click “Yes” to the pop-up and it is gone – I can now relax.
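And if I ever do need to clear out several plans at once, something like this would do it (a sketch only – the name filter here is hypothetical, and -Force skips the confirmation pop-up):

# Sketch only – keep anything matching my standard naming, remove the rest
$resourcegroupname = "AzureDEMO_RG"

Get-AzureRmAppServicePlan -ResourceGroupName $resourcegroupname |
Where-Object { $_.Name -notlike "AzureDEMO_*" } |
ForEach-Object {
# -Force suppresses the "are you sure" prompt
Remove-AzureRmAppServicePlan -Name $_.Name -ResourceGroupName $resourcegroupname -Force
}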

Either way – I now have a couple of Azure App Services and Azure SQL Databases — all with standardised names — which means I can do some awesome Continuous Integration and Continuous Delivery/Deployment stuff.

Yip.

Creating an App Service in Azure that connects to an Azure SQL Database.

Using the methodology listed in my previous blog post on creating an Azure SQL Database, we now have a Continuous Integration database (named CIAzureWebAppDEMO_DB).

Now that we have a database, it’s time to create the web app. This will be an App Service in the Azure portal.

For more background information have a look at Microsoft’s documents on this area:

https://docs.microsoft.com/en-us/azure/app-service-web/app-service-web-app-azure-resource-manager-powershell
  1. We’ll create an App Service Plan first (mainly as my old name was … old):
App_Service_reason
This App Service Plan has a historical and nonsensical name so has to go…

So using PowerShell:

$resourcegroupname = "AzureDEMO_RG"
$location = "Australia East"
$AppServicePlanName = "AzureDEMO_AppServicePlan"

New-AzureRmAppServicePlan `
-Name $AppServicePlanName `
-Location $location `
-ResourceGroupName $resourcegroupname `
-Tier Free

Next we’ll create our App Service:

$resourcegroupname = "AzureDEMO_RG"
$location = "Australia East"
$AppServiceName = "CIAzureWebAppDEMO"
$AppServicePlanName = "AzureDEMO_AppServicePlan"

New-AzureRmWebApp `
-Name $AppServiceName `
-AppServicePlan $AppServicePlanName `
-ResourceGroupName $resourcegroupname `
-Location $location

We can now configure the App Service’s application settings using PowerShell or through the portal. All Azure Web Apps need configuration values, and database-backed applications also need to have their database Connection String values configured.

Because this App Service connects to an Azure SQL Database, we need to assign it a connection string.

Portal

You can use the Azure portal to configure your newly created App Service:

Configure_AppService_DB_Cxn

  1. Open the Azure Management Portal via https://portal.azure.com
  2. Navigate to the Web App within the portal.
  3. Under “All settings” open up the “Application settings” pane
  4. Scroll down to the “Connection strings” section
  5. Configure as necessary.

PowerShell:

I prefer to use PowerShell as I can feed this into my Continuous Delivery pipeline as I build applications/databases on the fly.

We will pass the connection string details in as a hash table (and for security reasons the password is redacted below):

$resourcegroupname = "AzureDEMO_RG"
$location = "Australia East"
$AppServiceName = "CIAzureWebAppDEMO"
$AppServicePlanName = "AzureDEMO_AppServicePlan"

# Create Hash variable for Connection Strings
$hash = @{}

# Add a Connection String to the Hash by using a Hash for the Connection String details
$hash["defaultConnection"] = @{ Type = "SqlAzure"; Value = "Data Source=tcp:webdbsrv69.database.windows.net,1433;Initial Catalog=CIAzureWebAppDEMO_DB;User Id=CIAzureWebAppDEMO@webdbsrv69.database.windows.net;Password=<redacted>;" }
# Save Connection String to Azure Web App
Set-AzureRmWebApp -ResourceGroupName $resourcegroupname -Name $AppServiceName -ConnectionStrings $hash

So there we have it – an App Service created in a specific App Service Plan that can now connect to an Azure SQL Database. Using this methodology I could have configured anything/everything in the application settings.
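For example, ordinary app settings can be pushed in much the same way (a sketch only – the setting names and values here are made up, and note that this call sets the full app settings collection for the site):

# Sketch only – setting names/values are hypothetical
$appSettings = @{
"Environment" = "CI"
"FeatureFlag" = "false"
}

Set-AzureRmWebApp -ResourceGroupName $resourcegroupname `
-Name $AppServiceName `
-AppSettings $appSettings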

Now let’s get rid of that badly named App Service Plan in my next post.

Yip.

Removing an Azure SQL Database using PowerShell

In this post we will remove some databases located in Azure.

This relates to my last post, where I am cleaning up some badly named databases and replacing them with standardised database names.

So we have this situation:

database_create_1

We want to get rid of the top two databases AzureWebAppFunctionalTestDB and AzureWebAppQADB.

So we’ll check we’re logged in:

PS > Get-AzureAccount

If not – then just log in:

PS > Login-AzureRmAccount

It is always a good idea to list the databases on your server:

PS > Get-AzureSqlDatabase -ServerName $servername

And now we’ll simply remove the database:

# Some variables for our resources
# ($servername was set from Get-AzureSqlDatabaseServer, as in my previous post)
$databasename1 = "AzureWebAppQADB"
$databasename2 = "AzureWebAppFunctionalTestDB"

Remove-AzureSqlDatabase -ServerName $servername -DatabaseName $databasename1

Remove-AzureSqlDatabase -ServerName $servername -DatabaseName $databasename2

You will then be asked if you really want to remove each database:

database_remove

Answer appropriately and voilà – we have removed our Azure SQL Databases via PowerShell.
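As a quick sanity check afterwards, list the databases on the server again and confirm the two old ones are gone:

# Confirm the old databases no longer appear (again using the $servername set earlier)
Get-AzureSqlDatabase -ServerName $servername | Select-Object Name, Edition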

 

Yip.

Creating an Azure SQL Database via PowerShell

This post is about using the brilliance of PowerShell to script the creation of databases in Azure.

Background:

Apart from the obvious answer of “why not?”, the actual reason is one of standards – and how I did not adhere to my own standards that I’ve been preaching for the past 17 years.

For background read my post on standards:

The 20 second rule (or why standards matter).

And you will see how badly I strayed…

Exhibit A:

The WHY of this presentation 3

Yes… my application name and database name do not match my standard of “everything should be able to be tracked from web to database via a name”.

So of course, being a guy who does Data Platform stuff – we’ll create the database first, then the app service, then we’ll adjust our release in VSTS. These of course will be split over 2 or more blog posts.

Quick discussion on Tiers in Azure:

There are quite a few different pricing tiers available for a Microsoft Azure SQL Database.

This lets us select the capacity metrics that are relevant for our application without paying for more than we actually need. Switching between tiers is very easy: if we have short periods of high performance demand, we can scale our Azure SQL database up to meet those demands and then, once the demand diminishes, scale it back down to a lower pricing tier, saving us costs.
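To show what that scaling looks like with the cmdlets, here’s a sketch of bumping a database up to Standard S1 and back down to Basic – the resource group, server and database names are the ones used later in this series, and the service objective values are just examples:

# Sketch only – scale up for a busy period...
Set-AzureRmSqlDatabase -ResourceGroupName "AzureDEMO_RG" `
-ServerName "webdbsrv69" `
-DatabaseName "QAAzureWebAppDEMO_DB" `
-Edition "Standard" `
-RequestedServiceObjectiveName "S1"

# ...then scale back down once the demand drops
Set-AzureRmSqlDatabase -ResourceGroupName "AzureDEMO_RG" `
-ServerName "webdbsrv69" `
-DatabaseName "QAAzureWebAppDEMO_DB" `
-Edition "Basic" `
-RequestedServiceObjectiveName "Basic"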

For this database I’m going to use the Basic tier.

PowerShell Scripts:

So in PowerShell we’ll log into our Azure subscription:

PS > Login-AzureRmAccount

Which will ask us to log in.

We then see our subscription details:

Environment : AzureCloud
Account : moosh69
TenantId : [some hexadecimal numbers here]
SubscriptionId : [some more hexadecimal numbers here]
SubscriptionName : [Name of your subscription]
CurrentStorageAccount :

OK, for fun let’s see what databases we have using PowerShell.

Firstly we’ll find out our database server:

PS > Get-AzureSqlDatabaseServer

Which will list our database server and we’ll create a variable $servername using the output of the above.
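If there’s only one server in the subscription, that variable can be set straight from the cmdlet output – a small sketch:

# Assuming a single Azure SQL Database server in the subscription
$servername = (Get-AzureSqlDatabaseServer).ServerName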

We now run this command to list the databases in that server:

Get-AzureSqlDatabase -ServerName $servername

Name : AzureWebAppQADB
CollationName : SQL_Latin1_General_CP1_CI_AS
Edition : Free
MaxSizeGB : 0
MaxSizeBytes : 33554432
ServiceObjectiveName : Free
ServiceObjectiveAssignmentStateDescription :
CreationDate : 07/01/2017 8:18:54 AM
RecoveryPeriodStartDate : 27/04/2017 8:29:07 PM

Name : AzureWebAppFunctionalTestDB
CollationName : SQL_Latin1_General_CP1_CI_AS
Edition : Basic
MaxSizeGB : 2
MaxSizeBytes : 2147483648
ServiceObjectiveName : Basic
ServiceObjectiveAssignmentStateDescription :
CreationDate : 08/01/2017 5:57:42 PM
RecoveryPeriodStartDate : 26/04/2017 8:07:56 PM

Name : master
CollationName : SQL_Latin1_General_CP1_CI_AS
Edition : System
MaxSizeGB : 30
MaxSizeBytes : 32212254720
ServiceObjectiveName : System0
ServiceObjectiveAssignmentStateDescription :
CreationDate : 02/01/2017 10:09:58 PM
RecoveryPeriodStartDate :

We’re not too interested in the MASTER database.

So we’re going to create 2 new databases (and eventually point our app services at them).

Here is the very simple code:

# Some variables for our resources
$resourcegroupname = "AzureDEMO_RG"
$databasename1 = "QAAzureWebAppDEMO_DB"
$databasename2 = "FTAzureWebAppDEMO_DB"

New-AzureRmSqlDatabase -ResourceGroupName $resourcegroupname `
-ServerName $servername `
-DatabaseName $databasename1 `
-Edition “Basic”

New-AzureRmSqlDatabase -ResourceGroupName $resourcegroupname `
-ServerName $servername `
-DatabaseName $databasename2 `
-Edition “Basic”

And just like that we’ve created our first databases in Azure using PowerShell in about 5 seconds.

Yip.

Moving Azure resources between subscriptions – especially VSTS Team Services Account

For the past 6 months I’ve been paying for my own Azure subscription. My work has a plan, but for some reason I (and others) who had an MSDN Subscription (Infrastructure) could not access the ‘free’ credits. I use Visual Studio Team Services (VSTS) in a lot of my DEMOs and thus was paying quite a bit of my own money to design/create/test my DEMOs before presenting them (which was also costing me run time $$).

Until today.

I finally got added to my work’s “Pay-As-You-Go” subscription. Which meant I had to transfer ALL my Azure resources. And I mean ALL MY AZURE resources.

So I decided to use the portal and it really was as simple as going into my Resource Groups and clicking change subscription.

CHanging Resource Groups
Then choosing the new subscription, creating a new resource group to move all the resources to and ticking that you understand that tools and scripts need to be updated.

CHanging Resource Groups_2

It took about 3 minutes and was very painless.

At this point I’d like to state that I could probably have used PowerShell, but I wanted to actually see if the portal would do what I needed (a rough PowerShell equivalent is sketched below).

It did.
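For completeness, the PowerShell route would look roughly like this (a sketch only – the destination subscription ID is a placeholder, and I didn’t actually run it this way):

# Sketch only – grab everything in the resource group and move it
$resources = Get-AzureRmResource | Where-Object { $_.ResourceGroupName -eq "AzureDEMO_RG" }

Move-AzureRmResource -DestinationSubscriptionId "00000000-0000-0000-0000-000000000000" `
-DestinationResourceGroupName "AzureDEMO_RG" `
-ResourceId $resources.ResourceId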

Except that when I ran up my DEMOs in VSTS — it couldn’t see any of my app services. Which wasn’t surprising as I had actually clicked “I understand that tools and scripts associated with moved resources will not work until I update them to use new resources IDs”.

Duh…

So I then spent most of the afternoon trying to move my VSTS Team Services resource. I got a heap of move failures.

Then I read:

https://social.msdn.microsoft.com/Forums/vstudio/en-US/7404fed9-f9cd-4d11-acae-a7726d7dbb15/move-visual-studio-team-services-to-another-subscription?forum=TFService

Which led me to:

https://blog.kloud.com.au/2014/01/06/how-to-link-existing-visual-studio-online-with-windows-azure/

I then used https://manage.windowsazure.com to unlink my VSTS subscription from my old ‘Pay-As-You-Go’ subscription and then link it to the new one.

CHanging Resource Groups_3

All that was needed now was to check in the Azure Portal that VSTS was on the new subscription (it was) and then to edit the Service Endpoint for each project that might use it in VSTS:

CHanging Resource Groups_4

What this means is I can now start creating a heap of Azure resources (mostly Data Platform stuff because… Data Platform).

So my next post is going to be about creating a heap of Azure resources.

Yip.

Installing a Visual Studio Team Services private build agent hosted On-Premises

This blog post is about a situation where I  use Visual Studio Team Services (VSTS) to build/deploy my DEMOs. Those DEMOs are what I use to illustrate Continuous Integration & Continuous Delivery (important parts of that thing called DevOPs).

I use my own personal VSTS account so that I am not showing anything commercially sensitive.

One thing with using a basic account is that you only get 240 build minutes per month. These builds are utilising the hosted build agents in Azure.

This works great – up until I have 4 DEMOs that are different and require testing before I present them. In May 2017 I had 4 DEMOs to conduct:

Code Camp Christchurch on May 13th, where I was speaking twice (a VSTS and Azure DEMO and a SQL Server Data Tools DEMO), both of which would be using VSTS.

A webinar for the DBA Virtual Chapter on May 18th which was on “DevOPs and the DBA”.

SQL Saturday Brisbane on May 27th – mostly the same DEMO as the DBA VC.

Whilst the DEMOs themselves would only use 12 minutes of build time – the preparation of them would push me over the limit. I also wanted to try quite a few new things related to SSDT, DACPACs and other things related to SQL Azure.

However…..

You can use your own hosted (known as ‘private’) build agent and connect it up to VSTS – so long as it has access and the right tools installed on it…

That is another reason to use your own hosted agent — you may have specific software required for your build(s). In my case I would be doing a lot of builds relating only to SQL Server, Azure SQL Database and associated tools.

Oh yeah — VSTS allows one free private hosted agent.

Hooray!!

So I have created a dedicated build agent just for my SQL Server activities.

Here is a step-by-step guide to installing a private hosted build agent. For authentication I will be using a Personal Access Token (PAT) for the agent to authenticate to VSTS.

  1. Log into VSTS and from your home page, open your profile. Go to your security details.
    VSTS_Agent_0
  2. Configure the PAT — for the scope select Agent Pools (read, manage) and make sure all the other boxes are cleared.
    VSTS_Agent_0_1
  3. Save the token somewhere as you will need it when installing/configuring the agent on the on-premises server.
  4. Log on to the machine using an account with which you have permissions to install software and also access VSTS.
  5. In your web browser, sign on to VSTS, and navigate to the Agent pools tab: https://{your_account}.visualstudio.com/_admin/_AgentPool

    VSTS_Agent

  6. Click Download agent.
  7. On the Get agent dialog box, click Windows.
  8. Click the Download button.
  9. On the server – extract the ZIP file into a directory and run config.cmd
    VSTS_Agent_3
  10. Fill in details as required – you will use the PAT token from steps 2/3 above (an unattended alternative is sketched after this list):
    VSTS_Agent_4
  11. Go back into VSTS and you will now have a new private agent:
    VSTS_Agent_5
  12. For the particular project — go into Settings | Agent Queues and choose the existing pool that has the newly installed private agent in it.
    VSTS_Agent_6
  13. Next we want to associate our build steps with this queue – so edit your build definition and choose the Agent Queue from above:
    VSTS_Agent_7
  14. Now the REAL fun begins – let’s queue a build!!:
    VSTS_Agent_8
  15. And of course it works the first time….
    VSTS_Agent_9
  16. If we now look in the generated artifact we will have our desired result – a DACPAC file.
    VSTS_Agent_10
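As an aside – if you prefer not to answer the prompts interactively, the agent can also be configured unattended using the same PAT (a sketch only – the account URL, pool and agent name here are placeholders):

# Run from the extracted agent directory – placeholders for account, pool and agent name
.\config.cmd --unattended `
--url "https://{your_account}.visualstudio.com" `
--auth pat `
--token "<your PAT from steps 2/3>" `
--pool "Default" `
--agent "SQLBuildAgent01" `
--runAsService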

 

We can now use VSTS to deploy that DACPAC out to our Azure SQL Database (and also to on-premises SQL Server).

Which is the basis for another blog post…..

Yip.

Retrospective analysis of SQL Saturday South Island (#sqlsat614)

Now that SQL Saturday South Island (also known as #sqlsat614 on the Twitter) is done I thought it would be good to look back at an event that consumed me for 4 months.

If you haven’t already — read my post on how to grow a technical conference.

Back in October 2016, just before the PASS Summit, Martin Catherall (t | b) and I agreed on a date that we’d run SQL Saturday South Island (SSSI). I would be the lead organiser and Martin would be on the organising committee – along with Rob Douglas (t | b).

Mention should also go to Nick Draper (t) and Sarah Harding (t) — who are on my User Group committee and helped out with volunteering (and sponsorship, via The Talent Hive).

I was the only one of this triumvirate who actually lived in Christchurch, so it made sense that I’d do most of the setup work here. From January 2017 until April 2017 I would Skype Martin nearly every week (sometimes more than once) to discuss things relating to SSSI. I’ll give Martin kudos in that he put up with my nagging and OCD-like ways very amicably and I want to acknowledge that without him being the calm listening ear to my hypotheses/rants then SSSI wouldn’t have been the success it was.

The timezone difference meant that most Skype calls were at 9pm NZT and later and this did result in some humorous (awkward?) situations based on my nighttime attire that Martin no doubt got therapy for….

Summary of SQL Saturday South Island:

A.  We had 126 onsite attendees (up from 93 in 2016). Our venue limit was 150 people.

B.  We had 20 speakers in 5 streams across 4 tracks

Of those speakers:

1 from Singapore

2 from Brisbane, Australia

2 from Sydney, Australia

2 from Melbourne, Australia

1 from Adelaide, Australia

3 from Auckland, NZ

3 from Wellington, NZ

2 from Nelson, NZ

4 from Christchurch, NZ

Of those 20 speakers, 11 were Microsoft MVPs and two were Microsoft Certified Masters.

To put this in perspective, the most we’d ever had before was 15 speakers (2016), and 50% of those were Christchurch-based. I really wanted to have speakers from outside Christchurch so that attendees could see people they normally wouldn’t.

Thank you to our sponsors:

Without the generosity of our sponsors we wouldn’t be able to put this event on.

Jade Software

Microsoft

WardyIT (the first SQL Saturday outside of Australia that WardyIT have sponsored)

SQL Services Limited

SentryOne

Dave Dustin Consulting & Training

PASS

The Talent Hive

Ara Institute of Canterbury

Special mention should be made about Dave Dustin Consulting & Training — Dave (t | w) has been a long-time supporter of SQL Saturdays in Christchurch. This year Dave was not only a speaker but also signed up as a sponsor. This was very humbling for me – as that sponsorship meant we could do some more things — but more importantly it was a higher % of Dave’s yearly earnings than (say) Microsoft’s…

It meant a lot to me that we had someone who believed so much in what we were doing that they’d put their own money behind us. Thanks Dave – I hope you get some good consultancy gigs out of what you did for us.

OK, here is a summary of:

Things we did right:

Getting “remote” speakers:

Promoting Christchurch to speakers as a cool, close-knit community — every SQL Saturday or conference I went to in Australasia I talked about how friendly we are. It worked!!

Getting sponsors:

Approaching sponsors we didn’t think would sponsor us — they did!!

Getting great volunteers:

Asking for more volunteers than last year – we even had t-shirts for them!!

Promoting the conference:

Promoting across all forms of social media – this greatly helped our registrations.

Awesome Precons:

Having both Reza Rad and Warwick Rudd do precons for us greatly helped – thanks guys.

Things we could have done better:

Adjacent Rooms:

Our four rooms were split across the campus for the first time – one in N Block and the other 3 in W Block.

Ara_W_block

Which made the DBA track in N Block somewhat disjointed. Thing is, there was a spare room next to the other three, so in 2018 we’ll have the four tracks all together.

Have a local 2IC:

Whilst Martin Catherall and I work together nicely – he is based in Melbourne. So I need someone local that I can nag as much as I nagged him 😉

Scanning:

Have more people than myself scanning speedpass tickets. In 2018 I’ll just have a scanning BBQ as well as a speaker BBQ on the Sunday.

More International Speakers:

I want some North American speakers. Because of the great precons we ran, we have some $$ in the bank for SSSI 2018. So I am going to offer speakers the chance to hang out afterwards in Hanmer Springs for 2 nights on SSSI.

I’ll even take speakers diving in Akaroa or Kaikoura as part of “Come to Christchurch and experience Kiwi hospitality”.

If you want to know what Kiwi Hospitality is like  — read this post by my SQL bro Nagaraj Venkatesan (t | b):

http://www.sqlservercentral.com/blogs/sql-and-sql-only/2017/04/13/sql-saturday-christchurch-2017/

Summary:

We grew our registrations up to 158 this year from 123 last year, and we had 126 people onsite this year compared to 93 last year.

In short – I can’t wait until SQL Saturday South Island 2018.

I’ve already started planning (and Skype calls with poor Martin Catherall) and my aim is to get some North American speakers out here.

Yip.

Resolution to “Connection Timeout Expired. [Pre-Login] initialization =18090; handshake=14281” error

This blog post is about a SQL Server connection issue that presents itself as follows:

AG_handshake error

We were building an Availability Group (AG) at the time for an online banking platform.

PROD would have 4 nodes – 2 in Christchurch and 2 in Auckland. Whilst building the prePROD installation (a 3 node cluster – 2 in Christchurch and 1 in Auckland) we ran into the interesting issue described in the title.

During the build phase of setting up the AG you have to add in the replicas – and this brings up the normal connect window in SSMS.

Except for some reason Node 1 could not connect to Node 2.

Yet Node 2 could connect to Node 1.

What the?

Things got downright weird when I decided to try connecting with SQL Authentication and Node 1 COULD connect to Node 2.

But using Windows authentication to connect – Node 1 could NOT connect to Node 2.

A brief description of the environment – which is one of the most secure/restrictive setups I’ve ever installed SQL Server in.

Each node had a base IP address, but also a secondary IP address that SQL Server would listen on – and the environment required non-standard ports for SQL Server to listen on.

Node 1 – 172.34.59.106 and a secondary address of 172.34.59.108

Node 2 – 172.34.59.107 and a secondary address of 172.34.59.109

The clustered IP address in Christchurch was going to be 172.34.59.110.

The non-standard port for connecting was 51234.

Just to add some complication, the client had already started testing the application against Node 1 – using 172.34.59.108, which had a DNS entry associated with it that the application would connect to.

We had tried to connect to the instance on Node 2 from Node 1 using a client alias set up for the instance name (using the IP address and port), and we had also tried using IP address,port:

172.34.59.109,51234

We could connect to it on Node 2 itself, and we could telnet to it from Node 1, but we could not connect to it IF we were using Windows authentication.

As mentioned SQL Authentication worked just fine.

What the…..????

After about an hour of trying every permutation we stumbled upon the eventual answer: logging onto the DNS server revealed that, lo and behold, Node 2 did not have a DNS entry associated with its secondary address – 172.34.59.109.

Because of the restrictive nature of this install it was not my team setting up DNS – which we normally do – or our networking department who do the big stuff.

So we added in forward and reverse DNS records and voilà – things worked.
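For anyone hitting something similar, the checks (and the fix, if you have rights on the DNS server) look roughly like this – a sketch using this environment’s addresses, with a placeholder zone and host name:

# From Node 1: can the secondary address of Node 2 be resolved both ways?
Resolve-DnsName 172.34.59.109                    # reverse (PTR) lookup
Resolve-DnsName "node2-sql.yourdomain.local"     # forward (A) lookup – placeholder name

# Port connectivity was never the problem, but worth confirming too
Test-NetConnection -ComputerName 172.34.59.109 -Port 51234

# On the DNS server (DnsServer module): add the missing A record and matching PTR
Add-DnsServerResourceRecordA -ZoneName "yourdomain.local" `
-Name "node2-sql" `
-IPv4Address 172.34.59.109 `
-CreatePtr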

This was a very confusing error – as it only occurred if we were using Windows authentication – which we needed to do for the AG.

I could not find much on the internet about the error number but after the fact I found a forum post that I have since responded to – the answers there were close but I think my answer is closer – well for my situation anyway…

This error just goes to show how important it is to go through all the variables associated with a problem, investigate everything and also make sure that things are set up how you expect – don’t assume they are.

Yip.