Answer to “Could not locate file ‘xxx_Log’ for database ‘xxx’ in sys.database_files”

This blog post is a resolution to the issue of not being able to shrink a transaction log after a FULL backup, where you get this error:

Could not locate file ‘xxx_Log’ for database ‘xxx’ in sys.database_files

It’s funny how peculiar issues decide to crop up on a Friday afternoon…

…just as you’re about to go home.

Long story short: a client’s database transaction log kept growing, they couldn’t truncate it, and they needed it shrunk. So I was asked to assist.

 

My quick and dirty method was to shrink the 500GB file down to 1GB:

USE [master]
GO
ALTER DATABASE [clientTdatawarehouse] SET RECOVERY SIMPLE WITH NO_WAIT
GO
USE [clientTdatawarehouse]
GO
DBCC SHRINKFILE (clientTDataWarehouse_log, 1024)
GO
USE [master]
GO
ALTER DATABASE [clientTdatawarehouse] SET RECOVERY FULL WITH NO_WAIT
GO

However I got this error:

Msg 8985, Level 16, State 1, Line 1
Could not locate file 'clientTDataWarehouse_log' for database 'clientTdatawarehouse' in sys.database_files. The file either does not exist, or was dropped.

Hmmmm……

So I ran

use [clientTdatawarehouse]
SELECT name FROM sys.database_files

I also ran

use [clientTdatawarehouse]
exec sp_helpfile

And both times got back what I’d expect:

name
clientTDataWarehouse
clientTDataWarehouse_log

But no matter how I copied and pasted it into the original query – I still got the error.

I tried everything listed here:

http://dba.stackexchange.com/questions/43932/dbcc-shrinkfile-works-with-file-id-but-not-with-logical-name

and

http://stackoverflow.com/questions/12644312/could-not-locate-file-mydatabase-for-database-mydatabase-in-sys-database
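One of the suggestions in those threads is to shrink by file_id rather than by the logical name – roughly like this (a sketch only; file_id 2 is usually the transaction log, but check sys.database_files first):

USE [clientTdatawarehouse]
GO
-- file_id 2 is normally the log file; confirm with:
-- SELECT file_id, name FROM sys.database_files;
DBCC SHRINKFILE (2, 1024)
GO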

But still got the weird error.

This was a database that COULD back itself up normally – both .bak and .trn type backups!

(the reason I got called in was that the log and backup drive was nearly full – because the transaction log kept growing and they couldn’t shrink it and…)

So I decided to clear some disk space, do a backup and then try this:

USE [clientTdatawarehouse];
ALTER DATABASE clientTdatawarehouse MODIFY FILE
(NAME = clientTdatawarehouse_log, NEWNAME = clientTdatawarehouse_log_1);

The rename part worked – and now for the shrink part.

Original script slightly modified:

USE [master]
GO
ALTER DATABASE [clientTdatawarehouse] SET RECOVERY SIMPLE WITH NO_WAIT
GO
USE [clientTdatawarehouse]
GO
DBCC SHRINKFILE (clientTDataWarehouse_log_1, 1024)
GO
USE [master]
GO
ALTER DATABASE [clientTdatawarehouse] SET RECOVERY FULL WITH NO_WAIT
GO

It worked!!

Renaming the logical name back to the original also worked, and I could still shrink the transaction log. BTW I should state that shrinking transaction logs is not a good thing, but this client does it (for reasons) – oh well.
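For the record, the rename back was just the same MODIFY FILE statement in reverse – something like this:

USE [clientTdatawarehouse];
ALTER DATABASE clientTdatawarehouse MODIFY FILE
(NAME = clientTdatawarehouse_log_1, NEWNAME = clientTdatawarehouse_log);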

As to why SQL Server had forgotten what the name was – that is my weekend project to diagnose. For now – the database can be backed up, the client is happy and my Friday is now my own…

…after going back and contributing to the community by updating the two sites I initially looked at.

Yip.


Having OCD in Azure…

Everyone gives OCD a hard time. When in fact it is something to embrace.

I am of course talking about Operational Continuous Delivery.

This is where we in Operations can do some of the awesome stuff that Developers have been doing for years with Continuous Delivery. That site BTW is one of the better sites I’ve read about what CD is all about.

Azure really helps us have OCD and achieve greatness.

How?

Well first of all Azure is built on Continuous Delivery (DevOps) principles.

Example – use Azure SQL?

New functionality is deployed to it every 3-4 weeks as part of agile processes. Gone are the days where you wait a year or more for new features. Microsoft have embraced DevOps, and this means that new features are rolled out to Azure as part of the “Cloud First” direction they have taken.

What this means is that for new shiny resources I will be looking first to Azure.

What this means is that to achieve the efficient management of those resources I need to also adhere to all the standards and iterative processes that help make things go (efficiently).

In fact I’ve written about it in my post about standards – so what do we need to do to achieve the same in Azure (or any cloud service provider for that matter…) to achieve Operational Continuous Delivery?

Firstly the Continuous Delivery part of this is being able to reliably, repeatably deploy resources & apps in an automated fashion.

Quickly. Iteratively. Safely.

We can transfer some of the good stuff we have with Continuous Delivery into Azure. Having a standard setup means that we can now deploy things automatically in Azure that we couldn’t perhaps do as quickly on-premises.

The Operational part is about encompassing Continuous Delivery for all resources – whether they are servers, databases, applications, users, storage or Azure Functions. Which is why this is the blog of the Hybrid DBA and not the blog of the DBA. Operations over-arches all things – whether it is a web app being deployed, a SAN being configured or a change to a database schema (using SSDT).

Let’s look at an example of a Virtual Server residing in Azure that has an Azure SQL database:

ClientPortal_Azure_OCD

So our server CNWAZUS4 (the 4th server in Azure for the client known as CNW) is connecting to the Functional Test database for the Client Portal application.

On the server in question we have IIS setup like this:

ClientPortal_Azure_OCD
Website in IIS

 

ClientPortal_Azure_OCD
Application pools in IIS

In the above pictures we have the user cnwFclientportal assigned to the cnwFclientportal_portal application pool which resides in the cnwFclientportal website and has an application called client portal. These connect to a database called cnwFclientportal in the Azure SQL instance known as ClientPortalAzure.

NONE of the above resources were created by a human.

This means that when we want to spin up the Integration database (cnwIclientportal) all we will do is pass a new parameter to our Azure Automation scripts and we let the machines do the boring (but standard) work.
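Just to illustrate the shape of that (a rough sketch only – this is not our actual Automation runbook, and the wrapper script name, server name and edition are made up for the example):

# deploy-clientportal-db.sh <env code> – hypothetical wrapper, not the real runbook
ENV_CODE=$1                                  # F = Functional Test, I = Integration
DB_NAME="cnw${ENV_CODE}clientportal"
az sql db create --name "$DB_NAME" \
  --resource-group ClientPortalRG \
  --server clientportalazure \
  --edition Basic

So calling it with I would create cnwIclientportal, with no human clicking anything.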

We can do this in seconds, we can do this reliably, we can do this repeatably and it is automated.

Azure really helps us do this by use of all the templates available to us. We can now deploy servers, databases, apps, users all at the push of a “button”. We can make iterative changes and know that things will work as we have a standard setup and a standard deployment process.

Thus we truly do have OCD….

…Yip.

I like data – so Azure SQL is (initially) my focus..

My journey with Azure has been around what it can do for me – and the company I work for (Jade Software). I’ve been doing databases for almost 20 years (I feel old writing that).

I’ve interacted with data in many shapes and forms, from

— text files generated from Visual FoxPro that held sheep pelt sizes in 1996 to

— running massive amounts of data in Azure for a NZ utility company to

— Machine Learning…

BTW if you haven’t already – you need to take advantage of the free trial offering from Microsoft Azure. I’ve used the free trial for testing out certain things – it is very easy to set up and most of it is the defaults.

Head along to https://azure.microsoft.com/en-us/free/  and try it today (it’s free!!).

So my journey with Azure is based around the end result of using Cortana Analytics and Machine Learning to make a difference with data. Because I’m about the data.

And servers – I have come from an infrastructure background — which is invaluable if you want to make databases go faster….

But let’s start with the basics first and stay on track about databases — last post we made a database in Azure (hooray) using Azure CLI on a Mac (double hooray).

It is just as easy, maybe easier, to set things up in the portal. So log into https://portal.azure.com – go to the SQL databases icon

Screen Shot 2017-03-22 at 21.18.13

And then we want to ADD a new database

Screen Shot 2017-03-22 at 21.17.37

We’re now going to choose some options – in this example I already have the following:

Resource Group (ClientPortalRG)

Server (webdbsrv69)

Location (Australia East)

Pricing Tier (which I am changing from S2 down to Basic)

As shown here:

Screen Shot 2017-03-22 at 21.21.16

Let’s say I needed a new server – that is simple as – in the portal I’ll choose “Create a New Server” as shown here:

Screen Shot 2017-03-22 at 21.23.55

Which takes about 10 seconds to input the settings and about 1 minute to have ready and going. In NZ colloquial terms that is “sweet as”.
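The same job from the CLI is pretty much a one-liner too – roughly like this (a sketch only; the admin credentials are placeholders):

az sql server create --name webdbsrv69 --resource-group ClientPortalRG --location australiaeast --admin-user azuremanager --admin-password '<StrongPasswordHere>'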

I have come at this from a database first perspective – but I could have created the server first and provisioned it. I like the fact that I have options for how I create resources.

OK, lastly – we have the three arguably most important parts of the setup at the bottom of the screen:

Screen Shot 2017-03-22 at 21.29.46

We can pin the about-to-be-created database to our dashboard, we can create it (quite important), but we can also use the automation options.

I’m an automation type guy so the automation part is really important to me and is the basis of another more in-depth post. I’ll probably use the words “continuous delivery” and “Infrastructure as Code” when describing the brilliance of Automation Options in Azure.

Yip.

 

 

Using Azure CLI to do stuff….

For this post I am going to talk about one of the methods I use to  manage my Azure resources.

The prime driver for this is what I call “Infrastructure as Code +”. In my previous blog post “How Azure can assist the deployment of our applications” I talk about how awesome Infrastructure as Code is. Using PowerShell to spin things up and control/manage them is great – but the Azure platform takes things to the next level.

Because I can now truly spin up resources in seconds, whereas doing things on-premises (even in a virtual setting) can still take a little time. Sure, I can push a button and walk away knowing it will be a declarative, standard and automated deploy of infrastructure – but it still takes longer than anything I can do in Azure.

At work I’m a windows guy, at home I run MacOS (it’s a long story – buy me a beer one day and I’ll tell you). I also run some Linux stuff but that is an even longer story…

So one of the first things I did when I was mucking about with Azure was install the Azure PowerShell SDK on my work laptops and then install the Azure CLI on my home Mac.

So I visited https://azure.microsoft.com/en-us/downloads/ to get everything I needed and followed the instructions in https://docs.microsoft.com/en-us/cli/azure/install-azure-cli

Because there are so many articles written on the internet around Azure PowerShell on Windows, I am going to show how I do things using the Azure CLI on Mac.

So I ran:

curl -L https://aka.ms/InstallAzureCli | bash

You just go with the defaults.

On the Mac I am running:

HamishWatson$ azure --version
0.10.8 (node: 4.7.0)

The install was very painless – as most things in the *nix are (most of the time until that time they aren’t and then it hurts real bad and you google a lot and read a lot of BBS..)

The cool thing is that if I forget a command I can easily get help by typing:

HamishWatson$ azure --help

Screen Shot 2017-03-21 at 21.42.16

One of the first things I did was connect to my existing Azure portal subscription, which was fairly simple:

HamishWatson$ azure login
info:    Executing command login
info:    To sign in, use a web browser to open the page https://aka.ms/devicelogin and enter the code GTYAL666U69 to authenticate.

Screen Shot 2017-03-21 at 21.27.09

And then enter in the code:

Screen Shot 2017-03-21 at 21.30.04

And choose the right account to access Azure:

Screen Shot 2017-03-21 at 21.39.50

Which then links your terminal session with your Azure Portal session.

We can now do some familiar tasks like list our resources:

Screen Shot 2017-03-21 at 21.45.04

Which looks quite familiar:

Screen Shot 2017-03-21 at 21.47.33
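(If you are on the newer az command rather than the classic azure one, the rough equivalents are along these lines:)

# list the resource groups, then the resources inside one of them
az group list --output table
az resource list --resource-group ClientPortalRG --output table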

Listing resources is one thing but MAKING resources (and ultimately making stuff go) is the goal here.

And this is where things got a little weird.

You can use azure or az. Most scripts use az so you need to ensure that you have it installed properly.

The az application resides in /Users/<your username>/bin/ so make sure you update your $PATH to include that.
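For example, adding something like this to ~/.bash_profile sorts it out:

# make the az CLI visible on the PATH (adjust if you installed it somewhere else)
export PATH=$PATH:$HOME/bin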

So doing some of the above:

bash-3.2$  az --version
azure-cli (2.0.1)

Remember the help from above – here it is using az:

Screen Shot 2017-03-21 at 22.55.50

So as you can see – az is very powerful for allowing us to create things – many things.

I’m a database guy so I want to do stuff with the sql command above.

At any step of the way I can get some help.

So for the sql subgroup:

bash-3.2$  az sql --help
Group
    az sql: Manage Azure SQL Databases and Data Warehouses.
Subgroups:
    db          : Manage databases.
    dw          : Manage data warehouses.
    elastic-pool: Manage elastic pools. An elastic pool is an allocation of CPU, IO, and memory resources. Databases inside the pool share these resources.
    server      : Manage servers. Servers contain databases, data warehouses, and elastic pools.

And for help about the database part:

bash-3.2$  az sql db --help
Group
    az sql db: Manage databases.
Subgroups:
    replica: Manage replication between databases.
Commands:
    copy   : Creates a copy of an existing database.
    create : Creates a database.
    delete : Deletes a database or data warehouse.
    list   : Lists all databases and data warehouses in a server, or all databases in an elastic pool.
    restore: Creates a new database by restoring from a database backup.
    show   : Gets a database or data warehouse.
    update : Updates a database.

And lastly – to create a database when I’m not sure of the arguments:

bash-3.2$  az sql db create --help
Command
    az sql db create: Creates a database.
Arguments
    --name -n           [Required]: Name of the Azure SQL Database.
    --resource-group -g [Required]: Name of resource group. You can configure the default group using 'az configure --defaults group='.
    --server -s         [Required]: Name of the Azure SQL server.
    --collation                   : The collation of the database. If createMode is not Default, this value is ignored.
    --edition                     : The edition of the database.
    --elastic-pool                : The name of the elastic pool the database is in. If elasticPoolName and requestedServiceObjectiveName are both updated, the value of requestedServiceObjectiveName is ignored.
    --max-size                    : The max storage size of the database. Only the following sizes are supported (in addition to limitations being placed on each edition): 100MB, 500MB, 1GB, 5GB, 10GB, 20GB, 30GB, 150GB, 200GB, 500GB. If no unit is specified, defaults to bytes (B).
    --sample-name                 : Indicates the name of the sample schema to apply when creating this database. If createMode is not Default, this value is ignored.
    --service-objective           : The name of the configured service level objective of the database. This is the service level objective that is in the process of being applied to the database. Once successfully updated, it will match the value of serviceLevelObjective property.
    --tags                        : Resource tags.
Global Arguments
    --debug                       : Increase logging verbosity to show all debug logs.
    --help -h                     : Show this help message and exit.
    --output -o                   : Output format. Allowed values: json, jsonc, table, tsv. Default: json.
    --query                       : JMESPath query string. See http://jmespath.org/ for more information and examples.
    --verbose                     : Increase logging verbosity. Use --debug for full debug logs.

Through the use of JSON we can build up a template (or use one of the Azure ones and edit for our own usage).
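To give a rough idea of what that looks like, here is a cut-down sketch of a template for a single Azure SQL database (not a production template – the server name, edition and file name are just for the example), which you could then push out with az group deployment create:

cat > sqldb-template.json <<'EOF'
{
  "$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#",
  "contentVersion": "1.0.0.0",
  "parameters": {
    "databaseName": { "type": "string" }
  },
  "resources": [
    {
      "type": "Microsoft.Sql/servers/databases",
      "apiVersion": "2014-04-01",
      "name": "[concat('webdbsrv69/', parameters('databaseName'))]",
      "location": "Australia East",
      "properties": {
        "edition": "Basic",
        "collation": "SQL_Latin1_General_CP1_CI_AS"
      }
    }
  ]
}
EOF

az group deployment create --resource-group ClientPortalRG --template-file sqldb-template.json --parameters databaseName=TemplateTestDB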

So to create a database on our existing server – what existing server you ask?

Well let’s find it:

bash-3.2$ az sql server list --resource-group ClientPortalRG
[
  {
    "administratorLogin": "azuremanager",
    "administratorLoginPassword": null,
    "externalAdministratorLogin": null,
    "externalAdministratorSid": null,
    "fullyQualifiedDomainName": "webdbsrv69.database.windows.net",
    "id": "/subscriptions/5c4fc285-ddba-4280-93cf-3a965cbce836/resourceGroups/ClientPortalRG/providers/Microsoft.Sql/servers/webdbsrv69",
    "kind": "v12.0",
    "location": "Australia East",
    "name": "webdbsrv69",
    "resourceGroup": "ClientPortalRG",
    "state": "Ready",
    "tags": null,
    "type": "Microsoft.Sql/servers",
    "version": "12.0"
  }
]

So now let’s create a database called UATDB1 in the Basic Edition (as I don’t want to pay much for it per month):

az sql db create --name UATDB1 --resource-group ClientPortalRG --server $servername --edition Basic
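(Where $servername was set beforehand – a couple of ways to do that, sketched here:)

# either set it by hand...
servername=webdbsrv69
# ...or pull it straight out of the list above with a JMESPath query
servername=$(az sql server list --resource-group ClientPortalRG --query "[0].name" --output tsv)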

Within about 20 seconds we now have a new database – from the CLI:

Screen Shot 2017-03-21 at 23.15.11

And we can see it in the Azure Portal:

Screen Shot 2017-03-21 at 23.19.02

So there we have it – how to create an Azure SQL database from the Azure CLI on a Mac…

Yip.