“DevOps in the Wild” – an Australasian Road Trip

I was lucky enough to be chosen to speak at Ignite Australia in February this year. This was huge – mostly because in Microsoft circles I’m fairly unknown – whereas within the PASS and SQL Saturday community I’m getting known as the DBA/OPs guy who preaches that DevOps thing.

My tagline for getting up in front of people is:

“If I can inspire just one person in the audience to make a change – then this is worth it.”

Ignite was always going to be huge as it was to be one of four conferences I would speak at within 10 days. Hence the title – this was a road trip of sorts where I got to speak about a topic that I hope can help people make positive change in their businesses.

The others were:

The big one of course being Ignite Australia.

Here is the crowd who wanted to hear me talk:

Ignite Australia 2017 – “Making DevOps work in the Wild”

All the sessions had one common theme – how to make DevOps work for you and your company. Ignite Australia was about an overall “How to make DevOps work in the wild” view whereas SQL Saturday Sydney (for example) was more about “How DBAs can/should embrace DevOps”. All my sessions have DEMOs associated with them.

For Ignite I wanted to show the whole journey of making changes and showing those flow through various tools and also environment setups.

For Difinity and SQL Saturday Sydney I wanted to drill down into particular parts of Continuous Delivery – specifically the use of SQL Server Data Tools and DACPACs – the common theme was integrating the delivery process with Azure and Continuous Integration tools like Team Foundation Server (TFS) and Visual Studio Team Services (VSTS).

If you’re interested here is my session on Channel 9 (75 minutes):

https://channel9.msdn.com/events/Ignite/Australia-2017/CLD323a

I was also lucky enough to be interviewed by Adam Cogan (t | b) from SSW which was an awesome experience.

You can watch the interview (13 minutes) here:

https://youtu.be/gYapE4Gx_uo

Before I went on the road trip I was asked if I would do a podcast with CIAOPS and you can hear it (26 minutes) here:

http://ciaops.podbean.com/e/episode-138-hamish-watson/

All in all the experience of speaking at Ignite Australia was amazing, I really enjoyed meeting fellow speakers and also going along to some great sessions.

I’ve been speaking for roughly 2.5 years now and I always like to retrospectively analyse how my sessions went. My evaluations for Ignite were pretty good – as a speaker I got 4.1 out of 5 which I am happy with.

The comments were interesting – DevOps is a subject that polarises people and I’ve found that I’ll either have lovers or haters who decide to fill out evaluation forms. Ignite was no different and I’m glad for all the comments as I want to grow as a speaker and every time I speak I learn something new about my technique and also something to improve.

I preach about continuous improvement both within my workplace and in front of crowds so it makes sense that I reflect on how I could have improved.

I am going to choose Ignite as this was the largest group of people I’ve spoken to on this subject – about twice the size of the audience I spoke to at PASS Summit in 2016:

http://www.pass.org/summit/2016/Sessions/Details.aspx?sid=47521

Areas for improvement:

1. I should have made it a 200 level session.

Initially I wanted to go in depth into Continuous Integration feeding into Continuous Delivery. I had a fairly good, in-depth DEMO that would show this. However I was worried about time and cut back my DEMOs, which involved a lot of material that would have made this a 300 level session. I was also worried I’d lose some of the audience if I focused wholly on the technology rather than what actually makes DevOps work in the wild.

What actually makes DevOps work in the wild you might ask?

Watch the session.

But yeah – lesson learned – I should have contacted the organisers and said “hey thanks for picking my session, however in retrospect it’s a 200 level session”.

2. Recorded DEMOs are an art…

Part of the speaker briefing was that Ignite recommended doing recorded DEMOs. Which I could see the merit in – imagine doing an Azure DEMO and the internet drops out. Or your DEMO completely breaks – Ignite don’t want a room full of people twiddling their thumbs whilst the speaker freaks out.

I love to do DEMOs that are LIVE. Because it is risky and more importantly when things go wrong (trust me there is no “if” there) then the audience can go on a journey with you to work out what broke. I have always recorded my DEMOs – as a backup – just in case something does ever go wrong – but I’ve never (yet) had to use them.

So I recorded my DEMOs and then I re-recorded them and changed bits and got them to a point where I thought I was happy. I practised with them, alone and in front of people.

However on the day in front of a crowd of 200+ people – I felt a little disjointed and my DEMOs were a little too fast in hindsight.

They were a little too broad as well, I should have delved into one thing and really done a deep dive – to warrant the session being a 300 level session. In a 200 level session the DEMO I did would have been fine.

Another lesson to be learned – if recording the DEMO – slow it down rather than speed it up and practise it in a variety of stances (as the monitors might be over there or over here etc.).

3. Warm up with the crowd – regardless of size

Before any of my sessions I engage with the crowd. I like to find someone to talk to, then extend it out to the mass that is the crowd. Even at PASS where my audience was over 100 people I still engaged with them, mostly talking about NZ chocolate.

The reason I do this is that I get into my style quicker and I don’t need to warm up during the first 3 slides.

At Ignite I didn’t do this as I was waiting for the room to fill and for the technicians to say “yip, you’re on”…

…and when I listened to my session this week I could tell – I use “uhm” or “ahh” when I’m warming up. I should have engaged with the crowd – I was in the room for 30 minutes before my session so I had time.

Summary

As mentioned speaking at Ignite was a wonderful opportunity for me – I feel extremely lucky to have presented. DevOps is a topic that people either don’t understand, over complicate or just plain hate. I’m not going to change the haters but I certainly want to help the people that don’t understand or want to make a positive change in their application life cycle and deployment pipeline.

I learnt some things speaking at Ignite which will help me in future presentations and that is worthy of the time spent preparing, speaking and evaluating myself.

Yip.

VIDEO: Helping others understand Azure and how it can help with DevOPs stuff

For the past 2 years I have been on a crusade of giving back to the community. For so many years I’ve consumed blogs, watched webinars and attended SQL Saturdays to learn things. I’m now at that stage in my career I want to give back.

Not because I’m the best in the field – that was what was holding me back – because I felt I had nothing to offer. I was so wrong.

I realised that I had the potential to talk about a topic that I’ve researched for weeks, months and years and that in the audience might be one person who didn’t know what I now know.

So in 2016 I submitted for every SQL Saturday in Australasia that I was available for and was lucky enough to speak at the PASS Summit in October 2016 – “Overcoming a Culture of FearOps by Adopting DevOps”.

Whilst speaking on tempDB at SQL Saturday Brisbane I met someone who was totally committed to community education – Nagaraj Venkatesan (t / b) – and over the past 6 months we have talked regularly via social media.

Nagaraj has setup a channel on YouTube and I was very honoured when he asked me if I would be one of his first interviews on it. The video below is 30 minutes long and after watching it a few times I realise that when I’m excited about a topic – you can definitely tell I’m excited about a topic….

Video at SQL Server Central

Yip.

How Azure can assist the deployment of our applications.

Introduction:

This blog post is the first of many that I’ve had stored up in my brain and in saved drafts.

It was whilst I was preparing for my upcoming presentation at Ignite Australia about my experiences in getting DevOPs working at Jade Software that I realised I had a good series of stories to tell that could help others.

My session is called “Making DevOps work in the wild..” and is a collection of things I’ve had to do over the past 5 years to get stuff going. It describes how Azure and cloud services have enabled efficient, reliable and automated application deployments using DevOPs.

One of the principles of DevOPs is the integration of Continuous Integration (CI) with Continuous Delivery (CD).

Simply put CI is about merging all working copies of developer code into a shared repository several times a day. This is then built and tested using CI software to produce brilliant software – in the form of a package.

CD is about taking that product and ensuring that we can reliably release it at any time. We want to build, test and release our product faster, safer and more frequently.

For years I have administered on-premises installations of both CI & CD tools, typically running on virtual servers hosted on-premises.

With the virtualisation of infrastructure, operational people like myself could script the build of the underlying server infrastructure and start to go down the path of Infrastructure as Code (IaC).

What is Infrastructure as Code?

This is the method of improving the way we manage and create the different servers/databases/apps etc (what I call environments) which occur along our CD deployment pipeline.

It allows us to script, in a declarative, reusable, automated manner, the desired state of our environment.

The key thing about IaC is that because everything is in a script, we can version it in a source control repository. Version control has been a key component of CI processes for DEV for years – what this means is that us OPs guys can utilise our scripts and, if required, restore a known version of an environment at any time.

It also means that DEV and OPs now have something we can discuss and collaborate on. If DEV need certain features for a particular environment (say our Functional Test (FT) environment) then this can be implemented very quickly and at the touch of a button. If successful then we can roll those changes out to UAT, prePROD, Staging and eventually PROD.
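To make the “declarative, versioned, repeatable” idea concrete, here is a minimal sketch (the dictionary keys and feature names are invented for illustration; real IaC tooling like ARM templates or DSC does this at far greater depth): the desired state lives in source control as data, and an idempotent “apply” only changes what differs, so re-running it is always safe.

```python
# Minimal sketch of the IaC idea: desired state is declared as data
# (and versioned in source control), and an idempotent apply function
# converges the live environment onto it.

desired_state = {                 # this dict would live in source control
    "name": "pwcFwag",            # a Functional Test environment
    "features": {"verbose_logging": True, "feature_x": False},
}

def apply_state(current, desired):
    """Converge: only change what differs, so re-running is a no-op."""
    changes = []
    for key, value in desired["features"].items():
        if current.get(key) != value:
            current[key] = value
            changes.append(f"set {key}={value}")
    return changes

env = {}                                      # pretend live environment
first_run = apply_state(env, desired_state)   # makes the changes
second_run = apply_state(env, desired_state)  # already converged: no-op
```

Because the script is the definition, checking out an older revision and applying it restores a known version of the environment – exactly the property described above.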

How can Azure help us?

With the advent of cloud services such as Azure – which is Microsoft’s cloud computing platform – we have access to a collection of integrated services such as computing, database, mobile, networking, storage and web which will allow us to build on Infrastructure as Code and deploy applications much faster and importantly – save money.

The best part is that you can try out Azure using a free trial:

azure_free_trial

Which is awesome – I signed up for the free account when I first wanted to prototype things on Azure.

Going back to CI & CD – as mentioned, I administered a lot of these on-premises. This meant that my team and others were involved in installing, configuring and continuously patching the servers and running software. So even though we had IaC nailed, we still had to be involved in the day-to-day running of the infrastructure that hosted our CI/CD software.

Enter Azure…

The great thing about Azure is the MarketPlace:

azure_marketplace

The marketplace is growing almost daily and is the online store for thousands of certified, open source, and community software applications, developer services, and data—pre-configured for Microsoft Azure.

What this means is that if I want to run up an online version of one of my favourite pieces of deployment software:

Octopus Deploy

Then I can browse the Marketplace and have this going very quickly. In fact if you are looking at how to do Continuous anything in Azure you should visit this blog:

http://dinventive.com/blog/

For myself I used this post for running up my Octopus Deploy in Azure.

http://dinventive.com/blog/2016/10/11/5-clicks-and-under-15-minutes-octopus-deploy-running-in-microsoft-azure-ci-tools-as-code/

The blog is slightly wrong as it only took 8 minutes to have a functioning installation of Octopus Deploy.

This means that my installation is running and all I have to do is consume it in Azure. The patching of underlying SQL Server and Windows Server are taken care of.

My time can now be spent on better things than ensuring my infrastructure is fully patched.

A special mention: using the templates in Azure also means I can punch out applications and services as I need to. This is why I consider using Azure to be “Infrastructure as Code+”.

Tools to manage Azure:

The full Microsoft development stack makes this very easy – you can use:

Visual Studio with Azure SDK

Management portals:

https://portal.azure.com

https://manage.windowsazure.com (old portal)

Azure PowerShell

Azure CLI

I’ve used all four methods to familiarise myself with how to spin things up in Azure and I will go into each as I write about various aspects of creating Azure resources.

For now here is a teaser for creating an App Service.

Using Visual Studio:

We would create a new ASP.NET Web Application and ensure that we’ve chosen “Host in the cloud”:


More detail and a tutorial can be found at:

https://docs.microsoft.com/en-us/azure/app-service-web/web-sites-dotnet-get-started

Using the Management Portal:

Sign in with your account and choose New and browse to “Web + Mobile” and choose “Web App”:

marketplace_webapp_azure_createapp

The great thing is that Azure Resource Manager will fill in as many of the entries as possible to make our life easier and slightly more consistent.

Using PowerShell:

Because this is my preferred method of creating anything in Azure (along with templates) I will write a post dedicated to this.

Conclusion:

Managing Azure is fairly easy and intuitive – you do need to stay focused though – it is a behemoth and it is easy to get lost or consume a lot of time trying out some of the brilliant features in it…

Hence why I signed up for the free trial – I had some goals in mind but it was awesome just spinning things up to see how they could be utilised.

Recently I’ve had to use Azure to solve a database deployment issue for myself and the company I work for. It was extremely easy to spin up the resources I needed – thanks again to Infrastructure as Code, Azure templates and Visual Studio Team Services (VSTS).

I am looking forward to writing about my experiences running/administrating Team Foundation Server in-house versus the ease of VSTS in Azure.

Today’s post is the first of many which will describe my journey in Azure and also how it can help you achieve deployment greatness utilising repeatability and predictability which are keys to any successful deployment of a high-scale application or database.

 

Yip.

The 20 second rule (or why standards matter).

The 20 second rule is not some sort of lewd joke, but is rather something I use in presentations to talk about effective systems management.

Let me paint a picture for you:

It is 3am, an online banking system that one of your team members setup has crashed or is having an exception and you’re on call.

You need to either talk with 1st level support personnel to identify, quantify and rectify the issue (quickly) or actually log onto the server itself and diagnose in situ.

We manage over 6,500 applications residing in some 1,650 databases, so you know things go bump from time to time. We train our 1st level support staff (2 people on shift – covering 24 x 7) to resolve around 95% of all issues before escalating to 2nd/3rd level support.

At 3am we want to know or at worst log onto servers/cloud services and know where everything is.

And we can – when I was on call I used to have a 2 minute rule: within 2 minutes of listening and asking pertinent questions I could resolve the issue and go back to sleep, or whatever I was doing at 3am.

And I could and did.

Because of standards – and, where things fell outside of standards, precise/concise documentation. Where the documentation fell down or wasn’t clear, we would all make a point of updating it, because if I write something it’s in my “dialect” and if you read it you might not understand it. So it’s important to peer review things AND to continuously review/update documentation.

I work in Managed Services – we have clients across the world that rely on us to make things go and make things right. I have a small team – because our toolsets/standards enable us to scale.

Too often when I visit a non-managed client on a consulting gig the first 40 minutes to an hour is discovering where the config files are and what app talks to what app/database.

And documentation? The last guy who just left was supposed to update it before he left.

I’m not making this up.

Standards_webconfig_bad
 This is not a good thing.

So let’s talk about the standards that have been in use over the 38 years my company has been making stuff go.

We treat a database and its associated applications as an “environment” or “system”.

So let’s say we’ve developed it for PWC and it’s a Web Application Gateway system of engagement (mobile app)  into their legacy back end system.

So we have a code for the client – PWC and we generally shorten the application name down to something meaningful or memorable. In this case let’s call it WAG.

Following some of the methodologies of Continuous Delivery we’re going to have levels of the application:

DEV, Build, Continuous Integration, Functional Test, Integration Test, UAT, prePROD (or Staging) and finally PROD.
So here we go – < 3 Letter Client Code >< Level >< App Name >:
pwcDwag – DEV
pwcBwag – Build
pwcCwag – Continuous Integration
pwcFwag – Functional Test
pwcIwag – Integration test
pwcUwag – UAT
pwcYwag – prePROD (Y don’t you have prePROD?? – ask me about it one day)
pwcPwag – PROD
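The naming standard above is simple enough to express as a tiny helper – this sketch just encodes the post’s own convention (the level-letter table is taken straight from the list above; the function itself is illustrative, not anything we actually run):

```python
# <3 letter client code><level letter><app name>, e.g. pwcPwag

LEVELS = {
    "DEV": "D", "Build": "B", "Continuous Integration": "C",
    "Functional Test": "F", "Integration Test": "I",
    "UAT": "U", "prePROD": "Y", "PROD": "P",
}

def environment_name(client: str, level: str, app: str) -> str:
    """Build an environment name following the standard."""
    if len(client) != 3:
        raise ValueError("client code must be exactly 3 letters")
    return f"{client.lower()}{LEVELS[level]}{app.lower()}"

environment_name("pwc", "PROD", "wag")   # -> "pwcPwag"
```

The point of encoding it rather than remembering it: a helper like this can drive the build scripts, so every app pool, directory and usercode gets the name mechanically, not by hand.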

We name EVERYTHING associated with the environment using this naming standard.

This means that even if things are multi-tenanted on a server – at least I know by looking at the root directory what we have on the server. Immediately.

This means for a website I already know that the app pool is called pwcPwag_<somewebapplication>

This means I know the usercode for the applications/services and connections to the database are pwcPwag and if I need to I can generate the secure 26 character password (BTW only a few of us can or need to get said password) as all environments have a unique secure password that is NOT visible in config files (more on this later).

This means I know what Active Directory groups have READ access to files, or have READ access to a database and what Active Directory groups have MODIFY rights.

All within 20 seconds.
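On the per-environment password point: the real generation scheme obviously isn’t published here, but a sketch of the shape of the idea – a unique, secure 26-character password per environment, generated on demand rather than stored in config files – might look like this (the alphabet choice is my own assumption):

```python
# Sketch: generate a unique secure 26-character password per environment.
# The character set and length-26 convention mirror the post; the actual
# generation scheme used in production is not shown here.
import secrets
import string

def environment_password(length: int = 26) -> str:
    alphabet = string.ascii_letters + string.digits + "!#%&*+-=?@"
    return "".join(secrets.choice(alphabet) for _ in range(length))

pwd = environment_password()   # e.g. one password per pwcPwag-style name
```

Using `secrets` (rather than `random`) matters here: it draws from the OS’s cryptographic source, which is what you want for credentials.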

By knowing what is already setup – it allows me to look for the anomalies and resolve very quickly.

Of course standards change – and that is why I work in IT – because it is forever moving/changing/improving. Standards need to cope with this. Standards need to be measured against the ever changing landscape.

But the cool thing is:

For an application based in Elastic Bean Stalk – I still know it’s name relates to client XXX and I still know that it will have certain characteristics that it would have if it resided on a hosted server within our data centre.

Yes we had to change some of the ways we manage said application but for the most part – something in AWS is not that different to something residing on a VM that we built via our standard build scripts.

And this leads me to automation or “infrastructure as code” – all of our databases, applications, servers and network devices are scripted. The process of creating them is standard and the scripts are stored in source control – we in operations took some of the stuff developers had been doing for years and we’re far more agile than we were even 3 years ago. Right here is where Marketing say “DevOPs!!”. Yeah it is – I just hoped I didn’t have to state it…

Our deployments to applications are standard – we repeat them to make them reliable.

Standardising the deployment means that what used to take a human 4 hours to build (and get wrong) now takes 10-20 seconds to build.

We had been using Team City for our automated standard builds but we were missing something to get the published packages out to the 8 different levels of environments (DEV to PROD).

Enter Octopus Deploy in 2014 – it was the answer to our operational deployment woes (admittedly I had written deployment scripts in 2012 – but they still required the manual intervention of picking up the files and running/scheduling them).

We now had a way to build more standards into – well everything related to our environment.
  • No more hand editing of config files.
  • No more logging onto servers to look up config file settings.
  • No more people watching deployments at 3am – just in case of unknown changes.
  • No more monthly deploys – we started deploying multiple times a DAY..!!
  • No more multiple config files on a server for an application.
  • No more wondering if a service was setup correctly by operations.

Octopus Deploy allowed us to have a central repository of our config file contents:

Standards_Octopus_Variables

Variables for ALL environments are stored centrally and securely
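A rough sketch of what centrally stored per-environment variables buy you: one config template, with the right values substituted at deploy time instead of hand-edited files. The `#{Variable}` style below matches Octopus Deploy’s variable syntax, but the server names and values are made up for illustration:

```python
# One template, per-environment values substituted at deploy time.

TEMPLATE = "Server=#{DbServer};Database=#{DbName};User Id=#{DbUser};"

VARIABLES = {   # one central, securely stored table, keyed by environment
    "UAT":  {"DbServer": "uat-sql01",  "DbName": "pwcUwag", "DbUser": "pwcUwag"},
    "PROD": {"DbServer": "prod-sql01", "DbName": "pwcPwag", "DbUser": "pwcPwag"},
}

def render_config(template: str, environment: str) -> str:
    """Substitute the environment's variables into the template."""
    result = template
    for key, value in VARIABLES[environment].items():
        result = result.replace("#{" + key + "}", value)
    return result

render_config(TEMPLATE, "PROD")
# -> "Server=prod-sql01;Database=pwcPwag;User Id=pwcPwag;"
```

Because the template is identical for every environment, the only thing that varies between DEV and PROD is data in the variable table – which is exactly why hand editing of config files disappears.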

Octopus Deploy meant that DEV didn’t have to see if OPs were available to deploy to environments – they just could. It meant that once a service/application was verified in CI and Build then it could be packaged and installed into DEMO, UAT etc without ANY unknowns.

Standards_Octopus_Deploy_Process

Automated reliable, repeatable deployments.

We still had change control processes for prePROD and PROD but it also meant the sign off process was more streamlined as we knew the deploy would work. Every time. We took the standardising we had to do for Continuous Integration (for DEV) and applied it to our Continuous Delivery processes (for OPs).

By using our existing standards and applying them in new innovative ways it has allowed us to continue (and sometimes improve) my 20 second rule.

Which is why standards matter.

Yip.

So why blog….now?

Blogging…

..yeah.

That thing that certain people do.

Post 2008 I never thought I’d join the ranks and for a very good reason.

For the past 16 years (or previous 8 years in 2008) I’ve been doing a lot of things in the Object Orientated (O-O) database world and most of that has been around submitting responses to posts on our forums and writing really long emails to clients.

Because I cared about the issues they had and normally had a one-to-one relationship with them – I didn’t need to publicise to the world.

Oh yeah before I go too much further – g’day… I’m Hamish Watson – linkedIn thing, twitter thing and community thing

Back to what I was talking about –  over the years I’d worked with clients, commiserated with them, consulted for them and mostly – just made stuff go. Because they wanted my company and myself to help there where others couldn’t, wouldn’t or shouldn’t. Do stuff.

I’m hesitant to call myself the poster boy of the O-O world, mainly because in all our marketing collateral – I didn’t feature – I’m not very photogenic. However I was more interested in challenging the engine stuff and helping people solve what they thought were horrid problems.

In 2005 I was given the task of looking after relational “stuff”, we had some initiatives around Oracle and Microsoft SQL Server and my General Manager at the time thought I was the guy to look into these technologies.

Weirdly (and out of character) I didn’t complain…

I decided to treat managing relational databases the same way I did O-O. And for the first time in my life I read blogs; I consumed them. I joked in management meetings how I could do anything in SQL Server because this new fangled thing called “Google” allowed me to look up anything. It seemed that somewhere, someone had already done the something I was stressing over. I soon found that the documentation in the relational world was far different from what my software company did. It had some things in it, but to do things in the real world – it didn’t take long before I realised that there were:

blogs.

Written by people who had been hurt and burned. But still cared enough to share those horrid experiences, who cared enough to put it out there so a newbie like me (who had 6 years of DBMS experience but was new to “SQL”) could digest and find a resolution to things. Things that were troubling and couldn’t be found in official documentation. Things that couldn’t even be found in diagnostic logs. They just broke. Because.

These people wrote about it: why it happened, how it happened and more importantly how to fix it. A good example is a guy who literally cares about 1s and 0s – Steve Knutson (b | t) – a bloke I’d known well at university (when I was studying Chem. Eng and he was doing that computer degree thing)

For the past 11 years I’ve been a consumer of blogs. I was selfish. I consumed, I resolved – I did give thanks on certain blogs but I didn’t give anything back to the community.

Why?

2 reasons.

I was but a small fish in a huge world of Microsoft SQL Server. For 2 years my biggest managed database was 30MB. Yes, in 2007 I was looking after databases that were 30MB. But then something happened, and that was a relational population service – which meant our O-O clients with 1TB databases could now pump data into relational databases for their report writers (BI didn’t really exist or wasn’t as sexy as it is now).

Suddenly I was looking after systems that were 20GB in size (you never translate all the O-O data – that’s for another post…). The game changed.

And so did my searching.

And so did my consumption of…

..blogs.

And then in October 2012 I met Martin Catherall (b | t). Who was trying to resurrect the local SQL Server User Group post 2010/2011 earthquakes. He did a great job and in March 2013 there were a group of 30 of us who met. Fast forward 3 years and we have 329 members. Yip, that is awesome. We meet on the 3rd Wednesday of each month and have BI/DBA/DEV speakers. We have free beer and pizza, thanks to generous sponsorship.

Check us out here.

I took over from Martin as Chapter Leader (we’re affiliated with PASS) in October 2015. It was an honour to take over something Martin had put his life and soul into and I wanted to do him proud (Martin’s from the UK and I was scared he’d “Liverpool kiss” me if I didn’t “do him proud”).

By now I had been reading many blogs and learning so much about “stuff”.

And in October 2015 I was given an opportunity to go to this thing called PASS Summit. For the previous 2 years I’d ignored the invite to go – why would I go to something in the US when I could do stuff locally? How blind I was…

So I went with Martin and we had fun – well, I certainly did – surviving on 4 hours sleep each day. I went to all the sessions that I thought I’d benefit from, for the betterment of my company and the SQL Server systems we managed (over 500 databases with 3 of us). But slowly, and kinda unbeknownst to me, I realised that at Summit I was meeting some of the people whose blogs I had read and used. People like:

Rob Farley (b | t), Warwick Rudd (b | t), Brent Ozar (b | t) [whose freecon in Seattle really made me think about the brand “Hamish Watson”], Paul Randal (b | t), Robert L Davis (b | t), John Q Martin (b | t), Allen White (b | t) and many, many others.

I realised quickly that they cared. They had this thing called #sqlfamily & more importantly #sqlhelp that helped people and had this whole community thing going with it. Let’s not forget some of the awesome people I met at Summit. There are so many of them but I want to mention someone who made me realise what it means to be a gentleman – Tom Roush (b | t).

If you have a spare moment, have a read of his personal blog – this is from a guy who I feel blessed (yip I said that word) to have met – one morning over really bad bacon in Portland:

https://tomroush.net/

His blog has made me laugh, cry but also think. A truly nice bloke.

So after Summit – my eyes opened like some blind bloke on a road to some place.

I came back to New Zealand and for the next 5 months I worked hard at my job, my life, I like to keep busy. And I didn’t blog.

Because I was so “busy”.

And I felt I was a minnow in the world of SQL Server.

I’m an ENFJ in that thing that apparently doesn’t box people in but still labels you. Yip. Why would I feel semi-inadequate?

For the first time I felt I had nothing to offer – that I knew nothing to help or offer. How wrong I was. Because here’s the thing –  I am a speaker around the SQL Saturday circuit:

Portland

Melbourne

Sydney

Christchurch (South Island)

Brisbane

plus some guest lecture spots at a local tertiary institute and a local software cluster.

So in fact I was helping people – learn. So why not blog?

I had to choose my topics carefully – speaking to a room of 20-50 people is one thing – laying it on the internet? Very scary.

I chose today to post as it is 6 months to the day since I was at the PASS Summit, in those 6 months I’ve written a summary on my LinkedIn profile, I’ve actually used my twitter account for good and I’ve written this.

I have chosen to go back to first principles – I like to make stuff go. I like to help people in need. I’m therefore a guy who could write something that might help someone.

So I will.

I have a couple of topics in my head that might help people – I could derive said topics from what I talk about, from what I want people to learn, from what I’ve learned. And so I will try.

To get where I am today I’ve had inspiration from people – people who care and give a toss about helping others.

To be honest if you’ve read this far you’re probably the people I’m going to mention:

Melody Zacharias (b | t)

Rie Irish (b | t)

Mickey Stuewe (b | t)

Why’d I pick 3 women? Because to be honest – there are enough guys in IT getting kudos for stuff they do – these three women made me think in the past 6 months. There are many guys whose blogs I’ve read/used – but these women made me actually think.

I saw the great work Mickey does around blogging and sharing – one day I’ll submit to  but small steps.

I had help from Melody when I wrote my first ever PASS Summit abstract and I appreciate what she’s doing with teenage girls and coding. She is a regional mentor for Canada and is one of the most caring, helpful people I could ever be lucky enough to call a friend.

I really love what Rie is doing with Women in IT (WIT) – to help, mentor and support women entering our wondrous yet at times fickle industry.

Diversity…

Plus these three women have said hi, talked to me and have put up with my (ahem) unique outlook on life. What they are doing in the community is fantastic and if my small actions either blogging or just talking/caring can emulate their great work then you know – that’s awesome.

So that is why I am blogging… now.

Yip.