2017-05-30

PowerShell accelerators

PowerShell is object oriented and based on the .NET Framework. This implies that there are no types, only classes, but to make daily life easier for the PowerShell user Microsoft has added some type accelerators to PowerShell. So instead of being forced to go the long way and write [System.Management.Automation.PSObject] you can settle for [psobject].

To get a list of accelerators through the Assembly class you can use the statement
[psobject].Assembly.GetType('System.Management.Automation.TypeAccelerators',$true,$true)::Get
and on my current workstation with PowerShell 4.0 there are 80 accelerators.
The method GetType() has three overloads, and the one with three parameters is used here to get the most stable and predictable invocation.
The class TypeAccelerators is not public, but it can be reached through the assembly that also holds the PSObject class.
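
A small usage sketch of the same statement: store the returned dictionary in a variable, then list or count it (nothing is assumed here beyond the statement above):

$accelerators = [psobject].Assembly.GetType('System.Management.Automation.TypeAccelerators',$true,$true)::Get
# List the accelerators by name, and count them
$accelerators.GetEnumerator() | Sort-Object Key | Select-Object Key, Value
$accelerators.Count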

Some accelerators are named in lower-case and others in mixed casing (CamelCase). Generally I prefer to use the casing that the accelerator has in its definition.

I have not seen any indications of a performance hit when using accelerators. And as the code is so much more readable with accelerators, I go with them where possible.
When it is not possible I use the full class name, e.g. "System.Data.SqlClient.SqlConnection".

It is possible to create custom accelerators like an accelerator for accelerators:
[psobject].Assembly.GetType('System.Management.Automation.TypeAccelerators')::Add('accelerators',[psobject].Assembly.GetType('System.Management.Automation.TypeAccelerators',$true,$true))
The accelerator is then available like any other accelerator:
[accelerators]::Get
This accelerator is part of some PowerShell extensions.
Personally I prefer to name custom accelerators in lower-case.
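
As a hedged sketch of a custom lower-case accelerator, the full class name from earlier could get its own shortcut. This assumes the [accelerators] accelerator added above and that the System.Data assembly is loaded in the session; the connection string is a hypothetical placeholder:

# Add a custom lower-case accelerator for the SqlConnection class
[accelerators]::Add('sqlconnection', [System.Data.SqlClient.SqlConnection])
# Use it like any other accelerator (hypothetical connection string)
$connection = New-Object sqlconnection 'Data Source=(local);Integrated Security=SSPI'
# The accelerator can be removed again with [accelerators]::Remove('sqlconnection')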

Reference

RAVN Systems: PowerShell Type Accelerators – shortcuts to .NET classes

2017-05-29

Thoughts on IT documentation

Documentation of IT is often discussed, but unfortunately often left behind. Deciding to "do it later" is, in my opinion, also leaving it behind.

One thing that is quite often discussed is how much to document. On one side there is a demand for "everything", while the response in a good discussion could be "read the code".
Personally I think that you should document enough to make yourself obsolete. By that I mean that you should document with the audience in mind - not only the contents but also the structure and taxonomy. This again could start a new discussion about a common ontology...

In an early stage of the documentation it will most likely be helpful to define the difference between:
  • Documentation / Reporting
  • Documentation / Logging
  • Documentation / Monitoring
  • Documentation / Versioning

Principles

I have a few principles for an IT documentation platform, but I can be rather insistent on them...
  • Automation; as much as possible!
  • Wide Access; all functions, all teams, all departments in the organisation.
  • Open Access; common client application - web.
  • Broad Rights; everybody can edit. If you are worried about the quality over time, then assign a librarian in the team.
  • Versioning; version every change to the documentation. This should be automated in the documentation platform. Put every piece of code, also script, under versioning.
  • Continuous Documentation; documentation is part of "Done", Scrum-wise, with each task.

Automation

Today there is a lot of talk about automation, which is one of my preferred weapons in any aspect of IT.
Initially I would start by automating documentation at the lowest abstraction level. That is the physical level, where the components can be discovered with a script through APIs. Dependencies on this level can often be resolved using tools like SQL Dependency Tracker from Red Gate.
There are tools for generating documentation, but usually I find the result too generic to be useful in the long run.
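
A minimal sketch of such script-based discovery, with hypothetical server names and output path, could collect basic operating system facts through CIM and store them as a simple documentation artifact:

# Collect basic OS facts from a list of servers (hypothetical names and path)
$servers = 'SRV01', 'SRV02'
$inventory = foreach ($computer in $servers) {
  Get-CimInstance -ClassName Win32_OperatingSystem -ComputerName $computer |
    Select-Object @{Name='ComputerName';Expression={$computer}}, Caption, Version, LastBootUpTime
}
$inventory | Export-Csv -Path 'C:\Temp\os_inventory.csv' -NoTypeInformation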

Work with the technology you are using by taking advantage of the documentation features in the development languages, e.g. PowerShell Comment Based Help.

CMDB

To build automated documentation that is useful in the long run I usually end up building something like a CMDB. Sometimes I am not allowed to call it that, because some manager has a high-profile project running; then the solution is named "Inventory", "Catalogue" or whatever is politically acceptable.
This CMDB will quickly become the heart of your automation and of associated solutions like documentation. Actually this asset will easily be the most valuable item to your daily work and your career.

When building a CMDB you will most likely find that a lot of the data you want for generating the documentation is already in other systems, like monitoring systems, e.g. Microsoft System Center.
Please regard the CMDB as an integrated part of the IT infrastructure, where a lot of data flows into the CMDB and the sweet things like documentation flow from the CMDB.
This pattern is very much like the data flow around a data warehouse. Actually there is complete similarity if you start calling the CMDB the Unified Data Model (UDM). The good news is that if you have worked with data warehousing before, you can re-use your ETL development skills to populate the CMDB from the sources.

By adding logging or monitoring data you can document the quality of the data.
In some cases I have seen test results and other very detailed history added to the CMDB. That I think is overdoing it, but if there is a good case then it is naturally fine. These test data must include metadata, e.g. system, version and task.

Metadata

Quite early in the automation you will find that something is missing to generate useful documentation, and that these data cannot be collected from the physical level, even though the data looks quite simple, such as environment information.
A solution is to add these metadata to the CMDB. But beware! The data is inserted and maintained manually, so you will have to build some tools to help you manage these metadata.
Examples of useful metadata are system (define!), product and version, together with vendor-specific data such as license and support agreement.
Relate the metadata at all levels where it adds value, e.g. version details on all components.
It is my hard-earned experience that you should spend some time on this task. Keep a constant eye on the data quality. Build checks and be very paranoid about the effect of bad data.

Abstract documentation

When your CMDB is up and running, the tools are sharpened and everybody is using the documentation, it is time to talk about the missing parts.
As mentioned earlier the CMDB can help with the documentation of the lowest abstraction level. This means that documentation of requirements, architecture, design etcetera, which is written at more abstract levels, does not fit in a CMDB. Also documentation of important issues like motivation and the discussion of decisions and failures does not fit in a CMDB.
For this kind of more abstract documentation there are several solutions. Usually a solution already used for other documents is chosen, like SharePoint or a document archive. Just make sure that you can put everything into the solution - also Visio diagrams, huge PDFs and whatever your favourite tools work with.
Also you should plan how to get and store documentation from outsiders, that is consultants, vendors and customers.

Document process

I would like to repeat that it is very important to document the process, including things that were discussed but deliberately left out. Not only the decision but also the motivation.
An obvious item might have been left out because of a decisive business issue. This item will most likely be discussed again, and to save time and other resources it is important to be able to continue the discussion - not be forced to restart it from the beginning.
Also it is important to document trials, sometimes called "sandbox experiments" or "spike solutions". They are valuable background knowledge when the system must be developed further at some time in the future. Especially failures are important to document with as many details as possible.

Enterprise templates

In many organisations there are required documents - sometimes with templates - for requirements, architecture or design. These documents can be called "Business Requirements", "Architectural Document" (AD) or "Design Document".
Use the templates. Do not delete predefined sections, but feel free to add sections or to broaden the use of an existing section. If you add a section then make sure to describe what you see as new in the section, and discuss other "standard" sections in relation to the new section. Make sure to answer questions up front.

2017-04-29

T-SQL formatted duration

I have a database backup that takes more than two days, and I want to look at the duration. But looking at the result of a DATEDIFF, the figures are difficult to compare directly.
That got me to format the duration as something readable. At first I wanted to format it as ISO 8601, but that turned out to be too cumbersome in Transact-SQL; especially getting the leading zeros was a challenge.
Back in 2007 Jeff Smith wrote a great article, "Working with Time Spans and Durations in SQL Server", but I wanted to avoid the strange date-like part and the multiple columns he worked with.

To combine the different parts into one single-column answer I use part indicators: 'd' for days, 'h' for hours, 'm' for minutes and 's' for seconds. These are quite common indicators and should be immediately recognizable to most people, also non-IT end users.

Most log solutions give a duration as a large number of seconds, like a SQL Server database backup. But that can be difficult to comprehend and compare directly.

DECLARE @duration INT = 221231;  -- seconds
SELECT
  CAST(@duration / 86400 AS varchar(3)) + 'd'
  + CAST((@duration % 86400) / 3600 AS varchar(2)) + 'h'
  + CAST((@duration % 3600) / 60 AS varchar(2)) + 'm'
  + CAST(@duration % 60 AS varchar(2)) + 's';

This gives the output
2d13h27m11s

The motivation for this blog entry is database backup history, and the formatting above can be used against the history
SELECT
  [database_name]
 ,backup_set_id, media_set_id, name
 ,backup_start_date, backup_finish_date
 ,CAST(DATEDIFF(SECOND, [backup_start_date], [backup_finish_date]) / 86400 AS varchar(3)) + 'd '
   + CAST((DATEDIFF(SECOND, [backup_start_date], [backup_finish_date]) % 86400) / 3600 AS varchar(2)) + 'h '
   + CAST((DATEDIFF(SECOND, [backup_start_date], [backup_finish_date]) % 3600) / 60 AS varchar(2)) + 'm '
   + CAST(DATEDIFF(SECOND, [backup_start_date], [backup_finish_date]) % 60 AS varchar(2)) + 's' AS [duration_formatted]
 ,(backup_size / 1024 / 1024 / 1024) as [backup_size_gb]
 ,(compressed_backup_size / 1024 / 1024 / 1024) as [compressed_backup_size_gb]
FROM msdb.dbo.backupset
--WHERE [database_name] = 'my_large_database'
ORDER BY [backup_finish_date] DESC

You can comment lines in or out in the statement above to get what you need in the given situation.

Maybe one could use STUFF to insert the values into a pre-formatted output, but I could not get that to work in the first attempt. And it did not really add significant value to the output...

2017-03-15

PowerShell script template

I have made this simple template for starting a new PowerShell script file. It helps me to get the basics right from the beginning. It also supports my effort to write all scripts with advanced functions.

<#
.DESCRIPTION
.PARAMETER <Parameter Name>
.EXAMPLE
.INPUTS
.OUTPUTS
.RETURNVALUE
.EXAMPLE

.NOTES
  Filename  : <Filename>
  <Date and time, ISO8601> <Author> <History>

.LINK
  Get-Help about_Comment_Based_Help
.LINK
  TechNet Library: about_Functions_Advanced
  https://technet.microsoft.com/en-us/library/dd315326.aspx
#>

#Requires -Version 4
Set-StrictMode -Version Latest

#Import-Module G:\Teknik\Script\Sandbox\Module.sandbox\Module.sandbox.psm1


#region <name>

function Verb-Noun {
<#
.DESCRIPTION
  <Description of the function>
.PARAMETER <Name>
  <parameter description>
.OUTPUTS
  (none)
.RETURNVALUE
  (none)
.LINK
  <link to external reference or documentation>
.NOTES
  <timestamp> <version>  <initials> <version changes and description>
#>
[CmdletBinding()]
[OutputType([void])]
Param(
  [Parameter(Mandatory=$true, ValueFromPipeLine=$true)]
  [String]$param1
)

Begin {
  $mywatch = [System.Diagnostics.Stopwatch]::StartNew()
  "{0:s}Z  ::  Verb-Noun( '$param1' )" -f [System.DateTime]::UtcNow | Write-Verbose
}

Process {
}

End {
  $mywatch.Stop()
  [string]$Message = "<function> finished with success. Duration = $($mywatch.Elapsed.ToString()). [hh:mm:ss.fffffff]"
  "{0:s}Z  $Message" -f [System.DateTime]::UtcNow | Write-Output
}
}  # Verb-Noun()

#endregion <name>


###  INVOKE  ###

Clear-Host
#<function call> -Verbose -Debug


PowerShell script documentation

PowerShell defines a syntax for comment-based help in scripts (about_Comment_Based_Help). There are many more keywords than I have used above, but my personal PowerShell style is covered by these keywords. Feel free to create your own script template with your own personal selection of standard keywords.
Using only standard keywords ensures that your scripts can be presented in a central place with nice and useful documentation. And the presentation can be automatically updated without redundant documentation tasks.
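
As a small sketch of such a central presentation (the folder is just the path from the template above, used as a hypothetical example), the help can be harvested across a library of scripts with Get-Help:

# Collect comment-based help from all scripts in a folder (hypothetical path)
Get-ChildItem -Path 'G:\Teknik\Script' -Filter *.ps1 -Recurse |
  ForEach-Object { Get-Help -Name $_.FullName -Full }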

Michael Sorens has on SimpleTalk posted a series of great posts about automatic documentation of a library of PowerShell scripts. The first post is called "How To Document Your PowerShell Library", where the basics are presented in a brilliant way.

PowerShell script module

When you have refined the script, I think it is time to move the functionality to one or more PowerShell script modules. The general concepts about PowerShell modules are listed in the article "Windows PowerShell Module Concepts" and briefly described in the MSDN article "Understanding a Windows PowerShell Module". Right now I will focus on the script module, as this text is about PowerShell scripting. There is a short and nice introduction in the MSDN article "How to Write a PowerShell Script Module".

In the script template above I have indicated how to dynamically import a script module placed in a given path. If the script module is not signed and the PowerShell Execution Policy is somehow restricted – which it should be! – you can't import a module placed outside the local computer, e.g. on a file share through a UNC path.
PowerShell script modules are defined and stored in psm1 files, but they look a lot like plain script files. If you use the script template above for a script module, the module import and the final lines with the invocation should be removed. The invocation should be in the script file importing the script module.
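
A minimal sketch of that split, reusing the module path and the Verb-Noun function name from the template above (the function body is stripped down here and the parameter value is just a placeholder):

# Module.sandbox.psm1 - the function moved into the script module, no invocation here
function Verb-Noun {
  [CmdletBinding()]
  Param(
    [Parameter(Mandatory=$true, ValueFromPipeline=$true)]
    [String]$param1
  )
  Process {
    "Processing '$param1'" | Write-Output
  }
}
Export-ModuleMember -Function Verb-Noun

# Calling script (.ps1) - import the module and keep the invocation here
Import-Module G:\Teknik\Script\Sandbox\Module.sandbox\Module.sandbox.psm1
Verb-Noun -param1 'SomeValue' -Verbose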

There are several benefits of using PowerShell script modules. Many of them you will see when you start using them...
One thing that surprised me was the value of unit testing the script code.

Michael Sorens (again) has on SimpleTalk written the great article "Further Down the Rabbit Hole: PowerShell Modules and Encapsulation" on PowerShell modules and their surroundings, with some very valuable guidelines.

History

2017-03-15 : Blog post created.
2017-05-17 : Section about script documentation added. Inspired by comment from my old friend and colleague Jakob Bindslet.
2017-06-14 : Section about PowerShell script module added.

2017-02-04

DBA Backlog

As a Database Administrator, both in projects and in daily routines like administration or operations, all kinds of things show up. Normally they are associations that do not really fit in the plans or the context, but they are still relevant or even important in the long run.
Usually these things and ideas end up on post-it notes, whiteboards or another volatile medium. If you are lucky to be in a large project with experienced developers and project managers, some items might be saved.

Another way is to create your own backlog of ideas, points of interest, nice-to-haves and other things that might make your work better. In ITIL these things would be considered part of Continual Service Improvement (CSI), as input to the Plan part of the Deming Circle.

I have collected a set of attributes I have used to establish a backlog in a project or a team. Not all attributes are relevant in each situation. Sometimes it is also necessary to twist an attribute to make the backlog usable.

And now to my proposal for a general backlog, each attribute listed with name and description:
  • Title: Short and descriptive title of the backlog item.
  • Theme: Predefined themes to sort the backlog on a "dimension" like technology. This is very useful when the backlog grows. { SQL Server | SSDB | SSIS | ... }
  • Description: Prose description of the backlog item with as many details and thoughts as possible. The description can easily change and grow over time, even before the backlog item is activated and processed.
  • Priority: The priority of the backlog item. Predefined values that make sense, even to senior management. { Critical | High | Medium | Low }
  • Abstraction: This attribute can be added to the backlog if you are working with defined abstraction layers from idea to solution. This is sometimes seen in agile development methods like SAFe. { Epic | Story | Task }
  • Status: Status of the backlog item. The values are inspired by Kanban to support an effective execution of the backlog. "Keep the Ready queue short". { Backlog | Ready | Doing | Done }
  • Estimate: A quick estimate of the time needed to process and finish the backlog item. The item is finished when the issue is solved and documented – completely, as in agile development. { Day | Week | Month | Quarter | Year }
  • Deadline: Date of the deadline, if one is required. If a deadline exists it is usually given by business or management.
  • Changed: Timestamp of the last change to the backlog item. Consider enabling automatic versioning on the backlog.
  • Created: Timestamp of when the backlog item was created. Usually when the item was entered into the backlog.
  • Changed by: The person who last changed the backlog item.
  • Created by: The person who created the backlog item. Usually the person who entered the item into the backlog.
  • Tangible: This is more a business attribute that tells where the business can relate to the backlog item.
  • Owner: Business owner of the backlog item.
  • Responsible: The person who is responsible for the backlog item. Usually the person who is to get the job done.
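
As a small illustration of the schema (the values are made up and the object is only a sketch), a backlog item could be drafted as a PowerShell object before it is entered into the backlog solution:

# A backlog item following the attribute schema above (example values are made up)
$backlogItem = [PSCustomObject]@{
  Title        = 'Automate backup duration report'
  Theme        = 'SQL Server'
  Description  = 'Format backup durations from msdb history in a readable way.'
  Priority     = 'Medium'
  Abstraction  = 'Task'
  Status       = 'Backlog'
  Estimate     = 'Week'
  Deadline     = $null
  Created      = [System.DateTime]::UtcNow
  'Created by' = '<initials>'
}
$backlogItem | Format-List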

I personally have good experiences with building a backlog as a SharePoint list; it is rather quick and effective, and it also gives all the SharePoint features for filtering, printing and other trivialities.
Another possibility I have tried a few times is using the backlog features in Microsoft Team Foundation Server (TFS). It works really nicely, also for Database Administrators.

History

2017-04-02 Blog post created.
2017-05-08 Abstraction element added to backlog schema.