2017-05-30

PowerShell accelerators

PowerShell is object oriented and based on the .NET Framework. This implies that there are no types, only classes, but to make daily life easier for the PowerShell user Microsoft has added some accelerators to PowerShell. So instead of being forced to go the long way and write [System.Management.Automation.PSObject] you can settle for [psobject].

To get a list of accelerators through the Assembly class you can use the statement
[psobject].Assembly.GetType('System.Management.Automation.TypeAccelerators',$true,$true)::Get
and on my current workstation with PowerShell 4.0 there are 80 accelerators.
The method GetType() has three overloads, and the one with three parameters (name, throwOnError, ignoreCase) is used here to get the most stable and predictable invocation.
The class TypeAccelerators is not public, but it can be reached through the assembly that the PSObject class lives in.
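As a small illustration - this is only a sketch, and the exact count will vary with the PowerShell version - the accelerators can be listed and counted like this:
# Get the accelerator dictionary and show each name with the class it points to.
$accelerators = [psobject].Assembly.GetType('System.Management.Automation.TypeAccelerators',$true,$true)::Get
$accelerators.GetEnumerator() | Sort-Object Key | Format-Table Key, Value -AutoSize
# Count the accelerators.
$accelerators.Count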

Some accelerators are named in lower-case and others in mixed casing (CamelCase). Generally I prefer to use the casing that the accelerator has in its definition.

I have not seen any indications of a performance hit when using accelerators. And as the code is so much more readable with accelerators, I go with them wherever possible.
When it is not possible I use the full class name, e.g. "System.Data.SqlClient.SqlConnection".

It is possible to create custom accelerators like an accelerator for accelerators:
[psobject].Assembly.GetType('System.Management.Automation.TypeAccelerators')::Add('accelerators',[psobject].Assembly.GetType('System.Management.Automation.TypeAccelerators',$true,$true))
The accelerator is then available like any other accelerator:
[accelerators]::Get
This accelerator is part of some PowerShell extensions.
Personally I prefer to name custom accelerators in lower-case.
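A custom accelerator can also point at a class you use often. Below is a minimal sketch where the accelerator name and the class are just examples of my own choosing:
# Add a custom accelerator for the SqlConnection class, check that it resolves, and remove it again.
$accel = [psobject].Assembly.GetType('System.Management.Automation.TypeAccelerators',$true,$true)
$accel::Add('sqlconnection', [System.Data.SqlClient.SqlConnection])
[sqlconnection].FullName    # System.Data.SqlClient.SqlConnection
$accel::Remove('sqlconnection')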

Reference

RAVN Systems: PowerShell Type Accelerators – shortcuts to .NET classes

2017-05-29

Thoughts on IT documentation

Documentation of IT is often discussed, but unfortunately just as often left behind. Deciding to "do it later" is in my opinion also leaving it behind.

One thing that is quite often discussed is how much to document. On one side there is a demand for "everything", while the response in a good discussion could be "Read the code".
Personally I think that you should document enough to make yourself obsolete. By that I mean that you should document with the audience in mind. Not only the contents but also the structure and taxonomy - this again could start a new discussion about a common ontology...

In an early stage of the documentation it will most likely be helpful to define the difference between:
  • Documentation / Reporting
  • Documentation / Logging
  • Documentation / Monitoring
  • Documentation / Versioning

Principles

I have a few principles for an IT documentation platform, and I can be rather insistent on them...
  • Automation; as much as possible!
  • Wide Access; all functions, all teams, all departments in the organisation.
  • Open Access; common client application - web.
  • Broad Rights; everybody can edit. If you are worried about the quality over time, then assign a librarian in the team.
  • Versioning; version every change to the documentation. This should be automated in the documentation platform. Put every piece of code, including scripts, under version control.
  • Continuous Documentation; documentation is "Done" SCRUM-wise with each task.

Automation

Today there is a lot of talk about automation, which is one of my preferred weapons in any aspect of IT.
Initially I would start by automating documentation at the lowest abstraction level, that is the physical layer, where the components can be discovered with a script through APIs. Dependencies on this level can often be resolved using tools like SQL Dependency Tracker from Red Gate.
There are tools for generating documentation, but usually I find the result too general to be useful in the long run.

Work with the technology you are using by taking advantage of the documentation features in the development languages, e.g. PowerShell comment-based help.
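As a sketch of what this can look like, here is comment-based help on a hypothetical inventory function; Get-Help will render the help directly from the comment block:
function Get-ServerInventory {
<#
.SYNOPSIS
Collects basic inventory data from a server.
.DESCRIPTION
Queries the server through CIM and returns the result as objects, ready to be loaded into the Inventory.
.PARAMETER ComputerName
Name of the server to query.
.EXAMPLE
Get-ServerInventory -ComputerName 'SQL042'
#>
    param (
        [Parameter(Mandatory = $true)]
        [string]$ComputerName
    )
    Get-CimInstance -ClassName Win32_ComputerSystem -ComputerName $ComputerName
}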


A comprehensive automated test suite even provides the most complete and up-to-date form of application documentation, in the form of an executable specification not just of how the system should work, but also of how it actually does work.
(Jez Humble & David Farley: „Continuous Delivery: Reliable Software Releases through Build, Test, and Deployment Automation“)

Inventory

To build automated documentation that is useful in the long run I usually end up building an Inventory, like a CMDB. Sometimes I am not allowed to call it that because some manager has a high-profile project running. But then the solution is named "Inventory", "Catalogue" or whatever is politically acceptable.
This Inventory will quickly become the heart of your automation and of associated solutions like documentation. Actually this asset will easily be the most valuable item for your daily work and your career.

When building an Inventory you will most likely find that a lot of the data you want for generating the documentation is already in other systems like monitoring systems, e.g. Microsoft System Center.
Please regard the Inventory as an integrated part of the IT infrastructure, where a lot of data flows to the Inventory and the sweet things like documentation flow from the Inventory.
This pattern is very much like the data flow around a data warehouse. Actually there is complete similarity if you start calling the Inventory the Unified Data Model (UDM). The good news is that if you have worked with data warehousing before, then you can re-use skills from ETL development to populate the Inventory from the sources.
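As a minimal sketch of such a population flow - assuming a hypothetical Inventory database with a dbo.Server table, a plain text file with server names and the SqlServer module for Invoke-Sqlcmd - the ETL part could start as simple as this:
# Discover basic facts about each known server and load them into the Inventory.
$servers = Get-Content -Path 'D:\Inventory\servers.txt'
foreach ($server in $servers) {
    $os = Get-CimInstance -ClassName Win32_OperatingSystem -ComputerName $server
    $query = "INSERT INTO dbo.Server (ServerName, OSCaption, CollectedAt) VALUES (N'$server', N'$($os.Caption)', SYSUTCDATETIME());"
    Invoke-Sqlcmd -ServerInstance 'InventoryDB01' -Database 'Inventory' -Query $query
}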

By adding logging or monitoring data you can document the quality of the data.
In some cases I have seen test results and other very detailed history added to the Inventory. That I think is overdoing it, but if there is a good case for it, then it is naturally fine. These test data must include metadata, e.g. system, version and task.

Metadata

Quite early in the automation you will find that something is missing to generate useful documentation, but these data can not be collected from the physical layer. Still the nature of the data looks quite simple, such as environment information.
A solution is to add these metadata to the Inventory. But beware! The data are inserted and maintained manually, so you will have to build some tools to help you manage these metadata.
Examples of useful metadata are system (define!), product and version, together with vendor-specific data such as license and support agreement.
Relate the metadata on all levels where they add value, e.g. version details on all components.
It is my hard-earned experience that you should spend some time on this task. Keep a constant eye on the data quality. Build checks and be very paranoid about the effect of bad data.
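A check can be as simple as looking for components without mandatory metadata. This is only a sketch, assuming hypothetical Inventory tables dbo.Server and dbo.ServerMetadata:
# Report servers where the Environment metadata has not been filled in.
$query = @"
SELECT s.ServerName
FROM dbo.Server AS s
LEFT JOIN dbo.ServerMetadata AS m ON m.ServerName = s.ServerName
WHERE m.Environment IS NULL;
"@
$missing = Invoke-Sqlcmd -ServerInstance 'InventoryDB01' -Database 'Inventory' -Query $query
if ($missing) { Write-Warning "Servers missing Environment metadata: $($missing.ServerName -join ', ')" }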

Abstract documentation

When your Inventory is up and running, the tools are sharpened and everybody is using the documentation, it is time to talk about the missing parts.
As mentioned earlier the Inventory can help with the documentation of the lowest abstraction level. This means that documentation of requirements, architecture, design etcetera that is written at more abstract levels does not fit in an Inventory. Also documentation of important issues like motivation, discussion of decisions and failures does not fit in the Inventory.
For this kind of more abstract documentation there are several solutions. Usually a solution already used for other documents is chosen, like SharePoint or a document archive. Just make sure that you can put everything in the solution, including Visio diagrams, huge PDFs and whatever your favorite tools work with.
Also you should plan how to get and store documentation from outsiders, that is consultants, vendors and customers.

Document process

I would like to repeat that it is very important to document the process, including things that were discussed but deliberately left out. Not only the decision but also the motivation.
An obvious item might have been left out because of a decisive business issue. This item will most likely be discussed again, and to save time and other resources it is important to be able to continue the discussion - not be forced to restart it from the beginning.
Also it is important to document trials, sometimes called "sandbox experiments" or "spike solutions". They are valuable background knowledge when the system must be developed further some time in the future. Especially failures are important to document with as many details as possible.

Enterprise templates

In many organisations there are required documents - sometimes with templates - for requirements, architecture or design. These documents can be called "Business Requirements", "Architectural Document" (AD) or "Design Document".
Use the templates. Do not delete predefined sections. But feel free to add sections or to broaden the use of an existing section. If you add a section then make sure to describe what you see as new in the section, and discuss other "standard" sections in relation to the new section. Make sure to answer questions up front.

History

2017-05-29 Post created.
2017-07-17 Quote from „Continuous Delivery“ added. „CMDB“ changed to „Inventory“.