CPU count from T-SQL

I would like to know the number of processors (cores) on a given computer, regardless of the CPU affinity settings or how many database instances are running on the computer.

The answer is given by the DMV sys.dm_os_schedulers:
SELECT MAX([cpu_id]) + 1 AS [cpu_count]
FROM [master].[sys].[dm_os_schedulers];
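If you need the value from a PowerShell script, the same query can be run with Invoke-Sqlcmd. This is a sketch; it assumes the SQL Server PowerShell module is available and uses an example instance name:

```powershell
# Sketch: get the CPU count from an instance; the server name is an example
$result = Invoke-Sqlcmd -ServerInstance 'SANDY.sqladmin.lan' `
  -Query 'SELECT MAX([cpu_id]) + 1 AS [cpu_count] FROM [master].[sys].[dm_os_schedulers];'
$result.cpu_count
```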

I could use WMI and the Win32_Processor class (MSDN Library), but I don't always have the necessary rights to read the value remotely.
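For reference, the WMI alternative looks like this in PowerShell. This is a sketch; it requires WMI/DCOM permissions on the target computer, and the computer name is an example:

```powershell
# Sum the cores across all physical processors on a remote computer (example name)
$cores = (Get-WmiObject -Class Win32_Processor -ComputerName 'SANDY.sqladmin.lan' |
  Measure-Object -Property NumberOfCores -Sum).Sum
$cores
```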


Log failure in a SQL Agent PowerShell job step

I wanted to have a template for a SQL Server Agent PowerShell job step. A key feature of the template is to make the job step fail in a controlled way, with a customized and informative error message.
The job step is defined in a script file (.ps1) called by the SQL Server Agent PowerShell job step. If an error occurs in the script file, it is caught, described and handled. The script exits, and the job step presents the error in the job step history and fails. On failure I also want the value of "run_status" in the table msdb.dbo.sysjobhistory to be set to "0" (zero).
The output from the script and the job step is streamed to a log file defined in the job step. The log file should hold all output and messages from the job step execution, that is, both customized output and standard output from the components.

The name of the logfile is defined with other job step parameters in a script configuration file for the job step.
$LogName = 'JobStepFailure'

In the script file the log file is implemented and initialized:
$ScriptStartTime = [System.DateTime]::Now
$LogFileName = "$LogFileFolder\$LogName.{0:yyyyMMddTHHmmss}.log" -f $ScriptStartTime

After the script and the log file are initialized, I move on to reset the error array. Just in case...
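Resetting the automatic error collection is a one-liner, so later checks only see errors from this execution:

```powershell
# Clear the automatic $Error collection before the real work starts
$Error.Clear()
```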
Then the script execution can start for real:
"{0:s}  Begin." -f $([System.DateTime]::Now) > $LogFileName

If an error occurs in the script execution, I usually handle the basics like this:
try { $rowsSource = $DataAdpt.Fill($DataSet) }
catch {
  "{0:s}  Exception: $_" -f $([System.DateTime]::Now) >> $LogFileName
  throw "Error in script. Check logfile '$LogFileName' for details."
}
finally { $CnnSource.Close() }

At the end of the script
"{0:s}  End." -f $([System.DateTime]::Now) >> $LogFileName
I check one more time for errors and exit the script:
if ($Error) {
  throw "One or more errors in script execution.`nCheck logfile '$LogFileName' for details."
}
else {
  "Script executed with success.`nCheck logfile '$LogFileName' for details." | Write-Output
}

One major argument for creating the log file from the script file, instead of using the parameter @output_file_name of the procedure msdb.dbo.sp_add_jobstep, is that if the PowerShell script throws an exception, the log file will contain only the exception.
I do believe that this template can be used as a general solution.


PowerShell script configuration file

There are two major reasons for looking into how to handle configuration files for PowerShell scripts.
  1. It makes reuse of the script easy. Just create another configuration file, and you have a reuse.
  2. It makes testing and deployment possible. With a configuration file, your script can be executed in every environment with the right values.
I like to keep things simple, and by dot-sourcing (not Duck Sauce - Sorry...) a PowerShell script file I have a simple solution. For convenience I name the file "*.config.ps1".
. ".\myScript.taskOne.config.ps1"
When I use the structure for a SQL Agent job with several steps, like import data from various sources, I like the naming "{task}.{job step}.config.ps1".

Each value in the configuration file is defined as a normal PowerShell variable. If you want to tighten your solution, you can define the scope or the type.
[string]$script:SourceServer = 'SANDY.sqladmin.lan'
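A complete configuration file for the job-step example above might then look like this. The folder path and values are illustrative assumptions, not fixed requirements:

```powershell
# myScript.taskOne.config.ps1 - illustrative example; path and values are assumptions
[string]$script:SourceServer  = 'SANDY.sqladmin.lan'
[string]$script:LogFileFolder = 'D:\SQLAdmin\Log'    # assumed log folder
[string]$script:LogName       = 'JobStepFailure'
```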

The configuration file can be tested in the script file on each run by checking the parameters and their values.
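Such a check can be as simple as failing fast when a required value is missing. A sketch, using the variable name from the example above:

```powershell
# Fail fast if a required configuration value is missing or empty
if ([string]::IsNullOrEmpty($script:SourceServer)) {
  throw "Configuration error: 'SourceServer' is not defined. Check the configuration file."
}
```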

One thing I almost always define in the configuration file is the logfile name for the current script execution, usually with a timestamp in the name.

I have used this solution in production, and it works fine. The simplicity makes it easy to use and maintain.

I tried some XML-formatted configuration files, like those in Visual Studio projects, but I found them too complicated to use in a script, as I had to parse the XML file to get the values.
I also tried some old-style configuration files, also called initialization files, which define sections with [] and assign values within the sections. Again I found them too complicated to use. Usually the initialization file is accessed with regular expressions, which I normally find to be a rather powerful tool, but when I can do a thing more simply, I do that - Occam's razor...