SMA Management Pack Could Not Connect To Database Alerts – My Troubleshooting Experience

Written by Tao Yang

I set up two servers for the SMA environment in my lab a while back. Yesterday, I loaded the SMA MP into my OpsMgr management group. Needless to say, I followed the MP guide and configured the Database RunAs profile. However, soon after the MP was loaded, I started getting these two alerts:

  • The Service Management Automation web service could not connect to the database.
  • The Service Management Automation worker server could not connect to the database.


To troubleshoot these alerts, I first unsealed the management pack, since this is where the monitors come from. The Data Source module of the monitor type uses the System.OleDbProbe probe action module to make the connection to the database.


To simulate the problem, I used a small free utility called Database Browser Portable to test the DB connection. I launched Database Browser using the same service account that I configured in the RunAs profile in OpsMgr, and selected OleDB as the connection type:


I populated the Connection String based on the parameters (monitoring object properties) passed into the data source module: Provider=SQLOLEDB;\;Database=SMA;Integrated Security=SSPI


Note that the Database Instance property is empty. This is OK in my lab because I'm using the default SQL instance; I'll explain this later.
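For a quick check outside Database Browser, the same OleDb connection can also be exercised directly from PowerShell. This is just a sketch: the server name SQLServer01 is a placeholder, not my actual server.

```powershell
# Hypothetical connectivity test using the same OleDb provider the MP uses.
# "SQLServer01" is a placeholder - replace it with your own SQL server name.
$connString = "Provider=SQLOLEDB;Server=SQLServer01;Database=SMA;Integrated Security=SSPI"
$conn = New-Object System.Data.OleDb.OleDbConnection $connString
try {
    $conn.Open()
    Write-Output "Connection state: $($conn.State)"
}
catch {
    Write-Output "Connection failed: $($_.Exception.Message)"
}
finally {
    $conn.Close()
}
```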

The test connection result is positive:


However, after connecting, when I clicked the connection, nothing happened: the list of tables did not get populated. I then tried my own account (which has god rights on everything in the lab) and got the same result.

Long story short, after trying different configuration changes on the SQL server, I finally found the issue:

On the SQL server, the Named Pipes protocol was disabled.
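For reference, the protocol can also be enabled from PowerShell via the SQL Server WMI provider (SMO). This is a sketch that assumes the SQL Server management components (e.g. the SQLPS module) are installed and the default instance MSSQLSERVER is used; the SQL Server service needs to be restarted afterwards for the change to take effect.

```powershell
# Sketch: enable the Named Pipes protocol on the local default instance.
# Assumes the SMO/WMI management assemblies are available (e.g. via SQLPS).
Import-Module SQLPS -DisableNameChecking
$wmi = New-Object Microsoft.SqlServer.Management.Smo.Wmi.ManagedComputer
$np  = $wmi.ServerInstances['MSSQLSERVER'].ServerProtocols['Np']
$np.IsEnabled = $true
$np.Alter()   # then restart the SQL Server service to apply the change
```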


After I enabled it, I was able to populate the tables in Database Browser:


And within a few minutes, the alerts were automatically closed.

While I was troubleshooting this issue, I came across a blog post from Stanislav Zhelyazkov. In the post, Stan mentioned adding the DB instance name to the registry key that the discoveries read. However, when I added “MSSQLSERVER” to the registry and forced re-discovery, the monitors became critical again and I received several 11852 events in the Operations Manager event log:


I emailed Stan and he got back to me: he's using a named instance in his lab, and these monitors work fine there after he added the SQL instance name to the registry. He also told me he didn't recall specifying the SQL instance name during the SMA setup, but the setup was successful. My guess is that the SQL Browser service must be running on his SQL server, so the setup had no problem identifying the named instance.


Based on my experience and Stan’s experience, we’d like to make the following recommendations:

  • Enable the Named Pipes protocol on the SQL server.
  • If using the default SQL instance, do not manually populate the registry key.
  • If using a named instance, add the SQL instance name to the registry if it's not populated after setup.


Thanks Stan for his input on this one!

ConfigMgr 2012 (R2) Client Management Pack Updated to Version

Written by Tao Yang

OK, after a few weeks of hard work, the updated version of the ConfigMgr 2012 (R2) Client MP is finally here.

The big focus in this release is to reduce the noise this MP generates. In the end, besides the new and updated components I have introduced in this MP, I also had to update every single script used by the monitors and rules.

The changes since the previous version (v1.0.1.0) are listed below:

Bug Fixes:

  • Software Update agent health not rolled up (dependency monitors were missing in the previous release).
  • SyncTime in some data source modules was not correctly implemented.
  • Typo in the Pending Software Update monitor alert description.
  • The “All ConfigMgr 2012 Client computer group” population was incorrect. It included all Windows computers, not just the ones with the ConfigMgr 2012 client installed.
  • Many “Operations Manager failed to start a process” warning alerts were generated against various scripts used in this MP. The issue was identified as the OpsMgr agent executing the workflows while the SMS Agent Host service is not running. This typically happens right after computer startup or reboot because the SMS Agent Host service is set to Automatic (Delayed Start). All the scripts that query the root\ccm WMI namespace have been rewritten to wait up to 3 minutes for the SMS Agent Host service to start (if it's not already started). Hopefully this will reduce the number of these warning alerts. The updated scripts also try to catch this condition so the alert indicates the actual issue:
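The wait logic described in the last bullet point could look something like this (a simplified sketch, not the actual MP script):

```powershell
# Sketch: wait up to 3 minutes for the SMS Agent Host (CcmExec) service
# to start before querying the root\ccm WMI namespace.
$deadline = (Get-Date).AddMinutes(3)
do {
    $svc = Get-Service -Name CcmExec -ErrorAction SilentlyContinue
    if ($svc -and $svc.Status -eq 'Running') { break }
    Start-Sleep -Seconds 10
} while ((Get-Date) -lt $deadline)
```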



Additional Items:

  • A diagnostic task and a recovery task for the CcmExec service monitor. The diagnostic task detects whether the system uptime is longer than 5 minutes (overrideable); if it is, the recovery task starts the SMS Agent Host service. Both the service monitor and the recovery task are disabled by default. If you decide to use them, they should help reduce the number of “failed to start a process” warning alerts caused by a stopped SMS Agent Host service.
  • A monitor that detects whether the SCCM client has been left in Provisioning mode for a long period of time (Consecutive Sample monitor).
  • The Missing CCMEval Consecutive Sample unit monitor has been disabled and replaced by a new monitor. The new monitor is no longer a consecutive sample monitor; it simply detects whether the CCMEval job has missed 5 consecutive cycles (the number of missed cycles is overrideable). This new monitor is designed to simplify the detection process and to address the false alerts the previous consecutive sample monitor generated.
  • A CCMCache size monitor, which alerts when the available free space for the CCMCache is lower than 20%. Some ConfigMgr client computers may be hosted on expensive storage devices (i.e. 90% of my lab machines now run on SSDs), so I think it is necessary to monitor ccmcache usage. This monitor provides an indication of how much space the ccmcache folder has consumed.
  • Agent Task: Delete CCMCache content.
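The diagnostic/recovery logic for the CcmExec service monitor can be illustrated with a sketch like the following (illustrative only, not the actual task implementation):

```powershell
# Sketch: if the system has been up for more than 5 minutes and
# CcmExec is stopped, start the service.
$os     = Get-WmiObject -Class Win32_OperatingSystem
$uptime = (Get-Date) - $os.ConvertToDateTime($os.LastBootUpTime)
if ($uptime.TotalMinutes -gt 5) {
    $svc = Get-Service -Name CcmExec -ErrorAction SilentlyContinue
    if ($svc -and $svc.Status -ne 'Running') {
        Start-Service -Name CcmExec
    }
}
```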


Updated Items:

  • Pending Reboot monitor updated to allow users to disable any of the 4 areas the monitor checks for a pending reboot (the Pending File Rename Operations check is disabled by default because it generates too many alerts):
    • Component Based Servicing
    • Windows Software Update Agent
    • SCCM Client
    • Pending File Rename Operations
  • The Missing CCMEval monitor is disabled and superseded.
  • All consecutive sample monitors have been updated. The System.ConsolidatorCondition condition detection module has been replaced by the <MatchCount> configuration in the System.ExpressionFilter module (new in OpsMgr 2012) to consolidate consecutive samples. This simplifies the configuration and tuning process of these consecutive sample monitors.
  • Additional events are logged in the Operations Manager event log by various scripts to help with troubleshooting. Please refer to Appendix A of the MP documentation for the details of these events.
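To illustrate the consecutive-sample change above, a System.ExpressionFilter condition detection with <MatchCount> looks roughly like this. This is a hedged sketch only: the expression, the property name and the sample count are illustrative and not taken from the actual MP.

```xml
<ConditionDetection ID="CDConsolidateSamples" TypeID="System!System.ExpressionFilter">
  <Expression>
    <SimpleExpression>
      <ValueExpression>
        <XPathQuery Type="String">Property[@Name='ProvisioningMode']</XPathQuery>
      </ValueExpression>
      <Operator>Equal</Operator>
      <ValueExpression>
        <Value Type="String">true</Value>
      </ValueExpression>
    </SimpleExpression>
  </Expression>
  <!-- New in OpsMgr 2012: the condition only matches after 3 consecutive samples -->
  <SuppressionSettings>
    <MatchCount>3</MatchCount>
  </SuppressionSettings>
</ConditionDetection>
```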


Upgrade Tip

This version is in-place upgradable from the previous version. However, since additional input parameters have been introduced to the scripts used by the monitors and rules, you may experience a large number of “Operations Manager failed to start a process” warning alerts right after the updated MPs have been imported and distributed to the OpsMgr agents. To work around this issue, I strongly recommend placing the “All ConfigMgr 2012 Clients” group into maintenance mode for 1 hour before importing the updated MPs. To do so, simply go to the “Discovered Inventory” view, change the target type to “All ConfigMgr 2012 Clients”, and place the selected group into maintenance mode.


Special Thanks

I'd like to thank all the people who have provided feedback since the last release and spent time helping test this version. I'd especially like to thank Stanislav Zhelyazkov for his valuable feedback and testing effort. I'd also like to thank Marnix Wolf for his blog post, which helped me build the Provisioning Mode Consecutive Sample monitor in this MP.



Download ConfigMgr 2012 (R2) Client Management Pack

What Have I Been Up To

Written by Tao Yang

This blog has been a bit quiet lately. This is because I have been very busy, and the things I've been working on have not eventuated yet. I just want to quickly post a short update here to share some information with everyone.

ConfigMgr 2012 Client Management Pack Update

I have spent the last couple of weeks updating the ConfigMgr 2012 Client Management Pack. I thought it would only take me a few days, but it turned out to take a lot longer than I expected (2 solid weeks with over 10 hours each day, including weekends). Having said that, there have been a lot of changes and bug fixes in the new release. I completed the beta version last night and I'm currently testing it with my fellow SCCDM MVP Stanislav Zhelyazkov. I'll let the beta version run in the test environments for a few more days, so hopefully, if nothing goes wrong, it will be released in the coming days.

A Custom OpsMgr PowerShell Module for SMA

I started writing a number of OpsMgr PowerShell functions that interact directly with the OpsMgr SDK. These functions can be used to create management packs and various rules and monitors. I then transformed these functions into a standalone PowerShell module which can be imported into SMA as an Integration Module. Unlike the built-in OpsMgr module in SMA, this is not a portable module; it does not require the SMA runbook workers to install the native OpsMgr 2012 module.

I will be presenting this module at the Melbourne System Center, Security & Infrastructure group next Thursday night (25th September 2014) at Microsoft's Melbourne office in Southbank, together with Dan Kregor. We will demonstrate how to automate OpsMgr management pack creation using this module, along with SMA, Orchestrator and SharePoint 2013.

To date, I have spent over 2 months working on this project and have already written around 2000 lines of code. I'm really excited about what this solution does, and I think it's pretty cool. If you live in Melbourne and are interested in attending, the RSVP details can be found on the user group website:

After the user group meeting, I will also document this solution and make it available to the community.

SMA Runbook: Update A SharePoint 2013 List Item

Written by Tao Yang


This blog hasn't been too active lately. I've been spending a lot of time learning the new member of the System Center family: Service Management Automation.

Yesterday, I needed an SMA runbook to update SharePoint 2013 list items. I found a sample in a blog post by Christian Booth, which contains an SMA runbook written by Ryan Andorfer, a System Center Cloud and Datacenter MVP. It looks like Ryan's code was written for SharePoint 2010 and does not work for SharePoint 2013, because the SharePoint REST API has been updated. So I spent some time learning a bit more about SharePoint 2013's REST API and developed a new runbook for SharePoint 2013 based on Ryan's code.

PowerShell Code

Here’s the finished work:

Unlike Ryan’s code, which also monitors the SP list, my runbook ONLY updates a specific list item.

Pre-Requisite and Parameters

Prior to using this runbook, you will need to save a credential in SMA which has access to the SharePoint site.


The runbook is expecting the following parameters:

  • SharePointSiteURL: the URL of the SharePoint site, e.g. http://SharepointServer/Sites/DemoSite
  • SavedCredentialName: the name of the saved credential used to connect to the SharePoint site
  • ListName: the name of the list, e.g. “Test List”
  • ListItemID: the ID of the list item that the runbook is going to update
  • PropertyName: the field / property of the item that is going to be updated
  • PropertyValue: the new value to be set on the list item property

Note: The list item ID is the reference number for the item within the list. If you hover the mouse cursor over the item, you will find the list item ID in the URL.
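The core of the update is a single SharePoint 2013 REST call. The sketch below shows the general shape of it using the parameters listed above; the list item type name (SP.Data.Test_x0020_ListListItem) is an assumption for a list called “Test List” and varies per list, so treat this as illustrative rather than the actual runbook code.

```powershell
# Sketch of the SharePoint 2013 REST MERGE update performed by the runbook.
$Cred = Get-AutomationPSCredential -Name $SavedCredentialName
$Uri  = "$SharePointSiteURL/_api/web/lists/getbytitle('$ListName')/items($ListItemID)"
$Headers = @{
    'Accept'        = 'application/json;odata=verbose'
    'X-HTTP-Method' = 'MERGE'   # update only the supplied fields
    'If-Match'      = '*'       # ignore the item version (no concurrency check)
}
$Body = @{
    '__metadata'  = @{ 'type' = 'SP.Data.Test_x0020_ListListItem' }  # assumed type name
    $PropertyName = $PropertyValue
} | ConvertTo-Json
Invoke-RestMethod -Uri $Uri -Method Post -Headers $Headers -Body $Body `
    -ContentType 'application/json;odata=verbose' -Credential $Cred
```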


Putting It to the Test:

To test, I created a new list as shown in the screenshot above and kicked off the runbook with the following parameters:



Here’s the result:



Using It Together With Orchestrator SharePoint IP

Since this SMA runbook requires the list item ID to locate the specific list item, when you design your solution you will need a way to retrieve this parameter before calling the runbook.

If you are also using SC Orchestrator and have deployed the SharePoint IP, you can use the “Monitor List Items” activity, and the List Item ID is published by this activity:



Although I'm still a newbie when it comes to SMA, it has got me really excited. Before SMA, when I designed Orchestrator runbooks, I often ended up writing the entire solution in PowerShell and then chopping my scripts up into many “Run .Net Script” activities. I thought, wouldn't it be nice if there were an automation engine that only uses PowerShell? Well, it looks like SMA is the solution. I wish I had started using it sooner.

If you are like me and want to learn more about this product, I highly recommend reading the Service Management Automation whitepaper (currently version 1.0.4) from my fellow SCCDM MVP Michael Rueefli. I have read it page by page like a bible!

OpsMgr Dashboard Fun: Server Details Using SquaredUp

Written by Tao Yang

After my previous post on how to create a performance view using SquaredUp, the founder of SquaredUp, Richard Benwell, told me that I can also use the “&embed=true” parameter in the URL to get rid of the headers. I also managed to create another widget to display server details. Combined with the performance view, I created a dashboard like this:


The bottom left is the improved version of the performance view (using embed parameter), and the right pane is the server details page:


This server detail view contains the following information:

  • Alerts associated to the computer
  • Health states of the Distributed Apps that this computer is a part of.
  • Health state of its hosted components (similar to the Health Explorer)
  • Discovered properties of this computer

Combined with the performance view, it gives a good overview of the current state of the computer from different angles.

Here’s the script for this server detail view:

And here’s the script for the improved performance view (with “&embed=true” parameter):

I'd also like to clarify that my examples just provide alternative ways to utilise SquaredUp and display useful information on a single pane of glass (dashboards). I don't want to mislead readers of this article into thinking that SquaredUp relies on the native OpsMgr consoles and dashboards. In my opinion and experience with SquaredUp, it is a perfect replacement for the built-in OpsMgr web console.

OpsMgr Dashboard Fun: Performance Widget Using SquaredUp

Written by Tao Yang

I'm a big fan of the SquaredUp dashboard. I implemented it for my current “day-time” employer, Coles, over a year ago on their OpsMgr 2007 environments, and we have also included SquaredUp in the newly built 2012 R2 management groups. In my opinion, it is more flexible than the native web console because it uses HTML5 rather than Silverlight, and it runs in any browser as well as on mobile devices.

One of my favourite features is that SquaredUp can directly read data from the OpsMgr Data Warehouse DB. Traditionally, OpsMgr operators would have to run or schedule reports in order to access aged performance data. Based on my experience, 9 times out of 10 that's a total waste of time: people don't even open those reports when they arrive in their inboxes. With SquaredUp, you can access the performance data for any given period, as long as it's within the retention period, so I can direct users to access the data from SquaredUp whenever they want, without having me involved.

I had some spare time today, so I installed the latest version in my home lab and managed to create a dashboard using the PowerShell Web Browser widget in less than 10 minutes:


This dashboard contains 2 widgets. The one on the left is a state widget targeting the Windows Server class. The widget on the right is a PowerShell Web Browser widget, which has been available since OpsMgr 2012 SP1 UR6 and R2 UR2.

The script behind this widget is very simple. When you access the performance data of a server, the monitoring object ID and the timeframe are passed as variables in the URL, so all I did was pass these 2 variables. In this sample, I used the default timeframe of the last 12 hours; you can specify other values if you like.


And here’s the script:

Additionally, in order to make SquaredUp work in this dashboard, I had to configure the Data Warehouse DB connection and enable Single Sign-On according to the instructions below:


If you haven't played with SquaredUp yet, please take a look at their website; there's an online demo you can access too.

Sparq Consulting

Written by Tao Yang


If you are actively involved in the System Center community, you may have heard of and subscribed to the Inside Podcast Network (IPN). I have known the host of IPN, my fellow System Center Cloud and Datacenter Management MVP Dan Kregor, for many years now (7 to be precise).

Dan and I previously worked in the same team before he left Australia for the UK back in 2008. Since Dan moved back to Melbourne 2 years ago, we've been thinking about working together again. Now, after spending the last 18 months working on a project that designed and implemented one of the largest System Center 2012 infrastructures in the country / region, I finally had time to sit down and plan for the future of my professional career.

After some thorough consideration and conversations with Dan, we have decided to partner up and start our own consulting firm. We've named our new firm Sparq Consulting. We offer a range of services around Microsoft System Center technologies, such as professional / consulting services, training and management pack development.

If you are a regular visitor to my blog, hopefully you’d have a rough idea about my capabilities. Dan and I have very similar skillsets. He has worked for several very well-known consulting firms in the past – I won’t mention the names and details, but if you are interested, please look him up on LinkedIn.

We would love to know about any potential opportunities that your organisation may have, whatever they may be. If you think we could be of help, please feel free to contact us. My Sparq Consulting email address is Tao [dot] Yang [At]

Lastly, we have also started a new blog. From now on, I will also be cross-posting there.

How to Create a PowerShell Console Profile Baseline for the Entire Environment

Written by Tao Yang


Often when I'm working in my lab, I get frustrated because the code in my PowerShell profiles varies between different computers and user accounts, and the profile is also different between the normal PowerShell console and the PowerShell ISE. I wanted to be able to create a baseline for the PowerShell profiles across all computers and all users, no matter which host is being used (normal console vs. PowerShell ISE).

For example, I would like to achieve the following when I start any 64-bit PowerShell console on any computer in my lab under any user account:

This is what I want the consoles to look like:



Although I could manually copy the code into the profiles for each of my user accounts and enable roaming profiles for these users, I don't want to take this approach because it's too manual, and I'm not a big fan of roaming profiles.


My approach is incredibly simple: all I had to do was create a simple script and deploy it as a normal software package using ConfigMgr. I'll now go through the steps.

All Users All Hosts Profile

Firstly, there are actually not one (1), but six (6) different PowerShell profiles (I have to admit, I didn't know this until now). This article from the Scripting Guy explains it very well. Based on the article, I identified that I need to work on the All Users All Hosts profile, because I want the code to run regardless of which user account I'm using, and no matter whether I'm in the normal console or the PowerShell ISE.
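As a quick way to see where these profiles live, the $profile variable exposes the four paths relevant to the current host (run it in both the console and the ISE to cover all six):

```powershell
# List the profile paths known to the current PowerShell host.
$profile | Select-Object -Property AllUsersAllHosts, AllUsersCurrentHost,
    CurrentUserAllHosts, CurrentUserCurrentHost | Format-List
```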


As I mentioned previously, because I want to use the PSConsole module I developed earlier, I need to make sure this module is deployed to all computers in my lab. To do so, I created a simple MSI to copy the module to the PowerShell modules folder and deployed it to all the computers using ConfigMgr. I won't go through how I created the MSI here.

Code Inside the All Users All Hosts profile

The All Users All Hosts profile is located at $PsHome\profile.ps1


Here’s the code I’ve added to this profile:

if (Get-Module -Name PSConsole -ListAvailable) {
    Import-Module PSConsole
}

$host.UI.RawUI.BackgroundColor = "Black"
$host.UI.RawUI.ForegroundColor = "Green"
$host.UI.RawUI.WindowTitle = $host.UI.RawUI.WindowTitle + "  - Tao Yang Test Lab"

If ($psISE) {
    $psISE.Options.ConsolePaneBackgroundColor = "Black"
} else {
    Resize-Console -max -ErrorAction SilentlyContinue
    Set-Location C:\
}

Note: The $psISE variable only exists in the PowerShell ISE environment, so I'm using it to identify which host I'm currently in, with an If… Else… statement controlling what gets executed in the PowerShell ISE versus the normal PowerShell console.

Script To create All Users All Hosts Profile

Next, I have created a PowerShell script to create the All Users All Hosts profile:

# Script Name:        CreateAllUsersAllHostsProfile.ps1
# DATE:               03/08/2014
# Version:            1.0
# COMMENT:            - Script to create All users All hosts PS profile

$ProfilePath = $profile.AllUsersAllHosts

#Create the profile if it doesn't exist
If (!(Test-Path $ProfilePath)) {
    New-Item -Path $ProfilePath -ItemType file -Force
}

#content of the profile script
$ProfileContent = @"
if (Get-Module -Name PSConsole -ListAvailable) {
    Import-Module PSConsole
}

`$host.UI.RawUI.BackgroundColor = "Black"
`$host.UI.RawUI.ForegroundColor = "Green"
`$host.UI.RawUI.WindowTitle = `$host.UI.RawUI.WindowTitle + "  - Tao Yang Test Lab"

If (`$psISE) {
    `$psISE.Options.ConsolePaneBackgroundColor = "Black"
} else {
    Resize-Console -max -ErrorAction SilentlyContinue
    Set-Location C:\
}
"@

#write contents to the profile
if (Test-Path $ProfilePath) {
    Set-Content -Path $ProfilePath -Value $ProfileContent -Force
} else {
    Write-Error "All Users All Hosts PS Profile does not exist and this script failed to create it."
}

As you can see, I have stored the content in a multi-line string (here-string) variable. The only thing to pay attention to is that I have to add the PowerShell escape character, the backtick (`), in front of each variable (dollar sign $) so it is not expanded when the here-string is assigned.

This script overwrites the profile if it already exists, so it ensures the profile is consistent across all computers.

Deploy the Profile Creation Script Using ConfigMgr

In SCCM, I have created a Package with one program for this script:


Command Line: %windir%\Sysnative\WindowsPowerShell\v1.0\Powershell.exe .\CreateAllUsersAllHostsProfile.ps1

Note: I'm using ConfigMgr 2012 R2 in my lab. Even on a 64-bit OS, the ConfigMgr client executes this command in a 32-bit environment, therefore I have to use “Sysnative” instead of “System32” to overcome the file system redirection of a 64-bit OS.

I created a recurring deployment for this program:


I've set it to run once a day at 8:00 am and to always rerun.


This is an example of how we can standardise a baseline for PowerShell consoles within the environment. Individual users are still able to add user-specific items in the other profiles.

For example, on one of my computers, I have added one line to the default Current User Current Host profile:


In the All Users All Hosts profile, I set the location to C:\, but in the Current User Current Host profile, I set the location to “C:\Scripts\Backup Script”. The result is that when I start the console, the location is set to “C:\Scripts\Backup Script”: the Current User Current Host profile is executed after the All Users All Hosts profile. Therefore we can use the All Users All Hosts profile as a baseline and the Current User Current Host profile as a delta.

This Blog Gets A Major Facelift

Written by Tao Yang

I have kept the same theme on this WordPress blog since day 1. It has been 4 years, and I was starting to get sick of it, especially that picture of an old iPhone at the top of the page. I finally got around to updating the theme today.

I’ve also changed the site title to a more suitable one: “Tao Yang’s System Center Blog”.

Special thanks to my wife: the background picture was taken by her with her Nikon D90 in Fiji a few years ago.

Bye bye to the old look,


Sometimes I wish there were more artistic genes in me. I'm still not 100% satisfied with the look, but this is the best I can do for now.

An Alternative for Surface Pro Docking Stations

Written by Tao Yang

I bought my Surface Pro 2 last November, in the third week after it was released in Australia. I only got it in the third week because I was on holiday in China when it was released, and all the resellers had run out of stock by the time I came back.

I also bought a Type Cover 2 at the same time. I really wanted to get the Power Cover and the docking station, but they weren't released back then. I thought I'd get the Type Cover for now and get the Power Cover and the docking station when they became available in Australia.

Guess what: I was still waiting when Microsoft announced the Surface 3 release date. I sort of got the idea that they will probably never come to Australia.

For me, a power keyboard is a nice-to-have, but I really wanted a docking station! Therefore, I had to look elsewhere. I soon found two possible alternatives (USB 3.0 docking stations).

Toshiba Dynadock vs. Targus USB 3.0 Dual Video Dock

Toshiba Dynadock U3.0


Targus USB3.0 SuperSpeed Dual Video Docking Station


Both have similar specs. The local retail price for the Toshiba one is around AUD $160, and the Targus around AUD $180 (currently $1 AUD = $0.94 USD). I decided to go for the Targus one simply because the Toshiba dock is vertical with a stand, which makes it harder to carry around (if I want to). The Targus dock seems more portable to me.

So instead of buying it in a retail shop, I managed to find a seller on eBay U.S. who accepts “Best Offer”. After bargaining back and forth a few times, I managed to get a brand new one for USD $85. With international shipping, in the end I paid AUD $118, and I'm very happy with that price!

Targus Dock vs. Surface Dock

Here’s a specs comparison between the Targus dock and the Surface Pro 2 dock:

                           Targus USB3.0 Dual Video Dock    Surface Pro 2 Dock
Video                      1x DVI, 1x HDMI                  1x Mini DisplayPort
USB Ports                  2x USB 3.0, 4x USB 2.0           1x USB 3.0, 3x USB 2.0
NIC                        1x Gigabit NIC                   1x 10/100 NIC
Audio                      1x 3.5mm speaker, 1x 3.5mm mic   1x 3.5mm speaker, 1x 3.5mm mic
Power Supply for Surface   No                               Yes
Security Lock              Yes                              No

The Targus dock also comes with a DVI-to-VGA adapter and an HDMI-to-DVI adapter to cater for different monitor connections. Based on the comparison above, the Targus dock is definitely more feature-rich. Since I had already bought a spare Surface Pro 2 power supply from eBay, I didn't mind that this dock can't power the Surface.

More Pictures

Here’s the back view:

Targus Back

Using it with my Surface Pro 2:


Physical size comparing with Surface Pro 2:



I had no problems with drivers; they were all installed automatically when I connected the dock for the first time.

Cameron Fuller wrote an article on his experience with the Surface 2 RT: Using the Surface 2 RT like a Pro-fessional. In it, Cameron listed all the hardware accessories he has purchased for the RT device. I'm guessing RT devices will always face driver compatibility issues; I haven't managed to find an RT device to test this dock with, so I'm not sure whether it supports Windows RT.

Replacement for Other Devices

Down here in Australia, a USB 3.0 video adapter retails for around AUD $100 (around USD $94). Getting a docking station like this is equivalent to getting:

  • 2x USB 3 video adapter
  • 1x USB 3 or USB 2 hub
  • 1x GB USB NIC

So it is definitely cheaper to get the dock instead, not to mention you end up with only one device on your desk.

So now, even if the Surface docking station were made available in the Australian market, I'd still stick with this Targus dock, simply because I can connect two external monitors.

The only thing I haven't tried is PXE booting through the NIC port on this dock. If someone has already tried it, please let me know.