My Experience Manipulating MDT Database Using SMA, SCORCH and SharePoint

Written by Tao Yang


At work, there is an implementation team that is responsible for building Windows 8 tablets in a centralised location (we call it the integration centre) and then shipping these tablets to remote locations around the country. We use SCCM 2012 R2 and MDT 2013 to build these devices using an MDT-enabled task sequence in SCCM. The task sequence uses MDT locations to apply site-specific settings (I'm not an OSD expert, so I'm not even going to try to explain exactly what these location entries do in the task sequence).


In order to build these tablets for a remote site, before kicking off the OSD build, the integration centre's default gateway IP address must be added to the location entry for that specific site, and removed from all other locations.


Because our SCCM people didn't want to give the implementation team access to the MDT Deployment Workbench, my team has been manually updating the MDT locations whenever the implementation team wants to build tablets.

I wasn't aware of this arrangement until someone in my team went on leave and asked me to take care of it while he was away. I soon got really annoyed because I had to do this a few times a day! Therefore I decided to automate the process using SMA, SCORCH and SharePoint so the implementation team can update the location themselves without being given access to MDT.

The high level workflow is shown in the diagram below:

MDT Automation


01. SharePoint List

Firstly, I created a list on one of our SharePoint sites. This list contains only one item:


02. Orchestrator Runbook

First, I deployed the SharePoint integration pack to the Orchestrator management servers and all the runbook servers. Then I set up a connection to the SharePoint site using a service account.


The runbook only has 2 activities:


Monitor List Items:




The link filters on the list item ID, which must equal 1 (the first item in the list). This prevents users from adding additional items to the list; they must always edit the first (and only) item.

Start SMA Runbook called “Update-MDTLocation”:



This activity runs a simple PowerShell script to start the SMA runbook. The SMA connection details (user name, password, SMA web service server and web service endpoint) are all saved in Orchestrator as variables.
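The script itself isn't shown here; a minimal sketch of the idea using the Start-SmaRunbook cmdlet from the SMA PowerShell module (server name, credentials and the runbook parameter name below are hypothetical placeholders) might look like this:

```powershell
# Hypothetical values - in the real runbook these come from Orchestrator variables
$SMAWebServiceEndpoint = "https://sma01.corp.local"   # SMA web service server
$Port = 9090                                          # default SMA web service port
$Cred = New-Object System.Management.Automation.PSCredential `
    ("CORP\svc-sma", (ConvertTo-SecureString "P@ssw0rd" -AsPlainText -Force))

# Start the SMA runbook, passing the new location taken from the SharePoint list item
Import-Module Microsoft.SystemCenter.ServiceManagementAutomation
Start-SmaRunbook -WebServiceEndpoint $SMAWebServiceEndpoint -Port $Port `
    -Credential $Cred -Name "Update-MDTLocation" `
    -Parameters @{ NewLocation = "<New Gateway IP Location published data>" }
```

In Orchestrator, the `<New Gateway IP Location published data>` placeholder would be replaced by the published data from the "Monitor List Items" activity.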


03. SMA Runbook


Firstly, I created a few variables, credentials and connections to be used in the runbook:



  • Windows credential that has access to the MDT database (our MDT database is located on the SCCM SQL server, so it only accepts Windows authentication). I named the credential “ProdMDTDB”


  • MDT Database SQL Server address. I named it “CM12SQLServer”
  • Gateway IP address. I named it “GatewayIP”


Here’s the code for the SMA runbook:
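The runbook code is shown as a screenshot in the original post. A simplified sketch of what such a runbook could look like is below; the asset names match the ones created above, but the SQL statements and table/column names are placeholders, as the real MDT database schema differs:

```powershell
workflow Update-MDTLocation
{
    param (
        [Parameter(Mandatory=$true)][string]$NewLocation
    )

    # SMA assets created earlier
    $Cred      = Get-AutomationPSCredential -Name "ProdMDTDB"
    $SQLServer = Get-AutomationVariable -Name "CM12SQLServer"
    $GatewayIP = Get-AutomationVariable -Name "GatewayIP"

    InlineScript {
        # PLACEHOLDER SQL: the real MDT schema stores gateway-to-location
        # mappings in its own tables - adjust table and column names before use
        $Query = @"
DELETE FROM dbo.LocationGateways WHERE Gateway = '$Using:GatewayIP';
INSERT INTO dbo.LocationGateways (LocationID, Gateway)
SELECT ID, '$Using:GatewayIP'
FROM dbo.LocationIdentity
WHERE Location = '$Using:NewLocation';
"@
        Invoke-Sqlcmd -ServerInstance $Using:SQLServer -Database "MDT" -Query $Query
    } -PSCredential $Cred
}
```

The key point is that the credential, SQL server address and gateway IP all come from SMA assets, so nothing sensitive is hardcoded in the runbook.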

Putting Everything Together

As demonstrated in the diagram in the beginning of this post, here’s how the whole workflow works:

  1. A user logs in to the SharePoint site and updates the only item in the list, entering the new location in the “New Gateway IP Location” field.
  2. The Orchestrator runbook checks for updated items in this SharePoint list every 15 seconds.
  3. If the Orchestrator runbook detects that the first (and only) item has been updated, it takes the new location value, starts the SMA runbook and passes the new value to it.
  4. The SMA runbook runs a PowerShell script to update the gateway location directly in the MDT database.
  5. The SMA runbook sends an email to a nominated address when the MDT database has been updated.

The email looks like this:


The Orchestrator runbook and the SMA runbook execution history can also be viewed in Orchestrator and WAP admin portal:



Room for Improvement

I created this automation process in a quick and easy way to get them off my back. I know there are a lot of areas in this process that can be improved, e.g.:

  • Using an SMA runbook to monitor the SharePoint list directly so Orchestrator is no longer required (i.e. using the script from this article – credit to Christian Booth and Ryan Andorfer).
  • User input validation
  • Looking up AD to retrieve the user's email address instead of hardcoding it in a variable.

Maybe in the future when I have spare time, I'll go back and make it better, but for now, the implementers are happy, and my team mates are happier because it is one less thing on our plate.


I hope you find my experience in this piece of work useful. I am still very new to SMA (and I know nothing about MDT), so if you have any suggestions or criticisms, please feel free to drop me an email.

Installing VMM 2012 R2 Cluster in My Lab

Written by Tao Yang

I needed to build a 2-node VMM 2012 R2 cluster in my lab in order to test an OpsMgr management pack that I'm working on. I was having difficulties getting it installed on a cluster based on 2 Hyper-V guest VMs, and I couldn't find a detailed step-by-step guide. After many failed attempts I finally got it installed, so I'll document the steps I took in this post, in case I need to do it again in the future.

AD Computer accounts:

I pre-staged 4 computer accounts in the existing OU where my existing VMM infrastructure is located:

  • VMM01 – VMM cluster node #1
  • VMM02 – VMM cluster node #2
  • VMMCL01 – VMM cluster
  • HAVMM – Cluster Resource for VMM cluster


I assigned VMMCL01 full control permission on the HAVMM (cluster resource) AD computer account:


IP Addresses:

I allocated 4 IP addresses, one for each computer account listed above:


Guest VMs for Cluster Nodes

I created 2 identical VMs (VMM01 and VMM02) located in the same VLAN. There is no requirement for shared storage between these cluster nodes.

Cluster Creation

I installed the Failover Clustering feature on both VMs and created a cluster.
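For reference, the same steps can be done in PowerShell (cluster and node names from the pre-staged accounts above; the IP address is a placeholder for the one allocated to the cluster):

```powershell
# Run on both nodes: install the Failover Clustering feature
Install-WindowsFeature -Name Failover-Clustering -IncludeManagementTools

# Run once: create the cluster using the pre-staged computer account
# and its allocated IP address
New-Cluster -Name VMMCL01 -Node VMM01, VMM02 -StaticAddress 192.168.1.50
```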






VMM 2012 R2 Installation

When installing the VMM management server on a cluster node, the installer will ask whether you want to install a highly available VMM instance; select Yes when prompted. Also, the SQL Server hosting the VMM database must be a standalone SQL Server or a SQL cluster; it cannot be installed on one of the VMM cluster nodes.

DB Configuration


Cluster Configuration


DKM Configuration


Port configuration (left as default)


Library configuration (need to configure manually later)




Run the VMM installer again on the second cluster node.

As instructed in the completion window, run ConfigureSCPTool.exe -AddNode CORP\HAVMM$

Cluster Role is now created and can be started:


OpsMgr components

In order to integrate VMM and OpsMgr, the OpsMgr agent and console need to be installed on both VMM cluster nodes. I pointed the OpsMgr agents to my existing management group in the lab, approved the manually installed agents, and enabled agent proxy for both nodes (required for monitoring clusters).

Installing Update Rollup

After the OpsMgr components were installed, I installed the following updates from the latest System Center 2012 R2 Update Rollup (UR4 at the time of writing):

  • OpsMgr agent update
  • OpsMgr console update
  • VMM management server update
  • VMM console update

Connect VMM to OpsMgr

I configured OpsMgr connection in VMM console:




The intention of this post is simply to dump all the screenshots that I took during the install, and to document the “correct” way to install a VMM cluster that worked in my lab after so many failed attempts.

The biggest hold-up for me was not realising that I needed to create a separate computer account and allocate a separate IP address for the cluster role (HAVMM). I was using the cluster name (VMMCL01) and its IP address in the cluster configuration screen, and the installation failed:


After going through the install log, I realised I couldn't use the existing cluster name:


When I ran the install again using a different name and IP address for the cluster role, the installation completed successfully.

Visual Studio 2013 Community Edition

Written by Tao Yang

Nowadays, Visual Studio is definitely one of my top 5 most-used applications. I also started using Visual Studio Online to store source code a few months ago. I have started migrating my management packs and PowerShell scripts into Visual Studio Online, and connecting Visual Studio to my Visual Studio Online repository.

Microsoft released a new edition of Visual Studio 2013 a few days ago: Visual Studio 2013 Community Edition. This morning, in order to test it, I uninstalled Visual Studio Ultimate from one of my laptops and installed the new Community edition instead.

I tested all the features and extensions that I care about, and I have to say I'm amazed that all of them worked!

Visual Studio Online: I am able to connect to my Visual Studio Online account and retrieve a Management Pack project that I'm currently working on.



Visual Studio Authoring Extension (VSAE): I installed the same VSAE version as in my previous installation on this laptop, and all MP-related options are still there:


I tried to build the MP in the solution I'm working on, and it built successfully:



PowerShell Tools for Visual Studio 2013: This is a community extension developed by PowerShell MVP Adam Driscoll (more information can be found here). This extension turns Visual Studio into a PowerShell script editor. As expected, it works in the Community edition and my PowerShell script is nicely laid out:



When Microsoft discontinued development of the OpsMgr 2007 R2 Authoring Console and replaced it with VSAE, in my opinion it became harder for average IT Pros to start authoring management packs. One of the reasons is that VSAE is an extension for Visual Studio and requires the Professional or Ultimate edition, which is not cheap compared with the old Authoring Console (free). Therefore I am really excited to find that VSAE works just fine with the latest free Community edition. I hope the Community edition will benefit OpsMgr and Service Manager specialists around the world by providing us an affordable authoring solution.

Lastly, having said that, there are some licensing limitations for the Community edition. Please read THIS article carefully before using it, i.e. if you are working for a large enterprise and developing a commercial application, you are probably not going to be able to use it.

Disclaimer: In this post, I'm only focusing on the technical aspects based on my experience. Please don't hold me responsible if you misuse Visual Studio 2013 Community edition and violate the licensing conditions. As I mentioned above, please read THIS article carefully first to determine whether you are eligible!

A Simplified Way to Send Emails and Mobile Push Notifications in SMA

Written by Tao Yang


For those who know me, I'm an OpsMgr guy. I spend a lot of time in OpsMgr and I am very used to the way OpsMgr sends notifications (using notification channels and subscribers).

In OpsMgr, I like the idea of saving the SMTP configuration and notification recipients’ contact details into the system so everyone who has got enough privilege can use these configurations (when configuring alert subscriptions).

Over the last few months, I have spent a lot of time on SMA (Service Management Automation). As I started building more and more runbooks and integration modules, I really missed the simple way of sending notifications in OpsMgr. Although there is a built-in PowerShell cmdlet for sending emails (Send-MailMessage), it requires a lot of input parameters, and the runbook author needs to have all the SMTP information available. I thought it would be nice if I could save SMTP settings as connection objects (similar to notification channels in OpsMgr), and recipients' contact details (email addresses and mobile push notification services' API keys) as connection objects too (similar to subscribers in OpsMgr).

To achieve my goals, I have created 2 SMA Integration modules:

Module Name | Connection Type Name | PowerShell Functions
SendEmail | SMTPServerConnection | Send-Email
SendPushNotification | SMAAddressBook | Send-MobilePushNotification

SendEmail Module

This module defines a connection type that can be used to save all SMTP-related information:

  • SMTP Server address
  • Port
  • Authentication Method (Anonymous, Integrate or Credential)
  • User name
  • Password
  • Sender Name
  • Sender Address
  • UseSSL (Boolean)




This module also provides a PowerShell function called “Send-Email”. Since retrieving an automation connection in SMA returns a hash table, you can not only pass individual SMTP parameters into the Send-Email function, but also simply pass the connection object that you have retrieved using the “Get-AutomationConnection” cmdlet. For more information, please refer to the help topic of this function and the sample runbook below.

SendPushNotification Module

This module provides a connection type called SMAAddressBook. It can be used like an address book to store recipients' contact details:

  • Display Name
  • Email Address (optional)
  • NotifyMyAndroid API Key (optional, encrypted)
  • Prowl (iOS push notification) API Key (optional, encrypted)
  • NotifyMyWindowsPhone API Key (optional, encrypted)



This module also provides a PowerShell function called Send-MobilePushNotification. It can be used to send push notifications to Prowl, NotifyMyAndroid or NotifyMyWindowsPhone.

Sample Runbook

As you can see from this sample, the runbook author does not need to know the SMTP server information (including login credentials), nor the contact details of the recipient. The runbook can simply pass the SMTP connection object (PowerShell Hash Table) into the Send-Email function.
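The sample runbook itself is shown as a screenshot; a simplified sketch of the pattern (the connection asset names and parameter names below are illustrative, not the modules' exact signatures) looks like this:

```powershell
workflow Send-TestNotification
{
    # Retrieve the SMTP settings and the recipient's address book entry
    # (connection asset names are examples)
    $SMTPConn  = Get-AutomationConnection -Name "ProdSMTP"   # SMTPServerConnection
    $Recipient = Get-AutomationConnection -Name "TaoYang"    # SMAAddressBook

    # Pass the whole connection hash table - no SMTP details needed in the runbook
    Send-Email -SMTPConnection $SMTPConn -To $Recipient.EmailAddress `
        -Subject "Test from SMA" -Body "Hello from the SendEmail module"

    # Push notification to the recipient's registered mobile device(s)
    Send-MobilePushNotification -AddressBookEntry $Recipient `
        -Event "SMA Test" -Description "Hello from the SendPushNotification module"
}
```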

After I executed this runbook, I received the notification via both Email and Android push notification:




Please download from the link below. Once downloaded, import the zip files into SMA:


Download Link

Related Posts

OpsMgr Alerts Push Notification to iOS (And Android, And Windows Phone) Devices

Authoring Integration Modules for SMA


As shown in the sample above, once the SMTP details are saved in SMTP connection objects, and recipients’ contact details are saved as SMAAddressBook connections, it is really simple to utilise the functions provided by these 2 modules to send notifications.

Also, I'd like to point out that I had to create 2 integration modules instead of 1 because I needed to create 2 kinds of connections. Having said that, these 2 modules do not depend on each other and can be used separately.

As many people refer to SMA modules and runbooks as Lego pieces, I will definitely share more of my Lego pieces as they are developed. In the meantime, please feel free to contact me if you have questions or suggestions.

Using PowerShell and OpsMgr SDK to Get and Set Management Group Default settings

Written by Tao Yang

Over the last couple of days, I have written a few additional functions for the OpsMgrSDK PowerShell / SMA module that I've been working on over the last few months. Two of these functions are:

  • Get-MGDefaultSettings – Get ALL default settings of an OpsMgr 2012 (R2) management group
  • Set-MGDefaultSetting – Set any particular MG default setting

I haven't seen anything similar to these on the net before, and although they will be part of the module when I release it to the public later, I think they are pretty cool, so I'll publish the code here now.


This function returns an arraylist which contains ALL the default settings of the management group.


$DefaultSettings = Get-MGDefaultSettings -SDK "OpsMgrMS01" -Verbose


As you can see, this function retrieves ALL default settings of a management group. It returns the following properties:

  • SettingFullName: The full name of the assembly type of the setting. This is required when using the Set-MGDefaultSetting function to set the value.
  • SettingName: The name of the assembly type of the setting. Consider it the setting category.
  • FieldName: The actual name of the setting. It is required when using the Set-MGDefaultSetting function.
  • Value: The current default value of the setting.
  • AllowOverride: When true, this value can be overridden for a particular instance (to differ from the default value).

If you want to retrieve a particular setting, you can always pipe (“|”) to Where-Object to filter down to it:
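For example, using the properties listed above (setting names taken from the Set-MGDefaultSetting example later in this post):

```powershell
# Retrieve all default settings of the management group
$DefaultSettings = Get-MGDefaultSettings -SDK "OpsMgrMS01"

# Filter down to one particular setting by its field name
$DefaultSettings | Where-Object { $_.FieldName -eq "AlertAutoResolveDays" }

# Or filter by setting category
$DefaultSettings | Where-Object { $_.SettingName -eq "AlertResolution" }
```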





Set-MGDefaultSetting -SDK "OpsMgrMS01" -SettingType Microsoft.EnterpriseManagement.Administration.Settings+ManagementGroup+AlertResolution -FieldName AlertAutoResolveDays -Value 3 -Verbose


I think these two functions are particularly useful when managing multiple management groups. They can be used in automation products such as System Center Orchestrator and SMA to synchronise settings among multiple management groups (i.e. Test vs Dev vs Prod).

Use of ConfigMgr 2012 Client MP: Real Life Examples

Written by Tao Yang

Last week, while I was assisting with a few production issues in a ConfigMgr 2012 environment, I had to quickly implement some monitoring for some ConfigMgr 2012 site systems. By utilising the most recent release of the ConfigMgr 2012 Client management pack and a few DCM baselines, I managed to achieve the goals in a short period of time. The purpose of this post is to share my experience and hopefully someone can pick up a few tips and tricks from it.


We are in the process of rebuilding a few hundred sites from Windows Server 2008 R2 / System Center 2007 R2 to Windows Server 2012 R2 / System Center 2012 R2. Last week, the support team identified a few issues during the conversion process, and I was asked to assist. In this post, I will go through 2 particular issues, and also how I set up monitoring so that the support team and management have a clearer picture of the real impact.

Issue 1: WinRM connectivity issues caused by duplicate computer accounts in AD.

The conversion process involves rebuilding some physical and virtual servers from Windows Server 2008 R2 to Windows Server 2012 R2. When they are rebuilt, they are also moved from Domain A to Domain B (in the same forest) while the computer name remains the same. The support team found they could not establish WinRM connections to some servers after the rebuild; they got some Kerberos-related errors. I had a quick look and found the issue was caused by the old computer account not having been removed from Domain A, so WinRM using just the NetBIOS name would fail while using the FQDN was OK. Although the entire conversion process is automated using Service Manager and Orchestrator, and there is an activity in one of the runbooks that deletes old computer accounts, somehow this did not happen for every server. Moving forward, the support team needs to be notified via SCOM when duplicate computer accounts exist for any computer.

Issue 2: WDS service on ConfigMgr 2012 Distribution Points being mysteriously uninstalled

It took us and Microsoft Premier Support a few days to identify the cause; I won't go into the details. But we need to be able to identify, from the Distribution Point itself, whether it is still a PXE-enabled DP.

To achieve both goals, I created 2 DCM baselines and targeted them to appropriate collections in ConfigMgr.

Duplicate AD Computer Account Baseline

This baseline contains only 1 Configuration Item (CI). The CI uses a script to detect whether the computer account exists in other domains. Here's the script (note the domain names need to be modified in the first few lines):

In order for the CI to be compliant, the return value from the script needs to be “False” (no duplicate accounts found).
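The script itself is shown as an image in the original post; a rough PowerShell equivalent of the logic (the domain list is a placeholder to be modified, and the CI compares the script output against the string "False") could look like this:

```powershell
# Domains to check for a duplicate computer account - modify these first few lines
$DomainsToCheck = @("LDAP://DC=domainA,DC=corp,DC=local")   # placeholder DN(s)
$ComputerName   = $env:COMPUTERNAME

$DuplicateFound = $false
foreach ($DomainPath in $DomainsToCheck) {
    # Search each old domain for a computer account with the same name
    $Searcher = New-Object System.DirectoryServices.DirectorySearcher
    $Searcher.SearchRoot = [ADSI]$DomainPath
    $Searcher.Filter = "(&(objectClass=computer)(name=$ComputerName))"
    if ($Searcher.FindOne() -ne $null) { $DuplicateFound = $true }
}

# DCM compares this output against the compliant value "False"
Write-Output $DuplicateFound.ToString()
```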



Distribution Point Configuration Baseline

This baseline also contains only 1 CI. Since it checks an application setting, I used a very simple script to detect the existence of the ConfigMgr DP:


The compliant condition for the CI is set to:

  • Reg value “HKLM\SOFTWARE\Microsoft\SMS\DP\IsPXE” must exist and set to 1
  • Reg value “HKLM\SOFTWARE\Microsoft\SMS\DP\PXEInstalled” must exist and set to 1
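Those two conditions can be checked with a couple of lines of PowerShell (a sketch of the equivalent check, not the exact script used in the baseline):

```powershell
# Check the PXE-related registry values on a Distribution Point
$DPKey = "HKLM:\SOFTWARE\Microsoft\SMS\DP"
$Props = Get-ItemProperty -Path $DPKey -ErrorAction SilentlyContinue

# Compliant when both values exist and are set to 1
$Compliant = ($Props.IsPXE -eq 1) -and ($Props.PXEInstalled -eq 1)
Write-Output $Compliant.ToString()
```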



Alerting through OpsMgr

Once I had set up and deployed these 2 baselines to the appropriate collections, everything was in place in ConfigMgr, and I could take the ConfigMgr admin hat off.

So what do I need to configure now in OpsMgr for the alerts to go through? The answer is: nothing! Since the ConfigMgr 2012 Client MP has already been implemented in the OpsMgr management group, I don't need to put on the OpsMgr admin hat because there's nothing else I need to do. Within a few hours, the newly created baselines will be discovered in OpsMgr and start being monitored:




Utilising the DCM baseline monitoring capability in the ConfigMgr 2012 Client MP can greatly simplify the process of monitoring configuration items on targeted endpoints. As shown in these 2 examples, there is no requirement to have OpsMgr administrators involved. Additionally, it is much simpler to create collections for deploying DCM baselines than to define target classes and discoveries in OpsMgr (in order to target the monitors / rules). I encourage you (both ConfigMgr admins and OpsMgr admins) to give it a try, and hopefully you will find it beneficial.

PowerShell Script to Add MP References to Unsealed Management Packs

Written by Tao Yang


A few months ago, I wrote a script to remove obsolete MP references from unsealed management packs and also built it into the OpsMgr Self Maintenance MP. Last week, I needed to write a script to do the opposite: adding MP references to unsealed MPs.

In the past, some of the MPs I have released had issues with creating overrides in the OpsMgr operational console, i.e. the OpsMgr 2012 Self Maintenance MP and the ConfigMgr 2012 Client MP. Both of them have one thing in common: the phrase “2012” is part of the MP namespace, and if someone tries to create an override for these MPs in the operational console, he / she will get an “Alias attribute is invalid” error:


When I was testing the latest release of the ConfigMgr 2012 Client MP last week, I also got this error when assigning a RunAs account to the RunAs profile defined in the MP – because the assignment is basically a Secure Reference Override, and an MP reference to the ConfigMgr 2012 Client Library MP needs to be created in the Microsoft.SystemCenter.SecureReferenceOverride MP.

Although we can easily work around this issue by exporting the unsealed MP and adding the MP reference in by manually editing the XML, I thought I'd write a PowerShell script to do this and make everyone's life easier.


To make it a bit easier for users, this PowerShell function CAN ONLY be used on an OpsMgr management server.

Usage Example:

Add-MPRef -ReferenceMPName "ConfigMgr.2012.Client.Library" -Alias "C2CL" -UnsealedMPName "Microsoft.SystemCenter.SecureReferenceOverride" -Verbose


By using this script, we can pick the alias name that we prefer. Although this script is already included in the ConfigMgr 2012 Client MP package, I'd also like to share it on this blog. For me, it's a rare scenario to have to do this, but I hope it can also help someone out there.

Updated ConfigMgr 2012 (R2) Client Management Pack Version

Written by Tao Yang


It's only been 2 weeks since I released the last update of this MP. Soon after the release, Mr. David Allen, a fellow System Center CDM MVP, contacted me and asked me to test his SCCM Compliance MP and possibly combine it with my ConfigMgr 2012 Client MP.

In the ConfigMgr 2012 Client MP, the OVERALL DCM baseline compliance status is monitored via the DCM Agent class, whereas in David's SCCM Compliance MP, each DCM baseline is discovered as a separate entity and monitored separately. Because of the utilisation of the Cook Down feature, compared with the approach in the ConfigMgr 2012 Client MP, this approach adds no additional overhead to the OpsMgr agents.

David's MP also includes a RunAs profile to allow users to configure monitoring for OpsMgr agents using a low-privileged default action account.

I think both of these features are pretty cool, so I have taken David's MP, re-modelled the health class relationships, re-written the scripts from PowerShell to VBScript, and combined what David has done with the ConfigMgr 2012 Client MP.

If you (the OpsMgr administrators) are concerned about the number of additional objects that are going to be discovered by this release (every DCM baseline on every ConfigMgr 2012 client monitored by OpsMgr): the DCM Baselines discovery is disabled by default. I have taken a similar approach to configuring Business Critical Desktop monitoring; there is an additional unsealed MP in this release to allow you to cherry-pick which endpoints to monitor in this regard.

What's New in This Version

Other than combining David's SCCM Compliance MP, there are also a few other updates included in this release. Here's the full “What's New” list:

Bug Fix: ConfigMgr 2012 Client Missing Client Health Evaluation (CCMEval) Execution Cycles Monitor alert parameter incorrect

Added a privileged RunAs Profile for all applicable workflows

Additional rule: ConfigMgr 2012 Client Missing Cache Content Removal Rule

Enhanced Compliance Monitoring

  • Additional class: DCM Baseline (hosted by DCM agent)
  • Additional Unit monitor: ConfigMgr 2012 Client DCM Baseline Last Compliance Status Monitor
  • Additional aggregate and dependency monitors to rollup DCM Baseline health to DCM Agent
  • Additional State View for DCM Baseline
  • Additional instance groups:
    • All DCM agents
    • All DCM agents on server computers
    • All DCM agents on client computers
    • All Business Critical ConfigMgr 2012 Client DCM Agents
  • Additional unsealed MP: ConfigMgr 2012 Client Enhanced Compliance Monitoring
    • Override to enable DCM baseline discovery for the All DCM agents on server computers group
    • Override to disable the old DCM baseline monitor for the All DCM agents on server computers group
    • Discovery for All Business Critical ConfigMgr 2012 Client DCM Agents (users will have to populate this group, same way as configuring business critical desktop monitoring)
    • Override to enable DCM baseline discovery for the All Business Critical ConfigMgr 2012 Client DCM Agents group
    • Override to disable the old DCM baseline monitor for the All Business Critical ConfigMgr 2012 Client DCM Agents group
  • Additional Agent Task: Evaluate DCM Baseline (targeting the DCM Baseline class)

Additional icons

  • Software Distribution Agent
  • Software Update Agent
  • Software Inventory Agent
  • Hardware Inventory Agent
  • DCM Agent
  • DCM Baseline


Enhanced Compliance Monitoring

This version introduces a new feature that can monitor assigned DCM compliance baselines at a more granular level. Prior to this release, there was a single unit monitor targeting the DCM agent class that monitored the overall baseline compliance status as a whole. Now, each individual DCM baseline can be discovered and monitored separately.

By default, the discovery for DCM baselines is disabled. It needs to be enabled manually via overrides before DCM baselines can be monitored individually.


There are several groups that can be used for overriding the DCM Baseline discovery:


Scenario | Override Target
Enable for all DCM agents | Class: ConfigMgr 2012 Client Desired Configuration Management Agent
Enable for server computers only | Group: All ConfigMgr 2012 Client DCM Agents on Server OS
Enable for client computers only | Group: All ConfigMgr 2012 Client DCM Agents on Client OS
Enable for a subset of computers | Manually create an instance group and populate the membership based on the “ConfigMgr 2012 Client Desired Configuration Management Agent” class

Note: Once the DCM Baseline discovery is enabled, please also disable the “ConfigMgr 2012 Client DCM Baselines Compliance Monitor” for the same targets as it has become redundant.

Once the DCM baselines are discovered, their compliance status is monitored individually:



Additionally, the DCM Baselines have an agent task called “Evaluate DCM Baseline”, which can be used to manually evaluate the baseline. This agent task performs the same action as the “Evaluate” button in the ConfigMgr 2012 client:


ConfigMgr 2012 Client Enhanced Compliance Monitoring Management Pack

An additional unsealed management pack named “ConfigMgr 2012 Client Enhanced Compliance Monitoring” is also introduced. This management pack includes the following:

  • An override to enable DCM baseline discovery for “All ConfigMgr 2012 Client DCM Agents on Server OS” group.
  • An override to disable the legacy ConfigMgr 2012 Client DCM Baselines Compliance Monitor for “All ConfigMgr 2012 Client DCM Agents on Server OS” group.
  • A blank group discovery for the “All Business Critical ConfigMgr 2012 Client DCM Agents” group
  • An override to enable DCM baseline discovery for “All Business Critical ConfigMgr 2012 Client DCM Agents” group.
  • An override to disable the legacy ConfigMgr 2012 Client DCM Baselines Compliance Monitor for “All Business Critical ConfigMgr 2012 Client DCM Agents” group.


In summary, this management pack enables DCM baseline discovery for all ConfigMgr 2012 clients on server computers and switches from the existing “overall” compliance baseline status monitor to the new, more granular monitor that targets individual baselines. This management pack also enables users to manually populate the new “All Business Critical ConfigMgr 2012 Client DCM Agents” group. Members of this group will be monitored the same way as the server computers previously mentioned.

Note: Please only use this management pack if you want to enable enhanced compliance monitoring on all server computers; otherwise, please manually configure the groups and overrides as stated above.


New RunAs Profile for Low-Privilege Environments

Since almost all of the workflows in the ConfigMgr 2012 Client management packs require local administrative access to various WMI namespaces and the registry, they will not work when the OpsMgr agent RunAs account does not have local administrator privileges.

Separate RunAs accounts can be created and assigned to the “ConfigMgr 2012 Client Local Administrator RunAs Account” profile.

RunAs Account Example:


RunAs Profile:


For More information about OpsMgr RunAs account and profile, please refer to:

Note: When assigning a RunAs Account to the “ConfigMgr 2012 Client Local Administrator RunAs Account” profile, you will receive an error like the one below:


Please refer to the MP documentation, section “14.3 Error Received when Adding RunAs Account to the RunAs Profile”, for instructions on fixing this error.

New Rule: Missing Cache Content Removal Rule

This rule runs every 4 hours by default and checks whether any registered ConfigMgr 2012 client cache content has been deleted from the file system. When obsolete cache content is detected, this rule removes the cache content entry from the ConfigMgr 2012 client via WMI and generates an informational alert with the details of the missing cache content:
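The general detection logic can be sketched with the ConfigMgr client's UIResource COM object (this is an illustration of the idea, not the exact script used by the rule):

```powershell
# Enumerate registered cache items and find those whose folder no longer exists
$UIResourceMgr = New-Object -ComObject UIResource.UIResourceMgr
$CacheInfo = $UIResourceMgr.GetCacheInfo()

foreach ($Element in $CacheInfo.GetCacheElements()) {
    if (-not (Test-Path -Path $Element.Location)) {
        # Content folder is gone - remove the orphaned cache entry
        Write-Output ("Removing missing cache entry: " + $Element.ContentID)
        $CacheInfo.DeleteCacheElement($Element.CacheElementId)
    }
}
```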


Additional Icons:

Prior to this release, only the top-level ConfigMgr 2012 Client class had its own dedicated icon. I spent a lot of time looking for icons for all the other classes, and managed to produce icons for each monitoring class in this release:



Note: I only managed to find high-res icons for the Software Distribution Agent and the Software Update Agent (extracted from various DLLs and EXEs). I couldn't find a way to extract icons from AdminUI.UIResources.dll – where all the icons used by SCCM are stored – so for the other icons, I had to use SnagIt to take screenshots. You may notice the quality is not that great, but after a few days' effort trying to find these icons, this is the best I could do. If you have a copy of these icons (at a resolution higher than 80×80), or know a way to extract them from AdminUI.UIResources.dll, please contact me and I'll update them in the next release.


BIG thank you to David Allen for his work on the SCCM Compliance MP, and also helping me test this release!

You can download the ConfigMgr 2012 Client MP Version HERE.

Until next time, happy SCOMMING!

Clean Up SMA Database After a Module Deletion

Written by Tao Yang


I noticed an issue while writing an SMA integration module. I once made a mistake in the .json file, and I found I couldn’t import the updated module back into SMA even after deleting the old version first. I’ll explain the issue using a sample dummy module.

Reproducing The Issue

To reproduce the issue, I first created a dummy module with 3 files:


The module .psm1 and the connection file .json (notice there’s a connection field named “ComputerName” in the .json file):


I zipped and imported this module into SMA. Everything looked good so far, and the connection fields were correctly displayed (as expected):


I then updated the .json file (changed the connection field name from ComputerName to ComputerFQDN):


I zipped the updated module, tried to import the module into SMA, but got an error:


I then tried deleting the existing module and importing it again, but got the same error.
I also noticed that even after the module had been deleted, the connection was still available to be selected:


My Resolution

Since I couldn’t find any documentation on how to completely remove an Integration Module, I went ahead and developed a SQL script to completely remove the module and module connection from the various tables in the SMA database. Here’s the SQL script:

To use this script, you will need to change the @ModuleName and @ConnectionName variable to suit your module:
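The script itself isn’t reproduced in this excerpt, but the approach can be sketched as follows. Note that the table names below are hypothetical placeholders, not the verified SMA database schema – inspect your own SMA database (and take a backup) before attempting anything similar:

```sql
-- HYPOTHETICAL SKETCH ONLY: table names are illustrative placeholders,
-- not the verified SMA schema. Back up the SMA database first.
DECLARE @ModuleName     NVARCHAR(255) = 'DummyModule'
DECLARE @ConnectionName NVARCHAR(255) = 'DummyConnection'

-- Delete child rows first so foreign key constraints are satisfied:
-- connection field definitions, then the connection type itself.
DELETE FROM [Core].[ConnectionFieldValues]
WHERE  ConnectionTypeName = @ConnectionName

DELETE FROM [Core].[ConnectionTypes]
WHERE  Name = @ConnectionName

-- Finally remove the module registration.
DELETE FROM [Core].[Modules]
WHERE  ModuleName = @ModuleName
```

The key idea is simply that the module deletion in the SMA portal leaves orphaned connection rows behind, and those rows are what block the re-import.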


After I ran this script, I was able to import the updated module:



Although I’ve used this for multiple modules in multiple SMA environments and so far have not found any problems, I have not validated this workaround with SMA experts, so please use it at your own risk. Please don’t blame me if it breaks your environment.

SMA Management Pack Could Not Connect To Database Alerts – My Troubleshooting Experience

Written by Tao Yang

I set up 2 servers for the SMA environment in my lab a while back. Yesterday, I loaded the SMA MP into my OpsMgr management group. Needless to say, I followed the MP guide and configured the Database RunAs profile. However, soon after the MP was loaded, I started getting these 2 alerts:

  • The Service Management Automation web service could not connect to the database.
  • The Service Management Automation worker server could not connect to the database.


To troubleshoot these alerts, I first unsealed the management pack, since that is where the monitors come from. The Data Source module of the monitor type uses the System.OleDbProbe probe action module to make the connection to the database.


To simulate the problem, I used a small free utility called Database Browser Portable to test the DB connection. I launched Database Browser under the same service account as the one I configured in the RunAs profile in OpsMgr, and selected OleDB as the connection type:


I populated the Connection String based on the parameters (monitoring object properties) passed into the data source module: Provider=SQLOLEDB;Server=&lt;SQLServer&gt;\;Database=SMA;Integrated Security=SSPI


Note that the Database Instance property is empty. This is fine in my lab because I’m using the default SQL instance; I’ll explain this later.
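If you don’t have a tool like Database Browser handy, the same OLE DB connection string can be tested from PowerShell using the .NET System.Data.OleDb classes (replace the server name placeholder with your own; the instance is left empty here because the default instance is used):

```powershell
# Test the same OLE DB connection string the monitor uses.
# <SQLServer> is a placeholder - substitute your SQL server name.
$connectionString = 'Provider=SQLOLEDB;Server=<SQLServer>\;Database=SMA;Integrated Security=SSPI'
$connection = New-Object System.Data.OleDb.OleDbConnection($connectionString)
try {
    $connection.Open()
    Write-Output "Connection state: $($connection.State)"
}
finally {
    $connection.Close()
}
```

Running this under the RunAs service account (for example via a shell started with that account's credentials) reproduces what the System.OleDbProbe module does.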

The test connection result is positive:


However, after connecting, when I clicked the connection nothing happened: the list of tables was not populated. I then tried using my own account (which has god rights on everything in the lab), and got the same result.

Long story short, after trying different configuration changes on the SQL server, I finally found the issue:

On the SQL server, the Named Pipes protocol was disabled.


After I enabled it, I was able to populate the tables in Database Browser:


And within a few minutes, the alerts were automatically closed.

While I was troubleshooting this issue, I came across a blog post from Stanislav Zhelyazkov. In the post, Stan mentioned adding the DB instance name to the registry (where the discoveries look for it). However, when I added “MSSQLSERVER” to the registry and forced re-discovery, the monitors became critical again and I received several 11852 events in the Operations Manager event log:


I emailed Stan, and he got back to me: he is using a named instance in his lab, and these monitors work fine there after he added the SQL instance name to the registry. He also told me he didn’t recall specifying the SQL instance name during the SMA setup, yet the setup was successful. My guess is that the SQL Browser service must be running on his SQL server, so the setup had no problem identifying the named instance.


Based on my experience and Stan’s experience, we’d like to make the following recommendations:

  • Enable the Named Pipes protocol
  • If using the default SQL instance, please do not manually populate the registry key
  • If using a named instance, please add the SQL instance name in the registry if it’s not populated after setup.


Thanks to Stan for his input on this one!