Tag Archives: SCCM

ConfigMgr OMS Connector

Written by Tao Yang

Earlier this week, Microsoft released a new feature in System Center Configuration Manager 1606 called the OMS Connector:


As we all know, OMS supports computer groups. We can either manually create computer groups in OMS using OMS search queries, or import AD and WSUS groups. With the ConfigMgr OMS Connector, we can now import ConfigMgr device collections into OMS as computer groups.

Instead of using the OMS workspace ID and keys to access OMS, the ConfigMgr OMS connector requires an Azure AD application and service principal. My friend and fellow Cloud and Data Center Management MVP Steve Beaumont blogged about his setup experience a few days ago. You can read Steve's post here: http://www.poweronplatforms.com/configmgr-1606-oms-connector/. As you can see from Steve's post, provisioning the Azure AD application for the connector can be pretty complex if you do it manually: it involves many steps, and you have to use both the old Azure portal (https://manage.windowsazure.com) and the new Azure portal (https://portal.azure.com).

To simplify the process, I have created a PowerShell script to create the Azure AD application for the ConfigMgr OMS Connector. The script is located in my GitHub repository: https://github.com/tyconsulting/BlogPosts/tree/master/OMS

In order to run this script, you will need the following:

  • The latest versions of the AzureRM.Profile and AzureRM.Resources PowerShell modules
  • An Azure subscription admin account from the Azure Active Directory that your Azure Subscription is associated to (the UPN must match the AAD directory name)

When you launch the script, you will first be prompted to log in to Azure:


Once you have logged in, you will be prompted to select the Azure subscription and then specify a display name for the Azure AD application. If you don't specify a name, the script will try to create the Azure AD application under the default name "ConfigMgr-OMS-Connector":


The script creates the AAD application and assigns it the Contributor role on your subscription:


At the end of the script, you will see the 3 pieces of information you need to create the OMS connector:

  • Tenant
  • Client ID
  • Client Secret Key

You can simply copy and paste these into the OMS connector configuration.

Once you have configured the connector in ConfigMgr and enabled SCCM as a group source, you will soon start seeing the collection memberships being populated in OMS. You can search for them in OMS using a query such as "Type=ComputerGroup GroupSource=SCCM":


From what I have seen, the connector runs every 6 hours, and any membership additions or deletions are updated when the connector runs.

For example, if I search for a particular collection over the last 6 hours, I can see this particular collection has 9 members:


During my testing, I deleted 2 computers from this collection a few days ago. If I specify a custom range targeting a 6-hour window from a few days back, I can see this collection had 11 members back then:


This can be useful when you need to track down whether certain computers were placed in a collection in the past.

This is all I have to share today. Until next time, enjoy OMS!

Collecting ConfigMgr Logs To Microsoft Operation Management Suite – The NiCE way

Written by Tao Yang


I have been playing with Azure Operational Insights for a while now, and I am really excited about the capabilities it brings. I haven't blogged anything about OpInsights until now, largely because of all the wonderful articles my MVP friends have already written, e.g. the OpInsights series from Stanislav Zheyazkov (18 parts so far!): https://cloudadministrator.wordpress.com/2015/04/30/microsoft-azure-operational-insights-preview-series-general-availability-part-18-2/

Back in my previous life, when I was working on ConfigMgr for a living, the one thing I hated the most was reading log files, not to mention all the log file names, locations, etc. that I had to memorise. I remember there was even a spreadsheet listing all the ConfigMgr log files. Even now, when I see a ConfigMgr person, I'd always ask, as a joke, "How many log files did you read today?" However, when sh*t hits the fan, people won't see the funny side of it. Based on my experience working on ConfigMgr, I see the following challenges with ConfigMgr log files:

There are too many of them!

Even for the same component, there can be multiple log files (e.g. for the software update point, there are wsyncmgr.log, WCM.log, etc.). Often administrators have to cross-check entries from multiple log files to identify an issue.

Different components place log files in different locations

Site servers, clients, management points, distribution points, PXE DPs, etc. all save logs to different locations. Not to mention that when some of these components co-exist on the same machine, the log locations differ again (e.g. the client log location on a site server is different from that of a normal client).

Log file size is capped

By default, the size of each log file is capped at 2.5MB (I think). Although a copy of the previous log is kept (renamed to a .lo_ file), that still only amounts to about 5MB of log data for a particular component. In a large or busy environment, or when something is not working right, these 2 files (.log and .lo_) probably hold only a few hours of data. Sometimes, by the time you realise something went wrong and you need to check the logs, they have already been overwritten.

It is difficult to read

You need a special tool (CMTrace.exe) to read these log files. If you see someone reading ConfigMgr log files using Notepad, they are either really, really good, or they haven't been working on ConfigMgr for long. The majority of us rely on CMTrace.exe (or Trace32.exe in ConfigMgr 2007) to read log files. When you log on to a computer and want to read some log files (e.g. client log files), you always have to find a copy of CMTrace.exe somewhere on the network and copy it over to the computer you are working on. In my lab, I even created an application in ConfigMgr to copy CMTrace.exe to C:\Windows\System32 and deployed it to every machine, so I don't have to copy it manually again and again. I'm sure this is a common practice and many people have done the same.

Logs are not centralised

In a large environment where your ConfigMgr hierarchy consists of hundreds of servers, it is a PAIN to read logs on all of these servers. For example, when something bad happens with OSD and PXE, the results can be catastrophic (some of you may still remember what an incorrectly advertised OSD task sequence did to a big Australian bank a few years back). Based on my own experience, I have seen a support team need to check SMSPXE.log on as many as a few hundred PXE-enabled distribution points, within a very short time window (before the logs get overwritten). People would have to connect to each individual DP and read the log files one at a time. In a situation like this, if you go up to them and ask "How many logs have you read today?", I'm sure it wouldn't go down too well.

It would be nice if…

When Microsoft released Operational Insights (OpInsights) to preview, the first thing that came to my mind was: it would be very nice if we could collect and process ConfigMgr log files in OpInsights. This would bring the following benefits to ConfigMgr administrators:

  • Logs are centralised and searchable
  • Much longer retention period (up to 12 months)
  • No need for special tools such as CMTrace.exe to read the log files
  • Ability to correlate data from multiple log files and multiple computers when searching, making the administrator's troubleshooting experience much easier.



A ConfigMgr log entry consists of many pieces of information, and the server and client log files use different formats. For example:

Server Log file:


Client Log File:


Before sending the information to OMS, we must first capture only the useful information from each entry and transform it into a more structured format (such as the Windows event log format), so these fields become searchable once stored and indexed in your OMS workspace.

No Custom Solution Packs available

Since OMS is still very new, there aren't many Solution Packs available (known as Intelligence Packs in the OpInsights days). Microsoft has not yet released any SDKs / APIs for partners and 3rd parties to author and publish Solution Packs. Therefore, at this stage, in order to send ConfigMgr log file entries to OMS, we have to utilise our old friend OpsMgr 2012 (with OpInsights integration configured), leveraging the power of OpsMgr management packs to collect and process the data before sending it to OMS (via OpsMgr).

OpsMgr Limitations

As we all know, OpsMgr provides a "Generic Text Log" event collection rule. But unfortunately, this native event data source is not capable of accomplishing what I am trying to achieve here.

NiCE Log File Management Pack

NiCE is a company based in Germany. They offer a free OpsMgr management pack for log file monitoring. There are already many good blog articles written about this MP, so I will not write an introduction here. If you have never heard of it or used it, please read the articles listed below, then come back to this post:

SCOM 2012 – NiCE Log File Library MP Monitoring Robocopy Log File – By Stefan Roth

NiCE Free Log File MP & Regex & PowerShell: Enabling SCOM 2 Count LOB Crashes – By Marnix Wolf

SCOM – Free Log File Monitoring MP from NiCE –By Kevin Greene

The beauty of the NiCE Log File MP is that it can extract the important information (as highlighted in the screenshots above) using Regular Expressions (RegEx) and present the data in a structured way (in XML).

In Regular Expressions, we can define named capturing groups to capture data from a string; this is similar to storing information in variables in programming. I'll use a log file entry from both the ConfigMgr client and server logs, and my favourite Regular Expression tester site https://regex101.com/, to demonstrate how to extract the information highlighted above.

Server Log entry:

Regular Expression:


Sample Log entry:

Execute query exec [sp_CP_GetPushRequestMachine] 2097152112~  $$<SMS_CLIENT_CONFIG_MANAGER><06-07-2015 13:11:09.448-600><thread=6708 (0x1A34)>

RegEx Match:

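To make the named-group idea concrete, here is a hypothetical Python sketch that parses the sample server log entry above. This uses Python's re module purely for illustration; the actual RegEx used by the NiCE MP module has its own dialect and is shown in the screenshots.

```python
import re

# Illustrative pattern for the ConfigMgr server log format:
# <message>$$<component><date time><thread=...>
SERVER_LOG_PATTERN = re.compile(
    r'^(?P<Message>.*?)\$\$'        # the log message itself
    r'<(?P<Component>[^>]+)>'       # e.g. SMS_CLIENT_CONFIG_MANAGER
    r'<(?P<DateTime>[^>]+)>'        # e.g. 06-07-2015 13:11:09.448-600
    r'<thread=(?P<Thread>\d+)'      # decimal thread id
)

line = ('Execute query exec [sp_CP_GetPushRequestMachine] 2097152112~  '
        '$$<SMS_CLIENT_CONFIG_MANAGER><06-07-2015 13:11:09.448-600>'
        '<thread=6708 (0x1A34)>')

match = SERVER_LOG_PATTERN.match(line)
fields = match.groupdict()
print(fields['Component'])  # SMS_CLIENT_CONFIG_MANAGER
print(fields['Thread'])     # 6708
```

Each named group (Message, Component, DateTime, Thread) becomes a separate, searchable field once the entry reaches OMS.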

Client Log entry:

Regular Expression:


Sample Log entry:

<![LOG[Update (Site_9D4393B0-A197-4FC8-AF8C-0BC42AD2F33F/SUM_01a0100c-c3b7-4ec7-866e-db8c30111e80) Name (Update for Windows Server 2012 R2 (KB3045717)) ArticleID (3045717) added to the targeted list of deployment ({C5B54000-2018-4BD9-9418-0EFDFBB73349})]LOG]!><time="20:59:35.148-600" date="06-05-2015" component="UpdatesDeploymentAgent" context="" type="1" thread="3744" file="updatesmanager.cpp:420">

RegEx Match:

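The client log format can be sketched the same way. Again, this is an illustrative Python example (with a shortened message), not the exact RegEx syntax the NiCE MP expects:

```python
import re

# Illustrative pattern for the ConfigMgr client log format:
# <![LOG[message]LOG]!><time="..." date="..." component="..." ...>
CLIENT_LOG_PATTERN = re.compile(
    r'^<!\[LOG\[(?P<Message>.*)\]LOG\]!>'
    r'<time="(?P<Time>[^"]+)"\s+date="(?P<Date>[^"]+)"'
    r'\s+component="(?P<Component>[^"]+)"\s+context="(?P<Context>[^"]*)"'
    r'\s+type="(?P<Type>\d+)"\s+thread="(?P<Thread>\d+)"'
    r'\s+file="(?P<File>[^"]+)">'
)

line = ('<![LOG[Update for Windows Server 2012 R2 (KB3045717) added to the '
        'targeted list]LOG]!><time="20:59:35.148-600" date="06-05-2015" '
        'component="UpdatesDeploymentAgent" context="" type="1" '
        'thread="3744" file="updatesmanager.cpp:420">')

match = CLIENT_LOG_PATTERN.match(line)
print(match.group('Component'))  # UpdatesDeploymentAgent
print(match.group('Date'), match.group('Time'))
```

Note that in the client format the timestamp, component, thread and source file are already attribute/value pairs, so the named groups map onto them directly.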

NiCE Log MP Regular Expression Tester

The NiCE Log MP also provides a Regular Expression Tester UI in the management pack. The good thing about this RegEx tester is that it also shows you what the management pack module output would be (in XML and XPath):


By now, I hope you get the bigger picture of what I want to achieve: use OpsMgr 2012 and the NiCE Log File MP to collect various ConfigMgr 2012 log files (both client and server logs), and then send them to OMS via OpsMgr. It is now time to talk about the management packs.

Management Pack

Obviously, the NiCE Log File MP is required. You can download it from NiCE's customer portal once registered. This MP must be imported into your management group first.

Additionally, your OpsMgr management group must be connected to an Operational Insights workspace (called "System Center Advisor" if you haven't patched your management group in the last few months). However, what I'm about to show you can also store the data in your on-prem OpsMgr operational and data warehouse databases. So, even if you don't use OMS (yet), you can still leverage this solution to store your ConfigMgr log data in the OpsMgr databases.

Management Pack 101

Before I dive into the MP authoring and configuration, I'd like to first spend some time going through some management pack basics; at the end of the day, not everyone working with System Center writes management packs. Going through the basics will help people who haven't previously done any MP development work follow along later.

In OpsMgr, there are 3 types of workflows:

  • Object Discoveries – discover instances (and their properties) of classes defined in management packs.
  • Monitors – responsible for the health states of monitored objects. Can be configured to generate alerts.
  • Rules – not responsible for objects' health states. Can be used to collect information, and can also generate alerts.

Since our goal is to collect information from ConfigMgr log files, it is obvious we are going to create some rules to achieve this goal.

A rule consists of 3 types of member modules:

  • One or more data source modules (the beginning of the workflow)
  • Zero or one condition detection module (optional, the 2nd phase of the workflow)
  • One or more write action modules (the last phase of the workflow)

To map the rule structure onto our requirement, the rules we are going to author (one per log file) will look something like this:

  • Data Source module: Leveraging the NiCE Log MP to read and process ConfigMgr log entries using Regular Expression.
  • Condition Detection module: Map the output of the Data Source Module into Windows event log data format
  • Write Action modules: write the Windows event log formatted data to various data repositories. Depending on your requirements, this could be any combination of the 3 data repositories:
    • OpsMgr Operational DB (On-Prem, short term storage, but able to access the data from the Operational Console)
    • OpsMgr Data Warehouse DB (On-Prem, long term storage, able to access the data via OpsMgr reports)
    • OMS workspace (Cloud based, long term or short term storage depending on your plan, able to access the data via OMS portal, and via Azure Resource Manager API.)


Using NiCE Log MP as Data Source

Unfortunately, we cannot build our rules 100% from the OpsMgr operations console. The NiCE Log File MP does not provide any event collection rules in the UI; there are only alert rules and performance collection rules to choose from:


This is OK because, as I explained before, rules consist of 3 types of modules. An alert rule generated in this UI has 2 member modules:

  • Data source module (called ‘NiCE.LogFile.Library.Advanced.Filtered.LogFileProvider.DS’) to collect the log entries and process them using the RegEx provided by you.
  • Write Action Module (called ‘System.Health.GenerateAlert’): Generate alerts based on the data passed from the data source module.

What we can do is take the same data source module from such an alert rule (and its configuration), then build our own rule with a condition detection module (called 'System.Event.GenericDataMapper') to map the data into the Windows event log format, and use any of these 3 write action modules to store the data:

  • Write to Ops DB: ‘Microsoft.SystemCenter.CollectEvent’
  • Write to DW DB: ‘Microsoft.SystemCenter.DataWarehouse.PublishEventData’
  • Write to OMS (OpInsights): ‘Microsoft.SystemCenter.CollectCloudGenericEvent’
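Putting these module names together, a hand-authored event collection rule in management pack XML would be shaped roughly like the sketch below. This is an illustrative skeleton only: the rule ID and target are made up, MP reference aliases are omitted, and the NiCE data source and GenericDataMapper configuration elements are abbreviated.

```xml
<!-- Simplified sketch: IDs and targets are hypothetical, configuration omitted. -->
<Rule ID="Demo.Collect.WsyncMgr.Log" Enabled="false" Target="SiteServer">
  <Category>EventCollection</Category>
  <DataSources>
    <!-- The NiCE data source reads the log file and applies the RegEx -->
    <DataSource ID="DS" TypeID="NiCE.LogFile.Library.Advanced.Filtered.LogFileProvider.DS">
      <!-- IntervalSeconds, LogDirectory, FileName, RegEx, etc. go here -->
    </DataSource>
  </DataSources>
  <ConditionDetection ID="Mapper" TypeID="System.Event.GenericDataMapper">
    <!-- Maps the NiCE output into the Windows event log data format -->
  </ConditionDetection>
  <WriteActions>
    <WriteAction ID="WriteToDB" TypeID="Microsoft.SystemCenter.CollectEvent" />
    <WriteAction ID="WriteToDW" TypeID="Microsoft.SystemCenter.DataWarehouse.PublishEventData" />
    <WriteAction ID="WriteToOMS" TypeID="Microsoft.SystemCenter.CollectCloudGenericEvent" />
  </WriteActions>
</Rule>
```

You can include any combination of the 3 write actions, depending on which data repositories you want the events stored in.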

However, to go one step further: since there are so many input parameters to specify for the data source module, and I want to hide the complexity from the users (your System Center administrators), I have created my own data source modules and "wrapped" the NiCE data source module 'NiCE.LogFile.Library.Advanced.Filtered.LogFileProvider.DS' inside them. By doing so, I am able to hardcode some common fields that are the same across all the rules we are going to create (e.g. the regular expression). Because the regular expressions for ConfigMgr client logs and server logs are different, I have created 2 generic data source modules, one for each type of log, that you can use when creating your event collection rules.

When creating your own event collection rules, you only need to provide the following information:

  • IntervalSeconds: how often the NiCE data source should scan the particular log
  • ComputerName: the name of the computer where the logs are located. This could be a property of the target class (or a class in the hosting chain).
  • EventID: the event ID to assign to the processed log entries (as we are formatting them as Windows event log entries)
  • Event Category: a numeric value. Please refer to the MSDN documentation for the possible values: https://msdn.microsoft.com/en-au/library/ee692955.aspx. It is OK to use the value 0 (to ignore).
  • Event Level: a numeric value. Please refer to the MSDN documentation for the possible values: https://msdn.microsoft.com/en-au/library/ee692955.aspx.
  • LogDirectory: the directory where the log file is located (e.g. C:\Windows\CCM\Logs)
  • FileName: the name of the log file (e.g. execmgr.log)


So What am I Offering?

I’m offering 3 management pack files to get you started:

ConfigMgr.Log.Collection.Library (ConfigMgr Logs Collection Library Management Pack)

This sealed management pack provides the 2 data source modules that I’ve just mentioned:

  • ConfigMgr.Log.Collection.Library.ConfigMgr.Client.Log.DS (Display Name: ‘Collect ConfigMgr 2012 Client Logs Data Source’)
  • ConfigMgr.Log.Collection.Library.ConfigMgr.Server.Log.DS (Display Name: ‘Collect ConfigMgr 2012 Server Logs Data Source’)

When you create your own management pack where your collection rules are going to be stored, you will need to reference this MP and use the appropriate data source module.

ConfigMgr.Log.Collection.Dir.Discovery (ConfigMgr Log Collection ConfigMgr Site Server Log Directory Discovery)

This sealed management pack is optional, you do not have to use it.

As I mentioned earlier, you need to specify the log directory when creating a rule. The problem is that for a ConfigMgr server log file, hardcoding a static value is probably not ideal: in a large environment with multiple ConfigMgr sites, the ConfigMgr install directory can differ between site servers. Unfortunately, the Microsoft ConfigMgr 2012 management pack does not define and discover the install folder or log folder as a property of the site server:


To demonstrate how we can overcome this problem, I created this management pack. In it, I have defined a new class called "ConfigMgr 2012 Site Server Extended", based on the existing class defined in the Microsoft ConfigMgr 2012 MP, with an additional defined and discovered property called "Log Folder":


By doing so, we can variablise the “LogDirectory” parameter when creating the rules by passing the value of this property to the rule (I’ll demonstrate later).

Again, as I mentioned earlier, this MP is optional. When creating the rule, you can hardcode the "LogDirectory" parameter using the most common value in your environment, and use overrides to change this parameter for any servers that have different log directories.

ConfigMgr Logs Collection Demo Management Pack (ConfigMgr.Log.Collection.Demo)

In this unsealed demo management pack, I have created 2 event collection rules:

Collect ConfigMgr Site Server Wsyncmgr.Log to OpsMgr Operational DB Data Warehouse DB and OMS rule

This rule targets the "ConfigMgr 2012 Site Server Extended" class defined in the 'ConfigMgr Log Collection ConfigMgr Site Server Log Directory Discovery' MP, and collects Wsyncmgr.log to all 3 destinations (Operational DB, Data Warehouse DB, and OMS).

Collect ConfigMgr Client ContentTransferManager.Log to OpsMgr Data Warehouse and OMS rule

This rule targets the "System Center ConfigMgr 2012 Client" class defined in the ConfigMgr 2012 (R2) Client Management Pack (which I also developed).

This rule collects the ContentTransferManager.log only to Data Warehouse DB and OMS.

Note: I'm targeting this class instead of the ConfigMgr client class defined in the Microsoft ConfigMgr 2012 MP because my MP already defines and discovers the log location. When writing your own rule for ConfigMgr clients, you don't have to target this class, as most clients have their logs in the C:\Windows\CCM\Logs folder (except on ConfigMgr servers).

Note: there are a few other good examples of how to write event collection rules for OMS; you may also find these articles useful:


What Do I get in OMS?

After you've created your collection rules and imported them into your OpsMgr management group, within a few minutes the management packs will have reached the agents, started processing the logs, and sent the data back to OpsMgr. OpsMgr then sends the data to OMS. It takes another few minutes for OMS to process the data before it becomes searchable in OMS.

You will then be able to search the events:

Client Log Example:


Server Log Example:


As you can see, each field identified by the regular expression in the NiCE data source module is stored as a separate parameter in the OMS log entry. You can also perform more complex searches. Please refer to the articles listed below for more details:

By Daniele Muscetta:

Official documentation:

Download MP

You may download all 3 management packs from TY Consulting’s web site: http://www.tyconsulting.com.au/portfolio/configmgr-log-collection-management-pack/

What’s Next?

I understand writing management packs is not a task for everyone; currently, you will need to write your own MP to capture the log files of your choice. I am working on an automated solution: I am getting very close to releasing the OpsMgrExtended PowerShell / SMA module that I've been working on since August last year. This module will provide a way to automate OpsMgr rule creation using PowerShell. I will write a follow-up post after the release of the OpsMgrExtended module to go through how to use PowerShell to create these ConfigMgr log collection rules. So, please stay tuned.

Note: I'd like to warn everyone who's going to implement this solution: please do not leave these rules enabled by default when you've just created them. You need a good understanding of how much data is being sent to OMS, as there is a cost associated with the volume of data sent, as well as an impact on your Internet link. So please make the rules disabled by default, and start with a small group.

Lastly, I'd like to thank NiCE for producing such a good MP and making it free to the community.

Detecting Windows License Activation Status Using ConfigMgr DCM and OpsMgr

Written by Tao Yang

Hello and Happy New Year. You are reading my first post in 2015! This is going to be a quick post, about something I did this week.

Recently, during a ConfigMgr 2012 RAP (Risk and Health Assessment Program) engagement with Microsoft, it was identified that a small number of ConfigMgr Windows client computers did not have their Windows license activated. The recommendation from the Microsoft ConfigMgr PFE running the RAP was to create a Compliance (DCM) baseline to detect whether the Windows license is activated on client computers.

To respond to the recommendation from Microsoft, I quickly created a DCM baseline with 1 Configuration Item (CI). The CI uses a simple PowerShell script to detect the Windows license status.


I configured the CI to only support computers running Windows 7 / Server 2008 R2 and above (as per the minimum supported OS for the SoftwareLicensingProduct WMI class documented on MSDN: http://msdn.microsoft.com/en-us/library/cc534596(v=vs.85).aspx):
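The detection script itself is PowerShell (shown in the screenshot), but the decision it makes can be sketched using the LicenseStatus values documented on MSDN for the SoftwareLicensingProduct WMI class. The Python below is purely illustrative of that logic, not the actual CI script:

```python
# LicenseStatus values documented for the SoftwareLicensingProduct WMI class (MSDN).
LICENSE_STATUS = {
    0: 'Unlicensed',
    1: 'Licensed',
    2: 'OOB Grace',          # out-of-box grace period
    3: 'OOT Grace',          # out-of-tolerance grace period
    4: 'Non-Genuine Grace',
    5: 'Notification',
    6: 'Extended Grace',
}

def is_activated(license_status: int) -> bool:
    """Only status 1 (Licensed) counts as an activated Windows installation."""
    return license_status == 1

# The DCM compliance rule effectively requires is_activated(...) to be True.
print(is_activated(1))         # True
print(LICENSE_STATUS.get(5))   # Notification
```

Any status other than 1 (e.g. a machine sitting in the Notification state) would make the client non-compliant against the baseline.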


The CI is configured with 1 compliance rule:


Next, I created a compliance baseline and assigned this CI to it. I then deployed the baseline to an appropriate collection. After a few hours, the clients started receiving the baseline and completed their first evaluation:


Additionally, since I have implemented and configured the latest ConfigMgr 2012 Client MP, the DCM baseline assignments on SCOM-managed computers are also discovered in SCOM, and any non-compliant status is alerted in SCOM as well.


That's all for today. It is just another example of how to use ConfigMgr DCM, OpsMgr and the ConfigMgr 2012 Client MP to quickly implement a monitoring requirement.

Use of ConfigMgr 2012 Client MP: Real Life Examples

Written by Tao Yang

Last week, while I was assisting with a few production issues in a ConfigMgr 2012 environment, I had to quickly implement some monitoring for some ConfigMgr 2012 site systems. By utilising the most recent release of the ConfigMgr 2012 Client management pack and a few DCM baselines, I managed to achieve the goals in a short period of time. The purpose of this post is to share my experience, and hopefully you can pick up a few tips and tricks from it.


We are in the process of rebuilding a few hundred sites from Windows Server 2008 R2 / System Center 2007 R2 to Windows Server 2012 R2 / System Center 2012 R2. Last week, the support team identified a few issues during the conversion process, and I was asked to assist. In this post, I will go through 2 particular issues, and also how I set up monitoring so the support team and management have a clearer picture of the real impact.

Issue 1: WinRM connectivity issues caused by duplicate computer accounts in AD.

The conversion process involves rebuilding some physical and virtual servers from Windows Server 2008 R2 to Windows Server 2012 R2. As part of the rebuild, the servers are also moved from Domain A to Domain B (in the same forest) while the computer name remains the same. The support team found they could not establish WinRM connections to some servers after the rebuild; they got some Kerberos-related errors. I had a quick look and found the issue was caused by the old computer account not being removed from Domain A, so WinRM using just the NetBIOS name would fail, but using the FQDN was OK. Although the entire conversion process is automated using Service Manager and Orchestrator, and one of the runbooks has an activity that deletes old computer accounts, somehow this did not happen for every server. Moving forward, the support team needs to be notified via SCOM when duplicate computer accounts exist for any computer.

Issue 2: WDS service on ConfigMgr 2012 Distribution Points been mysteriously uninstalled

It took us and Microsoft Premier support a few days to identify the cause; I won't go into the details. But we needed to be able to identify, from the Distribution Point itself, whether it is still a PXE-enabled DP.

To achieve both goals, I created 2 DCM baselines and targeted them to appropriate collections in ConfigMgr.

Duplicate AD Computer Account Baseline

This baseline contains only 1 Configuration Item (CI). The CI uses a script to detect whether the computer account exists in other domains. Here's the script (note: the domain names need to be modified in the first few lines):

In order for the CI to be compliant, the return value from the script needs to be “False” (no duplicate accounts found).



Distribution Point Configuration Baseline

This baseline also contains only 1 CI. Since it checks an application setting, I used a very simple script to detect the existence of the ConfigMgr DP:


The compliant condition for the CI is set to:

  • Reg value "HKLM\SOFTWARE\Microsoft\SMS\DP\IsPXE" must exist and be set to 1
  • Reg value "HKLM\SOFTWARE\Microsoft\SMS\DP\PXEInstalled" must exist and be set to 1



Alerting through OpsMgr

Once I had created and deployed these 2 baselines to appropriate collections, everything was set up in ConfigMgr. I could then take the ConfigMgr admin hat off.

So what do I need to configure in OpsMgr for the alerts to come through? The answer is: nothing! Since the ConfigMgr 2012 Client MP has already been implemented in the OpsMgr management group, I don't need to put on the OpsMgr admin hat because there's nothing else to do. Within a few hours, the newly created baselines will be discovered in OpsMgr and monitoring will start:




Utilising the DCM baseline monitoring capability in the ConfigMgr 2012 Client MP can greatly simplify the process of monitoring configuration items on targeted endpoints. As shown in these 2 examples, there is no need to involve OpsMgr administrators. Additionally, it is much simpler to create collections for deploying DCM baselines than to define target classes and discoveries in OpsMgr (in order to target the monitors / rules). I encourage you (both ConfigMgr admins and OpsMgr admins) to give it a try, and hopefully you will find it beneficial.

Updated ConfigMgr 2012 (R2) Client Management Pack Version

Written by Tao Yang


It's only been 2 weeks since I released the last update of this MP. Soon after the release, David Allen, a fellow System Center CDM MVP, contacted me and asked me to test his SCCM Compliance MP, and possibly combine it with my ConfigMgr 2012 Client MP.

In the ConfigMgr 2012 Client MP, the OVERALL DCM baseline compliance status is monitored by the DCM Agent class, whereas in David's SCCM Compliance MP, each DCM baseline is discovered as a separate entity and monitored separately. Because of the utilisation of the Cook Down feature, compared with the approach in the ConfigMgr 2012 Client MP, this approach adds no additional overhead to the OpsMgr agents.

David's MP also includes a RunAs profile that allows users to configure monitoring for OpsMgr agents using a low-privileged default action account.

I think both of these features are pretty cool, so I have taken David's MP, re-modelled the health class relationships, re-written the scripts from PowerShell to VBScript, and combined David's work into the ConfigMgr 2012 Client MP.

If you (the OpsMgr administrator) are concerned about the number of additional objects that would be discovered by this release (every DCM baseline on every ConfigMgr 2012 client monitored by OpsMgr), rest assured: the DCM baseline discovery is disabled by default. I have taken a similar approach to configuring Business Critical Desktop monitoring; there is an additional unsealed MP in this release that allows you to cherry-pick which endpoints to monitor in this regard.

What's New in This Version

Other than combining David's SCCM Compliance MP, there are also a few other updates included in this release. Here's the full "What's New" list:

Bug Fix: ConfigMgr 2012 Client Missing Client Health Evaluation (CCMEval) Execution Cycles Monitor alert parameter incorrect

Added a privileged RunAs Profile for all applicable workflows

Additional rule: ConfigMgr 2012 Client Missing Cache Content Removal Rule

Enhanced Compliance Monitoring

  • Additional class: DCM Baseline (hosted by DCM agent)
  • Additional Unit monitor: ConfigMgr 2012 Client DCM Baseline Last Compliance Status Monitor
  • Additional aggregate and dependency monitors to rollup DCM Baseline health to DCM Agent
  • Additional State View for DCM Baseline
  • Additional instance groups:
    • All DCM agents
    • All DCM agents on server computers
    • All DCM agents on client computers
    • All Business Critical ConfigMgr 2012 Client DCM Agents
  • Additional unsealed MP: ConfigMgr 2012 Client Enhanced Compliance Monitoring
    • Override to enable DCM baseline discovery for the All DCM agents on server computers group
    • Override to disable the old DCM baseline monitor for the All DCM agents on server computers group
    • Discovery for All Business Critical ConfigMgr 2012 Client DCM Agents (users will have to populate this group, the same way as configuring business critical desktop monitoring)
    • Override to enable DCM baseline discovery for the All Business Critical ConfigMgr 2012 Client DCM Agents group
    • Override to disable the old DCM baseline monitor for the All Business Critical ConfigMgr 2012 Client DCM Agents group
  • Additional Agent Task: Evaluate DCM Baseline (targeting the DCM Baseline class)

Additional icons

  • Software Distribution Agent
  • Software Update Agent
  • Software Inventory Agent
  • Hardware Inventory Agent
  • DCM Agent
  • DCM Baseline


Enhanced Compliance Monitoring

This version introduces a new feature that monitors assigned DCM compliance baselines at a more granular level. Prior to this release, there was a single unit monitor targeting the DCM agent class that monitored the overall baseline compliance status as a whole. From this version on, each individual DCM baseline can be discovered and monitored separately.

By default, the discovery for DCM Baselines is disabled. It needs to be enabled manually via overrides before DCM baselines can be monitored individually.


There are several targets that can be used when overriding the DCM Baseline discovery:


  • Enable for all DCM agents: target the class “ConfigMgr 2012 Client Desired Configuration Management Agent”
  • Enable for server computers only: target the group “All ConfigMgr 2012 Client DCM Agents on Server OS”
  • Enable for client computers only: target the group “All ConfigMgr 2012 Client DCM Agents on Client OS”
  • Enable for a subset of computers: manually create an instance group, populate its membership based on the “ConfigMgr 2012 Client Desired Configuration Management Agent” class, and target that group

Note: Once the DCM Baseline discovery is enabled, please also disable the “ConfigMgr 2012 Client DCM Baselines Compliance Monitor” for the same targets, as it has become redundant.

Once the DCM baselines are discovered, their compliance status is monitored individually:



Additionally, the DCM Baselines have an agent task called “Evaluate DCM Baseline”, which can be used to manually evaluate the baseline. This agent task performs the same action as the “Evaluate” button in the ConfigMgr 2012 client:


ConfigMgr 2012 Client Enhanced Compliance Monitoring Management Pack

An additional unsealed management pack named “ConfigMgr 2012 Client Enhanced Compliance Monitoring” is also introduced. This management pack includes the following:

  • An override to enable DCM baseline discovery for “All ConfigMgr 2012 Client DCM Agents on Server OS” group.
  • An override to disable the legacy ConfigMgr 2012 Client DCM Baselines Compliance Monitor for “All ConfigMgr 2012 Client DCM Agents on Server OS” group.
  • A blank group discovery for the “All Business Critical ConfigMgr 2012 Client DCM Agents” group
  • An override to enable DCM baseline discovery for “All Business Critical ConfigMgr 2012 Client DCM Agents” group.
  • An override to disable the legacy ConfigMgr 2012 Client DCM Baselines Compliance Monitor for “All Business Critical ConfigMgr 2012 Client DCM Agents” group.


In summary, this management pack enables DCM baseline discovery for all ConfigMgr 2012 clients on server computers and switches from the existing “overall” compliance baselines status monitor to the new, more granular compliance baseline status monitor that targets individual baselines. This management pack also enables users to manually populate the new “All Business Critical ConfigMgr 2012 Client DCM Agents” group. Members of this group are monitored the same way as the server computers mentioned previously.

Note: Please only use this management pack if you want to enable enhanced compliance monitoring on all server computers; otherwise, manually configure the groups and overrides as described above.


New RunAs Profile for Low-Privilege Environments

Since almost all of the workflows in the ConfigMgr 2012 Client management packs require local administrator rights to access various WMI namespaces and registry keys, they will not work when the OpsMgr agent RunAs account does not have local administrator privileges.

Separate RunAs accounts can be created and assigned to the “ConfigMgr 2012 Client Local Administrator RunAs Account” profile.

RunAs Account Example:


RunAs Profile:


For more information about OpsMgr RunAs accounts and profiles, please refer to: http://technet.microsoft.com/en-us/library/hh212714.aspx

Note: When assigning a RunAs Account to the “ConfigMgr 2012 Client Local Administrator RunAs Account” profile, you will receive an error as below:


Please refer to the MP documentation section “14.3 Error Received when Adding RunAs Account to the RunAs Profile” for instruction on fixing this error.

New Rule: Missing Cache Content Removal Rule

This rule runs every 4 hours by default and checks whether any registered ConfigMgr 2012 client cache content has been deleted from the file system. When obsolete cache content is detected, this rule removes the cache content entry from the ConfigMgr 2012 client via WMI and generates an informational alert with the details of the missing cache content:
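The rule’s logic can be approximated in PowerShell. This is a sketch of mine, not the MP’s actual script: the `root\ccm\SoftMgmtAgent` namespace and the `CacheInfoEx` class are where the ConfigMgr client tracks its cache entries. It requires local admin rights on a machine with the ConfigMgr client installed.

```powershell
# Find ConfigMgr cache entries whose content folder no longer exists on
# disk, report them, and remove the orphaned WMI records.
$ns = 'root\ccm\SoftMgmtAgent'
$cacheEntries = Get-WmiObject -Namespace $ns -Class CacheInfoEx

foreach ($entry in $cacheEntries) {
    if (-not (Test-Path -Path $entry.Location)) {
        Write-Output ("Missing cache content: {0} ({1})" -f $entry.ContentId, $entry.Location)
        # Delete the now-orphaned cache record from the client's WMI store
        $entry.Delete()
    }
}
```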


Additional Icons:

Prior to this release, only the top-level ConfigMgr 2012 Client class had its own dedicated icon. I have spent a lot of time looking for icons for all the other classes, and I managed to produce an icon for each monitoring class in this release:



Note: I only managed to find high-res icons for the Software Distribution Agent and the Software Update Agent (extracted from various DLLs and EXEs). I couldn’t find a way to extract icons from AdminUI.UIResources.dll, where all the icons used by SCCM are stored. So for the other icons, I had to use SnagIt to take screenshots of them. You may notice the quality is not that great, but after a few days’ effort trying to find these icons, this is the best I could do. If you have a copy of these icons (resolution higher than 80×80), or know a way to extract them from AdminUI.UIResources.dll, please contact me and I’ll update them in the next release.


A BIG thank you to David Allen for his work on the SCCM Compliance MP, and also for helping me test this release!

You can download this version of the ConfigMgr 2012 Client MP HERE.

Until next time, happy SCOMMING!

ConfigMgr 2012 (R2) Client Management Pack Updated to Version

Written by Tao Yang

4th October, 2014: This MP has been updated. Please download the latest version from this page: http://blog.tyang.org/2014/10/04/updated-configmgr-2012-r2-client-management-pack-version-1-2-0-0/.

OK, after a few weeks of hard work, the updated version of the ConfigMgr 2012 (R2) Client MP is finally here.

The big focus in this release is reducing the noise this MP generates. In the end, besides the new and updated components introduced in this release, I also had to update every single script used by the monitors and rules.

The changes since the previous version (v1.0.1.0) are listed below:

Bug Fixes:

  • Software Update agent health was not rolled up (the dependency monitors were missing in the previous release).
  • SyncTime in some data source modules was not correctly implemented.
  • Typo in the Pending Software Update monitor alert description.
  • The “All ConfigMgr 2012 Client Computers” group population was incorrect: it included all Windows computers, not just the ones with the ConfigMgr 2012 client installed.
  • Many “Operations Manager failed to start a process” warning alerts were generated against various scripts used in this MP. The issue is caused by the OpsMgr agent executing the workflows while the SMS Agent Host service is not running. This typically happened right after computer startup or reboot because the SMS Agent Host service is set to Automatic (Delayed Start). All the scripts that query the root\ccm WMI namespace have been re-written to wait up to 3 minutes for the SMS Agent Host to start (if it’s not already started). Hopefully this will reduce the number of these warning alerts. The updated scripts will also try to catch this condition so the alert indicates the actual issue:
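The wait logic described above (the MP scripts themselves are VBScript; this PowerShell sketch of mine shows the same pattern) polls the service for up to the 3 minutes mentioned before touching the root\ccm namespace:

```powershell
# Wait up to 3 minutes for the SMS Agent Host (CcmExec) service to start
# before querying the root\ccm WMI namespace.
$timeoutSeconds = 180
$elapsed = 0
$service = Get-Service -Name CcmExec -ErrorAction SilentlyContinue
while ($service -and $service.Status -ne 'Running' -and $elapsed -lt $timeoutSeconds) {
    Start-Sleep -Seconds 10
    $elapsed += 10
    $service.Refresh()   # re-read the current service status
}
if (-not $service -or $service.Status -ne 'Running') {
    # Surface the real cause instead of a generic "failed to start a process"
    Write-Error 'SMS Agent Host service is not running; skipping root\ccm WMI queries.'
}
```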



Additional Items:

  • A diagnostic task and a recovery task for the CcmExec service monitor. The diagnostic task checks whether the system uptime is longer than 5 minutes (overrideable); if it is, the recovery task starts the SMS Agent Host service. Both the service monitor and the recovery task are disabled by default. If you decide to use them, they help reduce the number of “failed to start a process” warning alerts caused by a stopped SMS Agent Host service.
  • A monitor that detects if the SCCM client has been placed into provisioning mode for a long period of time (consecutive sample monitor) (http://thoughtsonopsmgr.blogspot.com.au/2014/06/sccm-w7-osd-task-sequence-with-install.html)
  • The Missing CCMEval consecutive sample unit monitor has been disabled and replaced by a new monitor. The new monitor is no longer a consecutive sample monitor; it simply detects whether the CCMEval job has missed 5 consecutive cycles (the number of missed cycles is overrideable). This new monitor is designed to simplify the detection process and to address the false alerts the previous consecutive monitor generated.
  • A monitor for the CCMCache size, which alerts when the available free space in the CCMCache is lower than 20%. Some ConfigMgr client computers may be hosted on expensive storage devices (e.g. 90% of my lab machines now run on SSD), so I think it is necessary to monitor ccmcache usage. This monitor indicates how much space the ccmcache folder has consumed.
  • Agent Task: Delete CCMCache content
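The CCMCache free-space check above can be sketched as follows. This is my own approximation, not the MP’s script: `CacheConfig` in `root\ccm\SoftMgmtAgent` holds the client’s configured cache size (in MB) and location, and the 20% threshold comes from the monitor’s description.

```powershell
# Compare actual CCMCache folder usage with the configured cache size and
# flag when free space drops below 20%.
$cfg = Get-WmiObject -Namespace 'root\ccm\SoftMgmtAgent' -Class CacheConfig
$cacheSizeMB = [double]$cfg.Size   # configured cache size in MB
$usedBytes = (Get-ChildItem -Path $cfg.Location -Recurse -ErrorAction SilentlyContinue |
    Measure-Object -Property Length -Sum).Sum
$usedMB = $usedBytes / 1MB
$freePercent = (($cacheSizeMB - $usedMB) / $cacheSizeMB) * 100
if ($freePercent -lt 20) {
    Write-Warning ("CCMCache free space is {0:N1}% (used {1:N0} MB of {2:N0} MB)" -f `
        $freePercent, $usedMB, $cacheSizeMB)
}
```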


Updated Items:

  • The Pending Reboot monitor has been updated to allow users to disable any of the 4 areas it checks for a pending reboot (Pending File Rename Operation is disabled by default because it generates too many alerts):
    • Component Based Servicing
    • Windows Software Update Agent
    • SCCM Client
    • Pending File Rename Operation
  • The Missing CCMEval monitor is disabled and superseded.
  • All consecutive sample monitors have been updated. The System.ConsolidatorCondition condition detection module has been replaced by the <MatchCount> configuration in the System.ExpressionFilter module (new in OpsMgr 2012) to consolidate consecutive samples. This simplifies the configuration and tuning of these consecutive sample monitors.
  • Various scripts now log additional events in the Operations Manager event log to help with troubleshooting. Please refer to Appendix A of the MP documentation for the details of these events.
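The four pending-reboot areas can each be checked independently. The sketch below is mine (assembled from the standard registry locations and the ConfigMgr client SDK method, not taken from the MP’s scripts):

```powershell
# Check the four pending-reboot sources individually.
$pending = [ordered]@{}

# 1. Component Based Servicing
$pending['CBS'] = Test-Path 'HKLM:\SOFTWARE\Microsoft\Windows\CurrentVersion\Component Based Servicing\RebootPending'

# 2. Windows Software Update Agent
$pending['WUA'] = Test-Path 'HKLM:\SOFTWARE\Microsoft\Windows\CurrentVersion\WindowsUpdate\Auto Update\RebootRequired'

# 3. SCCM client, via the client's own WMI method
try {
    $ccm = Invoke-WmiMethod -Namespace 'root\ccm\ClientSDK' -Class CCM_ClientUtilities -Name DetermineIfRebootPending
    $pending['SCCM'] = [bool]$ccm.RebootPending
} catch { $pending['SCCM'] = $false }

# 4. Pending file rename operations (the check disabled by default in the MP)
$pfro = Get-ItemProperty 'HKLM:\SYSTEM\CurrentControlSet\Control\Session Manager' `
    -Name PendingFileRenameOperations -ErrorAction SilentlyContinue
$pending['PendingFileRename'] = [bool]$pfro

$pending
```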


Upgrade Tip

This version supports in-place upgrade from the previous version. However, since additional input parameters have been introduced to the scripts used by the monitors and rules, you may see a large number of “Operations Manager failed to start a process” warning alerts right after the updated MPs have been imported and distributed to the OpsMgr agents. To work around this issue, I strongly recommend placing the “All ConfigMgr 2012 Clients” group into maintenance mode for 1 hour before importing the updated MPs. To do so, simply go to the “Discovered Inventory” view, change the target type to “All ConfigMgr 2012 Clients”, and place the selected group into maintenance mode.
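If you prefer PowerShell over the console, something like the following should work. This is a sketch run on a management server; it assumes the OperationsManager module is available and that the group display name matches your environment.

```powershell
# Put the "All ConfigMgr 2012 Clients" group into maintenance mode for
# 1 hour before importing the updated MPs.
Import-Module OperationsManager
$group = Get-SCOMGroup -DisplayName 'All ConfigMgr 2012 Clients'
Start-SCOMMaintenanceMode -Instance $group `
    -EndTime (Get-Date).AddHours(1) `
    -Reason PlannedOther `
    -Comment 'ConfigMgr 2012 Client MP upgrade'
```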


Special Thanks

I’d like to thank all the people who have provided feedback since the last release and spent time helping test this version. I’d especially like to thank Stanislav Zhelyazkov for his valuable feedback and testing effort. I’d also like to thank Marnix Wolf for his blog post, which helped me build the Provisioning Mode consecutive sample monitor in this MP.



Download ConfigMgr 2012 (R2) Client Management Pack

How to Create a PowerShell Console Profile Baseline for the Entire Environment

Written by Tao Yang


Often when I’m working in my lab, I get frustrated because the code in my PowerShell profiles varies between computers and user accounts. The profile is also different between the normal PowerShell command console and PowerShell ISE. I wanted to be able to create a baseline for the PowerShell profiles across all computers and all users, no matter which PowerShell host is being used (normal command console vs. PowerShell ISE).

For example, I would like to achieve the following when I start any 64-bit PowerShell console on any computer in my lab under any user account:

This is what I want the consoles to look like:



Although I could manually copy the code into the profiles for each of my user accounts and enable roaming profiles for these users, I don’t want to take this approach because it’s too manual and I am not a big fan of roaming profiles.


My approach is incredibly simple: all I had to do was create a simple script and deploy it as a normal software package using ConfigMgr. I’ll now go through the steps.

All Users All Hosts Profile

Firstly, there is actually not one (1), but six (6) different PowerShell profiles (I have to admit, I didn’t know this until now). This article from the Scripting Guy explains it very well. Based on it, I have identified that I need to work on the All Users All Hosts profile, because I want the code to run regardless of which user account I am using, and no matter whether I’m in the normal command console or PowerShell ISE.
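You can see the host-wide and user-wide profile paths on any machine: `$PROFILE` is a string (Current User, Current Host), but it also carries note properties for the other combinations, and the Current Host paths differ between the console and ISE (which is how you get to six):

```powershell
# List the four profile paths for the current host. Run this in the
# console and in ISE to see the CurrentHost paths differ.
$PROFILE | Select-Object AllUsersAllHosts, AllUsersCurrentHost,
    CurrentUserAllHosts, CurrentUserCurrentHost | Format-List
```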


As I mentioned previously, because I want to use the PSConsole module I developed earlier, I need to make sure this module is deployed to all computers in my lab. To do so, I created a simple MSI to copy the module to the PowerShell modules folder and deployed it to all computers using ConfigMgr. I won’t go through how I created the MSI here.

Code Inside the All Users All Hosts profile

The All Users All Hosts profile is located at $PsHome\profile.ps1


Here’s the code I’ve added to this profile:

if (Get-Module -Name PSConsole -ListAvailable) {
    Import-Module PSConsole
}

$host.UI.RawUI.BackgroundColor = "Black"
$host.UI.RawUI.ForegroundColor = "Green"
$host.UI.RawUI.WindowTitle = $host.UI.RawUI.WindowTitle + "  - Tao Yang Test Lab"
If ($psISE) {
    $psISE.Options.ConsolePaneBackgroundColor = "Black"
} else {
    Resize-Console -max -ErrorAction SilentlyContinue
    Set-Location C:\
}
Note: The $psISE variable only exists in the PowerShell ISE environment, so I use it to identify which host I am currently in, with an If…Else statement to control what gets executed in PowerShell ISE versus the normal PowerShell console.

Script To create All Users All Hosts Profile

Next, I created a PowerShell script that creates the All Users All Hosts profile:

# Script Name:        CreateAllUsersAllHostsProfile.ps1
# DATE:               03/08/2014
# Version:            1.0
# COMMENT:            - Script to create All users All hosts PS profile

$ProfilePath = $profile.AllUsersAllHosts

#Create the profile if it doesn't exist
If (!(Test-Path $ProfilePath)) {
    New-Item -Path $ProfilePath -ItemType file -Force
}

#content of the profile script
$ProfileContent = @"
if (Get-Module -Name PSConsole -ListAvailable) {
    Import-Module PSConsole
}

`$host.UI.RawUI.BackgroundColor = "Black"
`$host.UI.RawUI.ForegroundColor = "Green"
`$host.UI.RawUI.WindowTitle = `$host.UI.RawUI.WindowTitle + "  - Tao Yang Test Lab"
If (`$psISE) {
    `$psISE.Options.ConsolePaneBackgroundColor = "Black"
} else {
    Resize-Console -max -ErrorAction SilentlyContinue
    Set-Location C:\
}
"@

#write contents to the profile
if (Test-Path $ProfilePath) {
    Set-Content -Path $ProfilePath -Value $ProfileContent -Force
} else {
    Write-Error "All Users All Hosts PS Profile does not exist and this script failed to create it."
}
As you can see, I have stored the content in a multi-line string (here-string) variable. The only thing to pay attention to is that I have to add the PowerShell escape character, the backtick (`), in front of each variable (dollar sign $) so it is not expanded when the profile content is generated.
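To illustrate the backtick escaping on its own (a standalone snippet I’ve added, not part of the deployed script): in a double-quoted here-string, a backtick before the dollar sign stops variable expansion, so the variable name survives literally in the generated text.

```powershell
# $PSHome expands now; `$host is kept literal for the generated file.
$example = @"
Expanded now:  $PSHome
Kept literal:  `$host.UI.RawUI.BackgroundColor
"@
$example
```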

This script overwrites the profile if it already exists, ensuring the profile stays consistent across all computers.

Deploy the Profile Creation Script Using ConfigMgr

In SCCM, I have created a Package with one program for this script:


Command Line: %windir%\Sysnative\WindowsPowerShell\v1.0\Powershell.exe .\CreateAllUsersAllHostsProfile.ps1

Note: I’m using ConfigMgr 2012 R2 in my lab. Even on a 64-bit OS, the ConfigMgr client executes this command in a 32-bit environment, so I have to use “Sysnative” instead of “System32” to overcome file system redirection on a 64-bit OS.

I created a recurring deployment for this program:


I’ve set it to run once a day at 8:00am and always rerun.


This is an example of how we can standardise the baseline of PowerShell consoles within the environment. Individual users will still be able to add user-specific items in the other profiles.

For example, on one of my computers, I have added one line to the default Current User Current Host profile:


In the All Users All Hosts profile, I have set the location to C:\, but in the Current User Current Host profile, I’ve set the location to “C:\Scripts\Backup Script”. The result is that when I start the console, the location is set to “C:\Scripts\Backup Script”. Obviously the Current User Current Host profile was executed after the All Users All Hosts profile. Therefore we can use the All Users All Hosts profile as a baseline and the Current User Current Host profile as a delta.

Location, Location, Location. Part 3

Written by Tao Yang

This is the 3rd and final part of this 3-part series. In this post, I will demonstrate how I track the physical location history of Windows 8 location-aware computers (tablets and laptops), as well as how to visually present the collected data on an OpsMgr 2012 dashboard.

I often see people post on Facebook or Twitter that they have checked in at <some place> on Foursquare. I haven’t used Foursquare before (and don’t intend to in the future), so I’m not sure what its purpose is, but please think of this as Foursquare in OpsMgr for your tablets. I will now go through the management pack elements I created to achieve this goal.

Event Collection Rule: Collect Location Aware Device Coordinate Rule

Firstly, I need to collect the location data periodically. Therefore, I created an event collection rule targeting the “Location Aware Windows Client Computer” class I created (explained in Part 2 of this series). This rule uses the same data source module as the “Location Aware Device Missing In Action Monitor”, which I also explained in Part 2. I have configured this rule to pass the exact same data to the data source module as the monitor does, so we can utilise Cook Down (basically, the data source executes only once and feeds the output data to both the rule and the monitor).



Note: Although this rule does not require the home latitude and longitude, and these 2 inputs are optional for the data source module, I still pass these 2 values in, because in order to use Cook Down, both workflows need to pass the exact same data to the data source module. Otherwise, the same script would run twice in each scheduling cycle.
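Conceptually, Cook Down means both the rule and the monitor reference the same data source module type with identical configuration values. A fragment like the following illustrates the idea (the module type ID and parameter names here are illustrative placeholders of mine, not the MP’s actual IDs):

```xml
<!-- Illustrative only: because the rule and the monitor both pass
     identical configuration to the same data source module type,
     OpsMgr runs the underlying script once per interval and feeds
     both workflows from that single execution. -->
<DataSource ID="DS" TypeID="LocationMonitoring.GetLocationData">
  <IntervalSeconds>900</IntervalSeconds>
  <HomeLatitude>$Config/HomeLatitude$</HomeLatitude>
  <HomeLongitude>$Config/HomeLongitude$</HomeLongitude>
</DataSource>
```

If either workflow passed different values for any element, Cook Down would break and the script would run once per workflow.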

This rule maps the data collected from the data source module to event data and stores it in both the Ops DB and the DW DB. I’ve created an event view in the management pack, where you can see the collected events:


Location History Dashboard

Now that the data has been captured and stored in the OpsMgr databases as event data, we can consume it in a dashboard:


As shown above, there are 3 widgets in this Location History dashboard:

  • Top Left: State Widget for Location Aware Windows Client Computer class.
  • Bottom Left: Using PowerShell Grid widget to display the last 50 known locations of the selected device from the state widget.
  • Right: Using PowerShell Web Browser widget to display the selected historical location from bottom left PowerShell Grid Widget.

The last 50 known locations for the selected device are listed in the bottom left section. Users can click on the first column (Number) to sort it based on the time stamp. When a previous location is selected, that location gets pinned on the map, so we know exactly where the device was at that point in time. From now on, I need to make sure my wife doesn’t have access to OpsMgr in my lab so she can’t track me down.

Note: the location shown in above screenshot is my office. I took my Surface to work, powered it on and connected to a 4G device, it automatically connected to my lab network using DirectAccess.

Surface in car

Since this event was collected over 2 days ago, for demonstration purposes I had to modify the PowerShell grid widget to list a lot more than 50 previous locations.

The script below is what’s used in the bottom left PowerShell Grid widget:


$i = 1
foreach ($globalSelectedItem in $globalSelectedItems) {
    $MonitoringObjectID = $globalSelectedItem["Id"]
    $MG = Get-SCOMManagementGroup
    $globalSelectedItemInstance = Get-SCOMClassInstance -Id $MonitoringObjectID
    $Computername = $globalSelectedItemInstance.DisplayName
    $strInstanceCriteria = "FullName='Microsoft.Windows.Computer:$Computername'"
    $InstanceCriteria = New-Object Microsoft.EnterpriseManagement.Monitoring.MonitoringObjectGenericCriteria($strInstanceCriteria)
    $Instance = $MG.GetMonitoringObjects($InstanceCriteria)[0]
    $Events = Get-SCOMEvent -Instance $Instance -EventId 10001 -EventSource "LocationMonitoring" | Where-Object {$_.Parameters[1] -eq 4} | Sort-Object TimeAdded -Descending | Select-Object -First 50
    foreach ($Event in $Events) {
        $EventID = $Event.Id.ToString()
        $LocalTime = $Event.Parameters[0]
        $LocationStatus = $Event.Parameters[1]
        $Latitude = $Event.Parameters[2]
        $Longitude = $Event.Parameters[3]
        $Altitude = $Event.Parameters[4]
        $ErrorRadius = $Event.Parameters[5].TrimEnd(".")

        $dataObject = $ScriptContext.CreateInstance("xsd://foo!bar/baz")
        $dataObject["ErrorRadius (Metres)"]=$ErrorRadius
        $ScriptContext.ReturnCollection.AddInstance($dataObject)
    }
}

And here’s the script for the PowerShell Web Browser Widget:


$dataObject = $ScriptContext.CreateInstance("xsd://Microsoft.SystemCenter.Visualization.Component.Library!Microsoft.SystemCenter.Visualization.Component.Library.WebBrowser.Schema/Request")
$dataObject["BaseUrl"]="http://maps.google.com/maps"
$parameterCollection = $ScriptContext.CreateCollection("xsd://Microsoft.SystemCenter.Visualization.Component.Library!Microsoft.SystemCenter.Visualization.Component.Library.WebBrowser.Schema/UrlParameter[]")
foreach ($globalSelectedItem in $globalSelectedItems) {
    $EventID = $globalSelectedItem["Id"]
    $Event = Get-SCOMEvent -Id $EventID
    If ($Event) {
        $bIsEvent = $true
        $Latitude = $Event.Parameters[2]
        $Longitude = $Event.Parameters[3]

        $parameter = $ScriptContext.CreateInstance("xsd://Microsoft.SystemCenter.Visualization.Component.Library!Microsoft.SystemCenter.Visualization.Component.Library.WebBrowser.Schema/UrlParameter")
        $parameter["Name"] = "q"
        $parameter["Value"] = "loc:" + $Latitude + "+" + $Longitude
        $parameterCollection.Add($parameter)
    } else {
        $bIsEvent = $false
    }
}
If ($bIsEvent) {
    $dataObject["Parameters"]= $parameterCollection
    $ScriptContext.ReturnCollection.AddInstance($dataObject)
}

This concludes the 3rd and final part of the series. I know it is only a proof of concept, and I’m not sure how practical it would be to implement in a corporate environment. For example, since most current Windows tablets don’t have built-in GPS receivers, I haven’t been able to test how well the Windows Location Provider calculates locations when a device is connected to a corporate Wi-Fi.

I have also noticed what seems to be a known issue with the Windows Location Provider COM object LocationDisp.LatLongReportFactory: it doesn’t always return a valid location report. To work around the issue, I had to code all the scripts to retry and wait between attempts. I managed to get the script to work on all my devices; however, you may need to tweak the scripts if you don’t always get valid location reports.
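The retry pattern looks roughly like this. This is a simplified sketch of mine, not the MP’s actual script; the retry count and wait time are illustrative, and status 4 corresponds to a running (valid) location report, matching the event data filter used in the dashboard scripts.

```powershell
# Poll LocationDisp.LatLongReportFactory until it returns a valid report,
# waiting between attempts to work around intermittent empty reports.
$factory = New-Object -ComObject LocationDisp.LatLongReportFactory
$report = $null
for ($attempt = 1; $attempt -le 5 -and -not $report; $attempt++) {
    if ($factory.Status -eq 4 -and $factory.LatLongReport.Latitude) {
        $report = $factory.LatLongReport
    } else {
        Start-Sleep -Seconds 30   # wait before the next attempt
    }
}
if ($report) {
    '{0}, {1} (+/- {2}m)' -f $report.Latitude, $report.Longitude, $report.ErrorRadius
} else {
    Write-Warning 'No valid location report after 5 attempts.'
}
```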


Other than the VBScript I mentioned in Part 2, I was lucky enough to find this PowerShell script, which I used as the starting point for all my scripts.

Also, when I was trying to set up DirectAccess to get my lab ready for this experiment, I got a lot of help from Enterprise Security MVP Richard Hicks’ blog: http://directaccess.richardhicks.com. So thanks to Richard.


You can download the actual monitoring MP and dashboard MP, as well as all the scripts I used in the MP and dashboards HERE.

Note: For the monitoring MP (Location.Aware.Devices.Monitoring), I’ve also included the unsealed version in the zip file for your convenience (so you don’t have to unseal it if you want to look inside). Please do not import the unsealed version into your management group: the dashboard MP references this MP, so the sealed version must be used.

Lastly, as always, I’d like to hear from the community. Please feel free to share your thoughts with me by leaving a comment on this post or contacting me via email. Until next time, happy SCOMMING!

Use of Disable Operations Manager alerts option in ConfigMgr

Written by Tao Yang

In System Center Configuration Manager, there is an option “Disable Operations Manager alerts while this program runs” in the program within a package:


The same options also exist in the deployments of ConfigMgr 2012 applications and software update groups:

Application Deployment:


Software Update Groups Deployment:


Most seasoned System Center specialists already know that these tick boxes do not make the computers enter maintenance mode in OpsMgr; they suppress alerts by pausing the OpsMgr HealthService. As far as I know, there is no way to initiate maintenance mode from an agent. Maintenance mode can only be started from the management server side (via the consoles, or via any scripts / runbooks / applications using the SDK).
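For example, from a management server (or any machine with the Operations console and PowerShell module installed), maintenance mode can be scheduled like this. The server and agent names below are placeholders, and this is just one way to do it:

```powershell
# Schedule 30 minutes of maintenance mode for one monitored computer,
# initiated from the management server side via the SDK-based module.
Import-Module OperationsManager
New-SCOMManagementGroupConnection -ComputerName 'OpsMgrMS01'
$instance = Get-SCOMClassInstance -DisplayName 'SERVER01.contoso.com'
Start-SCOMMaintenanceMode -Instance $instance `
    -EndTime (Get-Date).AddMinutes(30) `
    -Reason PlannedApplicationMaintenance `
    -Comment 'ConfigMgr deployment window'
```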

I am a little bit concerned about enabling these options on deployments targeting OpsMgr management servers. Since I am on holidays this week and have some spare time, I spent some time in my lab today and performed some tests.

The OpsMgr 2012 R2 management group running in my lab consists of 3 management servers. Two of them (OpsMgrMS01 and OpsMgrMS02) are dedicated to managing Windows computers; the third, OpsMgrMS03, is used to manage network devices and UNIX computers. I configured my management group to heartbeat every 60 seconds and allow up to 3 missed heartbeats.

I created a simple batch file that waits 15 minutes and does nothing else:


I then created a package and a program in my lab’s ConfigMgr 2012 R2 site, distributed the package to all the distribution points, and made sure “Disable Operations Manager alerts while this program runs” was ticked.

I performed 4 series of tests by deploying this program to different management servers (or combinations of management servers):

Test 1: Targeting Single Management Server OpsMgrMS02.

On OpsMgrMS02, there is one agent that doesn’t have failover management servers configured, so I firstly advertised (I mean, deployed) this program to it. When the deployment kicked off, the HealthService entered the paused state:


And Event 1217 was logged to the Operations Manager log:


I then waited 15 minutes and was happy to see that no alerts were logged during this period. I checked a few agents reporting to OpsMgrMS02, including the one without failover management servers configured: none of them complained about not being able to contact the primary management server, and none failed over to a secondary management server.

Test 2: Targeting all 3 management servers

I deleted the execution history from OpsMgrMS02’s registry, added the other 2 management servers to the collection in ConfigMgr, and then created another mandatory assignment.

3 minutes after the deployment had kicked off on all management servers, I got an alert telling me that the All Management Servers Resource Pool is not available:


I then powered off 2 virtual machines that are monitored by OpsMgr. As expected, I did not get any alerts for these 2 computers while the ConfigMgr deployment was running (because the HealthService on all 3 management servers was paused). After 10 minutes or so, they were still not greyed out in the state view:


Soon after the ConfigMgr deployment finished, the HealthService on all management servers was running again, and I got the alerts for the 2 offline agents very shortly afterwards, because they were still powered off at that point in time.


I also had a look at a performance view from the Windows Server MP:


I picked a memory counter; the perf collection rule is configured to run every 10 minutes. As you can see from the figure above, during the package deployment the performance data was not collected: there is a 20-minute gap between two readings (it should be 10), and the 15-minute deployment falls into that time window.

Test 3: Targeting OpsMgrMS01

I decided to test on a single MS again. This time I picked the first management server. After the HealthService was paused, I powered off a VM that reports to this management server. I was happy to see that the alerts were generated within a few minutes (as they should be!).

Test 4: Targeting 2 out of 3 Management Servers

For the final test, I targeted OpsMgrMS02 and OpsMgrMS03. Because resource pools require a minimum of 50% of their members to be healthy, targeting 2 out of 3 management servers made the All Management Servers Resource Pool unavailable again. I shut down 2 virtual machines reporting to OpsMgrMS02 and got the same result as in Test 2: alerts were only generated after 15 minutes, when the HealthService on the 2 management servers had resumed running.


Note: Below recommendations are only based on my PERSONAL experience / opinions:

Based on my tests, I strongly recommend not using these options during ConfigMgr package / application / software update deployments that target OpsMgr management servers.

In large organisations, the team using ConfigMgr to manage the server fleet is probably not the same team that looks after the OpsMgr environment. OpsMgr administrators may not even be aware these issues are caused by ConfigMgr deployments, because the Operations Manager event logs on management servers fill up fairly quickly; that particular event 1217 may have already been overwritten by the time the OpsMgr administrators look for the cause.

By using this option against management servers, you are not only suppressing alerts for the management servers themselves, but also critical alerts (such as computers offline) for the entire management group.

In large management groups, you may get away with targeting just 1 or a few management servers, because as long as more than 50% of the management servers are running, the AMSRP will still be functional. But if your management group is fairly small (e.g. 2 management servers), be aware that pausing the HealthService on even just 1 MS will make the AMSRP unavailable.

Depending on the nature of the ConfigMgr deployments for your OpsMgr management servers: if no reboots are required, you may want to place only the specific classes impacted by the deployment into maintenance mode (e.g. computer role, application components, etc.). If reboots are required, make sure failover management servers are configured for all your agents, then disable any alert connectors / subscriptions and stage the reboot process across your management servers. Nowadays your management servers will most likely be running on a virtualised platform, so the reboot process should be really quick.

Lastly, I’d like to hear your opinion. If you have anything to add, or disagree with me, please feel free to comment on this post or drop me an email.

ConfigMgr 2012 (R2) Clients Management Pack Released

Written by Tao Yang

Time flies; I can’t believe it’s been over 7 months since I posted the beta version of the ConfigMgr 2012 Client MP for testing. I haven’t forgotten about this MP (it’s one of the deliverables of the System Center 2012 upgrade project I’ve been working on for the last 12 months or so). Today, I finally managed to finish updating this MP, and it is ready for final release.

I didn’t manage to get much feedback since the beta version was released, so it’s either a good thing that everyone’s happy with it, or it’s really bad that no one bothered to use it 🙂 . I would hope it’s because everyone’s happy with it 🙂

Anyways, below is a list of what’s changed.

Display names for the ConfigMgr 2012 client agents have changed.

In the beta version, the display names of the various client agents (DCM agent, Hardware Inventory agent, etc.) were hardcoded to the client agent name:


I didn’t believe this was very user friendly when working in the Operations Console, so in this version I’ve changed them to be the actual computer name:


Bug fix: incorrect member monitors for the various client agent dependency monitors.

I made a mistake when writing the client agents’ dependency monitor snippet template in VSAE. As a result, all dependency monitors (for availability, performance, configuration and security health) had the client agents’ availability health aggregate monitor as their member monitor.


This is now fixed; the correct member monitor is assigned to each dependency monitor.


The ConfigMgr 2012 Client object is no longer discovered on cluster instances.

When I was working on the beta version, the development management group that I was using did not have any failover clusters. I didn’t realise the ConfigMgr 2012 Client object was being discovered on cluster instances (virtual nodes) until I imported the MPs into our proper test environment, so this was something that had been overlooked. It is fixed now: the MP will no longer discover the ConfigMgr 2012 Client (or any client agents) on clusters.

The “ConfigMgr 2012 Client All Programs Service Window Monitor” is now disabled by default.

I’m not too sure how many environments will have a maintenance window (service window) created for all clients, therefore I’ve disabled this monitor by default. This is to ensure it will not flood SCOM by generating an alert for every ConfigMgr client. If it is required for all or a subset of ConfigMgr clients, it can be enabled via overrides.

A few spelling mistakes in the alert descriptions have been corrected.

Finally, since the beta version was released prior to the System Center 2012 R2 release, I have also tested this MP in a ConfigMgr 2012 R2 environment; it is 100% compatible without any modifications.

It can be downloaded HERE. As always, please feel free to contact me if you have any issues or suggestions.

12th April, 2014 Update: Stanislav Zhelyazkov found that the override MP packed in the zip file was not correct: it did not have any references to the other sealed MPs. Not sure what happened when I was preparing the zip file. Anyways, if you intend to use the unsealed override MP, please use this one instead.