Automating OpsMgr Part 3: New Management Pack Runbook via SMA and Azure Automation

Written by Tao Yang


This is the 3rd instalment of the Automating OpsMgr series. Previously on this series:

Today, I will demonstrate a rather simple runbook that creates a blank management pack in an OpsMgr management group. Additionally, I will demonstrate executing this runbook not only on your on-premises Service Management Automation (SMA) infrastructure, but also from an Azure Automation account via Hybrid Workers.

Since the Hybrid Worker is a very new component in Azure Automation, I will first give a brief introduction before diving into the runbook.

Azure Automation Hybrid Worker

Ever since Azure Automation was introduced, it has been a great solution for automating around your assets and fabric on Azure, but it lacked the capability to reach into your on-prem data centres. Last month during Microsoft Ignite in Chicago, Microsoft announced an additional component: Hybrid Workers, an Azure Automation runbook worker that you can set up on an on-prem server. To find out more, you can watch this Ignite session recording: Automating Operational and Management Tasks Using Azure Automation. My buddy and fellow SCCDM MVP Stanislav Zhelyazkov has also written a good post on this topic:

I am not going to go through the steps of setting up hybrid workers, as Stan has already covered them in his post. As Stan pointed out, currently, any Integration Modules that you import into your Azure Automation account do not get pushed out to Hybrid Workers. Therefore, in order to execute the New-OpsMgrMP runbook on your hybrid workers, after you’ve imported the OpsMgrExtended module into your Azure Automation account, you must also manually copy the module to all your hybrid worker servers. To do so:

1. Log on to the hybrid worker and look up the PSModulePath environment variable. You can do so in PowerShell using $env:PSModulePath


2. Copy the OpsMgrExtended module to a folder that is on the PSModulePath list. Please do not copy it to any folders that are part of your user profile. I have copied it to “C:\Program Files\WindowsPowerShell\Modules” folder.
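The two steps above can be sketched in PowerShell as follows. The source path C:\Temp\OpsMgrExtended is an assumption for illustration; adjust it to wherever you extracted the module:

```powershell
# Step 1: inspect the module search path on the hybrid worker
$env:PSModulePath -split ';'

# Step 2: copy the module to a machine-wide module folder
# (not one under your user profile). The source path below is an example.
Copy-Item -Path 'C:\Temp\OpsMgrExtended' `
          -Destination 'C:\Program Files\WindowsPowerShell\Modules' -Recurse

# Verify that PowerShell can now discover the module
Get-Module -ListAvailable -Name OpsMgrExtended
```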

Operations Manager SDK Connection

The “Operations Manager SDK” connection must be created in the Azure Automation account, in the same way as in your on-prem SMA environment:


The server name I used is the FQDN of one of my OpsMgr management servers. The user name is a service account I created in my on-prem Active Directory (I believe it’s called Legacy AD or LAD now), i.e. Domain\ServiceAccount. This connection is created exactly the same way as the one I created in my on-prem SMA environment.
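Inside a runbook, this connection is retrieved with the Get-AutomationConnection activity. A quick sketch follows; the connection name and field names are examples based on this series, so check your own connection object:

```powershell
# Retrieve the "Operations Manager SDK" connection created above
$OpsMgrSDKConn = Get-AutomationConnection -Name 'OpsMgrSDK_TYANG'

# The object exposes the fields entered in the portal, e.g.:
$OpsMgrSDKConn.ComputerName   # FQDN of an OpsMgr management server
$OpsMgrSDKConn.Username       # Domain\ServiceAccount
```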

New-OpsMgrMP Runbook

The runbook is exactly identical in Azure Automation and SMA. Please note I have configured the Operations Manager SDK connection name to be identical in Azure Automation and SMA. You will need to update Line 11 of this runbook to the name of the connection you’ve created:
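For readers who cannot see the code screenshot, below is a minimal sketch of what the New-OpsMgrMP runbook could look like. The parameter names and the exact New-OMManagementPack syntax here are my assumptions based on this series; refer to Get-Help New-OMManagementPack -Full for the authoritative syntax:

```powershell
Workflow New-OpsMgrMP
{
    Param(
        [Parameter(Mandatory = $true)][String]$Name,
        [Parameter(Mandatory = $true)][String]$DisplayName,
        [Parameter(Mandatory = $false)][String]$Version = ""
    )

    # Line 11: update the connection name to match the one you created
    $OpsMgrSDKConn = Get-AutomationConnection -Name "OpsMgrSDK_TYANG"

    # Create the blank MP via the OpsMgrExtended module
    $MPCreated = InlineScript
    {
        New-OMManagementPack -SDKConnection $USING:OpsMgrSDKConn `
          -Name $USING:Name -DisplayName $USING:DisplayName -Version $USING:Version
    }
    Write-Output $MPCreated
}
```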


Executing the runbook on SMA:

Fill out the required parameters. The parameter “Version” is configured as optional in the runbook (with a default value of “”), so I did not enter a version number in that field:




And you can then see the management pack in OpsMgr operational console:


Executing Runbook on Azure Automation via Hybrid Worker:

Fill out the input parameters and select “Hybrid Worker”. As you can see, the default value for the “Version” parameter has already been prepopulated in the Azure portal:




And then the management pack appeared in OpsMgr operational console:



This is a rather simple runbook sample; the key to this runbook is the “New-OMManagementPack” activity from the OpsMgrExtended module.

For those who do not have SMA in their environment, I have just demonstrated how to leverage Azure Automation and Hybrid Workers to perform the same activities. As shown in Stan’s blog post, it’s rather easy to set up a Hybrid Worker in your environment; all you need is a Windows server with an Internet connection. Unlike SMA, you do not need any database servers for Hybrid Workers.

I’d also like to point out that even if you have not opened an Azure Automation account yet, I strongly recommend you do so and give it a try. You can go on the free tier, which gives you 500 job minutes a month. For testing and upskilling purposes, this is more than enough!

Lastly, if you would also like to see the ability to automatically push out custom modules to Hybrid Workers in the future, please help me by voting for this idea on Azure User Voice:

Automating OpsMgr Part 2: SMA Runbook for Creating ConfigMgr Log Collection Rules

Written by Tao Yang


This is the 2nd instalment of the Automating OpsMgr series. Previously on this series:

A few weeks ago, I also published a post: Collecting ConfigMgr Logs To Microsoft Operations Management Suite – The NiCE Way, which demonstrated how to use an OpInsights-integrated OpsMgr management group and the NiCE Log File MP to collect ConfigMgr client and server logs into Microsoft Operations Management Suite.

The solution I provided in that post included a few sealed management packs and a demo management pack which includes a few actual event collection rules for ConfigMgr log files. However, it requires some manual XML editing outside of whatever MP authoring tool you might be using, which could be a bit complicated for IT Pros and non-management-pack developers. The manual XML editing is necessary because the log collection rules use a Write Action module called “Microsoft.SystemCenter.CollectCloudGenericEvent” to send the event data to the OpInsights workspace. This write action module is located in the “Microsoft.IntelligencePacks.Types” sealed management pack, which is automatically pushed to your OpsMgr management group once you’ve configured the OpInsights connection.

When using management pack authoring tools such as VSAE, if you need to reference a sealed management pack (or management pack bundle), you must have the sealed MP or MP bundle files (.mp or .mpb) handy and add these files as references in your MP project. But the sealed MP “Microsoft.IntelligencePacks.Types” is automatically pushed to your management group as part of the OpInsights integration, and Microsoft does not provide a downloadable .mp file for this MP (yes, I have asked the OpInsights product group). So there was no alternative but to manually edit the XML outside of the authoring tool in order to create these rules.

Our goal is to create a potentially large number of event collection rules for all the ConfigMgr event logs that ConfigMgr administrators are interested in. In my opinion, this is a perfect automation candidate because you will need to create multiple near-identical rules, and it is very time-consuming if you use MP authoring tools and text editors to create these rules (as I explained above).


I am going to demonstrate how to create these event collection rules using an SMA runbook which uses the OpsMgrExtended PowerShell module. In order to implement this solution, you will need the following:

  • An OpsMgr 2012 SP1 or R2 management group that has been connected to Azure Operational Insights (OMS)
  • An SMA infrastructure in your environment
  • Microsoft ConfigMgr 2012 management pack version 5.0.7804.1000 imported and configured in your OpsMgr management group
  • The ConfigMgr components from which you need to collect logs (including ConfigMgr servers and clients) must be monitored by OpsMgr. These computers must be agent-monitored; agentless monitoring is not going to work in this scenario.
  • NiCE Log File MP imported in your OpsMgr management group
  • OpsMgrExtended module imported into SMA and an “Operations Manager SDK” SMA connection object is created for your OpsMgr management group – Please refer to Part 1 of this series for details
  • The “ConfigMgr Logs Collection Library Management Pack” must also be imported into your OpsMgr management group – Download link provided in my previous post.


Runbook: New-ConfigMgrLogCollectionRule

When executing this runbook, the user must specify the following parameters:

  • RuleName: the internal name of the OpsMgr rule
  • RuleDisplayName: the display name of the OpsMgr rule
  • ManagementPackName: The internal name of the management pack (must be an existing MP in your OpsMgr management group)
  • ClassName: The target class of the rule. It must be one of the following values:
    • “Microsoft.SystemCenter2012.ConfigurationManager.DistributionPoint”
    • “Microsoft.SystemCenter2012.ConfigurationManager.ManagementPoint”
    • “Microsoft.SystemCenter2012.ConfigurationManager.SiteServer”
    • “Microsoft.SystemCenter2012.ConfigurationManager.Client”
  • LogDirectory: The directory where the log is located (i.e. “C:\Windows\CCM\Logs”)
  • LogFileName: The name of the log file (i.e. “UpdatesStore.Log”)
  • EventID: The Event ID that you wish to use when converting log file entries to Windows events
  • EventLevel: Windows event level. Must be one of the following values:
    • ‘Success’
    • ‘Error’
    • ‘Warning’
    • ‘Information’
    • ‘Audit Failure’
    • ‘Audit Success’
  • IntervalSeconds: How often the rule runs, in seconds
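The parameter list above might be declared along the following lines, with ValidateSet constraining ClassName and EventLevel to the allowed values. This is an illustration of the shape of the parameter block, not the runbook’s actual code:

```powershell
Param(
    [Parameter(Mandatory = $true)][String]$RuleName,
    [Parameter(Mandatory = $true)][String]$RuleDisplayName,
    [Parameter(Mandatory = $true)][String]$ManagementPackName,
    [Parameter(Mandatory = $true)]
    [ValidateSet(
        'Microsoft.SystemCenter2012.ConfigurationManager.DistributionPoint',
        'Microsoft.SystemCenter2012.ConfigurationManager.ManagementPoint',
        'Microsoft.SystemCenter2012.ConfigurationManager.SiteServer',
        'Microsoft.SystemCenter2012.ConfigurationManager.Client')]
    [String]$ClassName,
    [Parameter(Mandatory = $true)][String]$LogDirectory,
    [Parameter(Mandatory = $true)][String]$LogFileName,
    [Parameter(Mandatory = $true)][Int]$EventID,
    [Parameter(Mandatory = $true)]
    [ValidateSet('Success','Error','Warning','Information',
                 'Audit Failure','Audit Success')]
    [String]$EventLevel,
    [Parameter(Mandatory = $true)][Int]$IntervalSeconds
)
```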

On line 16 of the runbook, I’ve coded the runbook to retrieve an SMA connection object called “OpsMgrSDK_TYANG”:


This is because my SMA connection object for my OpsMgr management group is named “OpsMgrSDK_TYANG”. You will need to change this line according to how you’ve created your SMA connection:



You can also further simplify the runbook in the following possible areas:

  • Hardcoding the destination management pack in the runbook
  • Hardcoding the interval seconds (i.e. to 120 seconds)
  • Create a switch statement for the target class, so instead of entering “Microsoft.SystemCenter2012.ConfigurationManager.Client”, users can simply enter “Client”, for example.
  • Create a switch statement for the LogDirectory parameter. For example, when the target class “Client” is specified, set the LogDirectory variable to “C:\Windows\CCM\Logs”.
  • Automatically populate Rule name and display name based on the target class and the log file name.
  • Build a user request portal using System Center Service Manager or a SharePoint List (this would be a separate topic for another day, but please refer to my previous MVP Community Camp presentation recording for some samples I’ve created in the past using SharePoint Lists).

Lastly, needless to say, you can also execute this PowerShell workflow in a standalone PowerShell environment (or convert it into a regular PowerShell script). When running it outside of SMA, you will need to use another parameter set for the “New-OMManagementPackReference” and “New-OMRule” activities. So instead of using the –SDKConnection parameter, you will have to use –SDK (and optionally –Username and –Password) to connect to your OpsMgr management group. To change it, please modify the following lines:

Change Line 16 to $SDK = “<Your OpsMgr management server>”

Change Line 47 to:


$NewMPRef = New-OMManagementPackReference -SDK $SDK -ReferenceMPName "Microsoft.Windows.Library" -Alias "Windows" -UnsealedMPName $ManagementPackName

Change Line 117 to:


New-OMRule -SDK $USING:SDK -MPName $USING:ManagementPackName -RuleName $USING:RuleName -RuleDisplayName $USING:RuleDisplayName -Category "EventCollection" -ClassName $USING:ClassName -DataSourceModules $USING:arrDataSourceModules -WriteActionModules $USING:arrWriteActionModules -Remotable $false


After I filled out all the parameters:




And executed the runbook:


The rule was successfully created:


And shortly after it, you should start seeing the log entries in your OMS workspace:




I have demonstrated how to use the OpsMgrExtended module in an SMA runbook to enable users to create a large number of similar OpsMgr management pack workflows.

Given this is only part 2 of the series and the first example I have released, maybe I should have started with something easier. The reason I’ve chosen this example as Part 2 is because I am going to present at the next Melbourne System Center, Security, & Infrastructure user group meeting next Tuesday, 7th July, along with 3 other MVPs (David O’Brien, James Bannan and Orin Thomas). I am going to demonstrate this very same scenario – using OpInsights to collect SCCM log files. So I thought I’d make this the 2nd instalment of the series, so people who attended the user group meeting have something to refer to. In this sample runbook, I’ve used a relatively more complicated activity called New-OMRule to create these event collection rules. This activity is designed as a generic method to create any type of OpsMgr rule. I will dedicate another blog post to it in the future.

Lastly, if you are based in Melbourne and would like to see this in action, please come to the user group meeting in the evening of 7th July. It is going to be held at the Microsoft Melbourne office in Southbank. The registration details are available on the website:

Automating OpsMgr Part 1: Introducing OpsMgrExtended PowerShell / SMA Module

Written by Tao Yang


The OpsMgrExtended PowerShell and SMA module is a project that I have been working on since August last year. I am very glad that it is now ready to be released to the community.

This module is designed to fill some gaps in the current OpsMgr automation solutions provided natively in System Center 2012 suite. This module can be used as a System Center Service Management Automation (SMA) Integration Module, as well as a standalone PowerShell module.

Currently, the following products are available when it comes to creating automation solutions for OpsMgr:

  • OpsMgr native PowerShell module
  • OpsMgr Integration Pack for System Center Orchestrator
  • OpsMgr portable Integration Module for System Center Service Management Automation

In my opinion, each of the above serves its purpose, but each also has some limitations.

OpsMgr PowerShell Module
An OpsMgr native component that can be installed on any computer running PowerShell. With the System Center 2012 R2 release, this module offers 173 cmdlets. However, most of them are designed for administrative tasks; the module lacks features such as creating management pack components (i.e. rules, monitors, etc.).

OpsMgr Integration Pack for System Center Orchestrator

Microsoft has released a version of this IP for every release of OpsMgr 2012. However, the functionality this IP provides is very limited.


As you can see, it only offers 8 activities. It also requires the corresponding version of the OpsMgr operations console to be manually installed on each Orchestrator runbook server and runbook designer computer before you can execute runbooks which utilise these activities. The requirement for the operations console introduces some limitations:

  • You cannot install multiple versions of the OpsMgr operations console on the same computer. – This means if you have multiple versions of OpsMgr (i.e. 2012 and 2007), you MUST use separate Orchestrator runbook servers and runbook designer computers for runbooks targeting these systems.
  • If you also need to install OpsMgr agents on these runbook servers, you can ONLY install the agent that is the same version as the operations console. – This means if you have both OpsMgr 2007 and 2012 in your environment, the runbook servers for your OpsMgr 2007 management groups cannot be monitored by OpsMgr 2012 (unless you implement less efficient agentless monitoring for these runbook servers).

OpsMgr SMA Portable Integration Module

When SMA was released as part of System Center 2012 R2, it shipped with an OperationsManager portable module built into the product.


The portable modules are not real modules. They are like a middle man between your runbooks and the “real” Integration Modules: they take your input parameters and call the activities from the real module for you. For example:


In order to use the OperationsManager-Portable module in SMA, you must first manually install the “real” OpsMgr 2012 PowerShell module on all the SMA runbook servers. One of the great features SMA offers is the ability to automatically deploy Integration Modules to all runbook servers once they have been imported into SMA, but for the portable modules this is not the case, as you must install the “real” modules yourself. The other limitation is that it still only offers whatever is available in the native OpsMgr 2012 PowerShell module.

With all these limitations in mind, I have developed a brand new custom OpsMgr PowerShell / SMA module, OpsMgrExtended, to fill some of these gaps.


OpsMgrExtended Introduction

Back in January 2015, I presented a work-in-progress version of this module at the Melbourne MVP Community Camp. At that time, I said it was going to be released in a few weeks’ time. Unfortunately, I just couldn’t dedicate enough time to this project, and since I wanted to add a few additional functions to the module, I only managed to finalise it now (5 months later). My presentation was recorded; you can watch it and download the slide deck from my previous post:

OpsMgr SDK Assemblies

The core component of all the above-mentioned native solutions is the OpsMgr SDK. All of them require the OpsMgr SDK assemblies to be installed separately onto the computer running the scripts and runbooks. This is done via the install of the OpsMgr operations console or the PowerShell console: when you install either of them onto a computer, the OpsMgr SDK assemblies are installed into the Global Assembly Cache (GAC) on that computer.

To make the OpsMgrExtended module TRULY portable and independent, I have placed the 3 OpsMgr 2012 R2 SDK DLLs into the module base folder. The PowerShell code in the OpsMgrExtended module first tries to load the SDK assemblies from the GAC; if the assemblies are not located in the GAC, it leverages the 3 SDK DLLs in the module base folder. By doing so, there is NO requirement to install ANY OpsMgr components before you can start using this module.
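A simplified sketch of that GAC-first, module-folder-fallback loading logic follows. The real Import-OpsMgrSDK function is more thorough, and the DLL file names below are the OpsMgr 2012 R2 SDK assemblies as I understand them, so treat this as an illustration:

```powershell
# Folder where this module (and the bundled SDK DLLs) live
$ModuleBase = Split-Path -Parent $MyInvocation.MyCommand.Path
$SDKDLLs = @('Microsoft.EnterpriseManagement.Core.dll',
             'Microsoft.EnterpriseManagement.OperationsManager.dll',
             'Microsoft.EnterpriseManagement.Runtime.dll')
foreach ($DLL in $SDKDLLs)
{
    $AssemblyName = [System.IO.Path]::GetFileNameWithoutExtension($DLL)
    # Try the GAC first
    $Assembly = [System.Reflection.Assembly]::LoadWithPartialName($AssemblyName)
    if ($Assembly -eq $null)
    {
        # Fall back to the copy shipped in the module base folder
        [System.Reflection.Assembly]::LoadFrom((Join-Path $ModuleBase $DLL)) | Out-Null
    }
}
```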

Why Use OpsMgrExtended?

“If you think you will do a task twice – automate it!”

When it comes to automation, this is my favourite quote, from Joe Levy, a program manager in the System Center Orchestrator team. I have been managing large OpsMgr environments for many years. At my last job, I was pretty much the single point of contact for OpsMgr. Based on my own personal experience, there are a lot of repetitive tasks when managing OpsMgr infrastructures. This is why, a few years ago, I spent a few months of my spare time developing the OpsMgr Self Maintenance MP. This MP was targeting the administrative workflows normally carried out by OpsMgr admins.

Other than the day-to-day tasks the Self Maintenance MP has already covered, I still find a lot of repetitive tasks that do not fall into that category. For example, management pack development. I have been writing management packs for a few years. Based on my own experience and the feedback I got from the community, I believe a lot of OpsMgr customers, and the broader community, are facing the following challenges:

MP development can get very hard, and there are not many good MP developers out there.

Most of the SCOM administrators in your organisation would fall into the “IT Pro” category. MP development can get very complicated and is definitely a skillset more suitable for developers rather than IT Pros. There are simply not many MP developers out there. I’ve been heavily involved in the OpsMgr community for a few years now, and I can confidently state that if I don’t know ALL the good MP developers in the OpsMgr community, I know most of them. So trust me when I say there are not many around. Sometimes I imagine the world would be a better place if MP development skills were as common as ConfigMgr OSD skills (which pretty much every System Center specialist I know has written down on their CV).

It is hard to learn MP development

I’m not saying this skill is very hard to learn. But I don’t believe there are enough good, structured materials for people who want to pick up this skill. When I started writing management packs, I really struggled in the beginning. My friend and fellow Melbourne-based MVP Orin Thomas once said to me that he believes if you want people to start using your products, you need to make sure you invest heavily in developing training. I think what Orin said was spot on, and I believe this is one of the main reasons that there are not many good MP developers around.

Too many toolsets

For beginners, you can use the OpsMgr operations console to write some really basic management pack elements. Most of the OpsMgr specialists who claim they can write management packs would probably use either the OpsMgr 2007 R2 Authoring Console or the 3rd-party product Silect MPAuthor. They are user-friendly, GUI-based authoring tools, and they are relatively easy to learn. Seasoned MP developers would normally use the Visual Studio Authoring Extension (VSAE), which is just an extension in Visual Studio: no GUI, and you need to REALLY understand the management pack XML schema to be able to use this tool. Not to mention Visual Studio is not free (using it to author MPs for commercial purposes or for large organisations does not qualify you for the free Community edition). It is hard to explain when someone completely new to this area asks me “what tool do people use to write management packs?”

How about PowerShell?

Most IT Pros should by now be very familiar with Windows PowerShell. Wouldn’t it be nice if you could use PowerShell to create OpsMgr monitors and rules? For example, if I need to monitor a Windows service, how about using a cmdlet like “New-ServiceMonitor” to create this service monitor in my OpsMgr management group?

Well, this is one of the areas I’m trying to cover in the OpsMgrExtended module.
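To illustrate the idea, this is roughly what such a call could look like with the module’s New-OMServiceMonitor function. Every parameter name below is an assumption for illustration only; check Get-Help New-OMServiceMonitor -Full for the actual syntax:

```powershell
# Hypothetical invocation: create a Print Spooler service monitor
# targeting Windows Server operating systems. Server name is a placeholder.
New-OMServiceMonitor -SDK 'OpsMgrMS01' `
    -MPName 'My.Custom.MP' `
    -MonitorName 'My.Custom.MP.Spooler.Service.Monitor' `
    -MonitorDisplayName 'Print Spooler Service Monitor' `
    -ClassName 'Microsoft.Windows.Server.OperatingSystem' `
    -ServiceName 'Spooler'
```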

When I was managing a large OpsMgr environment in my previous job, as much as I liked developing management packs, I sometimes still considered it repetitive work. Every now and then, people would come to me and ask me to monitor service X, monitor perf counter Y, collect event Z, etc. I’ve done it once, I’ve learnt how to do it, and I don’t want to do it over and over again, simply because I’m not a robot and I HATE repetitive tasks! Not to mention all the ITIL overhead that you have to put up with (i.e. testing, managing Dev, Test and Production environments, change management, release management, etc.). When there is a monitoring requirement, why can’t my customer simply fill out a request and have whatever he or she needs automatically created, the same way a normal end user would request a piece of software to be installed on his or her PC? I don’t have to be involved (nor do I want to be) every time someone needs to get something created in OpsMgr. I’d rather spend my time working on some more complicated solutions.

Another good example: over a year ago, I was helping a colleague from another team set up a brand new OpsMgr 2012 environment to monitor a couple of thousand servers within our organisation. My colleague spent a lot of time going back and forth with the Windows server support team to identify their requirements. In the end, after I had waited a long time, they finally gave me a spreadsheet consisting of 20-30 services they needed to monitor. Imagine how long this would take most OpsMgr administrators who have never used VSAE before, with a lot of copy-paste, using the Authoring Console, MPAuthor or even Notepad++. For me, although I used VSAE and I knew how to develop custom snippet templates in VSAE, it still took me 20-30 minutes to develop the snippet template, then generate the MP fragment, build the MP, test it, push it to Production, etc. And since our customers had already identified their requirements, I shouldn’t have needed to be involved at all if we had had an automation solution in place.

As I demonstrated in my 2015 Melbourne MVP Community Camp presentation (demo 2, start from 28:05, link provided above), I have designed a set of tasks for customers to request new monitors:

  1. Create a new blank unsealed MP
  2. Create a unit monitor in a “Test” management group
  3. Create an SMA runbook that runs daily and populates the MP list of my Test MG onto a SharePoint List
  4. When customers have tested the newly created monitor and are happy with it, they can go to the SharePoint List, locate the specific MP where the monitor is stored, and use a drop-down box to copy the MP to the production environment.

This covers the entire process of creating, testing and implementing new SCOM monitoring requirements without involving the OpsMgr administrators at all!

What functions / activities are included in this release of OpsMgrExtended

In the first release of this module, I have included 34 PowerShell functions (if you watched the presentation recording, there were 29 back in January; I’ve added a few more since). These functions can be grouped into the following categories:

SDK Connection Functions

  • Import-OpsMgrSDK
    • Load the SDK assemblies. It will first try to load them from the GAC; if the assemblies are not in the GAC, it will load them from the SDK DLLs in the module base folder.
  • Install-OpsMgrSDK
    • Install the OpsMgr SDK DLLs from the module base folder to the GAC
  • Connect-OMManagementGroup
    • Establish connection to the OpsMgr management group by specifying a management server name (and optional alternative username and password).
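For example, establishing a connection could look like the sketch below. The server name and credentials are placeholders, and I am assuming the -Password parameter accepts a SecureString:

```powershell
# Build a SecureString password for the alternative account (placeholder values)
$Password = ConvertTo-SecureString 'P@ssword!' -AsPlainText -Force

# Connect to the management group via one of its management servers
$MG = Connect-OMManagementGroup -SDK 'OpsMgrMS01.yourdomain.com' `
        -Username 'YOURDOMAIN\SvcOpsMgrSDK' -Password $Password

# $MG is the management group object returned by the SDK
$MG.Name
```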

Administrative Tasks

  • Approve-OMManualAgents
    • Approve manually installed OpsMgr agents that meet the naming convention.
  • Backup-OMManagementPacks
    • Backup OpsMgr management packs (unsealed and sealed).
  • Add-OMManagementGroupToAgent
    • Configure an OpsMgr agent to report to a specific management group using WinRM.
  • Remove-OMManagementGroupFromAgent
    • Remove a management group configuration from an OpsMgr agent using WinRM.
  • Get-OMManagementGroupDefaultSettings
    • Get OpsMgr management group default settings via OpsMgr SDK. A System.Collections.ArrayList is returned containing all management group default settings. Each setting in the arraylist is presented in a hashtable format.
  • Set-OMManagementGroupDefaultSetting
    • Set OpsMgr management group default settings.

Basic Authoring Tasks

  • Get-OMManagementPack
    • Get a particular management pack by providing the management pack name, or get all management packs in an OpsMgr management group, using OpsMgr SDK.
  • New-OMManagementPack
    • Create a new unsealed management pack in an OpsMgr management group.
  • Remove-OMManagementPack
    • Remove a management pack from an OpsMgr management group.
  • Copy-OMManagementPack
    • Copy an unsealed management pack from a source OpsMgr management group to the destination management group.
  • New-OMManagementPackReference
    • Add a management pack reference to an unsealed management pack.
  • New-OM2StateEventMonitor
    • Create a 2-state event monitor in OpsMgr.
  • New-OM2StatePerformanceMonitor
    • Create a 2-state performance monitor in OpsMgr.
  • New-OMPerformanceCollectionRule
    • Create a performance collection rule in OpsMgr.
  • New-OMEventCollectionRule
    • Create an event collection rule in OpsMgr.
  • New-OMServiceMonitor
    • Create a Windows service monitor in OpsMgr.
  • New-OMInstanceGroup
    • Create an empty instance group in OpsMgr using OpsMgr SDK. The group membership must be populated manually or via another script.
  • New-OMComputerGroup
    • Create an empty computer group in OpsMgr using OpsMgr SDK. The group membership must be populated manually or via another script.
  • New-OMConfigurationOverride
    • Create a configuration (parameter) override in OpsMgr using OpsMgr SDK.
  • New-OMPropertyOverride
    • Create a property override in OpsMgr using OpsMgr SDK.
  • New-OMOverride
    • Create an override in OpsMgr using OpsMgr SDK. This function detects whether it’s a property override or a configuration override and calls New-OMPropertyOverride or New-OMConfigurationOverride accordingly.
  • Remove-OMGroup
    • Remove an instance group or computer group in OpsMgr using OpsMgr SDK.
  • Remove-OMOverride
    • Remove an override in OpsMgr.
  • Get-OMDAMembers
    • Get monitoring objects that are members of a Distributed Application in OpsMgr using OpsMgr SDK. By default, this function only retrieves objects one level down. Users can use -Recursive parameter to retrieve all objects within the DA hierarchy.
  • New-OMAlertConfiguration
    • Create a new OpsMgrExtended.AlertConfiguration object that can be passed to the New-OMRule function as an input. This object is required for the New-OMRule function when creating an alert generating rule.
  • New-OMModuleConfiguration
    • Create a new OpsMgrExtended.ModuleConfiguration object that can be passed to the New-OMRule function as an input.
  • New-OMRule
    • Create a rule in OpsMgr by specifying data source modules, an optional condition detection module, write action modules, and also an alert configuration when creating an alert-generating rule. This function can be used to create any type of rule in OpsMgr.
  • New-OMWindowsServiceTemplateInstance
    • Create a Windows Service monitoring template instance in OpsMgr.
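As a quick sketch of chaining two of the functions above, the example below creates an unsealed MP and then adds a management pack reference to it. The New-OMManagementPackReference syntax follows the pattern used elsewhere on this blog; the New-OMManagementPack parameter names are my assumptions, so consult each function’s Get-Help output:

```powershell
$SDK = 'OpsMgrMS01'   # one of your management servers (placeholder)

# Create a new unsealed MP (parameter names assumed)
New-OMManagementPack -SDK $SDK -Name 'Test.Authoring.Demo' `
    -DisplayName 'Test Authoring Demo' -Version '1.0.0.0'

# Add a reference to Microsoft.Windows.Library in the new MP
New-OMManagementPackReference -SDK $SDK -ReferenceMPName 'Microsoft.Windows.Library' `
    -Alias 'Windows' -UnsealedMPName 'Test.Authoring.Demo'
```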

Advanced Authoring Tasks

  • New-OMTCPPortCheckDataSourceModuleType
  • New-OMTCPPortCheckMonitorType
  • New-OMTCPPortMonitoring

Last year, when I asked a few OpsMgr-focused MVPs for advice and feedback, my buddy Dieter Wijckmans suggested I create a function that creates a TCP Port monitoring template instance. When I had a look, I did not like the MP elements created by this template. As I explained in my MVP Community Camp presentation (Demo 3, starting at 47:13 in the recording), I didn’t like the module type and monitor types created by the TCP Port monitoring template because many values are hard coded in the modules and the monitor types do not enable On-Demand detections. Therefore, instead of creating an instance of this template using the SDK, I took the hard route: I spent a week and wrote 1,200 lines of PowerShell code to recreate all the MP elements the way I wanted.

When you use New-OMTCPPortMonitoring function from this module, it creates the following items:

  • Class Definition for TCP Port Watcher and various groups
  • Class Relationships
  • Class and Relationship Discoveries
  • Data Source Module Type
  • Monitor Type
  • Performance Collection Rule
  • 4 Unit Monitors and a dependency monitor
  • Discovery Overrides

The monitors created by New-OMTCPPortMonitoring support On-Demand detection (which can be triggered by clicking the “Recalculate Health” button in Health Explorer), and I have variablised the data source module type and monitor type, so they can be reused for other workflows.
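A hypothetical invocation is sketched below. Every parameter name here is purely illustrative (I have not reproduced the real signature); the actual syntax is documented in Get-Help New-OMTCPPortMonitoring -Full:

```powershell
# Hypothetical example: monitor TCP port 443 on a web server from a watcher node.
# All parameter names and values are placeholders.
New-OMTCPPortMonitoring -SDK 'OpsMgrMS01' `
    -Name 'Web Server HTTPS Check' `
    -TargetHost 'webserver01.yourdomain.com' `
    -Port 443 `
    -WatcherNodes 'Watcher01.yourdomain.com'
```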

Establishing Connections to OpsMgr Management Groups

Configuring SMA Integration Module

When using this module in SMA, you need to create a connection object for your OpsMgr management group:



  • Connection Type: Operations Manager SDK
  • Name: Name of this SMA connection object
  • Description: Description of this SMA connection
  • ComputerName: One of the OpsMgr management servers
  • UserName: A Service Account that has OpsMgr administrator access
  • Password: Password for the service account


Connecting in Normal PowerShell Scripts

When this module is used as a normal PowerShell module, all the functions that require OpsMgr management group connections support the following 3 parameters:

  • -SDK: One of the OpsMgr management servers
  • -Username (optional): An alternative account to connect to the OpsMgr management group.
  • -Password (optional): The password for the alternative account.


Getting Help and More Information

I have included help information for every function in this module. You can access it using the Get-Help cmdlet.

e.g. Get-Help New-OMRule -Full


Once imported in SMA, you can also see the description for each function in the WAP Admin portal:



Getting Started

I have written many sample runbooks for this module. Initially, my plan was to release these sample runbooks together with the module. On second thought, instead of releasing these samples now, I will make this a blog series and continue writing posts explaining how to use this module in different scenarios. I believe this will help readers better understand the capability this module brings. I will name this series "Automating OpsMgr" and consider this Part 1 of the series.

System Requirements

The minimum PowerShell version required for this module is 3.0.

The entire module and sample runbooks were developed on Windows Server 2012 R2, Windows 8.1, OpsMgr 2012 R2 and PowerShell version 4.0.

I have not tested this module on OpsMgr 2012 RTM and SP1. Although the SDK assembly version is the same between RTM, SP1 and R2, I cannot guarantee that all functions and upcoming sample runbooks will work 100% on the RTM and SP1 versions. If you identify any issues, please let me know.

I have performed very limited testing on the PowerShell 5.0 Preview, so I cannot guarantee it will work 100% with PowerShell 5.0. If you find any issues on PowerShell 5.0, please let me know.


Where Can I Download this Module?

This module can be downloaded from TY Consulting's web site via the link below:


I'm releasing this module under the Apache License Version 2.0. If you do not agree with its terms, please do not download or use this module.

This module requires the OpsMgr 2012 SDK DLLs, and I am not allowed to distribute these DLLs (refer to the System Center 2012 R2 Operations Manager EULA, Section 7 "Scope of License", which can be located on the OpsMgr 2012 R2 DVD under the Licenses folder).


Therefore, once you’ve downloaded this module, you will need to manually copy the following 3 DLLs into the module folder:

  • Microsoft.EnterpriseManagement.Core.dll
  • Microsoft.EnterpriseManagement.OperationsManager.dll
  • Microsoft.EnterpriseManagement.Runtime.dll

These DLLs can be found on your OpsMgr management server, under <OpsMgr Install Dir>\Server\SDK Binaries:


Copy them into the module folder:


If the module is intended to be used in SMA, you will need to zip the folder back up after the DLLs have been copied in, then import the module into SMA.
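For example, assuming the module was extracted to C:\Temp\OpsMgrExtended (the path is an assumption), the folder can be zipped back up from PowerShell using the .NET ZipFile class (available on PowerShell 3.0+ with .NET 4.5):

```powershell
# Load the compression assembly (needed on PowerShell 3.0/4.0).
Add-Type -AssemblyName System.IO.Compression.FileSystem

$moduleFolder = 'C:\Temp\OpsMgrExtended'      # where the module (with DLLs) sits
$zipFile      = 'C:\Temp\OpsMgrExtended.zip'  # the zip to import into SMA

if (Test-Path $zipFile) { Remove-Item $zipFile }

# $true includes the 'OpsMgrExtended' base folder inside the zip, so the zip
# contains a folder matching the module name - the layout SMA expects.
[System.IO.Compression.ZipFile]::CreateFromDirectory($moduleFolder, $zipFile, 'Optimal', $true)
```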

Looking back, this has been a very long journey – I have written around 6,800 lines of code for this module alone, not including all the sample runbooks that I'm going to publish for this blog series. I hope the community finds it useful, and please feel free to contact me if you have any new ideas or suggestions.

This is all I have for Part 1 of this new series. In the next couple of days, I will discuss how to use the OpsMgrExtended module to create ConfigMgr log collection rules for OMS (as I previously blogged here).

OpsMgr 2012 Data Warehouse Health Check Script Updated

Written by Tao Yang

Since I published the OpsMgr 2012 Data Warehouse Health Check Script last week, the responses I have received from the community have been overwhelming!

As I mentioned in the post, there might be potential issues when executing the script in an environment where the Data Warehouse DB is hosted on a named SQL instance; many people have reached out to me and confirmed this is indeed the case.

Over the last few days, I have been busy updating this script to address all the issues identified by the community. Version 1.1 is now ready.

I have addressed the following issues in this release:

  • Fixed the issues with named SQL instances and SQL instances using non-default ports.
  • Fixed the issue where the script failed to get management group default settings when executed in PowerShell version 5 preview.
  • Fixed the error where incorrect Buffer Cache Hit Ratio counter is presented on the report.
  • Additional pre-requisite check for the PowerShell version. This script requires a minimum of version 3.0.
  • Additional pre-requisite check to test WinRM and remote WMI connectivity to each management server.
  • Fixed minor typos in the reports.
  • An additional optional parameter "-OutputDir". You can now instruct the script to write reports to a folder of your choice. This folder must already exist. If the specified folder is not valid or this parameter is not used, the script will write the report files to the script root folder.


I have updated the original post; the updated version of the script can now be downloaded from the original link.


I'd like to thank everyone who tested and provided valuable feedback. This project is truly a wonderful community effort!

OpsMgr 2012 Data Warehouse Health Check Script

Written by Tao Yang

Note (19/06/2015): This script has been updated to version 1.1. You can find the details of version 1.1 here: The download link at the end of this post has been updated too.

I'm sure you would all agree with me that OpsMgr database performance is a very common issue in many OpsMgr deployments – when it has not been designed and configured properly. The folks at Squared Up certainly feel the pain – when the OpsMgr Data Warehouse database is not performing at an optimal level, it certainly impacts the performance of Squared Up dashboards, since Squared Up relies heavily on the Data Warehouse database.

So Squared Up asked me to build a health check tool specific to OpsMgr data warehouse databases, in order to help customers identify and troubleshoot performance-related issues with the data warehouse DB. Over the last few weeks, I have been working on such a script, focusing on the data warehouse component, which produces an HTML report at the end.

We have decided to make this tool available not only to Squared Up customers, but also to the broader community, free of charge. So, a BIG thank-you to Squared Up for their generosity.

Before I dive into the details,  I’d like to show you what the report looks like. You can access the sample report generated against my lab MG here:


As shown in this sample, the report consists of the following sections:

Management Group Information

  • Management group name and version
  • server names for RMS Emulator, Operational DB SQL Server, Data Warehouse SQL server
  • Operational DB name, Data Warehouse DB name
  • Number of management servers, Windows agents, Unix agents, managed network devices and agentless managed computers
  • Current SDK connection count (total among all management servers)

Data Warehouse SQL Server information

  • Server hardware spec and OS version
  • SQL server version and collation
  • Minimum and Maximum assigned memory to the SQL server

Data Warehouse SQL DB information

  • DB Name, creation date, collation, recovery mode
  • Current state, is broker enabled, is auto-shrink enabled
  • Current DB size (both data and logs), free space %
  • Growth settings, last backup date and backup size

Temp DB configuration

  • File size, max size and growth settings for each file used by Temp DB

SQL Service Account Configuration

  • If the SQL Service account has “Perform volume maintenance tasks” and “Lock Pages in Memory” rights

Data Warehouse Dataset Configuration

  • Dataset retention setting
    • Retention setting for each dataset
    • current row count, size and % of total size of each dataset
  • Dataset aggregation backlog
  • Staging Table Row Count for the following tables:
    • Alert.AlertStage
    • Event.EventStage
    • Perf.PerformanceStage
    • State.StateStage
    • ManagedEntityStage

Key SQL and OS performance counters

  • SQL performance counters
    • SQLServer.Buffer.Manager\Buffer cache hit ratio
    • SQLServer.Buffer.Manager\Page.Life.Expectancy
  • Operating System performance counters
    • Logical Disk(_total)\Avg. disk sec/Read
    • Logical Disk(_total)\Avg. disk sec/Write
    • Processor Information (_total)\% Processor Time

Collect Data Warehouse performance related events from each management server

  • Event ID: 2115
  • Event ID: 8000
  • Event ID: 31550-31559

Since each environment is different, I didn't want to create a fixed set of rules to flag any of the above items as good or bad. Instead, at the end of each section, I have included some articles that can help you evaluate your environment and identify any discrepancies.


This script has the following pre-requisites:

  • The user account that is running the script (or the alternative credential passed into the script) must have the following privileges:
    • local administrator rights on the Data Warehouse SQL server and all Management servers
    • A member of the OpsMgr Administrator role
    • SQL sysadmin rights on the Data Warehouse SQL server
  • WinRM (PowerShell Remoting) must be enabled on the Data Warehouse SQL Server
  • The OpsMgr SDK assemblies must be available on the computer running the script:
    • The script can be executed on an OpsMgr management server, web console server, or a computer that has the OpsMgr operations console installed
    • OR, manually copy the 3 DLLs from the "<management server install dir>\SDK Binaries" folder to the folder where the script is located.

Executing the script

The only required parameter is -SDK <OpsMgr Management Server name>, where you need to specify one of your management servers (it doesn't matter which one). Additionally, if you use the -OpenReport switch, the HTML report will be opened in your default browser at the end. If you use -OutputDir to specify a directory, the reports will be saved to this directory instead of the script root directory. If the directory you've specified is not valid, the script will save the reports to the script root directory instead (updated 19/06/2015). You can also use the -Verbose switch to see the verbose output:


.\SCOMDWHealthCheck.ps1 -SDK "OpsMgrMS01" -OutputDir C:\Temp\Reports\ -OpenReport -Verbose

Or, if you need to specify an alternative credential:

$password = ConvertTo-SecureString -String "password12345" -AsPlainText -Force

.\SCOMDWHealthCheck.ps1 -SDK "OpsMgrMS01" -Username "domain\SCOM.Admin" -Password $password -OpenReport -Verbose


The report outputs the following files:

  • Main HTML report
  • Main Report in XML format
  • Windows Event export from each management server in a separate HTML page
  • Windows Event export from each management server in a separate CSV file

Note: The XML file is produced so that anyone who wants to develop another set of tools to analyse the data for their own environment can easily read the data from the XML file.
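For instance, PowerShell itself can load the report straight into an XML document object. The file name below is a placeholder – use the actual file name the script lists in its output:

```powershell
# 'DWHealthCheckReport.xml' is a placeholder file name - use the real one
# from the script's output file list.
[xml]$report = Get-Content -Path 'C:\Temp\Reports\DWHealthCheckReport.xml'

# Browse the top-level nodes to see what data is available.
$report.DocumentElement.ChildNodes | Select-Object -ExpandProperty Name
```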

The script writes the list of files it generated as output:


Possible Areas for Improvement

Due to the limited environments that I have access to, I was unable to test this script in environments where the Data Warehouse DB is installed on a named SQL instance or a SQL Always-On setup. So if your environment is set up this way, please contact me and let me know what's working and what's not. This issue is now fixed in version 1.1 (updated 19/06/2015).


I couldn't have done this by myself. I'd like to thank the following people (in random order) who helped me with testing and provided feedback:

Folks from Squared Up: Glen Keech, Richard Benwell

SCCDM MVPs: Marnix Wolf, David Allen, Daniele Grandini, Cameron Fuller, Simon Skinner, Scott Moss, Fleming Riis

And, the legendary Kevin Holman

I'd also like to thank all the people who have indirectly contributed to this tool (I included links to their awesome articles and publications in the report). Some of them are already listed above, but here are a few more: Paul Keely (author of the SQL Server Guide for System Center 2012 whitepaper), Michel Kamp, Bob Cornelissen, Stefan Stranger and Oleg Kapustin.


You can download the script from the link below. Please place the 2 files from the zip file in the same directory:


Lastly, as always, please feel free to contact me if you’d like to provide feedback.



Collecting ConfigMgr Logs To Microsoft Operation Management Suite – The NiCE way

Written by Tao Yang


I have been playing with Azure Operational Insights for a while now, and I am really excited about the capabilities and capacity it brings. I haven't blogged anything about OpInsights until now, largely because of all the wonderful articles that my MVP friends have already written, e.g. the OpInsights series from Stanislav Zhelyazkov (he's written 18 parts so far!):

Back in my previous life, when I was working on ConfigMgr for a living, THE one thing I hated the most was reading log files – not to mention all the log file names, locations, etc. that I had to memorise. I remember there was even a spreadsheet listing all the log files for ConfigMgr. Even now, when I see a ConfigMgr person, I always ask "How many log files did you read today?" – as a joke. However, sometimes, when sh*t hits the fan, people won't see the funny side of it. In my opinion, based on my experience working on ConfigMgr, I see the following challenges in ConfigMgr log files:

There are too many of them!

And even for the same component, there can be multiple log files (e.g. for the software update point, there are wsyncmgr.log, WCM.log, etc.). Often administrators have to cross-check entries from multiple log files to identify the issue.

Different components place log files in different locations

Site servers, clients, management points, distribution points, PXE DPs, etc. all save logs to different locations – not to mention that when some of these components co-exist on the same machine, the log locations are different again (e.g. the client log location on a site server is different from that on normal clients).

Log file size is capped

By default, the size of each log file is capped at 2.5MB (I think). Although a copy of the previous log is kept (renamed to a .lo_ file), that still only holds a total of 5MB of log data for the particular component. In a large or busy environment, or when something is not going right, these 2 files (.log and .lo_) probably only hold a few hours of data. Sometimes, by the time you realise something went wrong and you need to check the logs, they have already been overwritten.

It is difficult to read

You need a special tool (CMTrace.exe) to read these log files. If you see someone reading ConfigMgr log files using Notepad, they're either really, really good, or they haven't been working on ConfigMgr for long. The majority of us rely on CMTrace.exe (or Trace32.exe in ConfigMgr 2007) to read log files. When you log on to a computer and want to read some log files (e.g. client log files), you always have to find a copy of CMTrace.exe somewhere on the network and copy it over to the computer you are working on. In my lab, I even created an application in ConfigMgr to copy CMTrace.exe to C:\Windows\System32 and deployed it to every machine – so I don't have to manually copy it again and again. I'm sure this is a common practice and many people have done this before.

Logs are not centralised

In a large environment where your ConfigMgr hierarchy consists of hundreds of servers, it is a PAIN to read logs on all of these servers. For example, when something bad happens with OSD and PXE, the results can be catastrophic (some of you may still remember what an incorrectly advertised OSD task sequence did to a big Australian bank a few years back). Based on my own experience, I have seen a support team needing to check the SMSPXE.log on as many as a few hundred PXE-enabled distribution points, within a very short time window (before the logs got overwritten). People had to connect to each individual DP and read the log files one at a time. In situations like this, if you go up to them and ask "How many logs have you read today?", I'm sure it wouldn't go down too well.

It would be nice if…

When Microsoft released Operational Insights (OpInsights) in preview, the first thing that came to my mind was that it would be very nice if we could collect and process ConfigMgr log files in OpInsights. This would bring the following benefits to ConfigMgr administrators:

  • Logs are centralised and searchable
  • Much longer retention period (up to 12 months)
  • No need to use special tools such as CMTrace.exe to read the log files
  • Being able to correlate data from multiple log files and multiple computers when searching, making the administrator's troubleshooting experience much easier.



A line of a ConfigMgr log entry consists of many pieces of information, and the server and client log files have different formats, e.g.:

Server Log file:


Client Log File:


Before sending the information to OMS, we must first capture only the useful information from each entry and transform it into a more structured format (such as the Windows event log format), so these fields become searchable once stored and indexed in your OMS workspace.

No Custom Solution Packs available

Since OMS is still very new, there aren't many Solution Packs available (they were called Intelligence Packs in the OpInsights days). Microsoft has not yet released any SDKs / APIs for partners and 3rd parties to author and publish Solution Packs. Therefore, at this stage, in order to send ConfigMgr log file entries to OMS, we will have to utilise our old friend OpsMgr 2012 (with OpInsights integration configured), leveraging the power of OpsMgr management packs to collect and process the data before sending it to OMS (via OpsMgr).

OpsMgr Limitations

As we all know, OpsMgr provides a "Generic Text Log" event collection rule. But unfortunately, this native event data source is not capable of accomplishing what I am trying to achieve here.

NiCE Log File Management Pack

NiCE is a company based in Germany. They offer a free OpsMgr management pack for log file monitoring. There are already many good blog articles written about this MP, so I will not write an introduction here. If you have never heard of or used it, please read the articles listed below, then come back to this post:

SCOM 2012 – NiCE Log File Library MP Monitoring Robocopy Log File – By Stefan Roth

NiCE Free Log File MP & Regex & PowerShell: Enabling SCOM 2 Count LOB Crashes – By Marnix Wolf

SCOM – Free Log File Monitoring MP from NiCE –By Kevin Greene

The beauty of the NiCE Log File MP is that it is able to extract the important information (as I highlighted in the screenshots above) using Regular Expressions (RegEx), and present the data in a structured way (in XML).

In Regular Expressions, we are able to define named capturing groups to capture data from a string – similar to storing information in a variable when it comes to programming. I'll use a log file entry from both ConfigMgr client and server logs, and my favourite Regular Expression tester site, to demonstrate how to extract the information I highlighted above.

Server Log entry:

Regular Expression:


Sample Log entry:

Execute query exec [sp_CP_GetPushRequestMachine] 2097152112~  $$<SMS_CLIENT_CONFIG_MANAGER><06-07-2015 13:11:09.448-600><thread=6708 (0x1A34)>

RegEx Match:
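The MP's actual regular expression is shown in the screenshot above. To illustrate the named capturing group idea outside the MP, here is a simplified pattern of my own (not the one shipped in the MP) matching the sample server log entry in PowerShell:

```powershell
$entry = 'Execute query exec [sp_CP_GetPushRequestMachine] 2097152112~  $$<SMS_CLIENT_CONFIG_MANAGER><06-07-2015 13:11:09.448-600><thread=6708 (0x1A34)>'

# Simplified illustrative pattern with named capturing groups.
$pattern = '^(?<Message>.*?)\$\$<(?<Component>[^>]+)><(?<DateTime>[^>]+)><thread=(?<Thread>[^>]+)>$'

if ($entry -match $pattern) {
    $Matches['Component']   # SMS_CLIENT_CONFIG_MANAGER
    $Matches['DateTime']    # 06-07-2015 13:11:09.448-600
    $Matches['Thread']      # 6708 (0x1A34)
}
```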


Client Log entry:

Regular Expression:


Sample Log entry:

<![LOG[Update (Site_9D4393B0-A197-4FC8-AF8C-0BC42AD2F33F/SUM_01a0100c-c3b7-4ec7-866e-db8c30111e80) Name (Update for Windows Server 2012 R2 (KB3045717)) ArticleID (3045717) added to the targeted list of deployment ({C5B54000-2018-4BD9-9418-0EFDFBB73349})]LOG]!><time="20:59:35.148-600" date="06-05-2015" component="UpdatesDeploymentAgent" context="" type="1" thread="3744" file="updatesmanager.cpp:420">

RegEx Match:
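Again, the MP's real regular expression is in the screenshot; the sketch below uses my own simplified pattern against a shortened client log entry, just to show the named groups at work:

```powershell
$entry = '<![LOG[Update for Windows Server 2012 R2 (KB3045717) added to the targeted list]LOG]!><time="20:59:35.148-600" date="06-05-2015" component="UpdatesDeploymentAgent" context="" type="1" thread="3744" file="updatesmanager.cpp:420">'

# Simplified illustrative pattern for the client log format.
$pattern = '^<!\[LOG\[(?<Message>.*)\]LOG\]!><time="(?<Time>[^"]+)" date="(?<Date>[^"]+)" component="(?<Component>[^"]+)" context="(?<Context>[^"]*)" type="(?<Type>[^"]+)" thread="(?<Thread>[^"]+)" file="(?<File>[^"]+)">$'

if ($entry -match $pattern) {
    $Matches['Component']   # UpdatesDeploymentAgent
    $Matches['Date']        # 06-05-2015
    $Matches['Time']        # 20:59:35.148-600
}
```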


NiCE Log MP Regular Expression Tester

The NiCE Log MP also provides a Regular Expression Tester UI in the management pack. The good thing about this RegEx tester is that it also shows you what the management pack module output would be (in XML and XPath):


Now, I hope you get the bigger picture of what I want to achieve: use OpsMgr 2012 and the NiCE Log File MP to collect various ConfigMgr 2012 log files (both client and server logs), and then send them over to OMS via OpsMgr. It is now time to talk about the management packs.

Management Pack

Obviously, the NiCE Log File MP is required. You can download it from NiCE's customer portal once registered. This MP must first be imported into your management group.

Additionally, your OpsMgr management group must be configured to connect to an Operational Insights workspace (or "System Center Advisor" if you haven't patched your management group in the last few months). However, what I'm about to show you can also store the data in your on-prem OpsMgr operational and data warehouse databases. So even if you don't use OMS (yet), you are still able to leverage this solution to store your ConfigMgr log data in the OpsMgr databases.

Management Pack 101

Before I dive into the MP authoring and configuration, I'd like to first spend some time going through some management pack basics – at the end of the day, not everyone working in System Center writes management packs. Going through some of the basics will help people who haven't previously done any MP development work understand the rest better.

In OpsMgr, there are 3 types of workflows:

  • Object Discoveries – for discovering instances of classes defined in management packs, and their properties.
  • Monitors – responsible for the health states of monitored objects. Can be configured to generate alerts.
  • Rules – not responsible for object health states. Can be used to collect information, and are also able to generate alerts.

Since our goal is to collect information from ConfigMgr log files, it is obvious we are going to create some rules to achieve this goal.

A rule consists of 3 types of member modules:

  • One(1) or more Data Source modules (beginning of the workflow)
  • Zero(0) or One(1) Condition Detection Module (optional, 2nd phase of the workflow)
  • One(1) or more write action modules (Last phase of the workflow).

To map the rule structure to our requirement, the rules we are going to author (one rule for each log file) are going to look something like this:

  • Data Source module: leverages the NiCE Log MP to read and process ConfigMgr log entries using Regular Expressions.
  • Condition Detection module: maps the output of the Data Source module into the Windows event log data format.
  • Write Action modules: write the Windows event log formatted data to various data repositories. Depending on your requirements, this could be any combination of the 3 data repositories:
    • OpsMgr Operational DB (on-prem, short-term storage, but able to access the data from the Operations Console)
    • OpsMgr Data Warehouse DB (on-prem, long-term storage, able to access the data via OpsMgr reports)
    • OMS workspace (cloud based, long-term or short-term storage depending on your plan, able to access the data via the OMS portal and via the Azure Resource Manager API)


Using NiCE Log MP as Data Source

Unfortunately, we cannot build our rules 100% from the OpsMgr operations console. The NiCE Log File MP does not provide any event collection rules in the UI – there are only alert rules and performance collection rules to choose from:


This is OK because, as I explained before, rules consist of 3 types of modules. An alert rule generated in this UI has 2 member modules:

  • A Data Source module (called 'NiCE.LogFile.Library.Advanced.Filtered.LogFileProvider.DS') to collect the log entries and process them using the RegEx provided by you.
  • A Write Action module (called 'System.Health.GenerateAlert') to generate alerts based on the data passed from the data source module.

What we can do is take the same data source module from such an alert rule (and its configuration), then build our own rule with a condition detection module (called 'System.Event.GenericDataMapper') to map the data into the Windows event log format, and use any of these 3 write action modules to store the data:

  • Write to Ops DB: ‘Microsoft.SystemCenter.CollectEvent’
  • Write to DW DB: ‘Microsoft.SystemCenter.DataWarehouse.PublishEventData’
  • Write to OMS (OpInsights): ‘Microsoft.SystemCenter.CollectCloudGenericEvent’

However, to go one step further: since there are so many input parameters to specify for the data source module, and I want to hide the complexity from the users (your System Center administrators), I have created my own data source modules and "wrapped" the NiCE data source module 'NiCE.LogFile.Library.Advanced.Filtered.LogFileProvider.DS' inside them. By doing so, I am able to hard-code some common fields that are the same across all the rules we are going to create (i.e. the regular expression, etc.). Because the regular expressions for ConfigMgr client logs and server logs are different, I have created 2 generic data source modules, one for each type of log, that you can use when creating your event collection rules.

When creating your own event collection rules, you will only need to provide the following information:

  • IntervalSeconds: how often the NiCE data source should scan the particular log.
  • ComputerName: the name of the computer where the log is located. This could be a property of the target class (or a class in the hosting chain).
  • EventID: the event ID to assign to the processed log entries (as we are formatting the log entries as Windows event log entries).
  • Event Category: a numeric value. Please refer to the MSDN documentation for the possible values: it is OK to use the value 0 (to ignore).
  • Event Level: a numeric value. Please refer to the MSDN documentation for the possible values:
  • LogDirectory: the directory where the log file is located (i.e. C:\Windows\CCM\Logs).
  • FileName: the name of the log file (i.e. execmgr.log).
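Pulling this together, a collection rule in an unsealed MP would be shaped roughly like the sketch below. The module TypeIDs are the ones named in this post, but the rule ID, target class, alias prefixes, literal values and the exact configuration schema of the wrapper data source are illustrative assumptions – the real schema is defined in the sealed library MP.

```xml
<Rule ID="Demo.Collect.Wsyncmgr.Log" Enabled="false" Target="CMExt!ConfigMgr.SiteServer.Extended" ConfirmDelivery="true" Remotable="true" Priority="Normal" DiscardLevel="100">
  <Category>EventCollection</Category>
  <DataSources>
    <!-- Wrapper data source from the ConfigMgr Logs Collection Library MP -->
    <DataSource ID="DS" TypeID="CMLog!ConfigMgr.Log.Collection.Library.ConfigMgr.Server.Log.DS">
      <IntervalSeconds>300</IntervalSeconds>
      <ComputerName>$Target/Host/Property[Type="Windows!Microsoft.Windows.Computer"]/PrincipalName$</ComputerName>
      <EventID>10001</EventID>
      <EventCategory>0</EventCategory>
      <EventLevel>4</EventLevel>
      <LogDirectory>D:\ConfigMgr\Logs</LogDirectory>
      <FileName>wsyncmgr.log</FileName>
    </DataSource>
  </DataSources>
  <WriteActions>
    <!-- Any combination of the 3 write actions discussed earlier -->
    <WriteAction ID="WriteToOpsDB" TypeID="SC!Microsoft.SystemCenter.CollectEvent" />
    <WriteAction ID="WriteToDW" TypeID="SCDW!Microsoft.SystemCenter.DataWarehouse.PublishEventData" />
    <WriteAction ID="WriteToOMS" TypeID="IPTypes!Microsoft.SystemCenter.CollectCloudGenericEvent" />
  </WriteActions>
</Rule>
```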


So What am I Offering?

I’m offering 3 management pack files to get you started:

ConfigMgr.Log.Collection.Library (ConfigMgr Logs Collection Library Management Pack)

This sealed management pack provides the 2 data source modules that I’ve just mentioned:

  • ConfigMgr.Log.Collection.Library.ConfigMgr.Client.Log.DS (Display Name: ‘Collect ConfigMgr 2012 Client Logs Data Source’)
  • ConfigMgr.Log.Collection.Library.ConfigMgr.Server.Log.DS (Display Name: ‘Collect ConfigMgr 2012 Server Logs Data Source’)

When you create your own management pack where your collection rules are going to be stored, you will need to reference this MP and use the appropriate data source module.

ConfigMgr.Log.Collection.Dir.Discovery (ConfigMgr Log Collection ConfigMgr Site Server Log Directory Discovery)

This sealed management pack is optional, you do not have to use it.

As I mentioned earlier, you will need to specify the log directory when creating a rule. The problem is, when you are creating a rule for a ConfigMgr server log file, specifying a static value is probably not ideal, because in a large environment with multiple ConfigMgr sites, the ConfigMgr install directory on each site server could be different. Unfortunately, the ConfigMgr 2012 management pack from Microsoft does not define and discover the install folder or log folder as a property of the site server:


To demonstrate how we can overcome this problem, I have created this management pack. In it, I have defined a new class called "ConfigMgr 2012 Site Server Extended", based on the existing class defined in the Microsoft ConfigMgr 2012 MP, and defined and discovered an additional property called "Log Folder":


By doing so, we can variablise the “LogDirectory” parameter when creating the rules by passing the value of this property to the rule (I’ll demonstrate later).
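As a preview of that technique, the "LogDirectory" parameter in the rule configuration can reference the discovered property instead of a literal path. The class and property IDs below are illustrative, since the real IDs live in the sealed MP:

```xml
<!-- Illustrative IDs: substitute the real class/property IDs from the sealed MP -->
<LogDirectory>$Target/Property[Type="CMExt!ConfigMgr.SiteServer.Extended"]/LogFolder$</LogDirectory>
```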

Again, as I mentioned earlier, this MP is optional; you do not have to use it. When creating a rule, you can hard-code the "LogDirectory" parameter using the most common value in your environment, and use overrides to change this parameter for any servers that have different log directories.

ConfigMgr Logs Collection Demo Management Pack (ConfigMgr.Log.Collection.Demo)

In this unsealed demo management pack, I have created 2 event collection rules:

Collect ConfigMgr Site Server Wsyncmgr.Log to OpsMgr Operational DB Data Warehouse DB and OMS rule

This rule is targeting the “ConfigMgr 2012 Site Server Extended” class defined in the ‘ConfigMgr Log Collection ConfigMgr Site Server Log Directory Discovery’ MP, and collects Wsyncmgr.Log to all 3 destinations (Operational DB, Data Warehouse DB, and OMS).

Collect ConfigMgr Client ContentTransferManager.Log to OpsMgr Data Warehouse and OMS rule

This rule targets the "System Center ConfigMgr 2012 Client" class, which is defined in the ConfigMgr 2012 (R2) Client Management Pack (which I also developed).

This rule collects the ContentTransferManager.log only to Data Warehouse DB and OMS.

Note: I'm targeting this class instead of the ConfigMgr client class defined in the Microsoft ConfigMgr 2012 MP because my MP has already defined and discovered the log location. When you are writing your own rules for ConfigMgr clients, you don't have to target this class, as most clients should have their logs located in the C:\Windows\CCM\Logs folder (except on ConfigMgr servers).

Note: There are a few other good examples of how to write event collection rules for OMS; you may also find these articles useful:


What Do I get in OMS?

After you've created your collection rules and imported them into your OpsMgr management group, within a few minutes the management packs will have reached the agents, which will start processing the logs and sending the data back to OpsMgr. OpsMgr then sends the data to OMS. It will take another few minutes for OMS to process the data before it becomes searchable.

You will then be able to search the events:

Client Log Example:


Server Log Example:


As you can see, each field identified by the Regular Expression in the NiCE data source module is stored as a separate parameter in the OMS log entry. You can also perform more complex searches. Please refer to the articles listed below for more details:

By Daniele Muscetta:

Official documentation:

Download MP

You may download all 3 management packs from TY Consulting’s web site:

What’s Next?

I understand writing management packs is not a task for everyone; currently, you will need to write your own MP to capture the log files of your choice. I am working on an automated solution: I am getting very close to releasing the OpsMgrExtended PowerShell / SMA module that I've been working on since August last year. In this module, I provide a way to automate OpsMgr rule creation using PowerShell. I will write a follow-up post after the release of the OpsMgrExtended module to go through how to use PowerShell to create these ConfigMgr log collection rules. So, please stay tuned.

Note: I'd like to warn everyone who's going to implement this solution: please do not leave these rules enabled by default when you've just created them. You need to have a good understanding of how much data is being sent to OMS, as there is a cost associated with the amount of data sent, as well as an impact on your Internet link. So please make the rules disabled by default, and start with a smaller group.

Lastly, I'd like to thank NiCE for producing such a good MP, and making it free to the community.

How to Create a Squared Up Visio Dashboard for an Existing Distributed Application

Written by Tao Yang


OK, it has been over a month since my last blog post. Not that I've been lazy – I've actually been crazily busy. As you may know, I started working for Squared Up after Ignite. So, this is another blog about Squared Up – this time, I'll touch on the Visio dashboard.

If you haven’t heard of or played with Squared Up’s Visio Dashboard plug-in, you can find a good demo by Squared Up’s founder, Richard Benwell, in one of Microsoft Ignite’s SCOM sessions here:

If you already have a Visio diagram for your application (one that’s monitored by OpsMgr), it is really quick and easy to import it into Squared Up as a dashboard (as Richard demonstrated in the Ignite session). However, what if you don’t have a Visio diagram for a particular application you want to create a dashboard for (i.e. an off-the-shelf application such as AD, ConfigMgr, etc.)? In this case, you can manually create the Visio diagram, and hopefully you are able to find the relevant stencils for your applications. But this can take a lot of time. If you are like me and really hate drawing Visio diagrams, you probably won’t enjoy this process too much.

In this post, I’ll show you how to quickly produce a Visio dashboard in Squared Up for an existing application that’s monitored by SCOM. I’ll use the Windows Azure Pack Distributed Application from the community WAP management pack (developed by Oskar Landman from Inovativ) as an example:


01. In OpsMgr console, open the diagram view for the DA of your choice and export it to a Visio .vdx file:


Click OK if you get a message warning you there are too many objects included in this DA:


By default, the diagram view will only show the top level objects. However, you can keep drilling down the diagram, until you get a desired diagram (that you wish to display in Squared Up). In this demo, I will just use the diagram with top level objects:


As shown above, click the export button to export this diagram to a Visio diagram (.vdx) file.

02. Preparing the Visio diagram (.vsdx) from the .vdx file:

When you open the .vdx file, and zoom in, it looks exactly the same as the OpsMgr diagram view:


Firstly, you will need to remove the health state icons (the green ticks and red crosses in this case). A .vdx file is read-only in Visio, so after the icons have been removed, save it as a .vsdx file. The .vsdx file now looks like this:


Now, we need to import SCOM monitoring objects data into this Visio diagram. Squared Up has written a good user guide on how to generate an Excel spreadsheet for the monitoring object information from Squared Up console. You can find this article here:

However, using the Squared Up console as mentioned above, you have to manually look up every single monitoring object that is displayed in the Visio diagram. This can be very time consuming if you have a lot of objects in your diagram. In order to simplify this process, I have created a PowerShell script called Export-DAMembers.ps1 to get the information for the members of a Distributed Application and export the data to a CSV file.

You can download this script from HERE.

Note: This script does not require the native OpsMgr PowerShell module to run; however, it does require the OpsMgr 2012 SDK assemblies. If you are running it on an OpsMgr management server, a web console server, or a computer that has the operations console installed, you don’t need to do anything else and can run the script straight away. But if you are running it on a computer that does not meet any of these requirements, you will need to copy the 3 OpsMgr 2012 SDK DLLs to the same folder where the script is located. These 3 DLLs are:

  • Microsoft.EnterpriseManagement.Core.dll
  • Microsoft.EnterpriseManagement.OperationsManager.dll
  • Microsoft.EnterpriseManagement.Runtime.dll


You can find them on a management server, located at <OpsMgr install directory>\Server\SDK Binaries
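If you are curious how a script can pick these DLLs up from its own folder, the loading logic can be sketched in PowerShell like this. This is only an illustration (not the actual code from Export-DAMembers.ps1); the DLL names come from the list above, and everything else is an assumption:

```powershell
# Sketch: load the OpsMgr 2012 SDK assemblies from the script's own folder.
$sdkDlls = @(
    'Microsoft.EnterpriseManagement.Core.dll',
    'Microsoft.EnterpriseManagement.OperationsManager.dll',
    'Microsoft.EnterpriseManagement.Runtime.dll'
)
foreach ($dll in $sdkDlls)
{
    # $PSScriptRoot is the folder containing the running script (PowerShell 3.0+)
    $dllPath = Join-Path -Path $PSScriptRoot -ChildPath $dll
    if (Test-Path $dllPath)
    {
        Add-Type -Path $dllPath
    }
    else
    {
        Write-Error "Required SDK assembly not found: $dllPath"
    }
}
```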

I have included a help section in the script, as well as for all the functions within it, so I won’t go through how to use it here. You can simply open the script in a text editor and read it if you like:


In order to export the information we need for the Visio dashboard, we only need the Display Name and the Monitoring Object Id. I’m running the script with the following parameters:

.\Export-DAMembers.ps1 -SDK "<SCOM Management Server Name>" -DADisplayName "Windows Azure Pack" -ExportProperties ("DisplayName", "Id") -Path C:\Temp\DAExport1.csv -Verbose


Note: As you can see, because I’m only going to display the top-level objects in the dashboard, I did not have to use a recursive lookup, so only 6 objects were returned. If I run the script again with the “-Recursive $true” parameter, it returns all objects that are members of the DA (143 in total):
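For reference, the recursive run would look something like this (the output path DAExport2.csv is just an example):

```powershell
# Recursive lookup: export every member of the DA, not just the top-level objects
.\Export-DAMembers.ps1 -SDK "<SCOM Management Server Name>" -DADisplayName "Windows Azure Pack" `
    -ExportProperties ("DisplayName", "Id") -Recursive $true -Path C:\Temp\DAExport2.csv -Verbose
```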


The total number matches the previous warning message in the OpsMgr diagram view:


Once the CSV is exported, open it in Excel:


In order for Squared Up to understand the data, we will need to change the title for both columns:

  • Change DisplayName to ScomName
  • Change Id to ScomId


Now, save it as an Excel Spreadsheet (.xlsx file).
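If you’d rather not rename the columns by hand, the same renaming can be done in PowerShell before you open the file in Excel. A sketch, assuming the CSV paths shown (adjust them for your environment):

```powershell
# Sketch: rename the two column headers so Squared Up can map the data.
# DisplayName becomes ScomName, Id becomes ScomId.
Import-Csv -Path C:\Temp\DAExport1.csv |
    Select-Object @{Name = 'ScomName'; Expression = { $_.DisplayName }},
                  @{Name = 'ScomId';   Expression = { $_.Id }} |
    Export-Csv -Path C:\Temp\DAExport1-Renamed.csv -NoTypeInformation
```

You will still need to open the renamed CSV in Excel and save it as an .xlsx file.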

We can now import the data from the Excel spreadsheet into the Visio diagram. The guide on Squared Up’s site documents this very well, so I won’t go through it again here.

After I’ve mapped the data for each object in the Visio diagram, it looks like this:


I’ve then hidden the data in Visio, exported it as a .SVG file, and produced a Visio dashboard in Squared Up using the SVG file. The final piece looks like this:


Which is very similar to the diagram view in OpsMgr console:



If you already have Squared Up in your environment, I hope you find this blog post useful. As I demonstrated, it is really easy to create a Squared Up dashboard for your existing Distributed Applications – and I’ve already done the hard work for you (creating the script for looking up monitoring object IDs).

As we all know, Squared Up is based on HTML 5 and is cross-platform, so you can use it in browsers other than IE, as well as on mobile devices such as an Android tablet. The picture below is my Lenovo Yoga Tab 2 Android tablet displaying the Squared Up WAP dashboard I’ve just created.


New MP: OpsMgr Health State Synchronization Library

Written by Tao Yang

Background

As I mentioned in previous blog posts, I will continue blogging on the topic of managing multiple OpsMgr management groups, a topic that keeps coming up in private conversations between us SCCDM MVPs.

Previously, I have written 2 posts demonstrating how to use Squared Up dashboards to access data from foreign management groups using the SQL and Iframe plugins. Now that I’ve covered the presentation layer (using Squared Up), I’d like to dig deeper into this subject.

I wanted to be able to synchronise the health state of a monitoring object managed by a remote management group into the local management group, so it can be part of the health models users are building (i.e. part of a Distributed Application, or simply a dashboard). I’ve had this idea in my head for a while now, and over the last month or so I finally managed to produce a management pack that enables OpsMgr users to do exactly that.


It is common to have multiple OpsMgr management groups in large organisations. When designing Distributed Applications or creating custom dashboards, one of the limitations is that OpsMgr users can only select monitoring objects within the local management group to be part of the health model. This becomes an issue when users want to design a Distributed Application or dashboard that includes components monitored by different OpsMgr management groups.

The OpsMgr Health State Synchronization Library management pack is designed to provide a workaround for this limitation. It provides a template that enables OpsMgr users to create monitoring objects named “Health State Watcher”, hosted by the All Management Servers Resource Pool. Health State Watcher objects have monitors configured to query the health state of monitoring objects located in a remote management group using the OpsMgr SDK.


As shown in the diagram above, an instance of Health State Watcher can be created for each monitoring object of the user’s choice from a remote management group. Each Health State Watcher object periodically updates its own health state based on the health state of the remote monitoring object it is watching (every 5 minutes by default). As shown above, the Health State Watcher can query the health state of any monitoring object from the remote management group (i.e. a Windows Computer object, a Distributed Application, or any other type of monitoring object).

This management pack adds 4 unit monitors to the Health State Watcher class. They query the health state of the Availability, Configuration, Performance and Security aggregate monitors of the remote monitoring object respectively.

Once the Health State Watcher objects are created and correctly configured, they can be used to display the health state of the remote monitoring objects in a dashboard or Distributed Application hosted by the local management group.
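To illustrate what the watcher’s monitors do conceptually, here is a rough PowerShell sketch of reading a remote object’s health state via the SDK. This is not the MP’s actual code (the MP implements this inside its own data source modules); the server name, SDK path and placeholder ID are all assumptions:

```powershell
# Load the OpsMgr 2012 SDK assemblies (adjust the path for your installation)
$sdkPath = 'C:\Program Files\Microsoft System Center 2012 R2\Operations Manager\Server\SDK Binaries'
Add-Type -Path (Join-Path $sdkPath 'Microsoft.EnterpriseManagement.Core.dll')
Add-Type -Path (Join-Path $sdkPath 'Microsoft.EnterpriseManagement.OperationsManager.dll')
Add-Type -Path (Join-Path $sdkPath 'Microsoft.EnterpriseManagement.Runtime.dll')

# Connect to a management server in the remote management group
$remoteServer = 'OMMS01.remote.local'                      # hypothetical server name
$objectId = [Guid]'00000000-0000-0000-0000-000000000000'   # replace with the remote monitoring object ID

$mg  = New-Object Microsoft.EnterpriseManagement.ManagementGroup($remoteServer)
$obj = $mg.GetMonitoringObject($objectId)

# The watcher maps this value back onto its own unit monitors
$obj.HealthState    # Success, Warning, Error or Uninitialized
```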

How Do I Use this MP?

This management pack provides a management pack template for OpsMgr users to create the Health State Watcher instances from the OpsMgr operations console.


The following information must be provided when creating an instance using the management pack template:

  • Display Name
  • Description (Optional)
  • Unsealed Management Pack (where the MP elements will be saved)
  • One of the management servers from the remote management group
  • Monitoring Object ID of the monitoring object from the remote management group
  • Run-As account for SDK connection to the remote management group

Please follow the steps listed below to create a template instance.

1. Click the “Add Monitoring Wizard” from the Authoring pane under “Management Pack Template”


2. Choose “Cross Management Group Health State Monitoring” from the list


3. In General Property page, enter the display name, description and select an unsealed MP from the drop-down list:


4. In the Parameter Configuration Page, enter the following information:


  • Management server from the remote management group
  • Source Instance ID (monitoring object ID) of the monitoring object from the remote management group

Note: there are multiple ways to find the monitoring object ID in OpsMgr. Please refer to this article for possible ways to locate the ID:

  • Select the Run-As account that was created prior to running this wizard.

Note: The Run-As account must meet the following requirements:

    1. It must have at least Operator access in the remote management group.
    2. It must be distributed to all management servers.
    3. It must have the “Log on locally” right on all management servers. This is a general requirement for all Windows Run-As accounts in OpsMgr: although the account will never be used to log on locally to the management servers, without this right the workflows that use this Run-As account will not work.

5. Confirm all information is correct in the Summary page, and click on “Create”.


6. After a few minutes, the health state of the Health State Watcher instance should be synchronized from the remote monitoring object:

Overall state:


Health Explorer:


Source monitoring object (from remote MG):


Sample Distributed Application

The diagram below demonstrates how to utilize the health state watcher in a Distributed Application:


In the demo environment, the 2 domain controllers (AD01 and AD02) are monitored by a management group located in the on-prem network. There is another domain controller located in Microsoft Azure IaaS, and it is monitored by a separate management group in Azure. A Health State Watcher object was created previously to synchronize the health state of the Azure DC’s Windows Computer object.

Sample Dashboard (Using Squared Up)


As shown above, in the left section, the Health State Watcher object for the Azure-based domain controller is pinned at the correct location on a World Map dashboard. The health states of the individual domain controllers are also listed on the right.


The workflows in this MP are actually pretty simple, but it has taken me A LOT of time to finish it, largely because of the template UI. Unfortunately, I couldn’t find any official guides on how to build template UIs using C#. I’d like to thank my friend and fellow SCCDM MVP Daniele Grandini (blog, twitter) for testing and helping me debug this MP when I hit a road block with the template UI.

Where can I download this management pack?

As I mentioned in my previous post, from now on, any new tools such as management packs, modules and scripts will be released under TY Consulting, and download links will be provided from its website.

You can download this MP for free from TY Consulting’s website; however, to help me promote and grow my business, I am asking you to provide your name, email and company name in a form, and the download link will be emailed to you by the system.

Lastly, as always, any feedback is welcome. Please feel free to drop me an email if you have anything to share.

Next Step in My Career

Written by Tao Yang


Full Logo Raw Long White Background

Back in September 2011, after a short and unpleasant contracting experience, I joined a large Australian retailer as a senior systems engineer, focusing on their System Center infrastructure. I chose this company because the office location is close to home, the role was interesting (with a VERY large System Center environment to play with), and my wife and I were expecting a baby, so I thought a permanent role suited me better. Now, looking back on the 3.5 years I spent there, I was involved in multiple System Center implementation and upgrade projects, wrote countless management packs, trained many of my colleagues in various System Center products, and, most importantly, continued blogging here and earned my System Center Cloud and Datacenter Management MVP title.

As all good things come to an end, I have realised it is time for me to move on, and I tendered my resignation 2 weeks ago. My last day as a full-time employee there is Friday 1st of May 2015.

Last year, shortly after I became an MVP, I attempted to partner with another MVP to start a company called Sparq Consulting. Unfortunately, Sparq never took off or worked out for me. To this date, it is still nothing but a verbal agreement; I have no legal obligations to Sparq, nor have I signed any contracts with them. Therefore, I have decided to separate from Sparq and become an independent / freelance consultant.

After meeting with my accountant, I was told I needed to register a company. As I am very bad with names, my wife suggested I just use my initials as the company name. Therefore, I named my company TY Consulting.

Moving forward, I am still going to keep blogging here, but in order to help promote TY Consulting, any new tools such as management packs, modules and scripts will be released under TY Consulting, and download links will be provided from its website. Over the last month or so, I’ve been working on an OpsMgr management pack that enables OpsMgr users to synchronise the health state of a monitoring object managed by a remote management group. I have already released this management pack, and I will write about it in the next blog post shortly after this one.

Lastly, and most importantly, for all the folks out there in the broader System Center community, please DO contact me when you are looking for System Center consultants, I am now open for business!

2015 Global Azure Bootcamp – Melbourne Event

Written by Tao Yang
2015 Azure BootCamp

Those who are actively engaged in the Microsoft System Center and Azure community may already be aware that there is a global Azure Bootcamp event taking place later this month (April 2015).

This year, the Azure bootcamp is going to be held at 195 confirmed locations on Saturday 25th April. More information about this global event can be found here:

My fellow SCCDM MVP Daniel Mar is the organizer of the Melbourne event. Daniel has already put in a great effort organising this event (a BIG thank-you to Daniel), and we now have a great line-up for Melbourne, with a total of 11 MVPs presenting:

  • James Bannan – Enterprise Client Management
  • David O’Brien – System Center Cloud and Datacenter Management
  • Orin Thomas – Consumer Security
  • Tao Yang – System Center Cloud and Datacenter Management
  • Dan Kregor – System Center Cloud and Datacenter Management
  • Daniel Mar – System Center Cloud and Datacenter Management
  • Mahesh Krishnan – Microsoft Azure
  • Mitch Denny – Visual Studio ALM
  • John Azariah – Microsoft Azure
  • Yaniv Rodesnki – Microsoft Azure
  • Bill Chesnut – Microsoft Integration

This event is going to be held at the Saxons Training Facilities in the Melbourne CBD (Level 8, 500 Collins Street, Melbourne). All 4 Australia-based System Center Cloud and Datacenter Management MVPs (David, Daniel, Dan and myself), as well as some other big names in the System Center and IT Pro community, such as Orin Thomas and James Bannan, will be presenting at this event.

Please check out the event site for session details. This is a free event; please register if you’d like to attend. (Please note the available spots are limited, so only register if you are certain that you will be attending.)

Lastly, we acknowledge that 25th April 2015 is the 100th anniversary of ANZAC Day. Out of respect for the Australian and New Zealand soldiers, we looked at the possibility of changing the event day for Melbourne, but unfortunately, given it is part of a global event, we were unable to do so. However, we will show our respect to the fallen soldiers by reciting the Ode of Remembrance and playing the Last Post.