Tag Archives: MimboloveMP Authoring

Recordings Available for the VSAE MP Authoring Webinar with Squared Up

Written by Tao Yang

Last night, I conducted 2 webinars with Richard Benwell of Squared Up on MP Authoring. I recorded both sessions from my computer using Camtasia, and the recordings for both sessions are now available on Squared Up’s YouTube channel:

First Session: https://www.youtube.com/watch?v=oH035DgbUSQ

Second Session: https://www.youtube.com/watch?v=Xu3yRE770QA

Lastly, the workshop guide, slide deck and the sample VSAE project are also available on GitHub:

https://github.com/tyconsulting/SquaredUp-VSAE-Workshop

Upcoming Webinar on MP Authoring Using VSAE

Written by Tao Yang

Last week, I conducted a workshop with Richard Benwell from Squared Up for a group of Squared Up’s customers at an internal company event. In the workshop, I led the attendees through building a sealed OpsMgr management pack with a simple agent task.

After the workshop, our plan was to make the content available to the general public, so Richard and I will be conducting 2 additional webinars next week to cover different time zones. We will repeat what we did in last week’s internal event and demonstrate how to build such an MP from scratch using VSAE and Visual Studio 2015. This is an absolute beginner’s guide to authoring management packs using VSAE. As you will see, we write the entire MP in Visual Studio without having to type any XML code!

If you are interested in this topic, please feel free to pick a time that suits you from the registration page below:

https://attendee.gotowebinar.com/rt/3979835542420770052

Lastly, the workshop guide, and a sample completed Visual Studio project can be found in this GitHub repo: https://github.com/tyconsulting/SquaredUp-VSAE-Workshop.

If you’d like to build the MP during the webinar with us, there are some pre-requisites: please complete the steps outlined in the workshop guide before attending the webinar.

Looking forward to seeing you next week!

Small Bug Fix in OpsMgr Self Maintenance MP V2.5.0.0

Written by Tao Yang

Last night, someone left a comment on my post for the OpsMgr Self Maintenance MP V2.5.0.0 and advised that a configuration in the Data Warehouse staging tables row count performance collection rules is causing issues with the Exchange Correlation service – which is part of the Exchange MP. This issue was previously identified for other MPs: https://social.technet.microsoft.com/Forums/en-US/f724545d-90a3-42e6-950e-72e14ac0bd9d/exchange-correlation-service-cannot-connect-to-rms?forum=operationsmanagermgmtpacks

In a nutshell, it looks like the Exchange Correlation service does not like rules that have the category set to “None”.

I would never have picked it up in my environment because I don’t have Exchange in my lab, and therefore no Exchange MP configured.

Anyway, I have updated the category for these rules in both the Self Maintenance MP and the OMS Add-On MP, changing it from “None” to “PerformanceCollection”. I have updated the download on TY Consulting’s website; the current version is now 2.5.0.1. So if you have already downloaded v2.5.0.0 and you are using the Exchange MP in your environment, you might want to download the updated version again from the same spot HERE.

OpsMgr Self Maintenance Management Pack 2.5.0.0

Written by Tao Yang

26/10/2015 Update: It has been identified that the unsealed override MP was not included in the download, and there was also a small error in the “Known Issue” section (section 8) of the MP guide. I have therefore updated the download, which now includes the override MP and the updated MP guide. If you have already downloaded version 2.5.0.1 and only need the override MP, you can download it from HERE.

18/09/2015 Update: A bug has been identified in version 2.5.0.0, where the newly added Data Warehouse DB staging tables row count performance collection rules were causing issues with the Exchange Correlation service from the Exchange MP (please refer to the comment section of this post), because the rule category was set to “None”. I have updated the category of these performance collection rules in both the Self Maintenance MP and the OMS Add-On MP. Please re-download the MP (version 2.5.0.1) if you have already downloaded it and you are using the Exchange MP in your environment.

Introduction

I can’t believe it has been 1 year and 3 months since the OpsMgr Self Maintenance MP was last updated. This is partially because over the last year or so, I have been spending a lot of time developing the OpsMgr PowerShell / SMA module OpsMgrExtended and am still working on the Automating OpsMgr blog series. But I think one of the main reasons is that I did not have many new ideas for the next release. I decided to start working on version 2.5 of the Self Maintenance MP a few weeks ago, when I realised I had collected enough material for a new release. So, after a few weeks of development and testing, I’m pleased to announce that version 2.5 is ready for the general public.

What’s new in version 2.5?

  • Bug Fix: corrected the “Collect All Management Server SDK Connection Count Rule”, where an incorrect value could be collected when there are gateway servers in the management group.
  • Additional Performance Rules for Data Warehouse DB Staging Tables row count.
  • Additional 2-State performance monitors for Data Warehouse DB Staging Tables row count.
  • Additional Monitor: Check if all management servers are on the same patch level
  • Additional discovery to replace the built-in “Discovers the list of patches installed on Agents” discovery for the Health Service class. This additional discovery also discovers the patch list for OpsMgr management servers, gateway servers and SCSM servers.
  • Additional Agent Task: Display patch list (patches for management servers, gateway servers, agents and web console servers).
  • Additional Agent Task: Configure Group Health Rollup
  • Updated “OpsMgr 2012 Self Maintenance Detect Manually Closed Monitor Alerts Rule” to include an option to reset any manually closed monitor upon detection.
  • Additional Rule: “OpsMgr 2012 Self Maintenance Audit Agent Tasks Result Event Collection Rule”
  • Additional Management Pack: “OpsMgr Self Maintenance OMS Add-On Management Pack”

To summarise, in my opinion, the 2 biggest features shipped in this release are the workflows built around managing the OpsMgr Update Rollup patch level, and the extension to Microsoft Operations Management Suite (OMS) – for management groups that are already connected to OMS – via the new OpsMgr Self Maintenance OMS Add-On MP.

I will now briefly go through each item from the list above. The detailed documentation can be found in the updated MP guide.

Bug Fix: Total SDK Connection Count Perf Rule

In the previous version, the PowerShell script used by the “Collect All Management Server SDK Connection Count Rule” had a bug where an incorrect count could be collected when there are gateway servers in the management group:

image

As shown above, when I installed a gateway server in my management group, the counter value became incorrect and increased significantly. This issue is now fixed.
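If you want to spot-check this yourself, the SDK (Data Access) connection count is exposed per management server as a Windows performance counter. The sketch below is illustrative only – it is not the rule’s script, and the counter path (‘OpsMgr SDK Service\Client Connections’) is the commonly documented name, so verify it on your own servers:

# Illustrative spot check (not the MP's script): read the per-server SDK client
# connection counter from each management server in the management group.
Import-Module OperationsManager
Get-SCOMManagementServer | ForEach-Object {
    $sample = Get-Counter -Counter '\OpsMgr SDK Service\Client Connections' `
                          -ComputerName $_.DisplayName
    [pscustomobject]@{
        ManagementServer = $_.DisplayName
        SdkConnections   = $sample.CounterSamples[0].CookedValue
    }
}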

Monitoring and Collecting the Data Warehouse DB staging tables row count

Back at the MVP Summit in November last year, my friend and fellow MVP Bob Cornelissen suggested that I monitor the DW DB staging tables row count, because he has experienced issues where large amounts of data were stuck in the staging tables (http://www.bictt.com/blogs/bictt.php/2014/10/10/case-of-the-fast-growing). Additionally, I had already included the staging tables row count in the Data Warehouse Health Check script which was released a few months ago.

In this release, the MP comes with a performance collection rule and a 2-state performance threshold monitor for each of these 5 staging tables:

  • Alert.AlertStage
  • Event.EventStage
  • ManagedEntityStage
  • Perf.PerformanceStage
  • State.StateStage

The performance collection rules collect the row count as performance data and store the data in both the operational DB and the Data Warehouse DB:

SNAGHTML23ed92

The 2-state performance threshold monitors will generate critical alerts when the row count is over 1000.

SNAGHTML26712f
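If you’d like to eyeball these row counts outside of OpsMgr (for example, before enabling the rules), a quick ad-hoc check from PowerShell could look like the sketch below. This is not the script used by the MP; the SQL server name, the Data Warehouse database name and the use of Invoke-Sqlcmd are assumptions you should adjust for your environment:

# Ad-hoc check (illustrative only): count the rows in each DW staging table that
# the new rules and monitors watch. Adjust server/database names to suit.
$dwServer   = 'SQLDW01'              # assumption - your DW SQL instance
$dwDatabase = 'OperationsManagerDW'  # assumption - your DW database name
$stagingTables = 'Alert.AlertStage', 'Event.EventStage', 'ManagedEntityStage',
                 'Perf.PerformanceStage', 'State.StateStage'
foreach ($table in $stagingTables) {
    $result = Invoke-Sqlcmd -ServerInstance $dwServer -Database $dwDatabase `
                            -Query "SELECT COUNT(*) AS Cnt FROM $table"
    # The bundled monitors alert when the count stays above 1000 (default threshold)
    '{0}: {1} rows' -f $table, $result.Cnt
}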

Managing OpsMgr Update Rollup Patch Level

Over the last 12 months, I have heard a lot of unpleasant stories caused by inconsistent patch levels between different OpsMgr components. In my opinion, currently we have the following challenges when managing updates for OpsMgr components:

People do not follow the instructions (aka Mr Holman’s blog posts) when applying OpsMgr updates.

Any seasoned OpsMgr folks would know to wait for Kevin Holman’s post when a UR is released, and the order of applying the UR is also critical. However, I have seen many times that the wrong order was followed or steps were skipped during the update process (e.g. running the SQL update scripts, updating management packs, etc.).

OpsMgr management groups are partially updated due to the (mis)configuration of Windows Update (or other patching solutions such as ConfigMgr).

I have heard of situations where a subset of management servers were updated by Windows Update, and the patch level among the management servers themselves, as well as between servers and agents, became inconsistent. Ideally, all management servers should be patched together within a very short time window (together with updating the SQL DBs and management packs), and agents should also be updated ASAP. Leaving management servers at different patch levels can cause many undesired issues.

It is hard to identify the patch level for management servers

Although OpsMgr administrators can verify the patch list for agents by creating a state view for agents and selecting the “Patch List” property, the patch list property for OpsMgr management servers and gateway servers is not populated in OpsMgr. This is because the object discovery responsible for populating this property only checks the patches applied to the MSI of the OpsMgr agent. Additionally, after an update rollup has been installed on OpsMgr servers, it does not show up in Programs and Features in the Windows Control Panel. To date, the most popular way to check a server’s patch level is by checking the version of a few DLLs and EXEs. Due to these difficulties, people may not even be aware of an inconsistent patch level within the management group, because it is not obvious and it’s hard to find out.
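For reference, a rough way to do that DLL version check yourself is sketched below. This is not how the MP does it; the install path is the default for OpsMgr 2012 R2 and the server names are placeholders, so adjust both for your environment:

# Illustrative only: report the highest file version found in the OpsMgr server
# binaries folder on each management server. Path assumes a default 2012 R2 install.
$servers = 'MS01', 'MS02'   # hypothetical management server names
$serverBinPath = 'C:\Program Files\Microsoft System Center 2012 R2\Operations Manager\Server'
foreach ($server in $servers) {
    $highest = Invoke-Command -ComputerName $server -ScriptBlock {
        param($path)
        Get-ChildItem -Path $path -Filter *.dll |
            ForEach-Object {
                $v = $_.VersionInfo
                [version]('{0}.{1}.{2}.{3}' -f $v.FileMajorPart, $v.FileMinorPart, $v.FileBuildPart, $v.FilePrivatePart)
            } |
            Sort-Object -Descending |
            Select-Object -First 1
    } -ArgumentList $serverBinPath
    [pscustomobject]@{ Server = $server; HighestFileVersion = $highest }
}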

In order to address some of these issues and help OpsMgr administrators better manage the patch level and patching process, I have created the following items in this release of the Self Maintenance MP:

State view for Health Service which also displays the patch list:

SNAGHTML48f742

An agent task targeting Health Service to list OpsMgr components patch level:

SNAGHTML49c996

Because the “Patch List” property is populated by an object discovery, which only runs infrequently, I have created a task called “Get Current Patch List” (targeting the Health Service class) to check the up-to-date patch list on demand. This task will display the patch list for any of the following OpsMgr components installed on the selected health service:

Management Servers | Gateway Servers:

imageimage

Agents | Web Console (also has agent installed):

imageimage

Object Discovery: OpsMgr 2012 Self Maintenance Management Server and Agent Patch List Discovery

Natively in OpsMgr, the agent patch list is discovered by an object discovery called “Discovers the list of patches installed on Agents”:

image

As the name suggests, this discovery discovers the patch list for agents and nothing else. It does not discover the patch list for OpsMgr management servers, gateway servers, or SCSM management servers (if they are also monitored by OpsMgr using the version of the Microsoft Monitoring Agent that is part of Service Manager 2012). The discovery provided by the OpsMgr 2012 Self Maintenance MP (version 2.5.0.0) is designed to replace the native patch list discovery. Instead of only discovering the agent patch list, it also discovers the patch list for OpsMgr management servers, gateway servers, SCSM management servers and SCSM Data Warehouse management servers.

As with all other workflows in the Self Maintenance MP, this discovery is disabled by default. In order to start using it, please disable the built-in discovery “Discovers the list of patches installed on Agents” BEFORE enabling the “OpsMgr 2012 Self Maintenance Management Server and Agent Patch List Discovery”:

image
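If you prefer PowerShell over the console for this, something like the sketch below should work with the OperationsManager module. It is illustrative only: the override MP display name is a placeholder, and you should confirm the discovery display names and store the overrides in your own unsealed MP:

# Illustrative: disable the built-in patch list discovery and enable the Self
# Maintenance replacement, storing both overrides in your own unsealed MP.
Import-Module OperationsManager
$overrideMp = Get-SCOMManagementPack -DisplayName 'My Self Maintenance Overrides'  # placeholder MP

$builtIn = Get-SCOMDiscovery -DisplayName 'Discovers the list of patches installed on Agents'
Disable-SCOMDiscovery -Discovery $builtIn -ManagementPack $overrideMp -Enforce

$replacement = Get-SCOMDiscovery -DisplayName 'OpsMgr 2012 Self Maintenance Management Server and Agent Patch List Discovery'
Enable-SCOMDiscovery -Discovery $replacement -ManagementPack $overrideMp -Enforce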

Shortly after the built-in discovery has been disabled and the “OpsMgr 2012 Self Maintenance Management Server and Agent Patch List Discovery” has been enabled for the Health Service class, the patch list for the OpsMgr management servers, gateway servers and SCSM management servers (including Data Warehouse management server) will be populated (as shown in the screenshot below):

SNAGHTML51edc1

Note:

As shown above, the patch list for the different flavors of Health Service is properly populated, with the exception of the direct Microsoft Monitoring Agent for OpInsights (OMS). This is because, at the time of writing this post (September 2015), Microsoft has not yet released any patches for the OMS direct MMA agent. The last Update Rollup for the direct MMA agent was actually released as an updated agent (MSI) instead of an update (MSP). Therefore, since there is no update to the agent installer MSI, the patch list is not populated.

Warning:

Please do not leave both discoveries enabled at the same time as it will cause config-churn in your OpsMgr environment.

Monitor: OpsMgr 2012 Self Maintenance All Management Servers Patch List Consistency Consecutive Samples Monitor

This consecutive samples monitor targets the “All Management Servers Resource Pool” and is configured to run every 2 hours (7200 seconds) by default. It executes a PowerShell script which uses WinRM to remotely connect to each management server and checks whether all the management servers are on the same UR patch level.

In order to utilise this monitor, WinRM must be enabled and configured to accept connections from other management servers. The quickest way to do so is to run “Winrm QuickConfig” on these servers. The account running the script in the monitor must also have OS administrator privileges on all management servers (by default, it runs under the management server’s default action account). If the default action account does not have Windows OS administrator privileges on all management servers, a Run-As profile can be configured for this monitor:

SNAGHTML53a46a

In addition to the optional Run-As profile, if WinRM on the management servers is listening on a non-default port, the port number can also be modified via override:

image

Note:

All management servers must be configured to use the same WinRM port. Using different WinRM ports is not supported by the script used by the monitor.
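Before enabling the monitor, a quick way to confirm that WinRM is reachable on every management server is sketched below. This is just an illustrative pre-flight check, not the monitor’s script; add -Port to Test-WSMan if your servers listen on a non-default WinRM port:

# Pre-flight check (illustrative): verify WinRM connectivity to every management server.
Import-Module OperationsManager
Get-SCOMManagementServer | ForEach-Object {
    $name = $_.DisplayName
    try {
        Test-WSMan -ComputerName $name -ErrorAction Stop | Out-Null
        "$name : WinRM reachable"
    }
    catch {
        "$name : WinRM NOT reachable - $($_.Exception.Message)"
    }
}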

If the monitor detects an inconsistent patch level among management servers in 3 consecutive samples, a Critical alert will be raised:

image

The number of consecutive samples can be modified via the “Match Count” override parameter.

Agent Task: Configure group Health Rollup

This task was previously released in the OpsMgr Group Health Rollup Task Management Pack. I originally wrote it in response to feedback from Squared Up’s customers. When I was developing the original MP (for Squared Up), Squared Up agreed to let me release it to the public free of charge, and to make it part of the new Self Maintenance MP.

Therefore, this agent task is now part of the Self Maintenance MP – kudos to Squared Up!

Auditing Agent Tasks Execution Status

In OpsMgr, the task history is stored in the operational DB, which has a relatively short retention period. In this release, I have added a rule called “OpsMgr 2012 Self Maintenance Audit Agent Tasks Result Event Collection Rule”. It is designed to collect agent task execution results and store them in both the operational and Data Warehouse DBs as event data. Because the data in the DW database generally has a much longer retention, the task execution results can be audited and reported on.
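For context, task results can already be pulled from the operational DB with the OperationsManager module, but only for as long as the operational DB retains them, which is exactly why this rule copies the results into the Data Warehouse as events. A hedged example (the task display name below is just one of the tasks in this MP):

# Illustrative: retrieve recent task results from the operational DB. These
# disappear once operational DB grooming kicks in - the new event collection
# rule preserves the outcome in the Data Warehouse instead.
Import-Module OperationsManager
$task = Get-SCOMTask -DisplayName 'Get Current Patch List'
Get-SCOMTaskResult -Task $task |
    Select-Object -First 10 -Property Status, TimeStarted, TimeFinished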

Note:

This rule was inspired by this blog post (although the script used in this rule is completely different from the script in that post): http://www.systemcentercentral.com/archiving-scom-console-task-status-history-to-the-data-warehouse/

Resetting Health for Manually Closed Monitor Alerts

Having the ability to automatically reset the health state for manually closed monitor alerts must be THE most popular suggestion I have received for the Self Maintenance MP. I get this suggestion all the time, from the community and also from MVPs. Originally, my plan was to write a brand new rule for this purpose. I then realised I had already created a rule to detect manually closed monitor alerts. So instead of creating something brand new, I have updated the existing rule “OpsMgr 2012 Self Maintenance Detect Manually Closed Monitor Alerts Rule”. In this release, this rule has an additional overrideable parameter called “ResetUnitMonitors”. This parameter is set to “false” by default. When it is set to “true” via overrides, the script used by this rule will also reset the health state of the monitor that generated the alert, if the monitor is a unit monitor and its current health state is either warning or error:

image
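Conceptually, resetting a unit monitor’s health through the SDK looks like the hedged sketch below. This is not the rule’s actual script (which derives the monitor and instance from the closed alert it detects); the monitor and computer names are placeholders:

# Conceptual sketch only: reset the health state of a specific unit monitor on a
# specific instance. The rule's own script resolves these from the closed alert.
Import-Module OperationsManager
$monitor  = Get-SCOMMonitor -DisplayName 'Some Unit Monitor'          # placeholder
$instance = Get-SCOMClassInstance -DisplayName 'SERVER01.contoso.com' # placeholder
if ($instance.HealthState -in 'Warning', 'Error') {
    # ResetMonitoringState() is an SDK method on the monitoring object
    $instance.ResetMonitoringState($monitor)
}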

OpsMgr Self Maintenance OMS Add On MP

OK, we all have to admit, OMS is a hot topic at the moment. Hopefully you have all played with and read about this solution (if not, you can learn more about this product from Mr Pete Zerger’s survival guide for OMS: http://social.technet.microsoft.com/wiki/contents/articles/31909.ms-operations-management-suite-survival-guide.aspx).

With the release of version 2.5.0.0, the new “OpsMgr Self Maintenance OMS Add-On Management Pack” has been introduced.

This management pack is designed to also send performance and event data generated by the OpsMgr 2012 Self Maintenance MP to the Microsoft Operations Management Suite (OMS) Workspace.

In addition to the existing performance and event data, this management pack also provides 2 event rules that send periodic “heartbeat” events to OMS from configured health services and the All Management Servers Resource Pool. These 2 event rules are designed to monitor the basic health of the OpsMgr management group from OMS (the “monitoring the monitor” scenario).

Note:

In order to use this management pack, the OpsMgr management group must meet the minimum requirements for the OMS / Azure Operational Insights integration, and the connection to OMS must be configured prior to importing this management pack.

Sending Heartbeat Events to OMS

There have been many discussions and custom solutions on how to monitor the monitor. It is critical to be notified when the monitor – the OpsMgr management group – is “down”. With the recent release of Microsoft Operations Management Suite (OMS) and the ability to connect an on-premises OpsMgr management group to an OMS workspace, the “OpsMgr Self Maintenance OMS Add-On Management Pack” provides the ability to send “heartbeat” events to OMS from:

  • All Management Servers Resource Pool (AMSRP)
  • Various Health Services:
    • Management Servers and Gateway Servers
    • Agents

The idea behind these rules is that once the resource pool and management servers have started sending heartbeat events to OMS every x minutes, we are able to detect when the expected heartbeat events are missing, and thus detect potential issues within OpsMgr – monitoring the monitor.

The heartbeat events can be accessed via the OMS web portal (as well as via the OMS search API):

e.g. the AMSRP heartbeat events for the last 15 minutes:

image

Dashboard tile with threshold:

SNAGHTMLb630de

Note:

For the heartbeat event rule targeting the health service, I have configured it to continue sending the heartbeat even when the Windows computer has been placed into maintenance mode (not that management servers should ever be placed in maintenance mode in the first place).

I’m not going to take all the credit for this one. Monitoring the monitor using OMS was an idea from my friend and fellow MVP Cameron Fuller. As a result of discussions with Cameron and other CDM MVPs, I ended up developing a management pack which sends heartbeat events from the AMSRP and selected health services (management servers, for example) to OMS. That management pack has never been published to the public, but I believe Cameron has recently demonstrated it at the Minnesota System Center User Group meeting (http://blogs.catapultsystems.com/cfuller/archive/2015/08/14/summary-from-the-mnscug-august-2015-meeting/).

Please refer to the MP guide section 7.1 for detailed information about this feature.

Collecting Data Generated by the OpsMgr 2012 Self Maintenance MP

Other than the heartbeat event collection rules, the OMS Add-On MP also sends the following event and performance data to OMS:

  • Data Warehouse Database Aggregation Outstanding dataset count (Perf Data)
  • Data Warehouse Database Staging Tables Row Count (Perf Data)
  • All Management Server SDK Connection Count (Perf Data)
  • OpsMgr Self Maintenance Health Service OMS Heartbeat Event Rule
  • Agent Tasks Result Audit (Event Data)

The data listed above is already being generated by the OpsMgr 2012 Self Maintenance MP. The OMS Add-On MP fully utilises the Cook Down feature and stores this data in OMS in addition to the OpsMgr databases.

e.g. an Agent Task Results Audit event:

image

SDK Connection Count Perf Data:

image

Please refer to the MP guide section 7.2 for more information (and sample search queries) about these OMS data collection rules.

Credit

There are simply too many people to thank. I have mentioned a few names in this post, but if I attempted to mention everyone who has given me feedback and advice and helped me with testing, I’m sure I’d miss someone.

So I’d like to thank the broader OpsMgr community for adopting this MP and for all the feedback and suggestions I’ve received.

What’s Next?

Well, another short-term goal of mine is to create a Squared Up dashboard for this MP and release it on Squared Up’s upcoming community dashboard site.

As for the long-term goal, my prediction is that the next release will probably be dedicated to OpsMgr 2016. I am planning to make a brand new MP for OpsMgr 2016 (instead of upgrading this build), so I can remove all the obsolete elements in the 2016 build. I will re-evaluate and test all the workflows in this MP to make sure they are still relevant for OpsMgr 2016.

Download

You can download this MP from my company’s website HERE.

Collecting ConfigMgr Logs to Microsoft Operations Management Suite – The NiCE Way

Written by Tao Yang

Introduction

I have been playing with Azure Operational Insights for a while now, and I am really excited about the capabilities it brings. I haven’t blogged anything about OpInsights until now, largely because of all the wonderful articles that my MVP friends have already written, e.g. the OpInsights series from Stanislav Zheyazkov (he has written 18 parts so far!): https://cloudadministrator.wordpress.com/2015/04/30/microsoft-azure-operational-insights-preview-series-general-availability-part-18-2/

Back in my previous life, when I was working on ConfigMgr for a living, THE one thing that I hated the most was reading log files, not to mention all the log file names, locations, etc. that I had to memorise. I remember there was even a spreadsheet listing all the log files for ConfigMgr. Even now, when I see a ConfigMgr person, I always ask “How many log files did you read today?” – as a joke. However, sometimes, when sh*t hits the fan, people won’t see the funny side of it. In my opinion, based on my experience working on ConfigMgr, I see the following challenges with ConfigMgr log files:

There are too many of them!

Even for the same component, there can be multiple log files (e.g. for the software update point, there are wsyncmgr.log, WCM.log, etc.). Often administrators have to cross-check entries from multiple log files to identify the issue.

Different components place log files in different locations

Site servers, clients, management points, distribution points, PXE DPs, etc. all save logs to different locations. Not to mention that when some of these components co-exist on the same machine, the log locations can be different again (e.g. the client log location on a site server is different from that on normal clients).

Log file size is capped

By default, the size of each log file is capped at 2.5MB (I think). Although it keeps a copy of the previous log (renamed to a .lo_ file), that still only amounts to about 5MB of log data for the particular component. In a large / busy environment, or when something is not going right, these 2 files (.log and .lo_) probably only hold a few hours of data. Sometimes, by the time you realise something has gone wrong and you need to check the logs, they have already been overwritten.

It is difficult to read

You need a special tool (CMTrace.exe) to read these log files. If you see someone reading ConfigMgr log files using Notepad, he’s either really, really good, or he hasn’t been working on ConfigMgr for long. The majority of us rely on CMTrace.exe (or Trace32.exe in ConfigMgr 2007) to read log files. When you log on to a computer and want to read some log files (e.g. client log files), you always have to find a copy of CMTrace.exe somewhere on the network and copy it over to the computer you are working on. In my lab, I even created an application in ConfigMgr to copy CMTrace.exe to C:\Windows\System32 and deployed it to every machine – so I don’t have to copy it manually again and again. I’m sure this is a common practice and many people have done it before.

Logs are not centralised

In a large environment where your ConfigMgr hierarchy consists of hundreds of servers, it is a PAIN to read logs on all of these servers. For example, when something bad happens with OSD and PXE, the results can be catastrophic (some of you may still remember what an incorrectly advertised OSD task sequence did to a big Australian bank a few years back). Based on my own experience, I have seen a support team need to check the SMSPXE.log on as many as a few hundred PXE-enabled distribution points within a very short time window (before the logs get overwritten). People would have to connect to each individual DP and read the log files one at a time. In a situation like this, if you go up to them and ask “How many logs have you read today?”, I’m sure it wouldn’t go down too well.

It would be nice if…

When Microsoft released Operational Insights (OpInsights) to preview, the first thing that came to my mind was that it would be very nice if we could collect and process ConfigMgr log files in OpInsights. This would bring the following benefits to ConfigMgr administrators:

  • Logs are centralised and searchable
  • Much longer retention period (up to 12 months)
  • No need to use special tools such as CMTrace.exe to read the log files
  • Being able to correlate data from multiple log files and multiple computers when searching, thus making the administrator’s troubleshooting experience much easier.

 

Challenges

A ConfigMgr log entry consists of many pieces of information, and the server and client log files have different formats, e.g.:

Server Log file:

SNAGHTML9a32655

Client Log File:

SNAGHTML9aee440

Before sending the information to OMS, we must first capture only the useful information from each entry and transform it into a more structured format (such as the Windows event log format), so that these fields become searchable once stored and indexed in your OMS workspace.

No Custom Solution Packs available

Since OMS is still very new, there aren’t many Solution Packs available (aka Intelligence Packs in the OpInsights days). Microsoft has not yet released any SDKs / APIs for partners and 3rd parties to author and publish Solution Packs. Therefore, at this stage, in order to send ConfigMgr log file entries to OMS, we will have to utilise our old friend OpsMgr 2012 (with OpInsights integration configured), leveraging the power of OpsMgr management packs to collect and process the data before sending it to OMS (via OpsMgr).

OpsMgr Limitations

As we all know, OpsMgr provides a “Generic Text Log” event collection rule. But unfortunately, this native event data source is not capable of accomplishing what I am trying to achieve here.

NiCE Log File Management Pack

NiCE is a company based in Germany. They offer a free OpsMgr management pack for log file monitoring. There are already many good blog articles written about this MP, so I will not write an introduction here. If you have never heard of it or used it, please read the articles listed below, then come back to this post:

SCOM 2012 – NiCE Log File Library MP Monitoring Robocopy Log File – By Stefan Roth

NiCE Free Log File MP & Regex & PowerShell: Enabling SCOM 2 Count LOB Crashes – By Marnix Wolf

SCOM – Free Log File Monitoring MP from NiCE –By Kevin Greene

The beauty of the NiCE Log File MP is that it is able to extract the important information (as I highlighted in the screenshots above) by using Regular Expressions (RegEx), and present the data in a structured way (in XML).

In Regular Expressions, we are able to define named capturing groups to capture data from a string; this is similar to storing the information in a variable in programming. I’ll use a log file entry from both the ConfigMgr client and server logs, and my favourite Regular Expression tester site https://regex101.com/, to demonstrate how to extract the information I highlighted above.

Server Log entry:

Regular Expression:

(?<LogMessage>.+)\s\s\$\$\<(?<SiteComponent>.+)\>\<(?<LogDate>.+)\s(?<LogTime>.+)\>\<(?<LogThread>.+)\>

Sample Log entry:

Execute query exec [sp_CP_GetPushRequestMachine] 2097152112~  $$<SMS_CLIENT_CONFIG_MANAGER><06-07-2015 13:11:09.448-600><thread=6708 (0x1A34)>

RegEx Match:

image

Client Log entry:

Regular Expression:

\<\!\[LOG\[(?<LogMessage>.+)\]LOG\]\!\>\<time=\”(?<LogTime>.+)\”\s+date=\”(?<LogDate>.+)\”\s+component=\”(?<LogComponent>.+)\”\s+context=\”(?<LogContext>.*)\”\s+type=\”(?<LogType>\d)\”\s+thread=\”(?<LogThread>\d+)\”\s+file=\”(?<LogFile>.+)\”\>

Sample Log entry:

<![LOG[Update (Site_9D4393B0-A197-4FC8-AF8C-0BC42AD2F33F/SUM_01a0100c-c3b7-4ec7-866e-db8c30111e80) Name (Update for Windows Server 2012 R2 (KB3045717)) ArticleID (3045717) added to the targeted list of deployment ({C5B54000-2018-4BD9-9418-0EFDFBB73349})]LOG]!><time=”20:59:35.148-600″ date=”06-05-2015″ component=”UpdatesDeploymentAgent” context=”” type=”1″ thread=”3744″ file=”updatesmanager.cpp:420″>

RegEx Match:

image
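If you want to verify a pattern locally before plugging it into the NiCE data source, the same named capturing groups can be exercised in PowerShell. Here is an illustrative test using the server-log pattern and sample entry shown above:

# Test the server-log regular expression and its named capturing groups locally.
$pattern = '(?<LogMessage>.+)\s\s\$\$\<(?<SiteComponent>.+)\>\<(?<LogDate>.+)\s(?<LogTime>.+)\>\<(?<LogThread>.+)\>'
$line    = 'Execute query exec [sp_CP_GetPushRequestMachine] 2097152112~  $$<SMS_CLIENT_CONFIG_MANAGER><06-07-2015 13:11:09.448-600><thread=6708 (0x1A34)>'
if ($line -match $pattern) {
    # $Matches exposes each named group captured from the log line
    foreach ($group in 'LogMessage', 'SiteComponent', 'LogDate', 'LogTime', 'LogThread') {
        '{0,-14}: {1}' -f $group, $Matches[$group]
    }
}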

NiCE Log MP Regular Expression Tester

The NiCE Log MP also provides a Regular Expression Tester UI in the management pack. The good thing about this RegEx tester is that it also shows you what the management pack module output would be (in XML and XPath):

image

Now, I hope you get the bigger picture of what I want to achieve. I want to use OpsMgr 2012 and the NiCE Log File MP to collect various ConfigMgr 2012 log files (both client and server logs), and then send them over to OMS via OpsMgr. It is now time to talk about the management packs.

Management Pack

Obviously, the NiCE Log File MP is required. You can download it from NiCE’s customer portal once registered. This MP must be imported into your management group first.

Additionally, your OpsMgr management group must be configured to connect to Operational Insights (or “System Center Advisor”, if you haven’t patched your management group in the last few months). However, what I’m about to show you is also able to store the data in your on-prem OpsMgr operational and data warehouse databases. So, even if you don’t use OMS (yet), you can still leverage this solution to store your ConfigMgr log data in the OpsMgr databases.

Management Pack 101

Before I dive into the MP authoring and configuration, I’d like to first spend some time going through some management pack basics – at the end of the day, not everyone working in System Center writes management packs. Going through some of the basics will help people who haven’t previously done any MP development work understand better later on.

In OpsMgr, there are 3 types of workflows:

  • Object Discoveries – for discovering instances (and their properties) of classes defined in management packs.
  • Monitors – responsible for the health states of monitored objects. Can be configured to generate alerts.
  • Rules – not responsible for an object’s health state. Can be used to collect information, and are also able to generate alerts.

Since our goal is to collect information from ConfigMgr log files, it is obvious we are going to create some rules to achieve this goal.

A rule consists of 3 types of member modules:

  • One(1) or more Data Source modules (beginning of the workflow)
  • Zero(0) or One(1) Condition Detection Module (optional, 2nd phase of the workflow)
  • One(1) or more write action modules (Last phase of the workflow).

To map the rule structure onto our requirement, the rules we are going to author (one rule for each log file) are going to look something like this:

  • Data Source module: Leveraging the NiCE Log MP to read and process ConfigMgr log entries using Regular Expression.
  • Condition Detection module: Map the output of the Data Source Module into Windows event log data format
  • Write Action modules: write the Windows event log formatted data to various data repositories. Depending on your requirements, this could be any combination of the 3 data repositories:
    • OpsMgr Operational DB (On-Prem, short term storage, but able to access the data from the Operational Console)
    • OpsMgr Data Warehouse DB (On-Prem, long term storage, able to access the data via OpsMgr reports)
    • OMS workspace (Cloud based, long term or short term storage depending on your plan, able to access the data via OMS portal, and via Azure Resource Manager API.)

 

Using NiCE Log MP as Data Source

Unfortunately, we cannot build our rules 100% from the OpsMgr operations console. The NiCE Log File MP does not provide any event collection rules in the UI; there are only alert rules and performance collection rules to choose from:

image

This is OK because, as I explained before, rules consist of 3 types of modules. An alert rule generated in this UI would have 2 member modules:

  • Data source module (called ‘NiCE.LogFile.Library.Advanced.Filtered.LogFileProvider.DS’) to collect the log entries and process them using the RegEx provided by you.
  • Write Action Module (called ‘System.Health.GenerateAlert’): Generate alerts based on the data passed from the data source module.

What we can do is take the same data source module from such an alert rule (and its configuration), then build our own rule with the condition detection module (called ‘System.Event.GenericDataMapper’) to map the data into the Windows event log format, and use any of these 3 write action modules to store the data:

  • Write to Ops DB: ‘Microsoft.SystemCenter.CollectEvent’
  • Write to DW DB: ‘Microsoft.SystemCenter.DataWarehouse.PublishEventData’
  • Write to OMS (OpInsights): ‘Microsoft.SystemCenter.CollectCloudGenericEvent’

However, to go one step further: since there are many input parameters we need to specify for the data source module, and I want to hide the complexity from the users (your System Center administrators), I have created my own data source modules and “wrapped” the NiCE data source module ‘NiCE.LogFile.Library.Advanced.Filtered.LogFileProvider.DS’ inside them. By doing so, I am able to hardcode some common fields that are the same across all the rules we are going to create (e.g. the regular expression). Because the regular expressions for ConfigMgr client logs and server logs are different, I have created 2 generic data source modules, one for each type of log, that you can use when creating your event collection rules.

When creating your own event collection rules, you will only need to provide the following information:

  • IntervalSeconds: how often the NiCE data source should scan the particular log
  • ComputerName: the name of the computer where the logs are located. This could be a property of the target class (or a class in the hosting chain).
  • EventID: to specify an event ID for the processed log entries (as we are formatting the log entries as Windows Event Log entries)
  • Event Category: a numeric value. Please refer to the MSDN documentation for the possible value: https://msdn.microsoft.com/en-au/library/ee692955.aspx. It is OK to use the value 0 (to ignore).
  • Event Level: a numeric value. Please refer to the MSDN documentation for the possible value: https://msdn.microsoft.com/en-au/library/ee692955.aspx.
  • LogDirectory: the directory where the log file is located (e.g. C:\Windows\CCM\Logs)
  • FileName: the name of the log file (e.g. execmgr.log)

 

So What am I Offering?

I’m offering 3 management pack files to get you started:

ConfigMgr.Log.Collection.Library (ConfigMgr Logs Collection Library Management Pack)

This sealed management pack provides the 2 data source modules that I’ve just mentioned:

  • ConfigMgr.Log.Collection.Library.ConfigMgr.Client.Log.DS (Display Name: ‘Collect ConfigMgr 2012 Client Logs Data Source’)
  • ConfigMgr.Log.Collection.Library.ConfigMgr.Server.Log.DS (Display Name: ‘Collect ConfigMgr 2012 Server Logs Data Source’)

When you create your own management pack where your collection rules are going to be stored, you will need to reference this MP and use the appropriate data source module.

ConfigMgr.Log.Collection.Dir.Discovery (ConfigMgr Log Collection ConfigMgr Site Server Log Directory Discovery)

This sealed management pack is optional; you do not have to use it.

As I mentioned earlier, you will need to specify the log directory when creating a rule. The problem with this is that, when you are creating a rule for a ConfigMgr server log file, it’s probably not ideal to specify a static value, because in a large environment with multiple ConfigMgr sites the ConfigMgr install directory on each site server could be different. Unfortunately, the ConfigMgr 2012 management pack from Microsoft does not define and discover the install folder or log folder as a property of the site server:

image

To demonstrate how we can overcome this problem, I have created this management pack. In it, I have defined a new class called “ConfigMgr 2012 Site Server Extended”, which is based on the existing class defined in the Microsoft ConfigMgr 2012 MP. I have defined and discovered an additional property called “Log Folder”:

image

By doing so, we can variablise the “LogDirectory” parameter when creating the rules by passing the value of this property to the rule (I’ll demonstrate this later).

Again, as I mentioned earlier, this MP is optional; you do not have to use it. When creating the rule, you can hardcode the “LogDirectory” parameter using the most common value in your environment, and use overrides to change this parameter for any servers that have different log directories.

ConfigMgr Logs Collection Demo Management Pack (ConfigMgr.Log.Collection.Demo)

In this unsealed demo management pack, I have created 2 event collection rules:

Collect ConfigMgr Site Server Wsyncmgr.Log to OpsMgr Operational DB Data Warehouse DB and OMS rule

This rule is targeting the “ConfigMgr 2012 Site Server Extended” class defined in the ‘ConfigMgr Log Collection ConfigMgr Site Server Log Directory Discovery’ MP, and collects Wsyncmgr.Log to all 3 destinations (Operational DB, Data Warehouse DB, and OMS).

Collect ConfigMgr Client ContentTransferManager.Log to OpsMgr Data Warehouse and OMS rule

This rule targets the “System Center ConfigMgr 2012 Client” class, which is defined in the ConfigMgr 2012 (R2) Client Management Pack version 1.2.0.0 (which I also developed).

This rule collects the ContentTransferManager.log only to Data Warehouse DB and OMS.

Note: I’m targeting this class instead of the ConfigMgr client class defined in the Microsoft ConfigMgr 2012 MP because my MP has already defined and discovered the log location. When you are writing your own rule for ConfigMgr clients, you don’t have to target this class, as most clients should have their logs located in the C:\Windows\CCM\Logs folder (except on ConfigMgr servers).

Note: There are a few other good examples of how to write event collection rules for OMS; you may also find those articles useful:

 

What Do I get in OMS?

After you’ve created your collection rules and imported them into your OpsMgr management group, within a few minutes the management packs will have reached the agents, which start processing the logs and sending the data back to OpsMgr. OpsMgr then sends the data to OMS. It will take another few minutes for OMS to process the data before it becomes searchable in OMS.

You will then be able to search the events:

Client Log Example:

image

Server Log Example:

image

As you can see, each field identified by the Regular Expression in the NiCE data source module is stored as a separate parameter in the OMS log entry. You can also perform more complex searches. Please refer to the articles listed below for more details:

By Daniele Muscetta:

Official documentation:

Download MP

You may download all 3 management packs from TY Consulting’s web site: http://www.tyconsulting.com.au/portfolio/configmgr-log-collection-management-pack/

What’s Next?

I understand writing management packs is not a task for everyone; currently, you will need to write your own MP to capture the log files of your choice. I am working on an automated solution: I am getting very close to releasing the OpsMgrExtended PowerShell / SMA module that I’ve been working on since August last year. In this module, I will provide a way to automate OpsMgr rule creation using PowerShell. I will write a follow-up post after the release of the OpsMgrExtended module to go through how to use PowerShell to create these ConfigMgr log collection rules. So, please stay tuned.

Note: I’d like to warn everyone who’s going to implement this solution: please do not leave these rules enabled by default when you’ve just created them. You need to understand how much data is being sent to OMS, as there is a cost associated with the amount of data sent, as well as an impact on your link to the Internet. So please make the rules disabled by default and start with a smaller group.

Lastly, I’d like to thank NiCE for producing such a good MP and making it free to the community.

Creating OpsMgr Instance Group for All Computers Running an Application and Their Health Service Watchers

Written by Tao Yang

OK, the title of this blog post is pretty long, but please let me explain what I’m trying to do here. In OpsMgr, it’s quite common to create an instance group which contains some computer objects as well as the Health Service Watchers for those computers. This kind of group can be used for alert subscriptions, overrides, and also as maintenance mode targets.

There are many good posts around this topic, e.g.:

From Tim McFadden: Dynamic Computer groups that send heartbeat alerts

From Kevin Holman: Creating Groups of Health Service Watcher Objects based on other Groups

Yesterday, I needed to create several groups that contain computer and health service watcher objects for:

  • All Hyper-V servers
  • All SQL servers
  • All Domain Controllers
  • All ConfigMgr servers

Because all the existing samples I could find on the web are based on computer names, I thought I’d post how I created the groups for the above-mentioned servers. In this post, I will not go through the step-by-step details of how to create these groups, because the steps differ completely depending on the authoring tool you are using. Instead, I will go through what the actual XML looks like in the management pack.

Step 1, create the group class

This is straightforward. Because this group will contain not only computer objects but also the health service watcher objects, we must create an instance group.

Using SQL servers as an example, the group definition looks like this:

  <TypeDefinitions>
    <EntityTypes>
      <ClassTypes>
        <ClassType ID="TYANG.SQL.Server.Computer.And.Health.Service.Watcher.Group" Accessibility="Public" Abstract="false" Base="MSIL!Microsoft.SystemCenter.InstanceGroup" Hosted="false" Singleton="true" />
      </ClassTypes>
    </EntityTypes>
  </TypeDefinitions>

Note: the MP alias “MSIL” is referencing “Microsoft.SystemCenter.InstanceGroup.Library” management pack.

Step 2, Find the Root / Seed Class from the MP for the specific application

Most likely, the application that you are working on (for instance, SQL Server) is already defined and monitored by another set of management packs. Therefore, you do not have to define and discover these servers yourself. The group discovery for the group you’ve just created needs to include:

  • All computers running any components of the application (in this instance, SQL Server).
  • And all Health Service Watcher objects for the computers listed above.

In any decent management pack, when multiple application components are defined and discovered, the management pack author would most likely define a root (seed) class representing a computer that runs any of the application components (in this instance, we refer to this as the “SQL server”). Once an instance of this seed class is discovered on a computer, there will be subsequent discoveries targeting this seed class that discover any other application components (using SQL as an example again, these components would be the DB Engine, SSRS, SSAS, SSIS, etc.).

So in this step, we need to find the root / seed class for this application. Based on what I needed to do, the seed classes for the 4 applications I needed are listed below:

  • SQL Server:
    • Source MP: Microsoft.SQLServer.Library
    • Class Name: Microsoft.SQLServer.ServerRole
    • Alias in my MP: SQL
  • HyperV Server:
    • Source MP: Microsoft.Windows.HyperV.Library
    • Class Name: Microsoft.Windows.HyperV.ServerRole
    • Alias in my MP: HYPERV
  • Domain Controller:
    • Source MP: Microsoft.Windows.Server.AD.Library
    • Class Name: Microsoft.Windows.Server.AD.DomainControllerRole
    • Alias in my MP: AD
  • ConfigMgr Server
    • Source MP: Microsoft.SystemCenter2012.ConfigurationManager.Library
    • Class Name: Microsoft.SystemCenter2012.ConfigurationManager.Server
    • Alias in my MP: SCCM

Tip: you can use MPViewer to easily check what classes are defined in a sealed MP. Using SQL as an example again, in Microsoft.SQLServer.Library:

image

You can easily identify that “SQL Role” is the seed class because it is based on Microsoft.Windows.ComputerRole and other classes use it as their base class. You can get the actual name (not the display name) from the “Raw XML” tab.
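If you prefer PowerShell to MPViewer, a rough way to list the classes in a sealed MP (together with their base classes) using the OperationsManager module is sketched below; it is illustrative only:

# Illustrative alternative to MPViewer: list the classes defined in the SQL
# library MP along with their base class, to help spot the seed class.
Import-Module OperationsManager
$mp = Get-SCOMManagementPack -Name 'Microsoft.SQLServer.Library'
$mp.GetClasses() |
    Select-Object -Property Name, DisplayName, Base |
    Sort-Object -Property Name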

Step 3 Create MP References

Your MP will need to reference the instance group library, as well as the MP in which the application seed class is defined (e.g. the SQL library):

image

Step 4 Create the group discovery

The last component we need to create is the group discovery. The Data Source module for the group discovery is Microsoft.SystemCenter.GroupPopulator, and there will be 2 <MembershipRule> sections. For example, for the SQL group:

 

image

As shown above, I’ve translated each membership rule into plain English. The XML is listed below. If you want to reuse my code, simply change the line I highlighted in the above screenshot to suit your needs.

  <Monitoring>
    <Discoveries>
      <Discovery ID="TYANG.SQL.Server.Computer.And.Health.Service.Watcher.Group.Discovery" Enabled="true" Target="TYANG.SQL.Server.Computer.And.Health.Service.Watcher.Group" ConfirmDelivery="false" Remotable="true" Priority="Normal">
        <Category>Discovery</Category>
        <DiscoveryTypes>
          <DiscoveryRelationship TypeID="MSIL!Microsoft.SystemCenter.InstanceGroupContainsEntities" />
        </DiscoveryTypes>
        <DataSource ID="DS" TypeID="SC!Microsoft.SystemCenter.GroupPopulator">
          <RuleId>$MPElement$</RuleId>
          <GroupInstanceId>$MPElement[Name="TYANG.SQL.Server.Computer.And.Health.Service.Watcher.Group"]$</GroupInstanceId>
          <MembershipRules>
            <MembershipRule>
              <MonitoringClass>$MPElement[Name="Windows!Microsoft.Windows.Computer"]$</MonitoringClass>
              <RelationshipClass>$MPElement[Name="MSIL!Microsoft.SystemCenter.InstanceGroupContainsEntities"]$</RelationshipClass>
              <Expression>
                <Contains>
                  <MonitoringClass>$MPElement[Name="SQL!Microsoft.SQLServer.ServerRole"]$</MonitoringClass>
                </Contains>
              </Expression>
            </MembershipRule>
            <MembershipRule>
              <MonitoringClass>$MPElement[Name="SC!Microsoft.SystemCenter.HealthServiceWatcher"]$</MonitoringClass>
              <RelationshipClass>$MPElement[Name="MSIL!Microsoft.SystemCenter.InstanceGroupContainsEntities"]$</RelationshipClass>
              <Expression>
                <Contains>
                  <MonitoringClass>$MPElement[Name="SC!Microsoft.SystemCenter.HealthService"]$</MonitoringClass>
                  <Expression>
                    <Contained>
                      <MonitoringClass>$MPElement[Name="Windows!Microsoft.Windows.Computer"]$</MonitoringClass>
                      <Expression>
                        <Contained>
                          <MonitoringClass>$Target/Id$</MonitoringClass>
                        </Contained>
                      </Expression>
                    </Contained>
                  </Expression>
                </Contains>
              </Expression>
            </MembershipRule>
          </MembershipRules>
        </DataSource>
      </Discovery>
    </Discoveries>
  </Monitoring>

Result

After I imported the MP into my lab management group, all the SQL computer and Health Service Watcher objects are listed as members of this group:

image
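As an optional check, you can also list the group members from PowerShell using the class ID defined in the XML above (illustrative only):

# Optional verification (illustrative): list the members of the new instance group.
Import-Module OperationsManager
$group = Get-SCOMClass -Name 'TYANG.SQL.Server.Computer.And.Health.Service.Watcher.Group' |
    Get-SCOMClassInstance
# Contained members (computers and their Health Service Watchers)
$group.GetRelatedMonitoringObjects() | Select-Object -ExpandProperty DisplayName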

Updated ConfigMgr 2012 (R2) Client Management Pack Version 1.2.0.0

Written by Tao Yang

Background

It’s only been 2 weeks since I released the last update of this MP (version 1.1.0.0). Soon after the release, Mr. David Allen, a fellow System Center CDM MVP, contacted me and asked me to test his SCCM Compliance MP, and possibly combine it with my ConfigMgr 2012 Client MP.

In the ConfigMgr 2012 Client MP, the OVERALL DCM baseline compliance status is monitored via the DCM Agent class, whereas in David’s SCCM Compliance MP, each DCM baseline is discovered as a separate entity and monitored separately. Because of the utilisation of the Cook Down feature, compared with the approach in the ConfigMgr 2012 Client MP, this approach adds no additional overhead to the OpsMgr agents.

David’s MP also included a RunAs profile to allow users to configure monitoring for OpsMgr agents using a low-privileged default action account.

I think both of these features are pretty cool, so I have taken David’s MP, re-modelled the health class relationships, re-written the scripts from PowerShell to VBScript, and combined what David has done with the ConfigMgr 2012 Client MP.

If you (the OpsMgr administrator) are concerned about the number of additional objects that will be discovered by this release (every DCM baseline on every ConfigMgr 2012 client monitored by OpsMgr), rest assured that the DCM Baseline discovery is disabled by default. I have taken a similar approach to configuring Business Critical Desktop monitoring: there is an additional unsealed MP in this release to allow you to cherry-pick which endpoints to monitor in this regard.

What’s New in Version 1.2.0.0

Other than combining David’s SCCM Compliance MP, there are also a few other updates included in this release. Here’s the full “What’s New” list:

Bug Fix: the “ConfigMgr 2012 Client Missing Client Health Evaluation (CCMEval) Execution Cycles Monitor” alert parameter was incorrect

Added a privileged RunAs Profile for all applicable workflows

Additional rule: ConfigMgr 2012 Client Missing Cache Content Removal Rule

Enhanced Compliance Monitoring

  • Additional class: DCM Baseline (hosted by DCM agent)
  • Additional Unit monitor: ConfigMgr 2012 Client DCM Baseline Last Compliance Status Monitor
  • Additional aggregate and dependency monitors to rollup DCM Baseline health to DCM Agent
  • Additional State View for DCM Baseline
  • Additional instance groups:
    • All DCM agents
    • All DCM agents on server computers
    • All DCM agents on client computers
    • All Business Critical ConfigMgr 2012 Client DCM Agents
  • Additional unsealed MP: ConfigMgr 2012 Client Enhanced Compliance Monitoring
    • Override to enable DCM baseline discovery for the All DCM agents on server computers group
    • Override to disable old DCM baseline monitor for All DCM agents on server computers group
    • Discovery for All Business Critical ConfigMgr 2012 Client DCM Agents (users will have to populate this group, same way as configuring business critical desktop monitoring)
    • Override to enable DCM baseline discovery for the All Business Critical ConfigMgr 2012 Client DCM Agents group
    • Override to disable old DCM baseline monitor for All Business Critical ConfigMgr 2012 Client DCM Agents group
  • Additional Agent Task: Evaluate DCM Baseline (targeting the DCM Baseline class)

Additional icons

  • Software Distribution Agent
  • Software Update Agent
  • Software Inventory Agent
  • Hardware Inventory Agent
  • DCM Agent
  • DCM Baseline

 

Enhanced Compliance Monitoring

Version 1.2.0.0 has introduced a new feature that monitors assigned DCM compliance baselines at a more granular level. Prior to this release, there was a unit monitor targeting the DCM agent class that monitored the overall baseline compliance status as a whole. Since version 1.2.0.0, each individual DCM baseline can be discovered and monitored separately.

By default, the discovery for DCM Baselines is disabled. It needs to be enabled manually via overrides before DCM baselines can be monitored individually.

image

There are several groups that can be used for overriding the DCM Baseline discovery:

 

  • Enable for all DCM agents – override target: the “ConfigMgr 2012 Client Desired Configuration Management Agent” class
  • Enable for server computers only – override target: the “All ConfigMgr 2012 Client DCM Agents on Server OS” group
  • Enable for client computers only – override target: the “All ConfigMgr 2012 Client DCM Agents on Client OS” group
  • Enable for a subset of computers – override target: an instance group that you manually create and populate, based on the “ConfigMgr 2012 Client Desired Configuration Management Agent” class

Note: Once the DCM Baseline discovery is enabled, please also disable the “ConfigMgr 2012 Client DCM Baselines Compliance Monitor” for the same targets as it has become redundant.

Once the DCM baselines are discovered, their compliance status is monitored individually:

image

SNAGHTML44656c89

Additionally, the DCM Baselines have an agent task called “Evaluate DCM Baseline”, which can be used to manually evaluate the baseline. This agent task performs the same action as the “Evaluate” button in the ConfigMgr 2012 client:

SNAGHTML44665daf

ConfigMgr 2012 Client Enhanced Compliance Monitoring Management Pack

An additional unsealed management pack named “ConfigMgr 2012 Client Enhanced Compliance Monitoring” is also introduced. This management pack includes the following:

  • An override to enable DCM baseline discovery for “All ConfigMgr 2012 Client DCM Agents on Server OS” group.
  • An override to disable the legacy ConfigMgr 2012 Client DCM Baselines Compliance Monitor for “All ConfigMgr 2012 Client DCM Agents on Server OS” group.
  • A blank group discovery for the “All Business Critical ConfigMgr 2012 Client DCM Agents” group
  • An override to enable DCM baseline discovery for “All Business Critical ConfigMgr 2012 Client DCM Agents” group.
  • An override to disable the legacy ConfigMgr 2012 Client DCM Baselines Compliance Monitor for “All Business Critical ConfigMgr 2012 Client DCM Agents” group.

 

In summary, this management pack enables DCM baseline discovery for all ConfigMgr 2012 clients on server computers and switches from the existing “overall” compliance baseline status monitor to the new, more granular compliance baseline status monitor which targets individual baselines. This management pack also enables users to manually populate the new “All Business Critical ConfigMgr 2012 Client DCM Agents” group. Members of this group will be monitored in the same way as the server computers mentioned previously.

Note: Please only use this management pack if you want to enable enhanced compliance monitoring on all server computers; otherwise, manually configure the groups and overrides as described above.

 

New RunAs Profile for Low-Privilege Environments

Since almost all of the workflows in the ConfigMgr 2012 Client management packs require local administrative privileges to access various WMI namespaces and registry keys, they will not work when the OpsMgr agent RunAs account does not have local administrator rights.

Separate RunAs accounts can be created and assigned to the “ConfigMgr 2012 Client Local Administrator RunAs Account” profile.

[Screenshots: RunAs account example and RunAs profile configuration]

For more information about OpsMgr RunAs accounts and profiles, please refer to: http://technet.microsoft.com/en-us/library/hh212714.aspx

Note: When assigning a RunAs Account to the “ConfigMgr 2012 Client Local Administrator RunAs Account” profile, you will receive an error. Please refer to the MP documentation section “14.3 Error Received when Adding RunAs Account to the RunAs Profile” for instructions on fixing it.

New Rule: Missing Cache Content Removal Rule

This rule runs every 4 hours by default and checks whether any registered ConfigMgr 2012 Client cache content has been deleted from the file system. When obsolete cache content is detected, the rule removes the cache content entry from the ConfigMgr 2012 client via WMI and generates an informational alert with the details of the missing cache content.
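
Conceptually, the detection and clean-up logic is similar to the sketch below, which reads the registered cache entries from the CacheInfoEx class in the root\ccm\SoftMgmtAgent namespace and removes any entry whose folder no longer exists on disk. This is only an illustrative approximation of the rule’s behaviour, not the actual script used in the MP.

# Illustrative approximation only: find cache entries whose content folder has been deleted
# from the file system and remove the orphaned entries from the client's WMI repository.
$cacheEntries = Get-WmiObject -Namespace 'root\ccm\SoftMgmtAgent' -Class CacheInfoEx

foreach ($entry in $cacheEntries)
{
    if (-not (Test-Path -Path $entry.Location))
    {
        # The folder is gone but the client still considers the content cached,
        # so remove the orphaned cache entry.
        Write-Output "Removing orphaned cache entry $($entry.CacheId) ($($entry.Location))"
        $entry | Remove-WmiObject
    }
}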


Additional Icons:

Prior to this release, only the top-level ConfigMgr 2012 Client class had a dedicated icon. I have spent a lot of time looking for icons for all the other classes, and in this release I managed to produce an icon for each monitoring class.


 

Note: I only managed to find high-resolution icons for the Software Distribution Agent and the Software Update Agent (extracted from various DLLs and EXEs). I couldn’t find a way to extract icons from AdminUI.UIResources.dll, where all the icons used by SCCM are stored, so for the other icons I had to use SnagIt to take screenshots. You may notice the quality is not great, but after a few days of effort trying to find these icons, this is the best I could do. If you have a copy of these icons (resolution higher than 80×80), or know a way to extract them from AdminUI.UIResources.dll, please contact me and I’ll update them in the next release.

Credit

BIG thank you to David Allen for his work on the SCCM Compliance MP, and also helping me test this release!

You can download the ConfigMgr 2012 Client MP Version 1.2.0.0 HERE.

Until next time, happy SCOMMING!

ConfigMgr 2012 (R2) Client Management Pack Updated to Version 1.1.0.0

Written by Tao Yang

4th October, 2014: This MP has been updated to Version 1.2.0.0. Please download the latest version from this page: http://blog.tyang.org/2014/10/04/updated-configmgr-2012-r2-client-management-pack-version-1-2-0-0/.

OK, after a few weeks of hard work, the updated version of the ConfigMgr 2012 (R2) Client MP is finally here.

The big focus in this release is to reduce the noise this MP generates. In the end, besides the new and updated components introduced in this release, I also had to update every single script used by the monitors and rules.

The changes since the previous version (v1.0.1.0) are listed below:

Bug Fixes:

  • Software Update agent health was not rolled up (a dependency monitor was missed in the previous release).
  • SyncTime in some data source modules was not correctly implemented.
  • Typo in the Pending Software Update monitor alert description.
  • The “All ConfigMgr 2012 Client computer group” population was incorrect: it included all Windows computers, not just the ones with the ConfigMgr 2012 client installed.
  • Many warning alerts “Operations Manager failed to start a process” were generated against various scripts used in this MP. It has been identified that the issue is caused by the OpsMgr agent executing the workflows when the SMS Agent Host service is not running, which typically happens right after computer startup or reboot because the SMS Agent Host service is set to Automatic (Delayed Start). All the scripts that query the root\ccm WMI namespace have been re-written to wait up to 3 minutes for the SMS Agent Host service to start if it’s not already started (see the sketch below), which should reduce the number of these warning alerts. The updated scripts also try to catch this condition so the alert indicates the actual issue.

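The wait logic added to these scripts is conceptually similar to the snippet below. This is a simplified sketch rather than the exact code in the MP, and the 10-second polling interval is an assumption.

# Simplified sketch: wait up to 3 minutes for the SMS Agent Host (CcmExec) service to start
# before querying the root\ccm WMI namespace.
$timeoutSeconds = 180
$elapsed = 0
$service = Get-Service -Name 'CcmExec' -ErrorAction SilentlyContinue

while ($service -and $service.Status -ne 'Running' -and $elapsed -lt $timeoutSeconds)
{
    Start-Sleep -Seconds 10
    $elapsed += 10
    $service.Refresh()
}

if (-not $service -or $service.Status -ne 'Running')
{
    # Surface a meaningful message so any alert reflects the real issue
    Write-Output 'SMS Agent Host (CcmExec) service is not running; skipping root\ccm WMI queries.'
}
else
{
    # Safe to query the ConfigMgr client WMI namespaces now
    Get-WmiObject -Namespace 'root\ccm' -Class SMS_Client | Out-Null
}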

 

Additional Items:

  • A diagnostic task and a recovery task for the CcmExec service monitor. The diagnostic task checks whether the system uptime is longer than 5 minutes (overrideable); if it is, the recovery task starts the SMS Agent Host service. Both the service monitor and the recovery task are disabled by default. If you decide to enable them, they help reduce the number of “failed to start a process” warning alerts caused by a stopped SMS Agent Host service.
  • A monitor that detects if the SCCM client has been left in provisioning mode for a long period of time (consecutive samples monitor) (http://thoughtsonopsmgr.blogspot.com.au/2014/06/sccm-w7-osd-task-sequence-with-install.html).
  • The Missing CCMEval consecutive samples unit monitor has been disabled and replaced by a new monitor. The new monitor is no longer a consecutive samples monitor; it simply detects if the CCMEval job has missed 5 consecutive cycles (the number of missed cycles is overrideable). This new monitor is designed to simplify the detection process and to address the false alerts the previous consecutive samples monitor generated.
  • A monitor for the CCMCache size, which alerts when the available free space for the CCMCache is lower than 20% (see the sketch after this list). Some ConfigMgr client computers may be hosted on expensive storage (around 90% of my lab machines are now running on SSD), so I think it is worth monitoring ccmcache usage. This monitor provides an indication of how much space has been consumed by the ccmcache folder.
  • Agent Task: Delete CCMCache content
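
The CCMCache free space check mentioned above works roughly like the sketch below: it reads the configured cache location and size from the CacheConfig class in root\ccm\SoftMgmtAgent, measures the folder’s current usage and flags the cache when less than 20% of the configured size is free. This is assumed logic for illustration, not the MP’s actual script.

# Assumed logic only: approximate the CCMCache free space check.
$cacheConfig = Get-WmiObject -Namespace 'root\ccm\SoftMgmtAgent' -Class CacheConfig
$cacheSizeMB = [double]$cacheConfig.Size      # configured cache size in MB
$cachePath   = $cacheConfig.Location

# Measure how much of the configured cache size is currently used on disk
$usedBytes = (Get-ChildItem -Path $cachePath -Recurse -ErrorAction SilentlyContinue |
    Where-Object { -not $_.PSIsContainer } |
    Measure-Object -Property Length -Sum).Sum
$usedMB = [math]::Round($usedBytes / 1MB, 2)
$freePercent = [math]::Round((($cacheSizeMB - $usedMB) / $cacheSizeMB) * 100, 2)

if ($freePercent -lt 20)
{
    Write-Output "CCMCache free space is low: $freePercent% of $cacheSizeMB MB remaining."
}
else
{
    Write-Output "CCMCache free space is healthy: $freePercent% of $cacheSizeMB MB remaining."
}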

 

Updated Items:

  • Pending Reboot monitor updated to allow users to disable any of the 4 areas that the monitor checks for reboot (Pending File Rename operation is disabled by default because it generates too many alerts):
    • Component Based Servicing
    • Windows Software Update Agent
    • SCCM Client
    • Pending File Rename operation
  • The Missing CCMEval monitor is disabled and superseded.
  • All consecutive sample monitors have been updated. The System.ConsolidatorCondition condition detection module has been replaced by the <MatchCount> configuration in the System.ExpressionFilter module (new in OpsMgr 2012) to consolidate consecutive samples. This simplifies the configuration and tuning of these monitors.
  • Additional events are logged in the Operations Manager event log by various scripts to help with troubleshooting. Please refer to Appendix A of the MP documentation for details of these events.

 

Upgrade Tip

This version is in-place upgradable from the previous version. However, since additional input parameters have been introduced to the scripts used by the monitors and rules, you may experience a large number of “Operations Manager failed to start a process” warning alerts right after the updated MPs have been imported and distributed to the OpsMgr agents. To work around this issue, I strongly recommend placing the “All ConfigMgr 2012 Clients” group into maintenance mode for 1 hour before importing the updated MPs. To do so, simply go to the “Discovered Inventory” view, change the target type to “All ConfigMgr 2012 Clients”, and place the selected group into maintenance mode.
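
If you would rather script the maintenance mode window than use the console, a sketch like the one below should work; verify the group display name, duration and reason against your environment before relying on it.

# Sketch: place the "All ConfigMgr 2012 Clients" group (and its members, via recursive traversal)
# into maintenance mode for 1 hour before importing the updated MPs.
Import-Module OperationsManager
$group = Get-SCOMGroup -DisplayName 'All ConfigMgr 2012 Clients'

$startTime = (Get-Date).ToUniversalTime()
$endTime   = $startTime.AddHours(1)

# ScheduleMaintenanceMode with 'Recursive' traversal also covers the contained computer objects
$group.ScheduleMaintenanceMode($startTime, $endTime, 'PlannedOther', 'ConfigMgr Client MP upgrade', 'Recursive')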


Special Thanks

I’d like to thank all the people who have provided feedback since the last release and spent time helping to test this version. I’d like to specially thank Stanislav Zhelyazkov for his valuable feedback and testing effort. I’d also like to thank Marnix Wolf for his blog post, which helped me build the Provisioning Mode consecutive samples monitor in this MP.

 

Download

Download ConfigMgr 2012 (R2) Client Management Pack 1.1.0.0

Location, Location, Location. Part 3

Written by Tao Yang

This is the 3rd and final part of the 3-part series. In this post, I will demonstrate how I track the physical location history for Windows 8 location-aware computers (tablets and laptops), as well as how to visually present the collected data on an OpsMgr 2012 dashboard.

I often see people post on Facebook or Twitter that they have checked in at some place on Foursquare. I haven’t used Foursquare before (and don’t intend to in the future), and I’m not sure what its purpose is, but please think of this as Foursquare in OpsMgr for your tablets. I will now go through the management pack elements I created to achieve this goal.

Event Collection Rule: Collect Location Aware Device Coordinate Rule

So, I first need to collect the location data periodically. Therefore, I created an event collection rule targeting the “Location Aware Windows Client Computer” class I created (explained in Part 2 of this series). This rule uses the same data source module as the “Location Aware Device Missing In Action Monitor”, which I also explained in Part 2. I have configured this rule to pass exactly the same data to the data source module as the monitor does, so we can utilise cook down (the data source only executes once and feeds its output to both the rule and the monitor).


Note: Although this rule does not require the home latitude and longitude, and these 2 inputs are optional for the data source module, I still pass these 2 values in. In order to use cook down, both workflows need to pass exactly the same data to the data source module; without this, the same script would run twice in each scheduling cycle.

This rule maps the data collected from the data source module to event data, and stores the data in both the operational DB and the data warehouse DB. I’ve created an event view in the management pack where you can see the collected events.


Location History Dashboard

Now that the data has been captured and stored in the OpsMgr databases as event data, we can consume it in a dashboard:

[Screenshot: Location History dashboard]

As shown above, there are 3 widgets in this Location History dashboard:

  • Top Left: a State widget for the Location Aware Windows Client Computer class.
  • Bottom Left: a PowerShell Grid widget that displays the last 50 known locations of the device selected in the State widget.
  • Right: a PowerShell Web Browser widget that displays the historical location selected in the bottom-left PowerShell Grid widget.

The last 50 known locations for the selected device are listed in the bottom-left section. Users can click on the first column (Number) to sort by time stamp. When a previous location is selected, that location gets pinned on the map, so we know exactly where the device was at that point in time. From now on, I need to make sure my wife doesn’t have access to OpsMgr in my lab so she can’t track me down.

Note: the location shown in the above screenshot is my office. I took my Surface to work, powered it on and connected it to a 4G device, and it automatically connected to my lab network using DirectAccess.

[Photo: the Surface in the car]

Since this event was collected over 2 days ago, for demonstration purposes I had to modify the PowerShell Grid widget to list a lot more than the last 50 locations.

The script below is what’s used in the bottom left PowerShell Grid widget:

Param($globalSelectedItems)

$i = 1
foreach ($globalSelectedItem in $globalSelectedItems)
{
    # Resolve the Windows computer object that corresponds to the item selected in the state widget
    $MonitoringObjectID = $globalSelectedItem["Id"]
    $MG = Get-SCOMManagementGroup
    $globalSelectedItemInstance = Get-SCOMClassInstance -Id $MonitoringObjectID
    $ComputerName = $globalSelectedItemInstance.DisplayName
    $strInstanceCriteria = "FullName='Microsoft.Windows.Computer:$ComputerName'"
    $InstanceCriteria = New-Object Microsoft.EnterpriseManagement.Monitoring.MonitoringObjectGenericCriteria($strInstanceCriteria)
    $Instance = $MG.GetMonitoringObjects($InstanceCriteria)[0]

    # Retrieve the last 50 location events (event ID 10001, source "LocationMonitoring")
    # where the location status parameter indicates a valid report (status 4)
    $Events = Get-SCOMEvent -Instance $Instance -EventId 10001 -EventSource "LocationMonitoring" |
        Where-Object {$_.Parameters[1] -eq 4} |
        Sort-Object TimeAdded -Descending |
        Select-Object -First 50

    foreach ($Event in $Events)
    {
        # Event parameters: 0 = local time, 1 = location status, 2 = latitude,
        # 3 = longitude, 4 = altitude, 5 = error radius
        $EventID = $Event.Id.ToString()
        $LocalTime = $Event.Parameters[0]
        $Latitude = $Event.Parameters[2]
        $Longitude = $Event.Parameters[3]
        $Altitude = $Event.Parameters[4]
        $ErrorRadius = $Event.Parameters[5].TrimEnd(".")

        # Return one row per event to the grid widget
        $dataObject = $ScriptContext.CreateInstance("xsd://foo!bar/baz")
        $dataObject["Id"] = $EventID
        $dataObject["No"] = $i
        $dataObject["LocalTime"] = $LocalTime
        $dataObject["Latitude"] = $Latitude
        $dataObject["Longitude"] = $Longitude
        $dataObject["Altitude"] = $Altitude
        $dataObject["ErrorRadius (Metres)"] = $ErrorRadius
        $ScriptContext.ReturnCollection.Add($dataObject)
        $i++
    }
}

 

And here’s the script for the PowerShell Web Browser Widget:

Param($globalSelectedItems)

# Build a web browser widget request pointing to Google Maps
$dataObject = $ScriptContext.CreateInstance("xsd://Microsoft.SystemCenter.Visualization.Component.Library!Microsoft.SystemCenter.Visualization.Component.Library.WebBrowser.Schema/Request")
$dataObject["BaseUrl"] = "http://maps.google.com/maps"
$parameterCollection = $ScriptContext.CreateCollection("xsd://Microsoft.SystemCenter.Visualization.Component.Library!Microsoft.SystemCenter.Visualization.Component.Library.WebBrowser.Schema/UrlParameter[]")

foreach ($globalSelectedItem in $globalSelectedItems)
{
    # The grid widget passes the OpsMgr event ID of the selected row
    $EventID = $globalSelectedItem["Id"]
    $Event = Get-SCOMEvent -Id $EventID
    If ($Event)
    {
        $bIsEvent = $true
        $Latitude = $Event.Parameters[2]
        $Longitude = $Event.Parameters[3]

        # Add the "q" URL parameter so the map is centred on the selected coordinates
        $parameter = $ScriptContext.CreateInstance("xsd://Microsoft.SystemCenter.Visualization.Component.Library!Microsoft.SystemCenter.Visualization.Component.Library.WebBrowser.Schema/UrlParameter")
        $parameter["Name"] = "q"
        $parameter["Value"] = "loc:" + $Latitude + "+" + $Longitude
        $parameterCollection.Add($parameter)
    } else {
        $bIsEvent = $false
    }
}

# Only return the request if a valid event was selected
If ($bIsEvent)
{
    $dataObject["Parameters"] = $parameterCollection
    $ScriptContext.ReturnCollection.Add($dataObject)
}

Conclusion

This concludes the 3rd and final part of the series. I know it is only a proof-of-concept, and I’m not sure how practical it would be to implement in a corporate environment. For example, since most current Windows tablets don’t have built-in GPS receivers, I haven’t been able to test how well the Windows Location Provider calculates locations when a device is connected to a corporate Wi-Fi network.

I have also noticed what seems to be a known issue with the Windows Location Provider COM object LocationDisp.LatLongReportFactory: it doesn’t always return a valid location report. To work around the issue, I had to code all the scripts to retry and wait between attempts (a simplified sketch of this pattern is shown below). I managed to get the scripts to work on all my devices; however, you may need to tweak them if you don’t always get valid location reports.
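
The retry pattern looks roughly like this. It is a simplified sketch of the workaround rather than the MP’s actual script, and the retry count and sleep interval are arbitrary.

# Simplified sketch: retry until LocationDisp.LatLongReportFactory returns a usable report.
$factory = New-Object -ComObject 'LocationDisp.LatLongReportFactory'
$maxAttempts = 5
$report = $null

for ($attempt = 1; $attempt -le $maxAttempts; $attempt++)
{
    # Status 4 indicates the location provider is running and the report should be valid
    if ($factory.Status -eq 4 -and $factory.LatLongReport.Latitude)
    {
        $report = $factory.LatLongReport
        break
    }
    Start-Sleep -Seconds 10
}

if ($report)
{
    Write-Output ("Latitude: {0}, Longitude: {1}, ErrorRadius: {2}" -f $report.Latitude, $report.Longitude, $report.ErrorRadius)
}
else
{
    Write-Output 'Unable to retrieve a valid location report after multiple attempts.'
}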

Credit

Other than the VBScript I mentioned in Part 2, I was lucky enough to find this PowerShell script. I used this script as the starting point for all my scripts.

Also, when I was trying to set up DirectAccess to get my lab ready for this experiment, I got a lot of help from Enterprise Security MVP Richard Hicks’ blog: http://directaccess.richardhicks.com. So thanks, Richard.

Download

You can download the actual monitoring MP and dashboard MP, as well as all the scripts I used in the MP and dashboards HERE.

Note: For the monitoring MP (Location.Aware.Devices.Monitoring), I’ve also included the unsealed version in the zip file for your convenience (so you don’t have to unseal it if you want to look inside). Please do not import the unsealed version into your management group; because the dashboard MP references the monitoring MP, it must remain sealed.

Lastly, as always, I’d like to hear from the community. Please feel free to share your thoughts with me by leaving a comment on the post or contacting me via email. Until next time, happy SCOMMING!

Location, Location, Location. Part 2

Written by Tao Yang

This is the 2nd part of the 3-part series. In this post, I will demonstrate how I monitor the physical location of my location-aware devices (Windows 8 tablets and laptops). To do so, I created a monitor that generates alerts when a device has gone beyond an allowed distance from its home location. I will now go through each component in the management pack that I created to achieve this goal.

Custom Class: Location Aware Windows Client Computer

I created a custom class based on the “Windows Client 8 Computer” class. I needed this class, instead of just using the existing Windows Client 8 Computer class, because I need to store 2 additional property values: “Home Latitude” and “Home Longitude”. Once discovered, these 2 values are passed to the monitor workflow so the script within the monitor can calculate the distance between the current location and the configured home location.


I created the following registry keys and values for this custom class:

Key: HKLM\SOFTWARE\TYANG\MonitorLocation

REG_SZ values: HomeLatitude & HomeLongitude

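To tag a device for discovery, the key and values can be created with a few lines of PowerShell. The coordinates below are placeholders only; use your own home location.

# Create the registry key and values that the discovery looks for.
# Latitude/longitude values are placeholders; replace them with your own home location.
$keyPath = 'HKLM:\SOFTWARE\TYANG\MonitorLocation'
New-Item -Path $keyPath -Force | Out-Null
New-ItemProperty -Path $keyPath -Name 'HomeLatitude'  -Value '-37.8136' -PropertyType String -Force | Out-Null
New-ItemProperty -Path $keyPath -Name 'HomeLongitude' -Value '144.9631' -PropertyType String -Force | Out-Null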

Discovery

I created a registry discovery targeting the Windows Client 8 Computer class to discover the new class (Location Aware Windows Client Computer) and the 2 properties I defined.


It is configured to run every 6 hours by default. This can be overridden.

Location Aware Device Missing In Action Monitor


To create this monitor, I first wrote a script to detect the current location and calculate the distance between the current location and the home location (based on the discovered registry values).

Note: I managed to find a few PowerShell scripts that calculate the distance between 2 map coordinates (i.e. http://poshcode.org/2591 and http://stackoverflow.com/questions/365826/calculate-distance-between-2-gps-coordinates). However, I believe the examples I found do not calculate the distance correctly. For example, I know for a fact that the direct distance between my home and my office is somewhere between 23 and 25 kilometres; using both of the scripts I mentioned, the calculated distance is around 16 kilometres, which is too short to be correct. In the end, I found a VBScript on a Unix forum whose result is just over 23 km, which also matches the result from this online calculator. Therefore, I converted this VBScript into PowerShell and used it in this management pack. As I am really bad at maths, I didn’t look further into the differences between these scripts.
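
For reference, the standard haversine formula gives the great-circle distance between two coordinates. The function below is a generic PowerShell implementation of that formula; it is not necessarily identical to the converted VBScript used in the MP, and the example coordinates are placeholders.

# Generic haversine (great-circle) distance in metres between two latitude/longitude pairs.
function Get-GreatCircleDistance
{
    param(
        [double]$Lat1, [double]$Lon1,
        [double]$Lat2, [double]$Lon2
    )
    $earthRadiusMetres = 6371000
    $toRad = [math]::PI / 180

    $dLat = ($Lat2 - $Lat1) * $toRad
    $dLon = ($Lon2 - $Lon1) * $toRad

    $a = [math]::Sin($dLat / 2) * [math]::Sin($dLat / 2) +
         [math]::Cos($Lat1 * $toRad) * [math]::Cos($Lat2 * $toRad) *
         [math]::Sin($dLon / 2) * [math]::Sin($dLon / 2)
    $c = 2 * [math]::Atan2([math]::Sqrt($a), [math]::Sqrt(1 - $a))

    return $earthRadiusMetres * $c
}

# Example with placeholder coordinates
Get-GreatCircleDistance -Lat1 -37.8136 -Lon1 144.9631 -Lat2 -37.8770 -Lon2 145.0450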

When the script runs, it logs an informational event (event ID 10003) if the current location is successfully detected, or a warning event (event ID 10002) if the location data retrieved is not valid.
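
In a PowerShell-based OpsMgr workflow, such events are typically written through the MOM.ScriptAPI COM object, along the lines of the hedged sketch below; the script name and messages are placeholders, not the MP’s actual values.

# Sketch: how an OpsMgr script can log its own events to the Operations Manager event log.
# LogScriptEvent event types: 0 = information, 2 = warning. Script name and messages are placeholders.
$momApi = New-Object -ComObject 'MOM.ScriptAPI'

# Successful location detection (event ID 10003, informational)
$momApi.LogScriptEvent('MonitorDeviceLocation.ps1', 10003, 0, 'Current location detected successfully.')

# Invalid location data (event ID 10002, warning)
$momApi.LogScriptEvent('MonitorDeviceLocation.ps1', 10002, 2, 'The location report returned by the Windows Location Provider is not valid.')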

I then created the probe action module, data source module and monitor type for this monitor. This is all the usual drill, so I won’t go through the details here.

I configured the required registry keys and values on my wife’s Dell XPS ultrabook (running Windows 8.1), with the Home Latitude and Longitude coordinates set to the location of my office. Because I configured the warning threshold to 5,000 metres (5 km) and the critical threshold to 10,000 metres (10 km), a critical alert was generated against this XPS laptop.


For my Surface Pro 2, I configured the home location to be my home. Since I’m at home writing this blog post and the device is right next to me, the health state for my Surface is healthy.


This concludes the 2nd part of the series. Please continue to Part 3.