Tag Archives: SCOM

Command Launching Microsoft Monitoring Agent Control Panel Applet

Written by Tao Yang

I have been refreshing my lab servers to Windows Server 2016, using the non-GUI version (Server Core) wherever possible.

When working on Server Core servers, I found it troublesome that I couldn’t access the Microsoft Monitoring Agent applet in Control Panel:

image

Although I can use PowerShell and the MMA agent COM object AgentConfigManager.MgmtSvcCfg, sometimes it is easier to use the applet.

After some research, I found the applet can be launched from the command line:

image
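
For reference, the applet executable ships with the agent itself, so both the applet and the underlying COM object can be driven from a Server Core console. A hedged sketch (assuming the agent is installed to the default location):

```powershell
# Launch the Microsoft Monitoring Agent Control Panel applet directly
# (default install path; adjust if your agent is installed elsewhere)
& "$env:ProgramFiles\Microsoft Monitoring Agent\Agent\AgentControlPanel.exe"

# Alternatively, inspect the agent configuration via the
# AgentConfigManager.MgmtSvcCfg COM object from PowerShell:
$mma = New-Object -ComObject 'AgentConfigManager.MgmtSvcCfg'
$mma.GetManagementGroups() |
    Format-Table managementGroupName, ManagementServer, ManagementServerPort
```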

OpsMgr Alert Tuning using OpsLogix EZalert

Written by Tao Yang

EZAlert

OpsLogix has recently released a new product to the market called “EZalert”. It learns the operator’s alert handling behaviour and is then able to automatically update alert resolution states based on its learning outcome. You can find more information about this product here: http://www.opslogix.com/ezalert/. I was given a trial license for evaluation and review. Today I installed it on a dedicated VM and connected it to my lab OpsMgr management group.

EZalert Walkthrough

Once installed, I could see a new dashboard view added in the monitoring pane, and this is where we tune all the alerts:

image

From this view, I can see all the active alerts, and I can start tuning them either one at a time, or select multiple alerts and set the desired state in bulk. Once I have gone through all the alerts on the list, I can save the configuration under the Settings tab:

image

image

Once this is done, any new alert that matches a previously trained alert will be updated automatically when it is generated. For example, I created a test alert and trained EZalert to set the resolution state to Closed; as you can see below, it was created at 9:44:57 AM and modified by EZalert two seconds later:

image

Once the initial training process is completed and saved, the training tab becomes empty. Any new alerts generated will show up in the training tab, where you can see whether a suggested state has been assigned and modify it by assigning another state:

image

And all previously trained alerts can be found in the history tab:

image

You can also create exclusions. If you want EZalert to skip certain alerts for a particular monitoring object (e.g. a disk space alert generated on C:\ on Server A), simply create an exclusion:

image

image

In my opinion, this is a very good approach to alert tuning. You only need to set an alert’s resolution state once; EZalert learns your behaviour and repeats your action for you in the future. It will be a huge time saver for all your OpsMgr operators over time. It will also come in very handy for alert tuning in the following situations:

  • When you have just deployed a new OpsMgr management group
  • When you have introduced new management packs in your management group
  • When you have updated existing management packs to the newer versions

EZalert vs Alert Update Connector

Before EZalert’s time, I had been using the OpsMgr Alert Update Connector (AUC) from Microsoft (https://blogs.technet.microsoft.com/kevinholman/2012/09/29/opsmgr-public-release-of-the-alert-update-connector/). I really struggled when configuring AUC, so I developed my own solution to configure AUC in an automated fashion (http://blog.tyang.org/2014/04/19/programmatically-generating-opsmgr-2012-alert-update-connector-configuration-xml/) and I have also developed a management pack to monitor it (http://blog.tyang.org/2014/05/31/updated-opsmgr-2012-alert-update-connector-management-pack/). In my opinion, AUC is a solid solution. It has been around for many years and is used by many customers. But I do find it has some limitations:

  • Configuration process is really hard
  • Configuration is based on rules and monitors, not alerts, so it’s easy to incorrectly configure rules and monitors that don’t generate alerts (e.g. perf / event collection rules, aggregate / dependency monitors, etc.)
  • Modifying existing configuration causes a service interruption due to the required service restart
  • When running in a distributed environment (on multiple management servers), you need to make sure configuration files are consistent across these servers and only one instance is running at any given time.
  • No way to easily view the current configurations (without reading XML files)

I think EZalert has definitely addressed some of these shortcomings:

  • Alert training process is performed on the OpsMgr console
  • No need to restart services and reload configuration files after new alerts are added or when existing alerts are modified
  • Configurations are saved in a SQL database, not text based files
  • Current configurations are easily viewable within the SCOM console

However, AUC has the following advantages over EZalert:

  • AUC supports assigning different values to different groups or individual objects. In EZalert, an exclusion can only be created for individual monitoring objects, and it doesn’t seem like you can assign a different value for the object – it’s simply an on/off exclusion
  • Other than the alert resolution state, AUC can also be used to update other alert properties (e.g. custom fields, Owner, ticket ID, etc.). EZalert doesn’t seem to be able to update other alert fields.

Things to Consider

When using EZalert, in my opinion, there are a few things you need to consider:

1. It does not replace requirements for overrides

If you are training EZalert to automatically close an alert when it’s generated, you should ask yourself – do you really need this alert to be generated in the first place? Unless you want to see these alerts in the alert statistics report, you should probably disable the alert via overrides. EZalert should not be used to replace overrides. If you don’t need an alert, disable it! That saves resources on both the SCOM servers and agents to process the alert, and database space to store it.

2. Training Monitor generated alerts

As we all know, we shouldn’t manually close monitor-generated alerts. So when you are training monitor alerts, make sure you don’t train EZalert to update the resolution state to “Closed”. Consider using other states such as “Resolved”.
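
The same principle applies when handling alerts in bulk with the OperationsManager PowerShell module – a hedged sketch that uses the IsMonitorAlert property to avoid closing monitor-generated alerts (the management server name is a placeholder):

```powershell
Import-Module OperationsManager
New-SCOMManagementGroupConnection -ComputerName 'MS01'  # hypothetical management server

# Rule-generated alerts can safely be closed (resolution state 255)
Get-SCOMAlert -ResolutionState 0 |
    Where-Object { -not $_.IsMonitorAlert } |
    Set-SCOMAlert -ResolutionState 255 -Comment 'Closed in bulk'

# Monitor-generated alerts get an intermediate state such as Resolved (254);
# the monitor itself closes the alert once the state returns to healthy
Get-SCOMAlert -ResolutionState 0 |
    Where-Object { $_.IsMonitorAlert } |
    Set-SCOMAlert -ResolutionState 254
```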

3. Create Scoped roles for normal operators in order to hide the EZalert dashboard view

You may not want normal operators to train alerts, so instead of using the built-in operators role, create your own scoped role and hide the EZalert dashboard view from normal operators.

Conclusion

I believe EZalert has some strong use cases. Unless you have a very complicated alert flow automation process that leverages other alert fields such as custom fields, owner, etc. (e.g. for generating tickets) and you are currently using AUC for that particular reason, I think EZalert gives you a much more user-friendly experience for ongoing alert tuning.

I have personally implemented AUC in a few places, and even though it has been a few years since those implementations, I still get calls every now and then asking for help with AUC configuration. Also, I’m not exactly sure whether AUC is officially supported by Microsoft, because it was originally developed by an OpsMgr PFE in his spare time (maybe someone from MSFT can confirm the supportability). EZalert, on the other hand, is a commercial product, and the vendor OpsLogix provides full support for it.

Lastly, if you have any questions about EZalert, please feel free to contact OpsLogix directly.

Squared Up Upcoming V3 Dashboard with Distributed Application Discovery Feature

Written by Tao Yang

Squared Up is set to release version 3 of their dashboard next week at Ignite North America. One of the key features in the v3 release is called “Visual Application Discovery & Analysis” (aka VADA).

VADA utilises OpsMgr agent tasks and the netstat.exe command to discover the other TCP/IP endpoints the agents are communicating with. You can learn more about this feature from a short YouTube video Squared Up published recently: https://www.youtube.com/watch?v=DJK_3SritwY

I was given a trial copy of v3 for my lab. After I installed it and imported the required management pack, I was able to start discovering the endpoints that are communicating with my OpsMgr agents in a matter of a few clicks:

image

As we all know, OpsMgr natively lacks the capability of automatic distributed application discovery; customers used to integrate 3rd-party applications such as BlueStripe FactFinder with OpsMgr for this capability. However, now that BlueStripe has been acquired by Microsoft and is being fitted under the OMS banner as the Application Dependency Monitor (ADM) solution, customers can no longer purchase it for OpsMgr. It is good to see that Squared Up has released something with similar capabilities, because at this very moment there seems to be a gap in the OpsMgr space.

Having said that, I don’t think the OMS ADM solution is too far away from the public preview release.

image

One of the biggest differences I can see (after spending a couple of hours on Squared Up V3) is that Squared Up VADA collects ad-hoc data at the time VADA is launched (which triggers the agent task), whereas OMS ADM has its own agents and collects data continuously.

image

Additionally, it looks like Squared Up VADA only supports Windows agents at this stage, whereas OMS ADM will also support Linux agents.

At this stage, since we don’t know whether BlueStripe will be made available to OpsMgr in the future, and Squared Up is releasing this awesome addition to their already-popular OpsMgr web console / dashboard product, why not give it a try and see what you can produce? Since the data collection is ad-hoc, it makes more sense to start the discovery in VADA during peak hours, when the system is fully loaded and all components are actively communicating with each other, so you don’t miss any components.

15th May 2017 Update:

VADA version 3.1 was released a few weeks ago and has introduced a few new features:

Support for Linux discovery:

Cross-platform application discovery (most Linux distros covered)

Application groups / tiers

Logically arrange your discovered servers and devices by role / tier / group

Export as SCOM DA component groups for health roll-up, override management and reporting

clip_image002

Manually add dependencies

Supplement automatic discovery by manually adding servers and devices

clip_image004

If you’d like to know more about VADA, here are some helpful new resources:

10 Minute Overview Video

Live Webinar

Free Trial

Lastly, if you are going to attend Ignite NA next week and want to learn more about this new feature in Squared Up V3, please make sure you go find them at their booth.

OpsMgr Agent Task to Configure OMS Network Performance Monitor Agents

Written by Tao Yang

OMS Network Performance Monitor (NPM) went into public preview a few weeks ago. Unlike other OMS solutions, NPM requires additional configuration on each agent that you wish to enrol in this solution. The detailed steps are documented in the solution documentation.

The product team has provided a PowerShell script to configure the MMA agents locally (link included in the documentation). In order to make the configuration process easier for the OpsMgr users, I have created a management pack that contains several agent tasks:

  • Enable OMS Network Performance Monitor
  • Disable OMS Network Performance Monitor
  • Get OMS Network Performance Monitor Agent Configuration

image

Note: Since this is an OpsMgr management pack, you can only use these tasks against agents that are enrolled to OMS via OpsMgr, or direct OMS agents that are also reporting to your OpsMgr management group.

These tasks target the Health Service class. If you are also using my OpsMgr 2012 Self Maintenance MP, you will have a “Health Service” state view, and you will be able to access these tasks from the task pane of this view:

image

I can use the “Get OMS Network Performance Monitor Agent Configuration” task to check whether an agent has been configured for NPM.

For example, before an agent is configured, the task output shows it is not configured:

image

Then I can use the “Enable OMS Network Performance Monitor” task to enable NPM on this agent:

image

Once enabled, if I run the “Get OMS Network Performance Monitor Agent Configuration” task again, the task output will show it’s enabled and also display the configured port number:

image

And shortly after, you will be able to see the newly configured node in the OMS NPM solution:

image

If you want to remove the configuration, simply run the “Disable OMS Network Performance Monitor” task:

image
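
If you prefer not to use the console, these agent tasks can also be triggered from PowerShell via the OperationsManager module. A hedged sketch (the management server and agent names are placeholders; the task is located by the display name shown in the console above):

```powershell
Import-Module OperationsManager
New-SCOMManagementGroupConnection -ComputerName 'MS01'  # hypothetical management server

# Find the Health Service instance of the agent to configure
$hsClass  = Get-SCOMClass -Name 'Microsoft.SystemCenter.HealthService'
$instance = Get-SCOMClassInstance -Class $hsClass |
    Where-Object { $_.DisplayName -eq 'AGENT01.corp.local' }  # hypothetical agent FQDN

# Run the task against that instance
$task = Get-SCOMTask -DisplayName 'Enable OMS Network Performance Monitor'
Start-SCOMTask -Task $task -Instance $instance
```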

You can download the sealed version of this MP HERE. I’ve also pushed the VSAE project for this MP to GitHub.

OMS Near Real Time Performance Data Aggregation Removed

Written by Tao Yang

A few weeks ago, the OMS product team made a very nice change to the Near Real Time (NRT) performance data – the data aggregation has been removed! I’ve been waiting for the official announcement before posting this on my blog. Now Leyla from the OMS team has finally broken the silence and made it public: Raw searchable performance metrics in OMS.

I’m really excited about this update. Before this change, we were only able to search 30-minute aggregated data via Log Search. This behaviour had some limitations:

  • It’s difficult to calculate average values based on other intervals (e.g. 5-minute or 10-minute)
  • Performance-based alert rules can be really outdated – the search result is based on the aggregated value over the last 30 minutes. In critical environments, this can be a bit too late!

By removing the data aggregation and making the raw data searchable (and retaining it for longer), the limitations listed above are resolved.
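
For example, with the raw data searchable, the search engine can aggregate on the fly at whatever interval you choose – a hedged sketch in the OMS log search syntax of the time (the counter shown is just an example):

```
Type=Perf ObjectName=Processor CounterName="% Processor Time"
    | measure avg(CounterValue) by Computer Interval 5Minutes
```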

Another advantage this update brings is that it greatly simplifies the process of authoring your own OpsMgr performance collection rules for OMS NRT perf data. Before this change, the NRT perf rules came in pairs – each perf counter you wanted to collect required 2 rules (with identical data source module configurations): one to collect the raw data and another to collect the 30-minute aggregated data. This has been discussed in great detail in Chapter 11 of our Inside Microsoft Operations Management Suite book (TechNet, Amazon). Now we only need to write one rule per counter – for the raw perf data.

The sample OpsMgr management pack below collects the “Log Cache Hit Ratio” counter for SQL databases. It targets the Microsoft.SQLServer.Database class, which is the seed class for pre-SQL 2014 databases (2005, 2008 and 2012):

As you can see from the above sample MP, the rule that collects aggregated data is no longer required.

image

So if you have written rules collecting NRT perf data for OMS in the past, you may want to revisit them and remove the aggregated data collection rules.

Upcoming SCOM Bootcamp-Melbourne Australia

Written by Tao Yang

I’m teaming up with Infront Consulting Australia to deliver a 4-day in-person, instructor-led SCOM 2012 bootcamp in Melbourne, Australia. The content of this bootcamp was developed by Infront Consulting Group and has been very popular internationally.

This bootcamp is designed for SCOM administrators and operators. If you are running SCOM (or planning to implement SCOM) in your environment, I strongly recommend you enrol in this bootcamp and spend 4 days with me and the other folks attending.

Here are the details of this training event:

SCOMBootcamp

SCOM 2012 Bootcamp – Australia

Date: 20 – 23 June 2016

Location:

Saxons Training Facilities Melbourne
Level 8
500 Collins Street
Melbourne VIC 3000

Please join us for the first Infront Consulting SCOM 2012 Bootcamp in Australia! Tao Yang is a well-known author, speaker, blogger and SCOM expert who will be guiding you in person in the SCOM 2012 R2 Bootcamp.

This four-day Bootcamp is a mix of in-depth instructor led training and hands-on labs where you will learn how to administer System Center Operations Manager 2012. This course will provide students with an understanding of the Operations Manager 2012 Architecture, features and how to administer and maintain Operations Manager 2012.

Cost: $3,600 AUD + GST per student, includes course materials and access to Hands on Labs.

Modules covered:

Session 1: Overview of System Center Operations Manager 2012

Session 2: Operations Manager 2012 Architecture

Session 3: Installing Operations Manager 2012

Session 4: Installing the Gateway Server Role

Session 5: Configuring Operations Manager Security

Session 6: Agent Deployment and Configuration

Session 7: Alert Notification and Incident Remediation

Session 8: Management Pack Tuning and Targeting Best Practices

Session 9: Tuning of the Core Microsoft MPs

Session 10: Application Performance Monitoring

Session 11: Network Monitoring in Operations Manager 2012

Session 12: Working in the Operations Manager Shell

Session 13: Building Custom Monitoring Solutions & Distributed Applications

Session 14: Reporting & Dashboards

Session 15: Third Party Extensions

Registration Link:

https://www.eventbrite.com/e/scom-2012-bootcamp-australia-tickets-25190237679

Hope to see you there!

Recordings Available for the VSAE MP Authoring Webinar with Squared Up

Written by Tao Yang

Last night, I conducted 2 webinars with Richard Benwell of Squared Up on MP authoring. I recorded both sessions from my computer using Camtasia, and the recordings are now available on Squared Up’s YouTube channel:

First Session: https://www.youtube.com/watch?v=oH035DgbUSQ

Second Session: https://www.youtube.com/watch?v=Xu3yRE770QA

Lastly, the workshop guide, slide deck and the sample VSAE project are also available on GitHub:

https://github.com/tyconsulting/SquaredUp-VSAE-Workshop

Automating OpsMgr Part 20: Migrating Your OpsMgr Performance Collection Rules to OMS (Using OpsLogix VMware MP as an Example)

Written by Tao Yang

OpsMgrExntededIntroduction

This is the 20th installment of the Automating OpsMgr series. Previously on this series:

OK, it has been 6 months since my last post in this blog series. I simply didn’t have time to continue, but I know this series is far from over. I am spending A LOT of time on OMS these days; some of you may have heard of (or have already read) our newly published book Inside Microsoft Operations Management Suite (TechNet, Amazon). I hope you have all played with OMS and maybe even started thinking about which workloads you can move to it.

As we all know, we can pretty much categorise SCOM data into the following 4 categories:

  • Performance Data
  • Event Data
  • Alert Data
  • State Data

Unlike SCOM, OMS does not use classes, so there are no classes, relationships or state data in OMS, but the other 3 types can easily be brought over. For SCOM alert data, you can simply enable the Alert solution after you have connected your SCOM management group to your OMS workspace. OMS also has its own alerting and remediation capability. For existing performance and event collection rules, we can easily recreate them using a different Write Action module to store the data in OMS. In this post, I will show you how to gather all performance collection rules from an existing OpsMgr management pack and re-create them for OMS (stored as PerfHourly data). But before diving in, let’s quickly go through the performance data in OMS.

OMS Performance Data

There are 2 types of performance data in OMS. The PerfHourly data was introduced with the Capacity Planning solution. As the name suggests, PerfHourly data is hourly aggregated performance data; no raw perf data is stored in OMS for this type.

The other type of performance data is called Near Real Time (NRT) performance data. NRT perf data can be accessed using queries such as Type=Perf. Unlike PerfHourly data, NRT perf data can be collected as frequently as every 10 seconds, and the aggregation interval is every half hour. Both raw and aggregated NRT perf data are stored in OMS: raw data is kept for 14 days, and the OMS search queries only return aggregated data.

From the management pack point of view, writing perf collection rules for NRT perf data is a lot more complicated. Firstly, we must always author 2 rules for every counter we are going to collect: one for the raw data and one for the aggregated data. Secondly, when mapping the performance data, the object name must always follow the format “\\<Computer FQDN>\<Object Name>”. Lastly, the collection rule that collects the aggregated data must use a Condition Detection module called “Microsoft.IntelligencePacks.Performance.PerformanceAggregator”.

Since an OpsMgr rule can only have up to one (1) condition detection member module, converting an existing OpsMgr perf collection rule that already has a condition detection member module to an OMS NRT perf rule may not be that straightforward. In this case, we may need to create additional module types, and things can get very complicated. It is certainly not something we can achieve with a generic script.

Therefore, in order to make the script work with any existing OpsMgr performance collection rules, I have chosen to store the perf data in OMS as PerfHourly data because it has far less “red tape”. Having said that, please keep in mind it is still possible to re-create OpsMgr perf collection rules as OMS NRT perf collection rules; it’s just not something we can develop as a generic automated solution.

If you want to learn more about performance data in OMS, or how to author OMS based collection rules in SCOM using VSAE, please refer to Chapter 5: Working with Performance Data and Chapter 11: Custom Management Pack Authoring of the Inside OMS book I mentioned in the beginning of this post.

PowerShell Script: Copy-PerfRulesToOMS.ps1

In the previous posts of this blog series, I simply placed the scripts / runbooks within the post itself. I have decided to use GitHub from now on, so the script Copy-PerfRulesToOMS.ps1 can be found in one of my public GitHub repositories: https://github.com/tyconsulting/OpsMgr-SDK-Scripts/blob/master/OMS%20Related%20Scripts/Copy-PerfRulesToOMS.ps1

This script reads the configuration of every performance collection rule in a particular OpsMgr management pack, and then recreates these rules with the same configuration, but stores the performance data as PerfHourly data in your OMS workspace. The OMS perf collection rules created by this script are stored in a brand new unsealed MP with the name ‘<Original MP name>.OMS.Perf.Collection’ and display name ‘<Original MP display name> OMS PerfHourly Addon’.

This script has the following pre-requisites:

  • OpsMgrExtended PS module loaded on the machine where you are executing the script.
  • An account with OpsMgr administrative rights
  • OpsMgr management group must be connected to OMS

The script takes the following input parameters:

  • ManagementServer – Specify the name of an OpsMgr management server that you wish to connect to. This is a mandatory parameter.
  • Credential – Specify an alternative credential that has admin rights to the OpsMgr management group. This is an optional parameter.
  • ManagementPackName – Specify the source MP containing the perf collection rules you want to copy to OMS. This is not the display name but the actual MP name. In the OpsMgr console, when you open the management pack properties, it is the ‘ID’ field. Since I’m going to use the OpsLogix VMware management pack as an example in this post, the name for this MP is “OpsLogix.IMP.VMWare.Monitoring”:

image

 

Executing the script:

I have added many verbose messages to the script, so you can use the optional –Verbose switch when executing it.

SNAGHTMLd5d2347

The script first connects to the management group, reads the source MP, then retrieves all performance collection rules from it. If the source MP contains any perf collection rules, the script creates a new unsealed MP and starts creating a corresponding OMS PerfHourly collection rule for each original OpsMgr perf collection rule. The OMS PerfHourly collection rules will have the same properties and input parameters, as well as the same data source and condition detection member modules, as the original OpsMgr perf collection rules, but they are configured to use a different Write Action member module that sends the perf data to OMS.

Note:

  • The script detects OpsMgr Perf collection rules from the source MP by examining the actual write action member modules. If any of the write action member modules are either ‘Microsoft.SystemCenter.CollectPerformanceData’ (used to write perf data to OpsMgr operational DB) or ‘Microsoft.SystemCenter.DataWarehouse.PublishPerformanceData’ (used to write perf data to OpsMgr DW DB), then the script will consider the rule as a perf collection rule.
  • When the source MP is unsealed, the script will fail under the following circumstances:
    • a perf collection rule in the source MP is targeting a class defined in the source MP
    • a perf collection rule in the source MP uses any data source or condition detection module types that are defined in the source MP
  • The script does not disable any existing perf collection rules from the source MP
  • The script copies all attributes from the source perf collection rule to the new OMS PerfHourly rule, including the ‘Enabled’ property. So if the source perf collection rule is disabled by default, then the newly created OMS PerfHourly rule will also be disabled by default.
  • Depending on the number of OpsMgr perf collection rules to be processed, this script can take some time to finish because it writes the new OMS PerfHourly rules to the destination MP one at a time. I purposely coded the script this way (rather than writing everything at once) so that if a particular rule fails MP verification, it does not impact the creation of the other rules.
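
The rule-detection logic described in the first note above can be sketched with the OpsMgr PowerShell module and SDK like this (a simplified, hedged illustration – the real logic lives in Copy-PerfRulesToOMS.ps1 on GitHub; the management server name is a placeholder):

```powershell
Import-Module OperationsManager
New-SCOMManagementGroupConnection -ComputerName 'MS01'  # hypothetical management server

# Write action module types that identify a perf collection rule
$perfWriteActions = @(
    'Microsoft.SystemCenter.CollectPerformanceData'                # OpsMgr operational DB
    'Microsoft.SystemCenter.DataWarehouse.PublishPerformanceData'  # OpsMgr DW DB
)

# Enumerate rules in the source MP and keep the ones whose write action
# member modules match either of the perf write action module types
$mp = Get-SCOMManagementPack -Name 'OpsLogix.IMP.VMWare.Monitoring'
$perfRules = $mp.GetRules() | Where-Object {
    $waNames = $_.WriteActionCollection | ForEach-Object { $_.TypeID.GetElement().Name }
    @($waNames | Where-Object { $perfWriteActions -contains $_ }).Count -gt 0
}
$perfRules | Select-Object Name, DisplayName
```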

When the execution is completed, you will see a new unsealed MP created in your management group:

image

and if I export it to XML and open it in MPViewer, I can see all the newly created OMS PerfHourly collection rules:

image

At this stage, I don’t need to do anything else and all the performance data collected by the source MP (OpsLogix VMware MP in this example) will be stored not only in OpsMgr, but also in OMS.

Because the original OpsMgr perf collection rules and the corresponding OMS PerfHourly rules share the exact same data source modules with the same configuration, this does not add additional overhead to the OpsMgr agents, thanks to the OpsMgr Cook Down feature. However, please keep in mind that from now on, if you need to apply overrides to either rule, it’s best to apply the same override to both rules (so you don’t break Cook Down).

Although the PerfHourly data will not appear in your OMS workspace straight away (due to the aggregation process), you should be able to see it within a few hours:

image

As you can see in the above screenshot, I now have all the VMware-related counters defined in the OpsLogix VMware MP in my OMS workspace. The RootObjectName ‘VCENTER01’ is the vCenter server in my lab, and the ObjectDisplayName ‘exs01.corp.tyang.org’ is the VMware ESX host in my lab.

Summary

In this post, I have shared a script and demonstrated how to use it to migrate your existing OpsMgr performance collection rules to OMS, using the VMware-related counters originally defined in the OpsLogix VMware MP as an example. We could easily write a very similar script for migrating existing event collection rules (maybe a blog topic for another day).

In the next post of this series, I will demonstrate how to use OpsMgrExtended module, SharePointSDK module, Azure Automation, Hybrid Workers and SharePoint Online to build a portal for scheduling OpsMgr maintenance mode – this is based on one of the demos in my Azure Automation session with Pete Zeger from SCU 2016 APAC & Australia.

Until next time, happy automating!

OpsLogix Capacity Report Management Pack Overview

Written by Tao Yang

Just over a month ago, I blogged and presented a webcast comparing the OpsLogix Capacity Report Management Pack and the OMS Capacity solution. Since then, an update has been released for this management pack, and I’d like to take a moment to provide a proper overview of the MP. For those who have not used this management pack and are looking for a solution for capacity forecasting and management, I hope this gives you an idea of the capabilities it provides.

Management Pack Introduction

The OpsLogix Capacity Report MP provides OpsMgr reports that can be used to forecast the trend of any performance data collected by OpsMgr. Like any other OpsMgr reports, the reports provided by this MP can be accessed from the reporting pane in the OpsMgr console, under the “OpsLogix IMP – Capacity trending reports” folder:

image

Installing and Configuring Management Pack

Other than the capacity report MP itself (OpsLogix.IMP.Capacity_v1.0.2.24.mpb), I was also given a zip file containing my license key. This zip file contains an unsealed MP (OpsLogix.IMP.Capacity.License.xml), which holds my license key and is unique to my environment. I need to import the license MP into my OpsMgr management group together with the capacity report MP. Once both MPs are imported, you will be able to see the reports in the folder shown in the screenshot above.

Reports

This MP offers the following reports:

  • Absolute value Report – Single instance
  • Absolute value Report – Multi instance
  • Percentage value Report – Single Instance
  • Percentage value Report – Multi instance
  • Percentage value Report – Multi instance Critical Only
  • Percentage value Report – Single instance Critical Only

I will now go through these reports.

Absolute value Report – Single instance

This report allows you to run a forecast over any performance counter stored in the OpsMgr data warehouse DB. It requires the following parameters:

image

  • From: The start date for the forecast analysis. The default value is Today – 30 days
  • To: The number of forecast days. The default value is 30 days from today
  • Time zone: Choose the time zone of your choice
  • Available Rule Languages: The default value is English. When choosing another language, the performance rules that have display strings defined in that particular language (in the <LanguagePacks> section of management packs) will appear in the “Performance Rule” drop-down list.
  • Performance Rule: This drop-down list contains all the performance rules available for the language that you have chosen.
  • Counter: This drop-down list contains the counters collected by the performance rule you have selected. As a best practice, a perf collection rule should only collect one counter, so you should normally see only one counter in this list.
  • Object: This drop-down list contains the object associated with the performance rule and counter
  • Instance: This drop-down list contains the available instances for the performance rule and counter.
  • Managed Entity: This drop-down list contains the managed entities associated with the performance counter instances.

The report looks something like this:

image

As shown in the screenshot above, the light blue line indicates the forecasted future trend for the particular counter you have chosen. The report also shows the forecasted change and value for the perf counter.

Absolute value Report – Multi instance

This report is very similar to the “Absolute value Report – Single instance” report. The only difference is that you can choose multiple instances in this report:

image

As shown in the screenshot above, we are able to choose multiple instances in the “Instance” section (whereas in the single-instance report, we can only choose one from the drop-down list). The report output displays all instances that you have selected:

image

Percentage value Report – Single Instance

When performance counters are collected by OpsMgr, some counter values are absolute values (such as logical disk free space in MB), while others are percentage based (e.g. % logical disk free space). The percentage value based reports are designed for the percentage based performance counters. Let’s take a look at the “Percentage value Report – Single Instance” report first. This report requires the following parameters:

image

    • From: the start date for the forecast analysis. The default value is today – 30 days.
    • To: the end date for the forecast. The default value is 30 days from today.
    • Time zone: the time zone of your choice.
    • Number of days for Warning level: the warning threshold. If the forecasted value reaches 100% (or 0% when “Reverse forecast direction” is set to true) within the number of days specified in this field, the forecasted capacity state will be warning. The default value is 60 days.
    • Number of days for Critical level: the critical threshold. If the forecasted value reaches 100% (or 0% when “Reverse forecast direction” is set to true) within the number of days specified in this field, the forecasted capacity state will be critical. The default value is 30 days.
    • Reverse forecast direction: by default, the forecasted capacity state changes when the forecasted value reaches 100%. In some cases, however, we are more interested in when the value reaches 0% (e.g. % free disk space). In these scenarios, set “Reverse forecast direction” to “true”.
    • Available Rule Languages: the default value is English. When you choose another language, the performance rules that have display strings defined in that language (defined in management packs) will appear in the “Performance Rule” drop-down list.
    • Performance Rule: this drop-down list contains all the performance rules available for the language that you have chosen.
    • Counter: this drop-down list contains the counters collected by the performance rule that you have selected. As a best practice, a perf collection rule should only collect one counter, so you should normally see only one counter in this drop-down list.
    • Object: this drop-down list contains the object associated with the performance rule and counter.
    • Instance: this drop-down list contains the available instances for the performance rule and counter.
    • Managed Entity: this drop-down list contains the managed entities associated with the performance counter instances.

image

As you can see from the screenshot above, I have chosen a perf collection rule that collects the % logical disk free space for Windows Server 2012. Because this is the single-instance report, we can only select one instance from the drop-down list – in this case, the instance of the perf counter represents the drive letter of a Windows Server 2012 logical disk. I have chosen the D: drive as the instance, and selected all the D: drives on my Hyper-V hosts in the Managed Entity section.

The third item on the report indicates that, according to the forecast, the disk will run out of space in 206.29 days. Since the warning threshold is configured as 400 days and the critical threshold as 200 days, the value 206.29 falls between the two thresholds, and therefore the forecasted capacity state is warning.

For the last (4th) item on the report, the forecast indicates it will run out of capacity in 109.46 days, which is less than the configured critical threshold of 200 days; therefore, the forecasted capacity state is critical.
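The capacity-state logic described above can be sketched as follows. This is my own reading of the report's behaviour, not vendor code; the parameter names simply mirror the report parameters:

```python
# Hypothetical sketch of the forecasted capacity state classification
# described above. Thresholds default to the report's defaults (60/30 days).

def capacity_state(days_until_exhausted, warning_days=60, critical_days=30):
    """Classify the forecasted capacity state from the number of days until
    the counter is forecast to reach 100% (or 0% when the forecast
    direction is reversed)."""
    if days_until_exhausted <= critical_days:
        return "Critical"
    if days_until_exhausted <= warning_days:
        return "Warning"
    return "Healthy"

# The two disks from the report above, with warning=400 and critical=200 days:
print(capacity_state(206.29, warning_days=400, critical_days=200))  # Warning
print(capacity_state(109.46, warning_days=400, critical_days=200))  # Critical
```

Any item forecast to last longer than the warning threshold stays healthy, which is also the population the “Critical Only” reports below filter out.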

Percentage value Report – Multi Instance

This report is similar to the “Percentage value Report – Single Instance”, but it allows you to select multiple instances for the perf counter you have chosen:

image

In the example above, I have chosen the “% Logical Disk Free Space Windows Server 2012” perf collection rule, which collects the % Free Space counter for logical disks on Windows Server 2012 computers. In this case, each instance represents a logical disk’s drive letter (as highlighted). Compared with the single-instance report, we can choose not only a specific drive (such as the C: drive), but also any other drives (as shown below).

image

Percentage value Report – Multi / Single instance Critical Only Reports

The last two reports from this MP are the “Percentage value Report – Multi instance Critical Only” and “Percentage value Report – Single instance Critical Only” reports. The only difference between these two reports and the previously mentioned percentage value reports is that they filter out any items with a healthy or warning forecasted capacity state, and only list the critical items:

image

So if you don’t really care about the healthy and warning items, and only want to concentrate on critical items, you may find these two reports handy.

Summary

The OpsLogix Capacity Report MP provides generic forecasting reports that can be used against any type of performance data collected by OpsMgr. As long as the relevant perf counters are being collected by OpsMgr, the reports can be used when planning future capacity. The audience for this MP is anyone who uses OpsMgr (e.g. server admins, network admins, cloud and fabric admins, DBAs, LOB application owners, etc.).

Lastly, if you have any questions, please feel free to contact me, or the OpsLogix sales team directly (sales@opslogix.com).

Demo – Creating an OpsLogix ProView Dashboard for an Existing OpsMgr Distributed App

Written by Tao Yang

Over the last couple of days, I have spent some time with OpsLogix ProView. OpsLogix ProView is a product that could be a good alternative to the old OpsMgr Visio Add-in. I have recorded a short demo on how to quickly produce a dashboard for an existing Distributed App in OpsMgr.

As shown in the diagram below, the window on the right hand side is the original diagram view for a distributed app in the OpsMgr console, and the window on the left hand side is what I produced in ProView.

image

You can watch the recorded demo on YouTube: