I have been refreshing my lab servers to Windows Server 2016, using the non-GUI version (Server Core) wherever possible.
When working on Server Core servers, I found it troublesome that I can't access the Microsoft Monitoring Agent applet in Control Panel:
Although I can use PowerShell and the MMA agent COM object AgentConfigManager.MgmtSvcCfg, sometimes it is easier to use the applet.
After some research, I found the applet can be launched from the command line:
C:\Program Files\Microsoft Monitoring Agent\Agent\AgentControlPanel.exe
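For quick checks on Server Core, the same agent configuration can also be inspected via the AgentConfigManager.MgmtSvcCfg COM object mentioned above; a minimal sketch:

```powershell
# Sketch: query the MMA configuration via its COM object on Server Core
$mma = New-Object -ComObject 'AgentConfigManager.MgmtSvcCfg'

# List the OpsMgr management groups the agent reports to
$mma.GetManagementGroups() | Format-List

# List any connected OMS (cloud) workspaces
$mma.GetCloudWorkspaces() | Format-List
```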
OpsLogix has recently released a new product to the market called "EZalert". It learns the operator's alert handling behaviour and is then able to automatically update alert resolution states based on its learning outcome. You can find more information about this product here: http://www.opslogix.com/ezalert/. I was given a trial license for evaluation and review. Today I installed it on a dedicated VM and connected it to my lab OpsMgr management group.
Once installed, I could see a new dashboard view added in the monitoring pane, and this is where we tune all the alerts:
From this view, I can see all the active alerts, and I can start tuning them either one at a time, or I can multi-select and set the desired state in bulk. Once I have gone through all the alerts on the list, I can choose to save the configuration under the Settings tab:
Once this is done, any new alert that has previously been trained will be updated automatically when it is generated. For example, I created a test alert and trained EZalert to set the resolution state to Closed; as you can see below, it was created at 9:44:57 AM and modified by EZalert 2 seconds later:
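This behaviour is easy to verify from the OpsMgr command shell; a quick sketch (the alert name filter is just an example) that shows when and by whom an alert was last modified:

```powershell
# Sketch: check the resolution state and last-modified details of a test alert
Import-Module OperationsManager
Get-SCOMAlert -Name 'Test Alert*' |
    Select-Object Name, ResolutionState, TimeRaised, LastModified, LastModifiedBy
```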
Once the initial training process is completed and saved, the training tab will become empty. Any new alerts generated will then show up in the training tab, where you can see whether a suggested state has been assigned, and you can also modify it by assigning another state:
And all previously trained alerts can be found in the history tab:
You can also create exclusions. If you want EZalert to skip certain alerts for a particular monitoring object (e.g. a disk space alert generated on C:\ on Server A), you can do so by creating an exclusion:
In my opinion, this is a very good approach to alert tuning. When setting alert resolution states, you only need to do it once; EZalert learns your behaviour and repeats your action for you in the future. It will be a huge time saver for all your OpsMgr operators over time. It also comes in very handy for alert tuning in the following situations:
- When you have just deployed a new OpsMgr management group
- When you have introduced new management packs in your management group
- When you have updated existing management packs to the newer versions
EZalert vs Alert Update Connector
Before EZalert's time, I had been using the OpsMgr Alert Update Connector (AUC) from Microsoft (https://blogs.technet.microsoft.com/kevinholman/2012/09/29/opsmgr-public-release-of-the-alert-update-connector/). I really struggled when configuring AUC, so I developed my own solution to configure AUC in an automated fashion (http://blog.tyang.org/2014/04/19/programmatically-generating-opsmgr-2012-alert-update-connector-configuration-xml/) and I also developed a management pack to monitor it (http://blog.tyang.org/2014/05/31/updated-opsmgr-2012-alert-update-connector-management-pack/). In my opinion, AUC is a solid solution. It's been around for many years and is used by many customers. But I do find it has some limitations:
- Configuration process is really hard
- Configuration is based on rules and monitors, not alerts, so it's easy to mistakenly configure rules and monitors that don't generate alerts (e.g. performance/event collection rules, aggregate/dependency monitors, etc.)
- Modifying the existing configuration causes a service interruption due to the required service restart
- When running in a distributed environment (on multiple management servers), you need to make sure the configuration files are consistent across these servers and that only one instance is running at any given time.
- No way to easily view the current configurations (without reading XML files)
I think EZalert has definitely addressed some of these shortcomings:
- Alert training process is performed on the OpsMgr console
- No need to restart services and reload configuration files after new alerts are added or when existing alerts are modified
- Configurations are saved in a SQL database, not text based files
- The current configuration is easily viewable within the SCOM console
However, AUC has the following advantages over EZalert:
- AUC supports assigning different values to different groups or individual objects. In EZalert, exceptions can only be created for individual monitoring objects, and it doesn't seem like you can assign a different value for an object; it's simply an on/off exception
- Other than the alert resolution state, AUC can also be used to update other alert properties (e.g. custom fields, owner, ticket ID, etc.). EZalert doesn't seem to be able to update other alert fields.
Things to Consider
When using EZalert, in my opinion, there are a few things you need to consider:
1. It does not replace requirements for overrides
If you are training EZalert to automatically close an alert when it's generated, you should ask yourself: do you really need this alert to be generated in the first place? Unless you want to see these alerts in the alert statistics reports, you should probably disable the alert via overrides. EZalert should not be used to replace overrides. If you don't need an alert, disable it! This saves resources on both the SCOM server and the agent, as well as database space to store the alert.
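If you decide an alert really isn't needed, the monitor behind it can be disabled with an override from the command shell; a minimal sketch (the monitor, group and override management pack names below are examples, not from any specific environment):

```powershell
# Sketch: disable an unwanted monitor for a group via an override
Import-Module OperationsManager
$monitor    = Get-SCOMMonitor -DisplayName 'Logical Disk Free Space'    # example monitor
$group      = Get-SCOMGroup -DisplayName 'All Windows Computers'        # example group
$overrideMp = Get-SCOMManagementPack -DisplayName 'My Overrides MP'     # unsealed MP to store the override
Disable-SCOMMonitor -Monitor $monitor -Group $group -ManagementPack $overrideMp -Enforce
```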
2. Training Monitor generated alerts
As we all know, we shouldn't manually close monitor-generated alerts. So when you are training monitor alerts, make sure you don't train EZalert to update the resolution state to "Closed"; consider using other states such as "Resolved" instead.
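Monitor-generated alerts can be identified by the IsMonitorAlert property, which helps when deciding which alerts are safe to train to "Closed"; a quick sketch using the OperationsManager module:

```powershell
# Sketch: list active alerts that were generated by monitors
Import-Module OperationsManager
Get-SCOMAlert -ResolutionState 0 |
    Where-Object { $_.IsMonitorAlert } |
    Select-Object Name, MonitoringObjectDisplayName, ResolutionState
```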
3. Create Scoped roles for normal operators in order to hide the EZalert dashboard view
You may not want normal operators to train alerts, so instead of using the built-in Operators role, it's better to create your own scoped role and hide the EZalert dashboard view from normal operators.
I believe EZalert has some strong use cases. Unless you have a very complicated alert-flow automation process that leverages other alert fields such as custom fields, owner, etc. (e.g. for generating tickets) and you are currently using AUC for that particular reason, I think EZalert gives you a much more user-friendly experience for ongoing alert tuning.
I have personally implemented AUC in a few places, and I still get calls every now and then from those places asking for help with AUC configuration, even though it's been a few years since it was implemented. Also, I'm not exactly sure whether AUC is officially supported by Microsoft, because it was originally developed by an OpsMgr PFE in his spare time (I'm not entirely sure about the supportability of AUC; maybe someone from MSFT can confirm). EZalert, on the other hand, is a commercial product, and the vendor OpsLogix provides full support for it.
Lastly, if you have any questions about EZalert, please feel free to contact OpsLogix directly.
I have been blogging for six and a half years and to this moment, I'm still enjoying it. A few months ago my better half decided to start blogging as well. Although my partner also works in IT as a project manager, her real passions are photography and cooking, so she decided to start a blog focused on food and recipes. By doing this, not only does she get to create her favourite dishes, she also gets to take pictures of them.
There was a lot of preparation to get her started. I helped her register her chosen domain name, set up a WordPress site hosted with the same provider as my blog and company website, and also bought a lot of cooking, photography and recording equipment. Now her site is up, and she has already posted 4 recipes. You can check it out at http://www.lemontaste.com.au
You can also follow her on social media:
Facebook page: https://www.facebook.com/lemontasteblog/
Please feel free to share the links with your friends and family, it will be much appreciated!
For those who know me and my partner well on a personal level, hopefully you all agree that she is amazing when it comes to cooking. Even my 4-year-old daughter said to me, "Daddy is a cook, Mummy is a chef!" Together, we have already come up with over 20 recipes that she can blog about. However, given the time and effort required for each blog post, she won't be able to blog as fast as I do. But I promise that I'll keep reminding her and helping her get them published one at a time. At the end of the day, I really enjoy these blog posts too, because I get to eat the leftovers – once the photos have been taken, it's all mine!
Here are some of her most recent dishes (all photos were taken by her); there are a few more on her blog:
Lastly, if you have any questions, feel free to contact her directly. She will be more than happy to answer them.
As many of you may already know, the legendary System Center Universe franchise has been merged with ExpertsLive, which is also a popular community event over in Europe. As a result, the upcoming SCU Australia has been renamed ExpertsLive Australia. Unlike last year, instead of just a one-day event, it is going to be a two-day event with 3 tracks: Cloud, Data Center and Enterprise Client Management. ExpertsLive Australia 2017 is going to be held at Crown Promenade Melbourne on the 6th and 7th of April.
I will co-present 2 sessions with my friend and MVP veteran Pete Zerger (@pzerger). Our topics are:
- Discovering and monitoring your network topology using OMS
- Cloud Based Automation Overview
Many local and international speakers have already been confirmed, such as Alex Verkinderen, James Bannan, Thomas Maurer, Marcel Zehner, David Obrien, and more!
Make sure you check out the event website: http://www.expertslive.org.au and hopefully I’ll see you all there!
A few days ago I found a bug in the cPowerShellPackageManagement DSC resource module that was caused by the previous update (v1.0.0.1).
In version 1.0.0.1, I added the –AllowClobber switch to the Install-Module cmdlet, which was explained in my previous post: http://blog.tyang.org/2016/12/16/dsc-resource-cpowershellpackagemanagement-module-updated-to-version-1-0-0-1/
However, I only just noticed that despite the pre-installed version of the PowerShellGet module being the same on Windows Server 2016 and in WMF 5.0 for Windows Server 2012 R2, the Install-Module cmdlet is slightly different. In Windows 10 and Windows Server 2016, the Install-Module cmdlet has the "AllowClobber" switch:
In Windows Server 2012 R2, the Install-Module cmdlet does not have the –AllowClobber switch:
Therefore I had to update the DSC resource to detect whether the AllowClobber switch exists.
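The detection can be done by inspecting the cmdlet's parameter list rather than relying on version numbers; a minimal sketch (the module name being installed is just an example, not the resource's actual code):

```powershell
# Sketch: only pass -AllowClobber when the local Install-Module supports it
$installModuleCmd = Get-Command -Name Install-Module -Module PowerShellGet
$installModuleParams = @{
    Name  = 'xStorage'   # example module name
    Force = $true
}
if ($installModuleCmd.Parameters.ContainsKey('AllowClobber')) {
    $installModuleParams.Add('AllowClobber', $true)
}
Install-Module @installModuleParams
```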
Additionally, I have made a few other stability improvements, and added a dependency on the PowerShellGet module in the module manifest file.
This updated version can be found on both GitHub and PowerShell Gallery:
PowerShell Gallery: https://www.powershellgallery.com/packages/cPowerShellPackageManagement/188.8.131.52
Microsoft’s PFE Wei Hao Lim has published an awesome blog post that maps OpsMgr ACS reports to OMS search queries (https://blogs.msdn.microsoft.com/wei_out_there_with_system_center/2016/07/25/mapping-acs-reports-to-oms-search-queries/)
There are 36 queries on Wei's list, so it would take a while to manually create them all as saved searches via the OMS portal. Since I can see myself reusing these saved searches in many OMS engagements, I have created a script to create them automatically using the OMS PowerShell module AzureRM.OperationalInsights.
So here’s the script:
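The script itself is embedded as a GitHub Gist; the core call it makes for each query can be sketched as follows (the resource group, workspace, category and search names are placeholders for illustration):

```powershell
# Sketch: create one saved search via AzureRM.OperationalInsights
Import-Module AzureRM.OperationalInsights
Login-AzureRmAccount

New-AzureRmOperationalInsightsSavedSearch `
    -ResourceGroupName 'MyOMSResourceGroup' `
    -WorkspaceName 'MyOMSWorkspace' `
    -SavedSearchId 'ACS Reports|Forensic - All Events' `
    -DisplayName 'ACS: Forensic - All Events' `
    -Category 'ACS Reports' `
    -Query 'Type=SecurityEvent' `
    -Version 1
```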
You must run this script in PowerShell version 5 or later. Lastly, thanks Wei for sharing these valuable queries with the community!
The OMSDataInjection module was only updated to v1.1.1 less than 2 weeks ago. I had to update it again to cater for changes in the OMS HTTP Data Collector API.
I only found out last night, after being made aware that people had started getting errors using this module, that the HTTP response code for a successful injection has changed from 202 to 200. The documentation for the API was updated a few days ago (as I can see from GitHub):
This is what’s been updated in this release:
- Updated injection result error handling to reflect the change of the OMS HTTP Data Collector API response code for successful injection.
- Changed the UTCTimeGenerated input parameter from mandatory to optional. When it is not specified, the injection time will be used for the TimeGenerated field in the OMS log entry.
If you are using the OMSDataInjection module, I strongly recommend updating to this release.
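The result-handling change boils down to treating HTTP 200 (rather than 202) as success; a rough sketch of the pattern, where $uri, $headers and $body are placeholders built per the Data Collector API specification:

```powershell
# Sketch: treat HTTP 200 (previously 202) as a successful injection
try {
    $response = Invoke-WebRequest -Uri $uri -Method Post -Headers $headers -Body $body -UseBasicParsing
    if ($response.StatusCode -eq 200) {
        Write-Output 'Data successfully injected into OMS.'
    }
} catch {
    # Invoke-WebRequest throws on non-2xx responses
    Write-Error "Injection failed: $_"
}
```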
PowerShell Gallery: https://www.powershellgallery.com/packages/OMSDataInjection
Back in September this year, I published a PowerShell DSC resource called cPowerShellPackageManagement. This DSC resource allows you to manage PowerShell repositories and modules on any Windows machine running PowerShell version 5 or later. You can read more about this module in my previous post here: http://blog.tyang.org/2016/09/15/powershell-dsc-resource-for-managing-repositories-and-modules/
A couple of weeks ago my MVP buddy Alex Verkinderen had an issue using this DSC resource in Azure Automation DSC. After some investigation, I found a minor bug in the DSC resource: when you use it to install modules, sometimes you may get an error like this:
Basically, it is complaining that a cmdlet from the module you are trying to install already exists. To fix it, I updated the DSC resource and added the –AllowClobber switch to the Install-Module cmdlet.
I have published the updated version to both PowerShell Gallery (https://www.powershellgallery.com/packages/cPowerShellPackageManagement/184.108.40.206) and GitHub (https://github.com/tyconsulting/PowerShellPackageManagementDSCResource/releases/tag/220.127.116.11)
If you are using this DSC resource at the moment, make sure you check out the update.
Currently in OMS, there are 3 assessment solutions for various Microsoft products. They are:
- Active Directory Assessment Solution
- SQL Server Assessment Solution
- SCOM Assessment Solution
A few days ago, I needed to export the assessment rules from each solution and hand them over to a customer (so they would know exactly which areas are being assessed). So I developed the following queries to extract the details of the assessment rules:
AD Assessment Solution query:
Type=ADAssessmentRecommendation | Dedup Recommendation | select FocusArea,AffectedObjectType,Recommendation,Description | Sort FocusArea
SQL Server Assessment Solution query:
Type=SQLAssessmentRecommendation | Dedup Recommendation | select FocusArea,AffectedObjectType,Recommendation,Description | Sort FocusArea
SCOM Assessment Solution query:
Type=SCOMAssessmentRecommendation | Dedup Recommendation | select FocusArea,AffectedObjectType,Recommendation,Description | Sort FocusArea
In order to use these queries, you need to make sure these solutions are enabled and already collecting data. You may also need to change the search time window to at least the last 7 days because, by default, the assessment solutions only run once a week.
Once you get the result in the OMS portal, you can easily export it to a CSV file by hitting the Export button.
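Alternatively, the same queries can be run and exported from PowerShell using the AzureRM.OperationalInsights module; a sketch (resource group and workspace names are placeholders, and each search result is assumed to be a JSON string):

```powershell
# Sketch: run an assessment query and export the results to CSV
Import-Module AzureRM.OperationalInsights

$query = 'Type=ADAssessmentRecommendation | Dedup Recommendation | select FocusArea,AffectedObjectType,Recommendation,Description | Sort FocusArea'
$result = Get-AzureRmOperationalInsightsSearchResults `
    -ResourceGroupName 'MyOMSResourceGroup' `
    -WorkspaceName 'MyOMSWorkspace' `
    -Query $query -Top 5000 `
    -Start (Get-Date).AddDays(-7) -End (Get-Date)

# Convert each JSON result into an object and export
$result.Value | ForEach-Object { "$_" | ConvertFrom-Json } |
    Export-Csv -Path '.\ADAssessmentRules.csv' -NoTypeInformation
```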
Over the last few days, I had a requirement to inject events from .evtx files into OMS Log Analytics. A typical .evtx file that I need to process contains over 140,000 events. Since Azure Automation runbooks have a maximum execution time of 3 hours, in order to make the runbook more efficient, I also had to update my OMSDataInjection PowerShell module to support bulk insert (http://blog.tyang.org/2016/12/05/omsdatainjection-powershell-module-updated/).
I have published the runbook on GitHub Gist:
Note: In order to use this runbook, you MUST use the latest OMSDataInjection module (version 1.1.1) because of the bulk insert.
You will need to specify the following parameters:
- EvtExportPath – the file path (e.g. an SMB share) to the .evtx file.
- OMSConnectionName – the name of the OMSWorkspace connection asset you created previously. This connection type is defined in the OMSDataInjection module.
- OMSLogTypeName – the OMS log type name that you wish to use for the injected events.
- BatchLimit – the number of events injected in a single bulk request. This is an optional parameter; the default value is 1000.
- OMSTimeStampFieldName – for the OMS HTTP Data Collector API, you need to tell the API which field in your log represents the timestamp. Since all events extracted from .evtx files have a "TimeCreated" field, the default value for this parameter is 'TimeCreated'.
You can further customise the runbook and choose which fields from the evtx events you wish to exclude. For the fields you wish to exclude, add them to the $arrSkippedProperties array variable (lines 25–31). I have pre-populated it with a few obvious ones; you can add and remove entries to suit your requirements.
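The overall read-and-batch approach can be sketched as follows (the file path and batch size are placeholders; in the real runbook the injection itself is done via the OMSDataInjection module):

```powershell
# Sketch: read events from an .evtx file and process them in batches
$batchLimit = 1000
$events = Get-WinEvent -Path '\\FileServer\Share\Security.evtx' -Oldest

for ($i = 0; $i -lt $events.Count; $i += $batchLimit) {
    $lastIndex = [Math]::Min($i + $batchLimit, $events.Count) - 1
    $batch = $events[$i..$lastIndex]
    # Each event's message (or its raw XML, when the formatted description
    # is unavailable) would be converted and bulk-injected into OMS here
    Write-Output "Processing events $i to $lastIndex..."
}
```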
Lastly, sometimes you will get events whose formatted description cannot be displayed, e.g.:
When the runbook cannot get the formatted description of an event, it will use the XML content as the event description instead.
Sample event injected by this runbook: