If you are a regular visitor of this blog, you may have noticed that it has been down since last Thursday, and I was only able to get it back online a few hours ago (Monday afternoon my time). The downtime was caused by a failure of the server hosting my blog. My hoster couldn’t recover the server, and ended up restoring my site from a backup they took on 24th April, three weeks ago.
Due to the hoster’s inability to maintain my site, I have lost 3 weeks of data (my 2 most recent blog posts, comments, etc.). Only today, after talking to their technical support people on the phone, did I find out that they back up my site just once a week (and the recent backups were corrupted).
Putting my emotions aside, I managed to find the Windows Live Writer drafts for the 2 blog posts that I lost, and I have just re-published them. I made sure the URLs are the same as the original ones, but you may see the posts appear in your RSS feeds again (as duplicate entries). Also, if you left any comments on my blog over the last 3 weeks, they are probably gone now.
I apologise for any inconvenience this outage may have caused. I am now looking into my own WordPress backup solution.
Also, my hosting plan is due for renewal in about a month’s time, so I think I’ve got to do something about it.
I’m teaming up with Infront Consulting Australia to deliver a 4-day in-person instructor-led SCOM 2012 bootcamp in Melbourne, Australia. The content of this bootcamp was developed by the Infront Consulting Group and it has been very popular internationally.
This bootcamp is designed for SCOM administrators and operators. If you are running SCOM (or planning to implement SCOM) in your environment, I strongly recommend you enrol in this bootcamp and spend 4 days with me and the other folks attending.
Here are the details of this training event:
SCOM 2012 Bootcamp – Australia
Date: 20 – 23 June 2016
Saxons Training Facilities Melbourne
500 Collins Street
Melbourne VIC 3000
Please join us for the first Infront Consulting SCOM 2012 Bootcamp in Australia! Tao Yang is a well-known author, speaker, blogger and SCOM expert who will be guiding you in person through the SCOM 2012 R2 Bootcamp.
This four-day bootcamp is a mix of in-depth instructor-led training and hands-on labs where you will learn how to administer System Center Operations Manager 2012. The course will give students an understanding of the Operations Manager 2012 architecture and features, and how to administer and maintain Operations Manager 2012.
Cost: $3,600 AUD + GST per student, which includes course materials and access to the hands-on labs.
Session 1: Overview of System Center Operations Manager 2012
Session 2: Operations Manager 2012 Architecture
Session 3: Installing Operations Manager 2012
Session 4: Installing the Gateway Server Role
Session 5: Configuring Operations Manager Security
Session 6: Agent Deployment and Configuration
Session 7: Alert Notification and Incident Remediation
Session 8: Management Pack Tuning and Targeting Best Practices
Session 9: Tuning of the Core Microsoft MPs
Session 10: Application Performance Monitoring
Session 11: Network Monitoring in Operations Manager 2012
Session 12: Working in the Operations Manager Shell
Session 13: Building Custom Monitoring Solutions & Distributed Applications
Session 14: Reporting & Dashboards
Session 15: Third Party Extensions
Hope to see you there!
A few days ago, I published a PowerShell module for Azure Automation Hybrid Workers called HybridWorkerToolkit. You can find my blog article HERE.
Yesterday, my good friend and fellow CDM MVP Daniele Grandini (@DanieleGrandini) gave me some feedback, so I’ve updated the module again and incorporated Daniele’s suggestions.
This is the list of updates in this release:
- A new array parameter for New-HybridWorkerEventEntry called “-AdditionalParameters”. This parameter allows users to pass in an array of additional values to be added to the event data.
- A new Boolean parameter for New-HybridWorkerEventEntry called “-LogMinimum”. This is an optional parameter with a default value of $false. When it is set to true, other than the user-specified message and additional parameters, only the Azure Automation job Id is logged as event data (a usage sketch for both new parameters follows below).
As we all know, we pay for the amount of data that gets ingested into our OMS workspace, so this parameter allows you to minimise the size of your events (and thus save money on your OMS spending).
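Here is a minimal usage sketch of both new parameters. Note this is my illustration rather than the module’s official documentation: I’m assuming New-HybridWorkerEventEntry shares the -Id and -Message parameters of New-HybridWorkerRunbookLogEntry shown in my original post, and the values are made up.
# Add two extra values to the event data via -AdditionalParameters
$additional = @("Ticket: INC0012345", "Stage: post-deployment validation")
New-HybridWorkerEventEntry -Id 900 -Message "Deployment step completed." -AdditionalParameters $additional
# Keep the event small: only the message, additional parameters and the job Id are logged
New-HybridWorkerEventEntry -Id 901 -Message "Minimal event to reduce OMS ingestion." -LogMinimum $true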
OK, it’s Friday night and I feel like writing something on this blog. But after a couple of glasses of wine, I don’t really want to write anything technical that requires too much brain power, so I picked an easy topic for tonight. I have been using a Creative Sound Blaster Bluetooth pre-amp and a lapel microphone to record my in-person community presentations. In my experience, the recording quality from that device is very average. Last weekend, after I recorded my presentation at the Global Azure Bootcamp, I decided to ditch it and look for a better solution. Over the last few days, I spent some time looking for a new Bluetooth microphone to replace the Sound Blaster device.
Those who know me well know I’m a bit of a gadget man; I like playing with gadgets. After a few days of research, I ended up with a Sony ECM-AW4 Bluetooth microphone from eBay. To test it, I connected all the required equipment to my Surface Pro 4, created a dummy presentation on this very topic (my presentation and recording equipment for the Surface Pro 4), then delivered it using this equipment and recorded it with Camtasia. You can watch the recording on YouTube:
To summarise, I am using the following devices during the presentation:
1. Sony Bluetooth Microphone ECM-AW4 (http://www.sony.com.au/product/ecm-aw4)
Although this device is designed for digital cameras and camcorders, it also works with smartphones and computers. Some notable features:
- supports a range of up to 50 metres
- supports external lapel microphones
- supports headphones – for private communication between the camera operator (via the receiver mic) and the person on camera (via the headphone connected to the Bluetooth mic). This communication is not recorded or passed to the recording device.
- comes with a wind screen to cover the microphone when shooting outside in windy conditions
- comes with an arm band which allows you to attach the mic to your arm.
2. Audio-Technica AT9903 lapel microphone (http://audio-technica.com.au/products/at9903/)
I’m connecting this mic to the Sony Bluetooth mic only because I already had it. I’ve also tried using the Bluetooth mic without the external lapel mic, and the quality is still pretty good.
3. Creative Sound Blaster Play!2 USB Sound Card (http://au.creative.com/p/sound-blaster/sound-blaster-play-2)
I had to purchase this USB sound card for my Surface Pro 4 because the 3.5mm audio jack on the Surface does not support microphones. I also have a Lenovo Yoga Pro 3 ultrabook and, unfortunately, it has the same limitation. So no matter which computer I use for the presentation, I need a USB sound card. I bought this one because I have been using Creative Sound Blaster external sound cards for many years (right now I have 2 on my desk for the 2 NUCs I’m using as my day-to-day PCs), and this one is very compact – just like a USB dongle.
4. Logitech R800 Presenter (http://business.logitech.com/en-us/product/professional-presenter-r800-business)
In my opinion, this is a must-have device for all your presentations. It is certainly very popular, as I’ve seen many of my MVP friends using the very same device.
5. USB 3 Hub
Because the Surface Pro 4 only has one USB port and I need to connect both the presenter receiver and the Bluetooth microphone receiver to it, I have to use a USB hub. I got this Inateck USB 3 hub with a Gigabit Ethernet adapter a few years ago for my old Surface Pro 2. It’s good that it still works on the Surface Pro 4 running Windows 10 without my having to install any drivers.
I know many of my MVP friends present at user group meetings, and I’m not sure if anyone has come up with other solutions for recording these in-person presentations. I’m pretty happy with the setup I came up with. Overall it’s very compact, so I don’t need to carry too many additional devices with me. If you are looking to achieve something similar, I hope you find this post and the YouTube video useful.
On the other hand, the recording quality is not as good as my desktop (Intel NUC) setup for webinars – but that can be a topic for another day.
23/04/2016 Update: released version 1.0.3 to GitHub and the PowerShell Gallery. The new additions are documented in this blog post.
21/04/2016 Update: updated GitHub and the PowerShell Gallery with version 1.0.2, which contains a minor bug fix and an updated help file.
Over the last few days, I have been working on a PowerShell module for Azure Automation Hybrid Workers. I named this module HybridWorkerToolkit.
This module is designed to run within either a PowerShell runbook or a PowerShell Workflow runbook on Azure Automation Hybrid Workers. It provides a few functions that can be called within the runbook. These functions help gather information about the Hybrid Worker and the runbook runtime environment. The module also provides a function to log structured events to the Hybrid Worker’s Windows event logs.
My good friend and fellow MVP Pete Zerger posted a method he developed to use Windows event logs and OMS as a centralised logging solution for Azure Automation runbooks executed on Hybrid Workers. Pete used the PowerShell cmdlet Write-EventLog to log runbook-related activities to the Windows event log, and these events are then picked up by OMS Log Analytics. This is a very innovative way of using Windows event logs and OMS. However, the event log entries written by Write-EventLog are not structured and lack basic information about your environment and the job runtime. A couple of weeks ago, another friend of mine, Mr. Kevin Holman from Microsoft, also published a PS script that he uses to write to Windows event logs with additional parameters.
So I combined Pete’s idea with Kevin’s script, as well as some code I’ve written in the past for Hybrid Workers, and developed this module.
Why do we want to use Windows event logs combined with OMS for logging runbook activities on Hybrid Workers? As Pete explained in his post, it provides a centralised solution where you can query and retrieve these activity logs for all your runbooks from a single location. Additionally, in my experience (also confirmed by a few friends), when you use Write-Verbose or Write-Output in your runbook with verbose logging enabled, the runbook execution time can increase significantly, especially when loading a module with a lot of activities. I’ve seen a runbook that would normally take a minute or two end up running for over half an hour after I enabled verbose logging. This is another reason I developed this module: it gives you an alternative way to log verbose, error, progress and output messages.
This module provides the following 3 functions:
Note: Although the job runtimes differ between PowerShell runbooks and PowerShell Workflow runbooks, I have spent a lot of time together with Pete making sure these functions can be used in exactly the same way in both.
Get-HybridWorkerConfiguration
This function can be used to get the Hybrid Worker and Microsoft Monitoring Agent (MMA) configuration. A hash table containing the following configuration properties retrieved from the Hybrid Worker and the MMA agent is returned (a usage example follows the list):
- Hybrid Worker Group name
- Automation Account Id
- Machine Id
- Computer Name
- MMA install root
- PowerShell version
- Hybrid Worker version
- System-wide Proxy server address
- MMA version
- MMA Proxy URL
- MMA Proxy user name
- MMA connected OMS workspace Id
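For example, you could pull individual values out of the returned hash table like this (a quick sketch; the key names below are my assumption for illustration, so inspect $config.Keys for the real ones):
$config = Get-HybridWorkerConfiguration
# Hypothetical key names - check $config.Keys in your own environment
Write-Output ("Hybrid Worker group: " + $config['HybridWorkerGroupName'])
Write-Output ("MMA version: " + $config['MMAVersion'])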
Get-HybridWorkerJobRuntimeInfo
This function retrieves the following information about the Azure Automation runbook and the job runtime, returned in a hash table (a usage example follows the list):
- Runbook job ID
- Sandbox Id
- Process Id
- Automation Asset End Point
- PSModulePath environment variable
- Current User name
- Log Activity Trace
- Current Working Directory
- Runbook type
- Runbook name
- Azure Automation account name
- Azure Resource Group name
- Azure subscription Id
- Time taken to start runbook in seconds
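One practical use of this runtime information is sanity-checking the job start-up delay. A quick sketch, assuming the hash table exposes a ‘TimeTakenToStartInSeconds’-style key (the real key name may differ):
$jobInfo = Get-HybridWorkerJobRuntimeInfo
# Hypothetical key name - check $jobInfo.Keys in your own environment
$startDelay = $jobInfo['TimeTakenToStartInSeconds']
if ($startDelay -gt 60) {
    Write-Warning ("This job took " + $startDelay + " seconds to start - the Hybrid Worker may be under load.")
}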
New-HybridWorkerRunbookLogEntry
This function can be used to log event log entries. By default, other than the event message itself, the following information is also logged as part of the event (placed under the <EventData> XML tag):
- Azure Automation Account Name
- Hybrid Worker Group Name
- Azure Automation Account Resource Group Name
- Azure Subscription Id
- Azure Automation Job Id
- Sandbox Id
- Process Id
- Current Working Directory ($PWD)
- Runbook Type
- Runbook Name
- Time Taken To Start Running in Seconds
This function also has an optional Boolean parameter called ‘-LogHybridWorkerConfig’. When this parameter is set to $true, the event created by this function will also contain the following information about the Hybrid Worker and MMA:
- Hybrid Worker Version
- Microsoft Monitoring Agent Version
- Microsoft Monitoring Agent Install Path
- Microsoft Monitoring Agent Proxy URL
- Hybrid Worker server System-wide Proxy server address
- Microsoft OMS Workspace ID
Sample PowerShell Runbook:
Get-HybridWorkerConfiguration | Out-File C:\temp\HybridWorkerConfiguration.txt
Get-HybridWorkerJobRuntimeInfo | Out-File C:\temp\HybridWorkerJobRuntimeInfo.txt
New-HybridWorkerRunbookLogEntry -Id 886 -Message "This is the first test message logged from hybrid worker within a PowerShell runbook."
New-HybridWorkerRunbookLogEntry -Id 887 -Message "This is the second test message logged from hybrid worker within a PowerShell runbook." -Level Error -LogHybridWorkerConfig $true
Sample PowerShell Workflow Runbook:
# Write-Output "Exporting Hybrid Worker config"
Get-HybridWorkerConfiguration | Out-File C:\temp\HybridWorkerConfiguration.txt
# Write-Output "Exporting Job Runtime info"
Get-HybridWorkerJobRuntimeInfo | Out-File C:\temp\HybridWorkerJobRuntimeInfo.txt
# Write-Output "Logging first event log entry."
New-HybridWorkerRunbookLogEntry -Id 888 -Message "This is the first test message logged from hybrid worker within a PowerShell Workflow runbook."
# Write-Output "Logging second event log entry."
New-HybridWorkerRunbookLogEntry -Id 889 -Message "This is the second test message logged from hybrid worker within a PowerShell Workflow runbook." -Level Warning -LogHybridWorkerConfig $true
As you can see, the way to call these functions is exactly the same in PowerShell and PowerShell Workflow runbooks.
Hybrid Worker Configuration output:
Hybrid Worker Job Runtime Info output:
Event generated (with basic information, i.e. without setting -LogHybridWorkerConfig to $true):
Event generated (when setting -LogHybridWorkerConfig to $true):
Consuming collected events in OMS
Once you have collected these events in OMS, you can use search queries to find them, and you can also create OMS alerts to notify you using your preferred methods.
Searching Events in OMS
For example, I can use this query to get all events logged by a particular runbook:
Type=Event “RunbookName: Test-HybridWorkerOutput-PSW”
or use this query to get all events for a particular job:
Type=Event “JobId: 73A3827D-73F8-4ECC-9DE1-B9340FB90744”
For example, if I want to create an OMS alert for any Error events logged by New-HybridWorkerRunbookLogEntry, I can use a query like this one:
Type=Event Source=AzureAutomation?Job* EventLevelName=Error
Download / Deploy this module
I have published this module on GitHub as well as the PowerShell Gallery:
GitHub Repository: https://github.com/tyconsulting/HybridWorkerToolkit
PowerShell Gallery: http://www.powershellgallery.com/packages/HybridWorkerToolkit/1.0.3
Last Saturday, I presented the topic “What’s New in OMS” at the Global Azure Bootcamp 2016 Melbourne event. I recorded it using Camtasia and uploaded it to YouTube. You can watch it here:
You can also download the slide deck from HERE. When I was editing the recording, I noticed that I might not have placed the lapel microphone properly, as it was rubbing against my collar, and you will notice some noise in the video. Oh well, something to improve next time.
My next webinar with OpsLogix will take place on Wednesday 6th April 2016. In this webinar, I will demonstrate how to configure the OpsLogix VMware management pack, and provide an overview of this MP.
If you are interested in this MP, or are looking for a solution for monitoring your VMware infrastructure, please make sure you attend this webinar, because places are limited.
You can find more details about this webinar from OpsLogix’s blog: http://www.opslogix.com/opslogix-vmware-mp-overview-with-tao-yang/
The registration is via Eventbrite:
I’m looking forward to seeing you then!
This is the 20th instalment of the Automating OpsMgr series. Previously in this series:
- Automating OpsMgr Part 1: Introducing OpsMgrExtended PowerShell / SMA Module
- Automating OpsMgr Part 2: SMA Runbook for Creating ConfigMgr Log Collection Rules
- Automating OpsMgr Part 3: New Management Pack Runbook via SMA and Azure Automation
- Automating OpsMgr Part 4: Creating New Empty Groups
- Automating OpsMgr Part 5: Adding Computers to Computer Groups
- Automating OpsMgr Part 6: Adding Monitoring Objects to Instance Groups
- Automating OpsMgr Part 7: Updated OpsMgrExtended Module
- Automating OpsMgr Part 8: Adding Management Pack References
- Automating OpsMgr Part 9: Updating Group Discoveries
- Automating OpsMgr Part 10: Deleting Groups
- Automating OpsMgr Part 11: Configuring Group Health Rollup
- Automating OpsMgr Part 12: Creating Performance Collection Rules
- Automating OpsMgr Part 13: Creating 2-State Performance Monitors
- Automating OpsMgr Part 14: Creating Event Collection Rules
- Automating OpsMgr Part 15: Creating 2-State Event Monitors
- Automating OpsMgr Part 16: Creating Windows Service Monitors
- Automating OpsMgr Part 17: Creating Windows Service Management Pack Template Instance
- Automating OpsMgr Part 18: Second Update to the OpsMgrExtended Module (v1.2)
- Automating OpsMgr Part 19: Creating Any Types of Generic Rules
OK, it has been 6 months since my last post in this blog series. I simply didn’t have time to continue, but I know this series is far from over. I am spending A LOT of time on OMS these days; some of you may have heard of (or have already read) our newly published book Inside Microsoft Operations Management Suite (TechNet, Amazon). I’m hoping you have all played with OMS and maybe even started thinking about which workloads you can move to OMS.
As we all know, we can pretty much categorise SCOM data into the following 4 categories:
- Performance Data
- Event Data
- Alert Data
- State Data
Unlike SCOM, OMS does not use classes, so there are no classes, relationships or state data in OMS; but the other 3 types can easily be brought over. For SCOM alert data, you can simply enable the Alert solution after you have connected your SCOM management group to your OMS workspace (OMS also has its own alerting and remediation capability). For existing performance collection and event collection rules, we can easily recreate them using a different Write Action module to store the data in OMS. In this post, I will show you how to gather all performance collection rules from an existing OpsMgr management pack and re-create them for OMS (stored as PerfHourly data in OMS). But before we dive in, let’s quickly go through the performance data in OMS.
OMS Performance Data
There are 2 types of performance data in OMS. The first, PerfHourly data, was introduced with the Capacity Planning solution. As the name suggests, PerfHourly data is hourly aggregated performance data; no raw perf data is stored in OMS for this type.
The other type of performance data is called Near Real Time (NRT) performance data. NRT perf data can be accessed using queries such as Type=Perf. Unlike PerfHourly data, NRT perf rules can collect perf data as frequently as every 10 seconds, and the aggregation interval is every half hour. Both raw and aggregated NRT perf data are stored in OMS; raw data is kept for 14 days, and the OMS search queries only return aggregated data.
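For example, a simple NRT perf search looks like this (field names per the standard OMS Perf record schema; the counter is just an illustration):
Type=Perf ObjectName=Processor CounterName="% Processor Time"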
From the management pack point of view, writing perf collection rules for NRT perf data is a lot more complicated. First, we must always author 2 rules for every counter we want to collect: one for the raw data and one for the aggregated data. Secondly, when mapping NRT performance data, the object name must always follow the format “\\<Computer FQDN>\<Object Name>”. Lastly, the collection rule that collects the aggregated data must use a condition detection module called “Microsoft.IntelligencePacks.Performance.PerformanceAggregator”.
Since an OpsMgr rule can have at most one condition detection member module, converting existing OpsMgr perf collection rules that already have a condition detection member module to OMS NRT perf rules may not be that straightforward. In such cases, we may need to create additional module types, and things can get very complicated. It is certainly not something a generic script can achieve.
Therefore, in order to make the script work with any existing OpsMgr performance collection rules, I have chosen to store the perf data in OMS as PerfHourly data because it has far less “red tape”. Having said that, please keep in mind that it is still possible to re-create OpsMgr perf collection rules as OMS NRT perf collection rules; it’s just not something we can develop as a generic automated solution.
If you want to learn more about performance data in OMS, or how to author OMS-based collection rules in SCOM using VSAE, please refer to Chapter 5: Working with Performance Data and Chapter 11: Custom Management Pack Authoring of the Inside OMS book I mentioned at the beginning of this post.
PowerShell Script: Copy-PerfRulesToOMS.ps1
In the previous posts of this blog series, I simply placed the scripts / runbooks within the post itself. I have decided to use GitHub from now on, so the script Copy-PerfRulesToOMS.ps1 can be found in one of my public GitHub repositories: https://github.com/tyconsulting/OpsMgr-SDK-Scripts/blob/master/OMS%20Related%20Scripts/Copy-PerfRulesToOMS.ps1
This script reads the configuration of every performance collection rule in a given OpsMgr management pack, and then recreates these rules with the same configuration but storing the performance data as PerfHourly data in your OMS workspace. The OMS perf collection rules created by this script are saved in a brand new unsealed MP with the name ‘<Original MP name>.OMS.Perf.Collection’ and display name ‘<Original MP display name> OMS PerfHourly Addon’. For example, for the OpsLogix VMware MP used later in this post, the new MP would be named ‘OpsLogix.IMP.VMWare.Monitoring.OMS.Perf.Collection’.
This script has the following pre-requisites:
- OpsMgrExtended PS module loaded on the machine where you are executing the script.
- An account with OpsMgr administrative rights
- OpsMgr management group must be connected to OMS
The script takes the following input parameters:
- ManagementServer – Specify the name of an OpsMgr management server that you wish to connect to. This is a mandatory parameter.
- Credential – Specify an alternative credential that has admin rights to the OpsMgr management group. This is an optional parameter.
- ManagementPackName – Specify the source MP whose perf collection rules you want to copy to OMS. This is not the display name but the actual MP name; in the OpsMgr console, when you open the management pack properties, it is the ‘ID’ field. For example, since I’m going to use the OpsLogix VMware management pack as the example in this post, the name of this MP is “OpsLogix.IMP.VMWare.Monitoring”:
Executing the script:
I have added many verbose messages to the script, so you can use the optional -Verbose switch when executing it:
$cred = Get-Credential
.\Copy-PerfRulesToOMS.ps1 -ManagementServer "ManagementServerName" -Credential $cred -ManagementPackName "OpsLogix.IMP.VMWare.Monitoring" -Verbose
The script first connects to the management group, reads the source MP, then retrieves all performance collection rules from it. If the source MP contains any perf collection rules, the script creates a new unsealed MP and starts creating a corresponding OMS PerfHourly collection rule for each original OpsMgr perf collection rule. The OMS PerfHourly collection rules will have the same properties and input parameters, as well as the same data source and condition detection member modules, as the original OpsMgr perf collection rules; but they are configured to use a different Write Action member module that sends the perf data to OMS.
- The script detects OpsMgr perf collection rules in the source MP by examining the actual write action member modules. If any of the write action member modules is either ‘Microsoft.SystemCenter.CollectPerformanceData’ (used to write perf data to the OpsMgr operational DB) or ‘Microsoft.SystemCenter.DataWarehouse.PublishPerformanceData’ (used to write perf data to the OpsMgr DW DB), the script considers the rule a perf collection rule (see the sketch after these notes).
- When the source MP is unsealed, the script will fail under the following circumstances:
- a perf collection rule in the source MP is targeting a class defined in the source MP
- a perf collection rule in the source MP uses any data source or condition detection module types that are defined in the source MP
- The script does not disable any existing perf collection rules in the source MP.
- The script copies all attributes from the source perf collection rule to the new OMS PerfHourly rule, including the ‘Enabled’ property. So if the source perf collection rule is disabled by default, then the newly created OMS PerfHourly rule will also be disabled by default.
- Depending on the number of OpsMgr perf collection rules to be processed, this script can take some time to finish because it writes the new OMS PerfHourly rules to the destination MP one at a time. I purposely coded the script this way (rather than writing everything at once) because, by doing so, if a particular rule fails MP verification, it does not impact the creation of the other rules.
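To make the detection note above concrete, here is a conceptual sketch of that logic – not an excerpt from the actual script – assuming $mp is a ManagementPack object already retrieved via the OpsMgr SDK (for example, through the OpsMgrExtended module’s connection):
# Well-known write action module types used by OpsMgr perf collection rules
$perfWriteActions = @(
    'Microsoft.SystemCenter.CollectPerformanceData',
    'Microsoft.SystemCenter.DataWarehouse.PublishPerformanceData'
)
# A rule counts as a perf collection rule if any of its write action
# member modules references one of the two module types above
$perfRules = $mp.GetRules() | Where-Object {
    $_.WriteActionCollection | Where-Object { $perfWriteActions -contains $_.TypeID.GetElement().Name }
}
Write-Output ("Found " + @($perfRules).Count + " perf collection rules in " + $mp.Name)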
When the execution is completed, you will see a new unsealed MP created in your management group:
and if I export it to XML and open it in MPViewer, I can see all the newly created OMS PerfHourly collection rules:
At this stage, I don’t need to do anything else and all the performance data collected by the source MP (OpsLogix VMware MP in this example) will be stored not only in OpsMgr, but also in OMS.
Because the original OpsMgr perf collection rules and the corresponding OMS PerfHourly rules share the exact same data source modules with the same configuration, this does not add additional overhead to the OpsMgr agents, thanks to the OpsMgr Cook Down feature. However, please keep in mind that from now on, if you need to apply overrides to either rule, it’s best to apply the same override to both rules (so you don’t break Cook Down).
Although the PerfHourly data will not appear in your OMS workspace straight away (due to the aggregation process), you should be able to see it within a few hours:
As you can see in the above screenshot, I now have all the VMware-related counters defined in the OpsLogix VMware MP in my OMS workspace. The RootObjectName ‘VCENTER01’ is the vCenter server in my lab, and the ObjectDisplayName ‘exs01.corp.tyang.org’ is the VMware ESX host in my lab.
In this post, I have shared a script and demonstrated how to use it to migrate existing OpsMgr performance collection rules (in this example, the VMware-related counters defined in the OpsLogix VMware MP) to OMS. We could easily write a very similar script for migrating existing event collection rules (maybe a blog topic for another day).
In the next post of this series, I will demonstrate how to use the OpsMgrExtended module, the SharePointSDK module, Azure Automation, Hybrid Workers and SharePoint Online to build a portal for scheduling OpsMgr maintenance mode – this is based on one of the demos in my Azure Automation session with Pete Zerger at SCU 2016 APAC & Australia.
Until next time, happy automating!