Author Archives: Tao Yang

Recordings Available for the VSAE MP Authoring Webinar with Squared Up

Written by Tao Yang

Last night, I conducted 2 webinars with Richard Benwell of Squared Up on MP Authoring using VSAE. I recorded both sessions from my computer using Camtasia, and the recordings for both sessions are now available on Squared Up’s YouTube channel:

First Session: https://www.youtube.com/watch?v=oH035DgbUSQ

Second Session: https://www.youtube.com/watch?v=Xu3yRE770QA

Lastly, the workshop guide, slide deck and the sample VSAE project are also available on GitHub:

https://github.com/tyconsulting/SquaredUp-VSAE-Workshop

HybridWorkerToolkit PowerShell Module Updated to Version 1.0.3

Written by Tao Yang

A few days ago, I published a PowerShell module called HybridWorkerToolkit to be used on Azure Automation Hybrid Workers. You can find my blog article HERE.

Yesterday, my good friend and fellow CDM MVP Daniele Grandini (@DanieleGrandini) gave me some feedback, so I’ve updated the module again and incorporated Daniele’s suggestions.

This is the list of updates in this release:

  • A new array parameter for New-HybridWorkerEventEntry called “-AdditionalParameters”. This parameter allows users to pass an array of additional parameters to be added to the event data.


  • A new Boolean parameter for New-HybridWorkerEventEntry called “-LogMinimum”. This is an optional parameter with a default value of $false. When it is set to $true, only the user-specified messages, any additional parameters and the Azure Automation job Id are logged as event data.


Since we pay for the amount of data injected into our OMS workspace, this parameter allows you to minimise the size of your events (and thus save money on your OMS spend).
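For illustration, a call using both new parameters inside a runbook might look like the sketch below. Only -AdditionalParameters and -LogMinimum are taken from the release notes above; the -Message parameter name is an assumption for this example, so check the module help for the actual syntax.

# Illustrative only: -AdditionalParameters and -LogMinimum are from this release,
# the -Message parameter name is an assumption.
$extraParams = @('TicketNumber: INC0012345', 'Environment: Production')

New-HybridWorkerEventEntry -Message 'Runbook step completed.' `
                           -AdditionalParameters $extraParams `
                           -LogMinimum $true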

I have published this new release to both GitHub and PowerShell Gallery.

Gadget Show Off: My Presentation and Recording Equipment for Surface Pro 4

Written by Tao Yang

OK, it’s Friday night and I feel like writing something on this blog. But after a couple of glasses of wine, I don’t really want to write anything technical that requires too much brain power, so I picked an easy topic for tonight. I have been using a Creative Sound Blaster Bluetooth pre-amp and a lapel microphone to record my in-person community presentations. Based on my experience, the recording quality from that device is very average. Last weekend, after I recorded my presentation at Global Azure Bootcamp, I decided to ditch it and look for a better solution. Over the last few days, I spent some time looking for a new Bluetooth microphone to replace the Sound Blaster device.

Those who know me well know I can be a bit of a gadget man; I like playing with gadgets. After a few days of research, I ended up with a Sony ECM-AW4 Bluetooth microphone from eBay. In order to test it, I connected all the required equipment to my Surface Pro 4, created a dummy presentation on this very topic (my presentation and recording equipment for the Surface Pro 4), then presented using this equipment and recorded it with Camtasia. You can watch the recording on YouTube:

To summarise, I am using the following devices during the presentation:

1. Sony Bluetooth Microphone ECM-AW4 (http://www.sony.com.au/product/ecm-aw4)


Although this device is designed for digital cameras and camcorders, it also works with smartphones and computers. Some notable features:

  • Supports a range of up to 50 metres
  • Supports external lapel microphones
  • Supports headphones – for private communication between the camera operator (via the receiver mic) and the person in front of the camera (via the headphone connected to the Bluetooth mic). This communication is not recorded or passed to the recording device.
  • Comes with a wind screen to cover the microphone when shooting outside in windy conditions
  • Comes with an arm band which allows you to attach the mic to your arm.

 

2. Audio-Technica AT9903 lapel microphone (http://audio-technica.com.au/products/at9903/)


I’m connecting this mic to the Sony Bluetooth mic only because I already had it. I’ve also tried using the Bluetooth mic without this external lapel mic, and the quality is still pretty good.

3. Creative Sound Blaster Play!2 USB Sound Card (http://au.creative.com/p/sound-blaster/sound-blaster-play-2)


I had to purchase this USB sound card for my Surface Pro 4 because the 3.5mm audio jack on the Surface does not support microphones. I also have a Lenovo Yoga Pro 3 ultrabook, and unfortunately it has the same limitation. So no matter which computer I use for the presentation, I need a USB sound card. I bought this one because I have been using Creative Sound Blaster external sound cards for many years (right now I have 2 on my desk for the 2 NUCs that I’m using as my day-to-day PCs), and this one is very compact – just like a USB dongle.

4. Logitech R800 Presenter (http://business.logitech.com/en-us/product/professional-presenter-r800-business)


In my opinion, this is a must-have device for all your presentations. It is certainly very popular as I’ve seen many of my MVP friends using the very same device.

5. USB 3 Hub


Because the Surface Pro 4 only has one USB port and I need to connect both the presenter receiver and the Bluetooth microphone receiver to it, I have to use a USB hub. I got this Inateck USB 3 hub with a Gigabit Ethernet adapter a few years ago for my old Surface Pro 2. It’s good that it still works on the Surface Pro 4 running Windows 10 without having to install any drivers.

I know many of my MVP friends present at user group meetings, and I’m not sure if anyone has come up with other solutions for recording these in-person presentations. I’m pretty happy with the setup I came up with: overall it’s very compact, and I don’t need to carry too many additional devices with me. If you are looking to achieve something similar, I hope you find this post and the YouTube video useful.


On the other hand, the recording quality is not as good as the setup on my desktop (Intel NUC) that I use for webinars, but that can be a topic for another day.


Upcoming Webinar on MP Authoring Using VSAE

Written by Tao Yang

Last week, I conducted a workshop with Richard Benwell from Squared Up for a group of Squared Up’s customers at an internal company event. In the workshop, I led the students through building a sealed OpsMgr management pack with a simple agent task.

After the workshop, our plan was to make the content available to the general public, so Richard and I will be conducting 2 additional webinars next week to cover different time zones. We will repeat what we did in last week’s internal event and demonstrate how to build such an MP from scratch using VSAE and Visual Studio 2015. This is an absolute beginner’s guide to authoring management packs using VSAE. As you will see, we are writing the entire MP in Visual Studio without having to type any XML code!

If you are interested in this topic, please feel free to pick a time that suits you from the registration page below:

https://attendee.gotowebinar.com/rt/3979835542420770052

Lastly, the workshop guide, and a sample completed Visual Studio project can be found in this GitHub repo: https://github.com/tyconsulting/SquaredUp-VSAE-Workshop.

If you’d like to build the MP with us during the webinar, please note there are some prerequisites: you must complete the steps outlined in the corresponding section of the workshop guide before attending the webinar.

Looking forward to seeing you next week!

New PowerShell Module HybridWorkerToolkit

Written by Tao Yang

23/04/2016 Update: released version 1.0.3 to GitHub and the PowerShell Gallery. The new additions are documented in this blog post.

21/04/2016 Update: updated GitHub and the PowerShell Gallery with version 1.0.2, which contains a minor bug fix and an updated help file.

Introduction

Over the last few days, I have been working on a PowerShell module for Azure Automation Hybrid Workers. I named this module HybridWorkerToolkit.

This module is designed to run within either a PowerShell runbook or a PowerShell Workflow runbook on Azure Automation Hybrid Workers. It provides a few functions that can be called within the runbook. These functions help gather information about the Hybrid Worker and the runbook runtime environment, and there is also a function for logging structured events to the Hybrid Worker’s Windows event logs.

My good friend and fellow MVP Pete Zerger posted a method he developed to use Windows event logs and OMS as a centralised logging solution for Azure Automation runbooks executed on Hybrid Workers. Pete used the PowerShell cmdlet Write-EventLog to log runbook-related activities to the Windows event log, and these events are then picked up by OMS Log Analytics. This is a very innovative way of using Windows event logs and OMS. However, the event log entries written by Write-EventLog are not structured and lack basic information about your environment and the job runtime. A couple of weeks ago, another friend of mine, Mr. Kevin Holman from Microsoft, also published a PS script that he uses to write to Windows event logs with additional parameters.

So I combined Pete’s idea with Kevin’s script, as well as some code I’ve written in the past for Hybrid Workers, and developed this module.

Why do we want to use Windows event logs combined with OMS for logging runbook activities on Hybrid Workers? As Pete explained in his post, it provides a centralised solution where you can query and retrieve these activity logs for all your runbooks from a single location. Additionally, based on my experience (and also confirmed by a few other friends), when you use Write-Verbose or Write-Output in your runbook with verbose logging enabled, the runbook execution time can increase significantly, especially when loading a module with a lot of activities. I’ve seen a runbook that normally takes a minute or two to run with verbose logging turned off end up running for over half an hour after I enabled verbose logging. This is another reason I developed this module: it gives you an alternative option for logging verbose, error, process and output messages.

Functions

This module provides the following 3 functions:

  • Get-HybridWorkerConfiguration
  • Get-HybridWorkerJobRuntimeInfo
  • New-HybridWorkerRunbookLogEntry

Note: Although the job runtimes are different between PowerShell runbooks and PowerShell Workflow runbooks, I have spent a lot of time together with Pete making sure these functions can be used in exactly the same way in both PowerShell and PowerShell Workflow runbooks.

Get-HybridWorkerConfiguration

This function can be used to get the Hybrid Worker and Microsoft Monitoring Agent configuration. A hash table is returned containing the following configuration properties retrieved from the Hybrid Worker and the MMA agent:

  • Hybrid Worker Group name
  • Automation Account Id
  • Machine Id
  • Computer Name
  • MMA install root
  • PowerShell version
  • Hybrid Worker version
  • System-wide Proxy server address
  • MMA version
  • MMA Proxy URL
  • MMA Proxy user name
  • MMA connected OMS workspace Id
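
A minimal usage sketch is shown below. The hash table keys correspond to the properties listed above; rather than assuming their exact names, the example simply enumerates whatever keys are returned.

# Retrieve the Hybrid Worker / MMA configuration and write each property to the output stream.
$config = Get-HybridWorkerConfiguration
foreach ($key in $config.Keys)
{
    Write-Output ("{0} : {1}" -f $key, $config[$key])
}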

Get-HybridWorkerJobRuntimeInfo

This function retrieves the following information about the Azure Automation runbook and the job runtime. The values are returned in a hash table:

  • Runbook job ID
  • Sandbox Id
  • Process Id
  • Automation Asset End Point
  • PSModulePath environment variable
  • Current User name
  • Log Activity Trace
  • Current Working Directory
  • Runbook type
  • Runbook name
  • Azure Automation account name
  • Azure Resource Group name
  • Azure subscription Id
  • Time taken to start runbook in seconds

New-HybridWorkerRunbookLogEntry

This function can be used to log event log entries. By default, in addition to the event message itself, the following information is also logged as part of the event (placed under the <EventData> XML tag):

  • Azure Automation Account Name
  • Hybrid Worker Group Name
  • Azure Automation Account Resource Group Name
  • Azure Subscription Id
  • Azure Automation Job Id
  • Sandbox Id
  • Process Id
  • Current Working Directory ($PWD)
  • Runbook Type
  • Runbook Name
  • Time Taken To Start Running in Seconds

This function also has an optional Boolean parameter called ‘-LogHybridWorkerConfig’. When this parameter is set to $true, the event created by this function will also contain the following information about the Hybrid Worker and MMA:

  • Hybrid Worker Version
  • Microsoft Monitoring Agent Version
  • Microsoft Monitoring Agent Install Path
  • Microsoft Monitoring Agent Proxy URL
  • Hybrid Worker server System-wide Proxy server address
  • Microsoft OMS Workspace ID

Sample Runbooks

Sample PowerShell Runbook:
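
A rough sketch of what such a runbook might look like is shown below. The hash table key names and the -Message / -EventId parameter names are illustrative assumptions rather than the documented syntax; only -LogHybridWorkerConfig is taken from the section above.

# Sketch of a PowerShell runbook running on a Hybrid Worker.
# Key names and the -Message / -EventId parameter names are illustrative assumptions.
$config  = Get-HybridWorkerConfiguration
$runtime = Get-HybridWorkerJobRuntimeInfo

Write-Output "Hybrid Worker group: $($config['Hybrid Worker Group name'])"
Write-Output "Runbook job ID: $($runtime['Runbook job ID'])"

# Log a structured event to the local Windows event log, including the worker configuration.
New-HybridWorkerRunbookLogEntry -Message 'Runbook started.' -EventId 1000 -LogHybridWorkerConfig $true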

Sample PowerShell Workflow Runbook:
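
And a matching PowerShell Workflow sketch, with the function calls unchanged (again, parameter names other than -LogHybridWorkerConfig are assumptions):

# Sketch of the equivalent PowerShell Workflow runbook - the function calls are the same.
workflow Invoke-HybridWorkerLoggingDemo
{
    $config  = Get-HybridWorkerConfiguration
    $runtime = Get-HybridWorkerJobRuntimeInfo
    New-HybridWorkerRunbookLogEntry -Message 'Runbook started.' -EventId 1000 -LogHybridWorkerConfig $true
}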

As you can see, the way to call these functions in PowerShell and PowerShell Workflow runbooks is exactly the same.

Hybrid Worker Configuration output:


Hybrid Worker Job Runtime Info output:


Event generated (with basic information / without setting -LogHybridWorkerConfig to $true):


Event generated (when setting -LogHybridWorkerConfig to $true):


Consuming collected events in OMS

Once you have collected these events in OMS, you can use search queries to find them, and you can also create OMS alerts to notify you using your preferred methods.

Searching Events in OMS

For example, I can use this query to get all events logged by a particular runbook:

Type=Event “RunbookName: Test-HybridWorkerOutput-PSW”


or use this query to get all events for a particular job:

Type=Event “JobId: 73A3827D-73F8-4ECC-9DE1-B9340FB90744”


OMS Alerts

For example, if I want to create an OMS alert for any error events logged by New-HybridWorkerRunbookLogEntry, I can use a query like this one:

Type=Event Source=AzureAutomation?Job* EventLevelName=Error


Download / Deploy this module

I have published this module on GitHub as well as the PowerShell Gallery:

GitHub Repository: https://github.com/tyconsulting/HybridWorkerToolkit

PowerShell Gallery:  http://www.powershellgallery.com/packages/HybridWorkerToolkit/1.0.3
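
If you have PowerShellGet available (built in from PowerShell 5.0), a quick way to grab the module from the gallery:

# Find and install the module from the PowerShell Gallery on the Hybrid Worker.
Find-Module -Name HybridWorkerToolkit
Install-Module -Name HybridWorkerToolkit -Force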

Credit

I’d like to thank Pete and Kevin for the ideas in the first place, and Pete, Jakob Svendsen, Daniele Grandini and Kieran Jacobsen for the testing and feedback!

My Global Azure Boot Camp 2016 Presentation Recording

Written by Tao Yang

Last Saturday, I presented the topic of “What’s New in OMS” at the Global Azure Boot Camp 2016 Melbourne event. I recorded it using Camtasia and uploaded it to YouTube. You can watch it here:

You can also download the slide deck from HERE. When I was editing the recording, I noticed that I might not have placed the lapel microphone properly, as it was rubbing against my collar, and you will notice some noise in the video. Oh well, something to improve next time.

Upcoming Webinar: OpsLogix VMware Management Pack Overview

Written by Tao Yang

My next webinar with OpsLogix will take place on Wednesday 6th April 2016. In this webinar, I will demonstrate how to configure the OpsLogix VMware management pack, and provide an overview of this MP.

If you are interested in this MP, or are looking for a solution for monitoring your VMware infrastructure, please make sure you attend this webinar, because there are only limited places available.

You can find more details about this webinar from OpsLogix’s blog: http://www.opslogix.com/opslogix-vmware-mp-overview-with-tao-yang/

The registration is via Eventbrite:

https://www.eventbrite.com/e/opslogix-vmware-mp-overview-with-tao-yang-registration-23084853418

I’m looking forward to seeing you then!

Automating OpsMgr Part 20: Migrating Your OpsMgr Performance Collection Rules to OMS (Using OpsLogix VMware MP as an Example)

Written by Tao Yang

Introduction

This is the 20th installment of the Automating OpsMgr series.

OK, it has been 6 months since my last post in this blog series. I simply didn’t have time to continue, but I know this series is far from over. I am spending A LOT of time on OMS these days; some of you may have heard of (or have already read) our newly published book Inside Microsoft Operations Management Suite (TechNet, Amazon). I’m hoping you have all played with OMS and have maybe even started thinking about what workloads you can move to OMS.

As we all know, we can pretty much categorise SCOM data into the following 4 categories:

  • Performance Data
  • Event Data
  • Alert Data
  • State Data

Unlike SCOM, OMS does not use classes, so there are no classes, relationships or state data in OMS, but the other 3 types can easily be brought over to OMS. For SCOM alert data, you can simply enable the Alert solution after you have connected your SCOM management group to your OMS workspace; OMS also has its own alerting and remediation capability. For existing performance collection and event collection rules, we can easily recreate them using a different Write Action module to store the data in OMS. In this post, I will show you how to gather all performance collection rules from an existing OpsMgr management pack and re-create them for OMS (stored as PerfHourly data in OMS). But before diving in, let’s quickly go through the performance data in OMS.

OMS Performance Data

There are 2 types of performance data in OMS. The PerfHourly data was introduced with the Capacity Planning solution. As the name suggests, PerfHourly data is hourly aggregated performance data; no raw perf data is stored in OMS for it.

The other type of performance data is called Near Real Time (NRT) performance data. NRT perf data can be accessed using queries such as Type=Perf. Unlike PerfHourly data, NRT perf data can be collected as frequently as every 10 seconds, and the aggregation interval is every half hour. Both raw and aggregated NRT perf data are stored in OMS: raw data is stored for 14 days, and the OMS search queries only return aggregated data.

From the management pack point of view, it is a lot more complicated to write perf collection rules for NRT perf data. With NRT perf data, we must always author 2 rules for every counter we are going to collect: one for the raw data and one for the aggregated data. Secondly, for NRT perf data, when mapping the performance data, the object name must always follow the format “\\<Computer FQDN>\<Object Name>”. Lastly, the collection rule that collects the aggregated data must use a Condition Detection module called “Microsoft.IntelligencePacks.Performance.PerformanceAggregator”.

Since an OpsMgr rule can only have up to one (1) condition detection member module, converting existing OpsMgr perf collection rules that already have a condition detection member module into OMS NRT perf rules may not be that straightforward. In this case, we may need to create some additional module types, and things can get very complicated. It is certainly not something we can achieve with a generic script.

Therefore, in order to make the script work with any existing OpsMgr performance collection rules, I have chosen to store the perf data in OMS as PerfHourly data because it has far less red tape. Having said that, please keep in mind it is still possible to re-create OpsMgr perf collection rules as OMS NRT perf collection rules; it’s just not something we can develop as a generic automated solution.

If you want to learn more about performance data in OMS, or how to author OMS based collection rules in SCOM using VSAE, please refer to Chapter 5: Working with Performance Data and Chapter 11: Custom Management Pack Authoring of the Inside OMS book I mentioned in the beginning of this post.

PowerShell Script: Copy-PerfRulesToOMS.ps1

In the previous posts of this blog series, I simply placed the scripts / runbooks within the post itself. I have decided to use GitHub from now on, so the script Copy-PerfRulesToOMS.ps1 can be found in one of my public GitHub repositories: https://github.com/tyconsulting/OpsMgr-SDK-Scripts/blob/master/OMS%20Related%20Scripts/Copy-PerfRulesToOMS.ps1

This script reads the configuration of all performance collection rules in a particular OpsMgr management pack, and then recreates these rules with the same configuration but storing the performance data as PerfHourly data in your OMS workspace. The OMS perf collection rules created by this script are stored in a brand new unsealed MP with the name ‘<Original MP name>.OMS.Perf.Collection’ and display name ‘<Original MP display name> OMS PerfHourly Addon’.

This script has the following pre-requisites:

  • OpsMgrExtended PS module loaded on the machine where you are executing the script.
  • An account with OpsMgr administrative rights
  • OpsMgr management group must be connected to OMS

The script takes the following input parameters:

  • ManagementServer – Specify the name of an OpsMgr management server that you wish to connect to. This is a mandatory parameter.
  • Credential – Specify an alternative credential that has admin rights to the OpsMgr management group. This is an optional parameter.
  • ManagementPackName – Specify the source MP containing the perf collection rules that you want to copy to OMS. This is not the display name but the actual MP name. In the OpsMgr console, when you open the management pack properties, it is the ‘ID’ field. For example, since I’m going to use the OpsLogix VMware management pack in this post, the name of this MP is “OpsLogix.IMP.VMWare.Monitoring”.


 

Executing the script:

I have added many verbose messages to the script, so you can use the optional -Verbose switch when executing the script.
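
For example, an invocation against the OpsLogix VMware MP might look like the following (the management server name below is a placeholder for your own environment):

# Example invocation of Copy-PerfRulesToOMS.ps1 using the parameters described above.
$cred = Get-Credential
.\Copy-PerfRulesToOMS.ps1 -ManagementServer 'OpsMgrMS01' `
                          -Credential $cred `
                          -ManagementPackName 'OpsLogix.IMP.VMWare.Monitoring' `
                          -Verbose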


The script first connects to the management group, reads the source MP, then retrieves all performance collection rules from that MP. If the source MP contains any perf collection rules, the script creates a new unsealed MP and starts creating a corresponding OMS PerfHourly collection rule for each original OpsMgr perf collection rule. The OMS PerfHourly collection rules have the same properties and input parameters, as well as the same data source and condition detection member modules, as the original OpsMgr perf collection rules, but they are configured to use a different Write Action member module that sends the perf data to OMS.

Note:

  • The script detects OpsMgr perf collection rules in the source MP by examining the write action member modules. If any of the write action member modules is either ‘Microsoft.SystemCenter.CollectPerformanceData’ (used to write perf data to the OpsMgr operational DB) or ‘Microsoft.SystemCenter.DataWarehouse.PublishPerformanceData’ (used to write perf data to the OpsMgr DW DB), the script considers the rule to be a perf collection rule (a rough sketch of this detection logic is shown after this list).
  • When the source MP is unsealed, the script will fail under the following circumstances:
    • a perf collection rule in the source MP targets a class defined in the source MP
    • a perf collection rule in the source MP uses any data source or condition detection module types that are defined in the source MP
  • The script does not disable any existing perf collection rules in the source MP.
  • The script copies all attributes from the source perf collection rule to the new OMS PerfHourly rule, including the ‘Enabled’ property. So if the source perf collection rule is disabled by default, the newly created OMS PerfHourly rule will also be disabled by default.
  • Depending on the number of OpsMgr perf collection rules to be processed, this script can take some time to finish because it writes the new OMS PerfHourly rules to the destination MP one at a time. I purposely coded the script this way (rather than writing everything at once) because, by doing so, if a particular rule fails MP verification it will not impact the creation of the other rules.
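
For illustration, the detection logic described in the first note is roughly equivalent to the sketch below. It uses the OperationsManager PowerShell module and the SCOM SDK rather than the actual script, and the management server and MP names are placeholders, so treat it as an outline only.

# Rough outline of the detection idea: a rule is treated as a perf collection rule if any of
# its write action member modules is one of the two well-known performance write actions.
Import-Module OperationsManager
New-SCOMManagementGroupConnection -ComputerName 'OpsMgrMS01'

$perfWriteActions = @(
    'Microsoft.SystemCenter.CollectPerformanceData',
    'Microsoft.SystemCenter.DataWarehouse.PublishPerformanceData'
)

$mp = Get-SCOMManagementPack -Name 'OpsLogix.IMP.VMWare.Monitoring'
$perfRules = foreach ($rule in $mp.GetRules())
{
    $waNames = $rule.WriteActionCollection | ForEach-Object { $_.TypeID.GetElement().Name }
    if ($waNames | Where-Object { $perfWriteActions -contains $_ }) { $rule }
}
$perfRules | Select-Object Name, DisplayName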

When the execution is completed, you will see a new unsealed MP created in your management group:


If I export it to XML and open it in MPViewer, I can see all the newly created OMS PerfHourly collection rules:


At this stage, I don’t need to do anything else and all the performance data collected by the source MP (OpsLogix VMware MP in this example) will be stored not only in OpsMgr, but also in OMS.

Because the original OpsMgr perf collection rules and the corresponding OMS PerfHourly rules share the exact same data source modules with the same configuration, this does not add additional overhead to the OpsMgr agents, thanks to the OpsMgr Cook Down feature. However, please keep in mind that from now on, if you need to apply overrides to either rule, it’s best to apply the same override to both rules (so you don’t break Cook Down).

Although the PerfHourly data will not appear in your OMS workspace straightaway (due to the aggregation process), you should be able to see it within a few hours:


As you can see in the above screenshot, I now have all the VMware related counters defined in the OpsLogix VMware MP in my OMS workspace. The RootObjectName ‘VCENTER01’ is the vCenter server in my lab, and the ObjectDisplayName ‘exs01.corp.tyang.org’ is the VMware ESX host in my lab.

Summary

In this post, I have shared a script and demonstrated how to use it to migrate your existing OpsMgr performance collection rules to OMS, using the VMware related counters defined in the OpsLogix VMware MP as the example. We could easily write a very similar script for migrating existing event collection rules (maybe a blog topic for another day).

In the next post of this series, I will demonstrate how to use the OpsMgrExtended module, the SharePointSDK module, Azure Automation, Hybrid Workers and SharePoint Online to build a portal for scheduling OpsMgr maintenance mode – this is based on one of the demos from my Azure Automation session with Pete Zerger at SCU 2016 APAC & Australia.

Until next time, happy automating!

Inside the Microsoft Operations Management Suite Book Landed on Amazon

Written by Tao Yang

Some time last year, when my good friend and fellow CDM MVP Pete Zerger (@pzerger) asked myself and another good friend and fellow CDM MVP Stanislav Zhelyazkov (@StanZhelyazkov) to join him and Microsoft Principal PFE Anders Bengtsson (http://contoso.se/blog/) on this OMS book project, Stan and I said yes without hesitation. After a few months of hard work, we managed to release this book as a free ebook on the TechNet Gallery just after New Year. To date, this ebook has been downloaded over 6,300 times on TechNet – in just a little over 2 months!

After the book was released on the TechNet Gallery, Pete spent A LOT of time working with professional editors in order to publish this book on Amazon. A few weeks ago, this book was released on Amazon as a Kindle ebook.

Yesterday, Pete pinged me and told me the paperback has also been released on Amazon. Pete has already ordered a copy and sanity-checked it – making sure the print quality is up to standard.


If you’d like to order a printed copy, you can get it here: http://www.amazon.com/Inside-Microsoft-Operations-Management-Hands–ebook/dp/B01CH1L9X6

Pete told me once you have ordered the printed paperback copy, the Kindle ebook should be free of charge.

We have set the price as low as possible on Amazon. We originally wanted to make the Kindle ebook free, but Amazon does not allow free ebooks anymore. Our intention is not to make money from this book; we are hoping the revenue from Amazon can offset some of the cost Pete has paid for the professional editors.

I’d like to thank all my co-authors – Pete, Stan and Anders. It has been a very pleasant experience, and I am sure we have learned a lot from each other while authoring and reviewing each other’s chapters (at least I felt this way). It was fun reviewing Stan’s monster Azure Automation chapter (over 100 pages) multiple times, and working with Anders, whom I had only heard many good things about but never met in person.

I especially would like to thank Pete for all his effort; this would not have been done without him. Pete is like the glue that brought all of us together (like a project manager) and the one who spent the most effort (reviewing, dealing with editors, Amazon, etc.). While Pete was working with the editors and trying to get the book onto Amazon, I would go to Amazon and search “Pete Zerger” every day – to see if the book had been published. Interestingly, one day, I found this T-shirt on Amazon:


This is exactly how I feel as Pete has handled everything for us!

We have already started gathering information for the next update of this book. Due to the rapid development and release cycles from the OMS product team, our to-do list for v2 of the book is getting longer and longer! So stay tuned, we are likely to start working on v2 very soon.

OMS Log Analytics Forwarder (aka OMS Gateway) Preview

Written by Tao Yang

Over the last few days, I have had the privilege of reviewing and testing a new component of the OMS family called “OMS Log Analytics Forwarder”. Since this component has now been released for public preview, I’d like to dedicate this post to my experience with the OMS Log Analytics Forwarder so far.

Initial Configuration

First of all, you can download the bits and documentation from Microsoft Download site here: https://www.microsoft.com/en-us/download/details.aspx?id=51603&WT.mc

In my lab, I created a new VM running Windows Server 2016 TP4 Server Core. I first installed the OMS direct MMA agent, then the OMS Log Analytics Forwarder using the command:

msiexec /i "Microsoft OMS Log Analytics Forwarder.msi"


Once installed, you will see the following components on the VM:

    • “Microsoft OMS Log Analytics Forwarder” service


    • OMS Log Analytics Forwarder Log


    • Various performance counters:


Since I already have the OMS MMA agent installed and this gateway box is directly connected to one of my OMS workspaces, I have configured my OMS workspace to collect these OMS Log Analytics Forwarder counters, and I have also configured it to collect the OMS Log Analytics Forwarder Log in the Windows Event log section.


The Active Client Connection counter represents the number of TCP connections clients have established to the OMS Log Analytics Forwarder service. This is not a true representation of the number of active clients connected to the forwarder.

The Connected Client counter represents the number of clients connected (both Windows and Linux). However, if I stop the MMA agent on a machine connected to this forwarder, the counter value does not decrease straightaway. This is because in this release the counter only resets once a day, so you may need to wait for up to 24 hours before you see the value decrease.

Agent Configuration

Now that the OMS Log Analytics Forwarder component has been properly installed, I started reconfiguring some existing agents to go through this forwarder (gateway) machine.

On the Direct-Attached Windows agent, I simply added it under the Proxy Settings tab:


On the Linux agent, we can reconfigure the agent to use the proxy server using the following command (assuming the latest version of the OMS agent is installed):

sudo sh ./omsagent-1.1.0-28.universal.x86.sh --upgrade -p http://<proxy user>:<proxy password>@<proxy address>:<proxy port> -w <workspace id> -s <shared key>

This is documented here: https://github.com/Microsoft/OMS-Agent-for-Linux/blob/master/docs/OMS-Agent-for-Linux.md#configuring-the-agent-for-use-with-an-http-proxy-server

Additional Configuration for Linux Agents

During my testing, the Windows agents started communicating through the gateway straightaway. This can be verified by looking for Event ID 103 in the OMS Log Analytics Forwarder Log:


followed by Event ID 107, indicating the connection was successful:


However, with the Linux agent, I got an error event (Event ID 105) right after Event ID 103:


If you pay close attention to the Event ID 103 events for both the Windows and Linux machines, you may notice they are trying to connect to different servers. The Windows machine is trying to connect to xxxxxx.oms.opinsights.azure.com:443, whereas the Linux machine is trying to connect to scus-agentservice-prod-1.azure-automation.net.

To fix this issue for the Linux machine, go to the server where the OMS Log Analytics Forwarder component is installed, open “C:\Program Files\Microsoft OMS Log Analytics Forwarder\allowedlist_server.txt”, and add “scus-agentservice-prod-1.azure-automation.net” to this text file.
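
If you prefer to script it, something along these lines should work from an elevated PowerShell prompt (the service display name is taken from the earlier section, so verify it on your own gateway):

# Append the Linux agent service endpoint to the forwarder's allowed list, then restart the service.
$allowedList = 'C:\Program Files\Microsoft OMS Log Analytics Forwarder\allowedlist_server.txt'
Add-Content -Path $allowedList -Value 'scus-agentservice-prod-1.azure-automation.net'
Restart-Service -DisplayName 'Microsoft OMS Log Analytics Forwarder'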


After I saved the file and restarted the OMS Log Analytics Forwarder service, the Linux agent started communicating through the forwarder server. I verified this by examining an NRT perf counter for the Linux machine: the data started coming in after I made the change.


I am pretty sure Microsoft will fix this issue in the future (since we are still at the preview stage).

Additional Information

Nini Ikhena, a program manager from the OMS product team, has posted an excellent post on the OMS blog: https://blogs.technet.microsoft.com/msoms/2016/03/17/oms-log-analytics-forwarder/

My friend and fellow CDM MVP Daniele Grandini also posted an excellent in-depth post a few minutes ago (well, Daniele beat me this time): https://nocentdocent.wordpress.com/2016/03/17/msoms-gateway-server-preview/

I strongly recommend you go and read the 2 blog posts mentioned above, as they cover some aspects that I didn’t cover in this post.

Lastly, if you have any suggestions or issues, please feel free to provide feedback via UserVoice: https://feedback.azure.com/forums/267889-azure-operational-insights