Squared Up's Upcoming V3 Dashboard with Distributed Application Discovery Feature

Written by Tao Yang

Squared Up is set to release version 3 of their dashboard product next week at Ignite North America. One of the key features in the v3 release is called “Visual Application Discovery & Analysis” (aka VADA).

VADA utilises OpsMgr agent tasks and the netstat.exe command to discover the other TCP/IP endpoints the agents are communicating with. You can learn more about this feature from a short YouTube video Squared Up published recently: https://www.youtube.com/watch?v=DJK_3SritwY

I was given a trial copy of v3 for my lab. After I installed it and imported the required management pack, I was able to start discovering the endpoints communicating with my OpsMgr agents in a matter of a few clicks:

As we all know, OpsMgr natively lacks the capability to automatically discover Distributed Applications, so customers used to integrate 3rd-party products such as BlueStripe FactFinder with OpsMgr for this capability. However, now that BlueStripe has been acquired by Microsoft and is being fitted under the OMS banner as the Application Dependency Monitoring (ADM) solution, customers can no longer purchase it for OpsMgr. It is good to see that Squared Up has released something with similar capabilities, because at this very moment there seems to be a gap in the OpsMgr space.

Having said that, I don’t think the OMS ADM solution is too far away from the public preview release.

One of the biggest differences I can see (after spending a couple of hours with Squared Up v3) is that Squared Up VADA collects ad-hoc data at the time VADA is launched (which triggers the agent task), whereas OMS ADM has its own agents and collects data continuously.

Additionally, it looks like Squared Up VADA only supports Windows agents at this stage, whereas OMS ADM will also support Linux agents.

At this stage, since we don't know whether BlueStripe will be made available to OpsMgr customers in the future, and Squared Up is releasing this awesome addition to their already-popular OpsMgr web console / dashboard product, why not give it a try and see what you can produce? Since the data collection is ad hoc, it makes more sense to start the discovery in VADA during peak hours, when the system is fully loaded and the components are actively communicating with each other, so you don't miss any components.

Lastly, if you are going to attend Ignite NA next week and want to learn more about this new feature in Squared Up V3, please make sure you go find them at their booth.

Pushing PowerShell Modules From PowerShell Gallery to Your MyGet Feeds Directly

Written by Tao Yang

Recently I have started using a private MyGet feed and my cPowerShellPackageManagement DSC Resource module to manage PowerShell modules on my lab servers.

When new modules are released to the PowerShell Gallery (e.g. the Azure modules), I'd normally use Install-Module to install them on test machines, then publish the tested modules to my MyGet feed, and my servers would pick up the new modules.

Although I can use the Publish-Module cmdlet to upload a module located locally on my PC to the MyGet feed, this can be really time consuming when the module is big (e.g. some of the Azure modules). It only took me a few minutes to figure out how to push modules directly from the PowerShell Gallery (or any NuGet feed) to my MyGet feed.
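For reference, the manual approach looks something like this (a sketch; the feed name, URLs and API key are placeholders):

# Install and test the module from the PowerShell Gallery first
Install-Module -Name AzureRM.Automation

# Register the MyGet feed as a repository (placeholder URLs)
Register-PSRepository -Name 'MyGetFeed' `
  -SourceLocation 'https://www.myget.org/F/<feed name>/api/v2' `
  -PublishLocation 'https://www.myget.org/F/<feed name>/api/v2/package' `
  -InstallationPolicy Trusted

# Upload the locally installed module to the MyGet feed (slow for big modules)
Publish-Module -Name AzureRM.Automation -Repository 'MyGetFeed' -NuGetApiKey '<MyGet API key>'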

To configure it, under the MyGet feed, go to “Package Sources”, and click “Add package source…”

Then choose NuGet feed, and fill out the name and source:

Name: PowerShellGallery

Source: https://www.powershellgallery.com/api/v2/

Once added, I can search PowerShell Gallery and add packages directly to MyGet.

Scripting Azure Automation Module Imports Directly from MyGet or PowerShell Gallery

Written by Tao Yang

There are a few ways to add PowerShell modules to an Azure Automation account:

1. Via the Azure Portal, by uploading the module zip file from your local computer.

2. If the module is located in the PowerShell Gallery, you can push it to your Automation Account directly from the gallery.

3. Use PowerShell cmdlet New-AzureRmAutomationModule from the AzureRM.Automation module.

One of the limitations of the New-AzureRmAutomationModule cmdlet is that the module must be zipped and located somewhere online that Azure has access to; you specify the location using the -ContentLink parameter. In the past, in order to script the module deployment, even when the module was located in the PowerShell Gallery, I had to save the module to a place my Automation Account could access (such as Azure blob storage, or a release in a public GitHub repo).

Tonight, I was writing a script and I wanted to see if I could deploy modules to my Automation Account directly from a package repository of my choice; other than the PowerShell Gallery, I also have a private MyGet feed that I use for storing my PowerShell modules.

It turned out to be really easy; it only took me a few minutes to figure out how. I'll use a module I wrote in the past called “SendEmail” as an example. It is published both in the PowerShell Gallery and in my private MyGet feed.

Importing from PowerShell Gallery

The URL for this module in the PowerShell Gallery is: https://www.powershellgallery.com/packages/SendEmail/1.3

The -ContentLink URI that we need to pass to the New-AzureRmAutomationModule cmdlet would be:

https://www.powershellgallery.com/api/v2/package/SendEmail/1.3

As you can see, all you need to do is insert “api/v2/” into the URI (note it also becomes “package” rather than “packages”). The PowerShell command would be something like this:
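# A sketch; the resource group and Automation account names are placeholders
New-AzureRmAutomationModule -ResourceGroupName '<resource group>' `
  -AutomationAccountName '<automation account>' `
  -Name 'SendEmail' `
  -ContentLink 'https://www.powershellgallery.com/api/v2/package/SendEmail/1.3'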

Importing from a private MyGet feed

For a private MyGet feed, you can access it by embedding the API key into the URL:

The URL for my module would be: “https://www.myget.org/F/<Your MyGet feed name>/auth/<MyGet API Key>/api/v2/package/<Module Name>/<Module Version>”

For example, for my SendEmail module, the PowerShell command would be something like this:
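# A sketch; fill in the placeholders with your own feed name and API key,
# and use your own resource group / Automation account names
New-AzureRmAutomationModule -ResourceGroupName '<resource group>' `
  -AutomationAccountName '<automation account>' `
  -Name 'SendEmail' `
  -ContentLink 'https://www.myget.org/F/<Your MyGet feed name>/auth/<MyGet API Key>/api/v2/package/SendEmail/1.3'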

Importing from a public MyGet feed

If the module is located in a public MyGet feed, then the API key is not required. The URI for the module is very similar to the PowerShell Gallery one; you just need to embed “api/v2/” into the original URI:

https://www.myget.org/F/<MyGet Public Feed Name>/api/v2/package/<Module Name>/<Module Version>

The PowerShell script would be something like this:
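# A sketch; fill in the placeholders for your own feed and module
New-AzureRmAutomationModule -ResourceGroupName '<resource group>' `
  -AutomationAccountName '<automation account>' `
  -Name '<Module Name>' `
  -ContentLink 'https://www.myget.org/F/<MyGet Public Feed Name>/api/v2/package/<Module Name>/<Module Version>'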

PowerShell DSC Resource for Managing Repositories and Modules

Written by Tao Yang

Introduction

PowerShell version 5 has introduced a new feature that allows you to install packages (such as PowerShell modules) from NuGet repositories. If you have used cmdlets such as Find-Module, Install-Module or Uninstall-Module, then you have already taken advantage of this awesome feature.

By default, a Microsoft-owned public repository, the PowerShell Gallery, is configured on all computers running PowerShell version 5, and when you use Find-Module or Install-Module, you are pulling modules from the PowerShell Gallery.

Ever since I started using PowerShell v5, I’ve discovered some challenges managing modules for machines in my environment:

  • Lack of a fully automated way to push modules to a group of computers
  • Module version inconsistency between computers
  • Need of a private repository

Let me elaborate on each of the points listed above.

Lack of a fully automated way to push modules to a group of computers

Back in the old days (pre-WMF v5), I used to package PowerShell modules into MSIs and use ConfigMgr to deploy the MSIs to target computers. Although it's not too hard to package a module into an MSI, this method is really time consuming, not to mention that it also requires ConfigMgr. In PowerShell v5, I can write a script that utilises PowerShell remoting to push modules to remote machines, but this is still a manual process, and it may not be a viable solution for a large group of computers.

Module version inconsistency between computers

Over time, modules get updated and new modules get released from various sources. I often find module versions become inconsistent among computers, and there is no automated way to update computers when a new version is released.

Need of a private repository

The PowerShell Gallery is public; everything you publish to it will be available to the entire world. Organisations often write modules specifically for internal use and may not want to share them with the rest of the world.

Before I dive into the main topic, I'd like to discuss what I have done to implement private repositories.

Private Repositories

PowerShell PackageManagement uses NuGet repositories. I found the following solutions available:

MyGet is a SaaS (Software as a Service) based repository hosted in the cloud. Although you can create your own feeds, private feeds come with a price tag (free accounts allow you to create public feeds that everyone can access).

ProGet is an on-premises solution. To install it, you will need a web server (and optionally a SQL server) within your network. It comes with free, basic and enterprise editions; the feature comparison is located here: http://inedo.com/proget/pricing/features-by-edition. A third option is to set up your own free private NuGet repository on a web server within your network.

Since both MyGet and ProGet offer NFR (Not For Resale) licenses to Microsoft MVPs, I have tested both in my lab environment. They both work pretty well. I did not bother to set up the free private NuGet repository (the 3rd option).

These days, I find myself writing more and more PowerShell modules for different projects. During the development phase, I'd normally use a feed hosted on my ProGet server, because it is located in my lab and is therefore faster to publish to and download from. Once a module is ready, I'd normally publish it to MyGet for general consumption, because MyGet is a SaaS-based application and both my lab machines and Azure IaaS machines have no problem accessing it.

DSC Resource cPowerShellPackageManagement

In order to overcome the other two challenges that I'm facing (automatic module deployment and version inconsistency), I have created a DSC resource called cPowerShellPackageManagement.

According to the DSC naming standard, the first letter ‘c’ indicates it is a community resource, and as the rest of the name suggests, it is used to manage PowerShell packages.

This DSC resource module contains 2 resources:

  • cPowerShellRepository – used to register or unregister specific NuGet feeds on computers running PowerShell v5 and above.
  • cPowerShellModuleManagement – used to install / uninstall modules on computers running PowerShell v5 and above

cPowerShellRepository

Syntax:
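Below is a sketch of the resource schema, reconstructed from the parameters described in this section:

cPowerShellRepository [String] #ResourceName
{
    Name = [string]
    [Ensure = [string]{ Absent | Present }]
    [SourceLocation = [string]]
    [PublishLocation = [string]]
    [InstallationPolicy = [string]{ Trusted | Untrusted }]
    [DependsOn = [string[]]]
}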

To register a feed, you will need to specify some basic information such as PublishLocation and SourceLocation. You can also set Ensure = Absent to unregister the feed with the name specified in the Name parameter.

When not specified, the InstallationPolicy field defaults to “Untrusted”. If you'd like to set the repository as a trusted repository, set this value to “Trusted”.

Note: since repository registration is per-user (as opposed to a machine-wide setting) and DSC configurations are executed under the LocalSystem context, you will not be able to see the repository added by this resource if you run the Get-PSRepository cmdlet under your own user account. If you start PowerShell under LocalSystem using PsExec (run psexec /i /s /d powershell.exe), you will be able to see the repository:

cPowerShellModuleManagement

Syntax:
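Below is a sketch of the resource schema, reconstructed from the fields described in this section:

cPowerShellModuleManagement [String] #ResourceName
{
    PSModuleName = [string]
    RepositoryName = [string]
    [Ensure = [string]{ Absent | Present }]
    [PSModuleVersion = [string]]
    [MaintenanceStartHour = [UInt32]]
    [MaintenanceStartMinute = [UInt32]]
    [MaintenanceLengthMinute = [UInt32]]
    [DependsOn = [string[]]]
}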

  • PSModuleName – PowerShell module name. When this is set to ‘all’, all modules from the specified repository will be installed. So please do not use ‘all’ against PSGallery!!
  • RepositoryName – Name of the repository where module will be installed from. This can be a public repository such as PowerShell Gallery, or your privately owned repository (i.e. your ProGet or MyGet feeds). You can use the cPowerShellRepository resource to configure the repository.
  • PSModuleVersion – This is an optional field. When used, only the specified version will be installed (or uninstalled). If not specified, the latest version of the module from the repository will be used. This field does not impact other versions that are already installed on the computer (i.e. when installing the latest version, earlier versions will not be uninstalled).
  • MaintenanceStartHour, MaintenanceStartMinute and MaintenanceLengthMinute – Since the LCM runs the DSC configuration on a pre-configured interval, you may not want to install / uninstall modules during business hours. Therefore, you can set the maintenance start hour (0-23) and start minute (0-59) to specify the start time of the maintenance window, while MaintenanceLengthMinute represents the length of the maintenance window in minutes. These fields are optional; when specified, module installation and uninstallation will only take place when the LCM runs the configuration within the maintenance window. Note: please make sure the MaintenanceLengthMinute is greater than the value configured for the LCM ConfigurationModeFrequencyMins property.

Sample Configuration

Here are some sample configurations to demonstrate the usage of these DSC resources.

1. Register to an On-Prem ProGet feed and install all modules from the feed
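A configuration along these lines would do it (a sketch; the node name and feed URLs are placeholders for your environment):

Configuration ProGetFeedModules
{
    Import-DscResource -ModuleName cPowerShellPackageManagement

    Node 'SERVER01'
    {
        # Register the on-prem ProGet feed as a trusted repository
        cPowerShellRepository ProGetFeed
        {
            Name               = 'ProGetFeed'
            SourceLocation     = 'http://proget.corp.local/nuget/PowerShell/'
            PublishLocation    = 'http://proget.corp.local/nuget/PowerShell/'
            InstallationPolicy = 'Trusted'
            Ensure             = 'Present'
        }

        # 'all' means every module published to the feed will be installed
        cPowerShellModuleManagement AllModules
        {
            PSModuleName   = 'all'
            RepositoryName = 'ProGetFeed'
            DependsOn      = '[cPowerShellRepository]ProGetFeed'
        }
    }
}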

Using this configuration, I can manage modules at the repository feed level: if I add or update a module in the feed, the DSC LCM on each configured computer will automatically install the newly added (or updated) module the next time the configuration is refreshed.

2. Register to a feed hosted on MyGet, and install several specific modules
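Something like this (a sketch; the feed URL is a placeholder, and the maintenance window shown here is 10pm to midnight):

Configuration MyGetFeedModules
{
    Import-DscResource -ModuleName cPowerShellPackageManagement

    Node 'SERVER02'
    {
        cPowerShellRepository MyGetFeed
        {
            Name               = 'MyGetFeed'
            SourceLocation     = 'https://www.myget.org/F/<feed name>/api/v2'
            InstallationPolicy = 'Trusted'
            Ensure             = 'Present'
        }

        # The Gac module can be installed at any time
        cPowerShellModuleManagement GacModule
        {
            PSModuleName   = 'Gac'
            RepositoryName = 'MyGetFeed'
            DependsOn      = '[cPowerShellRepository]MyGetFeed'
        }

        # The SharePointSDK module may only be installed or updated within the maintenance window
        cPowerShellModuleManagement SharePointSDKModule
        {
            PSModuleName            = 'SharePointSDK'
            RepositoryName          = 'MyGetFeed'
            MaintenanceStartHour    = 22
            MaintenanceStartMinute  = 0
            MaintenanceLengthMinute = 120
            DependsOn               = '[cPowerShellRepository]MyGetFeed'
        }
    }
}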

In this example, I've specified that one module can be installed at any time (the Gac module), and that another module can only be installed (or updated) within a specific time window (the SharePointSDK module).

Download and Install Locations

This DSC resource has been published to the PowerShell Gallery: https://www.powershellgallery.com/packages/cPowerShellPackageManagement

The project is also located on Github: https://github.com/tyconsulting/PowerShellPackageManagementDSCResource

Special Thanks

I’d like to thank my MVP friends Jakob G Svendsen (@JakobGSvendsen), Pete Zerger (@pzerger), Daniele Grandini (@DanieleGrandini) and James Bannan (@JamesBannan) who provided feedback and helped me testing the modules.

PowerShell Module for OMS HTTP Data Collector API

Written by Tao Yang

Background

Earlier today, the OMS product group released the OMS HTTP Data Collector API to public preview. If you haven't read the announcement yet, you can read this blog post written by the PM of this feature, Evan Hissey, first.

As a Cloud and Datacenter Management MVP, I've had private preview access to this feature for a few months now, and I actually developed a solution using this API in a customer engagement with my friend and fellow CDM MVP Alex Verkinderen (@AlexVerkinderen) just over a month ago. I was really impressed with the potential opportunities this feature brings, and I've been spamming Evan's inbox asking for the release date so I can blog about it and present it at user group meetups.

Since most of us don't like having to deal with HTTP headers, bodies, authorizations and the other overhead we have to put into our code in order to use this API, I have developed a PowerShell module to help us easily utilise it.

Introducing OMSDataInjection PowerShell Module

I developed this module about 2 months ago and have been waiting for the API to become public before releasing it. Now the wait is over and I can finally release it.

This module contains only one public function: New-OMSDataInjection. This function is well documented in a proper help file; you can access it via Get-Help New-OMSDataInjection -Full. I have added 2 examples to the help file too:

————————– EXAMPLE 1 ————————–

PS C:\>$PrimaryKey = Read-Host -Prompt 'Enter the primary key'
$ObjProperties = @{
Computer = $env:COMPUTERNAME
Username = $env:USERNAME
Message  = 'This is a test message injected by the OMSDataInjection module. Input data type: PSObject'
LogTime  = [Datetime]::UtcNow
}
$OMSDataObject = New-Object -TypeName PSObject -Property $ObjProperties
$InjectData = New-OMSDataInjection -OMSWorkSpaceId '8eb61d08-133c-401a-a45b-0e611194779f' -PrimaryKey $PrimaryKey -LogType 'OMSTestData' -UTCTimeStampField 'LogTime' -OMSDataObject $OMSDataObject

Injecting data using a PS object by specifying the OMS workspace Id and primary key
————————– EXAMPLE 2 ————————–

PS C:\>$OMSConnection = Get-AutomationConnection 'OMSConnection'
$OMSDataJSON = @"
{
"Username":  "administrator",
"Message":  "This is a test message injected by the OMSDataInjection module. Input data type: JSON",
"LogTime":  "Tuesday, 28 June 2016 9:08:15 PM",
"Computer":  "SERVER01"
}
"@
$InjectData = New-OMSDataInjection -OMSConnection $OMSConnection -LogType 'OMSTestData' -UTCTimeStampField 'LogTime' -OMSDataJSON $OMSDataJSON

Injecting data using a JSON formatted string by specifying the OMS workspace Azure Automation / SMA connection object (to be used in a runbook)

This PS module comes with the following features:

01. A Connection object for using this module in Azure Automation and SMA.

Once imported into your Azure Automation account (or SMA for the ‘old skool’ folks), you will be able to create connection objects that contain your OMS workspace Id, primary key and (optionally) secondary key:

And as shown in Example 2 listed above, in your runbook, you can retrieve this connection object and use it when calling the New-OMSDataInjection function.

02. Fall back to the secondary key if the primary key fails

When the optional secondary key is specified, if the web request using the primary key fails, the module will fall back to the secondary key and retry the web request. This is to ensure your scripts / automation runbooks will not be interrupted while you are following the best practice of cycling through your keys.

03. Supports two types of input: JSON and PSObject

As you can see from Evan's post, this API expects a JSON object as the HTTP body, containing the data to be injected into OMS. When I started testing this API a few months ago, my good friend and fellow MVP Stanislav Zhelyazkov (@StanZhelyazkov) suggested that instead of writing plain JSON, it's better to put everything into a PSObject and then convert it to JSON in PowerShell, so we don't mess up the format and type of each field. I think it was a good idea, so I have coded the module to take either JSON, or a PSObject that contains the data to be injected into OMS.

Sample Script and Runbook

I’ve created a sample script and a runbook to help you get started. They are also included in the Github repository for this module (link at the bottom of this article):

Sample Script: Test-OMSDataInjection.ps1

Sample Runbook: Test-OMSDataInjectionRunbook

Exploring Data in OMS

Once the data is injected into OMS, if you are using a new data type, it can take a while (a few hours) for all the fields to become available in OMS.

For example, here is the data injected by the sample script and the Azure Automation runbook (executed in Azure):

All the fields that you have defined are stored as custom fields in your OMS workspace:

Please keep in mind that since the Custom Fields feature is still in the preview phase, there is a limit of 100 custom fields per workspace at this stage (https://azure.microsoft.com/en-us/documentation/articles/log-analytics-custom-fields/), so please be mindful of this limitation when you are building your custom solutions using the HTTP Data Collector API.

Where to Download This Module?

I have published this module in the PowerShell Gallery: https://www.powershellgallery.com/packages/OMSDataInjection. If you are using PowerShell version 5 or above, you can install it directly from there: Install-Module -Name OMSDataInjection -Repository PSGallery

You can also download it from its GitHub repo: https://github.com/tyconsulting/OMSDataInjection-PSModule/releases

Summary

In the past, we've had the OMS Custom View Designer to help us visualise the data we already have in OMS Log Analytics; what we were missing was a native way to inject data into OMS. Now, with the release of this API, the gap has been filled. As Evan mentioned in his blog post, by coupling this API with the OMS View Designer (and even throwing Power BI into the mix), you can develop some really fancy solutions.

On the 21st of September (3 weeks from now), I will be presenting at the Melbourne Microsoft Cloud and Datacenter Meetup (https://www.meetup.com/Melbourne-Microsoft-Cloud-and-Datacenter-Meetup/events/233154212/); my topic is Developing Your OWN Custom OMS Solutions. I will be doing live demos, creating solutions using the HTTP Data Collector API as well as the Custom View Designer. If you are from Melbourne, I encourage you to attend. I am also planning to record this session and publish it on YouTube later.

Lastly, if you have any suggestions for this PowerShell module, please feel free to contact me!

OMS Network Performance Monitor Power BI Report

Written by Tao Yang

I've been playing with the OMS Network Performance Monitor (NPM) today. Earlier today, I released an OpsMgr MP that contains tasks to configure MMA agents for NPM. You can find the post here: http://blog.tyang.org/2016/08/22/opsmgr-agent-task-to-configure-oms-network-performance-monitor-agents/

The other thing I wanted to do was create a Power BI dashboard for the data collected by the OMS NPM solution. The data collected by NPM can be retrieved using the OMS search query “Type=NetworkMonitoring”.

To begin my experiment, I created a Power BI schedule in OMS using the above-mentioned query and waited a while for the data to populate in Power BI.

I then used 2 custom visuals from the Power BI Custom Visual Gallery:

01. Force-Directed Graph

02. Timeline

Using these visuals, I created an interactive report that displays the network topology based on the NPM data:

In this report, I’m using a built-in slicer (top left) visual to filter source computers and the timeline visual (bottom) to filter time windows. The main section (top right) consists of a Force-Directed Graph visual, which is used to draw the network topology diagram.

I can choose one or more source computers from the slicer, and choose a time window from the timeline visual located at the bottom.

On the network topology (the Force-Directed Graph visual), the arrows represent the direction of the traffic, the thickness represents the median network latency (thicker = higher latency), and the link colour represents the network loss health state determined by the OMS NPM solution (LossHealthState).

I will now explain the steps I’ve taken to create this Power BI report:

01. Create a blank report based on the OMS NPM dataset (that you’ve created from the OMS portal earlier).

02. Create a Page Level Filter based on the SubType Field, and only select “NetworkPath”.

03. Add the Slicer visual to the top left and configure it as shown below:

04. Add the Force-Directed Graph (ForceGraph) to the main section of the report (top right), and configure it as shown below:

Fields tab:

  • Source – SourceNetworkNodeInterface
  • Target – DestinationNetworkNodeInterface
  • Weight – Average of MedianLatency
  • Link Type – LossHealthState

Format tab:

  • Data labels – On
  • Links
    • Arrow – On
    • Label – On
    • Color – By Link Type
    • Thickness – On
  • Nodes
    • Max name length – 15
  • Size – change to a value that suits you best

05. Add a timeline visual to the bottom of the report, then drag the TimeGenerated Field from the dataset to the Time field:

As you can see, as long as you understand what each field in the OMS data type means and use appropriate visuals, it's really easy to create cool Power BI reports. This is all I have to share today. Until next time, have fun in OMS and Power BI!

OpsMgr Agent Task to Configure OMS Network Performance Monitor Agents

Written by Tao Yang

OMS Network Performance Monitor (NPM) was made available in public preview a few weeks ago. Unlike other OMS solutions, NPM requires additional configuration on each agent that you wish to enrol in this solution. The detailed steps are documented in the solution documentation.

The product team has provided a PowerShell script to configure the MMA agents locally (the link is included in the documentation). To make the configuration process easier for OpsMgr users, I have created a management pack that contains several agent tasks:

  • Enable OMS Network Performance Monitor
  • Disable OMS Network Performance Monitor
  • Get OMS Network Performance Monitor Agent Configuration

Note: Since this is an OpsMgr management pack, you can only use these tasks against agents that are enrolled to OMS via OpsMgr, or direct OMS agents that are also reporting to your OpsMgr management group.

These tasks target the Health Service class. If you are also using my OpsMgr 2012 Self Maintenance MP, you will have a “Health Service” state view, and you will be able to access these tasks from the task pane of this view:

I can use the “Get OMS Network Performance Monitor Agent Configuration” task to check if an agent has been configured for NPM.

For example, before an agent is configured, the task output shows it is not configured:

Then I can use the “Enable OMS Network Performance Monitor” task to enable NPM on this agent:

Once enabled, if I run the “Get OMS Network Performance Monitor Agent Configuration” task again, the task output will show it's enabled and also display the configured port number:

and shortly after, you will be able to see the newly configured node in the OMS NPM solution:

If you want to remove the configuration, simply run the “Disable OMS Network Performance Monitor” task:

You can download the sealed version of this MP HERE. I’ve also pushed the VSAE project for this MP to GitHub.

Visualising OMS Agent Heartbeat Data in Power BI

Written by Tao Yang

Introduction

A few days ago, the OMS product team announced the OMS Agent Heartbeat capability. If you haven't read about it, you can find the post here: https://blogs.technet.microsoft.com/msoms/2016/08/16/view-your-agent-health-in-oms-2/. In that post, Nini, the PM for the agent heartbeat feature, explained how to create custom views within the OMS portal to visualise the agent heartbeat data. Funnily enough, I started working on something similar around the same time, but instead of creating visual presentations within OMS, I did it in Power BI. I managed to create a couple of Power BI reports for the OMS agent heartbeat data, using both native and custom Power BI visuals:

01. Agent Locations Map Report:

Since the agent heartbeat data contains the geo location of the agent IP address, I've created this report to plot the physical location of each agent on an interactive map.

02. Agent Statistics Report:

This report consists of the following parts:

  • A heat map based on the country where the agent is located (Agent Location by Country). The colour highlighting each country changes based on the agent count.
  • An interactive “fish tank” visual. In this visual, each fish represents an OMS agent, and the size of the fish represents the number of heartbeats generated by that agent. So the older the agent (fish) is, the more heartbeats it has sent to the OMS workspace (fish tank), and the bigger the fish becomes.
  • A brick chart (containing 100 tiles) that shows the percentage of agents by OS type (Linux vs Windows).
  • A tornado chart that shows agent distribution by country, with agent OS types separated by colour.
  • A pie chart that shows agent distribution by management group (SCOM attached vs direct attached vs Linux agents).
  • An agent version donut chart that separates agent counts by agent version number (both Windows agents and Linux agents).

The fish visual is called “Enlighten Aquarium”, and it is an animated visual.

In this blog post, I will walk through the steps of creating these reports.

Instructions

Pre-Requisites

Before creating these reports, you need to take care of the following:

01. Power BI account

You will need to have a Power BI account (either a free or pro account) so OMS can inject data into your Power BI workspace.

02. Power BI preview feature is enabled in OMS

At the time of writing, the Power BI integration feature in OMS is still in public preview. Therefore, if you haven't done so already, you will need to manually enable this feature first. To do so, go to the “Preview Features” tab on the OMS Settings page and enable “Power BI Integration”:

03. Connect your OMS workspace to your Power BI workspace.

Once the Power BI Integration feature is enabled, you need to connect OMS to Power BI. This is achieved by providing the Power BI account credential in the “Accounts” tab of the OMS settings page:

04. Set up a Power BI injection schedule

We need to inject the OMS agent heartbeat data into Power BI. We can just use a simple query, “Type=Heartbeat”, and set the schedule to run every 15 minutes:

05. Wait 15 – 30 minutes

You will have to wait a while before you can see the data in Power BI.

06. Download the custom Power BI visuals

Since these reports use a number of custom Power BI visuals, you will need to download them to your local computer first, and then import them when you start creating the reports. To download custom visuals, go to the Power BI Visuals Gallery (https://app.powerbi.com/visuals/) and download the following visuals:

  • Brick Chart
  • Hierarchy Slicer
  • Donut Chart GMO
  • Timeline
  • Tornado Chart
  • Enlighten Aquarium

Create Reports

To start creating the report, first log on to Power BI using the account you used to make the connection in OMS, and then find the dataset you specified. In this post, I've created a dataset called “OMS – Agent Heartbeat”. By clicking on the dataset, you will be presented with an empty report:

You will then need to import the custom visuals by clicking on the “…” icon under Visualizations and selecting “Import a custom visual”.

You can only import one at a time, so please repeat this process and import all the custom visuals I have listed above.

Creating Agent Location Report

For the Agent location report, we will add 3 visuals:

  • Hierarchy Slicer – for filtering IP addresses and computer names
  • Map – for pinpointing the agent location
  • Timeline – for filtering the time windows

Agent Filter (Hierarchy Slicer)

Add a Hierarchy Slicer and place it on the left side of the page, then drag the ComputerIP and Computer fields to the “Fields” section; please make sure you place ComputerIP on top of Computer:

It's also a good idea to turn off single selection for the hierarchy slicer, so you can select multiple items:

Agent Location Map

Add the Map visual to the report and configure it as listed below:

  • Location – RemoteIPCountry
  • Legend – Computer
  • Latitude – Average of RemoteIPLatitude
  • Longitude – Average of RemoteIPLongitude
  • Size – Count of Computer (Distinct)

Note: since the latitude and longitude shouldn't change between records for the same computer (as long as the IP doesn't change), it doesn't matter whether you use average, maximum or minimum; the result of each calculation should be the same.

Time Slicer (Timeline)

Add a timeline slicer to the bottom of the report page, configure it to use the TimeGenerated field:

To save some space on the report page, you may also turn off the labels for the timeline slicer:

Lastly, add a text box at the top of the report page and give the report a title. If you want to, you can also assign each visual a title by highlighting the visual, clicking on the Format icon, and updating the title field:

To use this report, you can make your selections in the hierarchy slicer and the timeline slicer. The map will be automatically updated.

Create the Agent Statistics Report

For the second report, you can create a new page in the existing report, or create a brand new report based on the same dataset. We will use the following visuals in this report:

  • Filled Map
  • Aquarium
  • Brick Chart
  • Tornado Chart
  • Pie Chart
  • DonutChartGMO

Agent Location By Country (Filled Map)

Configure the Filled Map visual as shown below:

OMS Agent By Heartbeat Count (Aquarium)

Configure the Aquarium visual as shown below:

Agent OS Type (Brick Chart)

Configure the Brick Chart visual as shown below:

Agent Distribution By Country (Tornado Chart)

Configure the Tornado Chart as shown below:

Agent Distribution By Management Groups (Pie Chart)

Configure the Pie Chart as shown below:

Agent Version (DonutChartGMO)

Configure the DonutChartGMO visual as shown below:

Then change the Primary Measure under Legend to “Value”, “Percentage” or “Both”, whichever you prefer:

Most of the visuals used by this report are interactive. For example, if I click on a section in the Agent Version DonutChartGMO visual, the other visuals will automatically update to reflect the selection I made.

Once you’ve configured all the visuals, please make sure you save your report.

Conclusion

There are many things you can do with the Power BI reports you've just created: you can share them with other people, pin individual visuals or the entire report to a dashboard, or create an IFrame link and embed the report in 3rd-party systems that support IFrames (e.g. SharePoint sites). We are not going to get into the details of how to consume these reports today.

Please note that during my testing, the RemoteIPLatitude and RemoteIPLongitude data from the heartbeat events were not very accurate for the computers in my lab. I'm based in Melbourne, Australia, but the map coordinates pointed to a location in Sydney, which is over 700km away from me.

Please also be aware that for SCOM attached agents, each time the agent sends a heartbeat, it will send 2 heartbeats via different channels. This behaviour is by design; my good friend and fellow CDM MVP Stanislav Zhelyazkov (@StanZhelyazkov) has explained this in his blog post: https://cloudadministrator.wordpress.com/2016/08/17/double-heartbeat-events-in-oms-log-analytics/

This is all I have to share for today. Until next time, have fun with OMS and Power BI!

OMS Near Real Time Performance Data Aggregation Removed

Written by Tao Yang

A few weeks ago, the OMS product team made a very nice change to the Near Real Time (NRT) performance data: the data aggregation has been removed! I've been waiting for the official announcement before posting this on my blog. Now Leyla from the OMS team has finally broken the silence and made this public: Raw searchable performance metrics in OMS.

I'm really excited about this update. Before this change, we were only able to search 30-minute aggregated data via Log Search. This behaviour imposed some limitations:

  • It's difficult to calculate average values based on other intervals (e.g. 5-minute or 10-minute)
  • Performance-based alert rules could be really outdated, because the search result is based on the aggregated value over the last 30 minutes. In a critical environment, this can be a bit too late!

By removing the data aggregation and making the raw data searchable (with a longer retention), the limitations listed above are resolved.

Another advantage this update brings is that it greatly simplifies the process of authoring your own OpsMgr performance collection rules for OMS NRT perf data. Before this change, the NRT perf rules came in pairs: each perf counter you wanted to collect needed 2 rules (with identical data source module configurations), one collecting the raw data and the other collecting the 30-minute aggregated data. This is discussed in great detail in Chapter 11 of our Inside Microsoft Operations Management Suite book (TechNet, Amazon). Now we no longer need to write 2 rules for each perf counter; we only need one rule, for the raw perf data.

The sample OpsMgr management pack below collects the “Log Cache Hit Ratio” counter for SQL databases. It is targeting the Microsoft.SQLServer.Database class, which is the seed class for pre-SQL 2014 databases (2005, 2008 and 2012).
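The rule would look something like this (a sketch: the MP aliases and the exact data source configuration are illustrative, and the cloud write action comes from the Microsoft.IntelligencePacks.Types MP):

<!-- Aliases assumed: Perf = System.Performance.Library, SQL = the SQL Server library,
     Windows = Microsoft.Windows.Library, IPTypes = Microsoft.IntelligencePacks.Types -->
<Rule ID="Demo.OMS.NRTPerf.SQLDBLogCacheHitRatio.Collection" Enabled="true"
      Target="SQL!Microsoft.SQLServer.Database" ConfirmDelivery="false"
      Remotable="true" Priority="Normal" DiscardLevel="100">
  <Category>PerformanceCollection</Category>
  <DataSources>
    <DataSource ID="PerfDS" TypeID="Perf!System.Performance.DataProvider">
      <ComputerName>$Target/Host/Host/Property[Type="Windows!Microsoft.Windows.Computer"]/NetworkName$</ComputerName>
      <CounterName>Log Cache Hit Ratio</CounterName>
      <!-- the perf object name is instance specific, e.g. "MSSQL$INSTANCE01:Databases" -->
      <ObjectName>SQLServer:Databases</ObjectName>
      <InstanceName>$Target/Property[Type="SQL!Microsoft.SQLServer.Database"]/DatabaseName$</InstanceName>
      <AllInstances>false</AllInstances>
      <Frequency>300</Frequency>
    </DataSource>
  </DataSources>
  <WriteActions>
    <!-- one write action only: send the raw perf data to OMS -->
    <WriteAction ID="WriteToCloud" TypeID="IPTypes!Microsoft.SystemCenter.CollectCloudPerformanceData" />
  </WriteActions>
</Rule>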

As you can see from the above sample MP, the rule that collects aggregated data is no longer required.

So if you have written rules collecting NRT perf data for OMS in the past, you may want to revisit them and remove the aggregated data collection rules.

ConfigMgr OMS Connector

Written by Tao Yang

Earlier this week, Microsoft released a new feature in System Center Configuration Manager 1606 called the OMS Connector:

As we all know, OMS supports computer groups. We can either manually create computer groups in OMS using OMS search queries, or import AD and WSUS groups. With the ConfigMgr OMS Connector, we can now import ConfigMgr device collections into OMS as computer groups.

Instead of using the OMS workspace ID and keys to access OMS, the ConfigMgr OMS connector requires an Azure AD application and service principal. My friend and fellow Cloud and Data Center Management MVP Steve Beaumont blogged about his setup experience a few days ago. You can read Steve's post here: http://www.poweronplatforms.com/configmgr-1606-oms-connector/. As you can see from Steve's post, provisioning the Azure AD application for the connector can be pretty complex if you are doing it manually; it involves many steps and you have to use both the old Azure portal (https://manage.windowsazure.com) and the new Azure portal (https://portal.azure.com).

To simplify the process, I have created a PowerShell script to create the Azure AD application for the ConfigMgr OMS Connector. The script is located in my GitHub repository: https://github.com/tyconsulting/BlogPosts/tree/master/OMS

In order to run this script, you will need the following:

  • The latest versions of the AzureRM.Profile and AzureRM.Resources PowerShell modules
  • An Azure subscription admin account from the Azure Active Directory that your Azure Subscription is associated to (the UPN must match the AAD directory name)

When you launch the script, you will first be prompted to log in to Azure:

Once you have logged in, you will be prompted to select the Azure Subscription and then specify a display name for the Azure AD application. If you don’t assign a name, the script will try to create the Azure AD application under the name “ConfigMgr-OMS-Connector”:

This script creates the AAD application and assigns it the Contributor role on your subscription.
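In essence, it does something like this (a simplified sketch; the real script adds prompts, validation and error handling):

# Create the AAD application; the display name below is the script's default
$appName = 'ConfigMgr-OMS-Connector'
$secret  = [guid]::NewGuid().ToString()   # used as the client secret key
$app = New-AzureRmADApplication -DisplayName $appName -HomePage "https://$appName" `
       -IdentifierUris "https://$appName" -Password $secret

# Create the service principal for the application
$sp = New-AzureRmADServicePrincipal -ApplicationId $app.ApplicationId
Start-Sleep -Seconds 15   # give the service principal time to propagate

# Assign the Contributor role on the subscription
New-AzureRmRoleAssignment -RoleDefinitionName 'Contributor' -ServicePrincipalName $app.ApplicationId

# The three pieces of information required by the OMS connector:
(Get-AzureRmContext).Tenant.TenantId   # Tenant
$app.ApplicationId                     # Client ID
$secret                                # Client Secret Key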

At the end of the script, you will see the 3 pieces of information you need to create the OMS connector:

  • Tenant
  • Client ID
  • Client Secret Key

You can simply copy and paste these to the OMS connector configuration.

Once you have configured the connector in ConfigMgr and enabled SCCM as a group source, you will soon start seeing the collection memberships being populated in OMS. You can search them in OMS using a search query such as “Type=ComputerGroup GroupSource=SCCM”:

Based on what I see, the connector runs every 6 hours and any membership additions or deletions will be updated when the connector runs.

For example, if I search for a particular collection over the last 6 hours, I can see it has 9 members:

During my testing, I deleted 2 computers from this collection a few days ago. If I specify a custom range targeting a 6-hour window from a few days ago, I can see the collection had 11 members back then:

This could be useful when you need to track down whether certain computers were placed in a collection in the past.

This is all I have to share today. Until next time, enjoy OMS!