Author Archives: Tao Yang

OMS Network Performance Monitor Power BI Report

Written by Tao Yang

image

I’ve been playing with the OMS Network Performance Monitor (NPM) today. Earlier today, I released an OpsMgr MP that contains tasks to configure MMA agents for NPM. You can find the post here: http://blog.tyang.org/2016/08/22/opsmgr-agent-task-to-configure-oms-network-performance-monitor-agents/

The other thing I wanted to do is to create a Power BI dashboard for the data collected by the OMS NPM solution. The data collected by NPM can be retrieved using the OMS search query “Type=NetworkMonitoring”.
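
If you want to eyeball this data outside the OMS portal, the same legacy search query can also be run from PowerShell. Below is a minimal sketch, assuming the AzureRM.OperationalInsights module is installed; the resource group and workspace names are placeholders:

  # Run the NPM search query against the OMS (Log Analytics) workspace
  Login-AzureRmAccount
  Get-AzureRmOperationalInsightsSearchResults -ResourceGroupName 'oms-rg' -WorkspaceName 'my-oms-workspace' `
      -Query 'Type=NetworkMonitoring' -Top 50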

To begin my experiment, I created a Power BI schedule in OMS using the above-mentioned query and waited a while for the data to populate in Power BI.

image

I then used 2 custom visuals from the Power BI Custom Visual Gallery:

01. Force-Directed Graph

image

02. Timeline

image

and I created an interactive report that displays the network topology based on the NPM data:

image

In this report, I’m using a built-in slicer (top left) visual to filter source computers and the timeline visual (bottom) to filter time windows. The main section (top right) consists of a Force-Directed Graph visual, which is used to draw the network topology diagram.

I can choose one or more source computers from the slicer, and choose a time window from the timeline visual located at the bottom.

On the network topology (Force-Directed Graph visual), the arrow represents the direction of the traffic, thickness represents the median network latency (thicker = higher latency), and the link colour represents the network loss health state determined by the OMS NPM solution (LossHealthState).

I will now explain the steps I’ve taken to create this Power BI report:

01. Create a blank report based on the OMS NPM dataset (that you’ve created from the OMS portal earlier).

02. Create a Page Level Filter based on the SubType Field, and only select “NetworkPath”.

image

03. Add the Slicer visual to the top left and configure it as shown below:

image

image

04. Add the Force-Directed Graph (ForceGraph) to the main section of the report (top right), and configure it as shown below:

Fields tab:

  • Source – SourceNetworkNodeInterface
  • Target – DestinationNetworkNodeInterface
  • Weight – Average of MedianLatency
  • Link Type – LossHealthState

image

Format tab:

  • Data labels – On
  • Links
    • Arrow – On
    • Label – On
    • Color – By Link Type
    • Thickness – On
  • Nodes
    • Max name length – 15
  • Size – change to a value that suits you the best

image

05. Add a timeline visual to the bottom of the report, then drag the TimeGenerated Field from the dataset to the Time field:

image

As you can see, once you understand what each field means in the OMS data type you are interested in, it’s really easy to create cool Power BI reports with the appropriate visuals. This is all I have to share today. Until next time, have fun with OMS and Power BI!

OpsMgr Agent Task to Configure OMS Network Performance Monitor Agents

Written by Tao Yang

OMS Network Performance Monitor (NPM) was made available in public preview a few weeks ago. Unlike other OMS solutions, NPM requires additional configuration on each agent that you wish to enrol in this solution. The detailed steps are documented in the solution documentation.

The product team has provided a PowerShell script to configure the MMA agents locally (a link is included in the documentation). To make the configuration process easier for OpsMgr users, I have created a management pack that contains several agent tasks:

  • Enable OMS Network Performance Monitor
  • Disable OMS Network Performance Monitor
  • Get OMS Network Performance Monitor Agent Configuration

image

Note: Since this is an OpsMgr management pack, you can only use these tasks against agents that are enrolled to OMS via OpsMgr, or direct OMS agents that are also reporting to your OpsMgr management group.

These tasks target the Health Service class. If you are also using my OpsMgr 2012 Self Maintenance MP, you will have a “Health Service” state view, and you will be able to access these tasks from the task pane of this view:

image
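
If you prefer PowerShell over the console, the same tasks can also be triggered from the Operations Manager Shell. A minimal sketch is shown below; the agent FQDN is just an example:

  # Find the Health Service instance of the agent you want to configure
  $healthService = Get-SCOMClass -Name 'Microsoft.SystemCenter.HealthService' |
      Get-SCOMClassInstance | Where-Object { $_.DisplayName -eq 'AGENT01.corp.contoso.com' }

  # Run one of the tasks from this MP against that instance
  $task = Get-SCOMTask -DisplayName 'Enable OMS Network Performance Monitor'
  Start-SCOMTask -Task $task -Instance $healthService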

I can use the “Get OMS Network Performance Monitor Agent Configuration” task to check if an agent has been configured for NPM.

For example, before an agent is configured, the task output shows it is not configured:

image

Then I can use the “Enable OMS Network Performance Monitor” task to enable NPM on this agent:

image

Once enabled, if I run the “Get OMS Network Performance Monitor Agent Configuration” task again, the task output will show it’s enabled and also display the configured port number:

image

and shortly after, you will be able to see the newly configured node in the OMS NPM solution:

image

If you want to remove the configuration, simply run the “Disable OMS Network Performance Monitor” task:

image

You can download the sealed version of this MP HERE. I’ve also pushed the VSAE project for this MP to GitHub.

Visualising OMS Agent Heartbeat Data in Power BI

Written by Tao Yang

Introduction

A few days ago, the OMS product team announced the OMS Agent Heartbeat capability. If you haven’t read about it, you can find the post here: https://blogs.technet.microsoft.com/msoms/2016/08/16/view-your-agent-health-in-oms-2/. In this post, Nini, the PM for the agent heartbeat feature, explained how to create custom views within the OMS portal to visualise the agent heartbeat data. Funnily enough, I started working on something similar around the same time, but instead of creating visual presentations within OMS, I did it in Power BI. I managed to create a couple of Power BI reports for the OMS agent heartbeat data, using both native and custom Power BI visuals:

01. Agent Locations Map Report:

Since the agent heartbeat data contains the geo location of the agent IP address, I’ve created this report to map the physical location of the agent on an interactive map.

02. Agent Statistics Report:

This report consists of the following parts:

  • A heat map based on the country where the agent is located (Agent Location by Country). The colour highlighting the country changes based on the agent count.
  • An interactive “fish tank” visual. In this visual, each fish represents an OMS agent, and the size of the fish represents the number of heartbeats generated by that agent. So the older the agent (fish) is, the more heartbeats it will have sent to the OMS workspace (fish tank), and the bigger the fish becomes.
  • A Brick Chart (containing 100 tiles) that shows the percentage of agents by OS type (Linux vs Windows).
  • A Tornado Chart that shows agent distribution by country, with the agent OS types separated by colour.
  • A Pie Chart that shows agent distribution by management group (SCOM attached vs direct attached vs Linux agents).
  • An agent version Donut Chart that separates agent counts by agent version number (covering both Windows and Linux agents).

The fish visual is called “Enlighten Aquarium”; it’s an animated visual.

In this blog post, I will walk through the steps of creating these reports.

Instructions

Pre-Requisites

Before we create these reports, you need to make sure the following prerequisites are in place:

01. Power BI account

You will need to have a Power BI account (either a free or pro account) so OMS can inject data into your Power BI workspace.

02. Power BI preview feature is enabled in OMS

At the time of writing this post, the Power BI integration feature in OMS is still under public preview. Therefore if you haven’t done so, you will need to manually enable this feature first. To do so, go to the “Preview Features” tab in the OMS settings page, and enable “Power BI Integration”:

03. Connect your OMS workspace to your Power BI workspace.

Once the Power BI Integration feature is enabled, you need to connect OMS to Power BI. This is achieved by providing the Power BI account credential in the “Accounts” tab of the OMS settings page:

image

04. Setting up Power BI injection schedules

We need to inject the OMS agent heartbeat data to Power BI. We can just use a simple query: “Type=Heartbeat”, and set the schedule to run every 15 minutes:

image

05. Wait 15 – 30 minutes

You will have to wait a while before you can see the data in Power BI.
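
While you wait, you can sanity-check that heartbeat records are already flowing into the workspace by running the same query from PowerShell. This is a rough sketch, assuming the AzureRM.OperationalInsights module; the resource group and workspace names are placeholders:

  # Count heartbeats per computer using the legacy OMS search syntax
  Get-AzureRmOperationalInsightsSearchResults -ResourceGroupName 'oms-rg' -WorkspaceName 'my-oms-workspace' `
      -Query 'Type=Heartbeat | measure count() by Computer'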

06. Download Power BI custom visuals

Since these reports use a number of custom Power BI visuals, you will need to download them to your local computer first, and then import them when you start creating the reports. To download the custom visuals, go to the Power BI Visuals Gallery (https://app.powerbi.com/visuals/) and download the following visuals:

  • Brick Chart
  • Hierarchy Slicer
  • Donut Chart GMO
  • Timeline
  • Tornado Chart
  • Enlighten Aquarium

Create Reports

To start creating the report, firstly log on to Power BI using the account you’ve used to make the connection in OMS, and then find the dataset you have specified. In this post, I’ve created a dataset called “OMS – Agent Heartbeat”. By clicking on the dataset, you will be presented with an empty report:

image

You will then need to import the custom visuals by clicking on the “…” icon under Visualizations and selecting “Import a custom visual”:

image

You can only import one at a time, so please repeat this process and import all the custom visuals I have listed above.

Creating Agent Location Report

For the Agent location report, we will add 3 visuals:

  • Hierarchy Slicer – for filtering IP addresses and computer names
  • Map – for pinpointing the agent location
  • Timeline – for filtering the time windows

image

Agent Filter (Hierarchy Slicer)

image

Add a Hierarchy Slicer and place it on the left side of the page, then drag the ComputerIP and Computer fields to the “Fields” section. Please make sure you place ComputerIP on top of Computer:

image

It’s also a good idea to turn off single selection for the hierarchy slicer so you can select multiple items:

image

Agent Location Map

image

Add the Map visual to the report and configure it as listed below:

  • Location – RemoteIPCountry
  • Legend – Computer
  • Latitude – Average of RemoteIPLatitude
  • Longitude – Average of RemoteIPLongitude
  • Size – Count of Computer (Distinct)

image

Note: the latitude and longitude shouldn’t change between different records for the same computer as long as the IP doesn’t change, so it doesn’t matter whether you use average, maximum or minimum; the result of each calculation should be the same.

Time Slicer (Timeline)

image

Add a timeline slicer to the bottom of the report page and configure it to use the TimeGenerated field:

image

To save some space on the report page, you may also turn off the labels for the timeline slicer:

image

Lastly, add a text box at the top of the report page and give it a title. If you want to, you can also assign each visual a title by highlighting the visual, clicking on the Format icon, and updating the title field:

image

To use this report, you can make your selections in the hierarchy slicer and the timeline slicer. The map will be automatically updated.

Creating Agent Statistics Report

For the second report, you can create a new page in the existing report, or create a brand new report based on the same dataset. We will use the following visuals in this report:

  • Filled Map
  • Aquarium
  • Brick Chart
  • Tornado Chart
  • Pie Chart
  • DonutChartGMO

image

Agent Location By Country (Filled Map)

image

Configure the Filled Map visual as shown below:

image

OMS Agent By Heartbeat Count (Aquarium)

image

Configure the Aquarium visual as shown below:

image

Agent OS Type (Brick Chart)

image

Configure the Brick Chart visual as shown below:

image

Agent Distribution By Country (Tornado Chart)

image

Configure the Tornado Chart as shown below:

image

Agent Distribution By Management Groups (Pie Chart)

image

Configure the Pie Chart as shown below:

image

Agent Version (DonutChartGMO)

image

Configure the DonutChartGMO visual as shown below:

image

and change the Primary Measure under Legend to “Value” / “Percentage” / “Both”, whichever you prefer:

image

Most of the visuals used in this report are interactive. For example, if I click on a section in the Agent Version DonutChartGMO visual, the other visuals will be automatically updated to reflect the selection I made.

Once you’ve configured all the visuals, please make sure you save your report.

Conclusion

There are many things you can do with the Power BI reports you’ve just created. For example, you can share them with other people, pin individual visuals or the entire report to a dashboard, or create an IFrame link and embed the report in 3rd party systems that support IFrames (e.g. SharePoint sites). We are not going to get into the details of how to consume these reports today.

Please note that during my testing, the RemoteIPLatitude and RemoteIPLongitude data from the heartbeat events were not very accurate for the computers in my lab. I’m based in Melbourne, Australia, but the map coordinates pointed to a location in Sydney, which is over 1000km away from me.

Please also be aware that SCOM attached agents send 2 heartbeats via different channels each time they heartbeat. This behaviour is by design – my good friend and fellow CDM MVP Stanislav Zhelyazkov (@StanZhelyazkov) has explained this in his blog post: https://cloudadministrator.wordpress.com/2016/08/17/double-heartbeat-events-in-oms-log-analytics/

This is all I have to share for today. Until next time, have fun with OMS and Power BI!

OMS Near Real Time Performance Data Aggregation Removed

Written by Tao Yang

A few weeks ago, the OMS product team made a very nice change to the Near Real Time (NRT) performance data – the data aggregation has been removed! I’ve been waiting for the official announcement before posting this on my blog. Now Leyla from the OMS team has finally broken the silence and made this public: Raw searchable performance metrics in OMS.

I’m really excited about this update. Before this change, we were only able to search 30-minute aggregated data via Log Search. This behaviour brought some limitations:

  • It’s difficult to calculate average values based on other intervals (e.g. 5-minute or 10-minute)
  • Performance-based alert rules can be really outdated – this is because the search result is based on the aggregated value over the last 30 minutes. In a critical environment, this can be a bit too late!

By removing the data aggregation and making the raw data searchable (and retaining it for longer), the limitations listed above are resolved.
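
For example, with the raw samples now searchable, you can roll them up at whatever interval you like after the fact. The sketch below pulls a few hours of raw processor samples and aggregates them into 5-minute averages on the client side; the workspace details are placeholders, and depending on the version of the AzureRM.OperationalInsights module you may need to deserialise the search results differently:

  $end   = Get-Date
  $start = $end.AddHours(-4)
  $result = Get-AzureRmOperationalInsightsSearchResults -ResourceGroupName 'oms-rg' -WorkspaceName 'my-oms-workspace' `
      -Query 'Type=Perf ObjectName=Processor CounterName="% Processor Time"' -Start $start -End $end

  # Roll the raw samples (returned as JSON in the Value property) up into 5-minute averages
  $result.Value | ConvertFrom-Json | Group-Object {
      $t = [datetime]$_.TimeGenerated
      $t.AddMinutes(-($t.Minute % 5)).ToString('yyyy-MM-dd HH:mm')
  } | ForEach-Object {
      [pscustomobject]@{
          IntervalStart = $_.Name
          AvgValue      = ($_.Group | Measure-Object CounterValue -Average).Average
      }
  }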

Another advantage this update brings is that it greatly simplifies the process of authoring your own OpsMgr performance collection rules for OMS NRT perf data. Before this change, the NRT perf rules came in pairs – each perf counter you wanted to collect required 2 rules (with identical data source module configurations): one to collect the raw data and another to collect the 30-minute aggregated data. This has been discussed in great detail in Chapter 11 of our Inside Microsoft Operations Management Suite book (TechNet, Amazon). Now we no longer need to write 2 rules for each perf counter – we only need one rule, for the raw perf data.

The sample OpsMgr management pack below collects the “Log Cache Hit Ratio” counter for SQL databases. It targets the Microsoft.SQLServer.Database class, which is the seed class for pre-SQL 2014 databases (2005, 2008 and 2012):

As you can see from the above sample MP, the rule that collects aggregated data is no longer required.

image

So if you have written rules collecting NRT perf data for OMS in the past, you may want to revisit them and remove the aggregated data collection rules.

ConfigMgr OMS Connector

Written by Tao Yang

Earlier this week, Microsoft released a new feature in System Center Configuration Manager 1606 called the OMS Connector:

image

As we all know, OMS supports computer groups. We can either manually create computer groups in OMS using OMS search queries, or import AD and WSUS groups. With the ConfigMgr OMS Connector, we can now import ConfigMgr device collections into OMS as computer groups.

Instead of using the OMS workspace ID and keys to access OMS, the ConfigMgr OMS connector requires an Azure AD application and service principal. My friend and fellow Cloud and Data Center Management MVP Steve Beaumont blogged his setup experience a few days ago. You can read Steve’s post here: http://www.poweronplatforms.com/configmgr-1606-oms-connector/. As you can see from Steve’s post, provisioning the Azure AD application for the connector can be pretty complex if you are doing it manually – it involves too many steps and you have to use both the old Azure portal (https://manage.windowsazure.com) and the new Azure portal (https://portal.azure.com).

To simplify the process, I have created a PowerShell script to create the Azure AD application for the ConfigMgr OMS Connector. The script is located in my GitHub repository: https://github.com/tyconsulting/BlogPosts/tree/master/OMS

In order to run this script, you will need the following:

  • The latest versions of the AzureRM.Profile and AzureRM.Resources PowerShell modules
  • An Azure subscription admin account from the Azure Active Directory that your Azure Subscription is associated to (the UPN must match the AAD directory name)

When you launch the script, you will firstly be prompted to log in to Azure:

image

Once you have logged in, you will be prompted to select the Azure Subscription and then specify a display name for the Azure AD application. If you don’t assign a name, the script will try to create the Azure AD application under the name “ConfigMgr-OMS-Connector”:

image

This script creates the AAD application and assigns it the Contributor role on your subscription:

image
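
For reference, the provisioning performed by the script boils down to a handful of AzureRM cmdlets along the lines of the sketch below. This is a simplified approximation rather than the actual script; the display name, dummy URLs, password handling and sleep are assumptions:

  Login-AzureRmAccount
  Select-AzureRmSubscription -SubscriptionId '<your subscription id>'

  # Create the AAD application - the password becomes the client secret key
  $secret = 'SuperSecretP@ssw0rd!'
  $app = New-AzureRmADApplication -DisplayName 'ConfigMgr-OMS-Connector' `
      -HomePage 'https://localhost/ConfigMgr-OMS-Connector' `
      -IdentifierUris 'https://localhost/ConfigMgr-OMS-Connector' `
      -Password $secret

  # Create a service principal for the application
  New-AzureRmADServicePrincipal -ApplicationId $app.ApplicationId

  # Give the new service principal a moment to replicate, then grant it Contributor on the subscription
  Start-Sleep -Seconds 15
  New-AzureRmRoleAssignment -RoleDefinitionName 'Contributor' -ServicePrincipalName $app.ApplicationId

  # Client ID to paste into the connector configuration
  $app.ApplicationId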

At the end of the script, you will see the 3 pieces of information you need to create the OMS connector:

  • Tenant
  • Client ID
  • Client Secret Key

You can simply copy and paste these to the OMS connector configuration.

Once you have configured the connector in ConfigMgr and enabled SCCM as a group source, you will soon start seeing the collection memberships being populated in OMS. You can search them in OMS using a search query such as “Type=ComputerGroup GroupSource=SCCM”:

image

Based on what I see, the connector runs every 6 hours and any membership additions or deletions will be updated when the connector runs.

For example, if I search for a particular collection over the last 6 hours, I can see this collection has 9 members:

image

During my testing, I deleted 2 computers from this collection a few days ago. If I specify a custom range targeting a 6-hour time window from a few days ago, I can see this collection had 11 members back then:

image

This could be useful when you need to track down whether certain computers were placed into a collection in the past.
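
If you prefer to do that kind of point-in-time check from PowerShell rather than the portal, the same query can be run with an explicit time window. This is a quick sketch; the workspace details, the collection name and the exact field names are placeholders:

  # Who was in the collection during a 6-hour window a few days ago?
  $end   = (Get-Date).AddDays(-3)
  $start = $end.AddHours(-6)
  Get-AzureRmOperationalInsightsSearchResults -ResourceGroupName 'oms-rg' -WorkspaceName 'my-oms-workspace' `
      -Query 'Type=ComputerGroup GroupSource=SCCM Group="All Windows Servers"' -Start $start -End $end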

This is all I have to share today. Until next time, enjoy OMS!

Scoping OMS Performance Data in Power BI

Written by Tao Yang

When working on a dashboard or a portal, it is often good for the portal to be interactive; I find it more useful than just a static widget. Since I come from a monitoring background, I’ll use performance data as an example.

In the good old SCOM, we have this awesome 3rd party web portal called Squared Up, which allows you to choose the time frame for the perf graph:

image

and you can also select the time frame by highlighting a section from the graph itself:

image

In OMS, when we are playing with the Near Real-Time (NRT) performance data (Type=Perf), we also have the option to specify the time frame of our choice:

image

Additionally, if we have chosen a time scope that is 6 hours or less, we are able to see the raw NRT perf data coming in every few seconds (in light blue colour):

image

Both Squared Up (for SCOM) and the OMS portal provide very interactive ways to consume the perf data.

As we all know, OMS has the ability to send collected data to Power BI, so we are also able to create Power BI reports that contain performance data injected by OMS, for example:

image

As you can see, with the Power BI Line Chart visual, we can even add a trend line (the black dotted line), which is very nice in my opinion. However, when using the native visuals, there are a few limitations with displaying performance data in Power BI:

  • The time frame cannot be easily scoped
  • The computer and performance counters cannot be easily scoped

What I mean is, you can absolutely create filters at the visual level, the page level, or even the report level to create the desired scopes – just like I did in the example above:

image

But these filters are rather static. You won’t be able to alter them once you’ve saved the report. Obviously, as the report creator, you don’t really want to create multiple almost identical visuals for different counters on different computers. In my opinion, reports like these become less interactive and user friendly because they are too static.

So, how do we make these Power BI reports more interactive? There are a few options:

1. Use a Slicer to filter the computers OR the counters

In Power BI, you can add a slicer to your page. Slicers make the report more interactive: users can choose one or more items from the slicer, and other visuals on the page will be updated automatically based on the selection.

image

In the above example, I’ve used a page level filter to only display the ‘Available MBytes’ counter, and users can use the slicer to choose the computers they are interested in.

This solution is easy to implement, and it may satisfy the requirements if you are only interested in a specific counter from a long-term trend point of view – since we are not filtering the time window, it will display the entire period that is available in Power BI.

2. Use the Custom Visual ‘Hierarchy Slicer’ to filter the computers AND the counters

For Power BI, you can download custom visuals from https://app.powerbi.com/visuals/?WT.mc_id=Blog_CustomVisuals and then import them into your reports.

One of the custom visuals you can download is called the Hierarchy Slicer:

image

As the name suggests, compared to the original built-in slicer, this visual allows you to build a hierarchy for your slicer:

image

As you can see, I’ve added the computer name as the top level in the hierarchy slicer, followed by the counter name as the second level. As a result, I don’t have to use filters for this page. Users can simply click on a counter (2nd level) to view the graph for that counter on the specific computer, or select a computer (1st level) to see all the perf data for that particular computer. Obviously, you can make the counter name the top of the hierarchy and place the computer name at the second level if that suits your needs better.

Note: as per the introduction video for this visual, you can enable multi-select by configuring the visual and turning off the ‘Single Select’ option:

image

However, based on my experience, this option is only available when you are using Power BI Desktop. It is not available in Power BI Online.

image

Therefore we won’t be able to use multi-select for the OMS injected data because we cannot use Power BI Desktop with OMS data.

3. Use the Brush Chart custom visual to scope the time frame

Another cool custom visual is called the Brush Chart; it is also called ‘Advanced Time Slicer’ on the download page:

image

I am using this together with the hierarchy slicer, so I can scope both computers and counters, as well as the perf data time window.

image

As you can see, there are 2 graphs in this visual. I can use the mouse (or another pointing device) to select a time window from the bottom graph, and the top graph will be automatically zoomed into the selected time period.

4. Use the Time Brush Custom Visual to scope the time frame

The Time Brush custom visual is very similar to the Brush Chart (aka Advanced Time Slicer).

image

It cannot be used by itself; it acts as a control for other visuals. In the example below, I’m using it together with the Line Chart visual, as well as the hierarchy slicer:

image

As you can see, when I select a period from the Time Brush visual, the line chart gets updated automatically.

5. Use other custom visuals

There are a lot of other custom visuals that you can download. For example, there’s another time slicer called Timeline that allows you to specify a precise time frame.

image

Conclusion

By using the combination of various slicers, we can produce more interactive and user friendly reports in Power BI. In the examples listed above, I can quickly produce a single report for ALL the OMS performance data, and users can simply choose the computer, counter and the time frame from the report itself. There is no need to create separate reports for different counters or computers.

I hope you find these tips useful, and have fun with OMS and Power BI!

Calculating SQL Database DTU for Azure SQL DB Using PowerShell

Written by Tao Yang

Over the last few weeks, I have been working on a project related to Azure SQL Database. One of the requirements was to be able to programmatically calculate the SQL Database DTU (Database Throughput Unit).

Since the DTU concept is Microsoft’s proprietary IP, the actual formula for the DTU calculation has not been released to the public. Luckily, Microsoft’s Justin Henriksen has developed an online Azure SQL DB DTU Calculator, and you can also find Justin’s blog here. I was able to use the web service Justin developed for the online DTU Calculator, and I wrote 2 PowerShell functions to perform the calculation by invoking that web service. The first function is called Get-AzureSQLDBDTU, which can be used to calculate the DTU for individual databases; the second function is called Get-AzureSQLDBElasticPoolDTU, which can be used to calculate the DTU for Azure SQL Elastic Pools.

Obviously, since we are invoking a web service, the computer where you are running the script from requires an Internet connection. Here’s a sample script to invoke the Get-AzureSQLDBDTU function:

Note: you will need to change the variables in the ‘variables’ region; $LogicalDriveLetter is the drive letter for the SQL DB data file drive.
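
The general shape of such a script is sketched below. This is only an approximation for illustration: the counters are the ones the DTU calculator asks for, but the Get-AzureSQLDBDTU parameter name used here is hypothetical, so check the function’s own help for its real parameters.

  #region variables
  $SQLServer          = 'SQLSERVER01'   # server to sample
  $LogicalDriveLetter = 'D'             # drive hosting the SQL DB data files
  #endregion

  # Capture the perf counters the DTU calculator expects (1 sample per second for 1 minute here;
  # a longer capture gives a more representative result)
  $counterPaths = @(
      '\Processor(_Total)\% Processor Time'
      "\LogicalDisk($($LogicalDriveLetter):)\Disk Reads/sec"
      "\LogicalDisk($($LogicalDriveLetter):)\Disk Writes/sec"
      '\SQLServer:Databases(_Total)\Log Bytes Flushed/sec'
  )
  $samples = Get-Counter -ComputerName $SQLServer -Counter $counterPaths -SampleInterval 1 -MaxSamples 60

  # Hand the readings to the function (the parameter name below is hypothetical)
  $result = Get-AzureSQLDBDTU -PerfCounterSamples $samples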

The recommended Azure SQL DB service tier and coverage % can be retrieved in the ‘Recommendations’ property of the result:

image

The raw reading for each perf sample can be retrieved from the ‘SelectedServiceTiers’ property of the result:

image

Lastly, thanks to Justin for developing the DTU calculator and the web service, and for pointing me in the right direction.

SharePointSDK PowerShell Module Updated to Version 2.1.0

Written by Tao Yang

OK, this blog has been very quiet recently. Due to some work-related requirements, I had to pass a few Microsoft exams, so I have spent most of my time over the last couple of months studying. Firstly, I passed the MCSE Private Cloud re-certification exam, then I passed the 2 Azure exams: 70-532 Developing Microsoft Azure Solutions and 70-533 Implementing Microsoft Azure Infrastructure Solutions. Other than studying and taking exams, I have also been working on a new version of the SharePointSDK PowerShell module in my spare time. I finished everything on my to-do list for this release last night, and I’ve just published version 2.1.0 on the PowerShell Gallery and GitHub:

This new release includes the following updates:

01. Fixed the “format-default : The collection has not been initialized.” error when retrieving various SharePoint objects.

For example, when retrieving a SharePoint list in previous versions using the Get-SPList function, you would get this error:

image

This error is fixed in version 2.1.0; now you will get a default view defined in the module:

image

02. SharePoint client SDK DLLs are now automatically loaded with the module.

I have configured the module manifest to load the SharePoint Client SDK DLLs that are included in the module folder. As a result of this change, the Import-SPClientSDK function is no longer required and has been removed from the module completely.

In the past, the Import-SPClientSDK function would firstly try to load the required DLLs from the Global Assembly Cache (GAC), and would only fall back to the DLLs located in the module folder if they didn’t exist in the GAC. Since the Import-SPClientSDK function has been removed, this behaviour has changed: starting from this release, the module will not try to load the DLLs from the GAC, but will ALWAYS use the copies in the module folder.
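
In practice this boils down to listing the SDK assemblies in the module manifest so that they are loaded whenever the module is imported. The excerpt below is illustrative only; the paths are relative to the module folder:

  # Illustrative excerpt from SharePointSDK.psd1
  RequiredAssemblies = @(
      'Microsoft.SharePoint.Client.dll'
      'Microsoft.SharePoint.Client.Runtime.dll'
  )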

03. New-SPListLookupField function now supports adding additional lookup columns.

When adding a lookup field to a SharePoint list, you can choose to include one or more additional columns, for example:

image

Previous versions of this module did not support adding additional columns when creating a lookup field. In this version, you can add additional columns using the “-AdditionalSourceFields” parameter.
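
A hedged usage example is shown below. Apart from -AdditionalSourceFields, every parameter name is an assumption, so check the function’s help (Get-Help New-SPListLookupField -Full) for the real ones:

  # Create a lookup field that also pulls two extra columns from the source list
  # (parameter names other than -AdditionalSourceFields are assumptions)
  New-SPListLookupField -SiteUrl 'https://sharepoint.contoso.com/sites/ops' -Credential $cred `
      -ListTitle 'Servers' -FieldName 'RelatedApplication' -FieldDisplayName 'Related Application' `
      -LookupList 'Applications' -LookupField 'Title' `
      -AdditionalSourceFields 'Owner','SupportTeam'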

04. Various minor bug fixes

Other than the above-mentioned updates, this version also includes various minor bug fixes.

Special Thanks

I’d like to thank my friend and fellow CDM MVP Jakob Gottlieb Svendsen (@JakobGSvendsen) for his feedback. Most of the items updated in this release were the result of Jakob’s feedback.

Blog Site Recovered–Finally

Written by Tao Yang

If you are a regular visitor of this blog, you may have noticed that it has been down since last Thursday, and I was only able to get it back online a few hours ago (Monday afternoon my time). The downtime was caused by the server hosting my blog. My hoster couldn’t recover the server, and ended up restoring my site from a backup (one they took on 24th April, which was 3 weeks ago).

Due to the hoster’s inability to maintain my site, I have lost 3 weeks of data (the 2 most recent blog posts, comments, etc.). Only today, after I talked to their technical support people on the phone, did I find out that they only back up my site once a week (and that the recent backups were corrupted).

Putting my emotions aside, I managed to find the Windows Live Writer drafts for the 2 blog posts that I lost, and I have just re-published them. I made sure the URLs are still the same as the originals, but you may see them appear in your RSS feeds again (as duplicates). Also, if you left any comments on my blog over the last 3 weeks, they are probably gone now.

I apologise for any inconvenience this outage may have caused. I am looking into my own WordPress backup solutions now.

Also, my hosting plan is due for renewal in about a month’s time; I think I’ve got to do something about it.

Upcoming SCOM Bootcamp-Melbourne Australia

Written by Tao Yang

I’m teaming up with Infront Consulting Australia and will deliver a 4-day, in-person, instructor-led SCOM 2012 bootcamp in Melbourne, Australia. The content of this bootcamp was developed by the Infront Consulting Group, and it has been very popular internationally.

This bootcamp is designed for SCOM administrators and operators. If you are running SCOM (or planning to implement SCOM) in your environment, I strongly recommend you enrol in this bootcamp and spend 4 days with me and the other folks attending.

Here are the details of this training event:

image

SCOM 2012 Bootcamp – Australia

Date: 20 – 23 June 2016

Location:

Saxons Training Facilities Melbourne
Level 8
500 Collins Street
Melbourne VIC 3000

Please join us for the first Infront Consulting SCOM 2012 Bootcamp in Australia! Tao Yang is a well-known author, speaker, blogger and SCOM expert who will be guiding you in person in the SCOM 2012 R2 Bootcamp.

This four-day bootcamp is a mix of in-depth instructor-led training and hands-on labs where you will learn how to administer System Center Operations Manager 2012. This course will provide students with an understanding of the Operations Manager 2012 architecture and features, and how to administer and maintain Operations Manager 2012.

Cost: $3,600 AUD + GST per student, includes course materials and access to Hands on Labs.

Modules covered:

Session 1: Overview of System Center Operations Manager 2012

Session 2: Operations Manager 2012 Architecture

Session 3: Installing Operations Manager 2012

Session 4: Installing the Gateway Server Role

Session 5: Configuring Operations Manager Security

Session 6: Agent Deployment and Configuration

Session 7: Alert Notification and Incident Remediation

Session 8: Management Pack Tuning and Targeting Best Practices

Session 9: Tuning of the Core Microsoft MPs

Session 10: Application Performance Monitoring

Session 11: Network Monitoring in Operations Manager 2012

Session 12: Working in the Operations Manager Shell

Session 13: Building Custom Monitoring Solutions & Distributed Applications

Session 14: Reporting & Dashboards

Session 15: Third Party Extensions

Registration Link:

https://www.eventbrite.com/e/scom-2012-bootcamp-australia-tickets-25190237679

Hope to see you there!