Consolidate Multiple Squared Up Instances into a Single Dashboard

Written by Tao Yang

Recently, I have been involved in many conversations with fellow SCCDM MVP colleagues about managing multiple SCOM management groups. I am planning to write more on this topic in the future. Two weeks ago, after I posted about using Squared Up as a universal dashboard and demonstrated how to list active alerts from another management group using their SQL plugin, my friends at Squared Up hinted that I could also use the Iframe plugin for this purpose. So today, I’d like to demonstrate how we can present information from multiple management groups (multiple Squared Up instances) on a single Squared Up page / dashboard using their Iframe plugin.

The Iframe plugin allows you to display an embedded page within a dashboard. In my lab, I have 2 SCOM management groups: one located in my home lab, and another hosted in Azure. I have installed Squared Up on the web console server of each MG. Today, I managed to produce 2 dashboards in one of my Squared Up instances:

Alerts from Multiple MGs:


Servers from Multiple MGs:


Each section (left and right) uses an instance of the Iframe plugin to display a particular web page from a Squared Up instance. The setup is relatively easy; I’ll go through how to set it up now.

01. Configure the page you want to display the way you like it, e.g. for the alert page, use the options to customise it however you want.


02. Save the changes and copy the URL.

03. Create a new page, split it into multiple sections, and in one of the sections choose “Web Content” and paste the URL into the src field.


04. Add ?embed=true to the end of the URL (to remove the header), and append 2 additional query parameters (refresh=true and scrolling=true).
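For example, a finished src value might look like the following (the host name and dashboard path here are purely illustrative):

https://squaredup02/SquaredUpv2/AlertsFromAzureMG?embed=true&refresh=true&scrolling=true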

05. Repeat the above steps in the other sections.

For the server list dashboard, you can drill down by clicking on a server, and then you will be able to trigger the agent tasks associated with that particular server.


So, why am I using Squared Up to consolidate views instead of configuring a Connected MGs scenario? Connected MGs have limitations, such as:

  • Both management groups must be running the same version.
  • Both management groups must be located in the same domain or in trusted domains; untrusted domains are not supported.

More information about Connected MGs can be found here: https://technet.microsoft.com/en-au/library/hh230698.aspx

By using the Iframe plugin, you simply consolidate the views onto a single page, so the limitations of Connected MGs listed above don’t apply.

This is what I have to share today. As I mentioned at the beginning of this article, there will be more on managing multiple management groups in the future, so stay tuned.

Using Squared Up as a Universal Dashboard Solution

Written by Tao Yang

Background

I’ve been playing with Squared Up a lot lately to get myself familiar with the new 2.0 version, so my recent few posts have all been related to it.

A few days ago, I was involved in a conversation about how to bring data from multiple management groups into a single pane of glass, from the SCOM users’ / consumers’ point of view. As a result of that conversation, I spent some time trying the Squared Up SQL plugin. After a couple of hours, I managed to produce 2 dashboards using this plugin, both using data sources that are foreign to the OpsMgr management group the Squared Up instance is connected to.

In this blog, I’ll go through the steps setting up the following dashboards:

  • Active Alerts from another OpsMgr management group (data source: the OperationsManager DB from the other management group).
  • ConfigMgr 2012 Package Distribution Dashboard (data source: ConfigMgr primary site DB).

 

I will demonstrate using the Squared Up 2.0 dashboard installed on the OpsMgr web console server in my home lab.

The foreign OpsMgr management group is hosted in my Azure subscription. All the servers used by this management group are connected to my home lab via an Azure S2S VPN connection, and they are located in the same domain as my on-prem lab.

The ConfigMgr infrastructure is also located in my home lab (on-prem).

Pre-Requisites

Setting up DB access in SQL

Since the SQL connection string used by this plugin is stored in clear text, Squared Up does not recommend using a username and password. Therefore, I’m using integrated security in the connection string.

Since the Squared Up IIS application pool runs as the local NetworkService account, I must grant the Squared Up web server’s computer account db_datareader access to each database that is going to be used as a data source – in my case, the ConfigMgr primary site DB and the OpsMgr operational DB.

Installing Squared Up SQL Plugin

You will need to install the latest version (2.0.2) of the plugin. If you have installed it before, please make sure you update to this version: there was a bug in earlier versions, which has been fixed in 2.0.2.

 

ConfigMgr Package Distribution Dashboard


This dashboard contains 3 parts (two on the top, one at the bottom). The top 2 parts each display a single number (how many package distributions are in the Error and Retrying states), and the bottom part is a list of all the distributions in these 2 states.

All 3 parts use the SQL plugin, and the connection string for all 3 parts is:

Data Source=<ConfigMgr DB Server>;Initial Catalog=<ConfigMgr Site DB>;Integrated Security=True;

The top 2 parts are configured as follows.


Pkg Dist – Error State part (Top left):

SQL query string:

SELECT Count([StateGroupName]) FROM v_ContentDistributionReport where StateGroupName = 'Error'

Pkg Dist – Retrying State part (Top right):

SQL query string:

SELECT Count([StateGroupName]) FROM v_ContentDistributionReport where StateGroupName = 'Retrying'

Other parameters:

isscalar: true

scalarfontsize: 120

Pkg Dist – List (Bottom):


SQL query string:

SELECT [PkgID],[DistributionPoint],[State],[StateName],[StateGroupName],[SourceVersion],[SiteCode],Convert(VARCHAR(24),[SummaryDate],113) as 'Summary Date',[PackageType] FROM v_ContentDistributionReport where StateGroupName <> 'Success' order by StateGroupName desc

Other parameters:

isscalar: false

 

Active Alerts Dashboard for a foreign OpsMgr MG


Similar to the previous example, there are 2 parts at the top displaying scalar values; in this case, I’ve chosen to display the active alert counts for critical and warning alerts. Below the 2 big scalar numbers, I added 2 lists for the active critical & warning alerts.

SQL connection string (shared by all parts):

Data Source=<OpsMgr DB server>;Initial Catalog=OperationsManager;Integrated Security=True;

Active Alert Count – Critical (Top left):

SQL query string:

select count(id) from [dbo].[AlertView] where ResolutionState <> 255 and severity = 2

isscalar: true

scalarfontsize: 120

Active Alert Count – Warning (Top right):

SQL query string:

select count(id) from [dbo].[AlertView] where ResolutionState <> 255 and severity = 1

isscalar: true

scalarfontsize: 120

Active Alerts – Critical (list):

SQL query string:

SELECT Case a.[MonitoringObjectHealthState] When 0 Then 'Not Monitored' When 1 Then 'Healthy' When 2 Then 'Warning' When 3 Then 'Critical' END As 'Health State', a.[MonitoringObjectFullName] as 'Monitoring Object',a.[AlertStringName] as 'Alert Title',r.ResolutionStateName as 'Resolution State',Case a.Severity When 0 Then 'Information' When 1 Then 'Warning' When 2 Then 'Critical' END As 'Alert Severity', Case a.Priority When 0 Then 'Low' When 1 Then 'Medium' When 2 Then 'High' END As 'Alert Priority',Convert(VARCHAR(24),a.[TimeRaised],113) as 'Time Raised UTC' FROM [dbo].[AlertView] a inner join dbo.ResolutionStateView r on a.ResolutionState = r.ResolutionState where a.ResolutionState <> 255 and a.severity = 2 order by a.TimeRaised desc

isscalar: false

tableshowheaders: true

Active Alerts – Warning (list):

SQL query string:

SELECT Case a.[MonitoringObjectHealthState] When 0 Then 'Not Monitored' When 1 Then 'Healthy' When 2 Then 'Warning' When 3 Then 'Critical' END As 'Health State', a.[MonitoringObjectFullName] as 'Monitoring Object',a.[AlertStringName] as 'Alert Title',r.ResolutionStateName as 'Resolution State',Case a.Severity When 0 Then 'Information' When 1 Then 'Warning' When 2 Then 'Critical' END As 'Alert Severity', Case a.Priority When 0 Then 'Low' When 1 Then 'Medium' When 2 Then 'High' END As 'Alert Priority',Convert(VARCHAR(24),a.[TimeRaised],113) as 'Time Raised UTC' FROM [dbo].[AlertView] a inner join dbo.ResolutionStateView r on a.ResolutionState = r.ResolutionState where a.ResolutionState <> 255 and a.severity = 1 order by a.TimeRaised desc

isscalar: false

tableshowheaders: true

Currently, I have 9 critical and 12 warning alerts in the MG in the cloud, and these figures match what’s shown in the Operations console.


 

Conclusion

By using the Squared Up SQL plugin, you can potentially turn Squared Up into a dashboard for any system (not just OpsMgr) – the limit is your imagination. I have also written a few posts on Squared Up before; you can find them here: http://blog.tyang.org/tag/squaredup/

Lastly, I encourage you to share your brilliant ideas with the rest of us, and I will for sure keep posting on this topic if I come up with something cool in the future.

Accessing OpsMgr Performance Data in Squared Up Dashboard

Written by Tao Yang


13/03/2015 Update: Correction after feedback from Squared Up – Squared Up does not read perf data from the Operational DB. This post has been updated with the correct information.

Yesterday, my friend and fellow SCCDM MVP Cameron Fuller posted a good article explaining the differences between the performance view and the performance widget in OpsMgr. If you haven’t read it yet, please read it first (World War Widget: The performance view vs the performance widget) and then come back to this article.

As Cameron explained, performance views read data from the operational DB and you can access the most recent short term data. The performance widgets read data from the Data Warehouse DB and you are able to access the long term historical data this way.

I’d also like to throw a 3rd option into the mix; however, this is not something native to OpsMgr, but rather comes via the 3rd party dashboard Squared Up.

To be honest, accessing performance data must be my favourite feature in Squared Up. In this post, I will show off a few features related to this topic in the Squared Up console.

01. Automatically Switch Between Data Sets

Since all performance collection rules write performance data into both databases, Squared Up only needs to read performance data from the Data Warehouse DB. When accessing performance data in Squared Up, as long as you have already established the Data Warehouse DB connection, Squared Up will automatically detect the best aggregation dataset for the performance data, and you can access both long term and short term data from a single view.


The default period is 12 hours, and the data displayed is the raw performance data (not aggregated). If I change the period to the last 30 days, the performance counter name is updated with “(hourly)” at the end – this means the graph is now based on the hourly aggregate dataset.


If I change the period again, this time selecting “all”, it shows about a year’s worth of data and has automatically switched to the daily aggregate dataset.


02. Accessing the numeric value from the graph

Besides auto-detecting and switching to the most appropriate dataset, if you move the cursor to any point on the graph, you can read the exact figure at that point in time.


03. Selecting a Period from the graph

You can also highlight a period on the graph, and Squared Up will update the graph to display only the period you’ve just highlighted.


04. Exporting Performance Data to Excel

You can also export the data to Excel using the export button at the top right of the page.


When you open the exported Excel document, you’ll see 2 tabs – one containing the numeric data in a table, and one containing the graph itself.


 

Conclusion

This is all based on my own experience – just my 2 cents on the topic that Cameron started. I think it is good to also show the community what a 3rd party product can do in addition to the native capabilities.

If you haven’t played with Squared Up before, I strongly recommend taking a look: http://www.squaredup.com. You can access the online demo from their website, and they also have a few demo videos that you can watch: http://squaredup.com/resources/videos/

Lastly, please feel free to drop me an email if you want to carry on this discussion.

Various Ways to Find the ID of a Monitoring Object in OpsMgr

Written by Tao Yang

Often when working in OpsMgr, we need to find the ID of a monitoring object. For example, in the recent Squared Up Dashboard version 2.0 Customer Preview webinar (https://www.youtube.com/watch?v=233oTAefrRM), it was mentioned that monitoring object IDs must be located when preparing the Visio diagram for the upcoming Visio plugin.

In this post, I’ll demonstrate 3 methods to retrieve the monitoring object ID from SCOM. These 3 methods are:

  • Using OpsMgr built-in PowerShell Module “OperationsManager”
  • Using OpsMgr SDK via Windows PowerShell
  • Using SCSM Entity Explorer

In the demonstrations, I will show how to retrieve the monitoring object ID for a particular SQL database:

  • Monitoring Class Display Name: SQL Database
  • DB name: master
  • DB Engine: MSSQLSERVER
  • SQL Server: SQLDB01.corp.tyang.org


Note: before I start digging into this topic – if you are not very PowerShell savvy and only want a simple GUI based solution, please go straight to the last method (using SCSM Entity Explorer).

 

Using OpsMgr PowerShell Module OperationsManager

01. Define variables and connect to the management server:
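A minimal sketch of this step (the management server name is a placeholder; the other values are from the example above):

Import-Module OperationsManager
$ManagementServer = 'OpsMgrMS01'   #placeholder management server name
$ClassDisplayName = 'SQL Database'
$DBName = 'master'
$DBInstance = 'MSSQLSERVER'
$SQLServer = 'SQLDB01.corp.tyang.org'
#Connect to the management group via the management server
New-SCOMManagementGroupConnection -ComputerName $ManagementServer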

02. Get the monitoring class based on its display name:
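Something along these lines:

#Search by display name; this returns an array because the display name is not unique (see below)
$MonitoringClasses = Get-SCOMClass -DisplayName $ClassDisplayName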

However, in my management group, there are 2 classes with the same display name “SQL Database”.


As you can see, the first item in the array $MonitoringClasses is the correct one in this case. We will reference it as $MonitoringClasses[0].

03. Get the monitoring object for the particular database:
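A sketch of this step; the filter on the DisplayName and Path properties is an assumption about how these instance properties are populated in my lab, so verify it in your own environment:

#Retrieve ALL instances of the class, then filter the result client-side
$MonitoringObject = Get-SCOMClassInstance -Class $MonitoringClasses[0] | Where-Object {$_.DisplayName -eq $DBName -and $_.Path -like "*$SQLServer*" -and $_.Path -like "*$DBInstance*"}
$MonitoringObject.Id            #the monitoring object ID (a Guid)
$MonitoringObject.Id.ToString() #converted to a string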

The Get-SCOMClassInstance cmdlet does not take any search criteria; therefore, the command above retrieves all instances of the SQL Database class, then filters the result on the database name, SQL server name and SQL DB instance name to locate the particular database that we are looking for.


The Id property of the returned instance is the monitoring object ID.


The type of the Id field is Guid. You can also convert it to a string, as shown above.

 

Using the OpsMgr SDK via Windows PowerShell

In this example, I won’t spend too much time on how to load the SDK assemblies; in the script, I’m assuming the SDK DLLs are already loaded into the Global Assembly Cache (GAC). So, in order to use this script, you will need to run it on an OpsMgr management server, a web console server, or a computer that has the Operations console installed.

01. Define variables, load SDK assemblies and connect to OpsMgr management group:
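A minimal sketch; the assembly version and public key token below are the OpsMgr 2012 values as I know them, so please verify them in your own environment:

$ManagementServer = 'OpsMgrMS01'   #placeholder management server name
$ClassDisplayName = 'SQL Database'
#Load the SDK assemblies from the GAC (version and public key token are assumptions - verify first)
$Version = '7.0.5000.0'
$KeyToken = '31bf3856ad364e35'
[System.Reflection.Assembly]::Load("Microsoft.EnterpriseManagement.Core, Version=$Version, Culture=neutral, PublicKeyToken=$KeyToken") | Out-Null
[System.Reflection.Assembly]::Load("Microsoft.EnterpriseManagement.OperationsManager, Version=$Version, Culture=neutral, PublicKeyToken=$KeyToken") | Out-Null
#Connect to the management group
$MG = New-Object Microsoft.EnterpriseManagement.ManagementGroup($ManagementServer)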

02. Get the monitoring class based on the display name:
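A sketch of the class search:

$ClassCriteria = New-Object Microsoft.EnterpriseManagement.Configuration.MonitoringClassCriteria("DisplayName = '$ClassDisplayName'")
$MonitoringClass = $MG.GetMonitoringClasses($ClassCriteria)   #returns a ReadOnlyCollection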


As you can see, since the display name is not unique, 2 classes are returned from the search (the same as with the first method), except this time the $MonitoringClass variable is a ReadOnlyCollection rather than an array. We can still reference the correct monitoring class using $MonitoringClass[0].


03. Get the monitoring object for the particular database:
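A sketch of the instance search; the exact criteria string (which properties to match, and the values) is based on my lab, so adjust it for your environment:

$ObjectCriteria = New-Object Microsoft.EnterpriseManagement.Monitoring.MonitoringObjectGenericCriteria("DisplayName = 'master' AND Path LIKE '%SQLDB01%'")
$MonitoringObject = $MG.GetMonitoringObjects($ObjectCriteria, $MonitoringClass[0])
$MonitoringObject[0].Id   #the monitoring object ID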

Please refer to the MonitoringObjectGenericCriteria documentation for the properties that you can use to build the search criteria.

As you can see, unlike the first method using the built-in module, we can specify more granular search criteria to locate the monitoring object (as a result, the command execution should be much faster). However, please keep in mind that although only one monitoring object is returned from the search, the $MonitoringObject variable is still a ReadOnlyCollection.


You can access the particular SQL Database (monitoring object) using $MonitoringObject[0].


 

Using SCSM Entity Explorer

SCSM Entity Explorer is a free utility developed by Dieter Gasser. You can download it from TechNet Gallery: https://gallery.technet.microsoft.com/SCSM-Entity-Explorer-68b86bd2

Although, as the name suggests, it was developed for SCSM, it also works with OpsMgr. Once you’ve downloaded it and placed it on a computer, you can follow the instructions below to locate a particular monitoring object.

01. Connect to an OpsMgr management server and search the monitoring class using display name


There are 2 classes returned when searching for the display name “SQL Database”. You can identify the correct one from the full name shown on the right.

02. Load objects for the monitoring class:

Go to the Objects tab and click the “Load Objects” button to load all instances.


Unfortunately, we cannot modify which properties are displayed in the objects list, and the display name does not contain the SQL server and DB instance names. In this scenario, the only way to find the correct instance is to open each one using the “View Details” button.


Once you’ve located the correct instance, the monitoring object ID is displayed in the objects list.

Having said that, if you are looking for a monitoring object from a singleton class (where there can only be 1 instance in the MG, such as a group), this method is probably the easiest of all 3.

e.g. when I’m looking for a group I created for the Hyper-V servers and their health service watchers, there is only one instance.


Also, for certain monitoring objects (such as Windows Server), you can easily locate the correct instance based on the display name.


Conclusion

Based on your requirements (and the information available to search on), you can choose whichever of these methods suits you best.

Lastly, if you know other ways to locate a monitoring object ID, please leave a note here or send me an email.

PowerShell Script to Extract CMDB Data From System Center Service Manager Using SDK

Written by Tao Yang

Background

In my previous post, Writing PowerShell Modules That Interact With Various SDK Assemblies, I explained how to create a PowerShell module that embeds various SDK DLLs, using the System Center Service Manager SDK as an example. Well, the reason I created the module for the Service Manager SDK is that I needed to write a script to extract CMDB data from Service Manager. In this post, I’ll go through what I’ve done; the script can also be downloaded at the end of the article.

So, I needed to write a script to export configuration items from Service Manager, with the following requirements:

  • The script must be generic and extendable to be able to extract instances of any CI classes.
  • The properties (to be exported) of each class should also be configurable.
  • Supports delta export (Only export what’s changed since last execution).
  • Be able to also export CI relationships.
  • Be able to filter unwanted relationships (from being exported).

After evaluating different options, I have decided to directly interact with Service Manager SDK in the script instead of using the native Service Manager PowerShell module and the community based module SMLets.

Pre-requisite

As I just mentioned, this script requires the SMSDK module I created previously (you will have to locate the SDK DLLs on your Service Manager management server and copy them to the module folder, as I explained in the previous post).

Configuration

In order to make the script generic while remaining extendable, I’ve used an XML file to define the various configurations for the script.


I have added a lot of comments in this XML file, so it should be very self-explanatory. Just a few notes here:

  • This XML configuration file must be placed in the same folder as the script.
  • For each property that you wish to export from Service Manager, list it under the <Properties><PropertyName> tags.
  • The script also exports the relationships associated with each exported CI object. However, only the relationships where the exported CI object is the source object are exported.
  • Both <PropertyName> and <CIClassName> are the internal names; please do not use the display names.
  • You can use SCSM Entity Explorer (a free download from the TechNet Gallery) to identify the internal names of the class and properties that you wish to export.


 

Script

Since I have written a lot of scripts using the OpsMgr SDK in the past, I didn’t find the Service Manager SDK too hard (although this is only the second time I’ve written scripts for Service Manager). The script itself is fairly simple and short.

 

To execute the script, simply pass the Service Manager management server name (user name and password are optional); you can also use -Verbose if you’d like to see verbose messages:

.\SMConfigItemExtract.ps1 -ManagementServer SCSMMS01 -verbose

Outputs

This script creates a separate CSV file for each CI class configured in the XML. It also creates a single CSV file for all exported relationships.


The script also writes the execution time stamp to the config.xml under <LastSyncFileDateUTC>. The next time the script runs, it retrieves this value and only exports the configuration items that have changed after this time stamp. If you need to force a full sync, manually remove the value in this tag.
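A minimal sketch of this delta logic, assuming a <LastSyncFileDateUTC> element somewhere under the root of config.xml (the element path and the date format are assumptions):

$ConfigPath = Join-Path $PSScriptRoot 'config.xml'
[xml]$Config = Get-Content $ConfigPath
$Node = $Config.SelectSingleNode('//LastSyncFileDateUTC')
If ([string]::IsNullOrEmpty($Node.InnerText))
{
    $LastSync = [datetime]::MinValue   #empty tag: force a full sync
}
Else
{
    $LastSync = ([datetime]$Node.InnerText).ToUniversalTime()
}
#... export only the CIs whose last modified time is after $LastSync ...
#Finally, record this execution's time stamp for the next delta run
$Node.InnerText = (Get-Date).ToUniversalTime().ToString('yyyy-MM-dd HH:mm:ss')
$Config.Save($ConfigPath)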


Download

You can download the prerequisite SMSDK PowerShell module HERE.

You can download the script and the config.xml file HERE.

My Experience of Using Silect MP Studio During Management Pack Update Process

Written by Tao Yang

Thanks to Silect’s generosity, I was given an NFR (Not For Resale) license for MP Studio to use in my lab last November. When I received the license, I created a VM and installed it in my lab straight away. However, due to my workload and other commitments, I haven’t been able to spend much time exploring the product. In the meantime, I’ve been trying to get all the past and current Microsoft management packs ready so I can load them into MP Studio to build my repository.

Today, one of my colleagues came to me seeking help with an error logged in the Operations Manager log on all our DPM 2012 R2 servers (where SQL is locally installed).


It’s obvious the offending script (GetSQL2012SPNState.vbs) is from the SQL 2012 MP, and we can tell the error is that the computer FQDN that WMI is trying to connect to is incorrect: the FQDN contains the NetBIOS computer name plus 2 domain names from the same forest.

I knew the SQL MP in our production environment is 2 versions behind (currently on version 6.4.1.0), so I wanted to find out whether the latest one (6.5.4.0) has fixed this issue.

Therefore, as I always do, I first went through the change log in the MP guide. The only thing I could find that might be related to SPN monitoring is this line:

SPN monitor now has overridable ‘search scope’ which allows the end user to choose between LDAP and Global Catalog


I wasn’t really sure whether the new MP would fix the issue, and no, I didn’t have time to unseal and read the raw XML to figure it out, because this version of the SQL 2012 monitoring MP has 49,723 lines of code!

At this stage, I thought MP Studio might be able to help (by comparing the 2 MPs). So I remoted back to my home lab and quickly loaded all the versions of the SQL MP that I have into MP Studio.


I then chose to compare version 6.5.4.0 (the latest version) with version 6.4.1.0 (the version loaded in my production environment).


It took MP Studio a few seconds to generate the comparison result, and I was surprised by how many items had been updated!


Unfortunately, there is no search function in the comparison result window, but fortunately I was able to export the result to Excel. The export contained 655 rows! When I searched for the script name mentioned in the error log (GetSQL2012SPNState.vbs), I found the script had actually been updated.


Because the script is very long and gets truncated in the Excel spreadsheet, I had to go back to MP Studio and find the entry there (luckily, entries are sorted alphabetically).

Once the change was located, I could copy both the parent value and the child value to the clipboard.


I pasted the values into Notepad++; since they contained some XML headers / footers plus both versions of the script, I removed the headers and footers and separated the scripts into 2 files.

Lastly, I used the Compare plugin for Notepad++ to compare the two scripts, and I found an additional section in the new MP (6.5.4.0) that may be related to the error we are getting (it has something to do with generating the domain FQDN).


After seeing this, I took an educated guess that this could be the fix for our issue and asked my colleague to load MP version 6.5.4.0 into our Test management group. When we went to load the MP, we found out that I had already loaded it in Test (I’ve been extremely busy lately and forgot I’d done it!). So my colleague checked a couple of DPM servers in our test environment and confirmed the error does not exist there. It seems we have nailed this issue.

Conclusion

Updating management packs has always been a challenging task (for everyone, I believe). In my opinion, we all face challenges because we don’t know EXACTLY what has been changed. This is because:

  • It is impossible to read and compare each MP file (e.g. the SQL 2012 monitoring MP has around 50,000 lines of code, plus the 2008 and 2005 MPs, plus the library MP, etc.); they are just too big to read!
  • The MP guide normally only provides a vague description in the change log (if there are change logs at all).
  • Any bugs caused by human error would not be captured in the change logs.
  • Sometimes it is harder to test an MP in a test environment, because test environments normally don’t carry the same load as production; therefore it is harder to test some workflows (e.g. performance monitors).

And we normally rely on the following sources to make our judgement:

  • The MP guide – only if the changes are captured in the guide, and they are normally very vague.
  • Social media (tweets and blogs) – but this is only based on the blog author’s experience; the bug you have experienced may not be seen in other people’s environments (e.g. the particular error I mentioned in this post probably wouldn’t happen in my lab, because I only have a single domain in the forest).

Normally, you’d wait for someone else to be the guinea pig, test it out and let you know whether there are any issues before you start updating your environment (e.g. the recent bug in the Server OS MP 6.0.7294.0 was first identified by SCCDM MVP Daniele Grandini, and the MP was soon removed from the download site by Microsoft).

In MP Studio, the feature I wanted to explore the most is the MP compare function. It gives OpsMgr administrators a detailed view of what has been changed in an MP, and you (as the OpsMgr admin) can use this information to make better decisions (e.g. whether to upgrade, and whether any additional overrides are required). Based on today’s experience, if I start timing from when I loaded the MPs into the repository, it took me less than 15 minutes to identify that this MP update was well worth trying (in order to fix my production issue).

Lastly, there are many other features MP Studio provides; I have only spent a little bit of time on it today (and the result is positive). In my opinion, sometimes the best way to describe something is through an example, thus I’m sharing today’s experience with you. I hope you’ve found it informative and useful.

P.S. Coming back to the bug in the Server OS MP 6.0.7294 that I mentioned above: I ran a comparison between 6.0.7294 and the previous version, 6.0.7292, and I can see a lot of perf collection rules have been changed.


And if I export the result to Excel, I can actually see the issue described by Daniele (highlighted in yellow).


Oh, one last word before I call it a day. To Silect: would it be possible to provide a search function in the comparison result window (so I don’t have to rely on the Excel export)?

Writing PowerShell Modules That Interact With Various SDK Assemblies

Written by Tao Yang

Over the last few months, there have been a few occasions where I needed to develop PowerShell scripts that leverage the SDK DLLs from various products, such as OpsMgr 2012 R2, SCSM 2012 R2 and the SharePoint Client Component SDK.

In order to leverage these SDK DLLs, they obviously must be installed on the computers where the scripts are going to be executed before the scripts run. However, this may not always be possible, for example:

  • Version conflicts (e.g. OpsMgr): The OpsMgr SDK DLLs are installed into the computer’s Global Assembly Cache (GAC) as part of the installation of the Management Server, Operations console or Web console. However, you cannot install components from multiple OpsMgr versions on the same computer (e.g. the Operations consoles from both OpsMgr 2012 and 2007).
  • Not being able to install or copy SDK DLLs (e.g. Azure Automation): If the script is a runbook in Azure Automation, you will not be able to pre-install the SDK assemblies on the runbook servers.

 

In order to overcome these constraints, I have developed a little trick: develop a simple PowerShell module, place the required DLLs in the PS module folder, and use a function in the module to load the DLLs from the PS module base folder. I’ll now explain how to develop such a PS module, using the custom module I created for the Service Manager 2012 R2 SDK last week as an example. In this example, I named my customised module “SMSDK”.

01. Firstly, create a module folder and then create a new PowerShell module manifest using “New-ModuleManifest” cmdlet.

02. Copy the required SDK DLLs into the PowerShell module folder. The module folder will also contain the manifest file (.psd1) and a module script file (.psm1).


03. Create a function to load the DLLs. In the “SMSDK” module that I’ve written, the function looks similar to the sketch below.
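This is a minimal sketch of the idea rather than the exact function from the module; the DLL names, assembly version and public key token are what I believe the SCSM 2012 R2 SDK uses, so verify them in your own environment:

Function Import-SMSDK
{
    #DLL file names, assembly version and public key token are hardcoded (values are assumptions - verify first)
    $DLLNames = @('Microsoft.EnterpriseManagement.Core', 'Microsoft.EnterpriseManagement.ServiceManager')
    $Version = '7.0.5000.0'
    $KeyToken = '31bf3856ad364e35'
    Foreach ($DLLName in $DLLNames)
    {
        $AssemblyFullName = "$DLLName, Version=$Version, Culture=neutral, PublicKeyToken=$KeyToken"
        Try {
            #Try the Global Assembly Cache first
            [System.Reflection.Assembly]::Load($AssemblyFullName) | Out-Null
        } Catch {
            #Not in the GAC: load the copy shipped in the PS module folder
            [System.Reflection.Assembly]::LoadFrom((Join-Path $PSScriptRoot "$DLLName.dll")) | Out-Null
        }
    }
}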

As you can see, in this function I have hardcoded the DLL file names, the assembly version and the public key token. The function first tries to load the assemblies (with those specific names, version and public key token) from the Global Assembly Cache; if they are not in the GAC, it loads them from the DLLs located in the PS module folder.

The key to this PS function is that you must first identify the assembly version and public key token. There are 2 ways you can do this:

  • Using the PowerShell GAC module on a machine where the assemblies have already been loaded into the Global Assembly Cache (e.g. in my example, the Service Manager management server).


  • Loading the assemblies from the DLLs and then getting the assembly details from the current app domain.


Note: although you can load assemblies from the GAC without specifying a version number, in this scenario you MUST specify the version to ensure the correct one is loaded. This happened to me before: I developed a script that uses the OpsMgr SDK, and it worked on every computer but one. It took me a while to figure out why: that computer had both the OpsMgr and Service Manager SDKs loaded in the GAC, and the wrong assembly was loaded because I did not specify the version number in the script.

Now, once the Import SDK function is finalised, you may call it from scripts or other module functions. For example, in my “SMSDK” module, I’ve also created a function to establish the connection to the Service Manager management group, called Connect-SMManagementGroup. This function calls the Import SDK function (Import-SMSDK) to load the assemblies before connecting to the Service Manager management group:
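A sketch of this function (simplified: it does not handle alternative credentials):

Function Connect-SMManagementGroup
{
    Param ([Parameter(Mandatory=$true)][string]$SMServer)
    #Load the SDK assemblies first
    Import-SMSDK
    #Service Manager uses the EnterpriseManagementGroup class (from Microsoft.EnterpriseManagement.Core)
    New-Object Microsoft.EnterpriseManagement.EnterpriseManagementGroup($SMServer)
}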

For your reference, you can download the sample module (SMSDK) HERE. However, the SDK DLLs are not included in the zip file. For Service Manager, you can find these DLLs on the Service Manager 2012 R2 management server, in the <SCSM Management Server Install Dir>\SDK Binaries folder, and manually copy them to the PS module folder.


Lastly, this blog is purely based on my recent experiences. Other than the Service Manager module that I’ve used in this post, I’ve also used this technique in a few of my previous works, e.g. the “SharePointSDK” module and the upcoming “OpsMgrExtended” module that will soon be published (you can check out the previews HERE and HERE). I’d like to hear your thoughts, so please feel free to email me if you’d like to discuss further.

Updated Management Pack for Windows Server Logical Disk Auto Defragmentation

Written by Tao Yang

Background

I was asked to automate Hyper-V logical disk defragmentation to address a wide-spread production issue at work. Without a second thought, I went for the famous AutoDefrag MP authored by my friend and SCCDM MVP Cameron Fuller.

Cameron’s MP was released in October 2013, around 1.5 years ago. When I looked into it, I realised that, unfortunately, it does not meet my requirements.

I had the following issues with Cameron’s MP:

  • The MP schema is based on version 2 (the OpsMgr 2012 MP schema), which prevents it from being used in OpsMgr 2007. This is a show stopper for me, as I need to use it in both 2007 and 2012 management groups.
  • The monitor reset PowerShell script used in the AutoDefrag MP uses the OpsMgr 2012 PowerShell module, which won’t work with OpsMgr 2007.
  • The AutoDefrag MP was based on Windows Server OS MP version 6.0.7026. In that version, the fragmentation monitors were enabled by default. However, since version 6.0.7230, these monitors have been disabled by default, so the overrides in the AutoDefrag MP that disable them have become obsolete.

In the end, I decided to rewrite the MP, still based on Cameron’s original logic.

New MP: Windows Server Auto Defragment

I’ve given the MP a new name: Windows Server Auto Defragment (ID: Windows.Server.Auto.Defragment).

The MP includes the following components:

Diagnostic Tasks: Log defragmentation to the Operations Manager Log

There are 3 identical diagnostic tasks (for the Windows Server 2003, 2008 and 2012 logical disk fragmentation monitors). These tasks log an event to the agent’s Operations Manager log before the defrag recovery task starts.

Group: Drives to Enable Fragmentation Monitoring

This is an empty instance group. Users can place logical disks into this group to enable the “Logical Disk Fragmentation Level” monitors from the Microsoft Windows Server OS MPs.

You may add any instances of the following classes into this group:

  • Windows Server 2003 Logical Disk
  • Windows Server 2008 Logical Disk
  • Windows Server 2012 Logical Disk

 

Group: Drives to Enable Auto Defrag

This is an empty instance group. Users can place logical disks into this group to enable the diagnostic and recovery tasks for auto defrag.

You may add any instances of the following classes into this group:

  • Windows Server 2003 Logical Disk
  • Windows Server 2008 Logical Disk
  • Windows Server 2012 Logical Disk

 

Group: Drives to Enable Fragmentation Level Performance Collection

This is an empty instance group. Users can place logical disks into this group to enable the Windows Server Fragmentation Level Performance Collection Rule.

Note: since this performance collection rule targets the “Logical Disk (Server)” class, which is the parent class of the OS specific logical disk classes, you can simply add any instances of the “Logical Disk (Server)” class into this group.

Event Collection Rule: Collect autodefragmentation event information

This rule collects the event logged by the “Log defragmentation to the Operations Manager Log” diagnostic tasks.

Reset Disk Fragmentation Health Rule

This rule targets the RMS / RMS Emulator. It runs every Monday at 12:00 and resets any unhealthy instances of the disk fragmentation monitors back to healthy (so the monitors’ regular detection and recovery will run again the next weekend).

Auto Defragmentation Event Report

This report lists all auto defragmentation events collected by the event collection rule within a specified time period.


Windows Server Fragmentation Level Performance Collection Rule

This rule collects the File Percent Fragmentation counter via WMI for Windows server logical disks. This rule is disabled by default.

If a logical drive has been placed into all 3 of the groups mentioned above, you’d probably see a performance graph pattern like the following.


In such a graph, point 1 would indicate the monitor has just run and the defrag recovery task was executed (the drive has been defragmented), while points 2, 3 and 4 show the fragmentation level slowly building up over the week. Hopefully you’ll see this pattern repeat on a weekly interval (because the fragmentation level monitor runs once a week by default).

Various views

The MP also contains various views under the “Windows Server Logical Drive Auto Defragment” folder.


What’s Changed from the Original AutoDefrag MP?

Compared with Cameron’s original MP, I have made the following changes in the new version:

  • The MP is based on MP schema version 1, which works with OpsMgr 2007 (as well as OpsMgr 2012).
  • Changed the minimum version of all the referenced Windows Server MPs to 6.0.7230.0 (where the fragmentation monitors became disabled by default).
  • Sealed the Windows Server Auto Defragment MP. However, in order to allow users to manually populate the groups, I have placed the group discoveries into an unsealed MP, “Windows Server Auto Defragment Group Population”. By doing so, all MP elements are protected (in the sealed MP), while users can still use the groups defined in the MP to manage auto defrag behaviours.
  • Changed the monitor overrides from disabled to enabled, because these monitors are now disabled by default. This means users will now need to manually INCLUDE the logical disks to be monitored, rather than excluding the ones they don’t want.
  • Replaced the linked report with a report that lists auto defrag events.
  • Added a performance collection rule to collect the File Percent Fragmentation counter via WMI. This rule is also disabled by default; it is enabled for a group called “Drives to Enable Fragmentation Level Performance Collection”.
  • Updated the monitor reset script to use the SDK directly. This change was necessary to make it work with both OpsMgr 2007 and 2012. The original script would reset the monitor on every instance; the updated script only resets the monitors for unhealthy instances (see the sketch after this list). Additionally, the monitor reset results are written to the RMS / RMSE’s Operations Manager log.
  • Updated the LogDefragmentation.vbs script for the diagnostic task to use MOM.ScriptAPI to log the event to the Operations Manager log instead of the Application log.
  • Updated the message in LogDefragmentation.vbs from “Operations Manager has performed an automated defragmentation on this system” to “Operations Manager will perform an automated defragmentation for <Drive Letter> drive on <Server Name>” – because this diagnostic task runs at the same time as the recovery task, the defrag is just about to start (not finished yet), so I don’t believe the message should use the past tense.
  • Updated the diagnostic tasks to be disabled by default.
  • Created overrides to enable the diagnostics for the “Drives to Enable Auto Defrag” group (the same group where the recovery tasks are enabled).
  • Updated the data source module of the event collection rule to use “Windows!Microsoft.Windows.ScriptGenerated.EventProvider”, looking only for event ID 4 generated by the specific script (LogDefragmentation.vbs) – by using this data source module, we can filter on the script name, which gives us more accurate detection.
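For reference, here is a simplified sketch of how the monitor reset via the SDK can work. It is not the exact script shipped in the MP: the monitor name pattern is illustrative, and for brevity it checks each object's overall health state, whereas the real script checks the state of the specific monitor:

#Assumes the OpsMgr SDK assemblies are already loaded (e.g. running on the RMS / RMSE)
$MG = New-Object Microsoft.EnterpriseManagement.ManagementGroup('localhost')
#Monitor name pattern below is illustrative - adjust for the real fragmentation monitor names
$MonitorCriteria = New-Object Microsoft.EnterpriseManagement.Configuration.MonitorCriteria("Name LIKE '%LogicalDisk.DefragAnalysis%'")
Foreach ($Monitor in $MG.GetMonitors($MonitorCriteria))
{
    #Enumerate the instances of the monitor's target class
    $TargetClass = $MG.GetMonitoringClass($Monitor.Target.Id)
    Foreach ($Instance in $MG.GetMonitoringObjects($TargetClass))
    {
        If ($Instance.HealthState -eq 'Error')
        {
            #Reset only this monitor for the unhealthy instance
            $Instance.ResetMonitoringState($Monitor)
        }
    }
}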

 

How do I configure the management pack?

Cameron suggested I use the 5 common scenarios from his original post to explain the different monitoring requirements. In his post, Cameron listed the following 5 scenarios:

01. We do not want to automate defragmentation, but we want to be alerted to when drives are highly fragmented.

In this case, you will need to place the drives that you want to monitor in the “Drives to Enable Fragmentation Monitoring” group.

02. We want to ignore disk fragmentation levels completely.

In this case, you don’t need to import this management pack at all. Since the fragmentation monitors are now disabled by default, this is the default configuration.

03. We want to auto defragment all drives.

In this case, you will need to place all the drives that you want to auto defrag into 2 groups:

  • Drives to Enable Fragmentation Monitoring
  • Drives to Enable Auto Defrag

04. We want to auto defragment all drives but disable monitoring for fragmentation on specific drives.

Previously, when Cameron released the original version, he needed to build an exclusion logic, because the fragmentation monitors were enabled by default. With the recent releases of the Windows Server OS management packs, we need to work with an inclusion logic instead. So, in this case, you will need to add all the drives whose fragmentation level you want to monitor to the “Drives to Enable Fragmentation Monitoring” group, and put a subset of those drives into the “Drives to Enable Auto Defrag” group.

05. We want to auto defragment all drives but disable automated defragmentation on specific drives.

This case is similar to scenario #3: you will need to place the drives that you are interested in into these 2 groups:

  • Drives to Enable Fragmentation Monitoring
  • Drives to Enable Auto Defrag

In addition to these 5 scenarios, there is another scenario this MP caters for:

06. We want to collect drive fragmentation level as performance data

In this case, if you simply want to collect the fragmentation level as perf data (with or without fragmentation monitoring), you will need to add the drives that you are interested in to the “Drives to Enable Fragmentation Level Performance Collection” group.

So, how do I configure these groups?

By default, I have configured these groups with discovery rules that discover nothing, on purpose.


The default group discoveries look for any logical drives whose device name (drive letter) matches the regular expression ^$, which represents a blank / null value. Since every discovered logical device has a device name, these groups will be empty. You will need to modify the group memberships to suit your needs.

For example, if you want to include the C: drive of all physical servers, the group membership could combine the device name (C:) with the “virtual machine” property (false).


Note: in SCOM, only Hyper-V VMs are discovered as virtual machines. If you are running other hypervisors, the “virtual machine” property probably won’t work.

MP Download

There are 2 management pack files included in this solution. You can download them HERE.


Credit

Thanks to Cameron for sharing the original MP with the community and for providing guidance, review and testing on this version. I’d also like to thank all the other OpsMgr focused MVPs who have been involved in this discussion.

Lastly, as always, please feel free to contact me if you have questions / issues with this MP.

Session Recording for My Presentation in Microsoft MVP Community Camp Melbourne Event

Written by Tao Yang


Last Friday, I presented at the Melbourne MVP Community Camp day on the topic of “Automating SCOM Tasks Using SMA”.

I have uploaded the session recording to YouTube. You can watch it (ideally in full screen) here: https://www.youtube.com/watch?v=QW99bVFKg80

You can also download the presentation deck from HERE.

And here’s the sample script I used in my presentation when I explained how to connect to a SCOM management group via the SDK:
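A minimal sketch along those lines (the server name is a placeholder, and the assembly version / public key token are the OpsMgr 2012 values as I know them, so verify them in your own environment):

$ManagementServer = 'OpsMgrMS01'   #placeholder management server name
#Load the OpsMgr SDK assemblies from the GAC (version and public key token are assumptions - verify first)
$Version = '7.0.5000.0'
$KeyToken = '31bf3856ad364e35'
[System.Reflection.Assembly]::Load("Microsoft.EnterpriseManagement.Core, Version=$Version, Culture=neutral, PublicKeyToken=$KeyToken") | Out-Null
[System.Reflection.Assembly]::Load("Microsoft.EnterpriseManagement.OperationsManager, Version=$Version, Culture=neutral, PublicKeyToken=$KeyToken") | Out-Null
#Connect to the management group and display some basic details
$MG = New-Object Microsoft.EnterpriseManagement.ManagementGroup($ManagementServer)
$MG.Name          #the management group name
$MG.IsConnected   #should return True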

Overall, I think I could have done better, as I wasn’t in the best shape that day. I had been sick for the previous 3 weeks (a dry cough passed on to me by my daughter). The night before the presentation, I was coughing non-stop and couldn’t get to sleep. I got up, searched the Internet, and someone suggested that sleeping upright might help. I ended up sleeping on the couch for 2.5 hours before getting up and driving to Microsoft’s office, so I was really exhausted even before I got on stage. Secondly, the external USB microphone didn’t work on my Surface, so the sound was recorded from the internal mic – not the best quality, for sure.

Anyway, for those watching the recording online, I’m really interested in hearing from you if you have any suggestions or feedback regarding the session itself, or the OpsMgrExtended module that I’m about to release. Please feel free to drop me an email.

Microsoft MVP Community Camp 2015 and My Session for SMA Integration Module: OpsMgrExtended

Written by Tao Yang


Next Friday (30th Jan 2015), I will be speaking at the Microsoft MVP Community Camp day in Melbourne. I am pretty excited about this event, as it is going to be my first presentation since I became a System Center MVP in July 2014.

My session is titled “Automating SCOM tasks using SMA”. Although the name sounds a little bit boring, let me assure you, it won’t be boring at all! The stuff I’m going to demonstrate is something I’ve been working on in my spare time over the last 6 months, and so far I’ve written over 6,000 lines of PowerShell code. Basically, I have created a module called “OpsMgrExtended”. This module can be used as an SMA integration module as well as a standalone PowerShell module. It interacts directly with the OpsMgr SDKs, and can be used by SMA runbooks or PowerShell scripts to perform advanced tasks in OpsMgr, such as configuring management groups and creating rules, monitors, groups, overrides, etc.

If you have heard of or used my OpsMgr Self Maintenance MP, you’d know that I have already automated many maintenance / administrative tasks in that MP, using nothing but OpsMgr itself. In this presentation, I will not be showing you anything that’s already been done by the Self Maintenance MP; I will focus heavily on automating management pack authoring tasks.

To date, I haven’t really discussed this piece of work in detail with anyone other than a few SCOM focused MVPs (and my wife, of course). This is going to be the first time I demonstrate this project in public.

In order to help promote this event, and also to “lure” you into coming to my session if you are based in Melbourne, I’ve recorded a short video demonstrating how I’ve automated the creation of a blank MP and then a performance monitor rule (with an override) using SharePoint, Orchestrator and SMA. I will include this same demo in my presentation, and it is probably going to be one of the easier ones.

I’ve uploaded the recording to YouTube; you can watch it at https://www.youtube.com/watch?v=aX9oSj_eKeY.

Please watch it on YouTube and switch to full screen mode.

 

If you like what you saw and would like to see more and find out what’s under the hood, please come to this free event next Friday. You can register here.
