Monthly Archives: November 2014

VMM 2012 Addendum Management Pack: Detect Failed VMM Jobs

Written by Tao Yang

Background

My MVP friend Flemming Riis needed OpsMgr to alert on failed VMM jobs. After discovering that the native VMM MPs don't have a workflow for this, I offered to help and built this addendum MP to alert on failed and warning (Completed w/ Info) VMM jobs.

I thought it was going to be a quick task; as it turned out, I started writing this MP about a month ago and was only able to release it now!

The actual MP is pretty simple: 2 rules sharing the same data source, which executes a PowerShell script to detect any failed and warning jobs in VMM. I wrote the initial version in a few hours and sent it to Flemming and Steve Beaumont to test in their environments right before the MVP Summit. After the summit, we found out the MP didn't work in their clustered VMM environments. We then spent a lot of time emailing back and forth trying to figure out what the issue was. In the end, I had to build a VMM cluster in my lab in order to test and troubleshoot it.

So, BIG BIG “Thank You” to both Flemming and Steve for their time and effort on this MP. It is certainly a team effort!

MP Pre-Requisites

This MP has 2 pre-requisites:

  • PowerShell script execution must be allowed on the VMM servers, and the VMM PowerShell module must be installed on the VMM server (it should be by default).
  • The VMM server must be fully integrated with OpsMgr (configured via the VMM console). This integration is required because it creates the Run As account used to run workflows in the native VMM management pack. This addendum management pack also utilises this Run As account.


Alert Rules:

This MP contains 2 alert rules:

  • Virtual Machine Manager Completed w/ Info Job Alert Rule (Disabled by default)
  • Virtual Machine Manager Failed Job Alert Rule (Enabled by default)


Both rules share the same data source with the same configuration parameter values (to utilise Cook Down). They are configured to run on a schedule and detect failed / warning jobs since the beginning of the current execution cycle; i.e. by default they run every 3 minutes, so they detect any unsuccessful jobs from the last 3 minutes. An alert is generated for EVERY unsuccessful job.


Note: Because these 2 rules utilise Cook Down, if you enable the "Completed w/ Info" job alert rule and need to override the data source configuration parameters (IntervalSeconds, SyncTime, TimeoutSeconds), please override BOTH rules and assign the same values to both, so the script in the data source module only needs to run once per cycle and feed its output to both workflows.
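For those curious, the shared data source simply runs a PowerShell script against VMM on a schedule. Below is a minimal sketch of that kind of detection logic using the VMM PowerShell module; it is an illustration only, not the actual script shipped in the MP, and $IntervalSeconds merely mirrors the data source parameter of the same name.

```powershell
# Minimal sketch only; NOT the actual data source script in the MP.
# Assumes it runs on the VMM management server with the VMM module installed.
Import-Module virtualmachinemanager

# Mirrors the data source parameter of the same name (default 180 seconds)
$IntervalSeconds = 180
$StartTime = (Get-Date).AddSeconds(-$IntervalSeconds)

# Connect to the local VMM server and retrieve recent jobs
Get-SCVMMServer -ComputerName localhost | Out-Null
$UnsuccessfulJobs = Get-SCJob -All | Where-Object {
    $_.EndTime -ge $StartTime -and
    ($_.Status -eq 'Failed' -or $_.Status -eq 'SucceededWithInfo')
}

# The real data source returns output for each job found; one alert is
# generated for EVERY unsuccessful job detected in the cycle.
foreach ($Job in $UnsuccessfulJobs) {
    Write-Output ('{0}: {1} ({2})' -f $Job.Status, $Job.Name, $Job.ID)
}
```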

Download

Since it's a really simple MP, I didn't bother writing proper documentation for it; it's straightforward, and I think I have already provided enough information in this blog post.

Please test and tune it according to your requirements before implementing it in your production environments.

Download Link

Lastly, I'd like to thank Steve and Flemming again for their time and effort on this MP. If you have any questions regarding this MP, please feel free to send me an email.

My Experience Manipulating MDT Database Using SMA, SCORCH and SharePoint

Written by Tao Yang

Background

At work, there is an implementation team responsible for building Windows 8 tablets in a centralised location (we call it the integration centre) and then shipping these tablets to remote locations around the country. We use SCCM 2012 R2 and MDT 2013 to build these devices using an MDT-enabled task sequence in SCCM. The task sequence uses MDT locations to apply site-specific settings (I'm not an OSD expert, so I'm not even going to try to explain exactly what these location entries do in the task sequence).


In order to build these tablets for any remote sites, before kicking off the OSD build, the integration centre’s default gateway IP address must be added to the location entry for this specific site, and removed from any other locations.


Because our SCCM people didn’t want to give the implementation team access to MDT Deployment Workbench, my team has been manually updating the MDT locations whenever the implementation team wants to build tablets.

I wasn't aware of this arrangement until someone in my team went on leave and asked me to take care of it while he was away. I soon got really annoyed because I had to do this a few times a day! Therefore, I decided to automate the process using SMA, SCORCH and SharePoint, so the implementation team can update the location themselves without being given access to MDT.

The high-level workflow is shown in the diagram below:

[Diagram: MDT Automation workflow]

Design

01. SharePoint List

Firstly, I created a list on one of our SharePoint sites; this list contains only one item.

02. Orchestrator Runbook

I first deployed the SharePoint integration pack to the Orchestrator management servers and all the runbook servers, then set up a connection to the SharePoint site using a service account.

The runbook only has 2 activities:


Monitor List Items:


Link:


The link filters on the list item ID, which must equal 1 (the first item in the list). This prevents users from adding additional items to the list; they must always edit the first (and only) item.

Start SMA Runbook called “Update-MDTLocation”:


This activity runs a simple PowerShell script to start the SMA runbook. The SMA connection details (user name, password, SMA web service server and web service endpoint) are all saved in Orchestrator as variables.

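The script boils down to a single Start-SmaRunbook call. Here is a rough illustration; the server name, service account and runbook parameter name below are assumptions, and the real values come from the Orchestrator variables mentioned above.

```powershell
# Rough illustration only; real values come from Orchestrator variables.
Import-Module Microsoft.SystemCenter.ServiceManagementAutomation

$WebServiceEndpoint = 'https://sma01.corp.tyang.org'  # assumed SMA server name
$Port               = 9090                            # default SMA web service port
$UserName           = 'CORP\svc-sma'                  # assumed service account
$Password           = ConvertTo-SecureString 'FromOrchestratorVariable' -AsPlainText -Force
$Cred = New-Object System.Management.Automation.PSCredential ($UserName, $Password)

# Start the SMA runbook, passing in the new location from the SharePoint
# list item ('NewLocation' is an assumed runbook parameter name).
Start-SmaRunbook -Name 'Update-MDTLocation' `
    -WebServiceEndpoint $WebServiceEndpoint -Port $Port -Credential $Cred `
    -Parameters @{ NewLocation = '<New Gateway IP Location from Monitor List Items>' }
```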

03. SMA Runbook


Firstly, I created a few variables, credentials and connections to be used in the runbook:

Connections:

Credential:

  • A Windows credential that has access to the MDT database (our MDT DB is located on the SCCM SQL server, so it only accepts Windows authentication). I named the credential "ProdMDTDB".

Variables:

  • MDT database SQL Server address. I named it "CM12SQLServer"
  • Gateway IP address. I named it "GatewayIP"

In essence, the SMA runbook reads these assets, updates the gateway location in the MDT database, and sends a notification email.
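A minimal sketch of such a workflow is shown below. The MDT table and column names, recipient address and SMTP server are illustrative assumptions (the original code is not reproduced here); check them against your own MDT database schema and environment.

```powershell
workflow Update-MDTLocation
{
    param (
        # New location name passed in from the Orchestrator runbook
        [Parameter(Mandatory = $true)]
        [string]$NewLocation
    )

    # Assets created earlier in this post
    $Cred      = Get-AutomationPSCredential -Name 'ProdMDTDB'
    $SQLServer = Get-AutomationVariable -Name 'CM12SQLServer'
    $GatewayIP = Get-AutomationVariable -Name 'GatewayIP'

    # Run the SQL on the SCCM SQL server under the Windows credential
    # (the MDT DB only accepts Windows authentication; Invoke-Sqlcmd is
    # available there with the SQL Server tools)
    InlineScript {
        # Table and column names below are illustrative only.
        # Remove the gateway from every location, then add it to the new one.
        $Query = @"
DELETE FROM LocationIdentity_Gateway WHERE Gateway = N'$Using:GatewayIP';
INSERT INTO LocationIdentity_Gateway (ID, Gateway)
SELECT ID, N'$Using:GatewayIP'
FROM   LocationIdentity
WHERE  Location = N'$Using:NewLocation';
"@
        Invoke-Sqlcmd -ServerInstance $Using:SQLServer -Database 'MDT' -Query $Query
    } -PSComputerName $SQLServer -PSCredential $Cred

    # Notify a nominated address once the MDT database is updated
    # (addresses and SMTP server are assumptions)
    Send-MailMessage -To 'osdteam@corp.tyang.org' -From 'sma@corp.tyang.org' `
        -Subject "MDT gateway location updated: $NewLocation" `
        -Body "Gateway $GatewayIP is now assigned to location '$NewLocation'." `
        -SmtpServer 'smtp.corp.tyang.org'
}
```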

Putting Everything Together

As demonstrated in the diagram at the beginning of this post, here's how the whole workflow works:

  1. The user logs in to the SharePoint site and updates the only item in the list, entering the new location in the "New Gateway IP Location" field.
  2. The Orchestrator runbook checks this SharePoint list for updated items every 15 seconds.
  3. If the Orchestrator runbook detects that the first (and only) item has been updated, it takes the new location value, starts the SMA runbook and passes the new value to it.
  4. The SMA runbook runs a PowerShell script to update the gateway location directly in the MDT database.
  5. The SMA runbook sends an email to a nominated email address when the MDT database has been updated.


The Orchestrator runbook and SMA runbook execution history can also be viewed in Orchestrator and the WAP admin portal.

Room for Improvement

I created this automation process in a quick and easy way to get this task off my back. I know there are a lot of areas in this process that can be improved, e.g.:

  • Using an SMA runbook to monitor the SharePoint list directly, so Orchestrator is no longer required (i.e. using the script from this article; credit to Christian Booth and Ryan Andorfer).
  • User input validation.
  • Looking up AD to retrieve the user's email address instead of hardcoding it in a variable.

Maybe in the future when I have spare time, I'll go back and make it better, but for now the implementers are happy, and my team mates are happier because it is one less thing on our plate.

Conclusion

I hope you find my experience in this piece of work useful. I am still very new to SMA (and I know nothing about MDT), so if you have any suggestions or criticisms, please feel free to drop me an email.

Installing VMM 2012 R2 Cluster in My Lab

Written by Tao Yang

I needed to build a 2-node VMM 2012 R2 cluster in my lab in order to test an OpsMgr management pack that I'm working on. I was having difficulties getting it installed on a cluster based on 2 Hyper-V guest VMs, and I couldn't find a real step-by-step, detailed guide. So, after many failed attempts, I finally got it installed; I'll document the steps I took in this post in case I need to do it again in the future.

AD Computer accounts:

I pre-staged 4 computer accounts in the OU where my existing VMM infrastructure is located:

  • VMM01 – VMM cluster node #1
  • VMM02 – VMM cluster node #2
  • VMMCL01 – VMM cluster
  • HAVMM – Cluster Resource for VMM cluster


I assigned VMMCL01 Full Control permission on the HAVMM (cluster resource) AD computer account.
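As an aside, the same permission can be granted from the command line with dsacls. A sketch (the OU path below is an assumption; adjust the distinguished name to your AD):

```powershell
# GA = generic all (Full Control); the trailing $ denotes a computer account.
# The DN below is an assumed OU path; adjust it to your environment.
dsacls "CN=HAVMM,OU=VMM,DC=corp,DC=tyang,DC=org" /G 'CORP\VMMCL01$:GA'
```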

IP Addresses:

I allocated 4 IP addresses, one for each computer account listed above.

Guest VMs for Cluster Nodes

I created 2 identical VMs (VMM01 and VMM02) located in the same VLAN. There is no requirement for shared storage between these cluster nodes.

Cluster Creation

I installed the Failover Clustering feature on both VMs and created a cluster.
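For reference, the same steps can be scripted. A sketch of the PowerShell equivalent of the wizards (the static address is a placeholder for the IP you allocated to VMMCL01):

```powershell
# Install the Failover Clustering feature on both nodes
Invoke-Command -ComputerName VMM01, VMM02 -ScriptBlock {
    Install-WindowsFeature -Name Failover-Clustering -IncludeManagementTools
}

# Create the cluster using the pre-staged VMMCL01 account.
# Replace the address below with the IP allocated to VMMCL01.
New-Cluster -Name VMMCL01 -Node VMM01, VMM02 -StaticAddress '10.0.0.100'
```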

VMM 2012 R2 Installation

When installing the VMM management server on a cluster node, the installer will ask whether you want to install a highly available VMM instance; select Yes when prompted. Also, the SQL Server hosting the VMM database must be a standalone SQL Server or a SQL cluster; it cannot be installed on one of the VMM cluster nodes.

The install wizard then walks through the following screens:

  • DB Configuration
  • Cluster Configuration
  • DKM Configuration
  • Port configuration (left as default)
  • Library configuration (need to configure manually later)
  • Completion

Run the VMM install again on the second cluster node.

As instructed in the completion window, run ConfigureSCPTool.exe -AddNode HAVMM.corp.tyang.org CORP\HAVMM$

The cluster role is now created and can be started.

OpsMgr components

In order to integrate VMM and OpsMgr, the OpsMgr agent and console need to be installed on both VMM cluster nodes. I pointed the OpsMgr agents to my existing management group in the lab, approved the manually installed agents, and enabled agent proxy for both nodes (required for monitoring clusters).
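The approval and agent proxy steps can also be done from the OpsMgr command shell. A quick sketch (assuming both nodes show up under Pending Management; the name filter is an assumption based on my lab's naming):

```powershell
Import-Module OperationsManager

# Approve the two manually installed VMM node agents
Get-SCOMPendingManagement |
    Where-Object { $_.AgentName -match '^VMM0[12]\.' } |
    Approve-SCOMPendingManagement

# Enable agent proxy on both nodes (required for monitoring clusters)
Get-SCOMAgent | Where-Object { $_.DisplayName -match '^VMM0[12]\.' } |
    Enable-SCOMAgentProxy
```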

Installing Update Rollup

After the OpsMgr components were installed, I installed the following updates from the latest System Center 2012 R2 Update Rollup (UR4 at the time of writing):

  • OpsMgr agent update
  • OpsMgr console update
  • VMM management server update
  • VMM console update

Connect VMM to OpsMgr

I configured the OpsMgr connection in the VMM console.

Conclusion

The intention of this post is simply to document the install steps, and the "correct" way to install a VMM cluster that worked in my lab after so many failed attempts.

The biggest hold-up for me was not realising that I needed to create a separate computer account and allocate a separate IP address for the cluster role (HAVMM). I was using the cluster name (VMMCL01) and its IP address in the cluster configuration screen, and the installation failed.

After going through the install log, I realised I couldn't use the existing cluster name.

When I ran the install again using a different name and IP address for the cluster role, the installation completed successfully.

Visual Studio 2013 Community Edition

Written by Tao Yang

Nowadays, Visual Studio is definitely one of my top 5 most-used applications. A few months ago, I started using Visual Studio Online to store source code; I have been migrating my management packs and PowerShell scripts into Visual Studio Online and connecting Visual Studio to my Visual Studio Online repository.

A few days ago, Microsoft released a new edition of Visual Studio 2013: Visual Studio 2013 Community Edition. This morning, in order to test it, I uninstalled Visual Studio Ultimate from one of my laptops and installed the new Community edition instead.

I tested all the features and extensions that I care about, and I have to say I'm amazed that all of them worked!

Visual Studio Online: I was able to connect to my Visual Studio Online account and retrieve a management pack project that I'm currently working on.

Visual Studio Authoring Extension (VSAE): I installed VSAE version 1.1.0.0, the same version as my previous installation on this laptop, and all MP-related options are still there.

I tried to build the MP in the solution I'm working on, and it built successfully.

PowerShell Tools for Visual Studio 2013: This is a community extension developed by PowerShell MVP Adam Driscoll (more information can be found here). The extension turns Visual Studio into a PowerShell script editor. As expected, it works in the Community edition, and my PowerShell script is nicely laid out.

When Microsoft discontinued development of the OpsMgr 2007 R2 Authoring Console and replaced it with VSAE, in my opinion it made it harder for average IT Pros to start authoring management packs. One of the reasons is that VSAE is an extension for Visual Studio and requires the Professional or Ultimate edition, neither of which is cheap compared with the old (free) Authoring Console. Therefore, I am really excited to find that VSAE works just fine with the latest free Community edition. I'm hoping the Community edition will benefit OpsMgr and Service Manager specialists around the world by providing us with an affordable authoring solution.

Lastly, having said that, there are some licensing limitations for the Community edition. Please read THIS article carefully before using it; e.g. if you are working for a large enterprise and developing a commercial application, you are probably not going to be able to use it.

Disclaimer: In this post, I'm only focusing on the technical aspects based on my experience. Please don't hold me responsible if you misuse Visual Studio 2013 Community edition and violate the licensing conditions. As I mentioned in the post, please read THIS article carefully to determine whether you are eligible first!