My MVP friend Flemming Riis needed OpsMgr to alert on failed VMM jobs. After discovering that the native VMM MPs don’t have a workflow for this, I offered my help and built this addendum MP to alert on failed and warning (Completed w/ Info) VMM jobs:
I thought it was going to be a quick task. As it turned out, I started writing this MP about a month ago and was only able to release it now!
The actual MP is pretty simple: two rules sharing the same data source, which executes a PowerShell script to detect any failed and warning jobs in VMM. I wrote the initial version in a few hours and sent it to Flemming and Steve Beaumont to test in their environments right before the MVP Summit. After the summit, we found out the MP didn’t work in their clustered VMM environments. We then spent a lot of time emailing back and forth trying to figure out what the issue was. In the end, I had to build a VMM cluster in my lab in order to test and troubleshoot it.
So, BIG BIG “Thank You” to both Flemming and Steve for their time and effort on this MP. It is certainly a team effort!
This MP has two prerequisites:
- PowerShell script execution must be allowed on the VMM servers, and the VMM PowerShell module must be installed on them (it should be by default).
- The VMM server must be fully integrated with OpsMgr (configured via the VMM console). This integration is required because it creates the Run As account used to run workflows in the native VMM management pack; this addendum management pack also utilises that Run As account.
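For reference, a quick way to verify both prerequisites on a VMM management server might look like this (a sketch only; 'virtualmachinemanager' is the module name that ships with VMM 2012 R2):

```powershell
# Check the effective script execution policy on the VMM server
Get-ExecutionPolicy -List

# Verify the VMM PowerShell module is available
# ('virtualmachinemanager' is the module name in VMM 2012 R2)
if (Get-Module -ListAvailable -Name virtualmachinemanager) {
    Write-Output 'VMM PowerShell module found'
} else {
    Write-Warning 'VMM PowerShell module is not installed'
}
```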
This MP contains two alert rules:
- Virtual Machine Manager Completed w/ Info Job Alert Rule (Disabled by default)
- Virtual Machine Manager Failed Job Alert Rule (Enabled by default)
Both rules share the same data source with the same configuration parameter values (to utilise Cook Down). They are configured to run on a schedule and detect failed / warning jobs since the beginning of the rule execution cycle. i.e. by default, they run every 3 minutes, so they would detect any unsuccessful jobs from the last 3 minutes. An alert is generated for EVERY unsuccessful job:
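To give an idea of what the shared data source does under the hood, here is a minimal sketch of the detection logic. This is illustrative only, not the MP's actual script; 'SucceedWithInfo' is VMM's job status value for "Completed w/ Info":

```powershell
# Illustrative sketch only - not the actual data source script from the MP
Import-Module virtualmachinemanager

# Mirrors the data source's IntervalSeconds parameter (default 180)
$IntervalSeconds = 180
$since = (Get-Date).AddSeconds(-$IntervalSeconds)

# Find jobs that started within the last cycle and did not succeed
Get-SCJob -All | Where-Object {
    $_.StartTime -ge $since -and
    ($_.Status -eq 'Failed' -or $_.Status -eq 'SucceedWithInfo')
} | Select-Object Name, Status, StartTime, ErrorInfo
```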
Note: if you enable the “Completed w/ Info Job Alert Rule”, please keep in mind that these two rules utilise Cook Down. If you need to override the data source configuration parameters (IntervalSeconds, SyncTime, TimeoutSeconds), please override BOTH rules and assign the same values to them, so the script in the data source module only needs to run once in every cycle and feed its output to both workflows.
Since it’s a really simple MP, I didn’t bother writing proper documentation for it. It’s really straightforward, and I think I have already provided enough information in this blog post.
Please test and tune it according to your requirements before implementing it in your production environments.
Lastly, I’d like to thank Steve and Flemming again for their time and effort on this MP. If you have any questions in regards to this MP, please feel free to send me an email.
I needed to build a 2-node VMM 2012 R2 cluster in my lab in order to test an OpsMgr management pack that I’m working on. I was having difficulties getting it installed on a cluster based on 2 Hyper-V guest VMs, and I couldn’t find a detailed step-by-step guide. After many failed attempts, I finally got it installed, so I’ll document the steps I took in this post, in case I need to do it again in the future.
AD Computer accounts:
I pre-staged 4 computer accounts in the existing OU where my existing VMM infrastructure is located:
- VMM01 – VMM cluster node #1
- VMM02 – VMM cluster node #2
- VMMCL01 – VMM cluster
- HAVMM – Cluster Resource for VMM cluster
I assigned VMMCL01 full control permission on the HAVMM (cluster resource) AD computer account:
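This grant can also be scripted with dsacls; a sketch, where the distinguished name below is an assumption based on my lab domain (corp.tyang.org) — adjust the OU path to your own environment:

```powershell
# Grant the cluster name account (VMMCL01$) full control (GA = generic all)
# over the HAVMM computer object. The DN is a placeholder for my lab OU.
dsacls "CN=HAVMM,OU=VMM,DC=corp,DC=tyang,DC=org" /G "CORP\VMMCL01$:GA"
```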
I allocated 4 IP addresses, one for each computer account listed above:
Guest VMs for Cluster Nodes
I created 2 identical VMs (VMM01 and VMM02) located in the same VLAN. There is no requirement for shared storage between these cluster nodes.
I installed the Failover Clustering feature on both VMs and created a cluster.
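The feature install and cluster creation can be done in PowerShell; a sketch assuming the node names above (the static address below is a placeholder for the IP you allocated to VMMCL01):

```powershell
# Install the Failover Clustering feature on both nodes
Invoke-Command -ComputerName VMM01, VMM02 -ScriptBlock {
    Install-WindowsFeature Failover-Clustering -IncludeManagementTools
}

# Create the cluster using the name and IP pre-staged for VMMCL01
# (192.168.1.20 is a placeholder - use the address you allocated)
New-Cluster -Name VMMCL01 -Node VMM01, VMM02 -StaticAddress 192.168.1.20
```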
VMM 2012 R2 Installation
When installing the VMM management server on a cluster node, the installer will ask whether you want to install a highly available VMM instance; select Yes when prompted. Also, the SQL server hosting the VMM database must be a standalone SQL server or a SQL cluster; SQL cannot be installed on one of the VMM cluster nodes.
Port configuration (left as default)
Library configuration (need to configure manually later)
Run VMM install again on the second cluster node.
As instructed in the completion window, run ConfigureSCPTool.exe -AddNode HAVMM.corp.tyang.org CORP\HAVMM$
Cluster Role is now created and can be started:
In order to integrate VMM and OpsMgr, the OpsMgr agent and console need to be installed on both VMM cluster nodes. I pointed the OpsMgr agents to my existing management group in the lab, approved the manually installed agents, and enabled agent proxy for both nodes (required for monitoring clusters).
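Enabling agent proxy for both nodes can also be done from the Operations Manager Shell; a sketch assuming the node names VMM01 and VMM02 from earlier:

```powershell
# Enable agent proxy for the two VMM cluster nodes
Get-SCOMAgent | Where-Object { $_.DisplayName -like 'VMM0*' } |
    Enable-SCOMAgentProxy
```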
Installing Update Rollup
After the OpsMgr components were installed, I installed the following updates from the latest System Center 2012 R2 Update Rollup (UR4 at the time of writing):
- OpsMgr agent update
- OpsMgr console update
- VMM management server update
- VMM console update
Connect VMM to OpsMgr
I configured OpsMgr connection in VMM console:
The intention of this post is simply to dump all the screenshots I’ve taken during the install, and to document the “correct” way of installing a VMM cluster that worked in my lab after so many failed attempts.
The biggest hold-up for me was not realising that I needed to create a separate computer account and allocate a separate IP address for the cluster role (HAVMM). I was using the cluster name (VMMCL01) and its IP address on the cluster configuration screen, and the installation failed:
After going through the install log, I realised I couldn’t use the existing cluster name:
When I ran the install again using a different name and IP address for the cluster role, the installation completed successfully.