Friday, June 19, 2015

OMS Information Sources Aggregation

Microsoft Operations Management Suite (OMS) is the latest & greatest Azure based service offering from Microsoft. And it can do a LOT for your organization.

However, since it’s a brand new service, people and organizations who want to use it need to do some RTFM (Read The FRIENDLY Manual) in order to get the most out of it. As usual those resources are scattered across the internet. Therefore I’ve decided to start an aggregation of all the sources out there, all about OMS and how to use it.

Please feel free to comment in order to add the resources you’re using as well. This way this posting will help you all out there, looking for some good & solid information, all about OMS.

This posting will be updated when required. Hopefully I’ll have to update it on a daily basis :). So – again – feel free to reach out and add your sources as well.

Community based sources, like blogs etc.

  1. Cloud Administrator, blog by Stanislav Zhelyazkov
  2. Tao Yang’s System Center Blog, blog by Tao Yang (duh!)
    • All postings tagged with OMS
  3. Cameron Fuller Archive, company blog where Cameron Fuller works
    • All postings written by Cameron, look for the ‘Cloud’ banner
  4. Thoughts on OMS, OpsMgr & Azure, my blog 
  5. ???

Microsoft based sources, like documentation, blogs etc.

  1. Operational Insights documentation, official Azure documentation
  2. $, blog by Daniele Muscetta
  3. Wei Out There, blog by Wei H Lim
  4. System Center: Operations Manager Engineering Blog, blog by SCOM Engineering Team
    • All postings tagged with OMS
    • All postings tagged with Opinsights
  5. Stefan Stranger's Weblog - Manage your IT Infrastructure, blog by Stefan Stranger
  6. OMS Log Search API Documentation, official documentation
  7. ???

Event Viewer vs. PowerShell Created HTML Reports

No. I don’t miss Windows Server 2003x at all. Or perhaps I do. There is one ‘small’ thing which has grown into a major nuisance in the later versions of Windows Server: the Event Viewer. In WS 2003x it just worked. And it was even fast AND stable.

How much has changed in the later versions of Windows Server… No matter how beefed up that server is, the Event Viewer takes a LONG time to load, and when it’s loaded, it’s not very stable either. Especially when you’ve opened the Event Logs of multiple servers and are cycling through them in order to gain a broader picture of a certain issue you’re troubleshooting. Chances are the MMC will simply stall or crash…

So I can’t say I am a fan of the MMC :).

However, sometimes I need to troubleshoot a certain SCOM issue. All those times the starting point is the Operations Manager event log, in order to gain better insight into the underlying causes.

And at moments like those, I’ve got to use the Event Viewer, with all its quirks and ‘hidden features’. As long as I open the Operations Manager event log for only one server, all is fine. But when I open the Operations Manager event log of multiple servers and apply heavy filtering as well, things start to get nasty.

So it was time for a new approach in which the usage of the Event Viewer is limited as much as possible and the rest is handled by PowerShell…

Say what? PowerShell as an Event Viewer alternative?
Yes and no. I use PowerShell in order to tap into the related servers and their Operations Manager event logs, dig through them, collect the required Event ID’s and pipe the output to nicely formatted HTML files, one HTML file per server and per collected Event ID. Every time PowerShell has created such an HTML file, it’s automatically opened by the default internet browser of the system running that PowerShell script.

So now my work flow – compared to using the MMC only – is like this:

  1. Using the Event Viewer in order to see what nasty and unwanted Event ID’s are logged in the Operations Manager event log on what servers;
  2. Write those Event ID’s and server names down;
  3. Run the PS script in order to get those nicely formatted (I like CSS!) HTML files.
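Step 3 – the collect-and-report part – can be sketched roughly like this. Mind you, this is a simplified illustration and not the actual script: the server names, Event ID’s and output path are placeholders.

```powershell
# Simplified sketch of the collect-and-report step; server names,
# Event ID's and paths are placeholders, not the actual script's values.
$Servers  = @('SCOMMS01', 'SCOMMS02')   # the server names written down in step 2
$EventIDs = @(21402, 33333)             # the Event ID's written down in step 2

foreach ($Server in $Servers) {
    foreach ($ID in $EventIDs) {
        $HtmlFile = "C:\Server Management\HTML Reports\$Server-$ID.html"

        # Query only the Operations Manager event log of this server for this Event ID
        Get-WinEvent -ComputerName $Server -FilterHashtable @{
            LogName = 'Operations Manager'; Id = $ID
        } -ErrorAction SilentlyContinue |
            Select-Object TimeCreated, Id, LevelDisplayName, Message |
            ConvertTo-Html -Title "$Server - Event $ID" |
            Out-File $HtmlFile

        # Open the freshly created report in the default browser
        Invoke-Item $HtmlFile
    }
}
```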

Compare this:

To this:
Here every tab represents a certain server with a certain Event ID.

I know what I prefer :).

Making it even easier
The first generation of the PS script required a bit more care. Simply because the server names and Event ID’s were hard coded in that script. So every time another set of servers was to be queried AND/OR other Event ID’s needed to be collected, the PS script had to be modified.

And every time a modification was made to the PS script, chances were the script got wrecked because someone deleted too much or modified parts which shouldn’t be touched at all.

In order to alleviate that issue, the next generation of the PS script uses two text files instead ('C:\Server Management\Events.txt' & 'C:\Server Management\Servers.txt'), containing the required information (Event ID’s and server names).

The names of these text files are self-explanatory, I hope :). But just in case, here is some basic explanation:

  1. C:\Server Management\Events.txt
    Contains the Event ID’s to be collected from the Operations Manager event log, like this:

  2. C:\Server Management\Servers.txt
    Contains the server names to be queried, like this:

These files need to be created with the exact names and in the correct folders. Of course, you’re free to modify the PS script as required.

The PS script will check for the presence of BOTH files. When one or both files are missing, the script will notify the user and end:
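A minimal version of that check could look like this – a sketch, not the literal code from the script:

```powershell
# Sketch: abort when one or both input files are missing
$EventsFile  = 'C:\Server Management\Events.txt'
$ServersFile = 'C:\Server Management\Servers.txt'

if (-not (Test-Path $EventsFile) -or -not (Test-Path $ServersFile)) {
    Write-Host 'Events.txt and/or Servers.txt not found in C:\Server Management. Script will end now.'
    exit
}

# Read one Event ID / server name per line, skipping empty lines
$EventIDs = Get-Content $EventsFile  | Where-Object { $_.Trim() }
$Servers  = Get-Content $ServersFile | Where-Object { $_.Trim() }
```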

Good looks are EVERYTHING!
Even though PowerShell can quite easily direct its output to HTML, the basic formatting isn’t that nice to look at:

With some basic CSS the same HTML output looks like this:

Therefore a basic CSS file is required (C:\Server Management\HTML Reports\Format.css). The same PS script checks for the presence of the related folder AND file. And when either of them is missing, they’re created by the same PS script :).
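That check-and-create logic could be sketched like this; the CSS rules shown here are purely illustrative, the real script ships its own formatting:

```powershell
# Sketch: make sure the report folder and a basic CSS file exist
$ReportFolder = 'C:\Server Management\HTML Reports'
$CssFile      = Join-Path $ReportFolder 'Format.css'

if (-not (Test-Path $ReportFolder)) {
    New-Item $ReportFolder -ItemType Directory | Out-Null
}

if (-not (Test-Path $CssFile)) {
    # Purely illustrative styling; the real script uses its own CSS
    @'
table { border-collapse: collapse; font-family: Segoe UI, sans-serif; }
th    { background-color: #4472C4; color: white; padding: 4px; }
td    { border: 1px solid #cccccc; padding: 4px; }
'@ | Out-File $CssFile -Encoding ASCII
}

# ConvertTo-Html can then reference the style sheet like this:
# ... | ConvertTo-Html -CssUri $CssFile | Out-File $HtmlFile
```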

Talk about making life easy, huh?

Google Chrome doesn’t like the basic CSS file: it totally ignores it and shows a lot of garbage on your screen instead. So open these files with IE.

And an additional tip about CSS:
When you want to send these HTML files to your colleagues, it’s better to save them as web archive files from IE, since that way the formatting will be embedded. Otherwise – when your colleagues don’t have the CSS file on their systems – they’ll look at the ‘plain Jane’ HTML output of PowerShell, which isn’t nice to look at at all.

The PS script itself
To be found on my OneDrive.

Thursday, June 18, 2015

New Sources!!!

For sure the internet contains many treasures, most of which are hidden. By luck I bumped into TWO new blogs about System Center, containing good information. Time to share it with the rest of the world.

  1. The Manageability Guys
    By luck I bumped into the TechNet blog run by the PFE UK Manageability team, all about System Center. Even though they don’t blog on a regular basis, the postings are really good and interesting.

    Through a particular channel (thank you guys/girls :)), a second blog was brought to my attention. The author is a person with deep knowledge AND experience of System Center. However, he/she has chosen to stay anonymous. Nonetheless, this blog kicks ass!

I’ve added these two blogs to my list of blogs to visit on a regular basis, and I strongly advise anyone working with System Center to do the same.

Cross Post: SCOM, Cloud & Elephant In The Room…

Fellow MVP Cameron Fuller has written an excellent whitepaper all about the relevance of SCOM in today’s (and tomorrow’s) world, which is moving at an ever accelerating pace into the cloud.

In this whitepaper Cameron does some serious investigation before he comes to a (very interesting) conclusion about the question whether SCOM is still relevant in the world of the cloud. Even though SCOM is one of Cameron’s major passions, he refrains from emotion, takes a few steps back and takes an objective look at the whole matter.

In my humble opinion, this is what makes Cameron stand out from many other bloggers, writers and presenters. Even though this topic touches the very existence of his professional career, he keeps his cool and calm and – unbiased! – investigates the matter at hand. Combined with his humor, it makes this whitepaper not only very interesting to read but also fun, without ever losing focus.

So for anyone involved with SCOM (and the elephant, whether voluntarily or not), this whitepaper is a MUST read.

Besides the obvious credits (Cameron Fuller of course, who else?), additional credits go to Savision, who publish this whitepaper and offer it for FREE.

The whitepaper can be downloaded from here.

Thursday, June 11, 2015

Resistance Is Futile. You’ll Be Assimilated…

Most techies will recognize these two sentences in a split second. Star Trek, the Borg! Totally awesome! The ‘old’ series that is. But perhaps that’s just me…

Back to our planet and current situation
But more down to earth, these two sentences do apply here in our world as well. And not in a negative way. As we all know, when Microsoft has set its collective mind on something, it’s not a question whether it’s going to happen but more WHEN it’s going to happen.

Destination? The cloud!
So Microsoft is all in the cloud. And as such many traditional on-prem services are revamped or totally rewritten in order to become another Azure based service in the already massive & impressive portfolio.

And for System Center based products the same thing is happening. Whether we’re talking Orchestrator, DPM, SCCM, VMM or SCOM. And I am convinced that even a big and bloated product like SCCM will finally live in Azure one day.

Invent it or buy it
And Microsoft is moving fast here. When some technologies are lacking or not quite spot on, they’re ‘invented’ as required. Or, when something is already available and it’s top notch, Microsoft won’t hesitate to acquire those technologies – or better, the owner of those very same technologies and the IP (Intellectual Property) involved.

The latest acquisition
So yesterday the BIG news came out: Microsoft acquired BlueStripe, the company that built awesome software enabling SCOM to AUTOMATICALLY detect new applications and DYNAMICALLY create the related Distributed Applications.

BlueStripe's FactFinder (as the product was titled) was capable of detecting applications – ALL the tiers involved – automatically! And it wasn’t limited to Microsoft based technologies only. No way! Linux, Oracle, the whole LAMP stack, FactFinder just detected it and started to collect performance data, pointing out the potential culprits for otherwise very hard to detect performance issues.

At first this awesome software lived in its own space, but soon it could interoperate with SCOM, so SCOM became the single pane of glass, with FactFinder delivering crucial information SCOM couldn’t – or couldn’t sufficiently – collect and correlate.

FactFinder started out for on-prem based workloads only. But soon, when the cloud train started moving, BlueStripe jumped on the wagon and was in for that ride as well. So the next iterations of their software fully supported the hybrid scenario, where workloads live on-prem as well as in the cloud.

For now BlueStripe’s FactFinder can’t be bought anymore since Microsoft is working hard to integrate it into the next generations of Windows Server, System Center and Azure. And believe me when I tell you this train is moving FAST!!!

Visualization is KEY… The NEXT acquisition perhaps?
Just about a year ago BlueStripe started to work together with Squared Up, a UK based company building awesome HTML5 driven SCOM dashboards. These dashboards are SUPER fast, very nice to look at and have highly competitive pricing.

This was a smart move by BlueStripe, since it made their Views in the SCOM Console look so much sexier and lightning fast.

So this is just me thinking out loud:

Without a doubt Microsoft will have a list tucked away with companies they would like to have onboard (read: acquired) in order to incorporate the technologies owned by those very same companies into their Azure portfolio. Is Squared Up going to be the next one?

Only time will tell. One thing for sure: exciting times are ahead with the big shift to the hybrid scenario where cloud based workloads (Azure for instance) are just a fact of life and fully integrated with the on-prem based workloads.

Monday, June 8, 2015

SCOM 2012 R2 PowerShell: Enumerate All SCOM MS & Gateway Servers & Count All MMA’s Reporting To Them

Suppose you’re running a big SCOM 2012 R2 environment. Many SCOM 2012 R2 Management Servers (MS) and Gateway Servers (GW’s) are deployed. Many Microsoft Monitoring Agents (MMA’s) are out there as well.

And now you want to have a quick overview of how many SCOM 2012x MS servers you’ve got in total. Likewise for the GW’s. AND you want to know, per MS/GW, how many MMA’s are reporting to it.

Yes. It can be done by clicking through the Console. And yes, you can run a Report for it.

I race you!
But – like with many other things in your work – it has to be done QUICKLY! So now PowerShell comes into play, since it goes way faster with some good PowerShell cmdlets.

Therefore I’ve made this PS script which connects to your SCOM 2012 R2 MG and enumerates all MS servers, GW’s and counts the MMA’s reporting to the related MS/GW server. It puts the output on the screen AND into a file (Enum_MS_GW_MMA.txt), located in the folder C:\Server Management. At the end of the PS script this file is opened in Notepad for you, so you don’t have to browse to it.

The only thing you’ve got to modify in this PS script is the entry for the FQDN of one of your SCOM 2012 R2 MS Servers:

After that, save it and run it.
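The core of such an enumeration – a rough sketch assuming the OperationsManager module, with an obviously fictitious FQDN – looks like this:

```powershell
# Sketch of the enumeration; the FQDN is the one placeholder you replace
Import-Module OperationsManager
New-SCOMManagementGroupConnection -ComputerName 'YOURMS01.yourdomain.local'

$OutFile = 'C:\Server Management\Enum_MS_GW_MMA.txt'
$Agents  = Get-SCOMAgent

# Get-SCOMManagementServer returns the Gateway Servers as well (IsGateway = $true)
foreach ($MS in Get-SCOMManagementServer) {
    $Role  = if ($MS.IsGateway) { 'GW' } else { 'MS' }
    $Count = @($Agents | Where-Object { $_.PrimaryManagementServerName -eq $MS.DisplayName }).Count
    "$($MS.DisplayName) [$Role] : $Count MMA's" | Tee-Object -FilePath $OutFile -Append
}

# Open the result in Notepad so you don't have to browse to it
notepad.exe $OutFile
```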

For this PS script I also used other resources. In this case I used a PS script written by Jimmy Harper in order to obtain more detailed information about the SCOM Gateway Servers. You can find this posting, written by Jimmy, here.

You can download the script from here.

Wednesday, June 3, 2015

SCCM PowerShell: Enumerate Patch Device Collections In A Certain Folder & Write Output To CSV Files

As a Best Practice in SCCM 2012x, you use dedicated Device Collections for patching. This way it’s far easier to gain (and keep!!!) granular control over patch management as a whole. And ALSO as a Best Practice, these dedicated patch Device Collections are placed in a dedicated folder, with a sound naming scheme being used.

And in that folder there will be other folders as well in order to differentiate between patch Device Collections for clients and servers. So finally you’ve got something like this:

But in order to stay on top of things, it’s a hard requirement to KNOW exactly which servers are contained in which patch Device Collection. You don’t want that special server which doesn’t allow an automatic reboot to end up in the patch Device Collection which does just that, since it’s targeted by the ADR allowing automatic reboots… Ouch!

Also for the process side of things, you need to have this information so you can use that information in your service manager system like SCSM for instance.

Of course, you can open each patch Device Collection, copy its members and paste them into an Excel sheet. But that’s so time consuming and so 1980’s!

The PowerShell challenge & a ‘pat’ on the back of Coretech
But hey! We’ve got PowerShell these days! And SCCM 2012x is PowerShell driven! So why not write a PS script which does just that.

And while we’re at it, this PS script must enumerate the correct patch Device Collections (these patch specific Device Collections come and go, as required) and create per patch Device Collection a separate CSV file, having the name of the related patch Device Collection.

In itself, nothing difficult to do with PowerShell. However, as it turned out, it was a challenge to enumerate the correct folder in the SCCM 2012x Console, since there isn’t a default PS cmdlet for it. Luckily the community came to the rescue. I found different resources about how to achieve just that.

For me however, one blog stood out since it provided a ‘fix’ for it with just a few lines of PS code. Awesome! Yes, you need to obtain the FolderID by using a free tool (Coretech WMI and PowerShell Explorer tool). But when you’ve got that FolderID you’re in good shape.

Therefore a BIG word of thanks to Coretech, since they provided me with the correct information, PowerShell code and the previously mentioned tool, all for free, in true community spirit. So all credits go to them, since without this posting written by them, this PS script couldn’t have been made and kept so simple.

How the PS script works
So now it’s time for the PS script. It does a couple of things, which I’ll explain here.

First and foremost, it’s best to run this PS script from a SCCM server.

And – again - YOU need to run the Coretech WMI and PowerShell Explorer tool FIRST in order to obtain the correct FolderID which you’re going to use in this PS script.

Replace [YOUR SCCM Site Code] with your site code:

Replace the folder in the comment with the correct folder.
Replace the entry 16777XYZ after $FolderID with the correct one you’ve found with the Coretech WMI and PowerShell Explorer tool.
Replace [YOUR SCCM Site Code] with your site code after -Namespace "ROOT\SMS\Site_

Create the folder D:\Server Management\Server Patch Device Collections on the system where you’re going to run this PS script from. Of course you’re free to use the C:\ drive as well :).
This part of the PS script checks whether that folder is already present. When not found it will create it for you. Of course you can modify this part of the PS script as required.

Now a check is performed in order to see whether old CSV files exist in the folder D:\Server Management\Server Patch Device Collections:

When CSV files are found, the user will be prompted whether or not these CSV files may be overwritten by new ones:

When the user is okay with overwriting the old CSV files, the script will enumerate all (server) patch Device Collections, get their members and pipe them into a CSV file, one per Device Collection. Each CSV file will be named after its respective (server) patch Device Collection:
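Put together, the enumerate-and-export core – following the Coretech approach of querying WMI with the FolderID – could be sketched like this. The site code, site server, FolderID and output folder are placeholders you’d replace with your own values:

```powershell
# Sketch; replace the site code, site server, FolderID and output folder
$SiteCode   = 'ABC'
$SiteServer = 'YOURSCCMSERVER'
$FolderID   = 16777215   # the ID found with the Coretech WMI and PowerShell Explorer tool
$OutFolder  = 'D:\Server Management\Server Patch Device Collections'
$Namespace  = "ROOT\SMS\Site_$SiteCode"

# All Collection IDs living in the Console folder identified by $FolderID
$CollectionIDs = Get-WmiObject -ComputerName $SiteServer -Namespace $Namespace `
    -Class SMS_ObjectContainerItem -Filter "ContainerNodeID = $FolderID" |
    Select-Object -ExpandProperty InstanceKey

foreach ($ID in $CollectionIDs) {
    $Collection = Get-WmiObject -ComputerName $SiteServer -Namespace $Namespace `
        -Class SMS_Collection -Filter "CollectionID = '$ID'"

    # One CSV per Device Collection, named after that Collection
    Get-WmiObject -ComputerName $SiteServer -Namespace $Namespace `
        -Class SMS_FullCollectionMembership -Filter "CollectionID = '$ID'" |
        Select-Object Name |
        Export-Csv (Join-Path $OutFolder "$($Collection.Name).csv") -NoTypeInformation
}
```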

When the user has selected not to overwrite the existing CSV files, further script execution is stopped:

When no old CSV files are found, the script will execute as intended and let the user know when it’s finished. The user will also be told where the newly created CSV files are to be found:

The PS script
You can find the PS script here.

Of course, you can make this script fancier. Like asking the user where to place the CSV files and using the answer as a parameter in the script.

Or when old CSV files are found, they can be moved to another folder and let the script continue.

Nonetheless, this script provides the basics with some additional checking (folder/CSV files). The rest of it I leave up to you, the community.

So feel free to make it more ‘posh’ and send me back the results. When I am impressed, I’ll share it on my blog with credit to you.

And last but not least: Without Coretech this script wouldn’t have been so easy to make.

New Support Tip: SCOM Agent Push Fails With Error Code: 800706BA

The SCOM support team has posted a new article all about troubleshooting Error Code: 800706BA when trying to push install a SCOM 2012x Agent.

Go here to read about this issue, its cause and how to solve it.

Cross Post: SCCM 2012x: Application Model & Advanced Detection Logic

SCCM PFE Steve Rachui has written an excellent posting all about the SCCM 2012x Application Model and advanced detection logic.

Like many postings written by Steve, it contains TONS of worthwhile information, even though I have to read it a couple of times in order to ‘wrap my head around it’.

So whenever you’re into SCCM and interested in the deeper aspects of the application model, THIS is the place to start. The same posting contains links to previous postings about the application model as well.

Thanks Steve for sharing!

Posting: ConfigMgr 2012, the Application Model and advanced Detection Logic.

SP1 SCCM 2012 R2 Installation Resources

This is nothing but a cross post, containing links to useful blog postings all about rolling out SP1 for SCCM 2012 R2. So all credits go to the people who wrote them.

  1. System Center Dudes
    Step-by-Step SCCM 2012 R2 SP1 Upgrade Guide

  2. Ronni Pedersen
    Installing SCCM 2012 SP2/R2 SP1 – Quick Start Guide

  3. Anoop C Nair
    Download and Upgrade SCCM 2012 R2 SP1 and SCCM 2012 SP2 without any Confusion

  4. Henk Hoogendoorn
    Check the database before doing a ConfigMgr 2012 R2 SP1 upgrade
    Doing a ConfigMgr 2012 R2 SP1 upgrade (Notes from the field)

  5. TechNet
    Upgrade Configuration Manager to a New Service Pack

Monday, June 1, 2015

SCOM PowerShell: Move RSME Role Holder To Other OM12x Management Server

OM12x: Goodbye SCOM 2007x RMS & HELLO RMSE!
With OM12x the SCOM 2007x RMS was finally gone. Or better, its functionality was distributed over ALL OM12x Management Servers. This has made OM12x far more stable, since the single-point-of-failure (SPoF) was finally gone.

Even though this is a huge step forward, Microsoft needed to address another small challenge. Some old MP’s require the SCOM 2007x RMS. So in SCOM 2012x the SCOM 2007x RMS role had to be emulated in order to get those MP’s working as well. Hence the RMS Emulator Role (RMSE) was born.

In itself nothing special. Just an emulator in order to make those old MP’s ‘think’ they’re communicating with a plain SCOM 2007x RMS, while it’s just the latest and greatest SCOM 2012 R2 UR#6 Management Server.

Who needs the RMSE?
The Exchange Server 2010 MP requires the RMSE, or better the Correlation Engine of this MP does. And there are other older MP’s as well requiring the RMSE.

How to manage it?
With just a few PS cmdlets the RMSE can be removed, queried for (who’s hosting it at this moment) and moved to another server, or set when previously removed.
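For reference, these are the RMSE related cmdlets shipped with the OperationsManager module (the FQDN below is of course a placeholder):

```powershell
Import-Module OperationsManager

# Query: which Management Server currently holds the RMSE Role?
Get-SCOMRMSEmulator

# Move (or set) the RMSE Role to another Management Server
Get-SCOMManagementServer -Name 'NEWMS01.yourdomain.local' | Set-SCOMRMSEmulator

# Remove the RMSE Role altogether
Remove-SCOMRMSEmulator
```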

In relatively small OM12x environments this is done by just a few people. But in bigger organizations there needs to be a process in place for how to move the RMSE. And on top of it all, a PS script which does just that. That way not only is the process in place, but also a uniform method of operation for when the RMSE needs to be moved to another server because the SCOM 2012x Management Server hosting that role by default is down.

The PS script
This PS script does a couple of things. First it checks for the current RMSE Role holder and reports its findings:

Then it shows a dialogue box with the SCOM 2012x MS servers to choose from. In order to allow a roll back to the MS server which originally hosted the RMSE Role, that server is listed as well:

Select the server and click OK. Now the new RMSE Role holder will be set. A check will run, in order to see whether the move went just fine or not:

Move went wrong:

PS 2.0 (or previous) and PS 3.0 or later
One thing to reckon with is that PS 2.0 or previous defines the forms a bit differently than PS 3.0 or later. So I’ve uploaded TWO PS scripts to my OneDrive in order to address that issue:
  1. MoveRMSE - PS2 or previous.ps1
    For PS 2.0 or previous versions.
    Download here.
  2. MoveRMSE - PS3 or later.ps1
    For PS 3.0 or later versions.
    Download here.
Lines to modify in the PS scripts
You need to modify these lines in the PS script in order to make it work. These modifications are the same for both PS scripts by the way…
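When you don’t want to pick the script variant by hand, a small wrapper could do the version check for you. The paths below are an assumption; adjust them to wherever you saved the two scripts:

```powershell
# Sketch: dispatch to the right script variant based on the PS version
# (paths are assumptions; adjust to where you saved the scripts)
if ($PSVersionTable -and $PSVersionTable.PSVersion.Major -ge 3) {
    & 'C:\Server Management\MoveRMSE - PS3 or later.ps1'
}
else {
    # $PSVersionTable doesn't exist on PS 1.0, so its absence also means 'old'
    & 'C:\Server Management\MoveRMSE - PS2 or previous.ps1'
}
```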


System Center Orchestrator Migration Toolkit

A few days ago Microsoft published a collection of tools for migrating integration packs, runbooks and standard activities to Azure Automation & Service Management Automation (SMA).

For anyone running Orchestrator this is a must have. Tool collection to be found here.

New KB Article: Agent Push Installation To WS 2012 Throws Error 800706D3

A few days ago Microsoft published KB3060495, all about fixing error code 800706D3 when trying to push install a SCOM 2012 R2 Agent to a Windows Server 2012 based server.

The same KB article describes the cause and how to solve it.