(This is part 1 of 3 in an article series on how we at Unity started using “screenshots” and machine learning in our monitoring system)

Part 1: Getting screenshots from VMware machines (This Post)
Part 2: Using image analysis to detect machines in a broken state (Not published yet)
Part 3: Using Clarifai “machine learning” to further narrow down our results (Not published yet)

In my new job as a “DevOps” engineer (I know, I know, there is no such thing as a DevOps engineer), I am working with Unity’s build farm. We build the Unity engine, and lots of artifacts, thousands of times a day, so our build farm is relatively big, running multiple different OSes: Windows, Mac and Linux.

We are continuously trying to improve our monitoring of the platform, both to detect failed machines and to gather information about what has gone wrong, so we can use that information to prevent the same issues from arising in the future by giving feedback to the relevant teams.

We had a period with some storage-related issues that caused Mac and Linux machines in particular to crash and hang. We had no trouble detecting that the machines went offline, but since we weren’t able to connect to them, we could not tell what “state” they were in. So in order to document what had happened, we had to look at the console of the given machine.
I started thinking about whether there was an automated way to test for this, and it hit me that I had read somewhere that it is possible in VMware to take a “screenshot” of a running machine.

So I wrote a PowerShell script that would take a screenshot of all the running VMs in our build farm; then at least I had some documentation of what “state” the machines were in.
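Below is a minimal sketch of the idea, not my original script: it assumes VMware PowerCLI, the vCenter host name is a placeholder, and the screenshot call comes from the vSphere API’s CreateScreenshot_Task method on the VirtualMachine object.

```powershell
Import-Module VMware.PowerCLI   # or the VMware.VimAutomation.Core snapin on older PowerCLI

# One connection reused for every call, so vCenter only sees a single login
$vi = Connect-VIServer -Server 'vcenter.example.com' -Credential (Get-Credential)

foreach ($vm in Get-VM -Server $vi | Where-Object { $_.PowerState -eq 'PoweredOn' }) {
    # Ask ESXi to take a screenshot of the VM console; the .png lands on the datastore
    $task = $vm.ExtensionData.CreateScreenshot_Task()
    # In a full script you would wait for the task to finish and then fetch the
    # screenshot from the VM's datastore folder, e.g. with Copy-DatastoreItem
}

Disconnect-VIServer -Server $vi -Confirm:$false
```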

In the above example I am creating a “session” to reuse for the calls against vCenter, so we will not see hundreds of connections in vCenter.

In the next part of the article, I will cover some PowerShell functions I wrote that wrap ImageMagick to make some initial comparisons of each screenshot, sorting them into known good, known bad, and “what the h**l is going on here” 😉

A friend of mine had to go through a lot of data and substitute some values… But these weren’t 1:1 replacements; he had to find values that were in HEX and convert them to regular characters.
The files he had to traverse had several million lines each, so certain performance considerations had to be taken into account.

My initial thinking was to use regular expressions, since they excel at manipulating text, so I started playing around with the -replace operator in PowerShell.

In PowerShell you can do something like:
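(The input string here is a made-up example of mine, just to illustrate the pattern:)

```powershell
# Replace "0x41"-style tokens with only the captured hex digits;
# '$1' must be in single quotes so PowerShell does not expand it as a variable
'value: 0x41' -replace '0x([0-9A-F]+)', '$1'
# "value: 41"
```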

This will replace the matching string with numbered capturing group 1 from the regex, in this case the HEX value matching ([0-9A-F]+). In regex, parentheses denote a capturing group.

But I needed to convert the HEX string to a regular “string”, so I tried different methods, but the PowerShell -replace operator does not support callbacks.

So what I did was create a callback function (I just named it Callback; it could be called anything).
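Something along these lines (a sketch of the approach, using a scriptblock as the callback):

```powershell
# Converts the hex digits captured by ([0-9A-F]+) to the corresponding character
$Callback = {
    param($match)
    # Groups[1] holds the value from the first capturing group
    [char][Convert]::ToInt32($match.Groups[1].Value, 16)
}
```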

Instead of using -replace, I use the .NET type accelerator [regex] to define the regex and call its static ::Replace method, which does support callbacks.
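A self-contained sketch of that call, again with a made-up input string; PowerShell converts the scriptblock to the MatchEvaluator delegate that Regex.Replace expects:

```powershell
$pattern = '0x([0-9A-F]+)'
[regex]::Replace('value: 0x41', $pattern, {
    param($match)
    # 0x41 -> 65 -> 'A'
    [char][Convert]::ToInt32($match.Groups[1].Value, 16)
})
# "value: A"
```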

This literally cut the running time down from hours to minutes.

I have been playing a lot with Azure Automation, and the Azure DSC pull server in particular. I will not get into DSC or Azure Automation in great detail in this post.

My goal was to set up an Automation Account that could be used to handle multiple different servers and setups, which meant that I had to install multiple DSC resources into the Automation Account.
Currently there is a limitation in the Azure DSC pull server: it does not support versioned modules (https://azure.microsoft.com/en-gb/documentation/articles/automation-dsc-overview/)

The traditional PowerShell DSC pull server expects module zips to be placed on the pull server in the format “ModuleName_Version.zip”. Azure Automation expects PowerShell modules to be imported with names in the form “ModuleName.zip”. See this blog post for more info on the Integration Module format needed to import the module into Azure Automation.

This means that you have to pack your modules differently depending on whether you are going to use them on an internal DSC pull server, which supports versioning, or in Azure.

So I wrote a small script that will download DSC resources and zip them depending on whether they are going to be used in Azure or on a regular pull server (when downloading for a “regular” pull server, it will also create a checksum file).
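The sketch below shows the general shape of such a script; the function and parameter names are my own, it assumes PowerShellGet (Save-Module) and .NET 4.5+ for [IO.Compression.ZipFile], and it simplifies the exact folder layout inside the zip:

```powershell
Add-Type -AssemblyName System.IO.Compression.FileSystem

function Save-DscResourceZip {
    param(
        [Parameter(Mandatory)][string]$Name,
        [Parameter(Mandatory)][string]$Destination,
        [switch]$Azure   # Azure Automation wants ModuleName.zip, a pull server ModuleName_Version.zip
    )
    # Download the newest version of the resource from the PowerShell Gallery
    $staging = Join-Path $env:TEMP 'DscStaging'
    Save-Module -Name $Name -Path $staging -Force
    $version = (Get-ChildItem (Join-Path $staging $Name) | Select-Object -First 1).Name

    $zipName = if ($Azure) { "$Name.zip" } else { "${Name}_$version.zip" }
    $zipPath = Join-Path $Destination $zipName
    [IO.Compression.ZipFile]::CreateFromDirectory(
        (Join-Path $staging "$Name\$version"), $zipPath)

    if (-not $Azure) {
        # A "regular" pull server also expects a checksum file next to the zip
        (Get-FileHash -Path $zipPath).Hash |
            Out-File -FilePath "$zipPath.checksum" -Encoding ascii -NoNewline
    }
}

# e.g. Save-DscResourceZip -Name xWebAdministration -Destination C:\DscZips -Azure
```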

One thing to notice here is that when I create the zip files, I am using [IO.Compression.ZipFile]. The reason is that there is a bug in Compress-Archive that sometimes prevents you from unpacking the zip files with third-party tools like 7-Zip, and I have also seen Azure Automation fail to unzip the files (that one caused me to waste a looooot of time).

I was playing around with the Azure RM cmdlets a while back and had forgotten to read the manual 🙂 So I was running into some issues getting them to work.

So I figured that there had to be some kind of dependency among the modules for them to work… Instead of reading the documentation, I decided to write a small script that would look at each module manifest to check for dependencies.

A few Examples
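The script boils down to recursively walking each module’s RequiredModules list (which comes from the manifest). A minimal sketch of that, my own reconstruction rather than the original code, plus a usage example:

```powershell
function Get-ModuleDependency {
    [CmdletBinding()]
    param(
        [Parameter(Mandatory)][string]$Name,
        [int]$Level = 0
    )
    $module = Get-Module -Name $Name -ListAvailable | Select-Object -First 1
    if (-not $module) { return }
    # Indent nested dependencies with two dashes per level
    Write-Verbose (('-' * (2 * $Level)) + $module.Name)
    foreach ($required in $module.RequiredModules) {
        Get-ModuleDependency -Name $required.Name -Level ($Level + 1)
    }
}

# Walk every AzureRM module (plus Azure.Storage) and print its dependency tree
Get-Module -ListAvailable -Name AzureRM.*, Azure.Storage |
    ForEach-Object { Get-ModuleDependency -Name $_.Name -Verbose }
```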

Output looks like this:

VERBOSE: Azure.Storage
VERBOSE: --AzureRM.Profile

VERBOSE: AzureRM.ApiManagement
VERBOSE: --AzureRM.Profile

VERBOSE: AzureRM.Automation
VERBOSE: --AzureRM.Profile

VERBOSE: AzureRM.Backup
VERBOSE: --AzureRM.Profile

VERBOSE: AzureRM.Batch
VERBOSE: --AzureRM.Profile

VERBOSE: AzureRM.Compute
VERBOSE: --AzureRM.Profile

VERBOSE: AzureRM.DataFactories
VERBOSE: --AzureRM.Profile

VERBOSE: --AzureRM.Profile

VERBOSE: AzureRM.HDInsight
VERBOSE: --AzureRM.Profile

VERBOSE: AzureRM.Insights
VERBOSE: --AzureRM.Profile

VERBOSE: AzureRM.KeyVault
VERBOSE: --AzureRM.Profile

VERBOSE: AzureRM.Network
VERBOSE: --AzureRM.Profile

VERBOSE: AzureRM.OperationalInsights
VERBOSE: --AzureRM.Profile

VERBOSE: AzureRM.profile

VERBOSE: AzureRM.RedisCache
VERBOSE: --AzureRM.Profile

VERBOSE: AzureRM.Resources
VERBOSE: --AzureRM.Profile

VERBOSE: AzureRM.SiteRecovery
VERBOSE: --AzureRM.Profile

VERBOSE: --AzureRM.Profile

VERBOSE: AzureRM.Storage
VERBOSE: --AzureRM.Profile
VERBOSE: --Azure.Storage
VERBOSE: ----AzureRM.Profile

VERBOSE: AzureRM.StreamAnalytics
VERBOSE: --AzureRM.Profile

VERBOSE: --AzureRM.Profile

VERBOSE: AzureRM.TrafficManager
VERBOSE: --AzureRM.Profile

VERBOSE: AzureRM.UsageAggregates
VERBOSE: --AzureRM.Profile

VERBOSE: AzureRM.Websites
VERBOSE: --AzureRM.Profile

Today I was working at a client, doing a bunch of Azure Automation stuff, moving a lot of the scheduled jobs into Azure Automation, when I noticed this little button.

So I had to dig a little deeper: you can now run your Hybrid Worker jobs as a user that you have stored in your Azure Automation credential store. Before, Hybrid Worker jobs would run as “LocalSystem”, since that is the account the agent runs under. I don’t know when this was added, but it has to be recent.

In order to set it up, you have to log on to the Azure portal and go to:

Automation Accounts -> [your Automation Account] -> Hybrid Worker Groups -> [your worker group] -> Settings -> Hybrid worker group settings

One of the new features of PowerShell v5 DSC is that you can now use ConfigurationNames in “clear text” rather than as a GUID, meaning that you can have human-readable names for your configurations. Since such names are easier to guess, there is an added layer of security: the pull clients now have to register themselves with the pull server using a pre-shared key. When this happens, the client LCM generates a unique AgentId that is used to tell the different clients apart.

In order to add the RegistrationKey settings, you need to add a line to the web.config file of the DSC pull server; that entry points to a location in the file system where it can find a file called RegistrationKeys.txt. (You can read more about it here: Link)

Instead of manually editing the web.config file, I wrote a little script to add the configuration, to help automate the building of pull servers for my demo lab.

This assumes you have installed the pull server to the “default” location.
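The core of such a script can look like this; note that both paths are assumptions on my part (a typical default site path and key folder), not values from the original post:

```powershell
$webConfig = 'C:\inetpub\wwwroot\PSDSCPullServer\web.config'
$keyPath   = 'C:\Program Files\WindowsPowerShell\DscService'   # folder holding RegistrationKeys.txt

[xml]$config = Get-Content -Path $webConfig
# Only add the appSetting if it is not already there
if (-not ($config.configuration.appSettings.add | Where-Object key -eq 'RegistrationKeyPath')) {
    $add = $config.CreateElement('add')
    $add.SetAttribute('key', 'RegistrationKeyPath')
    $add.SetAttribute('value', $keyPath)
    $config.configuration.appSettings.AppendChild($add) | Out-Null
    $config.Save($webConfig)
}
```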

Another annoyance that I have come across is that I usually have a Danish locale, and when I use the xPSDesiredStateConfiguration module to set up a pull server, it will complain that it cannot find a file in C:\Windows\System32\WindowsPowerShell\v1.0\Modules\PSDesiredStateConfiguration\PullServer\en. To fix this, you can create two localized folders containing the same files as the en and en-US folders. I wrote a little script to do this, based on the locale of the machine.
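A minimal sketch of that idea (duplicating the en-US resources under the machine’s own language and locale folder names):

```powershell
$pullServerPath = 'C:\Windows\System32\WindowsPowerShell\v1.0\Modules\PSDesiredStateConfiguration\PullServer'
$locale   = (Get-Culture).Name                      # e.g. da-DK
$language = (Get-Culture).TwoLetterISOLanguageName  # e.g. da

foreach ($folder in $language, $locale) {
    $destination = Join-Path $pullServerPath $folder
    if (-not (Test-Path $destination)) {
        # Copy the English resource files under the localized folder name
        Copy-Item -Path (Join-Path $pullServerPath 'en-US') -Destination $destination -Recurse
    }
}
```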

Since both scripts alter files in protected areas of the file system, both have to be run as Administrator.

I have been playing a lot with DSC and have therefore had to use [System.Guid]::NewGuid() a lot to create GUIDs for the DSC configuration clients.
PS C:\> [System.Guid]::NewGuid()


But in this latest revision, there is a new cmdlet that lets you create GUIDs: New-Guid

PS C:\> New-Guid


If you need to copy and paste the GUID into a configuration, you can do something like this:
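One way to do it, sending the GUID straight to the clipboard (Set-Clipboard ships with PowerShell 5.0+; clip.exe works on older versions too):

```powershell
New-Guid | Set-Clipboard

# or, with the classic clip.exe:
(New-Guid).Guid | clip
```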


I was playing around with Microsoft’s ORK (Operational Readiness Kit), which is a front end for the PowerShell Deployment Toolkit, aimed at hosters.

I wanted to try a greenfield install, using a clean Server 2012 R2 machine that had not been joined to any domain, and run everything off that machine. I had downloaded all the prerequisites and started the install, but it kept failing.

It entered an infinite loop trying to access the PowerShell AD provider, so my first thought was to turn off the autoloading of the AD drive by setting an environment variable.
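The ActiveDirectory module honors this environment variable; setting it before the module loads stops it from auto-creating the AD: drive:

```powershell
# Must be set before Import-Module ActiveDirectory runs
$env:ADPS_LoadDefaultDrive = 0
```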

But the script still failed, so I looked at the code, and found this snippet.
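From memory, it looked roughly like this (a paraphrase of the behavior described below, not the actual ORK code):

```powershell
# Keep retrying until the AD: drive shows up, which never happens off-domain
while (-not (Get-PSDrive -Name AD -ErrorAction SilentlyContinue)) {
    Remove-Module -Name ActiveDirectory -ErrorAction SilentlyContinue
    Import-Module -Name ActiveDirectory
    Start-Sleep -Seconds 5
}
```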

This is what creates the infinite loop: it tests whether the AD:\ drive is loaded; if not, it removes the module, imports it again, and tries to find the drive again.

I have not had time to dig through the entire script to see whether the “installing” machine actually needs the PowerShell AD module or not… So for now, the machine you run the installer on needs to be in a domain.

There is a new version of PowerShell v5 out… Get it while it’s hot 🙂

Here is the blog post

Here is the direct link

It has been a little slow here for a while; between being very busy at work and then starting a new job, I haven’t had enough time to blog much. But hopefully that will change… I have started working more with Azure, and as always I tend to learn better when writing things down. So I will have a series of posts on PowerShell and Azure; initially it will probably be some getting-started material, which I hope will be useful (I know it will be for me, and that is the most important thing 🙂).

Stay tuned