vRealize Automation – Setting the Orchestrator Server using Custom Properties

We’re soon going to be implementing multiple vRealize Orchestrator (vRO) servers in our vRealize Automation (vRA) development environment, to allow people to choose different endpoints. Essentially, if someone’s working on a workflow, that shouldn’t affect a different developer who is testing a template. So I wanted a way of allowing customers (in this case, developers) to choose from a list of Orchestrator endpoints.

This might be useful in the following cases:

  • Allowing specific customers to use their own Orchestrators (for example in different domains)
  • Having multiple Development Orchestrators at various stages of “done”. So workflow developers can break workflows without hampering anyone else’s ability to deploy machines.
  • Using a Development Orchestrator from Production. For example, some of our customers’ Blueprints can’t be tested in DEV as we don’t have the same networks available.

Thankfully, using vRA Custom Properties, this isn’t too difficult to implement.

Before you start, you should be comfortable using Orchestrator with vRA. This isn’t a guide on how to set up vRA/vRO integration. Your vRO server must be set up with all the necessary prerequisites that it would need to work as an endpoint.  If you’ve done this once, you’ve hopefully documented the process, so it’s just a case of replicating it on another server.

Create the vRO Endpoint in vRA

Assuming you haven’t already got multiple endpoints configured, the first thing you’re going to need to do is create one.

  1. Log into your vRealize Automation portal
  2. Navigate to Infrastructure > Endpoints > Endpoints
  3. Click New Endpoint > Orchestration > vCenter Orchestrator
    1. Name: Give it a useful name
    2. Address: use the FQDN of the server
    3. Credentials: Select existing credentials, or create new. Remember to use the format username@domain
    4. Custom properties: Create a new custom property named VMware.VCenterOrchestrator.Priority and give it a value. The value you give it depends on which order you would like vRA to attempt to use the server. You should probably set your most stable (production-ready) server with the lowest number.
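
For example – with hypothetical endpoint names – you might give your stable production server the lowest value:

    VMware.VCenterOrchestrator.Priority = 1   (on a vRO-Prod endpoint)
    VMware.VCenterOrchestrator.Priority = 2   (on a vRO-Dev endpoint)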

Create the vRA Custom Property

You can add custom properties that apply to provisioned machines to the following elements:

  • Reservation, to apply the custom properties to all machines provisioned from that reservation
  • Business group, to apply the custom properties to all machines provisioned by business group members
  • Blueprint, to apply the custom properties to all machines provisioned from the blueprint
  • Build profile, which can be incorporated into any global or local blueprint, to apply the custom properties to all machines provisioned from the blueprint

If you’re going to set it in multiple places, be aware of the order of precedence.

  1. Wherever you decide to create the custom property, click New Property
  2. Create a new property with the Name VMware.VCenterOrchestrator.EndpointName
  3. Either leave the Value blank, or set it to the name of whichever endpoint should be the default Orchestrator.
  4. Check the Prompt user box, and click the green checkmark to save your change before clicking OK

Create the vRA Property Dictionary entry

What we’ve done so far will create an empty text box on one or more blueprints, into which the user can enter an endpoint name. What we want to do next is provide them with a drop-down list of available endpoints.

  1. In vRA navigate to Infrastructure > Blueprints > Property Dictionary
  2. Click New Property Definition, set the name as VMware.VCenterOrchestrator.EndpointName.
  3. Set the Display name to something friendly like Orchestrator, and give it an appropriate description
  4. Set Control Type to DropDown, and click the Required check box
  5. Click the green checkmark to save your changes, then click Ok.
  6. Now, edit the Property Attributes of the Property Definition you just created. Click New Property Attribute
  7. Set the Type to ValueList, and the name to VMware.VCenterOrchestrator.EndpointName.
  8. Set the Value to be a comma-separated list of your Orchestrator endpoint names (copy and paste so that you know you’re getting them right! – there’s an example after this list)
  9. Click the green checkmark to save your changes, then click OK
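
For example, if your endpoints were named vRO-Prod, vRO-Dev-01 and vRO-Dev-02 (hypothetical names), the Value would be:

    vRO-Prod,vRO-Dev-01,vRO-Dev-02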

Now, test out a catalog request – it should give you a drop-down list like this

Defining the CloneFrom attribute of a Blueprint via Build Profile in vRealize Automation

Using vRealize Automation (vRA) to allow your users to deploy their own infrastructure in minutes is great, but if the first thing they need to do is apply a bunch of patches, they’re not going to be happy. It’s essential that you keep your templates up to date.

Doing so is fairly straightforward – clone the existing template, convert it to a running machine, power it on, apply all updates, shut it down and convert it to a template again. You can create a vRealize Automation workflow to do this fairly easily. But once you’ve done that… how do you go about updating your vRA blueprints to use the new template?

Normally you would click the browse button next to the Clone from box in the Build Information tab of the blueprint. This would present you with a list of templates on the attached compute. You would select the updated version of your template, and save your changes. If you’re using the same parent template for multiple blueprints, then this means you could end up doing this on a large number of blueprints.

So, what’s the alternative?

What we can do instead is select the parent template using a custom property called CloneFrom. We can then assign this property using a Build Profile, which is then applied to multiple blueprints (this page contains a good description of Build Profiles).

Once implemented, we can – by updating a single Build Profile – update every blueprint which uses that Build Profile. (Even if you’re automating the creation and maintenance of Blueprints, updating a single Build Profile property is easier than updating the Clone from property on multiple blueprints.)

As we can (via the Property Dictionary) offer a choice of Custom Properties to the user who is requesting a machine, we can now offer a selection of parent VM templates. For example, we could present a user with different versions of the template, or a selection of templates with different base applications installed.

One nice effect of this is that, as these machines use the same Blueprint, they’re drawn from the same Maximum per user. This prevents users from gaming the system by deploying the maximum number of machines for each blueprint type. For example, say you’ve got a couple of Windows blueprints available to your developers (Vanilla and SQL) with the maximum per user set to 3 on each flavour: if a user wants 5 Vanilla machines, they could deploy 3 Vanilla servers and 2 SQL Servers, and just remove or ignore the SQL installation. But if you have a single blueprint with the parent template defined as a property, then the limit for that blueprint is applied across all machines deployed from it.

Instructions

Optional – Creating a “placeholder” template

One complication is that you must define a value in the Clone from property of the Build Information tab of the blueprint, despite the fact that it’s just going to be overridden by a custom property. To avoid confusion, we use a “placeholder” VM template which we’ve named UseCustomProperty; this should make it clear to anyone else looking at the Blueprint that we’re setting the parent template via a custom property.

In the vSphere client, all you need to do is create a new machine named UseCustomProperty. It does not need to have an operating system installed, but it does need to have a disk. The size of this disk will be the size shown to the user when they request a machine, so it should match your template’s. Of course, as it’s Thin Provisioned, it only takes up a fraction of that space.

Creating a Build Profile which can be used to assign a parent template

  1. Log into vCAC
  2. Go to Infrastructure > Blueprints > Build Profiles
  3. Click New Build Profile
  4. Give the build profile a sensible name and description.
  5. Under Custom Properties, create a New Property
  6. Set the Name of the property to CloneFrom, and set the value to be the name of your template
  7. Save your changes – remember to click the little green check-mark, before clicking Ok!

Updating a machine to use a parent template assigned via Build Profile

  1. Log into vCAC
  2. Go to Infrastructure > Blueprints > Blueprints
  3. Edit the blueprint of the machine you wish to update
  4. Optional: On the Build Information tab, modify the Clone from value. From the browse menu, select your placeholder machine
  5. On the Properties tab, next to Build Profiles, select your Build Profile.
  6. Save your changes by clicking Ok

Updating the parent template of a machine already using a parent assigned via a Build Profile

  1. Log into vCAC
  2. Go to Infrastructure > Blueprints > Build Profiles
  3. Edit the build profile for the template which you wish to update
  4. Under Custom Properties, modify the CloneFrom property, setting the new value to be the name of your updated template.
  5. Save your changes – remember to click the little green check-mark, before clicking Ok!

PowerShell script to restart a Virgin Media SuperHub


Update 23/03/15 – Looks like there’s been an update to the SuperHub which broke the original version of the script. It has now been updated.

Sometimes, after a week or so of uptime, I find that wireless access through my Virgin Media SuperHub gets very slow (wired access is fine). Like most IT issues, it can be fixed with a restart, but as it’s a wireless issue, restarting the router via the web interface is sometimes out of the question. I usually end up having to go next door and restart the router manually.

To save this occasional annoyance, I wanted to schedule a restart for the router each morning when I’m unlikely to be using it. So, over the weekend, I wrote up this little PowerShell function to restart the router remotely using the web interface.
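
In outline, the function looks something like this minimal sketch. The login and reboot paths below are hypothetical – the real SuperHub endpoints vary by firmware version, so don’t treat this as the actual script:

function Restart-SuperHub {
    param(
        [string]$RouterIp = '192.168.0.1',  # hypothetical default address
        [string]$Password                   # hypothetical admin passcode
    )
    # Log into the web interface and keep the session cookie (path is hypothetical)
    Invoke-WebRequest -Uri "http://$RouterIp/login" -Method Post `
        -Body @{ password = $Password } -SessionVariable session | Out-Null
    # Request the reboot page using the authenticated session (path is hypothetical)
    Invoke-WebRequest -Uri "http://$RouterIp/reboot" -Method Post `
        -WebSession $session | Out-Null
}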

I’ve integrated this function into a script, and set it as a scheduled task to run every morning at 5am. (I figure that if I’m awake and on the internet at 5am, then I could probably do with a break anyway.)
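
Something along these lines will register the task (this assumes Windows 8 / Server 2012 or later for the ScheduledTasks cmdlets, and the script path is illustrative):

# Run the restart script every day at 5am
$action  = New-ScheduledTaskAction -Execute 'powershell.exe' `
    -Argument '-NoProfile -File C:\Scripts\Restart-SuperHub.ps1'
$trigger = New-ScheduledTaskTrigger -Daily -At 5am
Register-ScheduledTask -TaskName 'Restart SuperHub' -Action $action -Trigger $trigger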

I’m not sure yet what causes the network slowdown, it might be interference from a neighbour’s network, or electrical interference, but this seems to be enough to stop it happening.

Auto-generating PowerShell documentation in MarkDown

I don’t mind writing documentation for my scripts, but I find that most documentation-writing exercises tend to suffer from two problems:-

  • Duplicated effort – I tend to document my scripts inline (so that Get-Help will work), so why do I need to document the same details elsewhere?
  • Keeping it up-to-date – The only thing worse than no documentation, is clearly outdated documentation.

I had a folder full of PS1 scripts, each of which had the necessary headers. However, I needed to get this documentation into our repository of choice (a GitLab wiki).

So, in order to save me duplicating some effort, I wrote up a quick function that’ll read the inline documentation from all the PowerShell scripts in a folder, and output a markdown-formatted document per script, as well as an index document which links to the others. This imports nicely into a GitLab wiki. At first I thought I’d be lucky, and someone would have created a ConvertTo-Markdown function, but of course a PowerShell object doesn’t map to a document in the same way that it maps to a table, or a CSV.
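
The general shape of the function is something like the simplified sketch below (the function and parameter names are illustrative, not the original’s):

function Export-HelpToMarkdown {
    param(
        [string]$ScriptFolder,   # folder containing the .ps1 files
        [string]$OutputFolder    # where the .md files will be written
    )
    $index = @('# Scripts', '')
    Get-ChildItem -Path $ScriptFolder -Filter *.ps1 | ForEach-Object {
        # Read the comment-based help from the script itself
        $help = Get-Help -Name $_.FullName -Full
        @(
            "# $($_.BaseName)"
            ''
            '## Synopsis'
            $help.Synopsis
            ''
            '## Description'
            ($help.Description | Out-String).Trim()
        ) | Set-Content -Path (Join-Path $OutputFolder "$($_.BaseName).md")
        # Add a link to the index document
        $index += "* [$($_.BaseName)]($($_.BaseName))"
    }
    $index | Set-Content -Path (Join-Path $OutputFolder 'index.md')
}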

The documentation generated looks something like this.

Not sure how useful this will be to other people, as it’s a pretty specific set of circumstances. I’m also not 100% satisfied with it, as the help object returned by Get-Help is a bit of a chimera of object types, which has made the code a bit ugly.

Using an Ubuntu CrashPlan appliance to back up a NAS

I’ve recently upgraded some of my home setup; instead of Windows Home Server 2011 managing everything, I’m now using a NetGear ReadyNAS 104 to serve files, and Windows Server 2012 R2 for the other functions.

Moving my files to the Network Attached Storage (NAS) fixed some of my issues (NFS performs better for serving media to XBMC), but it left me with a small problem: I previously used CrashPlan for backing up files from the Windows Home Server’s local disks. However (on Windows), CrashPlan doesn’t support backing up a NAS. While there are a couple of workarounds, they are a little … inelegant.

I decided that the best solution would be to use an Ubuntu agent, running as a Hyper-V machine on the server, to back up the NAS. This headless appliance could be managed by the CrashPlan client running on a separate machine. The bulk of the work was already detailed in Bryan Ross’s articles Installing CrashPlan on a headless Linux Server and How to Manage Your CrashPlan Server Remotely, but I thought it’d be worth adding some specifics.

  1. Download Ubuntu server and install on your platform of choice. I used Hyper-V and pretty much accepted all the defaults.
  2. Once the Ubuntu installation process has completed, log in and install NFS services
    sudo apt-get install nfs-common
  3. Install CrashPlan as per Bryan’s instructions here.
  4. On your Linux server, open CrashPlan’s configuration file.
    sudo nano -w /usr/local/crashplan/conf/my.service.xml
  5. Search for the serviceHost parameter and change it from
    127.0.0.1

    to this:-

    0.0.0.0

    This allows CrashPlan to accept commands from machines other than the local host.

  6. Save the file and then restart the CrashPlan daemon
    sudo /etc/init.d/crashplan restart
  7. Install the CrashPlan client on the machine you’re going to use to manage it.
  8. Open the client configuration file. On Windows, the default location would be C:\Program Files\CrashPlan\conf\ui.properties. Look for the following line:
    #serviceHost=127.0.0.1
  9. Remove the comment character (#) and change the IP address to that of your Ubuntu server. For example:
    serviceHost=192.168.1.29
  10. On the server, create the folders where you’re going to mount the NFS shares, and manually mount them to make sure they work. You’ll need to know the IP address of your NAS, and the path to the share(s).
    mkdir -p /mnt/nfs/videos
    sudo mount 192.168.0.13:/data/Videos /mnt/nfs/videos
    mkdir -p /mnt/nfs/pictures
    sudo mount 192.168.0.13:/data/Pictures /mnt/nfs/pictures
    mkdir -p /mnt/nfs/software
    sudo mount 192.168.0.13:/data/Software /mnt/nfs/software
    mkdir -p /mnt/nfs/ebooks
    sudo mount 192.168.0.13:/data/eBooks /mnt/nfs/ebooks
  11. Now we need to make sure that when the server is restarted, it will automatically re-mount these shares. Edit fstab with the following command:-
    sudo nano -w /etc/fstab
  12. Add a line for each share you wish to mount, like this:-
    192.168.0.13:/data/Videos /mnt/nfs/videos nfs auto 0 0
    192.168.0.13:/data/Pictures /mnt/nfs/pictures nfs auto 0 0
    192.168.0.13:/data/Software /mnt/nfs/software nfs auto 0 0
    192.168.0.13:/data/eBooks /mnt/nfs/ebooks nfs auto 0 0
  13. Reboot the server and ensure that the NFS shares map correctly (you can see existing mounts by running the command mount)
  14. Now, on the client machine, open the CrashPlan GUI, and select the mount points which you wish to be backed up.

This is now working well – although the change in method means that I need to send all of my data up to CrashPlan again; I had hoped that it’d be able to map the previously uploaded files to the same files on their new paths.

Get VDI Capacity Information from the Horizon View ADAM Database

Quick script to assist with capacity planning.
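
In outline, the approach is to query the machine (pae-VM) entries in the View ADAM database over LDAP – something like the sketch below. The base DN is standard for View, but the grouping attribute is an assumption and may differ between View versions:

# Query the View ADAM database on the connection server
$root = [ADSI]'LDAP://yourconnectionserver:389/DC=vdi,DC=vmware,DC=int'
$searcher = New-Object System.DirectoryServices.DirectorySearcher($root, '(objectClass=pae-VM)')
$searcher.PageSize = 1000
# Count machines per pool ('pae-memberdnof' is an assumption - check your schema)
$searcher.FindAll() |
    Group-Object { $_.Properties['pae-memberdnof'] | Select-Object -First 1 } |
    Select-Object Name, Count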

The version I’m using (which is a little too specific to our environment to be posted here) also counts the number of people in the entitlement groups and appends that as a column.

AppSense and VMware Horizon View Linked Clones

The move from persistent physical desktops, to non-persistent linked clones (with a separate user personalisation layer) requires rethinking the way in which machines are configured and software is deployed. The challenge is to deliver a consistent, highly available platform with the maximum efficiency. In this case efficiency means utilising Horizon View’s ability to dynamically provision just enough desktops, while ensuring that the necessary configuration changes are delivered by AppSense.

Computer configuration

We do the bulk of computer configuration via GPO. This includes things like removing unnecessary Windows Features, optimisation of machines for use as VDI hosts, and using Group Policy Preferences to configure local groups and accounts.

Generic platform software (Java, Flash, Microsoft Office, the App-V Client, etc.) and Windows hotfixes are installed to all of the Pool Masters via SCCM. Pool-specific applications are also deployed to specific pool masters via manually configured SCCM Device Collections. This ensures consistency within pools and – where possible – between pools. Consistency is obviously important for the users and the people supporting them, but it also helps with storage de-duplication.

This process effectively takes a vanilla Windows 7 machine as input, and outputs a configured corporate virtual machine desktop. This means that the majority of changes have been applied before AppSense gets involved.

Deploying AppSense Agents

The AppSense Client Configuration Agent and the Environment Manager Agent are also deployed to all of the pool masters. We do this via an SCCM package, which configures the MSI so that the clients look towards either the Production or BCP AppSense instance (based on their OU).

MSIEXEC /I "ClientCommunicationsAgent64.msi" /Q WEB_SITE="http://appsense:80/"

To avoid all linked-clones sharing the Pool Master’s AppSense SID, we need to remove the SID from the pool master. This is done via a shutdown script on a GPO linked to the pool master’s OU.
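
For reference, a hypothetical sketch of such a shutdown script is below – the service name and registry location are assumptions and vary between AppSense versions, so verify them against the AppSense documentation for your release:

# HYPOTHETICAL: verify the service name and registry path for your AppSense version
Stop-Service -Name 'AppSense Client Communications Agent' -ErrorAction SilentlyContinue
# Remove the agent's machine identity so each linked clone registers with its own SID
Remove-ItemProperty -Path 'HKLM:\SOFTWARE\AppSense Technologies\Communications Agent' `
    -Name 'Machine SID' -ErrorAction SilentlyContinue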

User configuration

User configuration is done via AppSense on the linked clones themselves.

As the Environment Manager Configuration is modified at a greater frequency than the pool masters are updated, we don’t want it installed on the pool master prior to deployment. Rather we want the machines to download the newest configuration as it becomes available. AppSense allows us to deliver the latest version of the configuration.

AppSense Configuration Schedule

Remember that, as we’ve already applied computer settings via GPO, we don’t need to worry about restarting the computer after the AppSense configuration has been installed (which we would need to do in order to apply AppSense start-up actions). We’ve also pre-deployed the agents (Environment and Client Communication), which means that the installation of the configuration should proceed fairly quickly.

Ensuring the machine is “ready” for the user

However, this approach did introduce an issue where View was provisioning Linked Clones and marking them as “Available” (and potentially allowing users to log on) before the configuration had been downloaded. This would result in users getting machines which had not yet had the configuration applied. In order to give AppSense enough time to deploy new configurations, we introduced an artificial delay to the start-up of the VMware View Agent (WSNM). The OUs which contain the linked clones have the following start-up script:

# Stop the View agent and set it to manual start (Stop-Service needs -PassThru
# to pass the service object along the pipeline to Set-Service)
Get-Service -Name "WSNM" | Stop-Service -PassThru | Set-Service -StartupType "Manual"
# Give AppSense time to download and install the configuration
Start-Sleep -Seconds 300
# Restore automatic start-up and start the View agent again
Get-Service -Name "WSNM" | Set-Service -StartupType "Automatic" -PassThru | Start-Service

This script results in machines showing as “Agent Unreachable” for the first five minutes after starting up, which gives AppSense enough time to deploy the configuration.

Displaying vSphere disk properties (including provisioning & persistence)

I was doing some tidying of old scripts and came across something I thought might be useful, so I tidied it up and added some documentation.

Screenshot showing results of script

This PowerShell script uses the vSphere PowerCLI to display a list of virtual machine disks, file-names, modes (persistent or non-persistent), sizes and whether or not the disk is thinly provisioned. You’ll need to connect to one or more vSphere servers first.
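
In outline, it looks something like the sketch below (this assumes an active Connect-VIServer session, and uses property names from recent PowerCLI releases):

# List each VM disk with its mode, size and provisioning type,
# using Select-Object calculated properties
Get-VM | Get-HardDisk | Select-Object `
    @{Name='VM';         Expression={ $_.Parent.Name }},
    @{Name='Disk';       Expression={ $_.Name }},
    Filename,
    Persistence,
    @{Name='CapacityGB'; Expression={ [math]::Round($_.CapacityGB, 2) }},
    @{Name='Thin';       Expression={ $_.StorageFormat -eq 'Thin' }}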


It’s likely that I based the original version of this script on someone else’s work as it contained a couple of techniques which I don’t tend to use (like using Select-Object to create object properties), but I’m afraid I can’t remember where, and searching for a couple of keywords brings back no results.

Accessing SCSM 2012 using PowerShell (without SMLets)

I wanted to use PowerShell to create a simple report of open (active & pending) incidents in System Center Service Manager 2012, but the only examples I could find online used the SMLets. Sometimes this wasn’t obvious, but it soon became apparent when PowerShell choked on cmdlets like Get-SCSMObject.

While I’m sure the SMLets are handy for ad-hoc reports by administrators, I wanted the option for my report to be generated by a group of users on their own machines, so I was wary about deploying unnecessary (beta) software to a group of desktops. It was therefore preferable to do this using the native SCSM 2012 CMDlets (which the users already have installed as part of the SCSM 2012 installation).

Anton Gritsenko‘s mapping of the SMLets to their SCSM 2012 native commands was invaluable in the creation of this.

This script should work on any machine with PowerShell and SCSM Console installed. As with all PowerShell objects, this can then be output to HTML, CSV etc.
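
In outline, the approach looks something like the sketch below – the module path is the default for SCSM 2012, and the status display names are assumptions, so adjust both for your installation:

# Load the native Service Manager module (default install path assumed)
Import-Module 'C:\Program Files\Microsoft System Center 2012\Service Manager\Powershell\System.Center.Service.Manager.psd1'
# Get-SCClassInstance is the native equivalent of SMLets' Get-SCSMObject
$incidentClass = Get-SCClass -Name 'System.WorkItem.Incident' -ComputerName 'yourscsmserver'
Get-SCClassInstance -Class $incidentClass -ComputerName 'yourscsmserver' |
    Where-Object { ('Active', 'Pending') -contains $_.Status.DisplayName } |
    Select-Object Id, Title, Status, CreatedDate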

This is the first component of a more involved MI framework I have in mind, and was good practice with the SCSM 2012 API.

Remove machine objects from VMware View’s ADAM database with PowerShell

26/02/14 –  I’ve updated this script to accept pipeline input and work a little more efficiently when removing multiple machines.

It’s one of those things that shouldn’t happen, but which inevitably does: someone removes a View-managed VM from vSphere, and View refuses to realise it’s gone. It also sometimes happens when machines fail to provision correctly (e.g., due to a lack of available storage). The manual removal procedure is easy enough to follow, but it’s time-consuming and prone to error. In order to make the cleanup operation easier, I wrote up the quick function below. It relies on the free Quest AD CMDLets.

You’ll want to change yourconnectionserver to – well – your connection server. Obviously the normal caveats apply: the ones about running scripts you download from the internet in your Production environment.
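
In outline, the function looks something like this hypothetical reconstruction. It finds the machine’s pae-VM entry by its pae-DisplayName (the attribute VMware’s manual clean-up procedure searches on) and removes it – test it in a lab first:

function Remove-ViewMachine {
    param(
        [Parameter(Mandatory=$true, ValueFromPipeline=$true)]
        [string]$Name,
        [string]$ConnectionServer = 'yourconnectionserver'
    )
    process {
        # Find the machine's entry in the View ADAM database and remove it
        Get-QADObject -Service "$($ConnectionServer):389" `
            -SearchRoot 'DC=vdi,DC=vmware,DC=int' `
            -LdapFilter "(&(objectClass=pae-VM)(pae-DisplayName=$Name))" |
            Remove-QADObject -Confirm:$false
    }
}

# Example: 'VDI-BROKEN-01', 'VDI-BROKEN-02' | Remove-ViewMachine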