PowerShell script to restart a Virgin Media SuperHub


Update 23/03/15 – Looks like there’s been an update to the SuperHub which broke the original version of the script. It has now been updated.

Sometimes, after a week or so of uptime, I find that wireless access through my Virgin Media SuperHub gets very slow (wired access is fine). Like most IT issues, it can be fixed with a restart, but as it’s a wireless issue, restarting the router via the web interface is sometimes out of the question. I usually end up having to go next door and restart the router manually.

To save this occasional annoyance, I wanted to schedule a restart for the router each morning when I’m unlikely to be using it. So, over the weekend, I wrote up this little PowerShell function to restart the router remotely using the web interface.

I’ve integrated this function into a script, and set it as a scheduled task to run every morning at 5am. (I figure that if I’m awake and on the internet at 5am, then I could probably do with a break anyway.)

I’m not sure yet what causes the network slowdown; it might be interference from a neighbour’s network, or electrical interference, but a daily restart seems to be enough to stop it happening.
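For reference, a daily 5am task like the one described can be registered with the ScheduledTasks cmdlets (available on Windows 8 / Server 2012 and later; older systems can use schtasks.exe instead). This is a minimal sketch — the script path and task name are placeholders for your own setup:

```powershell
# Register a daily 5am task to run the restart script under SYSTEM.
# 'C:\Scripts\Restart-SuperHub.ps1' and the task name are placeholders.
$action  = New-ScheduledTaskAction -Execute 'PowerShell.exe' `
    -Argument '-NoProfile -ExecutionPolicy Bypass -File "C:\Scripts\Restart-SuperHub.ps1"'
$trigger = New-ScheduledTaskTrigger -Daily -At 5am

Register-ScheduledTask -TaskName 'Restart SuperHub' -Action $action -Trigger $trigger `
    -User 'SYSTEM' -RunLevel Highest
```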

Auto-generating PowerShell documentation in MarkDown

I don’t mind writing documentation for my scripts, but I find that most documentation writing exercises tend to suffer from two problems:-

  • Duplicated effort – I tend to document my scripts inline (so that Get-Help will work), so why do I need to document the same details elsewhere?
  • Keeping it up-to-date – The only thing worse than no documentation, is clearly outdated documentation.

I had a folder full of PS1 scripts, each of which had the necessary headers. However, I needed to get this documentation into our repository of choice (a GitLab wiki).

So, in order to save me duplicating some effort, I wrote up a quick function that’ll read the inline documentation from all the PowerShell scripts in a folder, and output a markdown-formatted document per script, as well as an index document which links to the others. This imports nicely into a GitLab wiki. At first I thought I’d be lucky, and someone would have created a ConvertTo-Markdown function, but of course a PowerShell object doesn’t map to a document in the same way that it maps to a table, or a CSV.
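The core of the approach can be sketched as follows. This is a simplified illustration, not the full script — the folder paths are examples, and only a couple of the help sections are emitted:

```powershell
# Sketch: turn comment-based help into one markdown file per script, plus an index.
# $scriptFolder and $outFolder are example paths - adjust for your environment.
$scriptFolder = 'C:\Scripts'
$outFolder    = 'C:\Docs'
$index = @('# Script documentation', '')

Get-ChildItem -Path $scriptFolder -Filter *.ps1 | ForEach-Object {
    # Get-Help works directly against a script file, returning the inline help
    $help = Get-Help $_.FullName -Full
    $name = $_.BaseName
    $md = @("# $name", '',
            '## Synopsis', $help.Synopsis, '',
            '## Description', ($help.Description | Out-String).Trim())
    $md | Set-Content -Path (Join-Path $outFolder "$name.md")
    $index += "* [$name]($name)"
}

$index | Set-Content -Path (Join-Path $outFolder 'index.md')
```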

The documentation generated looks something like this.

Not sure how useful this will be to other people, as it’s a pretty specific set of circumstances. I’m also not 100% satisfied with it, as the help object returned by Get-Help is a bit of a chimera of object types, which has made the code a bit ugly.

Using an Ubuntu CrashPlan appliance to backup a NAS

I’ve recently upgraded some of my home setup; instead of Windows Home Server 2011 managing everything, I’m now using a NetGear ReadyNAS 104 to serve files, and Windows Server 2012 R2 for the other functions.

Moving my files to the Network Attached Storage (NAS) fixed some of my issues (NFS performs better for serving media to XBMC), but it left me with a small problem: I previously used CrashPlan for backing up files from the Windows Home Server’s local disks. However, on Windows, CrashPlan doesn’t support backing up a NAS. While there are a couple of workarounds, they are a little … inelegant.

I decided that the best solution would be to use an Ubuntu agent, running as a Hyper-V machine on the server, to back up the NAS. This headless appliance could be managed by the CrashPlan client running on a separate machine. The bulk of the work was already detailed in Bryan Ross’s articles Installing CrashPlan on a headless Linux Server and How to Manage Your CrashPlan Server Remotely, but I thought it’d be worth adding some specifics:

  1. Download Ubuntu server and install on your platform of choice. I used Hyper-V and pretty much accepted all the defaults.
  2. Once the Ubuntu installation process has completed, log in and install the NFS client utilities
    sudo apt-get install nfs-common
  3. Install CrashPlan as per Bryan’s instructions here.
  4. On your Linux server, open CrashPlan’s configuration file.
    sudo nano -w /usr/local/crashplan/conf/my.service.xml
  5. Search for the serviceHost parameter and change it from
    <serviceHost>127.0.0.1</serviceHost>

    to this:-

    <serviceHost>0.0.0.0</serviceHost>

    This allows CrashPlan to accept commands from machines other than the local host.

  6. Save the file and then restart the CrashPlan daemon
    sudo /etc/init.d/crashplan restart
  7. Install the CrashPlan client on the machine you’re going to use to manage it.
  8. Open the client configuration file. On Windows, the default location would be C:\Program Files\CrashPlan\conf\ui.properties. Look for the following line:
    #serviceHost=127.0.0.1
  9. Remove the comment character (#) and change the IP address to that of your Ubuntu server. For example:
    serviceHost=192.168.1.10
  10. On the server, create the folders where you’re going to mount the NFS shares and manually mount them to make sure they work. You’ll need to know the IP address of your NAS, and the path to each share (shown below as <NAS-IP> and the share paths).
    mkdir -p /mnt/nfs/videos
    sudo mount <NAS-IP>:/<videos-share> /mnt/nfs/videos
    mkdir -p /mnt/nfs/pictures
    sudo mount <NAS-IP>:/<pictures-share> /mnt/nfs/pictures
    mkdir -p /mnt/nfs/software
    sudo mount <NAS-IP>:/<software-share> /mnt/nfs/software
    mkdir -p /mnt/nfs/ebooks
    sudo mount <NAS-IP>:/<ebooks-share> /mnt/nfs/ebooks
  11. Now we need to make sure that when the server is restarted, it will automatically re-mount these shares. Edit fstab with the following command:-
    sudo nano -w /etc/fstab
  12. Add a line for each share you wish to mount, like this:-
    <NAS-IP>:/<videos-share>   /mnt/nfs/videos   nfs auto 0 0
    <NAS-IP>:/<pictures-share> /mnt/nfs/pictures nfs auto 0 0
    <NAS-IP>:/<software-share> /mnt/nfs/software nfs auto 0 0
    <NAS-IP>:/<ebooks-share>   /mnt/nfs/ebooks   nfs auto 0 0
  13. Reboot the server and ensure that the NFS shares map correctly (you can see existing mounts by running the command mount)
  14. Now, on the client machine, open the CrashPlan GUI, and select the mount points which you wish to be backed-up.

This is now working well, although the change in method means that I need to send all of my data up to CrashPlan again; I had hoped that it would be able to map the previously uploaded files to the same files on their new paths.

Get VDI Capacity Information from the Horizon View ADAM Database

Quick script to assist with capacity planning.

The version I’m using here (which is a little too specific to our environment to be posted here), also counts the number of people in the entitlement groups and appends that as a column.
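A stripped-down version of the idea can be sketched with a plain LDAP query against the View ADAM instance. This is an illustration, not the script from the post — 'yourconnectionserver' is a placeholder, and the DN shown is the standard View ADAM naming context:

```powershell
# Sketch: query the View ADAM database for machine (pae-VM) objects.
# 'yourconnectionserver' is a placeholder for a View Connection Server.
$root = [ADSI]'LDAP://yourconnectionserver:389/OU=Servers,DC=vdi,DC=vmware,DC=int'
$searcher = New-Object System.DirectoryServices.DirectorySearcher($root, '(objectClass=pae-VM)')
$searcher.PageSize = 1000   # page results so large environments return everything

$machines = $searcher.FindAll()
"Total machines: $($machines.Count)"
$machines | ForEach-Object { $_.Properties['pae-displayname'][0] }
```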

AppSense and VMware View Horizon Linked Clones

The move from persistent physical desktops, to non-persistent linked clones (with a separate user personalisation layer) requires rethinking the way in which machines are configured and software is deployed. The challenge is to deliver a consistent, highly available platform with the maximum efficiency. In this case efficiency means utilising Horizon View’s ability to dynamically provision just enough desktops, while ensuring that the necessary configuration changes are delivered by AppSense.

Computer configuration

We do the bulk of computer configuration via GPO. This includes things like removing unnecessary Windows Features, optimisation of machines for use as VDI hosts, and using Group Policy Preferences to configure local groups and accounts.

Generic platform software (Java, Flash, Microsoft Office, the App-V Client, etc.) and Windows hotfixes are installed to all of the pool masters via SCCM. Pool-specific applications are also deployed to specific pool masters via manually configured SCCM Device Collections. This ensures consistency within pools and – where possible – between pools. Consistency is obviously important for the users and the people supporting them, but also helps with storage de-duplication.

This process effectively takes a vanilla Windows 7 machine as input, and outputs a configured corporate virtual machine desktop. This means that the majority of changes have been applied before AppSense gets involved.

Deploying AppSense Agents

The AppSense Client Configuration Agent and the Environment Manager Agent are also deployed to all of the pool masters. We do this via an SCCM package, which configures the MSI so that the clients look towards either the Production, or BCP, AppSense instance (based on their OU).

MSIEXEC /I "ClientCommunicationsAgent64.msi" /Q WEB_SITE="http://appsense:80/"

To avoid all linked-clones sharing the Pool Master’s AppSense SID, we need to remove the SID from the pool master. This is done via a shutdown script on a GPO linked to the pool master’s OU.

User configuration

User configuration is done via AppSense on the linked clones themselves.

As the Environment Manager configuration is modified more frequently than the pool masters are updated, we don’t want it installed on the pool master prior to deployment. Rather, we want the machines to download the newest configuration as it becomes available. AppSense allows us to deliver the latest version of the configuration.

AppSense Configuration Schedule

Remember that, as we’ve already applied computer settings via GPO, we don’t need to worry about restarting the computer after the AppSense configuration has been installed (which we would need to do in order to apply AppSense start-up actions). We’ve also pre-deployed the agents (Environment and Client Communication), which means that the installation of the configuration should proceed fairly quickly.

Ensuring the machine is “ready” for the user

However, this approach did introduce an issue where View was provisioning Linked Clones and marking them as “Available” (and potentially allowing users to log on) before the configuration had been downloaded. This would result in users getting machines which had not yet had the configuration applied. In order to give AppSense enough time to deploy new configurations, we introduced an artificial delay to the start-up of the VMware View Agent (WSNM). The OUs which contain the linked clones have the following start-up script:

Get-Service -Name "WSNM" | Stop-Service -PassThru | Set-Service -StartupType "Manual"
Start-Sleep -Seconds 300
Get-Service -Name "WSNM" | Start-Service -PassThru | Set-Service -StartupType "Automatic"

This script results in machines showing as “Agent Unreachable” for the first five minutes after starting up, which gives AppSense enough time to deploy the configuration.

Displaying vSphere disk properties (including provisioning & persistence)

I was doing some tidying of old scripts and came across something I thought might be useful, so I tidied it up and added some documentation.

Screenshot showing results of script

This PowerShell script uses the vSphere PowerCLI to display a list of virtual machine disks, file-names, modes (persistent or non-persistent), sizes and whether or not the disk is thinly provisioned. You’ll need to connect to one or more vSphere servers first.
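The essence of the report can be sketched in a few lines of PowerCLI, using the Persistence and StorageFormat properties that Get-HardDisk exposes (this is a simplified illustration of the technique, not the full script):

```powershell
# Sketch: list every VM disk with its mode, size and provisioning type.
# Assumes Connect-VIServer has already been run against one or more vCenters.
Get-VM | Get-HardDisk |
    Select-Object @{Name = 'VM';   Expression = { $_.Parent.Name }},
                  Name,
                  Filename,
                  Persistence,       # Persistent / IndependentNonPersistent etc.
                  CapacityGB,
                  @{Name = 'ThinProvisioned'; Expression = { $_.StorageFormat -eq 'Thin' }} |
    Format-Table -AutoSize
```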

It’s likely that I based the original version of this script on someone else’s work as it contained a couple of techniques which I don’t tend to use (like using Select-Object to create object properties), but I’m afraid I can’t remember where, and searching for a couple of keywords brings back no results.

Accessing SCSM 2012 using PowerShell (without SMLets)

I wanted to use PowerShell to create a simple report of open (active & pending) incidents in System Center Service Manager 2012, but the only examples I could find online used the SMLets. This wasn’t always obvious, but it soon became apparent when PowerShell choked on cmdlets like Get-SCSMObject.

While I’m sure the SMLets are handy for ad-hoc reports by administrators, I wanted the option for my report to be generated by a group of users on their own machines, so I was wary about deploying unnecessary (beta) software to a group of desktops. It was therefore preferable to do this using the native SCSM 2012 cmdlets (which the users already have installed as part of the SCSM 2012 installation).

Anton Gritsenko‘s mapping of the SMLets to their SCSM 2012 native commands was invaluable in the creation of this.

This script should work on any machine with PowerShell and SCSM Console installed. As with all PowerShell objects, this can then be output to HTML, CSV etc.
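The shape of the query can be sketched as below. This is an assumption-laden illustration rather than the script itself: it assumes the native Get-SCClass / Get-SCClassInstance cmdlets shipped with the console, the module path may differ per install, and 'yourscsmserver' is a placeholder:

```powershell
# Sketch: report open incidents using the native SCSM 2012 cmdlets (not SMLets).
# Module path and 'yourscsmserver' are placeholders for your environment.
Import-Module 'C:\Program Files\Microsoft System Center 2012\Service Manager\Powershell\System.Center.Service.Manager.psd1'

$incidentClass = Get-SCClass -Name 'System.WorkItem.Incident' -ComputerName 'yourscsmserver'

Get-SCClassInstance -Class $incidentClass -ComputerName 'yourscsmserver' |
    Where-Object { $_.Status.DisplayName -eq 'Active' -or $_.Status.DisplayName -eq 'Pending' } |
    Select-Object Id, Title, Status
```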

This is the first component of a more involved MI framework I have in mind, and was good practice with the SCSM 2012 API.

Remove machine objects from VMWare View’s ADAM database with PowerShell

26/02/14 –  I’ve updated this script to accept pipeline input and work a little more efficiently when removing multiple machines.

It’s one of those things that shouldn’t happen, but which inevitably does. Someone removes a View-managed VM from vSphere, and View refuses to realise it’s gone. It also sometimes happens when machines fail to provision correctly (e.g., due to a lack of available storage). The procedure is easy enough to follow, but it’s time-consuming and prone to error. In order to make the cleanup operation easier, I wrote up the quick function below. It relies on the free Quest AD cmdlets.

You’ll want to change yourconnectionserver to, er, your connection server. Obviously the normal caveats apply: the ones about running scripts you download from the internet in your Production environment.
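The function looks roughly like this. It's a sketch under assumptions — the Quest AD cmdlets must be loaded, 'yourconnectionserver' is a placeholder, and the DN is the standard View ADAM naming context:

```powershell
# Sketch: remove a machine object from the View ADAM database by display name.
# Requires the Quest AD cmdlets; 'yourconnectionserver' is a placeholder.
function Remove-ViewVM {
    param(
        [Parameter(Mandatory = $true, ValueFromPipeline = $true)]
        [string]$MachineName
    )
    begin {
        # Connect once to the ADAM instance on the connection server
        Connect-QADService -Service 'yourconnectionserver:389' | Out-Null
    }
    process {
        Get-QADObject -LdapFilter "(&(objectClass=pae-VM)(pae-DisplayName=$MachineName))" `
                      -SearchRoot 'OU=Servers,DC=vdi,DC=vmware,DC=int' |
            Remove-QADObject -Confirm:$false
    }
}
```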

Invoke a PowerShell script directly from Subversion

I’ve been thinking about doing something like this for a while. By adding this to PowerShell profiles, I can ensure that other people who use my scripts/functions are using the latest versions by having them run directly from a Subversion URL. This negates the requirement for them to have a local SVN repo (and for them to keep it up to date).

Our Subversion is set up for basic, rather than AD integrated, authentication, but I imagine AD integrated authentication would be easier to implement (probably using Invoke-WebRequest with the UseDefaultCredentials parameter). Rather than prompt the user at each use, I set up a service account which has only Read permissions on the repository, and hardcoded the Base64String encoded username and password into the internal version of this script.

The terrifying looking regex was based on SqlChow‘s example, with PS1 appended to the end.
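The mechanism reduces to a few lines: fetch the script text over HTTP with a Basic authorization header, then run it in the current session. The URL and credentials below are placeholders (the token is just Base64 of "user:pass"):

```powershell
# Sketch: run a script straight from a Subversion URL using basic auth.
# URL and service-account credentials are placeholders.
$url   = 'https://svn.example.com/repo/trunk/Scripts/MyFunctions.ps1'
$token = [Convert]::ToBase64String([Text.Encoding]::ASCII.GetBytes('svc-readonly:password'))

$web = New-Object System.Net.WebClient
$web.Headers.Add('Authorization', "Basic $token")

# Dot-sourcing equivalent: the downloaded text runs in the current session
Invoke-Expression ($web.DownloadString($url))
```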

Using AppSense to deliver Microsoft App-V 5 applications to non-persistent VDI desktops at logon

I’m currently working on user-centric application delivery to non-persistent VDI desktops. The rationale for this is that the more applications which can be delivered dynamically, the fewer pools we need to provision. This suits applications like Microsoft Project and Visio, which both tend to be used by a small number of people on each pool. These apps are too expensive to deploy to users who don’t need them, and need to be locked down to fulfil license requirements if deployed under Citrix. User-based deployment via App-V allows the application to be targeted to users in an existing pool; however, the non-persistent nature of the desktop means that the application needs to be delivered quickly (and silently) at each logon.

While App-V integrates with SCCM, user-targeted applications can take a couple of minutes to be available. This isn’t really an option for this kind of non-persistent desktop deployment, as it’s likely to result in confused users whose “missing” applications appear while they’re on the phone to the service desk.

We needed a way to deliver applications to users, based on their AD group membership during logon. AppSense (which we already had deployed) seemed ideal. AppSense has wizard-based integration for App-V, but the dialog only allows you to select App-V 4’s SFT files, not the new APPV files created by version 5.


I started by running a custom (PowerShell) script, using the new cmdlets. The script itself is pretty straightforward, checking to see if the cmdlets are loaded (and loading them if necessary), adding the client package from the file, and publishing it. We need to set the -Global parameter on Publish-AppvClientPackage as we’re running the script under the System context, and we want the package to be visible to the user.
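A minimal version of that logon action might look like this (the UNC path is the example used later in this post; adjust for your own packages):

```powershell
# Sketch: load the App-V 5 client module if needed, then add and publish the
# package globally so it is visible to the user logging on to this machine.
$packagePath = '\\apps\app-v$\Microsoft Office Project 2010\Microsoft Office Project 2010.appv'

if (-not (Get-Module -Name AppvClient)) {
    Import-Module AppvClient
}

# -Global is required because this runs under the System context
Add-AppvClientPackage -Path $packagePath | Publish-AppvClientPackage -Global
```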


The problem was that AppSense would wait until the script executed before completing the logon. With some of the larger applications, this resulted in a delay of 10-15 seconds before the desktop was usable.

Invoking the PowerShell executable with the command above passed as an argument worked as intended, but required encoding the argument in order to escape the pipe character. This meant that a simple installation like

PowerShell.exe -Command {Add-AppvClientPackage -Path "\\apps\app-v$\Microsoft Office Project 2010\Microsoft Office Project 2010.appv" | Publish-AppvClientPackage -Global}

Would turn into this:-


Not the most human-readable command, and it would be difficult to manage more than a few applications.

I got around this by writing up a script which would accept the APPV file path as an argument. I converted it to an EXE (using PowerShell Studio) and put it on a share. Conversion to EXE avoided changing the security policy to Bypass from RemoteSigned (or the requirement to deploy a certificate), and also simplified the command-line used in AppSense. The result looks like this:-


This action is run under the UserLogon node with a condition based on the user’s AD group membership. All assigned App-V installations are run synchronously.

As the user’s logon no longer waits for the execution to complete, the applications can take around 5-10 seconds to become available after the user is presented with their desktop. The delivery is quick, silent, and easy to document for handover to the team members who will be doing the day-to-day application management.

I’m sure AppSense will add support for App-V 5 in a future release, and it’ll be interesting to see what kind of functionality they build-in.