VMware Horizon Logoff Script with PowerCLI

One drawback with Horizon View is that it does not have the ability (through the GUI) to automate user logoffs or reboots on a daily/weekly basis. Thankfully, VMware has written some halfway decent PowerShell snap-ins so we can script such tasks. I say halfway decent because there are only 45 cmdlets and only 20 are actually useful…

Automated logoffs are useful in instant and linked clone scenarios (hopefully everyone is using instant clones and admiring how awesome they are) where you have to deploy an image update immediately and log off existing sessions later. Thankfully for my blog readers, I have written such a script. It utilizes the Get-RemoteSession and Send-SessionLogoff Horizon PowerCLI cmdlets. Unfortunately, there is no PowerCLI cmdlet to send messages to active sessions, so I had to convert each session's machine name to a string and use a ForEach loop to pipe those names into the msg.exe command.

The other unique thing about this script is that it only does half of the pool at a time (using some variable array magic). The way it’s currently written, it sends a message to half of the sessions that warns them they will be logged off in 15 minutes, warns them again at 60 seconds prior to logoff, logs the first half of sessions off, and then repeats the process with the second half of sessions.

Also, if you want to run this script real-time versus scheduling it in Task Scheduler, I have included Write-Host commands along the way so you can actually see which sessions are being warned and logged off throughout the whole process.

The only thing you'll need to do before running the script is adjust the 3 variables at the top: $PoolName (the name of the Horizon pool), $FirstWarning (how long a warning users get before logoff), and $FinalWarning (the second warning time before logoff). Run this or schedule it on a Connection Server and you're good to go! Enjoy!

###### VMware Horizon Logoff Script ######
###### Created by Nick Burton 10/9/2017 ######

# This script logs off any active sessions for a particular pool. This is useful for enforcing image updates.
# Simply set the pool name and two warning times below! Schedule it with Task Scheduler on a Connection Server.

#### KNOWN ISSUES ####
# Currently this script will NOT work if only one session exists due to the array usage in the variables. An IF statement could fix this.
# If two or three sessions exist, this script will logoff all sessions due to the half calculation and array locations starting at 0.

# First, set the poolname below:
$PoolName = "POOL NAME HERE"

# Next, set the first warning time prior to the reboot in seconds. 15 minutes = 900 seconds.
$FirstWarning = 900

# Finally, set the final warning time prior to the reboot in seconds.
$FinalWarning = 60


# Get first warning time minus final warning time in order to send second message at appropriate time
$WaitTime = $FirstWarning - $FinalWarning
$FirstWarningMinutes = $FirstWarning / 60

# Add all VMware snap-ins for View PowerCLI
Add-PSSnapin *vmware*

# Get all sessions for pool defined in PoolName variable and populate new sessions variable with string data
# Only strings can be accepted for upcoming msg command (VMware hasn’t introduced a message cmdlet)

$sessions = Get-RemoteSession -Pool_id $PoolName | %{$_.DNSName}
$sessionHalf = $sessions.count/2
Write-Host "Here are ALL of the sessions we are logging off:" -ForegroundColor Green
$sessions | Write-Host -ForegroundColor Green

# Populate logoff variable for use later
$Logoffs = Get-RemoteSession -Pool_id $PoolName

# Send first message to first half of sessions in pool using msg.exe using a foreach loop
Write-Host "Sending message to these sessions for pending reboot in $FirstWarningMinutes minutes:" -ForegroundColor Green
$sessions[0 .. $sessionHalf] | Write-Host -ForegroundColor Green
ForEach ($session in $sessions[0 .. $sessionHalf]) {msg /server:$session * "You will be logged off in $FirstWarningMinutes minutes! Please save all work!"}

# Wait for first warning time minus final warning time
Write-Host "Pausing for $WaitTime seconds..." -ForegroundColor Green
Start-Sleep -Seconds $WaitTime

# Send final warning
Write-Host "Sending message to these sessions for pending reboot in $FinalWarning seconds:" -ForegroundColor Green
$sessions[0 .. $sessionHalf] | Write-Host -ForegroundColor Green
ForEach ($session in $sessions[0 .. $sessionHalf]) {msg /server:$session * "You will be logged off in $FinalWarning seconds! Please save all work!"}
Start-Sleep -Seconds $FinalWarning

# Send the logoffs to half the sessions!
Write-Host "Logging off the following sessions!" -ForegroundColor Green
$sessions[0 .. $sessionHalf] | Write-Host -ForegroundColor Green
$Logoffs[0 .. $sessionHalf] | Send-SessionLogoff

# Wait two minutes for desktops to become available, etc. before doing the next half
Write-Host "Waiting two minutes for desktops to become available... there will likely be some errors thrown in a bit since some incoming sessions are already logged off; no big deal." -ForegroundColor Green
Start-Sleep -Seconds 120

# Send first message to last half of sessions in pool using msg.exe using a foreach loop
Write-Host "Sending message to these sessions for pending reboot in $FirstWarningMinutes minutes:" -ForegroundColor Green
$sessions[$sessionHalf .. $sessions.count] | Write-Host -ForegroundColor Green
ForEach ($session in $sessions[$sessionHalf .. $sessions.count]) {msg /server:$session * "You will be logged off in $FirstWarningMinutes minutes! Please save all work!"}

# Wait for first warning time minus final warning time
Write-Host "Pausing for $WaitTime seconds..." -ForegroundColor Green
Start-Sleep -Seconds $WaitTime

# Send final warning message to last half of sessions in pool using msg.exe using a foreach loop
Write-Host "Sending message to these sessions for pending reboot in $FinalWarning seconds:" -ForegroundColor Green
$sessions[$sessionHalf .. $sessions.count] | Write-Host -ForegroundColor Green
ForEach ($session in $sessions[$sessionHalf .. $sessions.count]) {msg /server:$session * "You will be logged off in $FinalWarning seconds! Please save all work!"}

# Send logoffs to the other half! This will likely have a single error since the median session has already been logged off.
Write-Host "Logging off the following sessions!" -ForegroundColor Green
$sessions[$sessionHalf .. $sessions.count] | Write-Host -ForegroundColor Green
$Logoffs[$sessionHalf .. $Logoffs.count] | Send-SessionLogoff

Write-Host "Script COMPLETE!" -ForegroundColor Green
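As the known-issues comments note, the half-split arithmetic misbehaves for pools of one to three sessions, and the inclusive index ranges overlap at the midpoint. Here is a hedged sketch of a safer split (variable names match the script above, but the floor/exclusive-range logic is my suggestion, not part of the original script):

```powershell
# Guard against tiny pools: with 0 or 1 sessions there is nothing to split.
if ($sessions.Count -le 1) {
    $firstHalf  = $sessions          # the whole (possibly empty) pool
    $secondHalf = @()
} else {
    # Floor the midpoint and keep the ranges exclusive so no session
    # appears in both halves (the original 0..$sessionHalf overlaps).
    $mid        = [math]::Floor($sessions.Count / 2)
    $firstHalf  = $sessions[0 .. ($mid - 1)]
    $secondHalf = $sessions[$mid .. ($sessions.Count - 1)]
}
```

The warn-and-logoff passes can then iterate over $firstHalf and $secondHalf instead of the overlapping index ranges, which also removes the expected error on the median session.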

Customizing VMware Horizon Connection Server Login Screen

Let's finally do something on VMware Horizon! This post will cover how to customize your Connection Server login screen, something that is becoming more important as HTML5 access gains popularity. I could not find any documentation on this, so I dug through the install directories to find all of the relevant images and files. FYI, this was done on a Horizon 7.2 Connection Server; paths may vary depending on your version.

Here are the various things we will customize and where to find them. Each path is relative to C:\Program Files\VMware:

Background: VMware View\Server\broker\webapps\portal\webclient\icons-5622958\bg_image.jpg

Logo on top: VMware View\Server\broker\webapps\portal\webclient\icons-5622958\logo.png

All text in initial login screen*: VMware View\Server\broker\webapps\portal\WEB-INF\classes\com\vmware\vdi\installer\i18n\bundle_en.properties

  • * Requires restart of the VMware Horizon View Web Component service. I recommend just restarting the VMware Horizon View Connection Server service or the entire server.

I also recommend simply renaming the old files to .old so you always have the originals. Don't forget to clear your cookies or open an incognito window, or else the original images will be served from cache!
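The rename-and-replace step can be scripted; here's a minimal sketch to run in an elevated PowerShell prompt on the Connection Server. The file names come from the paths above, but the icons-5622958 folder name can differ by build, and C:\Branding is a placeholder for wherever your replacement files live:

```powershell
# Folder names (icons-5622958 in particular) vary by Horizon build; adjust to match.
$webclient = 'C:\Program Files\VMware\VMware View\Server\broker\webapps\portal\webclient\icons-5622958'

# Keep the originals by renaming them to .old before dropping in your own files.
Rename-Item -Path "$webclient\bg_image.jpg" -NewName 'bg_image.jpg.old'
Rename-Item -Path "$webclient\logo.png"     -NewName 'logo.png.old'
Copy-Item -Path 'C:\Branding\bg_image.jpg' -Destination $webclient
Copy-Item -Path 'C:\Branding\logo.png'     -Destination $webclient

# Text changes in bundle_en.properties require a service restart to take effect.
Get-Service -DisplayName 'VMware Horizon View Connection Server' | Restart-Service
```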

When editing the bundle_en.properties file, it is pretty self-explanatory which text to edit. For example, I changed the following lines to:

install.message.first=Here’s where you can customize some text!

install.message.second=Having issues? Contact the help desk at 123-456-7890!

Here's what it looks like after doing that and replacing the logo and background images. Don't forget that the background image is .jpg and the logo is .png format; also don't forget to restart the appropriate service, or just restart the Connection Server.


If your Connection Servers are behind a load balancer (they are, right?), then you can simply place a different background on each Connection Server so it looks like the backgrounds are randomly generated. Cool, huh?

Happy branding!

Update a Static/Dedicated MCS Image (vSphere)

The other day I made a change to the “master” machine that was initially used for the deployment of several MCS static/dedicated desktops. When I went to deploy additional desktops, I expected the changes to persist from the master. Guess what? That was not the case. But WHY?

If you pay close attention when you initially deploy the image, you will notice that MCS does a full VMDK copy of your snapshot chain into a folder on every datastore defined in your hosted XenDesktop environment. This makes desktop creation extremely quick when scaling out additional VMs because it 1) negates the need to copy VMDKs across datastores during desktop creation and 2) negates the need to consolidate snapshots during creation. The folder will typically be the machine catalog name + basedisk + a random datastore identifier assigned by XenDesktop. This applies to all MCS images: static and pooled.

We obviously want to keep the master of dedicated machines up-to-date to avoid unnecessary SCCM pushes, Windows updates, missed software, etc. when we deploy new desktops. Unfortunately Citrix does not give a GUI option for this, like we get on our pooled desktops in Studio:

So, what is almost always the method of action when no GUI option is available? That’s right – PowerShell!

There are two main things to consider here: the “Provisioning Scheme” and the new “Master Image.” The provisioning scheme name almost always matches the machine catalog name; it keeps track of the master image location and some other metadata. The master image is just the snapshot of your master machine from which MCS does that full VMDK copy to each datastore, as we talked about earlier.

Let’s get right to it. First, open PowerShell on your DDC, and get the provisioning scheme name and the current snapshot that is being used for the master:

add-pssnapin *citrix*

get-provscheme | select ProvisioningSchemeName, MasterImageVM

This will return two very important things for each MCS machine catalog: 1) the ProvisioningSchemeName and 2) the MasterImageVM. You will notice that this contains the name of the snapshot that mirrors the name you provided in vSphere, followed by .snapshot. This makes it easy to locate! Let’s assume our current snapshot is named “v1” and our master is named “XDMaster1.” So the MasterImageVM should look like:

XDHyp:\HostingUnits\<Cluster Name>\XDMaster1.vm\v1.snapshot

Note: If your VM is in a resource pool, this path will also contain that as a “directory.”

We will create a snapshot named “v2” on the master, make some changes, updates, etc., and shut down the master. Let's verify that XenDesktop now sees this snapshot in our hypervisor environment:

get-childitem -path "XDHyp:\HostingUnits\<Cluster Name>\XDMaster1.vm\v1.snapshot"

You will see that v2.snapshot is now a child item of your v1.snapshot! Good deal! So how do we point MCS to this snapshot? Simple:

First, let’s make it easy on ourselves and create a couple of variables. The two important ones that I touched on earlier: ProvisioningSchemeName and MasterImageVM:

$ProvScheme = "Windows 10 Static"

“Windows 10 Static” will be the ProvisioningSchemeName from earlier, or usually the name of your Machine Catalog.

$NewMasterImage = "XDHyp:\HostingUnits\<Cluster Name>\XDMaster1.vm\v1.snapshot\v2.snapshot"

That will be the full path to your new snapshot. Remember to use get-childitem to ensure that the DDC sees your new snapshot.

Now, we will use the Publish-ProvMasterVMImage cmdlet to wrap it all up!

Publish-ProvMasterVMImage -ProvisioningSchemeName $ProvScheme -MasterImageVM $NewMasterImage

After running this command, pay attention to your vSphere tasks. You will see a temporary VM get copied, VMDKs get copied to the various datastores, and you should finally get a response from PowerShell that states 100% completion and where the new master image location points.
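If you'd rather watch progress from PowerShell than from the vSphere task pane, the publish can be kicked off asynchronously and polled; a sketch (Get-ProvTask comes from the same Citrix snap-in, but the Active and TaskProgress property names are as I recall them from the MCS SDK, so verify with Get-ProvTask | Format-List first):

```powershell
# Kick off the publish asynchronously; it returns a task GUID instead of blocking.
$taskId = Publish-ProvMasterVMImage -ProvisioningSchemeName $ProvScheme `
    -MasterImageVM $NewMasterImage -RunAsynchronously

# Poll the task until the VMDK copies complete (the 30-second interval is arbitrary).
do {
    Start-Sleep -Seconds 30
    $task = Get-ProvTask -TaskId $taskId
    Write-Host "$($task.TaskProgress)% complete"
} while ($task.Active)
```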

If you see the dreadful red text, pay attention and make sure you got your paths correct. It is easy to mistype the XDHyp path, forget quotes, etc.

I hope this post thoroughly covers how to update your master image on a static/dedicated Machine Catalog delivered via MCS! Thank you, PowerShell!

Restoring an Office 365 User Sync'd with AD

Background and Intro

Office 365 has an excellent method for providing a common identity for cloud and on-premises resources. Why would an IT administrator want to manage two separate accounts with different passwords, attributes, and group membership? Thankfully, Office 365 has DirSync (now Azure AD Connect, but DirSync sounds so much cooler, and I will forever call it that) to integrate the on-prem Active Directory with Office 365, backed by Azure AD.

Hopefully in this day and age, now that we've reached the end of life for Server 2003, you have an Active Directory environment living at a 2008 R2 (or later) functional level with the AD Recycle Bin enabled. Right? Unfortunately, in the not-so-perfect world we live in, there are still legacy applications and other roadblocks that keep organizations from making this jump.

Who hasn't made the mistake of deleting a user account in a non-recycle-bin-enabled environment? And who wants to do an authoritative restore or tombstone reanimation? Why not just re-create the AD object? Oh, they're sync'd with O365 and have a cloud mailbox as well…

The Process

So, how can we create a brand new user account in AD and re-map their cloud mailbox to the account? Or perhaps the AD object somehow got corrupted and we need to delete and re-create it from scratch. But, again, they have an Office 365 mailbox tied to their sync'd user account. At first glance, it looks like the user and their mailbox get thrown into oblivion, but the account instead gets converted to a cloud-only account within the Deleted Users section of your Office 365 admin portal.

So go ahead and restore the object, and notice that it becomes a cloud-only object. We've saved the mailbox, but we obviously want it mapped back to our new AD user. Next, create the new user object in AD with the appropriate email address and SMTP: value in the ProxyAddresses attribute.

Matching the ObjectGuid

So now we need to grab the AD user's ObjectGuid. This is the value used to match the on-prem user account with the cloud object. Run the following to export the user's attributes (including the ObjectGuid) to a text file, replacing the CN, OU, and DC values as needed in the DN:

ldifde -d "CN=User1,OU=Users,DC=domain,DC=com" -f c:\User1.txt

Open PowerShell and match the cloud user's ImmutableId to the AD ObjectGuid:


Set-MsolUser -UserPrincipalName user@domain.com -ImmutableId "someGuid="
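Note that ImmutableId is simply the base64-encoded ObjectGuid, which is why the ldifde export ends in =. If you have the AD module handy, you can skip the text file and compute it directly; a sketch (user1 and user@domain.com are placeholders, and you need Connect-MsolService first for the MSOnline cmdlet to work):

```powershell
# Requires the ActiveDirectory module and the MSOnline module (run Connect-MsolService first).
Import-Module ActiveDirectory

# ImmutableId is the ObjectGuid's bytes, base64-encoded.
$guid        = (Get-ADUser -Identity 'user1').ObjectGUID
$immutableId = [System.Convert]::ToBase64String($guid.ToByteArray())

Set-MsolUser -UserPrincipalName 'user@domain.com' -ImmutableId $immutableId
```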

Run a DirSync and verify

Now run your DirSync – you should now see that the O365 user shows “Synced with Active Directory” and the user’s original mailbox is mapped to the new user account!
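On current Azure AD Connect builds, that sync can be triggered from PowerShell on the sync server; a sketch (the ADSync module ships with Azure AD Connect, so this assumes a reasonably modern build rather than classic DirSync):

```powershell
# Run on the Azure AD Connect server; a Delta sync is enough to pick up the change.
Import-Module ADSync
Start-ADSyncSyncCycle -PolicyType Delta
```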

Citrix XA/XD SQL Mirroring

As you probably know, SQL is one of the foundations of a successful Citrix deployment. All transactions processed in a XenApp/XenDesktop environment must go through SQL (Citrix introduced connection leasing in 7.6 to temporarily work around a SQL outage). SQL mirroring is Citrix's recommended method for a highly available deployment, and it also seems to be the cheapest and easiest to deploy. SQL mirroring is application-aware, meaning it doesn't use any sort of VIP to trick the application into thinking either SQL server represents the same instance; Citrix will actually auto-detect during site creation that another database on another server will be used as a failover.

I will start off by saying that I am no SQL guru; I know a few basic queries, join concepts, etc. Many Citrix admins aren't SQL experts either, so we typically leave any sort of SQL-related stuff to the database guys. Well, I went ahead and gave SQL mirroring a shot myself in a recent deployment, and I was actually surprised by how easy it was! I must stress that setting it up the first time during the initial deployment is far easier than re-configuring after the site has been deployed and pushed to production. So, I recommend doing it right the first time, as going back and re-configuring will introduce downtime to the environment.

So, let's start with a brand new deployment; before you even touch the Citrix layer, SQL must be taken care of first. This was done with SQL 2014 on Server 2012 R2. You will need 2 servers running SQL Standard and 1 server with SQL Express acting as the witness. The witness can typically be installed on a multi-role server, such as a delivery controller, to save resources if needed. Since this is a standalone SQL environment just for the Citrix servers, we will keep the default instance name.

Start off by installing SQL Standard on both servers. You will need the database engine, client/server components, and the COMPLETE management tools. I realized afterward that installing only the basic management tools will not include the mirroring options within Management Studio. After that is complete, install SQL Express on your witness.

We will call the principal (primary) database server SQL1, mirror will be SQL2, and witness will be SQL3.

Let’s start by creating a SQL database on SQL1 with the full recovery model. Make sure that the Collation is set to Latin1_General_100_CI_AS_KS in order for Citrix to properly interact with it.


Set Is Read Committed Snapshot On to True after the database is created. This improves performance, and you will not get a warning when the site database is set up. See here for the Citrix article on this.


Do a full backup of SQL1 by right-clicking the database and go to backup. Make note of the location.


Copy the .bak to the same location on SQL2. Open Management Studio on SQL2, go to the Databases folder (you should not have any databases on SQL2 yet!), and choose Restore Database…


Under the Restore Database options, make sure that RESTORE WITH NORECOVERY is selected in the recovery state. This is a very important step that is often overlooked, and will result in an error when attempting to initiate the mirror.
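The backup, copy, and restore steps can also be done from PowerShell; a sketch using the SqlServer module's cmdlets (the instance names, database name, and share path are placeholders, and -NoRecovery is the critical flag that matches the GUI step above):

```powershell
# Requires the SqlServer PowerShell module. Names and paths are illustrative.
Import-Module SqlServer

# Full backup on the principal (SQL1)...
Backup-SqlDatabase -ServerInstance 'SQL1' -Database 'CitrixSite' `
    -BackupFile '\\SQL2\Backups\CitrixSite.bak'

# ...then restore on the mirror (SQL2) WITH NORECOVERY so it can join the mirror.
Restore-SqlDatabase -ServerInstance 'SQL2' -Database 'CitrixSite' `
    -BackupFile '\\SQL2\Backups\CitrixSite.bak' -NoRecovery
```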


Okay – that essentially sums up the preparation process for the mirror, so we’re about halfway there! Now it’s time to actually initiate it. On SQL1, right-click the database and go to Tasks > Mirror. This will take you to the Mirror properties of the database.


** If the mirror properties do not show up, that usually means you installed only the basic management tools, not the complete set. Go back and modify the existing SQL installation, adding the complete management tools.

Before we start the mirror, we have to configure the security settings for it. Go ahead and click that so the security wizard comes up. Mirroring defaults to TCP 5022, so ensure that appropriate firewall rules allow this connection (including your witness instance!), on top of your basic SQL ports.

You should breeze through this wizard, ensuring that you specify the correct SQL server instances.


You will be prompted with a pop-up menu after the security wizard. Go ahead and click Start Mirroring to initiate the process:


Bam! You have just successfully setup a SQL mirrored instance. You will notice that the mirror properties will show the state of the mirror instance, so this status page is usually a good place to start when troubleshooting issues.


If you happen to run into an error during the initialization, particularly a 1418 error, follow this blog for some good pointers.

All righty, go ahead and start your new site setup. When Studio asks for the instance and database, make sure to point to SQL1 (the principal) for your databases. It will automatically configure your connection strings to use SQL2 as the failover. Please note that we used a single database for the site, logging, and monitoring. It's usually best practice to keep these in 3 separate databases, in which case you will need to configure mirroring for each database using the above steps.

When performing backups, log round-ups, etc., make sure to use the principal as the backup source. Do not back up the mirror.

Thanks for reading – I hope this helps!

Office 365 License Changes

If you haven’t already heard, Microsoft is removing their Small Business, Small Business Premium, and Midsize Business plans, and replacing them with Business, Business Essentials, and Business Premium subscriptions. Starting October 2014, companies will be forced to subscribe to the new models at their next subscription renewal.

Many companies under 300 users have taken advantage of these plans, primarily due to the cost savings compared to the Enterprise (E1, E3, E4) subscriptions.

It is extremely important to note that there are some slight differences between the old plans and the new, particularly for those currently using Small Business Premium or Midsize Business; both will be pushed toward Business Premium.

Two very important things to note for Business Premium:

  1. Users will lose Microsoft Access from their Office suite, assuming they are using Office 365 Pro Plus.
  2. Under Midsize Business, Microsoft allowed users to license their Office products within RDS/XenApp environments. Now, if a user attempts to license Office on a server that is an RDS session host, they will receive the following error:


So, if a company utilizes Office Pro Plus in any sort of RDS/XenApp environment, they must now subscribe to an Enterprise subscription (or purchase a volume license). Thankfully, Microsoft now allows you to mix and match users between Business and Enterprise plans, so it isn't an all-or-nothing scenario if only a subset of your users utilize RDS/XenApp.

The following article is helpful for making the transition:


Remember to carefully go over the changes to ensure this switch will not affect your users. It is pretty disappointing that Microsoft does not give you any notice of these changes when renewing your subscription.

Hope this helps!


Citrix Personal vDisks

Citrix PvD (or Personal vDisk) is a great way to give users the freedom to install their own applications and customizations while still using the standard vDisk as the base disk. Citrix has made major improvements to this architecture over the years. The personal vDisk is a virtual disk that you attach to the virtual machine, where the user's reads/writes go (so that changes persist across logoff/logon). XenDesktop will magically “merge” the base and PvD in order to reflect version changes, etc.

One of the main problems users will run into is running out of storage on their PvD. This prevents the user from installing applications, and of course you, as the Citrix admin, will be responsible for fixing it. This article is written assuming an ESXi/vSphere environment, but it pertains to any virtual environment (Hyper-V, XenServer).

There are a couple of basic things you should know about PvD customizations. The settings are contained within HKLM\SOFTWARE\Citrix\personal vDisk\Config. You can change them by simply creating a new version on the golden image. The main setting you will want to tweak is “PercentOfPvDForApps.”

For whatever reason, Citrix defaulted this to 50/50. This means that if you provide a user with a 60GB PvD (we’ll use this number for the rest of the article), 30GB will be used for application installs, and 30GB will be used for the user’s profile. This could be overkill for the profile, especially if you are using profile management. So make sure and adjust this setting to best suit your environment.
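On the golden image (in a new version), that tweak can be scripted; a minimal sketch (the 70 here is an arbitrary example giving application installs 70% of the PvD, and you should check the existing value's type with Get-ItemProperty first so you write the same type back):

```powershell
# Run on the golden image; give application installs 70% of the PvD (example value).
$pvdConfig = 'HKLM:\SOFTWARE\Citrix\personal vDisk\Config'
Get-ItemProperty -Path $pvdConfig -Name 'PercentOfPvDForApps'   # inspect current value
Set-ItemProperty -Path $pvdConfig -Name 'PercentOfPvDForApps' -Value 70
```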

Another thing you should know is that (by default) the user will see 2 drives – the C and P drive. The P drive is actually the whole 60GB VMDK you attached for the PvD. The C drive is a hidden VHD contained within the P drive, which represents the thick provisioned app percentage that you specified in the registry! So C = P * PercentOfPvDForApps(%)

Hidden VHD in the vDisk

From what I understand, the UserData.v2.vhd.thick_provision file is essentially the initial thick-reserved app space, while any future expansions go to the thin-provisioned UserData.V2.vhd.

So, those are some basics of personal vDisks in MCS or PVS. I am also going to describe a specific scenario I ran into while expanding a user's vDisk:

As I talked about before, expanding a vDisk is really easy. Simply expand the VMDK in vSphere console, partition it out using disk manager in Windows, shut down the user’s machine, start it up, and Citrix will automagically expand out the C:\ drive based on the PercentOfPvDForApps in the registry.

Well, I ran into a user who had about 11% of his C:\ drive remaining, and it refused to expand, even when I expanded his personal vDisk VMDK. His applications were failing to install because he did not have sufficient disk space. I was baffled!

I happened to stumble across an article that said the user’s C:\ drive portion would not expand unless it had less than 10% remaining (just my luck, the user had around 11%). So I moved an application folder over to his C:\ drive just to eat up space, restarted his VM, and bam! The C:\ drive expanded to the percentage I had expected.

Hopefully this article helped you understand a little bit about how personal vDisks operate, and I hope that the scenario helps someone as unlucky as me out there. Enjoy!!

Adding UniFi Access Point to Controller

Hi all! I am going to do a quick run-through of adding UniFi Access Points to a controller running on a Windows host.

Let me start off by saying how impressed I am with the UniFi line. They really are giving the big wireless guys, including Cisco and Meraki, a run for their money. With their no-license model and free controller software (that's right, FREE), they are a great solution for small, medium, and even large businesses. You can do a ton with UniFi, including guest policies, portal customization, WPA-Enterprise, and much more. Hopefully I will get a chance to cover all of these cool features in the future.

So, you've purchased a single UniFi WAP or a 3-pack and have no idea what to do next. Well, first you MUST install the controller software on a Windows, Mac, or Linux machine. If you don't have a PoE switch, power injectors also come in the box!

After installing the controller, configure the initial admin username/password and log in. Plug in the APs. Hopefully, your APs will show up automatically during the setup process and you can add them to the controller. If they do not show up, chances are your APs are on a different subnet than your controller. Basically, the APs send out a broadcast discovery to find the controller, and standard networking concepts tell you that broadcasts do not traverse subnets. So how do you add an AP to a controller on a different subnet?

You will need to get the IP address of the AP by searching on your DHCP server; the MAC address will start with 04:18. After figuring out the IP address, SSH into it with a client like PuTTY and log in to the AP with ubnt / ubnt. Then, run the following command:

set-inform http://<ip of controller>:8080/inform

This causes the AP to send a discovery unicast directly to the controller, and you should now see the AP appear there. Adopt the AP and run the set-inform command again. The AP will reboot and then be officially adopted by the controller!

Unfortunately, this particular multi-subnet scenario is not documented well in the UniFi documentation. So I hope this helps – enjoy your new access points!

Official user guide: http://dl.ubnt.com/guides/UniFi/UniFi_AP_AP-LR_User_Guide.pdf

Good Wiki documentation: http://wiki.ubnt.com/UniFi_FAQ

Expanding a Citrix PVS VHD Image

Ah, Citrix Provisioning Services… arguably one of the most ingenious technologies in Citrix's XenApp/XenDesktop suite. PVS gives administrators the ability to install application updates, perform image maintenance, maintain version control, etc. without ANY service interruption to production virtual desktops. On top of that, the cache-to-RAM/overflow-to-disk option (7.1) gives storage administrators a huge relief, since all VM “writes” are written to a cache pool located in the RAM of the virtual machine! (Read more here.)


However, I have noticed one major flaw in PVS, and that is expanding the base image VHD (the read-only copy that everyone is streaming from). Is it doable? Yes. Is it supported by Citrix? No (CTX118608). Is it way more difficult than it should be? Maybe. That really depends on whether you chose a 16MB or 2MB dynamic block size. Although you [theoretically] get better performance out of the 16MB block size (while sacrificing a small amount of storage, since each write consumes an entire 16MB block), you lose the ability to natively manage the VHD using diskpart or Hyper-V, because Windows is not compatible with the 16MB-block VHD! Does Citrix have this documented anywhere? Absolutely not.

If you’re lucky enough to have the 2MB block size, you can simply create a new merged base of your image, rename the VHD and PVP files, and delete the LOK file. You will then want to delete the version in the Versions window. This will delete the version from the PVS database, but since you’ve renamed it, it will not delete the VHD.

Afterward, run diskpart:

select vdisk file="[Location of VHD]"
list vdisk
expand vdisk maximum=[#ofMB]

You can then exit diskpart and expand the partition by mounting the VHD and expanding it in disk manager. Then take the disk offline and unmount it. Afterward, you can import the disk back into PVS.
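If you'd rather stay in PowerShell than juggle diskpart and Disk Manager, a machine with the Hyper-V module can do the same expansion; a sketch (the path and 80GB size are placeholders, and for the same compatibility reason as above this only works on the standard 2MB-block VHDs):

```powershell
# Requires the Hyper-V PowerShell module; works only on standard (2MB-block) VHDs.
$vhd = 'D:\Store\vdisk.vhd'

# Grow the VHD container itself (80 GB is an example size).
Resize-VHD -Path $vhd -SizeBytes 80GB

# Mount it, extend the last partition into the new space, then detach.
Mount-VHD -Path $vhd
$disk = (Get-VHD -Path $vhd).DiskNumber
$part = Get-Partition -DiskNumber $disk | Sort-Object PartitionNumber | Select-Object -Last 1
$max  = ($part | Get-PartitionSupportedSize).SizeMax
Resize-Partition -DiskNumber $disk -PartitionNumber $part.PartitionNumber -Size $max
Dismount-VHD -Path $vhd
```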

Now, the really tricky part is expanding the VHD when you have the 16MB block size. If you try and mount the VHD like we did the 2MB-blocked one, you’ll get an error about the disk being corrupted. The only way is to convert this back to the 2MB VHD. So, how can we do that?

First, you will need to utilize a tool called CVhdMount.exe, in combination with the disk tools found in Hyper-V. Chances are, your PVS server does not have the Hyper-V role installed, so you will need to put this tool on your Hyper-V server. This tool can be found in Program Files\Citrix\Provisioning Services on your PVS server. You will also need to install the drivers found in Provisioning Services\Drivers on your Hyper-V server so that it can work with these special VHDs. Copy both the CVhdMount.exe and drivers folder to the Hyper-V host. The rest of the instructions are on Hyper-V.

Next, right-click cfsdep2.inf and choose Install. Then, in Device Manager, manually add legacy hardware as a storage controller. Click Have Disk… and point it to CVhdMp.inf. This installs the Citrix Virtual Hard Disk Adapter so that CVhdMount.exe has the instructions on how to operate.

Afterward, run CVhdMount.exe -p 1 "[path to VHD]"

You’ll get a magical message that the bus interface has been opened with device serial #1. Now, open up disk manager, and you will see that the disk is there! Set the disk to Offline so we can perform the VHD conversion, without risking the chance of corruption.

Next, in Hyper-V Manager, select New > Hard Disk and create a dynamic VHD. Instead of creating a new blank virtual disk, copy the contents of an existing physical disk (your magically attached VHD!). This could take a while, depending on the specs of your Hyper-V host. After the new 2MB VHD is created, edit the disk using Hyper-V Manager to expand it to the new appropriate size.

Copy the VHD over to your PVS storage, import it in the PVS Console, and you should now see your new 2MB-blocked VHD!


Hey everybody! My name is Nick Burton and this is my first IT blog; well, my first blog ever! I’m extremely excited to get this started, as I have been meaning to do it for quite some time.

Throughout the years of my professional IT experience, I have run into many issues that are undocumented or poorly documented by product vendors, ranging from Office 365 down to the nitty-gritty of Citrix Provisioning Services. I hope to write at least one blog post a week so that I can share some unique tips and tricks for different scenarios. I am also going to try to post at least once a month about a cool technology that I would like to share with the world, including general (and unbiased) ratings for the product.

Feel free to check out my About Me page to learn a little bit more about my background. I hope you all enjoy!