List Horizon Desktop Pool To Global Entitlement Associations

As promised in my last post, here is the function for mapping Desktop Pools to their associated Global Entitlements – Get-HVPoolsToGEs. Enjoy!

# Get desktop pool to Global Entitlement associations. Requires the Join-Object module and the HVHelper module from PowerCLI.
# Run "Install-Module -Name Join-Object -RequiredVersion 2.0.1" to install the Join-Object module.
# Connect to the pod using Connect-HVServer prior to execution.
# Function written by Nick Burton -

Function Get-HVPoolsToGEs {

    # Get desktop pool info
    $hvpools = Get-HVPool
    $poolInfo = @()

    foreach ($hvpool in $hvpools) {
        $poolInfo += New-Object PSObject -Property @{
            "PoolName"   = $hvpool.Base.Name;
            "AssignedGE" = $hvpool.GlobalEntitlementData.GlobalEntitlement.Id;
        }
    }

    # Get global entitlement info
    $hvGEs = Get-HVGlobalEntitlement
    $GEInfo = @()

    foreach ($HVGE in $HVGEs) {
        $GEInfo += New-Object PSObject -Property @{
            "Name" = $HVGE.Base.DisplayName;
            "GEID" = $HVGE.Id.Id;
        }
    }

    # Join the two arrays on the Global Entitlement ID and output the readable names
    $JoinParams = @{
        Left              = $poolInfo
        Right             = $GEInfo
        LeftJoinProperty  = 'AssignedGE'
        RightJoinProperty = 'GEID'
        Type              = 'OnlyIfInBoth'
        Prefix            = 'GE_'
    }
    Join-Object @JoinParams | Select-Object PoolName, GE_Name
}

List Horizon App to Global Entitlement Associations

Sometimes it is handy to list all of the associations between Horizon Application Pools and Global Entitlements. Here is a quick function I wrote to do so! Introducing Get-HVAppsToGEs.

You must first install the HVHelper module with PowerCLI, as well as a really cool module called Join-Object for performing SQL-like joins between two PowerShell arrays. This is required because the relationship is based on the Global Entitlement ID, but we want to translate this to the readable name.
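If you would rather avoid the extra module, the same ID-to-name translation can be sketched with a plain hashtable lookup. This is only a toy illustration with made-up objects standing in for what Get-HVApplication and Get-HVGlobalEntitlement return, not a replacement for the function below:

```powershell
# Toy stand-ins for the app and Global Entitlement objects
$appInfo = @(
    [PSCustomObject]@{ AppName = 'Notepad'; AssignedGE = 'ge-001' },
    [PSCustomObject]@{ AppName = 'Calc';    AssignedGE = 'ge-002' }
)
$GEInfo = @(
    [PSCustomObject]@{ Name = 'GE-Notepad'; GEID = 'ge-001' },
    [PSCustomObject]@{ Name = 'GE-Calc';    GEID = 'ge-002' }
)

# Build a lookup table from GE ID to display name
$geLookup = @{}
foreach ($ge in $GEInfo) { $geLookup[$ge.GEID] = $ge.Name }

# Translate each app's GE ID to the readable name (inner-join behavior)
$result = foreach ($app in $appInfo) {
    if ($geLookup.ContainsKey($app.AssignedGE)) {
        [PSCustomObject]@{ AppName = $app.AppName; GE_Name = $geLookup[$app.AssignedGE] }
    }
}
$result
```

Join-Object just does this (and more) generically, which is why it is worth installing for anything beyond a one-off.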

# Get apps to Global Entitlement associations. Requires the Join-Object module and the HVHelper module from PowerCLI.
# Run "Install-Module -Name Join-Object -RequiredVersion 2.0.1" to install the Join-Object module.
# Connect to the pod using Connect-HVServer prior to execution.
# Function written by Nick Burton -

Function Get-HVAppsToGEs {

    # Get app info
    $hvapps = Get-HVApplication
    $appInfo = @()

    foreach ($hvapp in $hvapps) {
        $appInfo += New-Object PSObject -Property @{
            "AppName"    = $hvapp.Data.Name;
            "AssignedGE" = $hvapp.Data.GlobalApplicationEntitlement.Id;
        }
    }

    # Get global entitlement info
    $hvGEs = Get-HVGlobalEntitlement
    $GEInfo = @()

    foreach ($HVGE in $HVGEs) {
        $GEInfo += New-Object PSObject -Property @{
            "Name" = $HVGE.Base.DisplayName;
            "GEID" = $HVGE.Id.Id;
        }
    }

    # Join the two arrays on the Global Entitlement ID and output the readable names
    $JoinParams = @{
        Left              = $appInfo
        Right             = $GEInfo
        LeftJoinProperty  = 'AssignedGE'
        RightJoinProperty = 'GEID'
        Type              = 'OnlyIfInBoth'
        Prefix            = 'GE_'
    }
    Join-Object @JoinParams | Select-Object AppName, GE_Name
}

Here’s what the result looks like:

Enjoy! I’ll likely follow this up with a Desktop Pool-to-GE script later.

UPDATE: As promised, here is the desktop pool-to-GE version.

Get VMware Unified Access Gateway (UAG) Session Count Using PowerShell

With the recent COVID-19 catastrophe, we had the need to automatically pull UAG session statistics and report them to a dashboard so we could keep track of our external Horizon users to ensure the 2k limit per UAG was not exceeded. What better way to automate this than a little PowerShell?

VMware has some documentation on the API URL and definitions here – unfortunately there’s no information on how to interact with the API, authenticate, etc., so I had to figure that out for myself. Special thanks to pallabpain for the great REST framework that I used for this script – I will certainly continue to utilize it!

I recommend creating a read-only account in the GUI under Advanced Settings -> Account Settings. Definitely no reason to use your production admin account for this:

Here’s the full script – you just need to paste it into a .ps1, execute it, and it will become an available function:

# This is a script to grab UAG authenticated sessions
# Written by Nick Burton

# Ignore cert errors

add-type @"
    using System.Net;
    using System.Security.Cryptography.X509Certificates;
    public class TrustAllCertsPolicy : ICertificatePolicy {
        public bool CheckValidationResult(
            ServicePoint srvPoint, X509Certificate certificate,
            WebRequest request, int certificateProblem) {
            return true;
        }
    }
"@
[System.Net.ServicePointManager]::CertificatePolicy = New-Object TrustAllCertsPolicy
[System.Net.ServicePointManager]::SecurityProtocol = [System.Net.SecurityProtocolType]'Ssl3,Tls,Tls11,Tls12'

# Create a function for the UAG API call - thanks to pallabpain for this great REST framework!

function Get-UAGSessionCount([string]$username, [string]$password, [string]$UAGHostName) {
  $url = "https://"+$UAGHostName+":9443/rest/v1/monitor/stats"
  # Step 1. Create a username:password pair
  $credPair = "$($username):$($password)"
  # Step 2. Encode the pair to a Base64 string
  $encodedCredentials = [System.Convert]::ToBase64String([System.Text.Encoding]::ASCII.GetBytes($credPair))
  # Step 3. Form the header and add the Authorization attribute to it
  $headers = @{ Authorization = "Basic $encodedCredentials" }
  # Step 4. Make the GET request
  $responseData = Invoke-WebRequest -Uri $url -Method Get -Headers $headers -UseBasicParsing

  $firstString = "<authenticatedSessionCount>"
  $secondString = "</authenticatedSessionCount>"
  $content = $responseData.Content

  # Get the value between the two strings above
  $pattern = "$firstString(.*?)$secondString"

  # Output result of pattern match using regex
  $result = [regex]::Match($content,$pattern).Groups[1].Value

  # Return result
  return $result
}

Execute it with PowerShell, and you now have an available PowerShell function called Get-UAGSessionCount:

Get-UAGSessionCount -username [read-only UAG account] -password [password] -UAGHostName [FQDN or IP of UAG]

After executing, you’ll simply get a number back. This number represents the AuthenticatedSession count found in the UAG GUI:
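Since the stats endpoint returns XML, you could also skip the regex and let PowerShell's [xml] accelerator pull the value out. A sketch against a sample payload (the element layout around authenticatedSessionCount is assumed here for illustration; only the tag name comes from the real API):

```powershell
# Sample response body; in the real function this would be $responseData.Content
$sample = '<accessPointStatusAndStats><authenticatedSessionCount>42</authenticatedSessionCount></accessPointStatusAndStats>'

# Cast to [xml] and find the element by name, wherever it sits in the tree
$xml = [xml]$sample
$count = [int]$xml.SelectSingleNode('//authenticatedSessionCount').InnerText
$count
```

The XPath `//authenticatedSessionCount` matches the element at any depth, so this keeps working even if the surrounding structure shifts between UAG versions.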

With this, you can do some pretty cool stuff. Add it to a scheduled task that outputs the value to a CSV with the date/time. Use it in PowerBI, Tableau, etc. across your UAGs to have a real-time dashboard of your remote workforce. Happy monitoring!

VMware Horizon Logoff Script with PowerCLI

One drawback with Horizon View is that it does not have the ability (through the GUI) to automate user logoffs or reboots on a daily/weekly basis. Thankfully, VMware has written some halfway decent PowerShell snap-ins so we can script such tasks. I say halfway decent because there are only 45 cmdlets and only 20 are actually useful…

Automated logoffs are useful in instant and linked clone scenarios (hopefully everyone is using instant clones and admiring how awesome they are) where you have to deploy an image update immediately and log off existing sessions later. Thankfully for my blog readers, I have written such a script to do that. This script utilizes the Get-RemoteSession and Send-SessionLogoff Horizon PowerCLI cmdlets. Unfortunately there is no PowerCLI cmdlet to send messages to the active sessions, so I had to convert each session’s machine name to a string and do a ForEach loop to pipe those names into the msg.exe command.

The other unique thing about this script is that it only does half of the pool at a time (using some variable array magic). The way it’s currently written, it sends a message to half of the sessions that warns them they will be logged off in 15 minutes, warns them again at 60 seconds prior to logoff, logs the first half of sessions off, and then repeats the process with the second half of sessions.

Also, if you want to run this script real-time versus scheduling it in Task Scheduler, I have included Write-Host commands along the way so you can actually see which sessions are being warned and logged off throughout the whole process.

The only thing you’ll need to do before running the script is adjust the 3 variables at the top: $PoolName (name of the Horizon pool), $FirstWarning (How long of a warning the users get before logoff), and $FinalWarning (The second warning time the users get before logoff). Run this or schedule it on a Connection Server and you’re good to go! Enjoy!
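For reference, the half-splitting relies on PowerShell's range operator, and as the Known Issues comments in the script note, `0 .. $sessionHalf` and `$sessionHalf .. $sessions.count` overlap at the median index. A quick sketch (with toy session names, not real Horizon objects) of how flooring the midpoint and using non-overlapping ranges would behave instead:

```powershell
# Toy stand-ins for the DNS names returned by Get-RemoteSession
$sessions = 'vm1','vm2','vm3','vm4','vm5'

# Floor the midpoint so an odd count doesn't produce a fractional index
$sessionHalf = [math]::Floor($sessions.Count / 2)   # 2 for five sessions

# Non-overlapping halves: indexes 0..($sessionHalf-1) and $sessionHalf..(Count-1)
$firstHalf  = $sessions[0 .. ($sessionHalf - 1)]
$secondHalf = $sessions[$sessionHalf .. ($sessions.Count - 1)]

# Every session is covered exactly once, no median double-hit
$firstHalf.Count + $secondHalf.Count
```

The script below intentionally keeps its simpler overlapping ranges (the cost is one harmless error on the median session), but this is the shape a fix would take.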

###### VMware Horizon Logoff Script ######
###### Created by Nick Burton 10/9/2017 ######

# This script logs off any active sessions for a particular pool. This is useful for enforcing image updates.
# Simply set the pool name and two warning times below! Schedule it with Task Scheduler on a Connection Server.

#### KNOWN ISSUES ####
# Currently this script will NOT work if only one session exists due to the array usage in the variables. An IF statement could fix this.
# If two or three sessions exist, this script will logoff all sessions due to the half calculation and array locations starting at 0.

# First, set the poolname below:
$PoolName = "POOL NAME HERE"

# Next, set the first warning time prior to the reboot in seconds. 15 minutes = 900 seconds.
$FirstWarning = 900

# Finally, set the final warning time prior to the reboot in seconds.
$FinalWarning = 60


# Get first warning time minus final warning time in order to send second message at appropriate time
$WaitTime = $FirstWarning - $FinalWarning
$FirstWarningMinutes = $FirstWarning / 60

# Add all VMware snap-ins for View PowerCLI
Add-PSSnapin *vmware*

# Get all sessions for pool defined in PoolName variable and populate new sessions variable with string data
# Only strings can be accepted for upcoming msg command (VMware hasn't introduced a message cmdlet)

$sessions = Get-RemoteSession -Pool_id $PoolName | %{$_.DNSName}
$sessionHalf = $sessions.count/2
Write-Host "Here are ALL of the sessions we are logging off:" -ForegroundColor Green
$sessions | Write-Host -ForegroundColor Green

# Populate logoff variable for use later
$Logoffs = Get-RemoteSession -Pool_id $PoolName

# Send first message to first half of sessions in pool using msg.exe using a foreach loop
Write-Host "Sending message to these sessions for pending reboot in $FirstWarningMinutes minutes:" -ForegroundColor Green
$sessions[0 .. $sessionHalf] | Write-Host -ForegroundColor Green
ForEach ($session in $sessions[0 .. $sessionHalf]) {msg /server:$session * "You will be logged off in $FirstWarningMinutes minutes! Please save all work!"}

# Wait for first warning time minus final warning time
Write-Host "Pausing for $WaitTime seconds..." -ForegroundColor Green
Start-Sleep -Seconds $WaitTime

# Send final warning
Write-Host "Sending message to these sessions for pending reboot in $FinalWarning seconds:" -ForegroundColor Green
$sessions[0 .. $sessionHalf] | Write-Host -ForegroundColor Green
ForEach ($session in $sessions[0 .. $sessionHalf]) {msg /server:$session * "You will be logged off in $FinalWarning seconds! Please save all work!"}
Start-Sleep -Seconds $FinalWarning

# Send the logoffs to half the sessions!
Write-Host "Logging off the following sessions!" -ForegroundColor Green
$sessions[0 .. $sessionHalf] | Write-Host -ForegroundColor Green
$Logoffs[0 .. $sessionHalf] | Send-SessionLogoff

# Wait two minutes for desktops to become available, etc. before doing the next half
Write-Host "Waiting two minutes for desktops to become available... there will likely be some errors thrown in a bit since some incoming sessions are already logged off - no big deal." -ForegroundColor Green
Start-Sleep -Seconds 120

# Send first message to last half of sessions in pool using msg.exe using a foreach loop
Write-Host "Sending message to these sessions for pending reboot in $FirstWarningMinutes minutes:" -ForegroundColor Green
$sessions[$sessionHalf .. $sessions.count] | Write-Host -ForegroundColor Green
ForEach ($session in $sessions[$sessionHalf .. $sessions.count]) {msg /server:$session * "You will be logged off in $FirstWarningMinutes minutes! Please save all work!"}

# Wait for first warning time minus final warning time
Write-Host "Pausing for $WaitTime seconds..." -ForegroundColor Green
Start-Sleep -Seconds $WaitTime

# Send final warning message to last half of sessions in pool using msg.exe using a foreach loop
Write-Host "Sending message to these sessions for pending reboot in $FinalWarning seconds:" -ForegroundColor Green
$sessions[$sessionHalf .. $sessions.count] | Write-Host -ForegroundColor Green
ForEach ($session in $sessions[$sessionHalf .. $sessions.count]) {msg /server:$session * "You will be logged off in $FinalWarning seconds! Please save all work!"}
Start-Sleep -Seconds $FinalWarning

# Send logoffs to the other half! This will likely have a single error since the median session has already been logged off.
Write-Host "Logging off the following sessions!" -ForegroundColor Green
$sessions[$sessionHalf .. $sessions.count] | Write-Host -ForegroundColor Green
$Logoffs[$sessionHalf .. $Logoffs.count] | Send-SessionLogoff

Write-Host "Script COMPLETE!" -ForegroundColor Green

Customizing VMware Horizon Connection Server Login Screen

Let’s finally do something on VMware Horizon! This post will cover how to customize your Connection Server login screen, something that is becoming more important as HTML5 access gains popularity. I could not find any documentation on this, so I dug through the install directory to find all of the relevant images and files. FYI – this was done on a Horizon 7.2 Connection Server. Paths may vary depending on your version.

Here are the various things we will be customizing and where to find them. Each path is relative to C:\Program Files\VMware:

Background: VMware View\Server\broker\webapps\portal\webclient\icons-5622958\bg_image.jpg

Logo on top: VMware View\Server\broker\webapps\portal\webclient\icons-5622958\logo.png

All text in initial login screen*: VMware View\Server\broker\webapps\portal\WEB-INF\classes\com\vmware\vdi\installer\i18n\

  • * Requires restart of the VMware Horizon View Web Component service. I recommend just restarting the VMware Horizon View Connection Server service or the entire server.

I also recommend simply renaming the old files to .old so you always have the original file. Don’t forget to clear your cookies or open in incognito, or else the original images get cached!
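The rename-to-.old advice can be wrapped up in a small helper, sketched below (Backup-AndReplaceFile is a hypothetical function name; point it at the icons-5622958 paths above):

```powershell
function Backup-AndReplaceFile {
    param(
        [string]$Target,      # existing file, e.g. bg_image.jpg under the webclient icons folder
        [string]$Replacement  # your custom image
    )
    # Keep the original by renaming it to .old (only once; don't clobber an earlier backup)
    if ((Test-Path $Target) -and -not (Test-Path "$Target.old")) {
        Rename-Item -Path $Target -NewName ((Split-Path $Target -Leaf) + '.old')
    }
    # Drop the custom file in under the original name
    Copy-Item -Path $Replacement -Destination $Target
}
```

Usage would look something like (C:\branding is a made-up staging folder): `Backup-AndReplaceFile -Target 'C:\Program Files\VMware\VMware View\Server\broker\webapps\portal\webclient\icons-5622958\bg_image.jpg' -Replacement 'C:\branding\bg_image.jpg'`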

When editing the file, it is pretty self-explanatory which text to edit. For example, I can change the following lines to:

install.message.first=Here’s where you can customize some text!

install.message.second=Having issues? Contact the help desk at 123-456-7890!

Here’s what it looks like after doing that and replacing the logo and background images. Don’t forget that the background image is .jpg and the logo is .png format; also don’t forget to restart the appropriate service, or just restart the Connection Server.


If your Connection Servers are behind a load balancer (they are, right?), then you can simply place a different background on each Connection Server so it looks like the backgrounds are randomly generated. Cool, huh?

Happy branding!

Update a Static/Dedicated MCS Image (vSphere)

The other day I made a change to the “master” machine that was initially used for the deployment of several MCS static/dedicated desktops. When I went to deploy additional desktops, I expected the changes to persist from the master. Guess what? That was not the case. But WHY?

If you pay close attention when you initially deploy the image, you will notice that MCS will do a full VMDK copy of your snapshot chain into a folder on every datastore that is defined in your hosted XenDesktop environment. This makes desktop creations extremely quick when scaling out additional VMs because it 1) negates the need to potentially copy VMDKs across datastores during desktop creation and 2) negates the need to consolidate snapshots during creation. The folder will typically be the machine catalog name + basedisk + random datastore identifier assigned by XenDesktop. This applies to all MCS images: static and pooled.

We obviously want to keep the master of dedicated machines up-to-date to avoid unnecessary SCCM pushes, Windows updates, missed software, etc. when we deploy new desktops. Unfortunately Citrix does not give a GUI option for this, like we get on our pooled desktops in Studio:

So, what is almost always the method of action when no GUI option is available? That’s right – PowerShell!

There are two main things to consider here: the “Provisioning Scheme” and the new “Master Image.” The provisioning scheme name almost always matches the machine catalog name. It keeps track of the master image location and some other metadata. The master image is just the snapshot of your master machine that MCS does that full VMDK copy to each datastore that we talked about earlier.

Let’s get right to it. First, open PowerShell on your DDC, and get the provisioning scheme name and the current snapshot that is being used for the master:

add-pssnapin *citrix*

Get-ProvScheme | Select-Object ProvisioningSchemeName, MasterImageVM

This will return two very important things for each MCS machine catalog: 1) the ProvisioningSchemeName and 2) the MasterImageVM. You will notice that this contains the name of the snapshot that mirrors the name you provided in vSphere, followed by .snapshot. This makes it easy to locate! Let’s assume our current snapshot is named “v1” and our master is named “XDMaster1.” So the MasterImageVM should look like:

XDHyp:\HostingUnits\<Cluster Name>\XDMaster1.vm\v1.snapshot

Note: If your VM is in a resource pool, this path will also contain that as a “directory.”

We will make some changes, updates, etc. on the master, shut it down, and take a snapshot named “v2.” Let’s verify that XenDesktop now sees this snapshot in our hypervisor environment:

get-childitem -path "XDHyp:\HostingUnits\<Cluster Name>\XDMaster1.vm\v1.snapshot"

You will see that v2.snapshot is now a child item of your v1.snapshot! Good deal! So how do we point MCS to this snapshot? Simple:

First, let’s make it easy on ourselves and create a couple of variables. The two important ones that I touched on earlier: ProvisioningSchemeName and MasterImageVM:

$ProvScheme = "Windows 10 Static"

“Windows 10 Static” will be the ProvisioningSchemeName from earlier, or usually the name of your Machine Catalog.

$NewMasterImage = "XDHyp:\HostingUnits\<Cluster Name>\XDMaster1.vm\v1.snapshot\v2.snapshot"

That will be the full path to your new snapshot. Remember to use get-childitem to ensure that the DDC sees your new snapshot.

Now, we will use the Publish-ProvMasterVMImage cmdlet to wrap it all up!

Publish-ProvMasterVMImage -ProvisioningSchemeName $ProvScheme -MasterImageVM $NewMasterImage

After running this command, pay attention to your vSphere tasks. You will see a temporary VM get copied, VMDKs get copied to the various datastores, and you should finally get a response from PowerShell that states 100% completion and where the new master image location points.

If you see the dreadful red text, pay attention and make sure you got your paths correct. It is easy to mistype the XDHyp path, forget quotes, etc.

I hope this post thoroughly covers how to update your master image on a static/dedicated Machine Catalog delivered via MCS! Thank you, PowerShell!

Restoring an Office 365 User Sync’d with AD

Background and Intro

Office 365 has an excellent method for providing a common identity for cloud and on-premises resources. Why would an IT administrator want to manage two separate accounts with different passwords, attributes, and group membership? Thankfully, Office 365 has DirSync (now Azure AD Connect, but DirSync sounds so much cooler, and I will forever call it that) to integrate the on-prem Active Directory with Office 365, backed by Azure AD.

Hopefully in this day and age, and now that we’ve reached the end of life for Server 2003, you have an Active Directory environment living on at least a 2008 R2 functional level with the AD Recycle Bin enabled. Right? Unfortunately in the not-so-perfect world we live in, there are still legacy applications and other roadblocks that keep organizations from making this jump.

Who hasn’t made the mistake of deleting a user account in a non-recycle-bin-enabled environment? And who wants to do an authoritative restore or tombstone reanimation? Why not just re-create the AD object? Oh, they’re sync’d with O365 and have a cloud mailbox as well…

The Process

So, how can we create a brand new user account in AD and re-map their cloud mailbox to the account? Or perhaps the AD object somehow got corrupted and we need to delete and re-create it from scratch. But, again, they have an Office 365 mailbox tied to their sync’d user account. At first glance, it looks like the user and their mailbox get thrown into oblivion, but the account instead gets converted to a cloud-only account within the Deleted Users section in your Office 365 admin portal.

So go ahead and restore this object. Notice that it becomes a cloud-only object. So we’ve saved the mailbox, but we obviously want it to map back to our new AD user. Next, create the new user object in AD with the appropriate email and SMTP: value in the ProxyAddresses attribute.

Matching the ObjectGuid

So now we need to grab the AD user’s ObjectGuid. This is the value that is used to match the on-prem user account with the cloud object. Run the following to grab the ObjectGuid for the user and export it to a text file, replacing the CN, OU, and DC values where needed in the DN:

ldifde -d "CN=User1,OU=Users,DC=domain,DC=com" -f c:\User1.txt

Open PowerShell and set the cloud user’s ImmutableID to match the AD ObjectGuid:


Set-MsolUser -UserPrincipalName [UserPrincipalName] -ImmutableId "someGuid="
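The ImmutableId is just the Base64 form of the 16-byte ObjectGuid, so you can also derive it straight from AD instead of eyeballing the ldifde dump. A sketch (the Get-ADUser line assumes the RSAT AD module and is shown commented out; a fixed GUID stands in for illustration):

```powershell
# In a live domain you would pull the GUID with:
#   $guid = (Get-ADUser User1).ObjectGuid
# A made-up GUID stands in here for illustration
$guid = [guid]'8d91a2b0-1c2d-4e5f-9a6b-7c8d9e0f1a2b'

# ImmutableId = Base64 of the GUID's 16 raw bytes
$immutableId = [System.Convert]::ToBase64String($guid.ToByteArray())
$immutableId
```

The result is a 24-character string ending in "==", which matches the shape of the value ldifde exports for objectGUID.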

Run a DirSync and verify

Now run your DirSync – you should see that the O365 user shows “Synced with Active Directory” and the user’s original mailbox is mapped to the new user account!

Citrix XA/XD SQL Mirroring

As you probably know, SQL is one of the foundations of a successful Citrix deployment. All transactions processed in a XenApp/XenDesktop environment must go through SQL (Citrix brought back connection leasing in 7.6 to temporarily work around a SQL outage). SQL mirroring is Citrix’s recommended method for a highly available deployment. It also seems to be the cheapest and easiest to deploy. SQL mirroring is application-aware, meaning that it doesn’t use any sort of VIP to trick the application into thinking that either SQL server can represent the same instance. Citrix will actually know and auto-detect that another database on another server will be used as a failover during the site creation.

I will start off by saying that I am no SQL guru; I know a few basic queries, join concepts, etc. Many Citrix admins aren’t SQL experts either, so we typically leave any sort of SQL-related stuff to the database guys. Well, I went ahead and gave SQL mirroring a shot myself in a recent deployment, and I was actually surprised by how easy it was! I must stress that setting it up the first time during the initial deployment looks way easier than re-configuring after the site has been deployed/pushed to production. So, I recommend doing it right the first time, as going back and re-configuring will introduce downtime to the environment.

So, let’s start with a brand new deployment – before you even touch the Citrix layer, SQL must be taken care of first. This is done with SQL 2014 on Server 2012R2. You will need 2 servers running SQL standard and 1 server with SQL Express acting as the witness. The witness can typically be installed on a multi-role server, such as a delivery controller, to save resources if needed. Since this is a standalone SQL environment just for the Citrix servers, we will keep the name at the default instance.

Start off by installing SQL Standard on both servers. You will need the database engine, client/server components, and COMPLETE management tools. I realized afterward that the install of basic management tools will not include the mirroring options/tools within Mgmt Studio. After that is complete, install SQL Express on your witness.

We will call the principal (primary) database server SQL1, mirror will be SQL2, and witness will be SQL3.

Let’s start by creating a SQL database on SQL1 with the full recovery model. Make sure that the Collation is set to Latin1_General_100_CI_AS_KS in order for Citrix to properly interact with it.


Set Is Read Committed Snapshot On to True after the database is created. This will improve performance, and you will not get a warning when the site database is set up. See here for the Citrix article on this.


Do a full backup of SQL1 by right-clicking the database and go to backup. Make note of the location.


Copy the .bak to the same location on SQL2. Open up management tools on SQL2, go to the Databases folder (you should not have any databases on SQL2 yet!), and go to Restore Database…


Under the Restore Database options, make sure that RESTORE WITH NORECOVERY is selected in the recovery state. This is a very important step that is often overlooked, and will result in an error when attempting to initiate the mirror.


Okay – that essentially sums up the preparation process for the mirror, so we’re about halfway there! Now it’s time to actually initiate it. On SQL1, right-click the database and go to Tasks > Mirror. This will take you to the Mirror properties of the database.


** If the mirror properties do not show up, that usually means that you have not installed the complete set of management tools; only the basic. Go back and edit the existing SQL instance, just adding the complete management tools.

Before we start the mirror, we have to configure the security settings for it. Go ahead and click that so the security wizard comes up. Mirroring defaults to TCP 5022, so ensure that appropriate firewall rules allow this connection (including your witness instance!), on top of your basic SQL ports.

You should breeze through this wizard, ensuring that you specify the correct SQL server instances.


You will be prompted with a pop-up menu after the security wizard. Go ahead and click Start Mirroring to initiate the process:


Bam! You have just successfully set up a SQL mirrored instance. You will notice that the mirror properties will show the state of the mirror instance, so this status page is usually a good place to start when troubleshooting issues.


If you happen to run into an error during the initialization, particularly a 1418 error, follow this blog for some good pointers.

All righty – go ahead and start your new site setup. When asked for the instance/database in Studio, make sure to point to SQL1 (principal) for your databases. It will automatically configure your connection strings to use SQL2 as the failover. Please note that we used a single database for the site, logging, and monitoring. It’s usually best practice to have these in 3 separate databases, so you will need to configure the mirroring for each database using the above steps.

When completing backups, log round-ups, etc., make sure to use your principal as the backup source. Do not back up the mirror.

Thanks for reading – I hope this helps!

Office 365 License Changes

If you haven’t already heard, Microsoft is removing their Small Business, Small Business Premium, and Midsize Business plans, and replacing them with Business, Business Essentials, and Business Premium subscriptions. Starting October 2014, companies will be forced to subscribe to the new models at their next subscription renewal.

Many companies under 300 users have taken advantage of these plans, primarily due to the cost savings compared to the Enterprise (E1, E3, E5) subscriptions.

It is extremely important to note that there are some slight differences between the old models and the new, particularly for those who are currently using Small Business Premium or Midsize Business. Both of those plans will be pushed toward Business Premium.

Two very important things to note for Business Premium:

  1. Users will lose Microsoft Access from their Office suite, assuming they are using Office 365 Pro Plus.
  2. Under Midsize Business, Microsoft allowed users to license their Office products within RDS/XenApp environments. Now, if a user attempts to license Office on a server that is a RDS session host, they will receive the following error:


So, if a company utilizes Office Pro Plus in any sort of RDS/XenApp environment, they must now subscribe to an Enterprise subscription (or purchase a volume license). Thankfully, Microsoft now allows you to mix and match users between Business and Enterprise, so it isn’t an all-or-nothing scenario if you only have a subset of users that utilize RDS/XenApp in your environment.

The following article is helpful for making the transition:

Remember to carefully go over the changes to ensure this switch will not affect your users. It is pretty disappointing that Microsoft does not give you any notice of these changes when renewing your subscription.

Hope this helps!


Citrix Personal vDisks

Citrix PvD (or Personal vDisk) is a great way to allow users the freedom of installing their own applications and customizations, while still using the standard vDisk as the base disk. Citrix has made major improvements to this architecture throughout the years. The personal vDisk is a virtual disk that you attach to the virtual machine, where the user’s reads/writes go (so that the changes persist across logon/logoff). XenDesktop will magically “merge” the base and PvD in order to reflect version changes, etc.

One of the main problems users will run into is running out of storage on their PvD. This will prevent the user from installing applications, etc., and of course, you, as the Citrix admin, will be responsible for fixing it. This article is written assuming you are using an ESXi/vSphere environment, but it pertains to any virtual environment (Hyper-V, XenServer).

There are a couple of basic things you should know about PvD customizations. The settings are contained within HKLM > SOFTWARE > Citrix > personal vDisk > Config. You can change these settings by simply creating a new version on the golden image. The main setting you will want to tweak is “PercentOfPvDForApps.”

For whatever reason, Citrix defaulted this to 50/50. This means that if you provide a user with a 60GB PvD (we’ll use this number for the rest of the article), 30GB will be used for application installs, and 30GB will be used for the user’s profile. This could be overkill for the profile, especially if you are using profile management. So make sure and adjust this setting to best suit your environment.

Another thing you should know is that (by default) the user will see two drives: C and P. The P drive is actually the whole 60GB VMDK you attached for the PvD. The C drive is a hidden VHD contained within the P drive, which represents the thick provisioned app percentage that you specified in the registry! So C = P * PercentOfPvDForApps(%)
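Plugging the article's numbers into that formula, the math works out like this (a trivial sanity check of C = P * PercentOfPvDForApps):

```powershell
$pvdSizeGB      = 60    # size of the attached P: VMDK
$percentForApps = 50    # PercentOfPvDForApps registry value (Citrix default)

# Space reserved for application installs on the hidden C: VHD
$appSpaceGB = $pvdSizeGB * ($percentForApps / 100)
$appSpaceGB
```

So at the 50/50 default, a 60GB PvD leaves 30GB for apps and 30GB for the profile; drop PercentOfPvDForApps to, say, 75 and the app side grows to 45GB.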

Hidden VHD in the vDisk

From what I understand – the UserData.v2.vhd.thick_provision is essentially the initial thick reserved app space, while any future expansions go to the UserData.V2.vhd thin provisioned VHD.

So, that’s some basics of personal vDisks in MCS or PVS. I am also going to go ahead and write about a specific scenario I ran into while expanding a user’s vDisk:

As I talked about before, expanding a vDisk is really easy. Simply expand the VMDK in the vSphere console, partition it out using Disk Management in Windows, shut down the user’s machine, start it up, and Citrix will automagically expand the C:\ drive based on the PercentOfPvDForApps value in the registry.

Well, I ran into a user who had about 11% of his C:\ drive remaining, and it refused to expand, even when I expanded his personal vDisk VMDK. His applications were failing to install because he did not have sufficient disk space. I was baffled!

I happened to stumble across an article that said the user’s C:\ drive portion would not expand unless it had less than 10% remaining (just my luck, the user had around 11%). So I moved an application folder over to his C:\ drive just to eat up space, restarted his VM, and bam! The C:\ drive expanded to the percentage I had expected.

Hopefully this article helped you understand a little bit about how personal vDisks operate, and I hope that the scenario helps someone as unlucky as me out there. Enjoy!!