Citrix tips and tricks, tweaks and suggestions

A mixture of tips and tweaks that you can make to improve your Citrix farms. If I come across others that I feel are of use, I will add them to this post. Feel free to add suggestions in the comments section.

♣ Hardware
♣ Databases
♣ Licenses
♣ Golden Image
♣ Director
♣ StoreFront/Receiver for Web
♣ Domain Infrastructure
♣ User Profiles
♣ Logon Times
♣ User Environment Management
♣ Group Policy
♣ Workspace Control
♣ Hotfixes
♣ Printing
♣ Skype for Business
♣ Graphics
♣ Citrix Policies
♣ .NET
♣ PVS vDisk/Target Device 7.x Anti-Virus exclusions
♣ PVS 7.x Server Anti-Virus exclusions
♣ Other Anti-Virus exclusions and recommendations
♣ Provisioning Services
♣ Updating PVS Target Device Software/VM Tools
♣ Machine Creation Services
♣ NetScaler
♣ App-V

Hardware

  • Size your hardware (GPU, CPU, RAM) to cope with peak load. Every environment has a time of day (morning, lunch) or day of the week when load is at its highest. Size for this; it ensures an acceptable user experience during peak load and an even better one the rest of the time.
  • Take particular care when sizing CPU. It is usually easier to max out CPU than any other resource.
  • Assign more than 2vCPUs per XenApp VM. In the past, many people would have recommended VMs with no more than 2vCPUs. If you needed more, you scaled out. Now the opposite is suggested in that if you assign 4vCPUs or even 6vCPUs to XenApp servers, you can get better performance and of course more user density.
  • Keep hypervisors dedicated to RDSH or VDI VMs. Do not mix SQL with VDI, for example.
  • Never size for average IOPS or network bandwidth. Always size for extra, as there may come a time when it is needed.
  • CPU over-commitment can cause performance issues. Overcommit by no more than 2x the number of physical processors per server, aiming for 1.5x to 2x. For example, if a single host has 8 physical processors, assign 12-16 vCPUs in total across multiple VMs, keeping within NUMA node constraints. Hyper-threading would allow more vCPUs to be assigned in this case, but you should still go no higher than 16.
  • Enable hyper-threading (or rather, don’t disable it). HT allows a single physical core to behave like two logical processors, because two independent threads can run at the same time. This increases performance by better utilising idle resources.
  • Determine how many NUMA nodes your hosts have. You can use esxtop on vSphere or xl info -l on XenServer. All hypervisors these days are NUMA aware. A host with 16 cores and two NUMA nodes has 8 cores per NUMA node. In this scenario, you shouldn’t assign more than 8 vCPUs to any virtual machine. If you assign more, the virtual machine will start to use resources from the second NUMA node, causing latency and degradation since this is classed as “remote” memory access.
  • Enable Cluster on Die when using CoD-supported Intel chips. Some chips come with uneven NUMA nodes, i.e. a box with 14 physical CPUs and two NUMA nodes consisting of 8 cores and 6 cores. With Cluster on Die enabled, a core from the 8-core NUMA node can be presented to the 6-core NUMA node, so each node becomes even.
  • Virtual machines require more memory than what is allocated, for devices such as SVGA frame buffers and other attached devices that the hypervisor has to map through. The amount of extra memory required per virtual machine depends on the number of vCPUs, the memory assigned, connected devices and the operating system architecture. Over-committing memory wastes memory and increases overhead, reducing the memory available to other virtual machines. Some technologies built in to hypervisors, such as memory compression in vSphere, help with overcommitment, but in general you do not want to overcommit.
  • Assign 4-8GB RAM per virtual machine. The amount of RAM obviously depends on what is running inside the VM; however, you should always avoid over-committing RAM to prevent wasting resources.
  • Use Transparent Page Sharing. A vSphere feature which allows virtual machines with identical sets of memory content to share these pages. This increases memory available on ESX hosts. Note that newer ESXi releases restrict page sharing between VMs by default.
  • NIC teaming should always be used at the hypervisor level for redundancy. NICs should be uplinked to separate physical switches, which provides redundancy at the switch level. NIC teaming can in some cases also provide more throughput.
  • Use PortFast on ESX host uplinks. Failover events can cause spanning tree protocol (STP) recalculations which can cause temporary network disconnects. PortFast immediately sets the port back to a forwarding state and prevents link state changes on ESX hosts from affecting the STP topology.
  • Remove unneeded virtual machine devices such as CD-ROMs, Floppy Drives from virtual machines. This will reduce the amount of resources the hypervisor uses to map such devices through to a virtual machine.
  • Use VMXNET3 NICs with vSphere, as they offer better performance and reduced host processing compared with an E1000 NIC, for example. Citrix Provisioning Services also does not support running virtual machines on an E1000 NIC.
  • When using vSphere, use esxtop to view NUMA node, local/remote memory access and other statistics to ensure there are no performance issues.
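The NUMA and overcommit guidance above boils down to simple arithmetic. A minimal PowerShell sketch, assuming a hypothetical 16-core, two-node host (replace the values with figures from esxtop or xl info -l):

```powershell
# Hypothetical host values - adjust these to your hardware.
$physicalCores = 16    # total physical cores in the host
$numaNodes     = 2     # NUMA nodes reported by the hypervisor
$overcommit    = 1.5   # conservative end of the 1.5x-2x guidance

# Per-VM vCPU ceiling: stay inside one NUMA node
$maxVCpuPerVM = $physicalCores / $numaNodes

# Total vCPUs to spread across all VMs on this host
$vCpuBudget = [math]::Floor($physicalCores * $overcommit)

"Max vCPUs per VM (one NUMA node): $maxVCpuPerVM"
"Total vCPU budget for the host:   $vCpuBudget"
```

With these example numbers the sketch suggests capping each VM at 8 vCPUs and spreading roughly 24 vCPUs across the host in total.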


Licenses

  • Install the license server component on a dedicated machine if possible. License servers cannot share licenses; however, a single license server is enough to handle sites with thousands of users and servers (it can support 10,000 requests at once). If your license server happens to fail, your site will go into a 30-day grace period. This means it is not absolutely necessary to make the license server highly available, because you have 30 days to recover the failed server, but that is a decision for you. The license server processing and receiving threads can be modified to increase performance, see

Golden Image

When creating a Golden Image to be used by PVS/MCS, I suggest considering all or some of the following:

  1. Install the Operating System
  2. Install Hypervisor Tools
  3. Join VM to domain
  4. Install the Citrix VDA choosing to optimize performance at the Features section. Choosing this option runs the TargetOSOptimizer tool to improve the performance of the operating system to use as a virtual desktop. The optimizations made are explained here
  5. Further tweaks to the gold image can be found here
  6. Install Target Device Software (if using PVS/creating vDisk)
  7. Install any applications needed on the vDisk.
  8. Install Anti-Virus and tweak for VDI/RDSH use.
  9. Install Service Packs, Windows Updates and Microsoft/Citrix hotfixes
  10. Change Power Options from Balanced to High Performance which will favour performance but may use more energy
  11. Turn off update popups for applications such as Adobe Reader, Flash & Java
  12. Run a virus-scan and defrag on your vDisk/MCS image
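Step 10 above can be scripted into the gold image build rather than set through the Control Panel. A sketch using powercfg; the GUID below is the built-in High Performance scheme on supported Windows versions:

```powershell
# Switch the power plan from Balanced to High Performance.
# 8c5e7fda-... is the built-in High Performance scheme GUID.
powercfg /setactive 8c5e7fda-e8bf-4a96-9a85-a6e23a8c635c

# Confirm the active scheme
powercfg /getactivescheme
```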

Additional Gold Image tweak suggestions:

Removing old device drivers from Gold Image:

Director

  • Install Director on its own dedicated machine and see tweaks
  • Note: Director logon duration times can be skewed if you have disclaimers on your Desktop machines. For example, the longer it takes a user to accept the disclaimer, the more time is added on to their logon duration count. Avoid disclaimers if possible and instead move them to the Desktop wallpaper, StoreFront UI, NetScaler UI themes etc.

StoreFront/Receiver for Web

Domain Infrastructure

  • Make sure your Active Directory Sites and Services are properly configured, as it is critical that authentication takes place against Domain Controllers that are not on the other side of the world, if it can be avoided. Also make sure you have a sufficient number of Domain Controllers to handle the authentication requests and Group Policy processing, especially at peak logon times when they will be busiest. Avoid multiple levels of group nesting; keep the group membership strategy simple.


User Profiles

  • Use features of Citrix Profile Management to mitigate issues such as large roaming profiles. CPM does what Roaming Profiles does but with additional neat features. Redirection also helps improve logon times, since there will be no caching, downloading or streaming of files and folders. Folder Redirection via Citrix Policies easily allows you to redirect when users access published Desktops but not when the user requests a published application. Doing this speeds up the logon process and removes the need for pointless folder redirection against applications and machines that make no use of a profile. See for an insight into CPM, including tips.
  • When User Profile Management is used, the UserProfileManager.exe process located in C:\Program Files\Citrix\User Profile Manager\ should be added to Anti-Virus trusted process lists. See for more information.

Logon Times

For tips to decrease logon times and decrease the Interactive Session time in Citrix Director read the following:

User Environment Management

Group Policy

  • With Group Policy you want to reduce the number of GPOs applied to Citrix machines as much as possible. One way of doing this is to merge as many settings as possible into one Group Policy object rather than have separate policies. Windows will take much less time to apply one policy than it will five separate ones that could easily have been merged. Disable unused GPO sections. You may find that your Group Policy objects have only computer or user settings defined; disabling the unused computer or user section can speed up the time taken to apply these policies. This is achieved via the GPMC. Assign logon scripts to users via GPOs instead of via their AD account (if you use logon scripts).
  • Note: If you do use logon scripts, do you know how long they take to run? If not, you should check. Personally I think we should all be moving away from logon scripts where possible. You could also test running logon scripts after logon has finished by altering the Group Policy setting Computer Configuration -> Administrative Templates -> System -> Group Policy -> Configure Logon Script Delay. This policy setting allows you to configure how long the Group Policy client waits after logon before running scripts.
  • Logon script processing times can easily be reviewed from Citrix Director.

Workspace Control




  • Use VDI for users who require more resources, or users who need a persistent desktop experience. If you have general office workers who use the same set of applications every day, such as word processing and email, consider using RDSH. Using RDSH can mean more users for less compute.

Skype for Business

Graphics

  • Running a legacy OS (Windows 7/2008 R2 or earlier) and getting poor performance? Enable Legacy Graphics Mode. Seriously, this gives a good boost in performance and is designed for the older OS models.
  • Use ThinWire+ (introduced in XA/XD 7.6 FP3) on newer OS VDAs (Windows 8/Server 2012+) for the best graphics performance. Do not use H.264 unless you have a reason to. H.264 consumes more CPU and will waste resources if not needed, not to mention the likely decrease in user/VDA density per host.
  • Use GPUs or enable popup blockers to prevent your CPU being hit by advertisements and rich multimedia content. See for examples of how CPU can be affected and how GPUs/ad-blockers help.
  • Read the following post for a low down on Citrix graphics modes

Citrix Policies

  • Make sure to go through each policy setting, review it, and disable anything you do not need. Settings like audio redirection are enabled by default but do not suit every environment. Also be sure not to apply policies that do not apply to your VDA version. I have come across sites, for example, with the Server OS 7.6 VDA installed where the administrator was trying to configure Session Limits through Citrix Policies. Read which VDA version each policy applies to.

.NET

  • If you have upgraded .NET or installed updates through patching on a base image, I always recommend running NGEN to manually regenerate the Native Image Cache assemblies before pushing the image to production. The ngen.exe update /force command achieves this. Take note to run NGEN against both the 32-bit and 64-bit Framework directories.
  • Use the ngen queue status command to check the status of the NGEN task. Once the status check returns the message The .NET Runtime Optimization service is stopped, NGEN is complete.
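The two bullets above can be combined into a short gold-image script. A sketch, assuming .NET Framework 4.x (the v4.0.30319 directory); adjust the version directory for the Framework actually installed:

```powershell
# Regenerate Native Image Cache assemblies on both Framework directories
& "$env:windir\Microsoft.NET\Framework\v4.0.30319\ngen.exe" update /force
& "$env:windir\Microsoft.NET\Framework64\v4.0.30319\ngen.exe" update /force

# Check progress; work is complete once the output reports
# "The .NET Runtime Optimization service is stopped".
& "$env:windir\Microsoft.NET\Framework64\v4.0.30319\ngen.exe" queue status
```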

PVS vDisk/Target Device 7.x Anti-Virus exclusions

Consider excluding the below files from anti-virus scans and/or on-access scanning.

  • Pagefile and Print Spooler directory
  • vDisk Write Cache file (vdiskdif.vhdx or .vdiskcache)
  • C:\Program Files\Citrix\Provisioning Services\BNDevice.exe
  • C:\Program Files\Citrix\Provisioning Services\drivers\BNIStack6.sys
  • C:\Program Files\Citrix\Provisioning Services\drivers\CNicTeam.sys
  • C:\Program Files\Citrix\Provisioning Services\drivers\CFsDep2.sys
  • C:\Program Files\Citrix\Provisioning Services\drivers\CVhdBusP6.sys
  • C:\Program Files\Citrix\Provisioning Services\drivers\CVhdMp.sys

Citrix recommends that you exclude these files from being scanned by your anti-virus software; however, you should consult with your security team before setting such exclusions. It is also recommended that you perform scheduled scans on all files and folders, including the excluded ones.

PVS 7.x Server Anti-Virus exclusions

Consider excluding the below files from anti-virus scans and/or on-access scanning and add these processes to allowed processes/whitelists.

  • Pagefile.
  • VHD, VHDX, AVHD and AVHDX files within PVS stores
  • C:\Windows\System32\drivers\CvhdBusP6.sys (Server 2008 R2)
  • C:\Windows\System32\drivers\CvhdMp.sys (Server 2012 R2)
  • C:\Windows\System32\drivers\CfsDep2.sys
  • C:\Program Files (x86)\Common Files\Citrix\System32\CdfSvc.exe
  • C:\Program Files\Citrix\Provisioning Services\StreamProcess.exe
  • C:\Program Files\Citrix\Provisioning Services\StreamService.exe
  • C:\Program Files\Citrix\Provisioning Services\SoapServer.exe
  • C:\Program Files\Citrix\Provisioning Services\Inventory.exe
  • C:\Program Files\Citrix\Provisioning Services\MgmtDaemon.exe
  • C:\Program Files\Citrix\Provisioning Services\Notifier.exe
  • C:\Program Files\Citrix\Provisioning Services\BNTFTP.exe
  • C:\Program Files\Citrix\Provisioning Services\PVSTSB.exe
  • C:\Program Files\Citrix\Provisioning Services\BNPXE.exe
  • C:\Program Files\Citrix\Provisioning Services\BNAbsService.exe
  • C:\ProgramData\Citrix\Provisioning Services\Tftpboot\ARDBP32.BIN

Citrix recommends that you exclude these files from being scanned by your anti-virus software; however, you should consult with your security team before setting such exclusions. It is also recommended that you perform scheduled scans on all files and folders, including the excluded ones.

Other Anti-Virus exclusions and recommendations

  • See for tips on excluding files and processes for StoreFront, Delivery Controllers, VDAs etc.
  • You should consider host-level Anti-Virus, especially when you have many VDAs. Bitdefender is available for XenServer, and the likes of Sophos vShield can be installed on ESX or Hyper-V. Deploying this technology can increase VM density per host.

Provisioning Services

  • See here for a list of best practices from Citrix.
  • Don’t get caught up on whether you should separate streaming and management traffic. It isn’t such a requirement anymore, especially with faster networking these days. I never suggest doing this.
  • Interrupt Safe Mode – unless you actually have a reason, disable this. I have come across (vSphere) environments that have had this option enabled but make no use of it. I have seen 4-minute boot times reduced to 1 minute just by unticking this feature. Within the PVS Console, right-click each PVS Server -> Configure Bootstrap -> Options -> untick Interrupt safe mode (check this if the device hangs during boot).


  • NetQueue – a feature of vSphere introduced in ESX 3.5. If you are seeing slow boots even with Interrupt Safe Mode switched off as above, you may want to test disabling this feature (if it is enabled). NetQueue monitors the network load of all VMs and assigns queues to VMs deemed critical. To disable it via the CLI, run esxcli system settings kernel set -s netNetqueueEnabled -v FALSE. This is probably not as much of an issue now that NICs run at speeds of 1Gbps and above.
  • Bootstrap – if you have multiple PVS servers in a load-balanced setup, make sure each PVS server’s bootstrap contains its own IP address at the top of the bootstrap list. It makes sense that if a worker VM gets its bootstrap from one PVS server, it should attempt to boot from the same one.
  • PVS Target Device VMs should use RAM write cache with overflow to hard disk. Configure a RAM cache size of 256-512MB for VDI desktops and 2-4GB for XenApp/RDSH servers.
  • E1000 NICs (vSphere) are not supported.

Updating PVS Target Device Software/VM Tools

  • Perform a vDisk reverse image if updating PVS Target Device software below v7.6.1, documented with Hyper-V
  • Uninstall Target Device Software before VM tools upgrade/install. Install Target Device Software after VM tools upgrade/install.
  • PVS vDisk reverse imaging using VMware vCenter Converter
  • It is also advised to install/reinstall the VDA after installing or upgrading VMware Tools. This is because both Citrix and VMware install graphics drivers and Citrix’s drivers should be installed last.

Machine Creation Services

NetScaler

  • Use the TCP profile nstcp_default_XA_XD_profile with your Unified Gateway virtual servers. This profile is designed to get the most out of ICA network performance. It includes features such as:
    • Nagle’s algorithm – reduces the number of packets that need to be sent over the network by combining small outgoing messages and sending them all at once. This is good for ICA, which is by nature a chatty protocol.
    • Forward Acknowledgement – works alongside TCP SACK and improves congestion control.

Note: Make sure firewalls that sit in between the NetScalers are not tearing down TCP sessions and recreating them without the required TCP flags originally set by the TCP profile. Nagle’s algorithm etc. use flags within the TCP packet to make the end client aware that such optimisations are supported and can be used.
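Binding the profile is a one-liner from the NetScaler CLI. A sketch, where UG_VSERVER is a placeholder name for your Unified Gateway virtual server:

```
# Bind the Citrix-tuned TCP profile to the (hypothetical) vserver UG_VSERVER
set vpn vserver UG_VSERVER -tcpProfileName nstcp_default_XA_XD_profile

# Verify the binding
show vpn vserver UG_VSERVER
```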

App-V

  • Enable SCS (Shared Content Store) mode, particularly with non-persistent machines. With SCS, applications are not cached on disk but rather streamed from App-V publishing servers. Given that the VDA and App-V servers normally reside within the datacentre, streaming should work well over fast, low-latency links. Some application data, known as Feature Block 0/Publishing Feature Block, will still be cached; these files consist of application icons, scripts and any metadata required to run the application. This data is small.
  • On non-persistent VMs, pre-add application packages to a persistent drive. This involves firstly gathering the list of applications that exist on an App-V server to a text file, using a command such as Get-AppVServerPackage, and then using that text file to pre-add application packages to a non-persistent VM with a scheduled task on system start-up, using a command such as Add-AppVClientPackage.
  • On non-persistent VMs, pre-publish application packages to a persistent drive. This will speed up application launch considerably. This involves firstly gathering the list of applications that exist on an App-V server to a text file, using a command such as Get-AppVServerPackage, and then using that text file to pre-publish application packages to a non-persistent VM with a scheduled task on system start-up, using a command such as Publish-AppVClientPackage.
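The pre-add/pre-publish approach above can be sketched as a start-up scheduled task on the non-persistent VM. The D:\Persistent path is a placeholder; the cmdlets are the App-V client cmdlets named in the bullets:

```powershell
# D:\Persistent\packages.txt is a placeholder: one .appv package path per
# line, exported beforehand on the management server (e.g. via
# Get-AppvServerPackage). Run this at system start-up on the VM.
Get-Content 'D:\Persistent\packages.txt' | ForEach-Object {
    # Pre-add the package, then pre-publish it so first launch is fast
    Add-AppvClientPackage -Path $_ | Publish-AppvClientPackage
}
```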
