Local Host Cache was introduced to FMA with XenApp & XenDesktop 7.12 and is the recommended component to combat database outages, allowing users to connect to their resources when the database is out of reach. Connection Leasing is still around and will still be enabled in many scenarios, as I will discuss later. If you want to read up on Connection Leasing see http://www.jgspiers.com/citrix-connection-leasing/
Note: Connection Leasing was deprecated in XenApp and XenDesktop 7.12. The feature has not been removed and will still be supported until the next Current Release after LTSR 7.15.
From XA/XD 7.12 onwards, Local Host Cache is the recommended feature of your Citrix farm that allows users to be brokered on to applications and desktops (pooled VDI desktops are not supported, just like with Connection Leasing) in the event the Site database goes offline. LHC is only there to provide contingency so that operations continue whilst you recover the database connection, so do not treat the priority of restoring the SQL connection any differently than before. If you install or upgrade to XenApp/XenDesktop 7.12, all Delivery Controllers receive a LocalDB SQL Express database which stores the Local Host Cache configuration, regardless of whether you use Local Host Cache or not. However, if you have LHC disabled, configuration synchronization using the Citrix Config Synchronizer Service does not occur. Local Host Cache has the following resource requirements:
- Up to 1.2GB RAM for the local database service.
- No set CPU rule but Local Host Cache will perform better with more CPU resource. The LocalDB can use up to 4 cores.
- Storage must be available for the LocalDB to grow during a database outage. Once the database is back online the LocalDB will shrink after it is recreated.
The way Local Host Cache works is similar to the XenApp 6.x days, but with some extra improvements. When VDAs register, they register against the Citrix Broker Service running on all Delivery Controllers in a farm. When users are brokered on to VDAs, the Citrix Broker Service is used to find a suitable machine to host the session. All the data generated from such activities, including brokering information, is stored in the Site database.
Every 2 minutes the Citrix Broker Service on a Delivery Controller checks to see if any changes have been made to the principal broker's configuration. Changes can include assigning desktops to users, or deleting/adding a Machine Catalog or Delivery Group. If a change has been made since the last check, the principal broker uses the Citrix Config Synchronizer Service to copy all broker configuration, including the new changes (which prompts a database recreation), to a secondary broker service called the Citrix High Availability Service on the Delivery Controller. The secondary broker service imports the configuration data into a local SQL Express database running on the controller. The Citrix Config Synchronizer Service then makes sure the local database matches the information in the Site database.
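As a quick way to confirm that synchronization is happening, you can query the event log on a Delivery Controller for entries written by the Config Synchronizer Service. Note the provider name filter below is an assumption based on the service's display name; check the actual source name shown in Event Viewer on your own controllers and adjust the wildcard to suit:

```
# Sketch only - the '*Config Synchronizer*' provider match is an assumption.
# Verify the real event source name in Event Viewer first.
Get-WinEvent -LogName Application -MaxEvents 50 |
    Where-Object { $_.ProviderName -like '*Config Synchronizer*' } |
    Select-Object TimeCreated, Id, Message
```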
If Site database access from the principal broker service (Citrix Broker Service) is lost, VDAs re-register with the Citrix High Availability Service running on the elected controller, as the Citrix Broker Service stops listening for requests and passes that job to the Citrix High Availability Service. Now any brokering communication between StoreFront and the Delivery Controller, as well as VDA registrations, involves the Citrix High Availability Service on the elected broker. When an outage occurs, only one controller is elected as the "in-charge" controller that handles all VDA registration requests and brokering duties. All Delivery Controllers in a farm use a list of each controller's FQDN, sorted alphabetically, to determine the elected broker upon database outage. If the elected controller were to fail, another available controller takes over. As only one controller is elected, it must be able to handle the additional load of VDA and brokering operations.
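The election logic above can be illustrated with a trivial sketch (the controller FQDNs below are hypothetical): whichever controller sorts first alphabetically wins, and if it fails the next name in the sorted list takes over.

```
# Hypothetical controller FQDNs - not queried from a real Site
$controllers = 'ctx-ddc02.lab.local','ctx-ddc01.lab.local','ctx-ddc03.lab.local'

# The elected broker is simply the first FQDN in alphabetical order
$elected = ($controllers | Sort-Object)[0]    # ctx-ddc01.lab.local

# Should the elected controller fail, the next in the sorted list takes over
$failover = ($controllers | Sort-Object)[1]   # ctx-ddc02.lab.local
```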
Also note that whilst a database outage is ongoing, machines are not power managed and you cannot use Citrix Studio to perform administrative tasks while Local Host Cache is in use. If a user tries to broker on to a powered-off VDA, you must manually power it on before that user can connect.
Once the database comes back online the Citrix Broker Service takes back the role of primary and all communication is re-routed away from the Citrix High Availability Service.
Now I mentioned before that Connection Leasing is still around and enabled under many scenarios. These scenarios are:
- Installing a fresh XenApp/XenDesktop 7.12 farm results in LHC being disabled and Connection Leasing being enabled.
- Installing a fresh XenApp/XenDesktop 7.15 farm results in LHC being enabled and Connection Leasing being disabled.
- Upgrading from a farm that had Connection Leasing enabled results in CL still being enabled and LHC being disabled under 7.12+ when you have fewer than 5K VDAs.
- Upgrading from a farm that had Connection Leasing disabled results in CL still being disabled and LHC being enabled under 7.12+ when you have fewer than 5K VDAs.
- Upgrading from a farm that had Connection Leasing disabled or enabled results in CL still being disabled or enabled respectively, and LHC being disabled under both scenarios, when you have more than 5K VDAs.
To see if Local Host Cache is enabled simply run Get-BrokerSite on one of your Delivery Controllers.
To enable Local Host Cache, first disable Connection Leasing by running Set-BrokerSite -ConnectionLeasingEnabled $false
Then run Set-BrokerSite -LocalHostCacheEnabled $true
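Putting those two commands together, a minimal sequence to switch from Connection Leasing to Local Host Cache and then confirm the change might look like this (run from an elevated PowerShell session on a Delivery Controller; the snap-in load line assumes the standard Citrix snap-ins are installed):

```
# Load the Citrix PowerShell snap-ins if not already loaded
Add-PSSnapin Citrix* -ErrorAction SilentlyContinue

# Disable Connection Leasing first, then enable Local Host Cache
Set-BrokerSite -ConnectionLeasingEnabled $false
Set-BrokerSite -LocalHostCacheEnabled $true

# Verify - LocalHostCacheEnabled should now report True
Get-BrokerSite | Select-Object LocalHostCacheEnabled, ConnectionLeasingEnabled
```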
Shortly after, an event should be logged within Event Viewer stating that the Citrix Config Synchronizer Service received an updated configuration. Any time a configuration change is made within Studio or PowerShell, this event will be logged. Controllers are elected based on alphabetical order. Notice how the Controller1 broker server is elected. Election takes place whilst the Site database is active. If Site database access is lost, the Citrix Broker Service on each controller logs an event. After around 1 minute the Citrix Broker Service hands operations over to the Citrix High Availability Service and we are now operating in Local Host Cache mode. The Citrix High Availability Service reports it has become active and will broker user requests until the SQL database is back online. On your VDAs, the Citrix Desktop Service will report it has lost contact with the non-elected Delivery Controllers and attempt to re-register. If for some reason a VDA tries to register with a non-elected controller, the connection will be refused. The VDA will then end up registering with the elected controller.
Note: By default, VDA registration with a Delivery Controller occurs over TCP port 80. If you have changed the default port of the Citrix Broker Service on each Delivery Controller, you must also change the Citrix High Availability Service port to match. If you do not, VDAs will not be able to register with the Citrix High Availability Service. To do this, run C:\Program Files\Citrix\Broker\Service\HighAvailabilityService.exe -VdaPort portnumber on your Delivery Controller(s).
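For example, if you had moved the Citrix Broker Service to listen on port 8080 (a hypothetical value chosen here for illustration), the matching change on each controller would be run as follows, using the default install path:

```
cd "C:\Program Files\Citrix\Broker\Service"
.\HighAvailabilityService.exe -VdaPort 8080
```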
StoreFront will temporarily remove all non-elected controllers from the list of active services so they are not queried during resource enumeration or brokering. When users log on to StoreFront, the available resources are listed as normal. Communication for resource enumeration between StoreFront and the Citrix High Availability Service is done via Controller1. You should now be able to launch resources as normal. Note: Citrix Director will not receive any information from the Citrix Monitor Service during a database outage.
Whilst the database is offline, the Citrix Broker Service will monitor the connection to the Site database. Once the Site database is back online, the Citrix Broker Service informs the Citrix High Availability Service that it will take over operations again.
The Citrix Broker Service confirms normal brokering activity will resume.
- Users with connections to desktops might encounter problems reconnecting during an outage when LHC is used. If this happens, restart the Citrix High Availability Service.
- Citrix XenApp and XenDesktop 7.14 can now support an outage of 10K VDAs per zone up to a maximum of 40K VDAs per single site. Previously, you were limited to 5K VDAs.