Local Host Cache XenApp & XenDesktop

Local Host Cache was introduced to the FMA with XenApp & XenDesktop 7.12 and is the recommended component for combating database outages, allowing users to connect to their resources when the database is out of reach. Connection Leasing is still around and will still be enabled in many scenarios, as I will discuss later. If you want to read up on Connection Leasing, see http://www.jgspiers.com/citrix-connection-leasing/

Note: Connection Leasing has been deprecated in XenApp and XenDesktop 7.12. The feature has not been removed, and will still be supported up until the next Current Release following the next LTSR release after 7.12.

From XA/XD 7.12 onwards, Local Host Cache is the recommended feature of your Citrix farm that allows users to be brokered on to applications and desktops (pooled VDI desktops are not supported, just like with Connection Leasing) in the event the Site database goes offline. LHC is only there to ensure contingent operations continue whilst you recover the database connection, so treat the priority of restoring the SQL connection no differently than before. If you install or upgrade to XenApp/XenDesktop 7.12, all Delivery Controllers receive a SQL Server Express LocalDB database which stores the Local Host Cache configuration, regardless of whether you use Local Host Cache or not. However, if you have LHC disabled, configuration synchronisation using the Citrix Config Synchronizer Service does not occur.

System Requirements:

  • Up to 1.2GB RAM for the local database service.
  • No set CPU rule, but Local Host Cache performs better with more CPU resources. LocalDB can use up to four cores.
  • Storage must be available for the LocalDB to grow during a database outage. Once the database is back online the LocalDB will shrink after it is recreated.

The way Local Host Cache works is similar to the XenApp 6.x days, but with some extra improvements. When VDAs register, they register against the Citrix Broker Service running on each Delivery Controller in a farm. When users are brokered on to VDAs, the Citrix Broker Service is used to find a suitable machine to host the session. All the data generated from such activities, including brokering information, is stored in the Site database.

Every 2 minutes the Citrix Broker Service on a Delivery Controller checks to see if any changes have been made to the principal broker’s configuration. Changes can include assigning desktops to users, or adding/deleting a Machine Catalog or Delivery Group. If a change has been made since the last check, the principal broker uses the Citrix Config Synchronizer Service to copy all of the broker configuration, including the new changes (which prompts a database recreation), to a secondary broker service called the Citrix High Availability Service on the Delivery Controller. The secondary broker service imports the configuration data into a local SQL Server Express LocalDB database running on the controller. The Citrix Config Synchronizer Service then makes sure the local database matches the information in the Site database.
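The check-and-copy behaviour described above can be sketched roughly as follows. This is a conceptual Python sketch only, not Citrix code; the function and variable names are illustrative. The key point it illustrates is that the sync is all-or-nothing: a detected change triggers a full recreation of the local database, not a delta update.

```python
CHECK_INTERVAL_SECONDS = 120  # the principal broker checks for changes every 2 minutes

def sync_local_host_cache(site_db, local_db, last_version):
    """Conceptual sketch of the Config Synchronizer Service step.

    If the site configuration has changed since the last check, the
    whole broker configuration is copied and the local database is
    recreated from scratch -- there is no incremental sync.
    """
    current_version = site_db["config_version"]
    if current_version != last_version:
        local_db.clear()                    # LocalDB is recreated...
        local_db.update(site_db["config"])  # ...from the full configuration
    return current_version                  # remember what was last synced
```

A change such as adding a Delivery Group bumps the configuration version, and the next 2-minute check copies everything across; if nothing changed, nothing is copied.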

If Site database access to the principal broker service (Citrix Broker Service) is lost, VDAs re-register with the Citrix High Availability Service running on the elected controller, as the Citrix Broker Service stops listening for requests and passes that job to the Citrix High Availability Service. From that point, any brokering communication between StoreFront and the Delivery Controller, and any VDA registrations, involve the Citrix High Availability Service on the elected broker. When an outage occurs, only one controller is elected as the “in-charge” controller that handles all VDA registration requests and brokering duties. All Delivery Controllers in a farm use an alphabetically sorted list of each controller’s FQDN to determine the elected broker upon database outage. If the elected controller were to fail, another available controller takes over. As only one controller is elected, it must be able to handle the additional load of VDA registration and brokering operations.
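The election logic is deterministic, which is why every controller independently agrees on the same winner. A minimal sketch of the idea, assuming the alphabetical-FQDN rule described above (illustrative Python, not the actual Citrix implementation, which lives inside the Citrix High Availability Service):

```python
def elect_broker(controller_fqdns, failed=frozenset()):
    """Pick the controller that becomes the elected broker during an outage.

    Every controller sorts the same FQDN list alphabetically and the
    first healthy name wins, so all controllers reach the same answer
    without needing the Site database to coordinate.
    """
    for fqdn in sorted(controller_fqdns, key=str.lower):
        if fqdn not in failed:
            return fqdn  # first alphabetical, still-healthy controller
    return None  # no controller available at all
```

With controllers named Controller1, Controller2 and Controller3, Controller1 is elected; if Controller1 then fails, Controller2 takes over, and so on down the list.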

Also note that whilst a database outage is ongoing and Local Host Cache is in use, machines are not power managed and you cannot use Citrix Studio to perform administrative tasks. If a user tries to broker on to a powered-off VDA, you must power it on manually before that user can connect.

Once the database comes back online, the Citrix Broker Service takes back the role of principal broker and all communication is re-routed away from the Citrix High Availability Service.

Now I mentioned before that Connection Leasing is still around and enabled under many scenarios. These scenarios are:

  • Installing a fresh XenApp/XenDesktop 7.12 farm results in LHC being disabled and Connection Leasing being enabled.
  • Installing a fresh XenApp/XenDesktop 7.15 farm results in LHC being enabled and Connection Leasing being disabled.
  • Upgrading from a farm that had Connection Leasing enabled results in CL still being enabled and LHC being disabled under 7.12+, when you have fewer than 5,000 VDAs.
  • Upgrading from a farm that had Connection Leasing disabled results in CL still being disabled and LHC being enabled under 7.12+, when you have fewer than 5,000 VDAs.
  • Upgrading from a farm with more than 5,000 VDAs results in CL keeping whatever state it had (enabled or disabled) and LHC being disabled in both cases.
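The scenarios above can be condensed into a small decision function. This is purely a summary of the bullets, sketched in Python; the function name and `(major, minor)` version tuple are illustrative, not anything Citrix ships:

```python
def features_after_setup(fresh_install, version, cl_was_enabled=False, vda_count=0):
    """Return the set of features left enabled ("CL" and/or "LHC") for the
    install/upgrade scenarios listed above. version is a tuple like (7, 12).
    """
    if fresh_install:
        # Fresh 7.12 farms enable CL; fresh 7.15 farms enable LHC instead.
        return {"LHC"} if version >= (7, 15) else {"CL"}
    if vda_count > 5000:
        # Large upgraded farms: LHC stays disabled, CL keeps its old state.
        return {"CL"} if cl_was_enabled else set()
    # Smaller upgraded farms: CL keeps its state; LHC comes on only if CL was off.
    return {"CL"} if cl_was_enabled else {"LHC"}
```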

To see if Local Host Cache is enabled, simply run Get-BrokerSite on one of your Delivery Controllers and check the LocalHostCacheEnabled and ConnectionLeasingEnabled values.

To enable Local Host Cache, first disable Connection Leasing (the two features cannot be enabled at the same time) by running Set-BrokerSite -ConnectionLeasingEnabled $false

Now run Set-BrokerSite -LocalHostCacheEnabled $true

Shortly after, an event should be logged within Event Viewer stating that the Citrix Config Synchronizer Service received an updated configuration. Any time a configuration change is made within Studio or PowerShell, this event is logged.

Controllers are elected based on alphabetical order. Notice how the Controller1 broker server is elected. Election takes place whilst the Site database is active.

If Site database access is lost, the Citrix Broker Service on each controller logs an event. After around 1 minute, the Citrix Broker Service hands operations over to the Citrix High Availability Service and we are now operating in Local Host Cache mode. The Citrix High Availability Service reports it has become active and will broker user requests until the SQL database is back online.

On your VDAs, the Citrix Desktop Service will report it has lost contact with the non-elected Delivery Controllers and attempt to re-register. If for some reason it tries to register with a non-elected controller, the connection will be refused. The VDA will then end up registering with the elected controller.

StoreFront will temporarily remove all non-elected controllers from the list of active services so they are not queried during resource enumeration or brokering. When users log on to StoreFront, the available resources are listed as normal. Communication for resource enumeration between StoreFront and the Citrix High Availability Service is done via Controller1. You should now be able to launch resources as normal.

Note: Citrix Director will not receive any information from the Citrix Monitor Service during a database outage.
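The registration behaviour during an outage can be sketched as a simple walk of the VDA's controller list, where only the elected broker accepts the request. A conceptual Python sketch under that assumption (names are illustrative, not a Citrix API):

```python
def reregister_vda(controller_fqdns, elected):
    """Sketch of VDA re-registration during a Site database outage.

    Non-elected controllers refuse the registration, so the VDA works
    through its controller list until it reaches the elected broker.
    Returns (registered_with, refused_by).
    """
    refused = []
    for ddc in controller_fqdns:
        if ddc == elected:       # only the elected broker accepts registrations
            return ddc, refused
        refused.append(ddc)      # non-elected controllers refuse the request
    raise RuntimeError("elected controller not reachable")
```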

As the database is offline, the Citrix Broker Service monitors the connection to the Site database. Once the Site database is back online, the Citrix Broker Service informs the Citrix High Availability Service that it will take over operations again.

The Citrix Broker Service confirms normal brokering activity will resume.

Known issues/Improvements

  • Users with connections to desktops might encounter problems reconnecting during an outage when LHC is used. If this happens, restart the Citrix High Availability Service.
  • Citrix XenApp and XenDesktop 7.14 can now support an outage of 10,000 VDAs per zone, up to a maximum of 40,000 VDAs per single Site.

14 Comments

  • Pingback: Delivery Controller 7.12 and Licensing – Carl Stalhood

  • Shiva ch

    December 7, 2016

    Hey George,

    You have mentioned that LHC doesn’t support Pooled connections but citrix documentation says accessing pooled resources (shared desktops) is supported.

    https://docs.citrix.com/en-us/xenapp-and-xendesktop/7-12/whats-new.html

  • Shiva ch

    December 7, 2016

    nvm, I think it only supports Hosted Shared desktops not VDIs.

  • George Spiers

    December 7, 2016

    Correct, pooled VDI desktops are not supported. Server based VDA and static VDI is supported with LHC.

    • shiva ch

      December 19, 2016

      Do you know if it’s going to support pooled VDIs in coming releases?

      • George Spiers

        December 19, 2016

        That I don’t know for now however I would bet that there will be support in future releases as pooled VDI desktops play such a large part in many organisations.

  • Nikhil

    December 9, 2016

    Does monitor Service still monitors during the SQL DB inaccessible. Once the SQL DB is accessible does monitoring service resumes writing to MDB?

    • George Spiers

      December 9, 2016

No, and yes — the Monitor Service resumes once the Citrix Broker Service detects the SQL database is back online.

  • Pingback: Delivery Controller 7.13 and Licensing – Carl Stalhood

  • Vipin Tyagi

    March 29, 2017

    I have some concerns.

    Principle broker service, CCS, Secondary Broker service, Local BD is on all DDC. LHC also will be on all DDCs? How can we check LHC (like XA 6.5, there was a file).

    In Normal cases, Sessions will be load balanced among multiple DDCs; CSS will query Principle Broker Service to get changes in every 2 min and Will re-create LHC DB if got positive answer ( changes found). Question is :Principle Broker Service on each DDC will contact to Site DB to get changes (Changes might have been done on other domain controller)? How frequent? Won’t it increase Load on DB? How this load is handled?

    Secondary Broker Service will also get a list of other Secondary Broker service in Zone (Secondary broker service on other DDC). There will be an election process to elect one working Secondary broker service among these all. In case of failure, Sessions will be served by only this main secondary service hence no session load balancing. All VDAs will be re-registered. Question is : Will it not impact performance of service as all VDA needs to be registered with one Service at the same time. How this is handled?

    In case of connection failure only with 1 DDC, LHC will be triggered? How other DDCs will know? In case of connection failure with all DDCs in Zone, How all DDCs will verify it?

    Thanks in Advance
    Vipin Tyagi

  • George Spiers

    March 29, 2017

Yes, LHC is on all DDCs, so each DDC has LocalDB installed; all the configuration files are stored in %ProgramFiles%\Microsoft SQL Server\120\LocalDB\. The LHC database is located at C:\Windows\ServiceProfiles\NetworkService\HaDatabaseName.mdf.

    Yes each Principal Broker Service speaks directly with SQL, changes made on another Domain Controller wouldn’t be stored in SQL? Changes made to the Citrix Site database are detected by the Principal Broker Service. Such changes are made via Studio/PowerShell.

    Yes VDA re-registration will impact performance, that’s how the current architecture works. Citrix tested this current first release of LHC with up to 5K VDAs. It takes ALL DDCs to lose contact with the database for LHC to be triggered, one DDC won’t trigger LHC.

    • Vipin Tyagi

      March 30, 2017

      Thanks for your reply,

      changes made on another Domain Controller wouldn’t be stored in SQL: sorry for mistyping, I meant for DDCs. So Principle Broker service will contact to SQL to detect changes made in site. Frequency for this would be 2 min as well?

      What mechanism is being used to detect that its One DDC failure or ALL DDC failure?

      Secondary broker service will be elected after failure or before failure?

      • George Spiers

        March 30, 2017

        Hi Vipin.
        If a change is detected on the Principal Broker, the Citrix CSS is used to copy the configuration to the Citrix High Availability Service. A new LHC is generated as you mentioned previously containing all the previous and new configurations. That data is then imported in to the LocalDB database. If no changes have happened, no config is copied. The Citrix Broker Service is the Principal Broker, so this service checks SQL every 2 minutes for new configuration.
CSS continually provides information to each DDC about all controllers in a zone or site, and the brokers communicate amongst each other, so I’d assume this is how the failure of one or all DDCs is detected. If there is a failure, the secondary broker is elected after the failure. This can take around 1-2 minutes.

  • Pingback: Delivery Controller 7.14 and Licensing – Carl Stalhood
