Recently we published the Reference Architecture for Citrix XenApp and XenDesktop 7.12 on vSAN 6.5 all-flash. Since I spent the past few months testing, validating, and writing the RA, I wanted to highlight some aspects of it for our customers and partners.

Before going into some of the key results, let’s take a look at the overall VMware vSAN all-flash solution:

The Citrix XenApp and XenDesktop 7.12 release (with the Citrix hotfix for the XenApp and XenDesktop 7.12 Machine Creation Services (MCS) catalog deletion error on vSAN 6.2) delivers high-performance personal desktops and applications with flexibility and a rich user experience.

vSAN further improves TCO with storage efficiency features such as deduplication and compression, and erasure coding (RAID 5/6), with minimal resource overhead and minimal impact on desktop application performance.
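As a back-of-the-envelope illustration of why erasure coding lowers capacity cost, the sketch below compares the raw capacity needed under RAID-1 mirroring (two full copies at FTT=1) with RAID-5 erasure coding (3 data + 1 parity components). The 1,000GB usable figure is just an example value, not a number from the RA:

```python
# Rough capacity-overhead comparison for vSAN fault tolerance methods.
# RAID-1 mirroring with FTT=1 writes two full copies (2.0x overhead);
# RAID-5 erasure coding writes 3 data + 1 parity components (~1.33x).

def raw_needed(usable_gb, overhead):
    """Raw capacity required to store usable_gb under a given overhead factor."""
    return usable_gb * overhead

RAID1_OVERHEAD = 2.0     # two mirror copies
RAID5_OVERHEAD = 4 / 3   # 3 data components + 1 parity component

usable = 1000  # GB of VM data (illustrative figure, not from the RA)
print(f"RAID-1: {raw_needed(usable, RAID1_OVERHEAD):.0f} GB raw")
print(f"RAID-5: {raw_needed(usable, RAID5_OVERHEAD):.0f} GB raw")
# RAID-5 saves one third of the raw capacity relative to RAID-1 at the same FTT=1.
```

Both policies tolerate a single failure; the erasure-coded layout simply trades some write amplification for the capacity savings noted above.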

Citrix XenDesktop 7.12 with VMware App Volumes 2.11 works with vSAN to manage desktops and applications.

We used the Login VSI 4.1 Knowledge Worker workload in benchmark mode to measure VDI performance in terms of the VSImax score. Both Provisioning Services (PVS) and Machine Creation Services (MCS) achieved over 120 sessions per host for the Knowledge Worker workload. Space efficiency features such as deduplication and compression, and erasure coding, reduce capacity consumption while maintaining the same levels of availability and performance, for a lower total cost of ownership. vSAN provides great performance for Citrix XenApp and XenDesktop.

Citrix XenApp and XenDesktop on vSAN All-Flash Solution Test Environment

We built out an 8-node all-flash vSAN cluster with two disk groups in each ESXi host and placed the XenApp and XenDesktop virtual machines on it. A separate management cluster, a 4-node hybrid vSAN cluster, hosted the Citrix XenApp and XenDesktop infrastructure and the Login VSI launchers, as shown in the figure below.

All-Flash vSAN Specifications and Performance

Server Specification (per host)

2 x Intel(R) Xeon(R) CPU E5-2690 v2 @ 3.0GHz (10 cores each) with hyper-threading

512GB RAM

SSD: 2 x 800GB Solid State Drive (Intel SSDSC2BA40) as Cache SSD

SSD: 8 x 400GB Solid State Drive (Intel SSDSC2BA40) as Capacity SSD
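For context, the cluster's raw datastore capacity can be worked out from the spec above. This quick sketch relies on the vSAN behavior that only capacity-tier devices contribute to datastore capacity; the cache-tier SSDs do not:

```python
# Raw vSAN datastore capacity for the 8-node desktop cluster in the RA.
# Only capacity-tier SSDs count toward datastore capacity; cache SSDs do not.

HOSTS = 8
DISK_GROUPS_PER_HOST = 2
CAPACITY_SSDS_PER_DISK_GROUP = 4   # 8 x 400GB per host, split across 2 disk groups
CAPACITY_SSD_GB = 400

per_host_gb = DISK_GROUPS_PER_HOST * CAPACITY_SSDS_PER_DISK_GROUP * CAPACITY_SSD_GB
cluster_gb = HOSTS * per_host_gb
print(per_host_gb, cluster_gb)  # 3200 25600
```

That is 3.2TB of raw capacity per host and 25.6TB across the cluster, before the RAID 5 overhead and the deduplication and compression savings discussed later.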

Virtual Desktop Template Configuration (optimized image)

For Desktop OS Machine

Windows 7 Enterprise SP1, 32-bit; 2 vCPUs; 2GB memory; disk sizes: 30GB and 10GB



For Server OS Machine

Windows Server 2012 R2, 64-bit; 8 vCPUs; 24GB memory; disk sizes: 100GB and 40GB



The applications include Adobe Flash Player 16 ActiveX, Adobe Reader XI 11.0.10, Doro 1.82, FreeMind, and Microsoft Office Professional Plus 2010.

We validated 1,000 Windows 7 virtual desktops for both PVS and MCS, and recorded the performance and resource utilization differences for these two provisioning methods:

For PVS mode, we used native applications installed on the OS image.

For MCS mode, we used VMware App Volumes AppStacks to provision and assign the applications.

For XenApp, we used Windows Server 2012 R2 as the OS and validated the performance and resource utilization for PVS and MCS provisioning methods.

We used Login VSI 4.1 to load the system with simulated desktop workloads using common desktop applications such as Microsoft Office, Internet Explorer, and Adobe Reader. From the storage perspective, vSAN can support up to 200 desktops per host if the host CPU is sized properly. During the Login VSI testing, we found that our servers were CPU bound. To simulate a realistic user scenario, we kept peak CPU usage around 80%. Within this design criterion, we limited the session count to 1,000.
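The per-host density implied by that cap is easy to check against the 8-node desktop cluster:

```python
# Sessions per host implied by capping the 8-node desktop cluster at 1,000 sessions.
SESSIONS = 1000
HOSTS = 8
sessions_per_host = SESSIONS / HOSTS
print(sessions_per_host)  # 125.0, consistent with "over 120 sessions per host"
```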

First, we ran the Login VSI benchmark on PVS desktops with natively installed applications. vSAN deduplication and compression were enabled, and the storage policy was set to RAID 5 (Failure tolerance method = RAID-5/6 (Erasure Coding) - Capacity). 998 sessions passed easily without reaching VSIMax v4.1 at the baseline of 671.

Then we focused on Login VSI testing on the other three configurations.

VSImax Knowledge Worker v4.1 was not reached in any of the test configurations. The MCS XenDesktop with AppStack test passed 971 sessions with deduplication and compression, RAID 5, and checksum enabled; the PVS XenApp test passed 995 sessions; the MCS XenApp test passed 986 sessions.

Space Savings from Enabling Deduplication and Compression and the EC (RAID 5) Policy

We measured the space savings of a real VDI deployment by provisioning XenDesktop and XenApp pools on an all-flash vSAN cluster with deduplication and compression and erasure coding enabled, in four configurations. Deduplication and compression are applied on a per-disk-group basis, and the savings vary with the type of data.

In the PVS XenDesktop with natively installed applications test scenario, a collection of 1,000 streamed Windows 7 desktop VMs was provisioned on the vSAN datastore from the PVS Server Console. The applications were installed on the OS disk. The standard vDisk image was configured with cache in device RAM with overflow on hard disk, with a maximum RAM size of 512MB. Used space on the vSAN datastore was 1,012GB, and the deduplication and compression ratio was around 4.35x.

In the MCS XenDesktop with AppStack test scenario, we provisioned a pool of 1,000 MCS random Windows 7 virtual desktops and assigned one AppStack to all 1,000 desktops. The total space usage was 1,239GB, and the deduplication and compression ratio was around 2.43x.

In the PVS XenApp test scenario, a collection of 25 streamed Windows Server 2012 R2 VMs (each serving multiple user sessions) was provisioned on the vSAN datastore from the PVS server. Used space was around 895GB, and the deduplication and compression ratio was around 2.23x. In the MCS XenApp test scenario, 25 MCS random Windows Server 2012 R2 virtual machines were deployed; used space was 1,034GB, and the deduplication and compression ratio was around 1.68x.
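To put the reported ratios in perspective, this quick sketch multiplies each scenario's used space by its deduplication and compression ratio to estimate the logical footprint the data would have had without space efficiency. This is an approximation; the RA reports only the used space and the ratios:

```python
# Estimated logical footprint implied by the reported used space and
# deduplication/compression ratio for each of the four test scenarios.

scenarios = {
    "PVS XenDesktop (native apps)": (1012, 4.35),
    "MCS XenDesktop (AppStack)":    (1239, 2.43),
    "PVS XenApp":                   (895,  2.23),
    "MCS XenApp":                   (1034, 1.68),
}

for name, (used_gb, ratio) in scenarios.items():
    logical_gb = used_gb * ratio   # space needed without dedup/compression
    print(f"{name}: ~{logical_gb:,.0f} GB logical vs {used_gb} GB used")
```

The natively installed PVS desktops benefit the most because the 1,000 streamed clones contain largely identical OS and application blocks, which deduplicate well within a disk group.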

Summary

vSAN is optimized for modern all-flash storage with space saving features that lower TCO while delivering best-fit performance. All-flash vSAN is ready for VDI with tested and validated deployments of Citrix XenApp and XenDesktop 7.12 combined with VMware App Volumes 2.11. For more detailed information, refer to the comprehensive reference architecture on Storage Hub.