My latest adventure in density testing centers on Citrix XenApp (XA) Hosted Shared desktops. For those unfamiliar with the XA Hosted Shared model, it combines application and session virtualization, using a single operating system instance on a Citrix server to publish XA desktops and familiar Windows applications. The Hosted Shared model centralizes application delivery and management (securing data and applications in the data center), and it scales well to support high user densities. It is designed to provide a locked-down, streamlined, and standardized environment with a core set of applications, making it ideal for task workers where personalization is less of a focus. While a XenApp server can be hosted directly on a bare-metal server running Windows Server 2008 R2, my recent test explores the densities you can achieve when XenApp runs in a virtualized environment.
Hosted shared desktops are ideally suited for task workers who require a static application portfolio, as opposed to knowledge workers who require a larger or more customized environment. Generally, hosted shared desktops are one part of a multifaceted approach to desktop and application delivery. Citrix FlexCast™ technology can tailor delivery to meet the performance, security, and flexibility requirements of every application. You can explore different FlexCast use cases at http://flexcast.citrix.com, which compares and contrasts the different delivery models and provides a tool to help you characterize the configuration requirements for your site.
Why Virtualize XenApp Hosted Shared Desktops?
There are advantages to deploying Citrix XenApp in a virtualized configuration. By running Citrix XenApp and Windows Server 2008 R2 on a hypervisor, it is easy to consolidate multiple server instances and application silos onto a single physical machine. Consolidation can simplify management and optimize resource usage - the traditional reasons why organizations pursue virtualization initiatives. Although XenApp is supported on Microsoft Hyper-V, Citrix XenServer, and VMware vSphere hypervisors, I chose Microsoft Hyper-V for this first series of tests.
The purpose of my initial test was to explore the user densities you might expect from the Hosted Shared model when virtualized on Hyper-V. As the testing demonstrated, you can get acceptable user response times and good scalability when virtualizing XenApp services on Hyper-V.
This test is part of ongoing research focused on defining sizing guidelines and recommendations for internal, partner, and customer deployments. I'll post updates and additional blogs as the research continues, so stay tuned for more results of my work.
Test Setup
The test environment included these basic components:
- HP ProLiant DL380p Gen8 server. This dual-socket server hosted two Intel Xeon E5-2680 processors clocked at 2.70GHz and 192GB of RAM. The selected storage configuration featured an HP Smart Array PCIe 3.0 6Gb/s SAS controller and eight 10,000 RPM SAS disks set up in RAID 0+1 volumes.
- Citrix XenApp v6.5. Citrix XenApp was configured with the default HDX settings, which include Flash Redirection disabled in favor of server-side video rendering. The session resolution was set to 1024×768.
- Microsoft Windows Server 2008 R2 SP1 with Hyper-V. The test configuration used Microsoft Roaming Profiles (MSRP), and locally cached profiles were deleted during the logoff process.
- Login VSI 3.5 (www.loginvsi.com). Login VSI is a load-generation tool for VDI benchmarking that simulates production user workloads. For this test, I chose the default medium workload to simulate the desktop activity of a typical knowledge worker. Login VSI generates an office productivity workload that includes Office 2010 with Microsoft Word, PowerPoint, and Excel; Internet Explorer with a Flash video applet; a Java application; and Adobe Acrobat Reader.
Test Methodology
For each test, I followed this sequence of steps:
1) I used 10 Login VSI launchers, first verifying that they were ready for the test. I started a script that invoked PerfMon scripts to capture system-wide performance metrics, then initiated the test's workload simulation, in which Login VSI launched desktop sessions at 30-second intervals.
2) Once all sessions were connected, the steady-state portion of the test began. During ramp-up and steady state, Login VSI tracked user-experience statistics, looping through specific operations and measuring response times at regular intervals. The response times were used to determine Login VSImax, the maximum number of users the test environment can support before performance degrades consistently.
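The idea of deriving a maximum-user cutoff from response times can be sketched as follows. This is a simplified illustration, not the actual Login VSI 3.5 algorithm; the 4000 ms threshold and the "sustained over a window" rule are assumptions for illustration only.

```python
# Simplified illustration of deriving a VSImax-style cutoff from
# response-time samples. NOT the exact Login VSI 3.5 calculation.
def vsimax_estimate(samples, threshold_ms=4000, window=3):
    """samples: list of (active_sessions, avg_response_ms) tuples in
    ramp-up order. Returns the last session count reached before the
    average response time stays above threshold_ms for `window`
    consecutive measurements, or None if never sustained."""
    over = 0
    last_good = None
    for sessions, response in samples:
        if response > threshold_ms:
            over += 1
            if over >= window:
                return last_good
        else:
            over = 0
            last_good = sessions
    return None

# Illustrative (made-up) ramp-up data:
samples = [(50, 1200), (100, 1500), (150, 2100),
           (203, 3800), (210, 4200), (215, 4600), (220, 5100)]
print(vsimax_estimate(samples))  # 203
```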
3) After a specified amount of steady-state time elapsed, Login VSI began disconnecting the desktop sessions. After all sessions were disconnected, I stopped the monitoring scripts.
4) Finally, I processed the Login VSI logs using the VSI Analyzer and PAL, a PerfMon CSV analysis tool, to produce the graphs and metrics that follow.
Test Results
To understand the impact of VM configuration, including virtual CPU (vCPU) and memory allocations, I conducted several test iterations with different configurations. The optimal supported configuration turned out to be 8 virtual machines with 4 vCPUs and 16GB of RAM per VM, which yielded a user density of 203 sessions. The table below shows the different permutations tested, including both supported and unsupported configurations. (Microsoft officially supports up to 4 vCPUs per VM. Out of curiosity, I tested configurations that exceeded this limit, but such configurations should never be deployed in production.)
I also wanted to explore how two other configuration choices affected user density:
- Intel® Hyper-Threading (HT) Technology. This processor technology provides more hardware threads per core, which increases the performance of highly threaded applications. I ran a test iteration with HT off, which yielded ~23% less density.
- Hyper-V Dynamic Memory. Hyper-V can pool and automatically reallocate memory among running virtual machines. I reran the test defining each XenApp VM with 4GB of startup memory, a 16GB maximum, and a 20% buffer. During the test, memory usage settled at about 9GB per VM, and user density decreased by about 3% (with roughly 1% run-to-run variation).
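The interplay of those three Dynamic Memory settings can be sketched with a back-of-the-envelope model: the target memory is roughly current demand plus the configured buffer, clamped between the startup and maximum values. This is a simplified illustration, not Hyper-V's actual balancer logic.

```python
# Simplified model of a Hyper-V Dynamic Memory target: demand plus the
# configured buffer, clamped to the startup/maximum range. Illustrative
# only; the real balancer also weighs host-wide pressure and priorities.
def dynamic_memory_target(demand_gb, startup_gb=4, maximum_gb=16, buffer_pct=20):
    target = demand_gb * (1 + buffer_pct / 100)
    return max(startup_gb, min(maximum_gb, target))

# With the ~9GB per-VM demand observed in the test and a 20% buffer:
print(dynamic_memory_target(9))  # ≈ 10.8 GB, well under the 16GB maximum
```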
Based on these tests, running with Hyper-Threading enabled proves to be an advantage, while Hyper-V Dynamic Memory imposes only a slight penalty on session density. Of course, you should always conduct your own proof-of-concept assessment using a workload representative of your own environment.
Test Metrics
Detailed results for the test that yielded the optimal supported user density are given below. In this series of tests, Login VSI launched 220 sessions, and VSImax was reached at 203 sessions.
The test recorded these general storage metrics:
- Storage read/write ratio
  - Average: 7/93 read/write
  - Max: 89/11 read/write
- Average storage IOPS
  - Per-desktop average: ~3.25
  - Per-desktop max: ~8.09
It is important to note that the storage data above reflects all test phases (logons, steady state, and logoffs). While you can use these results as general sizing guidelines, remember that you must adapt any configuration to match your specific workload types and user profiles.
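A read/write split like the one above is straightforward to derive from the per-interval PerfMon "Disk Reads/sec" and "Disk Writes/sec" samples. The counter values in this sketch are illustrative, not the actual test data.

```python
# Derive a read/write split as whole percentages from reads/sec and
# writes/sec samples. Input values here are illustrative only.
def read_write_ratio(reads_per_sec, writes_per_sec):
    """Return the read/write split as whole percentages, e.g. (7, 93)."""
    total = reads_per_sec + writes_per_sec
    if total == 0:
        return (0, 0)
    read_pct = round(100 * reads_per_sec / total)
    return (read_pct, 100 - read_pct)

# A heavily write-biased interval (0.23 + 3.02 = 3.25 total IOPS):
print(read_write_ratio(0.23, 3.02))  # (7, 93)
```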
Login VSImax
The figure below shows a Login VSImax of 203 sessions.
Logical Processor Run Time
This metric records the utilization of the physical processors in the host machine. The "Hyper-V Hypervisor Logical Processor(*)\% Total Run Time" performance counter is more accurate than the system "% Processor Time" counter because "% Processor Time" measures only the host partition's processor time. The "Hyper-V Hypervisor Logical Processor(*)\% Total Run Time" counter is the best counter to use when analyzing overall processor utilization on a Hyper-V server. The graph shows CPU resources becoming exhausted at the higher user densities.
Memory
Available MBytes is the amount of RAM, in megabytes, immediately available for allocation to a process or for system use. It is equal to the sum of the memory assigned to the standby (cached), free, and zero page lists. If this counter is low, the computer is low on physical RAM. This graph shows that the 192GB of RAM configured in the server supplied sufficient memory resources throughout the test.
Network
Network Interface Bytes Total/sec is the combined rate of bytes sent and received over the server's single NIC, including framing characters.
Disk Queue Length
Average Disk Queue Length is the average number of read and write requests that were queued for the C: disk during the measured interval.
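Queue length, IOPS, and latency are tied together by Little's law: the average queue length is approximately IOPS multiplied by the average service time. A quick sketch (the numbers are illustrative, not measurements from this test):

```python
# Little's law for disk queues: outstanding requests ≈ arrival rate (IOPS)
# × average time each request spends in service. Numbers are illustrative.
def avg_queue_length(iops, avg_latency_ms):
    return iops * (avg_latency_ms / 1000.0)

# e.g. 650 host IOPS at 5 ms average latency:
print(avg_queue_length(650, 5))  # ≈ 3.25 outstanding requests on average
```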
Disk IOPS
The calculated IOPS statistics graphed below represent the combined rate of reads and writes to the C: disk. They are followed by graphs showing the rate at which bytes are transferred to or from the disk during reads and writes, respectively.
Hyper-V Dynamic Memory (a separate test from the data above; VSImax of 197)
The statistics and graphs below describe the behavior of Hyper-V Dynamic Memory.
The Memory Added counter shows the cumulative amount of memory added to each virtual machine over time. The table and graph below show the overall statistics for each Memory Added counter instance; Avg and Max are the average and maximum values across the entire log.
The Memory Removed counter shows the cumulative amount of memory removed from each VM. The table and graph below show the overall statistics for each Memory Removed counter instance; Avg and Max are the average and maximum values across the entire log.
The Average Pressure counter represents the average memory pressure in each virtual machine. The table and graph below show the overall statistics for each counter instance; Min, Avg, and Max are the minimum, average, and maximum values across the entire log.
Summary
The purpose of this test was to show the densities you can expect when deploying Citrix XenApp Hosted Shared desktops on Hyper-V. Virtualizing XenApp in this way provides both good scalability and the benefits of simplified management and optimized resource usage.
References
- XenDesktop with FlexCast ™
- XenApp Planning Guide: Virtualization Best Practices
- Hyper-V dynamic memory configuration Guide