My role in the Citrix Alliance Group focuses on performance testing, especially for VDI and SBC reference architectures developed in collaboration with alliance partners. I spend most of my time developing and validating reference architectures so that customers can deploy Citrix solutions quickly and with less risk. While you might think my work sounds like mindless number crunching, I get tremendously excited about the theory behind performance analysis.
Recently, I started thinking about ways to extend my testing beyond the performance evaluation of infrastructure components. I wanted to capture the overall experience of the end user and collect client-side data to augment the other results from my performance tests. For this reason, I came up with a method of capturing user experience (UX) data while running workloads that focus on the performance and scalability of the hosted solution.
User Experience Statistics
To capture UX data, I run a series of scalability tests using the Login VSI medium workload with Flash. For example, I run the same workload that I have used in previous scalability tests, with similar operating characteristics. This workload simulates the desktop activity of a typical knowledge worker, including applications such as Microsoft Office Word, PowerPoint, Outlook, Excel, Internet Explorer with a Flash video applet, a Java application, and Adobe Acrobat Reader.
The UX test methodology differs from other scalability testing methods. For example, for a 75-user test, I direct 74 virtual desktop session connections to 10 virtual client launchers, with one dedicated launcher that uses a physical system. This physical client system serves as the "canary in the coal mine," collecting protocol response times as well as frame rate statistics. The metric capture process begins early in the test and lasts throughout the Login VSI run. I also record screen-capture video of the canary session to act as a visual aid.
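For illustration only, here is a minimal sketch of how counter logging on the canary client might be scripted. The counter paths, sampling values, and output file name below are hypothetical placeholders; the actual tests may have used entirely different tooling.

```python
import subprocess

# Hypothetical performance counter paths for the canary client;
# the actual counters used in these tests are not specified here.
COUNTERS = [
    r"\ICA Session(*)\Latency - Session Average",   # assumption: protocol round-trip metric
    r"\ICA Session(*)\Output Session Bandwidth",    # assumption: placeholder counter
]

# Sample every 5 seconds for the duration of the Login VSI run (values are illustrative).
SAMPLE_INTERVAL_S = 5
SAMPLE_COUNT = 1440

# typeperf is a standard Windows CLI for logging performance counters to a CSV file.
subprocess.run(
    ["typeperf", *COUNTERS,
     "-si", str(SAMPLE_INTERVAL_S),
     "-sc", str(SAMPLE_COUNT),
     "-f", "CSV",
     "-o", "canary_ux_metrics.csv"],
    check=True,
)
```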
The charts below show the data for three consecutive Login VSI workload loops. The goal of the charts is to compare the first Login VSI test loop to the subsequent ones in order to evaluate the user experience during the logon/ramp-up and steady-state phases of the test. You can clearly see the matching patterns, with only a small amount of deviation between each loop. Looking at the similarities, you can deduce that each loop maintains the overall user experience.
75 User Test - full test run (contains data for 3 consecutive VSI workload loops)
The charts above are presented primarily for educational purposes, since the best way to compare subsequent loops is to average the metrics per loop and observe the similarities and differences. Another useful comparison is to overlay the charted data for each loop and look for significant differences; charts later in this blog illustrate this comparison method.
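As a rough illustration of the overlay comparison, here is a minimal sketch. It assumes the captured samples have already been exported to a CSV with hypothetical `loop` and `icart_ms` columns; the file name and column names are placeholders, not part of the original test harness.

```python
import csv
from collections import defaultdict
import matplotlib.pyplot as plt

# Group ICART samples by workload loop so each loop can be overlaid on one axis.
samples_by_loop = defaultdict(list)
with open("canary_ux_metrics_by_loop.csv", newline="") as f:  # hypothetical export
    for row in csv.DictReader(f):
        samples_by_loop[row["loop"]].append(float(row["icart_ms"]))

# Overlay each loop against its own sample index; large gaps between the
# curves would indicate a degrading user experience in later loops.
for loop_id, values in sorted(samples_by_loop.items()):
    plt.plot(range(len(values)), values, label=f"VSI loop {loop_id}")

plt.xlabel("Sample index within loop")
plt.ylabel("ICA Round Trip Time (ms)")
plt.legend()
plt.savefig("icart_loop_overlay.png")
```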
ICART and FPS Metrics
The charts above graph two metrics: ICA Round Trip Time (ICART) and Frames Per Second (FPS). ICART is an EdgeSight End User Experience Monitoring (EUEM) counter that simulates a user keystroke or mouse click. It systematically sends synthetic keystrokes that traverse the client-side ICA stack, make a full round trip through the Virtual Desktop Agent (VDA), and return to the point of origin, the client. For example, an ICART value of 500ms equates to a half-second keystroke response delay (a text character is sent and received through the session as a bitmap). (More on ICART, EdgeSight EUEM, and other counters is available in the Citrix EdgeSight community reports.)
The FPS counter measures the frames encoded by the host and transmitted to the client. Note that the captured data reflects the transmitted frame rate as opposed to the received frame rate, so FPS readings can vary by a few frames from second to second. However, the transmitted frames always reach the client and average out over time.
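Because the per-second readings are noisy, I look at averages rather than individual samples. A minimal sketch of smoothing FPS samples with a simple moving average follows; the window size and sample values are arbitrary assumptions for illustration.

```python
def moving_average(samples, window=10):
    """Smooth noisy per-second FPS readings with a simple moving average.

    window=10 is an arbitrary choice for illustration, not a value from the tests.
    """
    smoothed = []
    for i in range(len(samples)):
        chunk = samples[max(0, i - window + 1):i + 1]
        smoothed.append(sum(chunk) / len(chunk))
    return smoothed

# Example: transmitted FPS varies from second to second but averages out over time.
fps_samples = [24, 30, 18, 27, 25, 22, 29, 26]
print(moving_average(fps_samples, window=4))
```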
If the user experience degrades significantly, these two counters detect it. There are some scenarios where they can be particularly useful (a small detection sketch follows the list below):
- The virtual desktop host or XenApp server CPU reaches 100% utilization for a prolonged period of time (e.g., a few seconds or more). This adds latency to ICART and affects FPS by slowing the bitmap encoding process.
- The network connection reaches its capacity limit and adds latency between the virtual desktop or XenApp server and the client. This adds latency to ICART and affects FPS, slowing the rate at which frames are transferred to the client; some frames are dropped by the virtual desktop using HDX queuing and tossing, a key optimization in Citrix HDX Broadcast technology that improves UX, especially over WANs.
- The storage infrastructure used for the virtual desktops or XenApp server reaches its limit. Excessive disk latency from scenarios such as a lack of available IOPS, among other causes, can be observed in the UX counters.
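To make the idea concrete, here is a minimal, hypothetical sketch of flagging degraded samples by comparing each reading against fixed thresholds. The threshold values are illustrative assumptions, not figures from the actual tests.

```python
# Illustrative thresholds only; real limits depend on the workload and environment.
ICART_LIMIT_MS = 500   # assumed cutoff: half-second keystroke response delay
FPS_FLOOR = 15         # assumed cutoff: minimum acceptable transmitted frame rate

def flag_degraded_samples(samples):
    """Return the samples where either UX counter crosses its assumed threshold.

    `samples` is a list of (timestamp, icart_ms, fps) tuples from the canary client.
    """
    return [
        (ts, icart, fps)
        for ts, icart, fps in samples
        if icart > ICART_LIMIT_MS or fps < FPS_FLOOR
    ]

# Example: the second sample would be flagged for high ICART and low FPS.
readings = [("10:00:05", 120.0, 24.0), ("10:00:10", 650.0, 9.0)]
print(flag_degraded_samples(readings))
```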
Establishing a Baseline
The test harness uses a virtual desktop with a dedicated physical client for baselining. This "canary" client is the first to connect and start the Login VSI workload. While the workload executes, the test platform records the UX observations (ICART and FPS), taking readings throughout the complete test cycle, including the logon and steady-state phases. The baseline represents single-session performance with no other sessions on the server and captures metrics for a complete loop of the VSI workload.
Baseline single-user test with the Login VSI medium workload
The baseline then becomes a benchmark for the subsequent Login VSI workload loops at specific user loads, as shown in the charts below. These charts compare the ICART and FPS metrics for the baseline and workload loops at 25, 50, 75, and 100 users respectively.
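As a rough sketch of that comparison, assuming per-loop averages have already been computed, the snippet below reports each loop's deviation from the single-session baseline. The numbers and loop labels are made up for illustration and are not results from the tests.

```python
# Hypothetical per-loop averages; real values come from the captured counter data.
baseline = {"icart_ms": 110.0, "fps": 23.0}
loops = {
    "25 users":  {"icart_ms": 118.0, "fps": 22.5},
    "100 users": {"icart_ms": 135.0, "fps": 21.0},
}

# Report each loop's deviation from the single-session baseline.
for label, metrics in loops.items():
    icart_delta = metrics["icart_ms"] - baseline["icart_ms"]
    fps_delta = metrics["fps"] - baseline["fps"]
    print(f"{label}: ICART {icart_delta:+.1f} ms vs baseline, "
          f"FPS {fps_delta:+.1f} vs baseline")
```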
Supporting Video Test
To validate the measurements collected, I also record video of the baseline and the subsequent workload test loops. For example, in the UX tests for the VDI-in-a-Box reference architecture, this video compares the user experience for the baseline and the last segment of the 100-user test run. The video corroborates the UX metrics, illustrating that the UX of the fourth VSI workload loop is substantially the same as the baseline. Read my blog about the UX tests for this solution here.
Screenshot of the test video
Summary
My goal in the UX testing is to validate the scalability of XenDesktop, XenApp, or VDI-in-a-Box solutions and to corroborate the results of my other tests from the user's point of view. While my specific toolset may not be available for you to replicate my methodology precisely, there are many tools on the market for collecting UX data, including Citrix EdgeSight offerings, Scapa, Liquidware Labs, and even Login VSI (the latest release), among other vendors. Note that I will continue to evolve my UX testing methodology as I further vet this new approach.
In the meantime, the UX metrics and videos give me a way to cross-check the VSIMax scores and PerfMon data that I collect during my scalability testing. They offer valuable insight into what the user actually sees, providing a qualitative visual reference and quantitative data that let me confirm the configuration performance I have observed in other scalability tests.
References
- User Experience test video
- HP and Citrix VDI-in-a-Box: Part 3 - User Experience (blog)