I have already written two blog posts on properly configuring Citrix Profile Management and Folder Redirection and architecting them so that they fit large environments. If you have not read the previous two articles, I suggest you read them first. You can find them at the links below:
/blogs/2012/02/11/citrix-profile-management-and-vdi-do-it-right/
/blogs/2012/08/05/citrix-profile-management-and-vdi-do-it-right-part-2/
In this third and final installment of the series, I will provide some guidelines for the IOPS and network bandwidth requirements of NAS devices or file servers hosting Citrix Profile Management profiles and redirected folders, which should help you determine how many users you can place on a single file server or NAS device.
First, I have to issue the standard "your mileage may vary" warning! Trying to determine how many users you can get on a file server is like trying to determine how many users you can get on a XenApp server, or how many virtual desktops you can get on a single server. Depending on who your users are, the applications they run, and the hardware specifications of the server, the number of XenApp or XenDesktop users per server can vary considerably. Likewise, the number of users per file server can vary significantly based on all of the same variables. That being said, I will give you some real-world figures based on what I have personally seen at customer sites, and I will also provide data from tests I have run and from measuring the IOPS consumption of my own personal desktop.
File Server Performance and Scalability
Before getting into the requirements for Profile Management and Folder Redirection, let's first review some basic details about file server performance and scalability. There are many things that affect how a file server or NAS appliance will perform and scale. Some of the key elements that determine scalability include:
- How many IOPS can the physical storage media deliver? IOPS capacity is driven by the following:
- How many disks/spindles are in the RAID set?
- What type of RAID is used (RAID 1, 5, 6, 10, etc.)?
- How much cache memory does the RAID controller have and how is it configured?
- Are writes optimized or first staged to an SSD tier in the storage hierarchy?
- How much RAM does the server have for CIFS read caching?
- How fast are the network adapters?
- How many processor cores are there?
- Which version of CIFS/SMB is used: 1.0, 2.0, or 2.1?
- Have CIFS and TCP been tuned correctly?
- See my previous blog on tuning CIFS: /blogs/2010/10/21/smb-tuning-for-xenapp-and-file-servers-on-windows-server-08/
If you are using Microsoft Windows 2008 R2 file servers or clusters (physical or virtual), then please make sure you do at least the following:
- Give the file server at least 32 GB of RAM (preferably 64 GB)
- Give the file server at least 2 cores/vCPUs (preferably 4+ if it will host a large number of users).
- Implement all of the CIFS/SMB tuning recommendations from my blog mentioned above.
- If the file server is physical, make sure you team/bond multiple NICs.
- If you are using local storage (I hope you are not!), make sure you have as many 15K SAS drives as possible in the server and a RAID card with at least 1 GB of battery-backed cache. For a Windows file server, I would recommend splitting the controller cache 25% read and 75% write.
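If you do end up sizing local storage (or just want a sanity check on a proposed RAID set), here is a rough back-of-the-envelope sketch. The per-disk IOPS figures and RAID write penalties are common rules of thumb of my own choosing, not numbers from this article, and the math ignores the help you get from the controller's battery-backed write cache.

```python
# Rough RAID sizing sketch. The per-disk IOPS figures and RAID write penalties
# below are common rules of thumb (my own assumptions, not numbers from this
# article), and the math ignores the benefit of the controller's write cache.
from math import ceil

RAID_WRITE_PENALTY = {"RAID1": 2, "RAID10": 2, "RAID5": 4, "RAID6": 6}
DISK_IOPS = {"15K_SAS": 175, "10K_SAS": 125, "7.2K_SATA": 75}  # assumed per-spindle IOPS


def backend_iops(front_end_iops: float, read_ratio: float, raid_level: str) -> float:
    """Translate front-end (host) IOPS into back-end (disk) IOPS.

    Reads cost one disk I/O each; writes are multiplied by the RAID write penalty.
    """
    reads = front_end_iops * read_ratio
    writes = front_end_iops * (1 - read_ratio)
    return reads + writes * RAID_WRITE_PENALTY[raid_level]


def spindles_needed(front_end_iops: float, read_ratio: float,
                    raid_level: str, disk_type: str) -> int:
    """Minimum number of spindles needed to absorb the back-end IOPS load."""
    return ceil(backend_iops(front_end_iops, read_ratio, raid_level) / DISK_IOPS[disk_type])


# Example: 1000 users at ~1 IOPS each with a write-heavy 20/80 read/write mix on RAID 10.
print(backend_iops(1000, read_ratio=0.2, raid_level="RAID10"))   # 1800.0 back-end IOPS
print(spindles_needed(1000, 0.2, "RAID10", "15K_SAS"))           # 11 x 15K SAS drives
```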
If you are using an enterprise-class NAS device from NetApp or EMC, it is hopefully safe to assume that the RAM, processors, and RAID configuration have already been optimized. However, if you are using a NAS device, it is still essential that you check which version of CIFS it runs (make sure it supports SMB 2) and verify that CIFS and, if necessary, TCP have been tuned correctly.
As you can see, there are many variables that determine how well file services will perform. I do not want to turn this blog into an article on storage subsystem design and file server protocol stacks, so I will focus on the two key metrics you need to determine:
- How many IOPS do you need per user, and what is the read/write ratio?
- How much network bandwidth do you need per user?
For the rest of this article, we will assume the team providing file services has architected their infrastructure properly, and all you need to give them is the amount of IOPS your users will generate and the amount of network bandwidth they will consume.
How Many IOPS Do We Need?
So, the biggest question everyone wants answered is how many IOPS do you really need for profiles and folder redirection? I decided to tackle this question by examining data from three sources:
- How many IOPS do I use on my own desktop?
- How many IOPS are used in an automated test running the LoginVSI medium workload?
- How many IOPS and how much network bandwidth are actually used on a live production system by a real customer?
It is important to remember that for my tests I am only tracking the IOPS generated against the file server disks hosting the profile and redirected folder data. I am not tracking the local IOPS generated against the C: drives of the actual Windows 7 workstations.
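For readers who want to reproduce this kind of measurement themselves, here is a minimal sketch (my own tooling, not the author's) of how the average and maximum read/write IOPS shown in the tables below could be summarized from a Windows Performance Monitor log exported to CSV. The counter and column names are illustrative and will differ on your system.

```python
# Summarize avg/max read and write IOPS from a perfmon CSV export.
# The counter/column names used in the example are hypothetical.
import csv
import statistics


def summarize_iops(csv_path: str, read_col: str, write_col: str) -> dict:
    """Return average and maximum read/write IOPS from a perfmon CSV export."""
    reads, writes = [], []
    with open(csv_path, newline="") as f:
        for row in csv.DictReader(f):
            try:
                reads.append(float(row[read_col]))
                writes.append(float(row[write_col]))
            except (KeyError, ValueError):
                continue  # skip blank or malformed samples
    return {
        "avg_total": statistics.mean(reads) + statistics.mean(writes),
        "avg_read": statistics.mean(reads),
        "avg_write": statistics.mean(writes),
        "max_read": max(reads),
        "max_write": max(writes),
    }


# Example with hypothetical file and counter names:
# summarize_iops("fileserver_disk.csv",
#                read_col=r"\\FS01\PhysicalDisk(1 E:)\Disk Reads/sec",
#                write_col=r"\\FS01\PhysicalDisk(1 E:)\Disk Writes/sec")
```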
My Own Desktop
For my first piece of analysis, I decided to examine the IOPS generated by my own laptop. For this test, I redirected all of my folders, my profile, and my home directory to a dedicated disk hosting nothing else, so I could track the total IOPS generated against that disk and see how much I produce.
I ran my test for 64 minutes, and during that time I opened many of the applications I normally use and made sure I performed as many actions as possible. For the entire 64 minutes I took no breaks and was extremely active; I actually ran more applications and performed more tasks in that short time than I normally would, because I wanted the workload to be the heaviest, worst-case scenario I could generate with my typical applications. Before the start of my test, I made sure everything had been rebooted so that none of my data would be held in a memory cache anywhere.
Here is a summary of my actions:
- I logged on and immediately opened Windows Media Player and started playing my AC/DC playlist from my home directory. MP3s played for the entire test.
- I opened Internet Explorer and Firefox to some pages I regularly check and left both browsers open with several tabs the entire time. I switched to the two browsers often to view various websites throughout the test.
- I opened Microsoft Communicator and connected to my corporate server.
- I opened Outlook against my corporate Citrix Exchange server. I have a 1.3 GB OST file, use Outlook in cached mode, and have a 1.7 GB PST file.
- I sent and received mail.
- I opened items in my inbox and performed my normal mail and calendar tasks.
- I opened emails in my PST file.
- I cleaned up items in my inbox and moved items from my inbox to my PST file.
- I emptied my deleted items, which contained over 3,000 emails.
- I closed corporate Outlook and ran Outlook two more times using the MAPI profiles for my personal email accounts, which both have 1 GB+ PST files.
- I printed several emails to PDF files in my home directory.
- After checking my personal email, I reopened Outlook with my corporate Exchange MAPI profile.
- Outlook remained open for the rest of the test and I used it frequently.
- I opened and edited several Excel spreadsheets in my home directory.
- I opened and edited several Word documents from my home directory.
- I downloaded a 108 MB file from ShareFile to my home directory.
- I opened Quicken, updated my accounts, and ran several reports. My Quicken file is in my home directory.
- I saved a 6 MB PowerPoint file from Outlook to my home directory and viewed it.
- I logged off.
The table below shows my total IOPS against the disk hosting my profile, redirected folders, and home directory:
| Avg. Total IOPS | Avg. Read IOPS | Avg. Write IOPS | Max Read IOPS | Max Write IOPS |
| --- | --- | --- | --- | --- |
| 5.7 | 3.1 | 2.6 | 189 | 36 |
The disk backing my profile, home directory, and redirected folders was a single 7200 RPM SATA disk hosting nothing else. My workload was the only thing running at the time, so all IOPS were generated by my use. I averaged 5.7 total IOPS with roughly a 55/45% read/write split. While there were periodic spikes, for the most part no peaks were sustained, and I rarely exceeded 30 IOPS for more than a few seconds.
After a more detailed analysis of my workload, I was ultimately able to determine that the majority of my IOPS were generated by Outlook as it read from and wrote to my offline cache file (Outlook.ost) and the many large PST files I have.
LoginVSI Medium Workload
For my next test, I decided to use the LoginVSI medium workload. If you are not familiar with LoginVSI, you should really check it out. It was developed by a company called Login Consultants, a group of highly skilled virtualization consultants who have written excellent tools and white papers. Check them out here:
http://www.loginvsi.com/
I reconfigured the medium workload so that all I/O was redirected to a single file server, including each user's personal directory. I also placed the shared directory on the same file server so that all I/O generated by the workload would be tracked, including the IE content and multimedia files. On this single file server I also placed all of the test users' profiles, redirected folders, and home directories.
The medium workload is an automated script that logs on and performs the following actions over approximately 13 minutes:
- Logs on
- Opens Outlook using a PST file
- Opens and creates/edits Word documents
- Opens and creates/edits Excel documents
- Opens and creates/edits PowerPoint documents
- Opens IE
- Opens Flash and media players
- Logs off
My test ran for a total of 15 minutes and included 3 test users, each running the full medium workload once. The table below details the total IOPS generated by the 3 users combined.
| Avg. Total IOPS | Avg. Read IOPS | Avg. Write IOPS | Max Read IOPS | Max Write IOPS |
| --- | --- | --- | --- | --- |
| 7.2 | 3.5 | 3.7 | 127 | 57 |
The following table details the total network utilization generated against the file server, which had a single NIC operating at 1 Gbps.
| Avg. RX Mbps | Avg. TX Mbps | Max RX Mbps | Max TX Mbps |
| --- | --- | --- | --- |
| 0.48 | 1.14 | 20.75 | 26.99 |
When looking at network utilization, it is important to remember that the network operates in full duplex. For a 1 Gbps NIC, this means you can send and receive 1 Gbps simultaneously. However, since we are limited to a maximum of 1 Gbps in any single direction, we will take the largest number in either direction and use it to determine our average bandwidth per user. Based on the data from this test, the average IOPS and bandwidth per user are shown in the following table.
| Avg. Total IOPS | Read/Write Ratio | Avg. Bandwidth |
| --- | --- | --- |
| 2.4 | 49/51% | 0.36 Mbps |
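For clarity, here is a small sketch of how the per-user figures in the table are derived from the aggregate numbers: total IOPS divided by users, and the larger of the two directions (since each direction of the full-duplex link is capped at 1 Gbps) divided by users. The table's slightly lower 0.36 Mbps figure presumably comes from averaging the raw samples rather than the rounded aggregates.

```python
# Derive per-user IOPS, read/write ratio, and bandwidth from aggregate results.
def per_user_metrics(total_iops: float, read_iops: float, write_iops: float,
                     rx_mbps: float, tx_mbps: float, users: int) -> dict:
    return {
        "iops_per_user": round(total_iops / users, 1),
        "read_write_ratio": f"{round(100 * read_iops / (read_iops + write_iops))}/"
                            f"{round(100 * write_iops / (read_iops + write_iops))}%",
        # Bandwidth: take the busier direction of the full-duplex link, then divide by users.
        "mbps_per_user": round(max(rx_mbps, tx_mbps) / users, 2),
    }


# LoginVSI medium workload, 3 concurrent test users:
print(per_user_metrics(7.2, 3.5, 3.7, rx_mbps=0.48, tx_mbps=1.14, users=3))
# -> 2.4 IOPS per user, ~49/51% read/write, ~0.38 Mbps per user
```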
Real-World Customer
While my first two tests provided valuable data, they were based on more limited and controlled scenarios that also included home directory usage rather than just Citrix Profile Management and redirected folders. My last piece of data involves real-world numbers generated by a XenDesktop customer whose production environment has been running for over a year and a half. Here are some details on the environment:
- XenDesktop is used with an average daily peak of about 450 concurrent users.
- All workstations are non-persistent Windows 7 desktops delivered via Provisioning Services.
- Citrix Profile Management and Folder Redirection have been implemented according to the best practices outlined in my first blog in this series.
- A dedicated virtual Windows 2008 R2 file server is used to host the Profile Management profiles and redirected folders. Home directories are on a different server.
- Most users are standard office productivity workers. There are many different applications in use, but the most common are Office 2010, Lync, Internet Explorer, Firefox, media players, Adobe Acrobat, etc.
I tracked performance over several days between 06:00 and 18:00 to make sure each day showed a similar usage pattern. Each day was almost exactly the same from a usage point of view, showing that the data is valid. I also collected data over a 12-hour window to make sure I captured all the peaks and valleys so I could focus on the periods of heavy use. For this customer, there were two particular periods that provided the best evidence:
- 07:00 to 09:30 (the period when most users log on)
- 10:30 to 16:00 (the period with the greatest number of users and the heaviest use)
Now, let's take a look at the data for each of these time periods.
07:00 to 09:30 - The Logon Period
During the logon period above, we started with 103 workstations already logged on at 07:00, and over the next 150 minutes an additional 243 users logged on at a steady, consistent rate of about one user every 37 seconds. We ended up with 346 users logged on by 09:30. The average number of users logged on during this window was 225.
| Avg. Total IOPS | Avg. Read IOPS | Avg. Write IOPS | Max Read IOPS | Max Write IOPS |
| --- | --- | --- | --- | --- |
| 127 | 50 | 77 | 962 | 663 |

| Avg. RX Mbps | Avg. TX Mbps | Max RX Mbps | Max TX Mbps |
| --- | --- | --- | --- |
| 4.58 | 15.92 | 230 | 360 |
Based on the logon period data, the average IOPS and bandwidth per user are shown in the table below.
| Avg. Total IOPS | Read/Write Ratio | Avg. Bandwidth |
| --- | --- | --- |
| 0.5 | 40/60% | 0.07 Mbps |
10:30 to 16:00 - The Extended Period of Use
During the extended period of use, we started with 364 workstations logged on and peaked at 444. By 11:00 we hit peak usage and stayed there until users began logging off around 15:30. Throughout the period, our average number of logged-on users was 420.
| Avg. Total IOPS | Avg. Read IOPS | Avg. Write IOPS | Max Read IOPS | Max Write IOPS |
| --- | --- | --- | --- | --- |
| 472 | 78 | 394 | 1058 | 3925 |

| Avg. RX Mbps | Avg. TX Mbps | Max RX Mbps | Max TX Mbps |
| --- | --- | --- | --- |
| 5.95 | 6.43 | 159 | 348 |
Based on this sustained real-world data, the average IOPS and bandwidth per user are shown in the table below.
| Avg. Total IOPS | Read/Write Ratio | Avg. Bandwidth |
| --- | --- | --- |
| 1.1 | 17/83% | 0.02 Mbps |
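Putting these steady-state per-user figures to work, here is a hedged sketch of how you might estimate a user ceiling for a single file server. The server-side capacity numbers in the example are placeholders, not recommendations from this article; you would substitute the sustained IOPS and throughput you have measured on your own hardware.

```python
# Estimate how many users a single file server could support, given measured
# per-user figures and your own (placeholder) server capacity limits.
def max_users(per_user_iops: float, per_user_mbps: float,
              server_iops_capacity: float, nic_capacity_mbps: float) -> int:
    """The tighter constraint (disk or network) sets the user ceiling."""
    by_disk = server_iops_capacity / per_user_iops
    by_network = nic_capacity_mbps / per_user_mbps
    return int(min(by_disk, by_network))


# Steady-state real-world figures (1.1 IOPS and 0.02 Mbps per user) against a
# hypothetical array sustaining 4,000 IOPS behind a single 1 Gbps NIC:
print(max_users(1.1, 0.02, server_iops_capacity=4000, nic_capacity_mbps=1000))
# -> ~3,600 users, disk-bound rather than network-bound in this example
```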
Making Sense of the IOPS Numbers
Now that we have a concrete customer example, let's try to make some sense of the IOPS numbers. During the logon period of the day, when most connections occur, the file server averages roughly a 40/60 split between read and write IOPS. This value is actually in line with the results of the LoginVSI test and the heavy 64-minute workload on my own desktop: my desktop was roughly 60/40, LoginVSI was 50/50, and the real-world customer was 40/60 read versus write.
However, when we look at the file server over the 5+ hours of sustained activity in the middle of the day, we see a major change in the read/write ratio. Our ratio was actually a 17/83% read/write split. So why does the workload shift toward more write operations as the day goes on? The answer lies in the read caching capabilities of the file server and in the read cache of each desktop connecting to it. Once a file is read from disk, the Windows caching system loads it into RAM. The file is actually cached in two places: on the file server and on the Windows client that read it from the file server. So, as the day progresses, more and more read operations are served either by the system RAM cache on the virtual desktop that originally read the file or by the file server's system RAM cache. That is why it is important to give your file server as much memory as possible. This is the same principle that allows our Provisioning Server to function effectively, because it caches the vDisk in memory. You can get more details on Windows caching from two documents I have written:
http://support.citrix.com/article/ctx125126
/blogs/2010/11/05/provisioning-services-and-cifs-stores-performance-tuning/
This native Windows read caching capability is why we recommend that the cache on a RAID controller be configured as a 25/75 split between read and write caching. Since Windows has a built-in caching mechanism for reads, we devote more of the RAID controller's resources to write operations.
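To make the effect of read caching concrete, here is a tiny illustrative calculation (my own sketch, with a made-up cache hit ratio) showing how a balanced client-side read/write mix ends up looking very write-heavy at the disk level.

```python
# Reads served from the file server's system cache or the desktops' local
# caches never reach the spindles, while writes still must be committed to
# disk (via the controller's write cache). The 90% hit ratio is an example.
def disk_ratio_after_caching(client_read_iops: float, client_write_iops: float,
                             read_cache_hit: float) -> tuple:
    """Return the read/write percentage split actually seen by the disks."""
    disk_reads = client_read_iops * (1 - read_cache_hit)
    disk_writes = client_write_iops
    total = disk_reads + disk_writes
    return round(100 * disk_reads / total), round(100 * disk_writes / total)


# A balanced 50/50 client-side mix with a 90% read cache hit rate:
print(disk_ratio_after_caching(100, 100, read_cache_hit=0.9))  # -> (9, 91)
```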
Also, when you compare the IOPS figures for the logon period with those for the extended period of use, you will notice that the IOPS load is much lighter during the logon period; we do not really see the kind of massive IOPS hit on the file server known as a logon storm. I am sure many of you are wondering why, or think the numbers must be wrong. However, I can explain it very easily: we have effectively eliminated the impact of the logon storm because we correctly configured Profile Management and Folder Redirection according to the recommendations in my first blog in this series. Most importantly, we redirected all folders including AppData, so the initial read load of pulling the profile from the file server is quite minimal. In fact, after 18+ months of use, here are the statistics for the profiles and redirected folders on this file server.
| Folder Type | Number of Users | Avg. Size per User |
| --- | --- | --- |
| Profile folder | 5617 | 14.6 MB |
| Redirected folders | 5617 | 89 MB |
The essential thing to note here is that the average size of our Citrix Profile Management profile is less than 15 MB. For those who have worked with profiles for any length of time, you know that is an incredibly small number. We achieved this by properly configuring Profile Management with all the necessary exclusions and by redirecting all of the folders available by default in Windows 7 via Group Policy. You will also notice that the average size of our redirected folders is 89 MB. As mentioned earlier, in this customer environment documents, videos, and music files are not redirected to this file server; those folders are redirected to the user home directories hosted on other file clusters. When you dig deeper, roughly 80%+ of all the data in the redirected folders is AppData. If we did not redirect AppData, each user, on average, would download 70 MB+ every time they logged on. That would indeed place a much greater load on the server spindles and create a logon storm from an IOPS perspective. I will not go into the pros/cons of redirecting AppData, because you can read my detailed reasons on why it should always be redirected for non-persistent desktops in the first part of this blog series. We currently have zero compatibility issues with redirected AppData at this customer, and we have zero performance issues. I have heard some people claim that it slows down applications, but I can guarantee you that if you design your file services infrastructure correctly, it will not have a noticeable negative impact. If you really want to slow your users down, try letting AppData roam or stream and watch what happens to your performance and your file server IOPS load! The reality is that 80%+ of the files in AppData are never actually read during the day, so reading or downloading them unnecessarily not only adds network overhead but, more importantly, pollutes the system cache on your file server and on your virtual Windows 7 desktops with AppData files that do not need to be read and cached. This will decrease the Cache Copy Read Hits % on your file servers and unnecessarily increase your read IOPS!
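To put the AppData point in perspective, here is a quick back-of-the-envelope sketch (my own arithmetic) using the figures quoted above: roughly 70 MB of AppData per user and a logon rate of about one user every 37 seconds. It estimates the extra sustained read traffic the file server would see during the logon period if AppData roamed instead of being redirected, and every one of those reads would also pollute the server and desktop caches.

```python
# Extra sustained network load from copying AppData down at every logon,
# using the ~70 MB per user and ~1 logon per 37 seconds quoted in the post.
def logon_storm_mbps(appdata_mb_per_user: float, logons_per_minute: float) -> float:
    """Average extra network load (Mbps) from pulling AppData at each logon."""
    megabits_per_logon = appdata_mb_per_user * 8
    return megabits_per_logon * logons_per_minute / 60


print(round(logon_storm_mbps(70, 60 / 37), 1))  # ~15.1 Mbps of additional sustained reads
```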
Making Sense of the Network Numbers
So how much network bandwidth do we need to support properly configured Citrix profiles and Folder Redirection? Well, during the LoginVSI test we averaged 0.36 Mbps per user. My real-world customer consumed 0.07 Mbps per user during the peak logon period.
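As a simple illustration (my own, with an assumed 60% headroom factor), here is what those per-user averages imply for a single 1 Gbps NIC. Keep in mind that averages hide the short spikes shown in the Max columns earlier, which is exactly why you leave headroom.

```python
# Rough user ceiling for one 1 Gbps NIC at a given per-user average bandwidth.
# The 60% headroom factor is an assumption, not a figure from the article.
def users_per_gig_nic(mbps_per_user: float, headroom: float = 0.6) -> int:
    """How many users fit on one 1 Gbps link at a given per-user average."""
    return int(1000 * headroom / mbps_per_user)


print(users_per_gig_nic(0.36))  # LoginVSI medium workload: ~1,600 users per 1 Gbps NIC
print(users_per_gig_nic(0.07))  # real-world logon period: ~8,500 users per 1 Gbps NIC
```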