With the introduction of the RAM cache with overflow to disk feature (introduced in PVS 7.1, hotfix required), IOPS can be almost eliminated, and it has therefore become the new leading practice. However, this feature can have a significant impact on write cache sizing, which stems from the difference in how the write cache works with this new option. To understand this behavior and how to size your solution correctly, Citrix Worldwide Consulting ran a series of tests, the results of which are shared in this post. I also want to recognize my co-authors Martin Zugec, Jeff PinterParsons and Brittany Kemp, as this was not a one-man job.
Before we begin, let me note the scope of this article. We will not be covering IOPS, as that topic has already been well covered in previous blogs. If you have not already, we recommend that you read the following blogs by Miguel Contreras and Dan Allen:
Turbo Charging your IOPS with the new PVS Cache in RAM with Disk Overflow feature! - Part One
Turbo Charging your IOPS with the new PVS Cache in RAM with Disk Overflow feature! - Part Two
What is covered here is why write cache sizing can change, what to do about it, and a deep dive into how the feature works. Everything was tested in the lab, and we want to thank the Citrix Solutions Lab team for providing the hardware and support that made this possible. The test environment and methodology are summarized at the end of the blog.
Lessons Learned
Out of all these tests came some recommendations: earlier ones that were reinforced, and new ones based on additional evidence. Based on the test results, Citrix Worldwide Consulting proposes the following recommendations for the new RAM cache with overflow to disk option.
- If transitioning an environment from Cache on device hard disk to RAM cache with overflow to disk, and the RAM buffer is not increased from the default of 64MB, allocate twice as much space for the write cache on disk as a rule of thumb (see the sizing sketch after this list). Remember, this storage does not need to be on a SAN; it can be less expensive local storage.
- The size of the RAM buffer has a big impact on the size of the write cache, since any data that can be held in the RAM buffer consumes no space on the write cache. Therefore, it is recommended to allocate more than the default 64MB RAM buffer. Not only does an increased RAM buffer reduce IOPS requirements (see the blogs mentioned above), it also reduces the write cache size requirements. A larger RAM buffer can offset the larger write cache requirement for environments that do not have the storage capacity. With enough RAM you can even eliminate writes to storage altogether. For desktop operating systems start with 256-512MB, and for server operating systems start with 2-4GB.
- Defragment the vDisk before deploying the image and after major changes. Defragmenting the vDisk resulted in write cache savings of up to 30% or more during testing. This will affect those of you who use versioning, as defragmenting a versioned vDisk is not recommended: defragmenting a versioned vDisk will create excessively large versioned disks (.avhd files). Note: run the defragmentation after merging the vDisk versions. The vDisk can also be defragmented by mounting the VHD on the PVS server and running a manual defragmentation on it. This allows for a more robust defragmentation because the OS is not loaded; an additional 15% reduction in write cache size was seen with this approach over standard defragmentation.
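To put the first two recommendations into numbers, here is a rough Python sizing sketch. The doubling rule of thumb and the fact that data held in the RAM buffer never reaches the overflow file come from the points above; the helper name, the simple subtraction, and the example figures are our own assumptions, so treat it as a starting point rather than a formula.

```python
def estimate_overflow_disk_gb(legacy_cache_gb, ram_buffer_mb=64):
    """Rough per-target write cache disk estimate for RAM cache with overflow.

    legacy_cache_gb -- write cache size observed with Cache on device hard disk
    ram_buffer_mb   -- configured RAM buffer (PVS default is 64MB)
    """
    doubled_gb = legacy_cache_gb * 2                    # rule of thumb from the tests
    ram_credit_gb = max(ram_buffer_mb - 64, 0) / 1024   # data held in RAM never overflows
    return max(doubled_gb - ram_credit_gb, 0.0)

# Hypothetical desktop that used ~4GB of write cache with the legacy option:
print(estimate_overflow_disk_gb(4))                      # 8.0   -> plan ~8GB with the 64MB default
print(estimate_overflow_disk_gb(4, ram_buffer_mb=512))   # ~7.56 -> a larger buffer trims it further
```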
The rest of this post is a deep dive into how the RAM cache with overflow feature works, along with the test results that back up the recommendations above. The test methodology and the environment used for the testing are also detailed, which can help if you plan to perform additional testing in your environment.
Write Cache Sizing: RAM Cache vs. Cache on HDD
As mentioned above, the write cache with the new RAM cache option can grow much larger and much faster than with the previously recommended Cache on device hard disk option if not properly designed for. Where the old option wrote data to the write cache in 4KB clusters, the new option reserves 2MB blocks on the write cache, which are composed of 4KB clusters.
To illustrate the difference, refer to the visual below. Note that although a 2MB block in the write cache actually consists of 512 clusters (4KB * 512 = 2MB), for the sake of simplicity we assume that each block consists of 8 clusters.
When the target device boots, the operating system is not aware of the write cache and writes to the logical disk presented to it (the vDisk). By default, the Windows operating system formats the file system with 4KB clusters, and therefore writes data in 4KB clusters.
The PVS driver then redirects this data at the block level into the write cache. Assume we start with a blank area of the vDisk and write a few files that add up to only about 2MB of data. The operating system writes the files to consecutive clusters on the vDisk; these are redirected to the write cache and fill one complete 2MB block, but also cause an additional 2MB block to be reserved that is underutilized.
Since the OS sees those clusters as empty, they can still be written to later and will be used when additional data is written by the target device.
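To make the block arithmetic concrete, here is a small Python sketch. It is purely illustrative: the 4KB cluster and 2MB block sizes come from the description above, while the function and the cluster numbers are made up.

```python
# Count how many 2MB write cache blocks get reserved for a set of 4KB cluster writes.
CLUSTER = 4 * 1024               # NTFS default cluster size
BLOCK = 2 * 1024 * 1024          # write cache reservation unit (512 clusters)
CLUSTERS_PER_BLOCK = BLOCK // CLUSTER

def reserved_cache_bytes(cluster_numbers):
    """Return (bytes reserved in the write cache, bytes actually written)."""
    blocks_touched = {c // CLUSTERS_PER_BLOCK for c in cluster_numbers}
    return len(blocks_touched) * BLOCK, len(cluster_numbers) * CLUSTER

# ~2MB written to consecutive clusters that start in the middle of a block:
# the run straddles two 2MB blocks, so 4MB is reserved for 2MB of data.
reserved, written = reserved_cache_bytes(range(256, 256 + 512))
print(reserved // 2**20, "MB reserved for", written // 2**20, "MB written")
```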
To test this behavior, we ran a series of tests on both the RAM cache with overflow option and the Cache on device HDD option for comparison.
The following is a comparison of an office worker test running on a Windows 7 target device. From the data we can see:
- The RAM cache option used significantly more space on the write cache drive than the Cache on device HDD option. The reason for this is explained in the next section.
- The RAM cache option is strongly influenced by the size of the RAM buffer. This is because much of the data that would otherwise be written to disk is retained in memory.
Running a similar test on Server 2012 R2 with 40 users shows the same results. Again, the RAM cache with overflow option with the default RAM buffer requires considerably more space than Cache on device HDD, and again this is greatly reduced by increasing the RAM buffer.
To investigate further, we decided to run targeted tests to characterize the write cache behavior. We created 1,000 files on the target device, each 1MB in size. This should translate into 1,000MB of data written to the write cache. As seen below, this was the case for the Cache on device HDD option, but the RAM cache with overflow option roughly doubled that figure. We see a slight savings with an increased RAM buffer, but the savings is simply the additional files that were cached in memory. Keep reading if you want to know why we see this behavior and how to minimize the impact.
Effects of Fragmentation on Write Cache Size
The culprit behind the excessive write cache size above is fragmentation. It occurs because PVS redirection is block based, not file based: redirection to the write cache follows the logical block-level structure of the vDisk. To illustrate this, consider the scenario below.
Note again that for illustrative purposes we use 8 clusters per block, while on disk each block is actually composed of 512 clusters.
Looking at a portion of the vDisk that already has data on it, consider what happens when the free space is fragmented across multiple clusters. The operating system writes the data to the free clusters available to it. As it does so, the PVS driver redirects this data to the write cache and creates 2MB blocks to store it. However, the clusters within those 2MB blocks in the write cache correspond to the clusters the operating system wrote to, and therefore only a few clusters are used within each block while the rest remains unused.
As shown below, six 1KB files (6KB of data) can actually result in a 6MB increase in the write cache! Consider segments like this scattered across the vDisk and the result is a far larger write cache drive.
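The fragmentation effect can be shown with the same kind of counting as the sketch above (again purely illustrative; the cluster placements are invented): the same amount of data reserves far more write cache space when its clusters are scattered across many 2MB regions than when they are contiguous.

```python
CLUSTER, BLOCK = 4 * 1024, 2 * 1024 * 1024          # same assumptions as before

def blocks_reserved(cluster_numbers):
    """Bytes reserved in the write cache for writes to the given clusters."""
    return len({c // (BLOCK // CLUSTER) for c in cluster_numbers}) * BLOCK

contiguous = range(0, 24)                   # defragmented free space: one 2MB block
scattered = [i * 600 for i in range(24)]    # fragmented free space: 24 separate blocks
print(blocks_reserved(contiguous) // 2**20, "MB")   # 2 MB reserved for ~96KB of clusters
print(blocks_reserved(scattered) // 2**20, "MB")    # 48 MB reserved for the same data
```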
The results shown in the previous section were for vDisks that had not been defragmented before testing. Once the vDisk is defragmented, the areas of the disk underneath are arranged more like the visual below. In this case, new data is written much more efficiently into the available space, resulting in a smaller write cache size.
Below are the same tests run again, this time with a defragmented vDisk. We can immediately see that although defragmentation does not affect the Cache on device hard disk size, it has a much greater effect on the RAM cache with overflow option. Now, although the write cache is reduced, it is not completely minimized because certain files are not defragmented. The result for Windows 7 is still significantly higher when the default RAM buffer is used, but actually comes in lower when a 256MB RAM buffer is used.
In the case of Server 2012 R2, which had a high 16% fragmentation, the write cache size decreased by more than 30% after defragmentation. Once a larger RAM buffer is added, the write cache is significantly lower than with the Cache on device HDD option.
To illustrate why the disk cannot be fully defragmented, refer to the following test run. In this test we created 25,000 x 1KB files, which is a little extreme but was done to help prove a point. As shown below, the write cache grows quickly for the first several thousand files as they are written by the operating system to the partially fragmented portions of the disk. Once those portions of the disk are filled, the remaining files are written sequentially and the write cache grows much more slowly.
Final Caveat
The final scenario to examine, which can affect the write cache size with the RAM cache with overflow option, is when existing data on the vDisk is changed. In this case a file, such as a system file, already exists on the vDisk. When this file is changed, the changed clusters are redirected to the write cache. This in turn can result in underutilized space, as 2MB blocks are created in the write cache for changes that are much smaller than 2MB, as can be seen below.
In this case defragmentation does not mitigate the issue, and the potential for a larger write cache still exists.
Test Methodology
The test methodology included two rounds of tests to compare write cache sizes. The first round ran an automated office worker workload on both Windows 7 and Server 2012 R2. Each test was performed 3 times for repeatability, as follows:
- The vDisk write cache option was set for each scenario as described below under test configurations.
- A single target device running PVS Target Device Software 7.1.3 was booted.
- A startup script was launched that logged the size of the write cache to a file on a file share (a sketch of such a logging script follows this list).
- vdiskdif.vhdx was monitored for the RAM cache with overflow option.
- .vdiskcache was monitored for the Cache on device HDD option.
- A typical office worker workload was simulated.
- A single user was simulated on Windows 7.
- 40 users were simulated on Server 2012 R2.
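For reference, here is a minimal Python sketch of what such a logging script could look like. It is our illustration, not the actual Solutions Lab script; the cache file path, share path, and polling interval are assumptions, while the vdiskdif.vhdx / .vdiskcache file names are the ones mentioned above.

```python
# Poll the PVS write cache file on the target's cache drive and append its
# size to a CSV on a file share (runs until the target is shut down).
import csv, os, time
from datetime import datetime

CACHE_FILE = r"D:\vdiskdif.vhdx"                 # use .vdiskcache for the legacy HDD option
LOG_FILE = r"\\fileserver\logs\writecache.csv"   # file share collecting the results
INTERVAL_SECONDS = 60

while True:
    size_mb = os.path.getsize(CACHE_FILE) / 2**20 if os.path.exists(CACHE_FILE) else 0
    with open(LOG_FILE, "a", newline="") as log:
        csv.writer(log).writerow([
            datetime.now().isoformat(),
            os.environ.get("COMPUTERNAME", "unknown"),
            round(size_mb, 1),
        ])
    time.sleep(INTERVAL_SECONDS)
```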
The second round of tests was conducted to determine the effect of small files on write cache growth. It was run only on Windows 7, creating files of various sizes and comparing the resulting write cache sizes. The file creation scenarios are described below under test configurations. Each test was performed 3 times for repeatability, as follows:
- The vDisk write cache option was set for each scenario as described below under test configurations.
- A single target device running PVS Target Device Software 7.1.3 was booted.
- A startup script was launched that logged the size of the write cache to a file on a file share.
- vdiskdif.vhdx was monitored for the RAM cache with overflow option.
- .vdiskcache was monitored for the Cache on device HDD option.
- A second script was launched that generated data by creating a set number of files of a specific size. No user logged on to the machine for the duration of the test.
Test Environment
The environment used for the testing consisted of HP Gen7 (infrastructure) and Gen8 (targets) physical blade servers with excess capacity for the purpose of testing. The components of interest are summarized below, along with a visual summary of the test environment. The entire environment was virtualized on Hyper-V 2012 R2. All components ran PVS 7.1.3.1 software (hotfix CPVS71003).
- 2 x Provisioning Services servers on Windows 2012 R2
- 4 vCPUs and 16GB of vRAM
- 1 XenApp 7.5 target device on Windows 2012 R2
- 8 vCPUs and 44GB of vRAM
- 50GB write cache drive on EMC SAN
- 1 XenDesktop 7.5 target device on Windows 7 SP1 (64-bit)
- 2 vCPUs and 3GB of vRAM
- 15GB write cache drive on EMC SAN
- Optimized as per CTX127050
The environment used a 10Gbps core switch with 3Gbps allocated to PVS streaming.
Test Configurations
Three write cache configurations were tested for both XenApp 7.5 and XenDesktop 7.5 to provide baseline and comparative write cache size data.
- 1. Cache on device hard drive: This provided the baseline write cache size for the well-known legacy write cache option. It was used for comparison against Cache on device RAM with overflow to hard disk.
- 2. Cache on device RAM with overflow to hard disk (default 64MB): This test was conducted to evaluate the growth of the write cache with the default RAM buffer.
- 3. Cache on device RAM with overflow to hard disk (optimized): This test was conducted to evaluate write cache drive growth when a larger RAM buffer is used, as currently recommended. RAM buffers of 256MB for desktop OS and 2048MB for server OS were tested. Note that with a large enough RAM buffer, the write cache will never write to disk.
After further investigation, several tests were repeated with a freshly defragmented disk in order to investigate the effects. The two vDisks the tests were run against were the following:
- The first set of runs on Windows 7 was tested on a versioned vDisk. The vDisk had a single version, which was used to apply the optimizations as per good practice. On Server 2012 R2, the first set of runs was on a base vDisk that had not been defragmented and had 16% fragmentation.
- The second set of runs used the same vDisk after it was merged and defragmented. Fragmentation on the vDisk after merging was 1%.
Thank you for reading my post,
Amit Ben-Chanoch
Worldwide Consulting
Desktop & Apps Team
Project Accelerator
Virtual Desktop Handbook