Hi everybody,
Some of my users complained about degraded performance after I migrated the host from 5.1 to 5.5. After a bit of poking around, I just had to remove the IOPS limits in order to restore their previous performance...
So I ran a really simple test on 2 VMs. Here is the setup for the test:
- A host running vSphere 5.1, with an SSD card plugged into it (OCZ ZDrive R4). One W2k12 R2 VM running on it, VM hardware version 9, VMware Tools build 9221. Two disks, each limited to 500 IOPS, so 1000 IOPS in total.
- A host running vSphere 5.5, with the same SSD card plugged in. One VM cloned from the first, with the VM hardware version upgraded to 10 and VMware Tools upgraded to build 9349. The IOPS limits are the same.
- Storage I/O Control is disabled on both hosts; I did some tests with SIOC enabled, but they weren't conclusive.
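For reference, here is a minimal pyVmomi sketch of how the per-disk limit described above can be set programmatically (in my case it was done through the vSphere Client; the vCenter address, credentials and VM name below are placeholders). The relevant property is the virtual disk's storageIOAllocation.limit, where -1 means unlimited:

import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

def set_disk_iops_limit(vm, limit_iops):
    # Apply the same IOPS limit to every virtual disk of a VM (-1 = unlimited)
    changes = []
    for dev in vm.config.hardware.device:
        if isinstance(dev, vim.vm.device.VirtualDisk):
            if dev.storageIOAllocation is None:
                dev.storageIOAllocation = vim.StorageResourceManager.IOAllocationInfo()
            dev.storageIOAllocation.limit = limit_iops
            changes.append(vim.vm.device.VirtualDeviceSpec(
                operation=vim.vm.device.VirtualDeviceSpec.Operation.edit,
                device=dev))
    return vm.ReconfigVM_Task(vim.vm.ConfigSpec(deviceChange=changes))

ctx = ssl._create_unverified_context()            # lab vCenter, self-signed cert
si = SmartConnect(host="vcenter.example.local",   # placeholder vCenter
                  user="administrator@vsphere.local", pwd="***", sslContext=ctx)
content = si.RetrieveContent()
view = content.viewManager.CreateContainerView(content.rootFolder,
                                               [vim.VirtualMachine], True)
vm = next(v for v in view.view if v.name == "w2k12-test")  # placeholder VM name
set_disk_iops_limit(vm, 500)    # 500 IOPS per disk, as in the setup above
# set_disk_iops_limit(vm, -1)   # removes the limit again (what I had to do on 5.5)
view.Destroy()
Disconnect(si)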
Test procedure:
1. Grabbed this test configuration file: http://vmktree.org/iometer/OpenPerformanceTest32.icf
2. Installed Iometer v1.1.0 (Iometer project - Downloads, prebuilt binaries for x86-64)
3. Launched the "Max Throughput-100%Read" access specification with no change, aside from the "Transfer Request Size", which I changed from 8 KB to 512 KB
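Just to make the expectation explicit: if the cap is applied strictly per I/O, independent of transfer size, the throughput ceilings for my configuration work out as below (plain Python, nothing vSphere-specific, numbers taken from the setup above):

def max_throughput_mb_s(iops_limit, io_size_kb):
    # Upper bound on throughput for a strict per-I/O limit
    return iops_limit * io_size_kb / 1024.0

for size_kb in (8, 512):
    print(f"{size_kb:>4} KB requests at 1000 IOPS total -> "
          f"{max_throughput_mb_s(1000, size_kb):.0f} MB/s ceiling")
# ->    8 KB requests at 1000 IOPS total ->   8 MB/s ceiling
# ->  512 KB requests at 1000 IOPS total -> 500 MB/s ceiling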
Here are the results:
As you can see, the limit is enforced completely differently from one version to the other... Moreover, the average response time is completely insane on ESX 5.5!
I searched around for an explanation of this change, but I didn't find anything...
So, how can I revert to the previous behaviour? We have different types of workload and different I/O sizes, so I can't come up with a simple formula to derive a proper limit. I need a hard IOPS limit, regardless of I/O size...
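For what it's worth, here is a rough and purely hypothetical illustration of the problem. It assumes, only for the sake of the example, that the limit were charged in fixed 32 KB cost units instead of per I/O; I don't know whether that is what 5.5 actually does:

import math

def effective_iops(configured_limit, io_size_kb, cost_unit_kb=32):
    # IOPS actually achievable if each I/O is charged ceil(size / cost_unit) units
    units_per_io = math.ceil(io_size_kb / cost_unit_kb)
    return configured_limit / units_per_io

for size_kb in (8, 64, 512):
    print(f"{size_kb:>4} KB I/Os -> {effective_iops(1000, size_kb):7.1f} effective IOPS")
# ->    8 KB I/Os ->  1000.0 effective IOPS
# ->   64 KB I/Os ->   500.0 effective IOPS
# ->  512 KB I/Os ->    62.5 effective IOPS

With anything like that, no single configured number gives a constant IOPS ceiling across our mix of I/O sizes, which is why I need a hard, size-independent limit like on 5.1.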
Thanks all,
-Vincent.