Over the last several years, the storage industry has been inundated with a single acronym: IOPS.
Not that long ago, I worked for a cloud service provider, and one of the most frequently asked questions from customers (both technical and non-technical) was, “How many IOPS can your storage platform deliver?”, or some variation of it.
As the CTO, standing shoulder to shoulder with the VP of Sales, representing the flagship service offering for the company, I could have said… well, frankly, any number I wanted, and the customer would have taken it as gospel and marveled at it. Instead, in an attempt to leave the customer with a better impression of who we were as a company, I chose to take the time to explain that IOPS by itself, with no further definition, is ambiguous at best and is absolutely incomplete as a measure of performance. So I proceeded.
Speaking of performance simply in terms of IOPS is like comparing the performance of two sports cars based solely on the maximum RPMs the engine can produce. But of course there are many other factors to take into consideration besides just RPMs.
First, the acronym ‘IOPS’ stands for Input/Output Operations Per Second. It is great for comparing relative performance when all else is equal, which is generally achievable when you perform testing in-house and keep all of the benchmarking elements (hardware, software, test parameters, etc.) consistent.
The first major problem is that all is not equal. Every hardware vendor has its own test setup with its own hardware, software, and test parameters. So it is not difficult to conclude that inconsistencies in testing methods will produce wildly variable performance results. The next logical question, then, is, “Under which conditions were the results achieved?”.
If I have two storage devices I want to test and benchmark (HDD, SSD, etc), the more I can ensure test parameter consistency, the more certain I can be that the relative comparisons of IOPS will be accurate. But this is precisely where we come off the rails when considering IOPS as the end-all-be-all for absolutely comparing storage device performance. Every storage vendor tests their devices differently. A few test parameters used in benchmarking are: % read, % write, % random, % sequential, queue depth, and block size.
Here is where the next problem shows up. While all of the above-mentioned test parameters can affect the IOPS result, block size is especially influential. Block size is the size (usually in KB) of each I/O operation, and it is inversely related to IOPS: as block size decreases, IOPS increases, and vice versa. Don’t expect this relationship to be linear, but do expect block size to have a fairly significant impact on the IOPS result.
Based on this relationship, if I want to show some impressive performance numbers to the crowd, I am going to test using a small block size in order to yield the best IOPS result. This is why I refer to block size as the vendor sleight-of-hand. Most vendors do not publish the block size value alongside their IOPS value. Some vendors will use 512 bytes, for example, which artificially inflates their IOPS figure. It is not wrong to say that at a 512-byte block size the device is capable of X number of IOPS, but it isn’t exactly real-world accurate. For comparison, a typical real-world block size ranges between 4KB and 8KB (8 to 16 times the size of the vendor-tested block size). The same can be said of the ‘% random’ and ‘% sequential’ ratios, because all real-world environments have both occurring. The bottom line is, IOPS can be very deceiving without the whole story.
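To see how much the block size choice can inflate a headline number, here is a quick sketch. The device and its throughput figure are hypothetical; the point is only the arithmetic, since a device with a fixed transfer rate will post very different IOPS figures depending on the block size used in the test.

```python
# Hypothetical device for illustration: one that can sustain
# roughly 400 MiB/s of reads, regardless of block size.
throughput_bytes_per_sec = 400 * 1024 * 1024

for block_size in (512, 4096, 8192):  # 512B vendor test vs. 4KB/8KB real-world
    iops = throughput_bytes_per_sec // block_size
    print(f"{block_size:>5} B blocks -> {iops:,} IOPS")

# The 512-byte figure comes out 8-16x larger than the 4KB/8KB figures,
# even though the device itself hasn't changed at all.
```

Same device, same transfer rate, but the 512-byte test prints the far more marketable number.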
What other important relationship exists between IOPS and block size? Interestingly, if you multiply these two values together, you arrive at the effective throughput of the device. For example, ‘# of read operations per sec’ x ‘4KB per operation’ = ‘read KBs per sec’.
In conclusion, to gain the most accurate view of performance, I recommend considering the following attributes of any storage device:
- Total device read throughput
- Total device write throughput
- Read access latency
- Write access latency
And with regard to test parameters, the following are important to know:
- Number of read IOPS at a given block size
- Number of write IOPS at a given block size
- Percent random I/O
- Percent sequential I/O
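As a sketch, the checklist above can be captured as a simple record, so that two devices are only ever compared on matching terms. The device names and figures below are hypothetical, purely to show the shape of the comparison:

```python
from dataclasses import dataclass

@dataclass
class StorageBenchmark:
    # Attributes from the checklist above; all figures for a device
    # must come from the same test run to be meaningful.
    device: str
    read_mb_per_sec: float    # total device read throughput
    write_mb_per_sec: float   # total device write throughput
    read_latency_ms: float    # read access latency
    write_latency_ms: float   # write access latency
    read_iops: int            # read IOPS at block_size_kb
    write_iops: int           # write IOPS at block_size_kb
    block_size_kb: int        # block size used for the IOPS figures
    pct_random: int           # percent random I/O in the test mix
    pct_sequential: int       # percent sequential I/O in the test mix

# Two hypothetical devices tested under identical parameters:
a = StorageBenchmark("Device A", 520, 480, 0.15, 0.20, 90_000, 80_000, 4, 70, 30)
b = StorageBenchmark("Device B", 450, 500, 0.10, 0.18, 95_000, 85_000, 4, 70, 30)

# Only compare IOPS head-to-head when the test parameters match.
comparable = (a.block_size_kb == b.block_size_kb
              and a.pct_random == b.pct_random)
print("IOPS figures comparable:", comparable)
```

The guard at the end is the whole point of this article in one line: an IOPS number without its test parameters is not comparable to anything.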
For additional information on IOPS, there is an excellent article by Leah Schoeb, “The truth about SSD performance benchmarks”, which covers these concepts as they relate to flash technology.