Fifteen years ago I wrote a little filesystem benchmark called Bonnie. I hadn’t maintained it in years and there are a few interesting forks out there. Suddenly, by accident, I found myself fiddling with the Bonnie code, and I think I’m going to call the new version “Bonnie 64”. Herewith, for those who care about filesystem performance, the details.
The History · I stopped working on Bonnie sometime in the mid-Nineties. When I left Open Text in 1996 I kicked around a few ideas on what to do next; one of them was to build a business around Bonnie, becoming a clearing-house and central information resource for disk and filesystem benchmarking. I quickly got sucked into the XML project and consulting work, and the only left-over is the long-unmaintained Bonnie page at Textuality.
One of the issues with Bonnie is that it was 32-bit software and wouldn’t run tests on datasets larger than 2.1 gigabytes, the most a signed 32-bit file offset can address. This is a problem because, to test disk performance, you need test files much bigger than main memory, and lots of computers have more than 2G of main memory these days.
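To make the ceiling concrete: a signed 32-bit file offset tops out at 2^31 - 1 bytes, which is where the 2.1G number comes from. Here’s a little stand-alone C sketch (mine, for illustration; it’s not code from any Bonnie version) of the glibc-style way to ask for 64-bit offsets:

/* A sketch of the 32-bit problem, not code from any Bonnie version:
 * with a 32-bit off_t, file offsets are signed and top out at
 * 2^31 - 1 bytes, about 2.1G.  On glibc systems, defining
 * _FILE_OFFSET_BITS as 64 before any #include widens off_t. */
#define _FILE_OFFSET_BITS 64    /* ask for a 64-bit off_t */

#include <stdio.h>
#include <sys/types.h>

int main(void)
{
    off_t four_gig = (off_t) 4 * 1024 * 1024 * 1024;  /* fine in 64 bits */
    printf("off_t is %d bytes; a 4G offset is %lld\n",
           (int) sizeof(off_t), (long long) four_gig);
    return 0;
}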
Bonnie++ · At some point after I abandoned Bonnie, I started corresponding with Russell Coker, an excellent Aussie Open-Source hacker, who wanted to crack the 32-bit barrier by extending Bonnie to work with lots of little files. I thought that was a fine idea, and then he said he also wanted to migrate Bonnie from C to C++, and I thought that was a terrible idea, purely for aesthetic reasons; I can’t stand C++.
So Russell, with my agreement but without any of my involvement, created Bonnie++. It has done very well and is quite widely used; I reported recently on some interesting Bonnie++ numbers around the Linux “Reiser 4” filesystem.
Bonnie++ is not only 64-bit but also tests a bunch of things, such as small-file creation/deletion rates, that Bonnie doesn’t.
SuSE et al · In 1996, Linus Torvalds wrote:
If somebody wants to do benchmarking, I'd suggest using at least
- lmbench (nice microbenchmark)
- bonnie (reasonable disk performance benchmark)
...
And over the years, Bonnie’s been used in Linux-land not just for benchmarking but for stress-testing filesystem implementations. Now it ships with most Linux distros. SuSE was the first, and I interacted with the engineer there who added it. I haven’t been able to keep track of which distro ships which versions of Bonnie or Bonnie++.
Solaris · When I joined Sun, I discovered that we had, for many years, been using the old original un-improved Bonnie as part of the Solaris Hardware Compatibility Test Suite.
What with our big Solaris-on-64-bit-Intel push, that version of Bonnie really needed revision; a Sun group in Beijing led by Phost Feng cracked Bonnie’s 32-bit limitation in a natural 64-bit way, sent me the code, and asked me what I thought.
I did a couple of runs on my OS X box here and it was fine (built first time!) except for the output.
Reporting · One of the reasons that Bonnie has been popular all these years is its nice, compact, readable reporting format, which squeezes a whole lot of information about filesystem performance into a single line of output. This really works well when you’re testing a bunch of different configurations; you can publish a very compressed report, one line per configuration, which does a good job of illustrating what’s going on.
Except that, when I was testing, I saw two kinds of breakage in the output.
The first was just a minibug: some printf controls needed to be %ld instead of %d.
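For the curious, here’s a tiny stand-alone illustration (not the actual Bonnie source) of why this matters on an LP64 system, where long is 64 bits but int is only 32:

/* A tiny stand-alone illustration, not the actual Bonnie source:
 * on an LP64 system long is 64 bits but int is 32, so handing a
 * long to printf under a %d control is undefined behavior and
 * prints junk once the value no longer fits in 32 bits. */
#include <stdio.h>

int main(void)
{
    long big = 3L * 1024 * 1024 * 1024;  /* 3G, too big for an int */
    /* printf("%d\n", big);  -- the bug: control/argument mismatch */
    printf("%ld\n", big);    /* the fix: %ld matches the long */
    return 0;
}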
Second, for small tests on big computers, some of the numbers, reported in KByte/sec, were getting so big that they were jostling the report columns out of place. So I decided that everything should be reported in M/sec, and the resulting reports are much cleaner. Here’s a sample.
              -------Sequential Output-------- ---Sequential Input-- --Random--
              -Per Char- --Block--- -Rewrite-- -Per Char- --Block--- --Seeks---
Machine    MB M/sec %CPU M/sec %CPU M/sec %CPU M/sec %CPU M/sec %CPU  /sec %CPU
1.25GMac   20  13.4 97.6  33.5 18.4  46.8 25.7  14.5 97.0 295.3 73.8  4069 21.4
1.25GMac  200  13.6 97.3  18.7 10.1  19.5 10.4  14.7 98.6 383.1 95.8  1797 10.8
1.25GMac 2000  13.3 95.9  19.5 10.2   7.9  4.6   9.2 62.4  15.4  5.0    78  0.9
This computer has 768M on it; note that the Bonnie results when your memory’s bigger than your test data are generally bogus, since well-designed Unix-lineage systems (and OS X is one) try hard to buffer everything to avoid doing I/O. The only way to defeat this and actually test I/O rates is to completely flood the available buffer space. This is the right thing to do, because in many production applications, memory is maxed out anyhow, so the actual I/O rate (what Bonnie measures) becomes an important performance-limiting factor.
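If you want to automate the bigger-than-memory rule, something like this sketch works where the _SC_PHYS_PAGES sysconf extension is available (glibc has it, and so, I believe, does OS X); it’s not part of Bonnie:

/* A sketch (not part of Bonnie) of automating the bigger-than-RAM
 * rule, using the _SC_PHYS_PAGES sysconf extension. */
#include <stdio.h>
#include <unistd.h>

int main(void)
{
    long long ram_mb = (long long) sysconf(_SC_PHYS_PAGES)
                     * sysconf(_SC_PAGESIZE) / (1024 * 1024);

    /* Test data should comfortably exceed physical memory, or the
     * buffer cache absorbs the I/O and you measure nothing real. */
    printf("RAM: %lld M; use a dataset of at least %lld M\n",
           ram_mb, 2 * ram_mb);
    return 0;
}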
Blocksize Surprise · Bonnie uses a block size of 16384 when it’s trying to do efficient file I/O. That was a reasonable number in 1990, but it looked awfully small to me. I changed it to 128K and re-ran it on a 2G dataset, with almost no difference in the reported results, except that the block I/O burned a little less CPU while running slightly (but consistently) slower. So I guess 16K is fine.
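For anyone who wants to repeat the experiment, the heart of a block-I/O phase is just a chunked sequential write; here’s a minimal sketch (the file name and total size are made up for illustration, only the 16384 is the constant in question):

/* A minimal sketch of the kind of chunked sequential write a
 * block-I/O test does; 16384 is the chunk size in question. */
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

#define CHUNK 16384                 /* try 128 * 1024 for comparison */

int main(void)
{
    static char buf[CHUNK];         /* zero-filled scratch block */
    long long total = 256LL * 1024 * 1024;   /* 256M demo file */
    long long done;
    int fd = open("scratch.tmp", O_WRONLY | O_CREAT | O_TRUNC, 0644);

    if (fd < 0) { perror("open"); return 1; }
    for (done = 0; done < total; done += CHUNK)
        if (write(fd, buf, CHUNK) != CHUNK) { perror("write"); return 1; }
    close(fd);
    return 0;
}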
Bonnie 64? · So, I guess I’ll publish this version somewhere under the name “Bonnie 64”, and I hereby appeal to the maintainers of the other versions to get in touch; I have no idea if there’s any interest in trying to unify things, but at least we should say hello to each other.