
Bonnie++

What

Bonnie++ is a benchmark tool for testing hard disk and file system performance.
 
Installation
 
Get the file from sunfreeware.com; the file name is bonnie++-1.03d-sol10-sparc-local.gz.
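First gunzip the archive (assuming it was downloaded to the current directory):
# gunzip bonnie++-1.03d-sol10-sparc-local.gz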

Now translate the data stream into a package in the current location:
# pkgtrans bonnie++-1.03d-sol10-sparc-local .
Install the package:
# pkgadd -d . SMCbonn
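To check that the package is installed, pkginfo can be used:
# pkginfo -l SMCbonn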
How does it work and examples

Bonnie++ works in two ways. The first is an IO test, which tries to simulate how an application or a database works with a large file. The second tests creating, reading and deleting many small files. Run the program without options to see the usage:
 /usr/local/sbin/bonnie++ 

You must use the "-u" switch when running as root.

usage: bonnie++ [-d scratch-dir] [-s size(MiB)[:chunk-size(b)]]
                [-n number-to-stat[:max-size[:min-size][:num-directories]]]
                [-m machine-name]
                [-r ram-size-in-MiB]
                [-x number-of-tests] [-u uid-to-use:gid-to-use] [-g gid-to-use]
                [-q] [-f] [-b] [-p processes | -y]
Version: 1.03d

-d = directory in which to run the tests
-s = size of the file(s) for the IO test. Use zero to skip this test. For a realistic test, use a size that is double the RAM, so that file system caching does not skew the results.
-n = number of files for the file creation test (measured in multiples of 1024 files)
-m = hostname for display purposes
-r = RAM size in MiB; you can skip this as long as you make sure yourself that -s is 2 x RAM
-x = number of test runs
-u/-g = run Bonnie++ as this user/group. If only a user is specified, that user's primary group is used. It is recommended not to run as root.
-q = quiet mode, so you may miss some messages
-f = fast mode (skips the per-character IO test, which writes/reads a single character at a time; only the block IO test is used)
-b = no write buffering (fsync() after every write)
-p = number of processes to synchronize via semaphores (used together with -y)
The examples below were performed on a SunFire T2000 with 8 GB of RAM. The first test was on the local hard disk, run as the nagios user, writing 16 GB to disk; it took about 40 minutes:
#  /usr/local/sbin/bonnie++ -d /.0/bonnie-local/ -s 16g -m server_name -f -b -u nagios 

Using uid:100, gid:101.
Writing intelligently...done
Rewriting...done
Reading intelligently...done
start 'em...done...done...done...
Create files in sequential order...done.
Stat files in sequential order...done.
Delete files in sequential order...done.
Create files in random order...done.
Stat files in random order...done.
Delete files in random order...done.

Version 1.03d       ------Sequential Output------ --Sequential Input- --Random-
                    -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
Machine        Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP  /sec %CP
server_name     16G           51181  55 23045  47           56212  46 294.1   4
                    ------Sequential Create------ --------Random Create--------
                    -Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--
              files  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP
                 16   145   7 +++++ +++   162   6   162   6 +++++ +++   152   6

server_name,16G,,,51181,55,23045,47,,,56212,46,294.1,4,16,145,7,+++++,+++,162,6,162,6,+++++,+++,152,6
Here we see that the output (write to a file) speed is 51 MB/s and the input (read from a file) speed is 56 MB/s. Bonnie++ also shows:
1. The rewrite speed: a block is read, changed, written back and re-read (simulating how a database works).
2. The seek speed: seek requests are issued continuously so that the disk stays busy.
Let's compare this with a write/read test using 'dd':
# /.0/bonnie-local> foreach i ( 1 2 )
foreach? echo ---- write to output file ----
foreach? /usr/local/bin/dd if=/dev/zero of=${i} bs=128k count=131072
foreach?  echo ---- read from input file ---
foreach? /usr/local/bin/dd if=${i} of=/dev/zero bs=128k count=131072
foreach? end

---- write to output file ----
131072+0 records in
131072+0 records out
17179869184 bytes (17 GB) copied, 318.901 s, 53.9 MB/s
---- read from input file ---
131072+0 records in
131072+0 records out
17179869184 bytes (17 GB) copied, 295.673 s, 58.1 MB/s
---- write to output file ----
etc etc
This is a bit faster, but Bonnie++ should be more realistic. Now let's look at the performance of DAS (Direct Attached Storage): a StorEdge 3510 connected to the T2000 with a 2 Gbit/s FC cable. I created two logical drives on it and put ZFS on top of them. The first is RAID1 (a mirror) consisting of 2 physical disks, the second is RAID5 consisting of 3 physical disks.
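For reference, the pools and file systems over those two logical drives were created roughly along these lines (the LUN device names are hypothetical):
# zpool create pool-raid1 c2t40d0
# zpool create pool-raid5 c2t40d1
# zfs create pool-raid1/bonnie-raid1
# zfs create pool-raid5/bonnie-raid5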
# zpool list
NAME         SIZE   USED  AVAIL    CAP  HEALTH  ALTROOT
pool-raid1    68G  16.0G  52.0G    23%  ONLINE  -
pool-raid5   136G   136K   136G     0%  ONLINE  -

# zfs list
NAME                      USED  AVAIL  REFER  MOUNTPOINT
pool-raid1               16.0G  50.9G    19K  /pool-raid1
pool-raid1/bonnie-raid1  16.0G  50.9G  16.0G  /pool-raid1/bonnie-raid1
pool-raid5                132K   134G    19K  /pool-raid5
pool-raid5/bonnie-raid5    18K   134G    18K  /pool-raid5/bonnie-raid5
Bonnie++ on RAID1
#  /usr/local/sbin/bonnie++ -d /pool-raid1/bonnie-raid1 -s 16g -n 0 -m server_name -f -b -u nagios 
Using uid:100, gid:101.
Writing intelligently...done
Rewriting...done
Reading intelligently...done
start 'em...done...done...done...

Version 1.03d       ------Sequential Output------ --Sequential Input- --Random-
                    -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
Machine        Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP  /sec %CP
server_name     16G           58408  72 32513  58           100124  72 537.1  12
server_name,16G,,,58408,72,32513,58,,,100124,72,537.1,12,,,,,,,,,,,,,
Bonnie++ on RAID5
#  /usr/local/sbin/bonnie++ -d /pool-raid5/bonnie-raid5 -s 16g -n 0 -m server_name -f -b -u nagios 
Using uid:100, gid:101.
Writing intelligently...done
Rewriting...done
Reading intelligently...done
start 'em...done...done...done...

Version 1.03d       ------Sequential Output------ --Sequential Input- --Random-
                    -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
Machine        Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP  /sec %CP
server_name    16G           54601  65 31029  56           128065  92 606.3   7
server_name,16G,,,54601,65,31029,56,,,128065,92,606.3,7,,,,,,,,,,,,,
If we repeat the test on the local disk without the file creation test, the input speed is faster:
#  /usr/local/sbin/bonnie++ -d /.0/bonnie-local -s 16g -n 0 -m server_name -f -b -u nagios 

Using uid:100, gid:101.
Writing intelligently...done
Rewriting...done
Reading intelligently...done
start 'em...done...done...done...

Version 1.03d       ------Sequential Output------ --Sequential Input- --Random-
                    -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
Machine        Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP  /sec %CP
server_name    16G           51977  51 29662  62           73907  59 984.3   9
server_name,16G,,,51977,51,29662,62,,,73907,59,984.3,9,,,,,,,,,,,,,
Let's test the speed over NFS (the NIC is 100 Mbit/s) on the DAS RAID5.
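The RAID5 file system was shared from the server and mounted on the client roughly like this (the mount point /mnt is taken from the test below, the rest is illustrative):
# zfs set sharenfs=on pool-raid5/bonnie-raid5
# mount -F nfs server_name:/pool-raid5/bonnie-raid5 /mnt
Then the same test is run on the client: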
#  /usr/local/sbin/bonnie++ -d /mnt -s 16g -n 0 -m nfs_client -f -b -u nagios 

Using uid:100, gid:101.
Writing intelligently...done
Rewriting...done
Reading intelligently...done
start 'em...done...done...done...

Version 1.03d       ------Sequential Output------ --Sequential Input- --Random-
                    -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
Machine        Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP  /sec %CP
nfs_client     16G            9436  25  5748  17           11288  18 322.9  19
nfs_client,16G,,,9436,25,5748,17,,,11288,18,322.9,19,,,,,,,,,,,,,
Let's test the speed over iSCSI (the NIC is 100 Mbit/s) on the DAS RAID1.
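On the client, the Solaris iSCSI initiator was configured roughly along these lines (the target address is made up; creating a file system on the new LUN and mounting it under /iscsi-no-chap is not shown):
# iscsiadm add discovery-address 192.168.1.20:3260
# iscsiadm modify discovery --sendtargets enable
# devfsadm -i iscsi
Then the test is run again, this time with a 2 GB file: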
#  /usr/local/sbin/bonnie++ -d /iscsi-no-chap/bonnie -s 2g -n 0 -m iscsi_client -f -b -u nagios 

Using uid:100, gid:101.
Writing intelligently...done
Rewriting...done
Reading intelligently...done
start 'em...done...done...done...

Version 1.03d       ------Sequential Output------ --Sequential Input- --Random-
                    -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
Machine        Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP  /sec %CP
iscsi_client    2G            2896   5  2201   5           10731  13 116.3   3
iscsi_client,2G,,,2896,5,2201,5,,,10731,13,116.3,3,,,,,,,,,,,,,
Well, NFS performs better than iSCSI here. Bonnie++ comes with a Perl script, bon_csv2html, that creates a nice HTML page with the results. The input is the last (CSV) line of the output; just pipe it to the script:
# echo "iscsi_client,2G,,,2896,5,2201,5,,,10731,13,116.3,3,,,,,,,,,,,,," | perl bon_csv2html > /tmp/iscsi_client.html
And you get a nice table like this:
                     --Sequential Output--  -Sequential Input-  -Random-
                     --Block--  -Rewrite-   --Block--           -Seeks--
Machine        Size  K/sec %CPU K/sec %CPU  K/sec %CPU          /sec %CPU
iscsi_client    2G    2896    5  2201    5  10731   13          116.3   3
(the per-character and file-creation columns of the HTML table are empty because the test was run with -f and -n 0)
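If you collect several CSV result lines in one file, bon_csv2html should render one table row per line, so different machines and disks can be compared side by side (the file name is illustrative):
# perl bon_csv2html < results.csv > /tmp/all-results.html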