[nycbug-talk] Measuring Disk Performance

Henry M henry95 at gmail.com
Thu Dec 15 00:54:12 EST 2011


Whenever I've benchmarked disks, I've always just used dd and /dev/zero.

 Example: I want to see how fast I can write a 1GB file

$ dd if=/dev/zero of=1GB bs=1024 count=1048576
1048576+0 records in
1048576+0 records out
1073741824 bytes (1.1 GB) copied, 8.72947 s, 123 MB/s

As long as the system load is consistent, you should get consistent
results. You can have fun running multiple instances at once, to simulate
heavier "real-world" load.
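A minimal sketch of the parallel-run idea (the /tmp paths and the small
16 MiB sizes are just placeholders for a demo; bs is spelled out in bytes
since GNU and BSD dd disagree on size suffixes):

```shell
# Two dd writers running concurrently to simulate heavier load.
# bs=1048576 is 1 MiB, written out in bytes for GNU/BSD dd portability.
dd if=/dev/zero of=/tmp/load1 bs=1048576 count=16 2>/dev/null &
dd if=/dev/zero of=/tmp/load2 bs=1048576 count=16 2>/dev/null &
wait                                  # block until both writers finish
ls -l /tmp/load1 /tmp/load2           # each should be 16 MiB
rm -f /tmp/load1 /tmp/load2           # clean up the test files
```

Scale bs/count up (and time each writer) for a real test.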

You can change the block size, or the count, accordingly. Just be careful
what values you give dd: a typo can easily fill up your disk or clobber
something important (yes, I've done both).
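For example, to compare block sizes while holding the total size constant,
scale count inversely (a sketch; the 64 MiB total and the /tmp path are
arbitrary placeholders):

```shell
# Write the same 64 MiB with three different block sizes and
# compare the transfer rate dd prints for each run.
for bs in 4096 65536 1048576; do
    count=$((67108864 / bs))          # total bytes / block size
    echo "bs=${bs}:"
    dd if=/dev/zero of=/tmp/ddtest bs="$bs" count="$count"
    rm -f /tmp/ddtest                 # don't leave 64 MiB lying around
done
```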

-Henry

On Mon, Dec 12, 2011 at 7:31 PM, Pete Wright <pete at nomadlogic.org> wrote:

> On Mon, Dec 12, 2011 at 11:57:41AM -0500, Isaac Levy wrote:
> > Hi All,
> >
> > Disks: I need to measure disk performance across heterogeneous UNIX
> boxes, and I'm sick of bonnie++ for a bunch of reasons I won't waste time
> on here.
> >
> > I know from conversations that a few folks here have their own ways of
> testing disks-
> >
> > I'd really like to know what people do to measure general disk
> performance?  e.g. really simple tests:
> >
> >   - r/w/d large files
> >   - r/w/d small files
> >   - disk performance when directories contain large numbers of files
> >
> > I commonly have need to test things like:
> >   - different block sizes
> >   - different inode allocations (UFS/ext3)
> >   - different filesystem partition layouts
> >   - different filesystem features (think ZFS fun)
> >
> > Any thoughts, experiences, urls or shell utils to share?
> >
>
> hey ike!
> there is actually a pretty decent chapter on measuring disk and
> filesystem performance in "PostgreSQL 9.0 High Performance".  they talk
> about using a tool bundled with bonnie++, called zcav, that will track
> transfer rates from the beginning to the end of a disk subsystem.  it
> also outputs data in a gnuplot-friendly format for pictures.  i used
> this quite extensively while tuning a linux data warehouse a while back.
>
> other tools that I'm happy with are iozone and fio:
>
> http://www.iozone.org/
> http://freecode.com/projects/fio
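> as a concrete starting point, a small fio job file looks something like
> this (a sketch -- the file name, the sizes, and the /tmp target are
> assumptions to adapt to your setup):
>
>   ; randwrite.fio -- 4k random writes against a 256 MiB file in /tmp
>   [global]
>   directory=/tmp
>   size=256m
>   bs=4k
>   runtime=30
>   time_based
>
>   [randwrite]
>   rw=randwrite
>
> run it with "fio randwrite.fio".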
>
> i find that when benchmarking systems for eval purposes i end up using
> a mixture of many different tools, since different tools will stress
> different parts of a given i/o subsystem.  so i'd generally do
> something like:
>
> - initial test using dd with variable blocksizes (dependent upon
> underlying filesystem block size)
> - several bonnie++ tests, followed by some tests using iozone and fio
> - depending on how the system will be used in production i try some
>  application-level tests: for a db, pgbench; for a webserver, apache
>  bench; etc.
>
> I have also done some interesting testing with Tsung, a package written
> in erlang that does a good job of generating load against a wide range
> of applications:
> http://tsung.erlang-projects.org/
>
> Hope This Helps!
> -pete
>
> --
> Pete Wright
> pete at nomadlogic.org
>
> _______________________________________________
> talk mailing list
> talk at lists.nycbug.org
> http://lists.nycbug.org/mailman/listinfo/talk
>
