[nycbug-talk] Measuring Disk Performance

Jesse Callaway bonsaime at gmail.com
Thu Dec 15 10:12:33 EST 2011


The advantage of some of the heavier tools is that they attempt to defeat
caching mechanisms so that you can see worst-case performance.
dd looks cool since it certainly skips any possibility of filesystem
speedups.
I'd just say to make sure 1G (or whatever you test with) is large enough to
blow past the controller and on-disk caches.
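
Something along these lines (a rough sketch, not tested here; bs=1m is BSD dd
syntax, GNU dd spells it bs=1M, and conv=fsync isn't in every dd) writes well
past a typical controller/drive cache and forces a sync before dd reports its
rate:

$ dd if=/dev/zero of=bigtest bs=1m count=4096 conv=fsync
$ rm bigtest

That's 4GB sequential; bump the count up if your controller has a big
battery-backed cache.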
On Dec 15, 2011 9:21 AM, "Isaac Levy" <ike at blackskyresearch.net> wrote:

> On Dec 15, 2011, at 12:54 AM, Henry M wrote:
>
> Whenever I've benchmarked disks, I've always just used dd and /dev/zero.
>
>  Example: I want to see how fast I can write a 1GB file
>
> $ dd if=/dev/zero of=1GB bs=1024 count=1048576
> 1048576+0 records in
> 1048576+0 records out
> 1073741824 bytes (1.1 GB) copied, 8.72947 s, 123 MB/s
>
> As long as the system load is consistent, you should get consistent
> results. You can have fun by running multiple instances at once, to simulate
> heavier "real-world" load.
>
>
> Oh- &, a little xargs, and some date(1) and time(1) fun.
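>
> Something like this does the trick (a rough sketch, untested; the file names
> and counts are just placeholders, adjust to taste):
>
> $ time sh -c 'for i in 1 2 3 4; do dd if=/dev/zero of=test$i bs=1024 count=1048576 & done; wait'
> $ rm test1 test2 test3 test4
>
> That's four 1GB writers backgrounded at once, with time(1) giving the
> wall-clock total for the whole batch.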
>
>
> You can change the block size or count accordingly. Just be careful what
> values you give dd; you can easily fill up your disk or break something
> nasty with a typo. (Yes, I've done both.)
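>
> For instance, the same 1GB file with bigger blocks (just to show the knobs;
> bs=1m is BSD dd syntax, GNU dd wants bs=1M):
>
> $ dd if=/dev/zero of=1GB bs=1m count=1024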
>
> -Henry
>
>
> Yeaaahhhh- this is exactly what I was thinking, but I didn't think to just
> dd from /dev/zero and build some small tests.
>
> Sweet.
>
> Rocket-
> .ike
>
>
>