RAID, performance tests

Equipment:

Athlon X2 5000+, 3x Western Digital 250GB drives (WDC WD2500JS-22NCB1 10.02E02 SATA-300), Nvidia nForce onboard RAID controller, Promise TX2300 RAID controller
Athlon 64 3500+, 5x Seagate 750GB drives (Seagate ST3750640NS 3.AEE SATA-150), Nvidia nForce onboard RAID controller

Procedure:

read throughput measured with /usr/bin/time -h on two simultaneous cp operations, each copying a 3.1GB random binary file to /dev/null (a reproduction sketch follows)
test files generated with dd if=/dev/random of=/data/random.bin bs=16M count=200
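
The exact invocation isn't recorded here; a minimal sketch of one way to reproduce the test, assuming a second file /data/random2.bin generated the same way as the first:

 # generate the two test files (200 x 16M = ~3.1GB each)
 dd if=/dev/random of=/data/random.bin bs=16M count=200
 dd if=/dev/random of=/data/random2.bin bs=16M count=200
 # time the two simultaneous sequential reads
 /usr/bin/time -h sh -c 'cp /data/random.bin /dev/null & cp /data/random2.bin /dev/null & wait'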

Notes:

system default of vfs.read_max=8 (see the sysctl sketch after these notes)
bonnie++ was flirted with, but we couldn't figure out how to make it read a big enough dataset to ever once hit the disk instead of the cache!
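
To inspect and raise the read-ahead limit (a sketch; vfs.read_max=128 is the value used for the faster runs below, and the sysctl.conf line makes it survive reboots):

 # show the current value
 sysctl vfs.read_max
 # raise it for the running system
 sysctl vfs.read_max=128
 # make it permanent
 echo 'vfs.read_max=128' >> /etc/sysctl.conf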

Test data:

vfs.read_max=8:
 1 250GB disk: 3m 56s
 2 250GB disks, gmirror round-robin: 4m 38s
 3 250GB disks, gmirror round-robin: 3m 24s
vfs.read_max=128:
 5 750GB disks, graid3: 0m 51s (peak: 140+ MB/sec)
 3 250GB disks, graid3: 1m 05s (peak: 130+ MB/sec)
 3 250GB disks, graid3 -r: 1m 13s (peak: 120+ MB/sec)
 2 250GB disks, nVidia onboard RAID1: 1m 19s (peak: 120+ MB/sec)
 2 250GB disks, Promise TX2300 RAID1: 1m 32s (peak: 100+ MB/sec)
 3 250GB disks, gmirror round-robin: 1m 40s (peak: 65+ MB/sec)
 3 250GB disks, gmirror split 128K: 1m 52s (peak: 65+ MB/sec)
 1 250GB disk: 1m 55s (peak: 60+ MB/sec)
 2 250GB disks, gmirror round-robin: 1m 57s (peak: 65+ MB/sec)
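
For reference, a sketch of how arrays like these are created with gmirror(8) and graid3(8); the device names ad4/ad6/ad8 are placeholders for whatever disks are actually in the machine:

 kldload geom_mirror geom_raid3
 # 3-disk gmirror with round-robin read balancing
 gmirror label -b round-robin gm0 ad4 ad6 ad8
 # or split balancing with a 128K slice size
 gmirror label -b split -s 131072 gm0 ad4 ad6 ad8
 # 3-disk graid3; -r allows reads from the parity component (the "graid3 -r" row above)
 graid3 label -r data ad4 ad6 ad8
 newfs /dev/mirror/gm0        # or newfs /dev/raid3/data
 # watch throughput live during a test run
 iostat -w 1 ad4 ad6 ad8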

Preliminary conclusions:

system default of vfs.read_max=8 is insufficient for ANY configuration, including a vanilla single drive
gmirror read performance sucks; surprisingly, so do both Promise RAID1 and nVidia RAID1: why the hell aren't RAID1 reads interleaved across the mirrors the way RAID0 reads are?
graid3 is the clear performance king here, and it offers a very significant write performance increase as well
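
For anyone wanting to retest gmirror with its other balance algorithms (load, prefer), an existing mirror can be reconfigured on the fly; a sketch, assuming the mirror is named gm0:

 # switch the balance algorithm without rebuilding the mirror
 gmirror configure -b load gm0
 # confirm the new settings
 gmirror list gm0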