RAID, performance tests

Equipment:

Athlon X2 5000+ 
    3x Western Digital 250GB drives (WDC WD2500JS-22NCB1 10.02E02 SATA-300)
    2x Western Digital 500GB drives (WDC WD5000AAKS-00YGA0 12.01C02)
    Nvidia nForce onboard RAID controller, Promise TX2300 RAID controller
Athlon 64 3500+ 
    5x Seagate 750GB drives (Seagate ST3750640NS 3.AEE SATA-150)
    Nvidia nForce onboard RAID controller

Procedure:

/usr/bin/time -h measuring simultaneous cp of 3.1GB files to /dev/null
    files generated with dd if=/dev/random bs=16M count=200
write performance tested with dd if=/dev/zero bs=16M count=200 (see the sketch after this list)
simultaneous cp processes use physically separate files
sysctl -w vfs.read_max=128 unless otherwise stated
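
For reference, one full run of the procedure looks something like the sketch below. The file paths and the two-reader example are illustrative assumptions; the actual tests used one to five readers on physically separate files:

 # raise the read-ahead limit; the system default of 8 is far too low (see Notes)
 sysctl -w vfs.read_max=128
 # generate physically separate 3.1GB test files (paths are hypothetical)
 dd if=/dev/random of=/raid/test1 bs=16M count=200
 dd if=/dev/random of=/raid/test2 bs=16M count=200
 # read test: time simultaneous cp processes reading to /dev/null
 /usr/bin/time -h cp /raid/test1 /dev/null &
 /usr/bin/time -h cp /raid/test2 /dev/null &
 wait
 # write test
 /usr/bin/time -h dd if=/dev/zero of=/raid/writetest bs=16M count=200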

Notes:

system default of vfs.read_max=8
testing showed that files generated from /dev/random performed no differently on read than files generated from /dev/zero
bonnie++ was flirted with, but we couldn't figure out how to make it read chunks of data big enough to ever hit the disk instead of the cache (a possible workaround is sketched below)
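
For what it's worth, bonnie++'s -s flag sets the size of the test file, and pushing it well past installed RAM should force real disk reads. Untested here; the sizes and target directory below are assumptions:

 # hypothetical run: 16GB test file on a machine with 8GB or less of RAM
 # -d target directory, -s file size in MB, -r RAM size in MB, -u user to run as
 bonnie++ -d /raid -s 16384 -r 8192 -u nobody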

Test data:

write performance (1 process)
5 250GB/500GB, graid3               : 153 MB/s
5 250GB/500GB, graid3 -r            : 142 MB/s
1 500GB drive                       :  72 MB/s
1 250GB drive                       :  58 MB/s


read performance (1 process)
5 250GB/500GB, graid3               : 213 MB/s (dips down to 160 MB/s)
5 750GB disks, graid3               : 152 MB/s (wildly fluctuating, 120-200 MB/s)
3 250GB disks, graid3               : 114 MB/s (dips down to 90 MB/s)
1 500GB drive                       :  76 MB/s
1 750GB drive                       :  65 MB/s (60-70 MB/s)
1 250GB drive                       :  56 MB/s (very little variation)
2 processes
5 250GB/500GB, graid3               : 128 MB/s (peak: 155+ MB/s)
5 750GB disks, graid3               : 125 MB/s (peak: 140+ MB/s)
3 250GB disks, graid3               :  98 MB/s (peak: 130+ MB/s)
3 250GB disks, graid3 -r            :  88 MB/s (peak: 120+ MB/s)
2 250GB disks, nVidia onboard RAID1 :  81 MB/s (peak: 120+ MB/s)
2 250GB disks, Promise TX2300 RAID1 :  70 MB/s (peak: 100+ MB/s)
3 250GB disks, gmirror round-robin  :  64 MB/s (peak: 65+ MB/s)
3 250GB disks, gmirror split 128K   :  57 MB/s (peak: 65+ MB/s)
1 250GB disk                        :  56 MB/s (peak: 60+ MB/s)
2 250GB disks, gmirror round-robin  :  55 MB/s (peak: 65+ MB/s)
3 processes
5 250GB/500GB, graid3               : 106 MB/s (peak: 130+ MB/s, low: 90+ MB/s)
5 250GB/500GB, graid3 -r            : 103 MB/s (peak: 120+ MB/s, low: 80+ MB/s)
4 processes
5 250GB/500GB, graid3               : 105 MB/s (peak: 130+ MB/s, low: 90+ MB/s)
5 250GB/500GB, graid3 -r            : 105 MB/s (peak: 120+ MB/s, low: 80+ MB/s)
5 processes
5 250GB/500GB, graid3 -r            : 107 MB/s (peak: 120+ MB/s, low: 80+ MB/s)
5 250GB/500GB, graid3               : 105 MB/s (peak: 130+ MB/s, low: 90+ MB/s)
1 500GB disk                        :  84 MB/s
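
For context, the graid3 and gmirror arrays above would have been created with something along these lines. Device names and labels are assumptions, not the original configuration, and each stanza is a separate alternative on the same disks:

 # 5-disk RAID3 array; -r turns on round-robin reading (the "graid3 -r" rows above)
 kldload geom_raid3
 graid3 label -v -r gr0 ad4 ad6 ad8 ad10 ad12
 newfs /dev/raid3/gr0
 # 3-disk mirror using the round-robin balance algorithm
 kldload geom_mirror
 gmirror label -v -b round-robin gm0 ad4 ad6 ad8
 # 3-disk mirror using the split balance algorithm with a 128K slice size
 gmirror label -v -b split -s 131072 gm1 ad4 ad6 ad8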

Ancillary data:

 vfs.read_max=8, 2 parallel cp processes
 1 250GB disk: 3m 56s
 2 250GB disks, gmirror round-robin: 4m 38s
 3 250GB disks, gmirror round-robin: 3m 24s

Preliminary conclusions:

the system default of vfs.read_max=8 is insufficient for ANY configuration, including a vanilla single drive (see the example after this list)
gmirror read performance sucks
Promise and nVidia RAID1 are better, but oddly still SIGNIFICANTLY slower than graid3: wtf?
graid3 is the clear performance king here, and offers a very significant write performance increase as well
SATA-II offers significant performance increases over SATA-I on large arrays
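
Given that, raising vfs.read_max is worth making permanent. A minimal sketch; 128 is simply the value used in these tests:

 # take effect immediately
 sysctl vfs.read_max=128
 # persist across reboots
 echo 'vfs.read_max=128' >> /etc/sysctl.conf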