
RAID3, Software, How to setup

From FreeBSDwiki
[[Category:FreeBSD Terminology]]

[[Category:FreeBSD for Servers]]

[[Category:RAID]]

Latest revision as of 17:55, 25 August 2012

In this example, we will set up a FreeBSD 6.2-RELEASE system with an 80GB SATA system drive at /dev/ad0, and five 750GB SATA drives available at /dev/ad1 through /dev/ad5. Once we're done, we'll have those five 750GB SATA drives in a RAID3 array (i.e., four data drives plus one parity drive) with a total storage space of 2.8 terabytes. (The volume will only show 2.6T, but that's because of the 8% of space that FreeBSD reserves by default for root's use.)

# graid3 load
# graid3 label myraid3array ad1 ad2 ad3 ad4 ad5
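One detail worth knowing: with graid3, the last provider given to the label command becomes the dedicated parity disk (ad5 in this example). Before going further, you can sanity-check that the kernel module is loaded and inspect the new array in detail. This is just a sketch; the exact output will vary by system:

```shell
# Confirm the geom_raid3 kernel module is loaded
kldstat | grep geom_raid3

# Show detailed state of the new array: its components, state,
# and the provider name it exposes under /dev/raid3/
graid3 list myraid3array
```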

You just made a RAID3 array... yes, it really was that easy. Check it out:

# graid3 status
        Name        Status  Components
raid3/myraid3array  COMPLETE  ad1
                              ad2
                              ad3
                              ad4
                              ad5

Now we need to format it (note the -U argument, to enable Soft Updates on the new array):

# newfs -U /dev/raid3/myraid3array

You'll get several pages of cluster IDs scrolling by extremely rapidly at this point. On the example 5x750GB array we're discussing here, this step took about 90 seconds and scrolled several thousand lines.
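If you want to double-check that Soft Updates actually got enabled, tunefs can print the filesystem's current tuning parameters (a quick optional check against the new device):

```shell
# Print the filesystem's tuning parameters; the "soft updates"
# line should show "enabled" thanks to the newfs -U above
tunefs -p /dev/raid3/myraid3array
```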

With the array formatted, now we can mount it:

# mkdir /mnt/myraid3array
# mount /dev/raid3/myraid3array /mnt/myraid3array

And we're done: we now have a failure-tolerant array available!

# df -h /mnt/myraid3array
Filesystem                 Size    Used   Avail   Capacity  Mounted on
/dev/raid3/myraid3array    2.6T    12K    2.6T    0%        /mnt/myraid3array

If you want to mount your new array automatically on boot, just add an entry to /etc/fstab, and add geom_raid3_load="YES" to /boot/loader.conf to make sure that the RAID3 module will load at boot time before filesystems are mounted. (The loader.conf entry may not be strictly necessary under all installations, but can't ever hurt.) You're done!
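Put together, the boot-time pieces look something like the following. The mount point and mount options here are just the ones from this example; adjust them for your own setup:

```shell
# /boot/loader.conf -- load the RAID3 module before filesystems are mounted
geom_raid3_load="YES"

# /etc/fstab -- mount the array automatically at boot
# Device                   Mountpoint         FStype  Options  Dump  Pass#
/dev/raid3/myraid3array    /mnt/myraid3array  ufs     rw       2     2
```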
