RAID(4)		       FreeBSD Kernel Interfaces Manual		       RAID(4)

NAME
     raid -- RAIDframe disk driver

SYNOPSIS
     device raidframe

DESCRIPTION
     The raid driver provides RAID 0, 1, 4, and 5 (and more!) capabilities
     to FreeBSD.  This document assumes that the reader has at least some
     familiarity with RAID and RAID concepts.  The reader is also assumed
     to know how to configure disks and pseudo-devices into kernels, how
     to generate kernels, and how to partition disks.

     RAIDframe provides a number of different RAID levels, including:

     RAID 0  provides simple data striping across the components.

     RAID 1  provides mirroring.

     RAID 4  provides data striping across the components, with parity
             stored on a dedicated drive (in this case, the last
             component).

     RAID 5  provides data striping across the components, with parity
             distributed across all the components.

     There are a wide variety of other RAID levels supported by RAIDframe,
     including Even-Odd parity, RAID level 5 with rotated sparing, Chained
     declustering, and Interleaved declustering.  The reader is referred
     to the RAIDframe documentation mentioned in the HISTORY section for
     more detail on these various RAID configurations.

     Depending on the parity level configured, the device driver can
     tolerate the failure of component drives.  The number of failures
     allowed depends on the parity level selected.  If the driver is able
     to handle drive failures, and a drive does fail, then the system is
     operating in "degraded mode".  In this mode, all missing data must be
     reconstructed from the data and parity present on the other
     components.  This results in much slower data accesses, but does mean
     that a failure need not bring the system to a complete halt.

     The RAID driver supports and enforces the use of `component labels'.
     A `component label' contains important information about the
     component, including a user-specified serial number, the row and
     column of that component in the RAID set, and whether the data (and
     parity) on the component is `clean'.  If the driver determines that
     the labels are very inconsistent with respect to each other (e.g. two
     or more serial numbers do not match), or that a component label is
     not consistent with its assigned place in the set (e.g. the component
     label claims the component should be the 3rd one in a 6-disk set, but
     the RAID set has it as the 3rd component in a 5-disk set), then the
     device will fail to configure.  If the driver determines that exactly
     one component label seems to be incorrect, and the RAID set is being
     configured as a set that supports a single failure, then the RAID set
     will be allowed to configure, but the incorrectly labeled component
     will be marked as `failed', and the RAID set will begin operation in
     degraded mode.  If all of the components are consistent among
     themselves, the RAID set will configure normally.
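
     For example, the component state and labels of a configured set can
     be inspected with raidctl(8).  The following commands assume the
     option letters inherited from the NetBSD port of raidctl(8), with
     illustrative device names:

           raidctl -s raid0              # show component and parity status
           raidctl -g /dev/da1e raid0    # print the component label of da1e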

     Component labels are also used to support the auto-detection and
     auto-configuration of RAID sets.  A RAID set can be flagged as
     auto-configurable, in which case it will be configured automatically
     during the kernel boot process.  RAID file systems which are
     automatically configured are also eligible to be the root file
     system.  There is currently only limited support (alpha and pmax
     architectures) for booting a kernel directly from a RAID 1 set, and
     no support for booting from any other RAID sets.  To use a RAID set
     as the root file system, a kernel is usually obtained from a small
     non-RAID partition, after which any auto-configuring RAID set can be
     used for the root file system.  See raidctl(8) for more information
     on auto-configuration of RAID sets.
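
     For example, an existing set might be flagged as auto-configurable,
     and additionally as eligible to contain the root file system, with
     commands along these lines (the option letters shown are those of the
     NetBSD raidctl(8) this port is based on, and should be verified
     against the installed raidctl(8)):

           raidctl -A yes raid0          # enable auto-configuration
           raidctl -A root raid0        # also allow use as the root device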

     The driver supports `hot spares', disks which are on-line, but are
     not actively used in an existing file system.  Should a disk fail,
     the driver is capable of reconstructing the failed disk onto a hot
     spare or back onto a replacement drive.  If the components are
     hot-swappable, the failed disk can then be removed, a new disk put in
     its place, and a copyback operation performed.  The copyback
     operation, as its name indicates, will copy the reconstructed data
     from the hot spare to the previously failed (and now replaced) disk.
     Hot spares can also be hot-added using raidctl(8).
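
     A typical failure-handling sequence, using the raidctl(8) options
     described in its manual page, might look like this (device and
     component names are illustrative):

           raidctl -a /dev/da3e raid0    # hot-add da3e as a spare
           raidctl -F /dev/da1e raid0    # fail da1e, reconstruct to spare
           raidctl -B raid0              # copy data back to replaced disk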

     If a component cannot be detected when the RAID device is configured,
     that component will simply be marked as `failed'.

     The user-land utility for doing all RAID configuration and other
     operations is raidctl(8).  Most importantly, raidctl(8) must be used
     with the -i option to initialize all RAID sets.  In particular, this
     initialization includes re-building the parity data.  This rebuilding
     of parity data is also required a) when a new RAID device is brought
     up for the first time, and b) after an unclean shutdown of a RAID
     device.  By using the -P option to raidctl(8), and performing this
     on-demand recomputation of all parity before doing a fsck(8) or a
     newfs(8), file system integrity and parity integrity can be ensured.
     It bears repeating that parity recomputation is required before any
     file systems are created or used on the RAID device.  If the parity
     is not correct, then missing data cannot be correctly recovered.
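
     For a newly created set, the sequence is typically as follows (raid0
     and the partition name are illustrative):

           raidctl -i raid0       # initialize the set; rewrites the parity
           newfs /dev/raid0e      # create the file system only afterwards

     After an unclean shutdown of an existing set, raidctl -P raid0
     followed by fsck(8) serves the same purpose.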

     RAID levels may be combined in a hierarchical fashion.  For example,
     a RAID 0 device can be constructed out of a number of RAID 5 devices
     (which, in turn, may be constructed out of the physical disks, or of
     other RAID devices).
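
     As a sketch, a RAID 0 set striped over two existing RAID 5 devices
     could be described to raidctl(8) with a configuration file along
     these lines (the layout parameters are illustrative; the file format
     is documented in raidctl(8)):

           START array
           # numRow numCol numSpare
           1 2 0

           START disks
           /dev/raid0e
           /dev/raid1e

           START layout
           # sectPerSU SUsPerParityUnit SUsPerReconUnit RAID_level
           128 1 1 0

           START queue
           fifo 100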

     It is important that drives be hard-coded at their respective
     addresses (i.e. not left free-floating, where a drive with SCSI ID of
     4 can end up as /dev/da0c) for well-behaved functioning of the RAID
     device.  This is true for all types of drives, including IDE, SCSI,
     etc.  For IDE drives, use the option ATAPI_STATIC_ID in your kernel
     configuration file.  For SCSI, you should `wire down' the devices
     according to their ID.  See cam(4) for examples of this.  The
     rationale for fixing the device addresses is as follows: consider a
     system with three SCSI drives at SCSI IDs 4, 5, and 6, which map to
     components /dev/da0e, /dev/da1e, and /dev/da2e of a RAID 5 set.  If
     the drive with SCSI ID 5 fails, and the system reboots, the old
     /dev/da2e will show up as /dev/da1e.  The RAID driver is able to
     detect that component positions have changed, and will not allow
     normal configuration.  If the device addresses are hard-coded,
     however, the RAID driver would detect that the middle component is
     unavailable, and bring the RAID 5 set up in degraded mode.  Note that
     the auto-detection and auto-configuration code does not care about
     where the components live.  The auto-configuration code will
     correctly configure a device even after any number of the components
     have been re-arranged.
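
     As an illustration of wiring down, the three drives from the example
     above might be pinned in the kernel configuration file roughly as
     follows (the controller name ahc0 is an assumption; see cam(4) for
     the authoritative syntax):

           device   scbus0 at ahc0              # fix the SCSI bus number
           device   da0 at scbus0 target 4 unit 0
           device   da1 at scbus0 target 5 unit 0
           device   da2 at scbus0 target 6 unit 0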

     The first step to using the raid driver is to ensure that it is
     suitably configured in the kernel.  This is done by adding a line
     similar to:

	   pseudo-device   raidframe	  # RAIDframe disk device

     to the kernel configuration file.  No count argument is required as
     the driver will automatically create and configure new device units
     as needed.  To turn on component auto-detection and
     auto-configuration of RAID sets, simply add:

	   options    RAID_AUTOCONFIG

     to the kernel configuration file.

     All component partitions must be of the type FS_BSDFFS (e.g. 4.2BSD)
     or FS_RAID.  The use of the latter is strongly encouraged, and is
     required if auto-configuration of the RAID set is desired.  Since
     RAIDframe leaves room for disklabels, RAID components can simply be
     raw disks, or partitions which use an entire disk.
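
     For example, the fstype of a component partition can be set to RAID
     by editing the label with disklabel -e; the relevant line might look
     like this (size and offset are illustrative):

           #      size   offset    fstype   [fsize bsize bps/cpg]
           e:  4194304        0      RAID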

     A more detailed treatment of actually using a raid device is found in
     raidctl(8).  It is highly recommended that the steps to reconstruct,
     copyback, and re-compute parity are well understood by the system
     administrator(s) before a component failure.  Doing the wrong thing
     when a component fails may result in data loss.

WARNINGS
     Certain RAID levels (1, 4, 5, 6, and others) can protect against some
     data loss due to component failure.  However, the loss of two
     components of a RAID 4 or 5 system, or the loss of a single component
     of a RAID 0 system, will result in the loss of all file systems on
     that RAID device.  RAID is NOT a substitute for good backup
     practices.

     Recomputation of parity MUST be performed whenever there is a chance
     that it may have been compromised.  This includes after system
     crashes, or before a RAID device has been used for the first time.
     Failure to keep parity correct will be catastrophic should a
     component ever fail -- it is better to use RAID 0 and get the
     additional space and speed than it is to use parity but not keep the
     parity correct.  At least with RAID 0 there is no perception of
     increased data security.

FILES
     /dev/raid*	     raid device special files.

SEE ALSO
     raidctl(8), config(8), fsck(8), mount(8), newfs(8)

HISTORY
     The raid driver in FreeBSD is a port of RAIDframe, a framework for
     rapid prototyping of RAID structures developed by the folks at the
     Parallel Data Laboratory at Carnegie Mellon University (CMU).
     RAIDframe, as originally distributed by CMU, provides a RAID
     simulator for a number of different architectures, and a user-level
     device driver and a kernel device driver for Digital Unix.  The raid
     driver is a kernelized version of RAIDframe v1.1, based on the NetBSD
     port of RAIDframe by Greg Oster.

     A more complete description of the internals and functionality of
     RAIDframe is found in the paper "RAIDframe: A Rapid Prototyping Tool
     for RAID Systems", by William V. Courtright II, Garth Gibson, Mark
     Holland, LeAnn Neal Reilly, and Jim Zelenka, and published by the
     Parallel Data Laboratory of Carnegie Mellon University.  The raid
     driver first appeared in FreeBSD 4.4.

COPYRIGHT
     The RAIDframe Copyright is as follows:

     Copyright (c) 1994-1996 Carnegie-Mellon University.
     All rights reserved.

     Permission to use, copy, modify and distribute this software and
     its documentation is hereby granted, provided that both the copyright
     notice and this permission notice appear in all copies of the
     software, derivative works or modified versions, and any portions
     thereof, and that both notices appear in supporting documentation.

     CARNEGIE MELLON ALLOWS FREE USE OF THIS SOFTWARE IN ITS "AS IS"
     CONDITION.  CARNEGIE MELLON DISCLAIMS ANY LIABILITY OF ANY KIND
     FOR ANY DAMAGES WHATSOEVER RESULTING FROM THE USE OF THIS SOFTWARE.

     Carnegie Mellon requests users of this software to return to

      Software Distribution Coordinator  or  Software.Distribution@CS.CMU.EDU
      School of Computer Science
      Carnegie Mellon University
      Pittsburgh PA 15213-3890

     any improvements or extensions that they make and grant Carnegie the
     rights to redistribute these changes.

FreeBSD	11.1		       October 20, 2002			  FreeBSD 11.1
