Server Management


unRAID Server management is accomplished through the use of a browser-based Management Utility.

Connecting To the Server Management Utility

Normally, to connect to the Management Utility, simply type the name of your server into your browser’s address bar:

[Screenshot: the browser address bar (http://lime-technology.com/Images/address_bar_small.jpg)]

The default server name is tower. This may be changed on the Settings page, or you may plug the Flash into your PC and edit the config/ident.cfg file directly. Alternatively, instead of typing the server name, you may enter the server’s IP address.

Also by default, upon boot unRAID Server will attempt to contact the local network DHCP server to obtain an IP address. If your network does not have a DHCP server, or if you want to assign a static IP address to your server, you must plug the Flash into your PC and edit the config/network.cfg file where these settings are stored.
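
For reference, both of these files are plain text files of KEY="value" assignments that you can edit with any text editor. The exact key names may vary between releases, so treat the following as a hypothetical sketch of a static-IP setup rather than a definitive template:

  # config/ident.cfg (hypothetical example)
  NAME="tower"            # server name shown on the network
  COMMENT="Media server"  # comment string shown on the network

  # config/network.cfg (hypothetical example: static IP instead of DHCP)
  USE_DHCP="no"
  IPADDR="192.168.1.100"
  NETMASK="255.255.255.0"
  GATEWAY="192.168.1.1"

After saving your changes, return the Flash to the server and reboot so the new settings are read at startup.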

Main

The home page of the Management Utility is called Main. This page displays all the vital information about the hard drives in the unRAID Server array. This page is divided into four horizontal sections:

  1. A header area which displays the name of your server and a Comment string. These are the name and comment of your server as they appear in My Network Places. Both strings may be changed on the Settings page.
  2. A menu bar which displays a list of subpages. These will be explained below.
  3. The Disk Status section which displays all the critical information and status of the hard drives in your unRAID array.
  4. The Command area which consists of a set of buttons which let you Start and Stop the server, as well as initiate various utility operations.

Disk status

There is a line in this section for each disk (hard drive) of your unRAID server. In the unRAID organization, one hard drive serves as the parity disk; the other hard drives are called data disks.

The parity disk is what provides the redundancy in a RAID system. The parity disk is updated every time you write any of the data disks. If one day a data disk fails, there is sufficient information on the parity disk to permit the system to reconstruct the contents of the failed disk onto a new disk.
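
To make the idea concrete, here is a simplified, hypothetical illustration of parity at a single bit position (conceptually, the parity bit is the XOR of the corresponding bits on all data disks):

  disk1 bit:  1
  disk2 bit:  0
  disk3 bit:  1
  parity bit: 1 XOR 0 XOR 1 = 0

  If disk2 fails, its bit can be recomputed from the surviving disks:
  disk2 bit = parity XOR disk1 XOR disk3 = 0 XOR 1 XOR 1 = 0

The same calculation applies independently to every bit position on every disk, which is also why the parity disk must be at least as large as the largest data disk.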

IMPORTANT: One requirement of the unRAID™ system is that the capacity of the parity disk must be at least as large as the capacity of the largest data disk.

The data disks are exported and appear as shares named disk1, disk2, …, in My Network Places under Windows.

Colored status indicator
The significance of the color indicator at the beginning of each line is as follows:
Green: the hard drive status is Normal
Yellow: the data contents of the actual hard drive are invalid. The parity disk has this status when Parity-Sync is taking place. A data disk has this status during Reconstruction.
Red: the disk is disabled.
Blue: a new disk not currently part of the array.
Grey: indicates no disk present.
Blinking: indicates the corresponding disk has been spun-down.
Model/Serial No.
This data is read directly from the hard drive.
Temperature
This is the temperature reported by the hard drive via S.M.A.R.T. When the disk is spun down, there will be an asterisk (*) displayed here instead. This is because sending the command to a hard drive to obtain S.M.A.R.T. information would cause it to spin up.
Size
This is the raw capacity of the hard drive, expressed as a number of 1024-byte blocks (a worked example follows this list).
Free
This is the amount of free space in the disk’s file system, expressed as the number of 1024-byte blocks. The free space of a freshly formatted disk will always be less than the disk’s raw size because of file system overhead.
Reads, Writes, Errors
The Read and Write statistics display the number of 4096-byte read and write operations that have been performed by the disk.
The Error statistic displays the number of read and write operations which have failed. In a protected array, any single-disk read error will be corrected on-the-fly (using parity reconstruction). The Error counter will increment for every such occurrence. Any single-disk write error will result in the Error counter being incremented and that disk being disabled.
Upon system boot the statistics start out cleared; they may also be manually cleared at any time (refer to the Settings page).
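
As a worked example of the units used above (the drive size is hypothetical): a 500 GB hard drive reports a Size of about 488,281,250 blocks, because 500,000,000,000 bytes ÷ 1024 bytes per block = 488,281,250. Likewise, copying a 1 GB file generates roughly 244,000 write operations, because 1,000,000,000 bytes ÷ 4096 bytes per operation ≈ 244,141.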

Command area

Starting and Stopping the array

Normally, following system boot-up, the array (the complete set of disks) is automatically started (brought on-line and exported as a set of shares). But if there’s been a change in disk configuration, such as a new disk being added, the array is left stopped so that you can confirm the configuration is correct. This means that any time you’ve made a disk configuration change, you must log into the Management Utility and manually start the array.

Disk configuration changes

Here are the normal configuration changes you can make:

  • You add one or more new disks.
  • You replace a single disk with a bigger one.
  • You replace a failed disk.
  • You shuffle two or more data disks between slots.
  • You remove one or more data disks.

Add one or more new disks

This is the normal case of expanding the capacity of the system by adding one or more new hard drives:

  1. Stop the array.
  2. Power down the server.
  3. Install your new hard drive(s).
  4. Power up the unit.
  5. Start the array.

When you Start the array, the system will first format the new disk(s). When this operation finishes, all the data disks, including the new one(s), will be exported and be available for use.

The format operation consists of two phases. First, the entire contents of the new disk(s) are cleared (written with zeros), and the disk(s) are then marked active in the array. Next, a file system is created. unRAID Server uses the ReiserFS journalled file system.

The clearing phase is necessary to preserve the fault-tolerance characteristic of the array: if one of the other disks fails at any time while the new disk(s) are being cleared, you will still be able to recover the data of the failed disk. Unfortunately, the clearing phase can take several hours depending on the size of the new disk(s).
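
Conceptually, the clearing phase does the same thing as the following Linux command, which writes zeros across the entire raw device (shown purely to illustrate what the system does internally; /dev/sdX is a placeholder, and you should never run this yourself against a drive containing data):

  dd if=/dev/zero of=/dev/sdX bs=1M   # overwrite every block of the new disk with zeros

Because a disk containing only zeros contributes nothing to the parity calculation, it can be merged into the array without invalidating the existing parity, which is what preserves fault tolerance during the operation.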

The capacity of any new disk(s) added must be the same size or smaller than your parity disk. If you wish to add a new disk which is larger than your parity disk, then you must instead first replace your parity disk. (You could use your new disk to replace parity, and then use your old parity disk as a new data disk.)

Replace a single disk with a bigger one

This is the case where you are replacing a single small disk with a bigger one:

  1. Stop the array.
  2. Power down the unit.
  3. Replace smaller disk with new bigger disk.
  4. Power up the unit.
  5. Start the array.

When you start the array, the system will reconstruct the contents of the original smaller disk onto the new disk. Upon completion, the disk’s file system will be expanded to reflect the new size. You can only expand one disk at a time.

A special case exists when the new bigger disk is also bigger than the existing parity disk. In this case you must use your new disk to first replace parity, and then replace your small disk with your old parity disk:

  1. Stop the array.
  2. Power down the unit.
  3. Replace smaller parity disk with new bigger disk.
  4. Power up the unit.
  5. Start the array.
  6. Wait for Parity-Sync to complete.
  7. Stop the array.
  8. Power down the unit.
  9. Replace smaller data disk with your old parity disk.
  10. Power up the unit.
  11. Start the array.

Replace a failed disk

This is the case where you have replaced a failed disk with a new disk:

  1. Stop the array.
  2. Power down the unit.
  3. Replace the failed hard disk with a new one.
  4. Power up the unit.
  5. Start the array.

When you Start the array after replacing a failed disk, the system will reconstruct the contents of the failed disk onto the new disk; and, if the new disk is bigger, expand the file system.

You must replace a failed disk with a disk which is as big or bigger than the original and not bigger than the parity disk. If the replacement disk is larger than your parity disk, then the system permits a special configuration change called swap-disable.

For swap-disable, you use your existing parity disk to replace the failed disk, and you install your new big disk as the parity disk:

  1. Stop the array.
  2. Power down the unit.
  3. Replace the parity hard disk with a new bigger one.
  4. Replace the failed hard disk with your old parity disk.
  5. Power up the unit.
  6. Start the array.

When you start the array, the system will first copy the parity information to the new parity disk, and then reconstruct the contents of the failed disk.

Shuffle two or more data disks between slots

This is the case where the system recognizes all the data disks, but notices that they are not in the same slots they used to be in. The Main page will display the disk model/serial numbers both of the actual placement and what the placement used to be.

If you start the array in this state, the system will simply record the new positions of the disks. Note, however, that the parity disk must remain in the top (parity) slot; if it is not, you cannot start the array.

Remove one or more data disks

In this case the missing disk(s) will be identified. If there is only one missing disk when you start the array, it will be marked as failed. All data disks will be exported (including the missing one, whose contents are reconstructed from parity), but the system will be running unprotected; that is, if another disk fails you will lose data.

If there are two or more missing disks, you cannot start the array. In this case you must either put the disks back, or click Restore on the Main page to reset the configuration.

Other operations

Restore array configuration

When the array is Stopped there is a button in the Command area labeled Restore. This function will restore the array configuration data so that the system thinks it’s brand new with all new hard drives. When you Start the array, the system will start a background process to generate the parity information.

In the special case where all the hard drives are new, the format operation will not clear the data areas; it simply generates parity. This can be used when you’ve added new disk(s) and you don’t want to wait for the clearing phase to complete: first reset the array configuration (using the Restore button), then simply Start the array, and the system will re-sync parity, incorporating the new disk(s). Caution: if a disk fails during this operation, you will not be able to rebuild it.

The array configuration data is stored in the file config/super.dat on the Flash. For this reason, you must always have the Flash installed in your server.

Check parity

When the array is Started and parity is already valid, there is a button in the Command area labeled Check which will initiate a background Parity-Check function. Parity-Check will march through all data disks in parallel, computing parity and checking it against the stored parity on the parity disk. If a mismatch occurs, the parity disk will be updated (written) with the computed data and the Sync Errors counter will be incremented.

The most common cause of Sync Errors is power loss, which prevents buffered write data from being written to disk. Any time the array is Started, if the system detects that a previous unsafe shutdown occurred, it automatically initiates a Parity-Check.

FAQ

How do hard drives become disabled?

When a write operation fails on a disk in a protected array, the system will disable the disk. A disabled disk will no longer be used in any way by the system. The disk still appears as a share, and you may still read and write it; however, the array will be running unprotected and another disk failure will cause data loss.

When a read operation fails the system will return the reconstructed data of the failed block. The system then tries to write the reconstructed data back to the failing disk. If this write operation fails then the hard disk will be disabled.

Once a disk is disabled, its contents must be considered invalid (because there have been uncompleted writes). All further read requests to that disk will be serviced by reading the Parity disk and all the other Data disks in order to reconstruct the requested data on-the-fly. All further write requests result in first reading all the other Data disks, and then updating the Parity disk.

Normally, you would replace the hard drive of a disabled disk; however, you can try to re-enable a disabled disk as follows:

  1. Stop the array.
  2. Power down the unit.
  3. Physically remove the failed disk, leaving the slot empty.
  4. Power up the unit.
  5. Start the array.
  6. Stop the array.
  7. Power down the unit.
  8. Re-install your failed disk.
  9. Power up the unit.
  10. Start the array.

When you start the array in step 5, the system will notice the failed disk’s slot is empty, and it will clear the identification data for that slot. Thus when you start the array in step 10, the system will treat the disk simply as a new disk.

The system records the presence of a disabled disk in the config/super.dat file.

What happens if two data disks fail at the same time?

Unfortunately you could very well lose all the data on both disks. But unlike other RAID systems, you will not lose the data on disks which didn’t fail.

How would I replace a smaller disk and add new disks at the same time?

Sorry, you must do one of those actions first. While such an operation would be possible, our beta testing has shown that there’s too great a chance of becoming confused and damaging your data.

What if there’s something really wrong, for example, maybe multiple drives are missing?

If the system detects a configuration from which it cannot start at all, then you will not be able to start the array. You must either correct the situation or reset the array configuration data.

How do I remove a hard disk that I don’t plan on replacing?

In this case you should reset the array configuration data so that the system can generate new parity information.

What is reconstruct?

Reconstruct is a term that refers to rebuilding the data contents of a disk using the parity information in a RAID system. By reading the parity disk plus all the other data disks, the system can regenerate the data of the target disk.

Suppose my system becomes unusable, can I recover my data?

All data disk hard drives are formatted with the ReiserFS (3.6) file system. You can install your data disk hard drive in another Linux system and recover your data. You can also install your data disk hard drive in a Windows system and recover the data using freely available utilities to mount this file system.
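
For example, on another Linux machine you could attach the drive and mount its file system read-only. The device and partition name below (/dev/sdb1) is a placeholder; check the actual name first with a tool such as fdisk -l:

  mkdir -p /mnt/recovery
  mount -t reiserfs -o ro /dev/sdb1 /mnt/recovery   # mount the data partition read-only
  ls /mnt/recovery                                  # your files should be visible here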

Users

The Users page is used to set a password for the root user, and configure User level security for your server.

User level security is a feature that lets you restrict access to shares according to user name.

When User level security is enabled, you will need to enter the list of users who may access your server. When a user attempts to connect to a share on your unRAID server, a dialog box will appear asking them to enter their user name and password before being granted access to shares.

In addition, you can specify which users may access each share, as well as restrict access to read-only.

If User level security is not enabled, then you do not need to enter a list of users. Any user that attempts to connect to a share on your unRAID server is granted access, subject to the Export mode setting on the share.

Security

This section is used to enable or disable User level security.

User level security
This is the control for enabling or disabling User level security.

Users

This section lists each configured user name.

Regardless of whether User level security is enabled, the built-in user name root always appears atop the Users list. If you enter a non-blank password for the root user, then your browser will also prompt you for the password when you attempt to open the Management Utility. In addition, you must enter this password to log into the console or telnet session.
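
For example, from another machine on the network you could open a telnet session to the server (assuming the default server name tower) and log in as root with the password configured here:

  telnet tower
  # login: root
  # password: the root password set on the Users page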

To create a new user (User level security enabled), scroll to the end of the Users list, enter the new User name and Password (and Retype password), and then click Add User.

To change the password of an existing user, just type the new Password (and Retype password) for the user and click Apply.

To delete a user, change the User name to blank and click Apply. Note that you cannot delete the root user.

Technical notes:

  • All user access restrictions on shares are defined per share on the Shares page.
  • Each new user is automatically given a unique uid and a unique gid (with a group name the same as the user name). However, all objects (files/directories) created in shares will be owned by root.
  • Only root can access the System Management Utility, and log in to the system console or telnet session. The configured users do not have actual home accounts on the server.
  • The following files are maintained in the config directory on the Flash when User mode security is enabled:
config/passwd - contains user names and encrypted passwords
config/group - contains groups created for users
config/smbpasswd - contains user names and SMB encrypted passwords
User name
The name should only consist of the characters a-z, 0-9, - (dash), _ (underscore), and . (dot). Please do not use any uppercase letters.
Password
Type anything you want here. Blank is also ok.
Retype password
Must be the same as what you typed for Password.

Shares

The Shares page is used to configure shares and share access.

Export settings

This section lets you configure how the pre-defined flash and disk shares are exported, and whether User shares are enabled or not.

Flash share
This is the export mode of the flash share. See Export Mode below.
Disk shares
This is the export mode for the entire set of disk shares. See Export Mode below.
User shares
This is the control for enabling and disabling User shares.

Export Mode

The basic external access mode for a share is defined by its Export mode (an illustrative Samba mapping follows this list):

Export read/write
The share will be exported and visible under My Network Places. If User mode security is not enabled, then anyone on the network can read/write data in the share. If User mode security is enabled, then anyone who can log into the server can read/write data in the share.
Export read-only
The share will be exported and visible under My Network Places. The share may not be written, but if User mode security is not enabled, then anyone on the network can read data in the share. If User mode security is enabled, then anyone who can log into the server can read data in the share.
Export read/write, hidden
The share will be exported, but will not show up in browse lists (i.e., under My Network Places). If User mode security is not enabled, then anyone on the network who knows about the share can read/write data in the share. If User mode security is enabled, then anyone who can log into the server and knows about the share can read/write data in the share.
Export read-only, hidden
The share will be exported, but will not show up in browse lists (i.e., under My Network Places). If User mode security is not enabled, then anyone on the network who knows about the share can read data in the share. If User mode security is enabled, then anyone who can log into the server and knows about the share can read data in the share.
Don't export
The share is not exported and cannot be accessed.
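
Shares are served over SMB by Samba. Purely as an illustration (this is not the actual configuration file unRAID generates, and the share names are placeholders), the export modes correspond roughly to standard smb.conf share options like these:

  ; Export read/write
  [Movies]
     browseable = yes
     read only = no

  ; Export read-only, hidden
  [Finances]
     browseable = no
     read only = yes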

User shares

This section lists all of the configured User shares. Note: if User shares are not enabled, then this section is not present.

Share name
This is the name of the share. Use only the characters: a-z, A-Z, 0-9, - (dash), and . (dot).
Comments
Optional descriptive text that will appear in the Comments column under My Network Places.
Allocation method
The method by which the system will select the disk to use when creating a User share, directory, or file. See Allocation method below.
Split level
The maximum depth in the directory tree which may be split across multiple disks. See Split level below.
Included disk(s)
The set of disks which will be considered for allocation. Blank means all disks. Disks are specified using the identifiers disk1, disk2, etc. Separate each identifier with a comma.
Excluded disk(s)
The set of disks which will be excluded from consideration for allocation. Blank means no disks.
Export mode
Specifies the basic export mode of the share. See Export Mode above.

Note: The following configuration parameters are available only when User level security is enabled.

Exceptions
A list of users who are exceptions to the basic export mode of the share: If the export mode of the share is read/write, then this lists users who will have read-only access. If the export mode of the share is read-only, then this lists users who will have read/write access. Separate multiple user names with commas.
Valid users
A list of users who have exclusive access to the share; only the listed users may access it. Blank means all users.
Invalid users
A list of users who may not access the share at all. Blank means no users.

Overview

User shares are a unique feature of unRAID OS that provides a unified name space across multiple data disks. User shares simplify storage management by presenting a view of all unRAID storage as if it were one large file system.

When User Shares are enabled, unRAID OS will automatically create a set of shares named after the top-level directories found on each data disk. If the same top-level directory exists on more than one disk, then the exported share will contain all directories/files under that top-level directory on all the disks.

For example, suppose each disk has the following structure:

|-- disk1
|   `-- Movies
|       |-- Alien
|       |   |-- VIDEO_TS
|       |   |   |-- VIDEO_TS.IFO
|       |   |   |-- VTS_01_1.VOB
|       |   |   |-- VTS_01_2.VOB
|       |   |   |-- VTS_01_3.VOB
|       |   `-- folder.jpg
|       `-- Basic
|           |-- VIDEO_TS
|           |   |-- VIDEO_TS.IFO
|           |   |-- VTS_01_1.VOB
|           |   |-- VTS_01_2.VOB
|           `-- folder.jpg
|-- disk2
|   `-- Movies
|       `-- Cars
|           |-- VIDEO_TS
|           |   |-- VIDEO_TS.IFO
|           |   |-- VTS_01_1.VOB
|           |   |-- VTS_01_2.VOB
|           `-- folder.jpg
`-- disk3
    `-- Movies
        `-- Dejavu
            |-- VIDEO_TS
            |   |-- VIDEO_TS.IFO
            |   |-- VTS_01_1.VOB
            |   |-- VTS_01_2.VOB
            `-- folder.jpg

With User Shares enabled, for the above tree we would see this share under My Network Places:

//server/Movies

And it would have the following structure:

|-- Movies
    |-- Alien
    |   |-- VIDEO_TS
    |   |   |-- VIDEO_TS.IFO
    |   |   |-- VTS_01_1.VOB
    |   |   |-- VTS_01_2.VOB
    |   |   |-- VTS_01_3.VOB
    |   `-- folder.jpg
    |-- Basic
    |   |-- VIDEO_TS
    |   |   |-- VIDEO_TS.IFO
    |   |   |-- VTS_01_1.VOB
    |   |   |-- VTS_01_2.VOB
    |   `-- folder.jpg
    |-- Cars
    |   |-- VIDEO_TS
    |   |   |-- VIDEO_TS.IFO
    |   |   |-- VTS_01_1.VOB
    |   |   |-- VTS_01_2.VOB
    |   `-- folder.jpg
    `-- Dejavu
        |-- VIDEO_TS
        |   |-- VIDEO_TS.IFO
        |   |-- VTS_01_1.VOB
        |   |-- VTS_01_2.VOB
        `-- folder.jpg

In the case where the same object (directory or file) exists at the same hierarchy on multiple disks, the User Share will reference the object on the lowest numbered disk. For example, if Movies/Cars existed on both disk1 and disk2, then Cars under the Movies User Share would refer to the version on disk1.

Each time the array is Started, if User Shares are enabled, unRAID OS will regenerate and re-export each top-level directory as a network share.

Allocation method

When a new User share is created, or when any object (file or directory) is created within a User share, the system must determine which data disk the User share or object will be created on. In general, a new User share, or object within a User share, will be created on the data disk with the most free space. However there are a set of share configuration parameters available to fine tune disk allocation.

The basic allocation strategy for a share is defined by the Allocation method configuration parameter. You may select one of two Allocation methods for the system to use:

Most-Free - in this method, the system will simply pick the disk which currently has the most free space.

High-Water - in this method, the system will pick the disk which currently has the least free space that is still above a certain minimum (called the "high water" mark). Suppose in our example above, we have this situation:

 disk   size   free
-----  -----  -----
disk1   80GB   75GB
disk2  120GB  110GB
disk3   80GB   70GB

The initial high water mark is set to 1/2 the size of the largest disk; in this case, it will be set to 60GB. In this state, disk1 has 15GB of free space above the "high water" mark, disk2 has 50GB, and disk3 has 10GB.

As new objects are created, the system will choose disk3 until the amount of free space on disk3 falls under 60GB. Subsequently, the system will start allocating from disk1 until its free space falls under 60GB. Then it will allocate from disk2 until its free space also falls under 60GB. Once the amount of free space on all disks is below 60GB, a new high water mark is established by dividing the old high water mark by 2 (30GB in this example, then 15GB, and so on).

The advantage of the High-water method is that when writing a series of files, most of the time only one data disk will need to be spun up.

Split Level

Often media data will be consolidated under a single directory or directory tree, and during playback the files will be accessed one after another. This is the case with the set of VOB files which make up a DVD movie. In this situation we want all the associated media files to be stored on the same physical disk if at all possible, because we don't want media playback to pause while the disk containing the next file spins up. unRAID OS solves this problem by introducing a configurable allocation parameter called "Split level".

Split level defines the highest level in the share directory hierarchy which can be split among multiple disks. In the Movies share example above, setting Split level to 1 permits only objects created directly under the Movies directory to be allocated to any disk according to the Allocation method. Thus, when we create the Alien subdirectory, it may reside on any of the data disks; however, when we create a file or another directory within the Movies/Alien directory, this object is at level 2, and will be created on whatever disk the Movies/Alien directory actually resides on.

If the share were organized differently, for example according to genre:

Movies/SciFi/Alien/...
Movies/Action/Basic/...
Movies/Action/Dejavu/...
Movies/Kids/Cars/...

Then you would set Split level to 2. This will let the genres expand among all disks, but still ensure that the contents of the actual movie directories stay within the same disk.

If you set the Split level to 0 for a share, then all directories/files created under that share will be on the same disk where the share was originally created.

If you set the Split level high, e.g., 999 for a share, then every directory/file created under that share will get placed on a disk according to Allocation method.

Included and Excluded disk(s)

The last way to control which disks are used by a share is through the Included disk(s) and Excluded disk(s) configuration parameters.

The Included disk(s) parameter defines the set of disks which are candidates for allocation to that share. If Included disk(s) is blank, then all present data disks are candidates. For example, to restrict a share to using only disk1, disk2, and disk3, you would set Included disk(s) to disk1,disk2,disk3.

The Excluded disk(s) parameter defines the set of disks which are excluded from consideration for allocation. If Excluded disk(s) is blank, then no disks are excluded.

When deciding which disk to allocate space on for a new object, unRAID OS first checks whether the disk is in the Included disk(s) set, and then checks whether it is in the Excluded disk(s) set.

Creating User Shares

To create a new User share, scroll to the end of the User shares list, enter the new Share name and other configuration, and then click Add Share. unRAID OS will select the disk to create the initial top-level share directory according to the configured Allocation method.

Deleting User Shares

To delete a User Share, just clear the Share name field and click Apply. Only entirely empty User Shares may be deleted.

Renaming User Shares

To rename a User share, just click in the Share name field of the share, type its new name, and then click Apply.

Examples

Suppose we have a share called Movies which we want everyone on the network to be able to read, but only larry to be able to write:

Export mode: Export read-only
Exceptions: larry

Suppose we have a share called Finances which only mom and dad can access:

Export mode: Export read/write, hidden
Valid users: mom,dad

Further, suppose only mom should be able to change the files:

Export mode: Export read/write, hidden
Exceptions: dad
Valid users: mom,dad

Another way to achieve the same thing:

Export mode: Export read-only, hidden
Exceptions: mom
Valid users: mom,dad
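
For readers familiar with Samba, these per-user settings map roughly onto standard smb.conf options: Valid users and Invalid users correspond to Samba's valid users and invalid users parameters, while Exceptions behaves like write list on a read-only share (or read list on a read/write share). The first example above might therefore look something like the following behind the scenes (illustrative only, not the file unRAID actually writes):

  ; Export read-only, with Exceptions: larry
  [Movies]
     read only = yes
     write list = larry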

Technical notes:

  • A user share configuration file called config/shares/<name>.cfg is stored on the Flash for each User Share (where <name> is the Share name). If this file does not exist, then a set of default values is used for the User Share. Whenever a User Share parameter is changed, its configuration file is also updated, or created if it does not exist.
  • Adding a new User Share or changing the configuration parameters of an existing User Share will not break any current connections on other shares. Renaming or deleting a User Share will break all outstanding connections, however. This is because Samba must be stopped in order to rename or delete the top-level directory which is associated with the share.
  • User Shares are implemented using proprietary code which builds a composite directory hierarchy of all the data disks. This is created on a tmpfs file system mounted on /mnt/tmp. User Shares are exported using a proprietary FUSE pseudo-file system called 'shfs' which is mounted on /mnt/users.
  • When an object needs to be created on a selected disk, first the directory hierarchy is created on the disk (if it isn't already in place). When the last file of a particular directory on a disk is removed, the unused part of the directory hierarchy on that disk remains in place.
  • With User Shares enabled, files may still be accessed via the individual disk shares. However, depending on the disk directory hierarchy and user share settings, some operations on a disk share may not be reflected in the user share which includes this disk.

Settings

Identification

Network settings

Disk settings

Date and time

Devices

An unRAID server disk array consists of a single parity disk and a number of data disks. The data disks are used exclusively to store user data, and the parity disk provides the redundancy necessary to recover from any single disk failure.

Terminology

Note that we’re careful to use the term disk when referring to an array storage device. We use the term hard drive when referring to an actual hard drive. This is because in a RAID system it’s possible to read/write an array disk whose corresponding hard drive is disabled or even missing! In addition, it’s useful to be able to ask, “which hard drive is assigned to be the parity disk?”, or, “which hard drive corresponds to data disk2?”.

Assigning devices

We need a way therefore, to assign hard drives to array disks. This is accomplished on the Devices page, in the Disk devices section. Here you will find a drop-down box for each array disk. The drop-down box lists all the unassigned hard drives. To assign a hard drive simply select it from the list. Each time a hard drive assignment is made, the system updates the config/disk.cfg file to record the assignment.

Note: The system doesn’t actually record information about the hard drive itself, but rather the port through which the hard drive is accessed. We need to record the port so that we can detect, for example, when a disabled disk is replaced. (Port here is really the Linux device pathname.)
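
For example, on a typical Linux system the hard drive on the first SATA port is accessed through the device pathname /dev/sda, the next through /dev/sdb, and so on, while an IDE drive on the primary master channel appears as /dev/hda. The exact names depend on your controllers and drivers; they are shown here only to illustrate what a "port" looks like to the system:

  /dev/sda   hard drive on the first SATA port
  /dev/sdb   hard drive on the second SATA port
  /dev/hda   IDE hard drive on the primary master channel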

Requirements

Unlike traditional RAID systems which stripe data across all the hard drives, an unRAID system stores files on individual data disks. Consequently, all file write operations will involve both the data disk the file is being written to, and the parity disk. For these reasons,

  • the parity disk size must be as large or larger than any of the data disks,

and

  • given a choice, the parity disk should be the fastest disk in your system.

Guidelines

Here are the steps you should follow when designing your disk array:

  1. Decide which hard drive you will use for parity, and which hard drives you will use for data disk1, disk2, etc., and label them in some fashion. Also, find the serial number of each hard drive and jot it down somewhere; you will need this information later.
  2. Connect cables to the hard drives in a logical manner. For example, if your motherboard has 4 SATA ports, they are labeled "sata-0", "sata-1", "sata-2", and "sata-3" (or something similar). Hook these cables to a contiguous set of hard drives, e.g., parity, disk1, disk2, and disk3. Follow a similar pattern for PCI disk controllers.
  3. Build your system, boot unRAID Server and start the Management Utility. If this is a fresh system build, then the Main page of the Management Utility will show no disks installed. This doesn’t mean the system can’t detect your hard drives; it just means that none have been assigned yet.
  4. Go to the Devices page and look at the Disk devices section. Here is where you will assign hard drives to the various array disks. You will notice there is a drop-down box for each array disk device. Clicking on the drop-down box reveals all the unassigned hard drives in the system. Remember the serial numbers you recorded back in step 1? For each disk, choose the proper hard drive based on its serial number.

After you have assigned all of your hard drives, go back to the Main menu and you will see the array and be able to start it.

Boot device

Disk devices