- 1 Assigning storage devices
- 2 Starting and stopping the array
- 3 Array operations
- 3.1 Adding disks
- 3.2 Replacing disks
- 3.3 Removing disks
- 3.4 Checking array devices
- 3.5 Spin up and down disks
- 3.6 Reset the array configuration
- 3.7 Notifications
- 3.8 SMART Monitoring
- 4 Cache Operations
- 5 File System Management
- 5.1 Selecting a File System type
- 5.2 Setting a File System type
- 5.3 Creating a File System (Format)
- 5.4 Drive shows as unmountable
- 5.5 Checking a File System
- 5.6 Repairing a File System
- 5.7 Changing a File System type
- 5.8 Converting to a new File System type
- 5.9 Reformatting a drive
- 5.10 Reformatting a cache drive
- 5.11 BTRFS Operations
- 6 Unassigned Drives
- 7 Performance
- 8 Share Management
- 8.1 User Shares
- 8.1.1 Allocation method
- 8.1.2 Min. Free Space
- 8.1.3 Split level
- 8.1.4 Included and Excluded disk(s)
- 8.1.5 Default Shares
- 8.1.6 Mover Behavior with User Shares
- 8.2 Disk Shares
- 8.3 Network access
- 8.4 Access Permissions
Assigning storage devices
To assign devices to the array and/or cache, first log in to the server's webGui. Click on the Main tab and select the devices to assign to slots for parity, data, and cache disks. Assigning devices to Unraid is easy! Just remember these guidelines:
- Always pick the largest storage device available to act as your parity device(s). When expanding your array in the future (adding more devices to data disk slots), you cannot assign a data disk that is larger than your parity device(s). For this reason, it is highly recommended to purchase the largest HDD available for use as your initial parity device, so future expansions aren’t limited to small device sizes. If assigning dual parity disks, your two parity disks can vary in size, but the same rule holds true that no disk in the array can be larger than your smallest parity device.
- SSD support in the array is experimental. Some SSDs may not be ideal for use in the array due to how TRIM/Discard may be implemented. Using SSDs as data/parity devices may have unexpected/undesirable results. This does NOT apply to the cache / cache pool. Most modern SSDs will work fine in the array, and even NVMe devices are now supported, but know that until these devices are in wider use, we only have limited testing experience using them in this setting.
- Using a cache will improve array performance. It does this by redirecting write operations to a dedicated disk (or pool of disks in Unraid 6) and moves that data to the array on a schedule that you define (by default, once per day at 3:40AM). Data written to the cache is still presented through your user shares, making use of this function completely transparent.
- Creating a cache-pool adds protection for cached data. If you only assign one cache device to the system, data residing there before being moved to the array on a schedule is not protected from data loss. To ensure data remains protected at all times (both on data and cache disks), you must assign more than one device to the cache function, creating what is called a cache-pool. Cache pools can be expanded on demand, similar to the array.
- SSD-based cache devices are ideal for applications and virtual machines. Apps and VMs benefit from the raw I/O performance of SSDs, so they respond noticeably faster. Use SSDs in a cache pool for the ultimate combination of functionality, performance, and protection.
- Encryption is disabled by default. If you wish to use this feature on your system, you can do so by adjusting the file system for the devices you wish to have encrypted. Click on each disk you wish to have encrypted and toggle the filesystem to one of the encrypted options.
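The parity sizing rule from the first guideline can be captured in a few lines. This is an illustrative helper of our own, not Unraid code; sizes are in TB:

```python
def can_add_data_disk(new_size_tb, parity_sizes_tb):
    """A data disk may not be larger than the smallest parity disk.
    With no parity assigned, any size is acceptable."""
    if not parity_sizes_tb:
        return True
    return new_size_tb <= min(parity_sizes_tb)

print(can_add_data_disk(8, [10, 12]))   # prints True
print(can_add_data_disk(12, [10, 12]))  # prints False: larger than the smallest parity disk
```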
Unraid recognizes disks by their serial number (and size). This means that it is possible to move drives between SATA ports without having to make any changes in drive assignments. This can be useful for troubleshooting if you ever suspect a hardware-related issue such as a bad port or a suspect power or SATA cable.
NOTE: Your array will not start if you assign or attach more devices than your license key allows.
Starting and stopping the array
Normally, following system boot-up, the array (the complete set of disks) is automatically started (brought on-line and exported as a set of shares). But if there has been a change in disk configuration, such as a new disk added, the array is left stopped so that you can confirm the configuration is correct. This means that any time you have made a disk configuration change you must log in to the webGui and manually start the array. When you wish to make changes to the disks in your array, you will need to stop the array first. Stopping the array means all of your applications/services are stopped and your storage devices are unmounted, making all data and applications unavailable until you once again start the array. To start or stop the array, perform the following steps:
- Log into the Unraid webGui using a browser (e.g. http://tower; http://tower.local from Mac)
- Click on Main
- Go to the Array Operation section
- Click Start or Stop (you may first need to click the "Yes I want to do this" checkbox)
Help! I can't start my array!
If the array can't be started, it may be for one of a few reasons which will be reported under the Array Operation section:
- Too many wrong and/or missing disks
- Too many attached devices
- Invalid or missing registration key
- Cannot contact key-server
- This Unraid Server OS release has been withdrawn
Too many disks missing from the array
If you have no parity disks, this message won't appear.
If you have a single parity disk, you can only have up to one disk missing and still start the array, as parity will then help simulate the contents of the missing disk until you can replace it.
If you have two parity disks, you can have up to two disks missing and still start the array.
If more than two disks are missing / wrong due to a catastrophic failure, you will need to perform the New Config procedure.
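These tolerance rules follow from how parity is computed. Unraid's single parity is conventional XOR parity, so any one missing disk can be reconstructed from parity plus all the remaining disks. The toy model below (our own sketch, not Unraid's implementation; dual parity uses a different computation) shows the idea:

```python
from functools import reduce

def xor_parity(disks):
    """Parity is the byte-wise XOR of the corresponding bytes of every data disk."""
    return bytes(reduce(lambda a, b: a ^ b, column) for column in zip(*disks))

def rebuild_missing(missing_index, disks, parity):
    """Reconstruct one missing disk by XOR-ing parity with all remaining disks."""
    remaining = [d for i, d in enumerate(disks) if i != missing_index]
    return xor_parity(remaining + [parity])

data = [b"\x01\x02", b"\x0f\x00", b"\xff\x10"]      # three tiny 'disks'
parity = xor_parity(data)
assert rebuild_missing(1, data, parity) == data[1]  # the 'missing' disk is recovered
```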
Too many attached devices
Storage devices are any devices that present themselves as a block storage device EXCLUDING the USB flash device used to boot Unraid Server OS. Storage devices can be attached via any of the following storage protocols: IDE/SATA/SAS/SCSI/USB. This rule only applies prior to starting the array. Once the array is started, you are free to attach additional storage devices and make use of them (such as USB flash devices for assignment to virtual machines). In Unraid Server OS 6, the attached storage device limits are as follows:
|Attached Storage Device Limits by Registration Key|
NOTE: The attached device limits do NOT refer to how many devices you can assign to the array or cache. Those limits are imposed by the software, not the license policy.
Invalid or missing key
A valid registration key is required in order to start the array. To purchase or get a trial key, perform the following steps:
- Log into the Unraid webGui using a browser (e.g. http://tower from most devices, http://tower.local from Mac devices)
- Click on Tools
- Click on Registration
- Click to Purchase Key or Get Trial Key and complete the steps presented there
- Once you have your key file link, return to the Registration page, paste it in the field, then click Install Key.
If the word "expired" is visible at the top left of the webGui, this means your trial key has expired. Visit the registration page to request either an extension to your trial or purchase a valid registration key.
Blacklisted USB flash device
If your server is connected to the Internet and your trial hasn't expired yet, it is also possible that your USB flash device contains a GUID that is prohibited from registering for a key. This could be because the GUID is not truly unique to your device or has already been registered by another user. It could also be because you are using an SD card reader through a USB interface, which also tends to be provisioned with a generic GUID. If a USB flash device is listed as blacklisted, this is a permanent state and you will need to seek an alternative device to use for your Unraid Server OS installation.
Cannot contact key-server
This message will only occur if you are using a Trial license. If you are using a paid-for license then the array can be started without the need to contact the Unraid license server.
If your server is unable to contact our key server to validate your Trial license, you will not be able to start the array. The server will attempt to validate upon first boot with a timeout of 30 sec. If it can't validate upon first boot, then the array won't start, but each time you navigate or refresh the webGui it will attempt validation again (with a very short timeout). Once validated, it won't phone-home for validation again unless rebooted.
This Unraid Server OS release has been withdrawn
If you receive this message, it means you are running a beta or release candidate version of Unraid that has been marked disabled from active use. Upgrade the OS to the latest stable, beta, or release candidate version in order to start your array.
Array operations
There are a number of operations you can perform against your array:
- Add disks
- Replace disks
- Remove disks
- Check disks
- Spin disks up/down
- Reset the array configuration
NOTE: In cases where devices are added/replaced/removed, etc., the instructions say "Power down" ... "Power up". If your server's hardware is designed for hot/warm plug, Power cycling is not necessary and Unraid is designed specifically to handle this. All servers built by LimeTech since the beginning are like this: no power cycle necessary.
Clear vs Preclear
Under Unraid, a 'Clear' disk is one that has been completely filled with zeroes and contains a special signature to say that it is in this state. This state is needed before a drive can be added to a parity-protected array without affecting parity. If Unraid is in the process of writing zeroes to all of a drive then this is referred to as a 'Clear' operation. This Clear operation can take place as a background operation while using the array, but the drive in question cannot be used to store data until the Clear operation has completed and the drive has been formatted to the desired File System type.
A disk that is being added as a parity drive, or one that is to be used to rebuild a failed drive, does not need to be in a 'Clear' state, as those processes overwrite every sector on the drive with new contents as part of carrying out the operation. In addition, if you are adding an additional data drive to an array that does not currently have a parity drive, there is no requirement for the drive to be clear before adding it.
You will often see references in the forum or various wiki pages to 'Preclear'. This refers to getting the disk into a 'Clear' state before adding it to the array. The Preclear process requires the use of a third-party plugin. Prior to Unraid v6 this was highly desirable, as the array was offline while Unraid carried out the 'Clear' operation, but Unraid v6 carries out 'Clear' as a background process with the array operational while it is running, so Preclearing is now completely optional. Many users still like to Preclear new disks because, in addition to putting the disk into a clear state, the process performs a level of 'stress test' that can be used as a confidence check on the health of the drive. This helps reduce the chance of a drive suffering from what is known as 'infant mortality', where one of the most likely times for a drive to fail is when it is first used (presumably due to a manufacturing defect). As a result, a Preclear takes much longer than Unraid's simpler 'Clear' operation.
It is also important to note that after completing a 'Preclear' you must not carry out any operation that will write to the drive (e.g. format it) as this will destroy the 'Clear' state.
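Conceptually, a 'Clear' disk is simply one that reads back as all zeroes, and any subsequent write (formatting included) destroys that state. A toy model of our own (the real clear signature format is not shown here):

```python
def is_clear(device) -> bool:
    """Toy model: a 'clear' device reads back as all zero bytes."""
    return all(b == 0 for b in device)

disk = bytearray(1024)        # a freshly zeroed ('clear') device
assert is_clear(disk)
disk[0:4] = b"XFS\x00"        # any write, e.g. creating a file system,
assert not is_clear(disk)     # destroys the clear state
```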
Adding disks
This is the normal case of expanding the capacity of the system by adding one or more new hard drives.
The capacity of any new disk(s) added must be the same size or smaller than your parity disk. If you wish to add a new disk that is larger than your parity disk, then you must instead first replace your parity disk. (You could use your new disk to replace parity, and then use your old parity disk as a new data disk).
The procedure is:
- Stop the array.
- Power down the server.
- Install your new disk(s).
- Power up the server.
- Assign the new storage device(s) to a disk slot(s) using the Unraid webGui.
- Start the array.
- If your array is parity protected then Unraid will now automatically begin to clear the disk as this is required before it can be added to the array.
- This step is omitted if you do not have a parity drive.
- If a disk has been pre-cleared before adding it Unraid will recognize this and go straight to the next step.
- The clearing phase is necessary to preserve the fault tolerance characteristic of the array. If at any time while the new disk(s) is being cleared, one of the other disks fails, you will still be able to recover the data of the failed disk.
- The clearing phase can take several hours depending on the size of the new disks(s) and although the array is available during this process Unraid will not be able to use the new disk(s) for storing files until the clear has completed and the new disk has been formatted.
- Once the disk has been cleared, it is added to the array and shows as unmountable, and the option to format unmountable disks appears in the webGui.
- Check that the serial number of the disk(s) is what you expect. You do not want to format a different disk (thus erasing its contents) by accident.
- Click the check box to confirm that you want to proceed with the format procedure.
- A warning dialog will be displayed reminding you of the consequences: once you start the format, any existing contents of the listed disks will be erased, and there is no going back. This warning may seem like overkill, but there have been times when users have used the format option when it was not the appropriate action.
- The format button will now be enabled so you can click on it to start the formatting process.
- The format should only take a few minutes and after the format completes the disk will show as mounted and ready for use.
- You will see that a small amount of space will already show as used which is due to the overheads of creating the empty file system on the drive.
You can add as many new disks to the array as you desire at one time, but none of them will be available for use until they have been both cleared and formatted with a file system.
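The lifecycle a new data disk goes through in a parity-protected array can be sketched as a small state machine (the state and event names here are our own, purely illustrative):

```python
# Illustrative state machine for a new data disk joining a parity-protected array.
TRANSITIONS = {
    ("assigned", "start_array"): "clearing",        # Unraid begins zeroing the disk
    ("clearing", "clear_complete"): "unmountable",  # cleared but has no file system yet
    ("unmountable", "format"): "mounted",           # formatted and ready to store files
}

def step(state, event):
    """Advance the disk lifecycle; irrelevant events leave the state unchanged."""
    return TRANSITIONS.get((state, event), state)

state = "assigned"
for event in ("start_array", "clear_complete", "format"):
    state = step(state, event)
print(state)  # prints mounted
```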
Adding parity disk(s)
It is not mandatory for an Unraid system to have a parity disk, but it is normal to have one to provide redundancy. A parity disk can be added at any time. Each parity disk provides redundancy against one data drive failing.
Any parity disk you add must be at least as large as the largest data drive (although it can be larger). If you have two parity drives then it is not required that they be the same size although it is required that they both follow the rule of being at least as large as the largest data drive.
The process for adding a parity disk is identical to that for adding a data disk except that when you start the array after adding it Unraid will start to build parity on the drive that you have just added.
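The parity sizing rules above amount to a single check (our own illustrative helper; sizes in TB):

```python
def parity_assignment_ok(parity_sizes_tb, data_sizes_tb):
    """Every parity disk must be at least as large as the largest data disk;
    two parity disks do not have to match each other in size."""
    largest_data = max(data_sizes_tb, default=0)
    return all(p >= largest_data for p in parity_sizes_tb)

print(parity_assignment_ok([12, 10], [10, 8, 6]))  # prints True: mixed parity sizes are fine
print(parity_assignment_ok([8], [10]))             # prints False: parity smaller than a data disk
```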
Upgrading parity disk(s)
If you wish to upgrade your parity device(s) to a larger one(s) so you can start using larger sized disks in the array or to add an additional parity drive, the procedure is as follows:
- Stop the array.
- Power down the unit.
- Install the new, larger parity disk(s). (If your server hardware supports hot-plug you can do this without powering down, in which case steps 2 and 4 are not needed.)
- Power up the unit.
- Assign a larger disk to the parity slot (replacing the former parity device).
- Start the array.
When you start the array, the system will once again perform a parity sync to the new parity device, and when it completes the array will once again be in a protected state. It is recommended that you keep the old parity drive's contents intact until the above procedure completes: if an array drive fails during the procedure, so that you cannot finish building the contents of the new parity disk, it is possible to use the old parity drive for recovery purposes (ask on the forum for the steps involved). If you have a dual parity system and wish to upgrade both of your parity disks, it is recommended to perform this procedure one parity disk at a time, as this will allow your array to remain in a protected state throughout the entire upgrade process.
Once you've completed the upgrade process for a parity disk, the former parity disk can be considered for assignment and use in the array as an additional data disk (depending on age and durability).
Replacing disks
There are two primary reasons why you may wish to replace disks in the array:
- A disk needs to be replaced due to failure or scheduled retirement (out of warranty / support / serviceability).
- The array is nearly full and you wish to replace existing data disk(s) with larger ones (out of capacity).
In either of these cases, the procedure to replace a disk is roughly the same, but one should be aware of the risk of data loss during a disk replacement activity. Parity device(s) protect the array from data loss in the event of a disk failure. A single parity device protects against a single failure, whereas two parity devices can protect against losing data when two disks in the array fail. This chart will help you better understand your level of protection when various disk replacement scenarios occur.
Data Protection During Disk Replacements

| | With Single Parity | With Dual Parity |
| --- | --- | --- |
| Replacing a single disk | Array cannot tolerate an additional disk failure without potential data loss to both the disk being replaced and the disk that has failed | Array can tolerate up to one additional disk failure without potential data loss |
| Replacing two disks | Not possible! | Array cannot tolerate an additional disk failure without potential data loss to both the disks being replaced and the disk that has failed |
Replacing failed disk(s)
As noted previously, with a single parity disk, you can replace up to one disk at a time, but during the replacement process, you are at risk for data loss should an additional disk failure occur. With two parity disks, you can replace either one or two disks at a time, but during a two disk replacement process, you are also at risk for data loss. Another way to visualize the previous chart:
Array Tolerance to Disk Failure Events

| | Without Parity | With Single Parity | With Dual Parity |
| --- | --- | --- | --- |
| A single disk failure | Data from that disk is lost | Data is still available and the disk can be replaced | Data is still available and the disk can be replaced |
| A dual disk failure | Data on both disks is lost | Data on both disks is lost | Data is still available and the disks can be replaced |
NOTE: If more disk failures have occurred than your parity protection can allow for, you are advised to post in the General Support forum for assistance with data recovery on the data devices that have failed.
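The chart reduces to one rule: data remains available as long as the number of simultaneous disk failures does not exceed the number of parity disks. A minimal sketch:

```python
def data_survives(failed_disks, parity_count):
    """Array data remains available while simultaneous failures
    do not exceed the number of assigned parity disks."""
    return failed_disks <= parity_count

assert not data_survives(1, 0)                          # no parity: one failure loses that disk
assert data_survives(1, 1) and not data_survives(2, 1)  # single parity tolerates one failure
assert data_survives(2, 2)                              # dual parity tolerates two failures
```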
What is a 'failed' (disabled) drive
It is important to realize what is meant by the term failed drive:
- It is typically used to refer to a drive that is marked with a red 'x' in the Unraid GUI.
- It does NOT necessarily mean that there is a physical problem with the drive (although that is always a possibility). More often than not the drive is OK and an external factor caused the write to fail.
- If the syslog shows that resets are occurring on the drive then this is a good indication of a connection problem.
- The SMART report for the drive is a good place to start.
- The SMART attributes can indicate a drive is healthy when in fact it is not. A better indication of health is whether the drive can successfully complete the SMART extended test without error. If it cannot complete this test error-free then there is a high likelihood that the drive is not healthy.
- CRC errors are almost invariably cabling issues. Note that this SMART attribute is never reset to zero, so the aim is to stop the count increasing rather than to clear it.
- If you have sufficient parity drives then Unraid will emulate the failed drive using the combination of the parity drive(s) and the remaining 'good' drives. From a user perspective, this results in the system reacting as if the failed drive is still present.
- This is one reason why it is important that you have notifications enabled, so you get alerted to such a failure. From the end-user perspective the system continues to operate and the data remains available. Without notifications enabled, a user may blithely continue using their Unraid server, not realizing that their data may now be at risk and that they need to take corrective action.
When a disk is marked as disabled and Unraid indicates it is being emulated then the following points apply:
- Unraid will stop writing to the physical drive. Any writes to the 'emulated' drive will not be reflected on the physical drive but will be reflected in parity so from the end-user perspective then the array seems to be updating data as normal.
- When you rebuild a disabled drive the process will make the physical drive correspond to the emulated drive by doing a sector-for-sector copy from the emulated drive to the physical drive. You can, therefore, check that the emulated drive contains the content that you expect before starting the rebuild process.
- If a drive is being emulated then you can carry out recovery actions on the emulated drive before starting the rebuild process. This can be important as it keeps the physical drive untouched for potential data recovery processes if the emulated drive cannot be recovered.
- If an emulated drive is marked as unmountable then a rebuild will not fix this, and the rebuilt drive will have the same unmountable status as the emulated drive. The correct handling of unmountable drives is described in a later section. It is recommended that you repair the file system before attempting a rebuild, as the repair process is much faster than the rebuild process, and if the repair is not successful the rebuilt drive would have the same problem.
A replacement drive does not need to be the same size as the disk it is replacing. It cannot be smaller but it can be larger. If the replacement drive is not larger than any of your parity drives then the simpler procedure below can be used. In the special case where you want to use a new disk that is larger than at least one of your parity drives then please refer to the Parity Swap procedure that follows instead.
If you have purchased a replacement drive, many users like to pre-clear the drive first to stress test it and make sure it is a good drive that won't fail for a few years at least. Preclearing is not strictly necessary, as replacement drives do not have to be cleared (they are going to be completely overwritten), but Preclearing a new drive one to three times provides a thorough test and helps eliminate 'infant mortality' failures. You can also stress test drives in other ways, such as running an extended SMART test or using tools supplied by the disk manufacturer that run on Windows or macOS.
This is a normal case of replacing a failed drive where the replacement drive is not larger than your current parity drive(s).
It is worth emphasizing that Unraid must be able to reliably read every bit of parity PLUS every bit of ALL other disks in order to reliably rebuild a missing or disabled disk. This is one reason why you want to fix any disk-related issues with your Unraid server as soon as possible.
To replace a failed disk or disks:
- Stop the array.
- Power down the unit.
- Replace the failed disk(s) with a new one(s).
- Power up the unit.
- Assign the replacement disk(s) using the Unraid webGui.
- Click the checkbox that says Yes I want to do this and then click Start.
When you start the array in normal mode after replacing a failed disk or disks, the system will reconstruct the contents onto the new disk(s) and, if the new disk(s) is/are bigger, expand the file system. If you start the array in Maintenance mode you will need to press the Sync button to trigger the rebuild.
- You must replace a failed disk with a disk that is as big or bigger than the original and not bigger than the smallest parity disk.
- The rebuild process can never be used to change the format of a disk - it can only rebuild to the existing format.
Rebuilding a drive onto itself
There can be cases where it is determined that the reason a disk was disabled is due to an external factor and the disk drive appears to be fine. In such a case you need to take a slightly modified process to cause Unraid to rebuild a 'disabled' drive back onto the same drive.
- Stop array
- Unassign disabled disk
- Start array so the missing disk is registered
- Stop array
- Reassign disabled disk
- Start array to begin rebuild. If you start the array in Maintenance mode you will need to press the Sync button to start the rebuild.
This process can be used for both data and parity drives that have been disabled.
Parity swap procedure
This is a special case of replacing a disabled drive where the replacement drive is larger than your current parity drive. This procedure applies to both the parity1 and the parity2 drives. If you have dual parity then it can be used on both simultaneously to replace two disabled data drives with the two old parity drives.
NOTE: It is not recommended that you use this procedure for upgrading the size of both a parity drive and a data drive as the array will be offline during the parity copy part of the operation. In such a case it is normally better to first upgrade the parity drive and then afterward upgrade the data drive using the drive replacement procedure. This takes longer, but the array remains available for use throughout the process, and in addition, if anything goes wrong you have the just-removed drive available intact for recovery purposes.
Why would you want to do this? To replace a data drive with a larger one, that is even larger than the Parity drive.
- Unraid does not require a replacement drive to be the same size as the drive being replaced. The replacement drive CANNOT be smaller than the old drive, but it CAN be larger, much larger in fact. If the replacement drive is the same size or larger, UP TO the same size as the smallest parity drive, then the simple procedure above can be used. If the replacement drive is LARGER than the parity drive, then the special two-step procedure described here is required. It works in two phases:
- The larger-than-existing-parity drive is first upgraded to become the new parity drive
- The old parity drive replaces the old data drive and the data of the failed drive is rebuilt onto it.
- As an example, you have a 1TB data drive that you want to replace (the reason does not matter). You have a 2TB parity drive. You buy a 4TB drive as a replacement. The 'Parity Swap' procedure will copy the parity info from the current 2TB parity drive to the 4TB drive, zero the rest, make it the new parity drive, then use the old 2TB parity drive to replace the 1TB data drive. Now you can do as you wish with the removed 1TB drive.
- If you have purchased a replacement drive, many users like to pre-clear it first to stress test it and make sure it's a good drive that won't fail for a few years at least. Preclearing is not strictly necessary, as replacement drives don't have to be cleared (they are going to be completely overwritten), but Preclearing new drives one to three times provides a thorough test of the drive and helps eliminate 'infant mortality' failures.
- If your replacement drive is the same size or smaller than your current Parity drive, then you don't need this procedure. Proceed with the Replacing a Data Drive procedure.
- This procedure is strictly for replacing data drives in an Unraid array. If all you want to do is replace your Parity drive with a larger one, then you don't need the Parity Swap procedure. Just remove the old parity drive and add the new one, and start the array. The process of building parity will immediately begin. (If something goes wrong, you still have the old parity drive that you can put back!)
- IMPORTANT!!! This procedure REQUIRES that the data drive being replaced MUST be disabled first. If the drive failed (has a red ball), then it is already 'disabled', but if the drive is OK but you want to replace it anyway, then you have to force it to be 'failed', by unassigning it and starting and stopping the array. Unraid only forgets a drive when the array is started without the drive, otherwise it still associates it with the slot (but 'Missing'). The array must be started once with the drive unassigned or disabled. Yes, it may seem odd, but is required before Unraid will recognize that you are trying to do a 'Parity Swap'. It needs to see a disabled data disk with forgotten ID, a new disk assigned to its slot that used to be the parity disk, and a new disk assigned to the parity slot.
- Obviously, it's very important to identify the drives for assignment correctly! Have a list of the drive models that will be taking part in this procedure, with the last 4 characters of their serial numbers. If the drives are recent Toshiba models, then they may all end in GS or S, so you will want to note the preceding 4 characters instead.
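The decision of whether the Parity Swap procedure is needed can be illustrated with the 1TB/2TB/4TB example above (our own helper, sizes in TB):

```python
def needs_parity_swap(replacement_size_tb, parity_sizes_tb):
    """A replacement data disk larger than an existing parity disk requires
    the two-step Parity Swap; otherwise a plain rebuild is sufficient."""
    return replacement_size_tb > min(parity_sizes_tb)

print(needs_parity_swap(2, [2]))  # prints False: simple replacement procedure applies
print(needs_parity_swap(4, [2]))  # prints True: 4TB replacement with 2TB parity needs a swap
```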
The steps to carry out this procedure are:
- Note: these steps are the general steps needed. The steps you take may differ depending on your situation. If the drive to be replaced has failed and Unraid has disabled it, then you may not need steps 1 and 2, and possibly not steps 3 and 4. If you have already installed the new replacement drive (perhaps because you have been Preclearing it), then you would skip steps 5 through 8. Revise the steps as needed.
- Stop the array (if it's started)
- Unassign the old drive (if it's still assigned)
If the drive was a good drive and notifications are enabled, you will get error notifications for a missing drive! This is normal.
- Start the array (put a check in the Yes I want to do this checkbox if it appears (older versions: Yes, I'm sure))
Yes, you need to do this. Your data drive should be showing as Not installed.
- Stop the array again
- Power down
- [ Optional ] Pull the old drive
You may want to leave it installed, for Preclearing or testing or reassignment.
- Install the new drive (preclear STRONGLY suggested, but formatting not needed)
- Power on
- Stop the array
*If you get an "Array Stopping - Retry unmounting disk share(s)..." message, try disabling Docker and/or VM in Settings and stopping the array again after rebooting.
- Unassign the parity drive
- Assign the new drive in the parity slot
You may see more error notifications! This is normal.
- Assign the old parity drive in the slot of the old data drive being replaced
You should now have blue drive status indicators for both the parity drive and the drive being replaced.
- Go to the Main -> Array Operation section
You should now have a Copy button, with a statement indicating "Copy will copy the parity information to the new parity disk".
- Put a check in the Yes I want to do this checkbox (older versions: Yes, I'm sure), and click the Copy button
Now patiently watch the copy progress; it takes a long time (~20 hours for 4TB on a 3GHz Core 2 Duo). All of the contents of the old parity drive are copied onto the new drive, then the remainder of the new parity drive is zeroed.
The array will NOT be available during this operation!
*If you disabled Docker and/or VM in settings earlier, go ahead and re-enable now.
When the copy completes, the array will still be stopped ("Stopped. Upgrading disk/swapping parity.").
The Start button will now be present, and the description will now indicate that it is ready to start a Data-Rebuild.
- Put a check in the Yes I want to do this checkbox (older versions: Yes, I'm sure), and click the Start button
The data drive rebuild begins. Parity is now valid, and the array is started.
Because the array is started, you can use it as normal, but for best performance we recommend limiting your usage until the rebuild completes.
Once again, you can patiently watch the progress; this also takes a long time! All of the contents of the old data drive are now being reconstructed on what used to be your parity drive, but is now assigned as the replacement data drive.
- That's it! Once done, you have an array with a larger parity drive and a replaced data drive that may also be larger!
- Note: many users like to follow up with a parity check, just to verify everything. It's a good confidence builder (although not strictly necessary)!
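The rough timing figure quoted above can be sanity-checked with a little arithmetic. This is only an estimate: the sustained write rate below is an assumption and will vary with your hardware.

```shell
# Rough parity-copy time estimate: drive size divided by sustained write
# rate. The 55 MB/s rate is an assumption; modern drives are often faster.
estimate_hours() {
  size_gb=$1; rate_mbs=$2
  echo $(( size_gb * 1000 / rate_mbs / 3600 ))
}
estimate_hours 4000 55   # ~20 hours for a 4TB drive at 55 MB/s
```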
A disk failed while I was rebuilding another
If you only have a single parity device in your system and a disk failure occurs during a data-rebuild event, the data rebuild will be cancelled as parity will no longer be valid. However, if you have dual parity disks assigned in your array, you have options. You can either
- let the first disk rebuild complete before starting the second, or
- you can cancel the first rebuild, stop the array, replace the second failed disk, then start the array again
If the first disk being rebuilt is nearly complete, it's probably better to let that finish, but if you only just began rebuilding the first disk when the second disk failure occurred, you may decide rebuilding both at the same time is a better solution.
There may be times when you wish to remove drives from the system.
Removing parity disk(s)
If for some reason you decide you do not need the level of parity protection that you have in place then it is always possible to easily remove a parity disk.
- Stop the array.
- Set the slot for the parity disk you wish to remove to Unassigned.
- Start the array to commit the change and 'forget' the previously assigned parity drive.
CAUTION: If you already have any failed data drives in the array be aware that removing a parity drive reduces the number of failed drives Unraid can handle without potential data loss.
- If you started with dual parity you can still handle a single failed drive but would not then be able to sustain another drive failing while trying to rebuild the already failed drive without potential data loss.
- If you started with single parity you will no longer be able to handle any array drive failing without potential data loss.
Removing data disk(s)
Removing a disk from the array is possible, but normally requires you to once again sync your parity disk(s) after doing so. This means that until the parity sync completes, the array is vulnerable to data loss should any disk in the array fail.
To remove a disk from your array, perform the following steps:
- Stop the array
- (optional) Make a note of your disk assignments under the Main tab (for both the array and cache; some find it helpful to take a screenshot)
- Perform the Reset the array configuration procedure. When doing this it is a good idea to use the option to preserve all current assignments to avoid you having to re-enter them (and possibly make a mistake doing so).
- Make sure all your previously assigned disks are there, and set the drive you want to remove to Unassigned
- Start the array without checking the 'Parity is valid' box.
A parity-sync will occur if at least one parity disk is assigned and until that operation completes, the array is vulnerable to data loss should a disk failure occur.
It is also possible to remove a disk without invalidating parity if special action is taken to make sure that the disk contains only zeroes, as a disk that is all zeroes does not affect parity. There is no support for this method built into the Unraid GUI, so it requires manual steps to carry out the zeroing process. It also takes much longer than the simpler procedure above.
There is no official support from Limetech for using this method so you are doing it at your own risk.
- This method preserves parity protection at all times.
- This method can only be used if the drive to be removed is a good drive that is completely empty, is mounted and can be completely cleared without errors occurring
- This method is limited to removing only one drive at a time (technically more is possible, but clearing multiple drives in parallel is slower than doing them sequentially because of contention for updates to the parity drive)
- As stated above, the drive must be completely empty as this process will erase all existing content. If there are still any files on it (including hidden ones), they must be moved to another drive or deleted.
- One quick way to clear a drive of files is to reformat it! To format an array drive, you stop the array, and then on the Main page click on the link for the drive and change the file system type to something different than it currently is, then restart the array. You will then be presented with an option to format it. Formatting a drive removes all of its data, and the parity drive is updated accordingly, so the data cannot be easily recovered.
- Explanatory note: "Since you are going to clear the drive anyway, why do I have to empty it? And what is the purpose of this strange clear-me folder?" Yes, it seems a bit draconian to require the drive to be empty since we're about to clear and empty it in the script, but we're trying to be absolutely certain we don't cause data loss. In the past, some users misunderstood the procedure, and somehow thought we would preserve their data while clearing the drive! This way, by requiring the user to remove all data, and then add an odd marker, there cannot be any accidents or misunderstandings and data loss.
The procedure is as follows:
- Make sure that the drive you are removing has been removed from any inclusions or exclusions for all shares, including in the global share settings.
- Make sure the array is started, with the drive assigned and mounted.
- Make sure you have a copy of your array assignments, especially the parity drive.
- In theory you should not need this, but it is a useful safety net in case the "Retain current configuration" option under New Config doesn't work correctly (or you make a mistake using it).
- It is highly recommended to turn on reconstruct write as the write method (sometimes called 'Turbo Write'). With it on, the script can run 2 to 3 times as fast, saving hours!
- However when using 'Turbo Write' all drives must read without error so do not use it unless you are sure no other drive is having issues.
- To enable 'Turbo Write', go to Settings->Disk Settings and change Tunable (md_write_method) to reconstruct write
- Make sure ALL data has been copied off the drive; drive MUST be completely empty for the clearing script to work.
- Double check that there are no files or folders left on the drive.
- Note: one quick way to clean a drive is to reformat it! (once you're sure nothing of importance is left of course!)
- Create a single folder on the drive with the name clear-me - exactly 7 lowercase letters and one hyphen
- Run the clear an array drive script from the User Scripts plugin (or run it standalone, at a command prompt).
- If you prepared the drive correctly, it will completely and safely zero out the drive. If you didn't prepare the drive correctly, the script will refuse to run, in order to avoid any chance of data loss.
- If the script refuses to run, indicating it did not find a marked and empty drive, then very likely there are still files on your drive. Check for hidden files. ALL files must be removed!
- Clearing takes a long time! Progress info will be displayed.
- If running in User Scripts, the browser tab will hang for the entire clearing process.
- While the script is running, the Main screen may show invalid numbers for the drive, ignore them. Important! Do not try to access the drive, at all!
- When the clearing is complete, stop the array
- Follow the procedure for resetting the array making sure you elect to retain all current assignments.
- Return to the Main page, and check all assignments. If any are missing, correct them. Unassign the drive(s) you are removing. Double-check all of the assignments, especially the parity drive(s)!
- Click the check box for Parity is already valid, make sure it is checked!
- Start the array! Click the Start button, then the Proceed button on the warning popup.
- (Optional) Start a correcting parity check to ensure parity really is valid and you did not make a mistake in the procedure. If everything was done correctly this should return zero errors.
Alternate Procedure steps for Linux proficient users
If you are happy to use the Linux command line, you can replace steps 7 and 8 by performing the clearing commands yourself at a command prompt. (Clearing takes just as long though!) If you would rather do that than run the script, here are the two commands to perform:
umount /mnt/diskX
dd bs=1M if=/dev/zero of=/dev/mdX status=progress
(where X in both lines is the number of the data drive being removed) Important!!! It is VITAL you use the correct drive number, or you will wipe clean the wrong drive! That's why using the script is recommended, because it's designed to protect you from accidentally clearing the wrong drive.
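For the same reason, if you do run the commands manually, it is worth reproducing the script's safety check yourself first. The sketch below is illustrative, not the official script: the disk number is an example, and the check simply refuses to clear unless the mounted disk holds nothing but the clear-me marker.

```shell
# Hypothetical safety wrapper around the manual clearing commands: refuse
# to run unless the mounted disk contains exactly one entry, the clear-me
# marker folder. DISK=3 is an example; set it to the drive being removed.
DISK=3
mnt="/mnt/disk${DISK}"
only_marker() {
  # ls -A includes hidden files, so leftovers cannot slip through
  [ "$(ls -A "$1" 2>/dev/null)" = "clear-me" ]
}
if only_marker "$mnt"; then
  umount "$mnt"
  dd bs=1M if=/dev/zero of="/dev/md${DISK}" status=progress
else
  echo "Refusing to clear: $mnt is not empty or lacks the clear-me marker" >&2
fi
```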
Checking array devices
When the array is started, there is a button under Array Operations labeled Check. Depending on whether or not you have any parity devices assigned, one of two operations will be performed when clicking this button.
It is also possible to schedule checks to be run automatically at User-defined intervals under Settings->Scheduler. It is a good idea to do this as an automated check on array health so that problems can be noticed and fixed before the array can deteriorate beyond repair. Typical periods for such automated checks are monthly or quarterly and it is recommended that such checks should be non-correcting.
If you have at least one parity device assigned, clicking Check will initiate a Parity-check. This will march through all data disks in parallel, computing parity and checking it against the stored parity on the parity disk(s).
By default, if an error is found during a Parity-check, the parity disk will be updated (written) with the computed data and the Sync Errors counter will be incremented. If you wish to run purely a check without writing corrections, uncheck the checkbox that says Write corrections to parity before starting the check. In this mode, parity errors will be noted but not actually fixed during the check operation.
A correcting parity check is started automatically when starting the array after an "Unsafe Shutdown". An "Unsafe Shutdown" is defined as any time that the Unraid server was restarted without having previously successfully stopped the array. The most common cause of Sync Errors is an unexpected power-loss, which prevents buffered write data from being written to disk. It is highly recommended that users consider purchasing a UPS (uninterruptable power supply) for their systems so that Unraid can be set to shut down tidily on power loss, especially if frequent offsite backups aren't being performed.
It is also recommended that you run an automatic parity check periodically; this can be set up under Settings->Scheduler. The frequency is up to the user, but monthly or quarterly are typical choices. It is also recommended that such a check be non-correcting: if a disk is having problems, a correcting check risks corrupting your parity. The only acceptable result from such a check is 0 errors reported. If errors are reported, you should take pre-emptive action to find out what is causing them. If in doubt, ask questions in the forum.
If you configure an array without any parity devices assigned, the Check option will start a Read check against all the devices in the array. You can use this to check disks in the array for unrecoverable read errors, but know that without a parity device, data may be lost if errors are detected.
A Read Check is also the type of check started if you have disabled drives present and the number of disabled drives is larger than the number of parity drives.
Any time a parity or read check is performed, the system will log the details of the operation and you can review them by clicking the History button under Array Operations. These are stored in a text file under the config directory on your Unraid USB flash device.
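The same history can be read from a console session. The file name below is an assumption about where Unraid keeps the log on the flash device; adjust it if your install differs.

```shell
# Peek at the last few parity/read check results stored on the flash
# device. The path is an assumption; the History button shows the same data.
show_history() {
  if [ -f "$1" ]; then tail -n 5 "$1"; else echo "no history at $1"; fi
}
show_history /boot/config/parity-checks.log
```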
Spin up and down disks
If you wish to manually control the spin state of your rotational storage devices or toggle your SSD between active and standby mode, these buttons provide that control. Know that if files are in the process of being accessed while using these controls, the disk(s) in use will remain in an active state, ignoring your request.
When disks are in a spun-down state, they will not report their temperature through the webGui.
Reset the array configuration
If you wish to remove a disk from the array or you simply wish to start from scratch to build your array configuration, there is a tool in Unraid that will do this for you. To reset the array configuration, perform the following steps:
- Navigate to the Tools page and click New Config
- You can (optionally) elect to have the system preserve some of the current assignments while resetting the array. This can be very useful if you only intend to make a small change as it avoids you having to re-enter the details of the disks you want to leave unchanged.
- Click the checkbox confirming that you want to do this and then click apply to perform the operation
- Return to the Main tab and your configuration will have been reset
- Make any adjustments to the configuration that you want.
- Start the array to commit the configuration. You can start in Normal or Maintenance mode.
- Unraid will recognize if any drives have been previously used by Unraid, and when you start the array as part of this procedure the contents of such disks will be left intact.
- There is a checkbox next to the Start button that you can use to say 'Parity is Valid'. Do not check this unless you are sure it is the correct thing to do, or unless advised to do so by an experienced Unraid user as part of a data recovery procedure.
- Removing a data drive from the array will always invalidate parity unless special action has been taken to ensure the disk being removed only contains zeroes
- Reordering disks after doing the New Config without removing drives does not invalidate parity1, but it DOES invalidate parity2.
Undoing a reset
If for any reason after performing a reset, you wish to undo it, perform the following steps:
- Browse to your flash device over the network (SMB)
- Open the Config folder
- Rename the file super.old to super.dat
- Refresh the browser on the Main page and your array configuration will be restored
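The same rename can be done from a console session instead of over SMB; /boot is where Unraid mounts the flash device. The function below is a small sketch that refuses quietly if there is nothing to restore.

```shell
# Undo a New Config from the console by restoring the saved super file.
restore_super() {
  cfg="$1"
  if [ -f "$cfg/super.old" ]; then
    mv "$cfg/super.old" "$cfg/super.dat"
    echo "restored $cfg/super.dat"
  else
    echo "no $cfg/super.old to restore"
  fi
}
restore_super /boot/config
```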
Unraid can be configured to send you status reports about the state of the array.
An important point about these reports is:
- They only tell you if the array currently has any disks disabled or showing read/write errors.
- The status is reset when you reboot the system, so it does not tell you what the status was in the past.
- IMPORTANT: The status report does not take into account the SMART status of the drive. You can therefore get a status report indicating that the array appears to be healthy even though the SMART information might indicate that a disk might not be too healthy.
Unraid can be configured to report whether SMART attributes for a drive are changing. The idea is to warn you in advance that a drive might be experiencing problems, even though it has not yet caused read/write errors, so that you can take pre-emptive action before a problem becomes serious and potentially leads to data loss. You should have notifications enabled so that you can see these warnings even when you are not running the Unraid GUI.
SMART monitoring is currently only supported for SATA drives and is not available for SAS drives.
Which SMART attributes are monitored can be configured by the user, but the default ones are:
- 5: Reallocated Sectors count
- 187: Reported uncorrected errors
- 188: Command timeout
- 197: Current Pending Sector Count
- 198: Uncorrectable sector count
- 199: UDMA CRC error count
If any of these attributes change value then this will be indicated on the Dashboard by the icon against the drive turning orange. You can click on this icon and a menu will appear that allows you to acknowledge that you have seen the attribute change, and then Unraid will stop telling you about it unless it changes again.
You can manually see all the current SMART information for a drive by clicking on its name on the Main tab in the Unraid GUI.
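From a console you can pull the same table with smartctl, which ships with Unraid. The small filter below is just a convenience sketch that narrows smartctl's attribute table to the monitored IDs listed above.

```shell
# Show only the default monitored attributes from smartctl's table.
# Typical use (as root): smartctl -A /dev/sdb | monitored_only
monitored_only() { grep -E '^ *(5|187|188|197|198|199) '; }
# Demonstration with two sample rows; only attribute 5 is in the watch list:
printf '  5 Reallocated_Sector_Ct 0\n  9 Power_On_Hours 12345\n' | monitored_only
```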
There are two primary modes of operating the cache in Unraid:
Single device mode
When the number of disk slots for the cache is set to one, this is referred to as running in single device mode. In this mode, you will have no protection for any data that exists on the cache, which is why pool mode is recommended. However, unlike in pool mode, while in single device mode, you are able to adjust the filesystem for the cache device to something other than btrfs. It is for this reason that there are no special operations for single mode. You can only add or remove the device from the system.
NOTE: If you choose to use a non-btrfs file system for your cache device operating in single mode, you will not be able to expand to a cache pool without first reformatting the device with btrfs. It is for this reason that btrfs is the default filesystem for the cache, even when operating in single device mode.
Cache pool mode
When more than one disk is assigned to the cache, this is referred to as running in cache pool mode. This mode utilizes btrfs RAID 1 in order to allow for any number of devices to be grouped together in a pool. Unlike a traditional RAID 1, a btrfs RAID1 can mix and match devices of different sizes and speeds and can even be expanded and contracted as your needs change. To calculate how much capacity your btrfs pool will have, check out this handy btrfs disk usage calculator. Set the Preset RAID level to RAID-1, select the number of devices you have, and set the size for each. The tool will automatically calculate how much space you will have available.
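The calculator's raid1 result can also be approximated by hand: every block exists on two different devices, so usable space is half the total, but never more than the total minus the largest device. The sizes below (in GB) are purely illustrative.

```shell
# Approximate usable capacity of a btrfs raid1 pool: each block is stored
# twice on two different devices, so usable space is the smaller of
# total/2 and total-minus-largest.
raid1_usable() {
  total=0; largest=0
  for size in "$@"; do
    total=$(( total + size ))
    if [ "$size" -gt "$largest" ]; then largest=$size; fi
  done
  half=$(( total / 2 )); rest=$(( total - largest ))
  if [ "$rest" -lt "$half" ]; then echo "$rest"; else echo "$half"; fi
}
raid1_usable 1000 1000 2000   # prints 2000 (GB)
```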
Here are typical operations that are likely to want to carry out on the cache:
- Back up the cache to the array
- Switch the cache to run in pool mode
- Add disks
- Replace a disk
Backing up the cache to the array
The procedure shown assumes that there are at least some Docker- and/or VM-related files on the cache disk; some of these steps are unnecessary if there aren't.
- Stop all running Dockers/VMs
- Settings -> VM Manager: disable VMs and click apply
- Settings -> Docker: disable Docker and click apply
- Click on Shares and change to "Yes" all cache shares with "Use cache disk:" set to "Only" or "Prefer"
- Check that there's enough free space on the array and invoke the mover by clicking "Move Now" on the Main page
- When the mover finishes check that your cache is empty
- Note that any files on the cache root will not be moved as they are not part of any share and will need manual attention
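A quick console check that nothing (including hidden files) is left on the cache before you carry on. This is only a sketch: note that a missing path also reads as empty because errors are suppressed.

```shell
# Succeeds only when the given mount point holds no entries at all;
# ls -A includes hidden files, which the mover can leave behind.
is_empty() { [ -z "$(ls -A "$1" 2>/dev/null)" ]; }
if is_empty /mnt/cache; then echo "cache is empty"; else echo "files remain on the cache"; fi
```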
You can then later restore files to the cache by effectively reversing the above steps:
- Click on all shares whose content you want on the cache and set "Use cache disk:" option to "Only" or "Prefer" as appropriate.
- Check that there's enough free space on the cache and invoke the mover by clicking "Move Now" on the Main page
- When the mover finishes check that your cache now has the expected content and that the shares in question no longer have files on the main array
- Settings -> Docker: enable Docker and click apply
- Settings -> VM Manager: enable VMs and click apply
- Start any Dockers/VMs that you want to be running
Switching the cache to pool mode
If you want a cache pool (i.e. a multi-drive cache), then the only supported format is BTRFS. If the cache is already in BTRFS format, you can follow the procedure below for adding an additional drive to a cache pool.
If the cache is NOT in BTRFS format then you will need to do the following:
- Use the procedure above for backing up any existing content you want to keep to the array.
- Stop the array
- Click on the cache on the Main tab and change the format to be BTRFS
- Start the array
- The cache should now show as unmountable and offer the option to format it.
- Confirm that you want to do this and click the format button
- When the format finishes you now have a cache pool (albeit with only one drive in it)
- If you want additional drives in the cache pool, you can (optionally) add them now.
- Use the restore part of the previous procedure to restore any content you want on the cache
Adding disks to a cache pool
- You can only do this if the cache is already formatted as BTRFS
- If it is not then you will need to first follow the steps in the previous section to create a cache pool in BTRFS format.
To add disks to the BTRFS cache (pool) in your array, perform the following steps:
- Stop the array.
- Navigate to the Main tab.
- Scroll down to the section labeled Cache Devices.
- Change the number of Slots to be at least as many as the number of devices you wish to assign.
- Assign the devices you wish to the cache slot(s).
- Start the array.
- Click the checkbox and then the button under Array Operations to format the devices.
- Make sure that the devices shown are those you expect - you do not want to accidentally format a device that contains data you want to keep.
Removing disks from a cache pool
- You can only do this if your cache is configured for redundancy at both the data and metadata level.
- You can check what RAID level your cache is currently set to by clicking on it on the Main tab and scrolling down to the Balance Status section.
- You can only remove one drive at a time.
- Stop the array
- Unassign a cache drive.
- Start the array
- Click on the cache drive
- If you still have more than one drive in the cache pool, you can simply run a Balance operation
- If you only have one drive left in the pool then switch the cache pool raid level to single as described below
Change Cache Pool RAID Levels
BTRFS can add and remove devices online, and freely convert between RAID levels after the file system has been created.
BTRFS supports raid0, raid1, raid10, raid5, and raid6 (but see the section below about raid5/6), and it can also duplicate metadata or data on a single spindle or multiple disks. When blocks are read in, checksums are verified. If there are any errors, BTRFS tries to read from an alternate copy and will repair the broken copy if the alternative copy succeeds.
By default, Unraid creates BTRFS volumes in a cache pool with data=raid1 and metadata=raid1 to give redundancy.
For more information about the BTRFS options when using multiple devices see the BTRFS wiki article.
You can change the BTRFS raid levels for a cache pool from the Unraid GUI by:
- If the array is not started then start it in normal mode
- Click on the Cache on the Main tab
- Scroll down to the Balance section
- At this point information (including current RAID levels) will be displayed.
- Add the appropriate additional parameters to the Options field.
- As an example, the following screenshot shows how you might convert the cache from the RAID1 to the SINGLE profile.
- Start the Balance operation.
- Wait for the Balance to complete
- The new RAID level will now be fully operational.
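The same conversion can be done from a console; the GUI balance fills in the same options. In this sketch the `run` wrapper only echoes the command so it is safe to copy; set DRY_RUN=0 on a real system.

```shell
# Command-line equivalent of the GUI balance above: convert data and
# metadata to the single profile (e.g. before shrinking the pool to one
# device). -dconvert/-mconvert are standard btrfs balance filters.
DRY_RUN=1   # set to 0 to actually run on an Unraid console
run() { if [ "$DRY_RUN" = 1 ]; then echo "would run: $*"; else "$@"; fi; }
run btrfs balance start -dconvert=single -mconvert=single /mnt/cache
```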
Replace a disk in a cache pool
- You can only do this if the cache is formatted as BTRFS AND is set up to be redundant.
- You can only replace up to one disk at a time from your cache pool.
To replace a disk in the redundant pool, perform the following steps:
- Stop the array.
- Physically detach the disk you wish to remove from your system.
- Attach the replacement disk (must be equal to or larger than the disk being replaced).
- Refresh the Unraid webGui while on the Main tab.
- Select the cache slot that previously was set to the old disk and assign the new disk to the slot.
- Start the array.
- If presented with an option to Format the device, click the checkbox and button to do so.
Remove a disk from a cache pool
There have been times when users have indicated they would like to remove a disk from a cache pool they have set up while keeping all the data intact. This cannot be done from the Unraid GUI but is easy enough to do from the command line in a console session.
Note: You need to maintain the minimum number of devices for the profile in use, i.e., you can remove a device from a 3+ device raid0 pool but you can't remove one from a 2 device raid0 pool (unless it's converted to a single profile first).
With the array running type on the console:
btrfs dev del /dev/mapper/sdX1 /mnt/cache
Replace X with the correct letter for the drive you want to remove from the system as shown on the Main tab (don't forget the 1 after it).
Wait for the device to be deleted (i.e., until the command completes and you get the cursor back).
The device is now removed from the pool. You don't need to stop the array now, but at the next array stop you need to make Unraid forget the now-deleted member. To achieve that:
- Stop the array
- Unassign all pool devices
- Start the array to make Unraid "forget" the pool config
- If the Docker and/or VM services were using that pool, it is best to disable those services before starting, or Unraid will recreate the images somewhere else (assuming they are using /mnt/user paths)
- Stop array (re-enable docker/VM services if disabled above)
- Re-assign all pool members except the removed device
- Start array
You can also remove multiple devices with a single command (as long as the above rule is observed):
btrfs dev del /dev/mapper/sdX1 /dev/mapper/sdY1 /mnt/cache
but in practice this does the same as removing one device and then the other, as they are still removed one at a time, just one after another with no further input from you.
As of version 6.9, you can create multiple pools and manage them independently. This feature permits you to define up to 35 named pools, of up to 30 storage devices per pool. Pools are created and managed via the Main page.
- Note: A pre-6.9.0 cache disk/pool is now simply a pool named "cache". When you upgrade a server which has a cache disk/pool defined, a backup of config/disk.cfg will be saved to config/disk.cfg.bak, and then the cache device assignment settings are moved out of config/disk.cfg and into a new file, config/pools/cache.cfg. If you later revert to a pre-6.9.0 Unraid OS release, you will lose your cache device assignments and will have to manually re-assign devices to the cache. As long as you reassign the correct devices, data should remain intact.
When you create a user share or edit an existing user share, you can specify which pool should be associated with that share. The assigned pool functions identically to the current cache pool operation.
Something to be aware of: when a directory listing is obtained for a share, the Unraid array disk volumes and all pools which contain that share are merged in this order:
- the pool assigned to the share
- all the other pools, in strverscmp() order
A single-device pool may be formatted with either xfs, btrfs, or (deprecated) reiserfs. A multiple-device pool may only be formatted with btrfs.
- Note: Something else to be aware of. Let's say you have a 2-device btrfs pool. This is what btrfs calls "raid1", and what most people would understand to be "mirrored disks". That is mostly true, in that the same data exists on both disks, but not necessarily at the block level. Now let's say you create another pool, un-assign one of the devices from the existing 2-device btrfs pool, and assign it to the new pool. You now have two single-device btrfs pools. Upon array Start, a user might understandably assume there are now two pools with exactly the same data. However, this is not the case: when Unraid OS sees that a btrfs device has been removed from an existing multi-device pool, upon array Start it will run wipefs on that device so that upon mount it will not be included in the old pool. This effectively deletes all the data on the moved device.
File System Management
Selecting a File System type
Each array drive in an Unraid system is set up as a self-contained file system. Unraid currently supports the following file system types:
- XFS: This is the default format for array drives on a new system. It is a well-tried Linux file system and deemed to be the most robust.
- XFS is better than BTRFS at recovering from file system corruption (which can happen after unclean shutdowns or system crashes).
- BTRFS: This is a newer file system that supports advanced features not available with XFS. It is considered not quite as stable as XFS, but many Unraid users have reported it seems as robust as XFS when used on array drives, where each drive is a self-contained file system. Some of its features are:
- It supports detecting file content corruption (often colloquially known as bit-rot) by internally using checksumming techniques
- It can support a single file system spanning multiple drives, and in such a case it is not necessary that the drives all be of the same size.
- In multi-drive mode various levels of RAID can be supported (although these are a BTRFS specific implementation and not necessarily what one expects). The default in Unraid for a cache pool is RAID1 so that data is stored redundantly to protect against drive failure.
- It is the only option supported when using a cache pool spanning multiple drives that need to run as a single logical drive, as this requires the multi-drive support.
- In multi-drive mode in the cache pool it is not always obvious how much usable space you will end up with. The BTRFS Space Calculator can help with this.
- ReiserFS: This is supported for legacy reasons for those migrating from earlier versions of Unraid where it was the only supported file system type.
- There is only minimal involvement from Linux kernel developers in maintaining the ReiserFS drivers for new Linux kernel versions, so the chance of a new kernel causing problems with ReiserFS is higher than for other Linux file system types.
- A ReiserFS file system has a hard limit of 16TB, and commercial-grade hard drives have now reached this size.
- Write performance can degrade significantly as the file system starts getting full.
- It is extremely good at recovering from even extreme levels of file system corruption.
- It is now deprecated for use with Unraid and should not be used by new users.
These formats are standard Linux formats and as such any array drive can easily be removed from the array and read on any Linux system. This can be very useful in any data recovery scenario. Note, however, that the initial format needs to be done on the Unraid system as Unraid has specific requirements around how the disk is partitioned that are unlikely to be met if the partitioning is not done on Unraid. Unfortunately, these formats cannot be read as easily on Windows or macOS systems as these OS do not recognize the file system formats without additional software being installed that is not freely obtainable.
A user can use a mixture of these file system types in their Unraid system without causing any specific issues. In particular, the Unraid parity system is file system agnostic: it works at the physical sector level and is not even aware of the file system in use on any particular drive.
In addition drives can be encrypted.
If using a cache pool (i.e multiple drives) then the only supported type is BTRFS and the pool is formatted as a single entity. By default, this will be the BTRFS version of RAID1 to give redundancy, but other BTRFS options can be achieved by running the appropriate btrfs command.
Additional file formats are supported by the Unassigned Devices and Unassigned Devices Plus plugins. These can be useful when you have drives that are to be used for transfer purposes, particularly to systems that do not support standard Linux formats.
Setting a File System type
The File System type for a new drive can be set in 2 ways:
- Under Settings->Disk Settings the default type for array drives and the cache pool can be set.
- On a new Unraid system this will be XFS for array drives and BTRFS for the cache.
- Explicitly for individual drives by clicking on a drive on the Main tab (with the array stopped) and selecting a type from those offered.
- When a drive is first added the file system type will show as auto which means use the setting specified under Settings->Disk Settings.
- Setting an explicit type overrides the global setting.
- The only supported format for a cache containing more than one drive is BTRFS.
Creating a File System (Format)
Before a disk can be used in Unraid, an empty file system of the desired type needs to be created on it. This is the operation commonly known as "format" and it erases any existing content on the disk.
If a drive has already been formatted by Unraid then if it now shows as unmountable you probably do NOT want to format it again unless you want to erase its contents. In such cases, the appropriate action is usually instead to use the File System check/repair process detailed later.
The basic process to format a drive once the file system type has been set is:
- Start the array
- Any drives where Unraid does not recognize the format will be shown as unmountable and there will be an option to format unmountable drives
- Check that ALL the drives shown as unmountable are ones you want to format. You do not want to accidentally format another drive and erase its contents
- Click the check box to say you really want to format the drive.
- Carefully read the resulting dialog that outlines the consequences
- The Format button will now be enabled so if you want to go ahead with the format click on it.
- The format process will start running for the specified disks.
- If the disk has not previously been used by Unraid then it will start by rewriting the partition table on the drive to conform to the standard Unraid expects.
- The format should only take a few minutes but if the progress does not automatically update you might need to refresh the Main tab.
Once the format has completed then the drive is ready to start being used to store files.
Drive shows as unmountable
A drive can show as unmountable in the Unraid GUI for two reasons:
- The disk has never been used in Unraid and you have just added it to a new disk slot in the array. In this case, you want to follow the format procedure shown above to create a new empty file system on the drive so it is ready to receive files.
- File system corruption has occurred. This is not infrequent if a write to a disk fails for any reason and Unraid marks the disk as disabled, although it can occur at other times as well. In such a case you want to use the file system check/repair process documented below to get the disk back into a state where you can mount it again and see all its data.
- Note that this process can be carried out on a disk that is being ‘emulated’ by Unraid prior to carrying out any rebuild process. If a disk is showing as unmountable while being emulated then it will also show as unmountable after the rebuild (as all the rebuild process does is make the physical disk match the emulated one). In addition the process for repairing a file system is much faster than the rebuild process so there is not much point in wasting time on a rebuild if the repair is not going to work.
- IMPORTANT: You do not want to format the drive as this will update parity accordingly and you would lose the contents of the drive.
It is often a good idea to make a post in the forums and attach your system’s diagnostics zip file (obtained via Tools->Diagnostics) if you want any feedback on such an issue.
Checking a File System
If a disk that was previously mounting fine suddenly starts showing as unmountable then this normally means that there is some sort of corruption at the file system level. This most commonly occurs after an unclean shutdown but could happen any time a write to a drive fails or if the drive ends up being marked as 'disabled' (i.e. with a red 'x' in the Unraid GUI). If the drive is marked as disabled and being emulated then the check is run against the emulated drive and not the physical drive.
IMPORTANT: At this point, the Unraid GUI will be offering an option to format unmountable drives. This will erase all content on the drive and update parity to reflect this making recovering the data impossible/very difficult so do NOT do this unless you are happy to lose the contents of the drive.
To recover from file system corruption, one needs to run the tool appropriate to the file system on the disk. Points that users new to Unraid often misunderstand are:
- Rebuilding a disk does not repair file system corruption
- If a disk is showing as being emulated then the file system check and/or repair are run against the emulated drive and not the physical drive.
Preparing to test
The first step is to identify the file system of the drive you wish to test or repair. If you don't know for sure, then go to the Main page of the webGui, and click on the name of the drive (Disk 3, Cache, etc). Look for File system type, and you will see the file system format for your drive (should be xfs, btrfs or reiserfs).
If the file system is XFS or ReiserFS (but NOT BTRFS), then you must start the array in Maintenance mode, by clicking the Maintenance mode check box before clicking the Start button. This starts the Unraid driver but does not mount any of the drives.
If the file system is BTRFS, then make sure the array is started, and NOT in Maintenance mode.
Running the Test using the webGui
The process for checking a file system using the Unraid GUI is as follows:
- Make sure that you have the array started in the correct mode. If necessary, stop the array and restart it in the correct mode by checking/unchecking the Maintenance Mode checkbox next to the Start button.
- From the Main screen of the webGui, click the name of the disk that you want to test or repair. For example, if the drive of concern is Disk 5, then click on Disk 5. If it's the Cache drive, then click on Cache. If in Maintenance mode the disks will not be mounted, but the underlying /dev/mdX type devices that correspond to each diskX in the Unraid GUI will have been created. This is important as any write operation against one of these 'md' type devices will also update parity to reflect that the write has happened.
- You should see a page of options for that drive, beginning with various partition, file system format, and spin down settings.
- The section following that is the one you want, titled Check Filesystem Status. There is a box with the 2 words Not available in it. This is the command output box, where the progress and results of the command will be displayed. Below that is the Check button that starts the test or repair, followed by the options box where you can type in options for the test/repair command.
- The tool that will be run is shown and the status at this point will show as Not available. The Options field may include a parameter that causes the selected tool to run in check-only mode so that the underlying drive is not actually changed. For more help, click the Help button in the upper right.
- Click on the Check button to run the file system check
- Information on the check progress is now displayed. You may need to use the Refresh button to get it to update.
- If you are not sure what the results of the check mean you should copy the progress information so you can ask a question in the forum. When including this information as part of a forum post, mark it as code (using the <?> icon) to preserve the formatting, as otherwise it becomes difficult to read.
Running the Test using the command line
If you ever need to run a check on a drive that is not part of the array then you need to run the appropriate command from a console/terminal session. As an example, for an XFS disk you would use a command of the form:
xfs_repair -n /dev/sdX1
where X corresponds to the device identifier shown in the Unraid GUI. Points to note are:
- The value of X can change when Unraid is rebooted so make sure it is correct for the current boot
- Note the presence of the '1' on the end to indicate the partition to be checked.
- The reason for not doing it this way on array drives is that although the disk would be repaired, parity would be invalidated, which can reduce the chances of recovering a failed drive until valid parity has been re-established.
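Putting these points together, a check-only session against an unassigned XFS drive might look like the following sketch (sdX is a placeholder for the real device on the current boot):

```shell
# List block devices to confirm which sdX letter the drive has for this boot.
lsblk

# Run xfs_repair in check-only mode (-n) against the first partition.
# -n reports problems but makes no changes to the drive.
xfs_repair -n /dev/sdX1
```

Dropping the -n flag turns the same command into an actual repair, so double-check the device letter before doing so.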
Repairing a File System
You typically run this just after running a check as outlined above, but if skipping that follow steps 1-4 to get to the point of being ready to run the repair. It is a good idea to enable the Help built into the GUI to get more information on this process.
If the drive is marked as disabled and being emulated then the repair is run against the emulated drive and not the physical drive. It is frequently done before attempting to rebuild a drive as it is the contents of the emulated drive that is used by the rebuild process.
- Remove any parameters from the Options field that would cause the tool to run in check-only mode.
- Add any additional parameters to the Options field required that are suggested from the check phase. If not sure then ask in the forum.
- The Help build into the GUI can provide guidance on what options might be applicable.
- Press the Check button to start the repair process. You can now periodically use the Refresh button to update the progress information
- If the repair does not complete for any reason then ask in the forum for advice on how to best proceed if you are not sure.
- If repairing an XFS formatted drive then it is quite normal for the xfs_repair process to give you a warning saying you need to provide the -L option to proceed. Despite this ominous warning message, this is virtually always the right thing to do and does not result in data loss.
- When asking a question in the forum and including the output from the repair attempt as part of your post, mark it as code (using the <?> icon) to preserve the formatting, as otherwise it becomes difficult to read.
- If the repair completes without error then stop the array and restart in normal mode. The drive should now mount correctly.
If at any point you do not understand what is happening then ask in the forum.
Changing a File System type
There may be times when you wish to change the file system type on a particular drive. The steps are outlined below.
IMPORTANT: These steps will erase any existing content on the drive so make sure you have first copied it elsewhere before attempting to change the file system type if you do not want to lose it.
- Stop the array
- Click on the drive whose format you want to change
- Change the format to the new one you want to use. Repeat if necessary for each drive to be changed
- Start the array
- There will now be an option on the Main tab to format unmountable drives, showing which drives these will be. Check that only the drive(s) you expect show.
- Check the box to confirm the format and then press the Format button.
- The format will now start. It typically only takes a few minutes. There have been occasions where the status does not update but refreshing the Main tab normally fixes this.
If anything appears to go wrong then ask in the forum, adding your system diagnostics zip file (obtained via Tools->Diagnostics) to your post.
- For SSDs you can erase the current contents using
blkdiscard /dev/sdX
at the console, where 'X' corresponds to what is currently shown in the Unraid GUI for the device. Be careful that you get it right as you do not want to accidentally erase the contents of the wrong drive.
Converting to a new File System type
There is the special case of changing a file system where you want to keep the contents of the drive. The most common reason for doing this is those users who ran an older version of Unraid where the only supported file system type was ReiserFS (which is now deprecated) and who want to switch the drive to using either the XFS or BTRFS file system instead. However, there may be users who want to convert between file system types for other reasons.
In simplistic terms the process is:
- Copy the data off the drive in question to another location. This can be elsewhere on the array or anywhere else suitable.
- You do have to have enough free space to temporarily hold this data
- Many users do such a conversion just after adding a new drive to the array, as this gives them the free space required.
- Follow the procedure above for changing the file system type of the drive. This will leave you with an empty drive that is now in the correct format but that has no files on it.
- Copy the files you saved in step 1 back to this drive
- If you have multiple drives that need to be converted then do them one at a time.
This is a time-consuming process as you are copying large amounts of data. However, most of this is computer time as the user does not need to be continually present closely watching the actual copying steps.
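The copy steps above can be sketched with rsync from the console (the disk numbers and the backup folder name are purely illustrative):

```shell
# Step 1: copy everything from the drive being converted (disk3 here)
# to a drive with enough free space (disk5 here).
rsync -av /mnt/disk3/ /mnt/disk5/backup-of-disk3/

# ...change the file system type of disk3 and format it as described above...

# Final step: copy the saved files back to the freshly formatted drive.
rsync -av /mnt/disk5/backup-of-disk3/ /mnt/disk3/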
Reformatting a drive
If you want to reformat a drive to erase its contents while keeping the existing file system type, many users find that it is not obvious how to do this from the Unraid GUI.
The way to do this is to follow the above process for changing the file system type twice. The first time you change it to any other type, and then once it has been formatted to the new type repeat the process this time setting the type back to the one you started with.
This process will only take a few minutes, and as you go parity is updated accordingly.
Reformatting a cache drive
There may be times when you want to change the format used on the cache drive (or some similar operation) and preserve as much of its existing contents as possible. In such cases the recommended way to proceed that is least likely to go wrong is:
- Stop array.
- Disable docker and VM services under Settings
- Start array. If you have correctly disabled these services there will be NO Docker or VMs tab in the GUI.
- Set all shares that have files on the cache and do not currently have Use Cache: Yes to Use Cache: Yes. Make a note of which shares you changed and what setting they had before the change.
- Run mover from the Main tab; wait for completion (which can take some time to complete if there are a lot of files); check cache drive contents, should be empty. If it's not, STOP, post diagnostics, and ask for help.
- Stop array.
- Set the desired cache drive format to XFS or BTRFS. If you only have a single cache disk and are keeping that configuration, then XFS is the recommended format. XFS is only available as a selection if there is only one cache slot shown while the array is stopped.
- Start array.
- Verify that the cache drive and ONLY the cache drive shows unformatted. Select the checkbox saying you are sure, and format the drive.
- Set any shares that you changed to be Cache: Yes earlier to Cache: Prefer if they were originally Cache: Only or Cache: Prefer. If any were Cache: No, set them back that way.
- Run mover from the Main tab; wait for completion; check cache drive contents which should be back the way it was.
- Change any shares that were set to Use Cache: Only back to that option.
- Stop array.
- Enable docker and VM services.
- Start array
There are other alternative procedures that might be faster if you are Linux aware, but the one shown above is the one that has proved most likely to succeed without error for the average Unraid user.
BTRFS Operations
There are a number of operations that are specific to BTRFS formatted drives that do not have a direct equivalent in the other formats.
Unlike most conventional filesystems, BTRFS uses a two-stage allocator. The first stage allocates large regions of space known as chunks for specific types of data, then the second stage allocates blocks like a regular filesystem within these larger regions. There are three different types of chunks:
- Data Chunks: These store regular file data.
- Metadata Chunks: These store metadata about files, including among other things timestamps, checksums, file names, ownership, permissions, and extended attributes.
- System Chunks: These are a special type of chunk which stores data about where all the other chunks are located.
Only the type of data that the chunk is allocated for can be stored in that chunk. The most common case these days when you get a -ENOSPC error on BTRFS is that the filesystem has run out of room for data or metadata in existing chunks, and can't allocate a new chunk. You can verify that this is the case by running btrfs fi df on the filesystem that threw the error. If the Data or Metadata line shows a Total value that is significantly different from the Used value, then this is probably the cause.
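The chunk usage described above can be inspected from the console (assuming a BTRFS volume mounted at /mnt/cache):

```shell
# Show per-chunk-type Total vs Used figures. A large gap between Total and
# Used on the Data or Metadata line suggests a balance would free up chunks.
btrfs filesystem df /mnt/cache
```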
What btrfs balance does is to send things back through the allocator, which results in space usage in the chunks being compacted. For example, if you have two metadata chunks that are both 40% full, a balance will result in them becoming one metadata chunk that's 80% full. By compacting space usage like this, the balance operation is then able to delete the now-empty chunks and thus frees up room for the allocation of new chunks. If you again run btrfs fi df after you run the balance, you should see that the Total and Used values are much closer to each other, since balance deleted chunks that weren't needed anymore.
The BTRFS balance operation can be run from the Unraid GUI by clicking on the drive on the Main tab and running balance from the resulting dialog. The current status information for the volume is displayed. You can optionally add parameters to be passed to the balance operation and then start the balance by pressing the Balance button.
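From the command line the same operation can be run with a usage filter, which only rewrites chunks below a given fullness and so finishes much faster than a full balance (the 75% threshold and the /mnt/cache mount point are illustrative):

```shell
# Compact only data chunks that are less than 75% full.
btrfs balance start -dusage=75 /mnt/cache

# Watch progress of a running balance:
btrfs balance status /mnt/cache
```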
Scrubbing involves reading all the data from all the disks and verifying checksums. If any values are not correct, the data can be corrected by reading a good copy of the block from another drive. The scrubbing code also scans on read automatically. It is recommended that you scrub high-usage file systems once a week and all other file systems once a month.
You can initiate a check of the entire file system by triggering a file system scrub job. The scrub job scans the entire file system for integrity. It automatically attempts to report and repair any bad blocks that it finds along the way. Instead of going through the entire disk drive, the scrub job deals only with data that is actually allocated. Depending on the allocated disk space, this is much faster than performing an entire surface scan of the disk.
The BTRFS scrub operation can be run from the Unraid GUI by clicking on the drive on the Main tab and running scrub from the resulting dialog.
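The equivalent console commands are sketched below (assuming the pool is mounted at /mnt/cache):

```shell
# Start a scrub in the background on the pool:
btrfs scrub start /mnt/cache

# Check progress and any checksum errors found or repaired:
btrfs scrub status /mnt/cache
```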
Unassigned Drives
Unassigned drives are drives that are present in the server running Unraid that have not been added to the array or to a cache pool.
It is important to note that all such drives that are plugged into the server at the point you start the array count towards the Unraid Attached Devices license limits.
Typical uses for such drives are:
- Plugging in removable drives for the purposes of transferring files or backing up drives.
- Having drives dedicated to specific use (such as running VMs) where you want higher performance than can be achieved by using array drives.
It is strongly recommended that you install the Unassigned Devices (UD) plugins via the Apps tab if you want to use Unassigned Drives on your system. There are 2 plugins available:
- The basic Unassigned Devices plugin provides support for file system types supported as standard in Unraid.
- The Unassigned Devices Plus plugin extends the file system support to include options such as exFAT and HFS+.
You should look at the Unassigned Devices support thread for these plugins to get more information on the very extensive facilities offered and guidance on how to use them.
More detail still needs to be added
Performance
Array Write Modes
Unraid maintains real-time parity and the performance of writing to the parity protected array in Unraid is strongly affected by the method that is used to update parity.
There are fundamentally 2 methods supported:
- read/modify/write (the traditional default)
- Turbo Mode (also known as reconstruct write)
These are discussed in more detail below to help users decide which modes are appropriate to how they currently want their array to operate.
Setting the Write mode
The write mode is set by going to Settings->Disk Settings and looking for the Tunable (md_write_method) setting. The 3 options are:
- Auto: Currently this operates just like setting the read/modify/write option but is reserved for future enhancement
- read/modify/write
- reconstruct write (a.k.a. Turbo write)
To change it, click on the option you want, then the Apply button. The effect should be immediate so you can change it at any time.
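It is also possible to change this tunable from the console via Unraid's mdcmd utility; the mapping of values to modes shown here is an assumption drawn from community usage, so verify against your release before relying on it:

```shell
# Switch the array write mode to reconstruct write (Turbo write).
mdcmd set md_write_method 1

# Switch back to the traditional read/modify/write mode.
mdcmd set md_write_method 0
```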
The different modes and their implications are discussed in more detail below
Read/Modify/Write mode
Historically, Unraid has used the "read/modify/write" method to update parity and to keep parity correct for all data drives.
Say you have a block of data to write to a drive in your array, and naturally you want parity to be updated too. In order to know how to update parity for that block, you have to know what is the difference between this new block of data and the existing block of data currently on the drive. So you start by reading in the existing block and comparing it with the new block. That allows you to figure out what is different, so now you know what changes you need to make to the parity block, but first, you need to read in the existing parity block. So you apply the changes you figured out to the parity block, resulting in a new parity block to be written out. Now you want to write out the new data block, and the parity block, but the drive head is just past the end of the blocks because you just read them. So you have to wait a long time (in computer time) for the disk platters to rotate all the way back around until they are positioned to write to that same block. That platter rotation time is the part that makes this method take so long. It's the main reason why parity writes are so much slower than regular writes.
To summarize, for the "read/modify/write" method, you need to:
- read in the parity block and read in the existing data block (can be done simultaneously)
- compare the data blocks, then use the difference to change the parity block to produce a new parity block (very short)
- wait for platter rotation (very long!)
- write out the parity block and write out the data block (can be done simultaneously)
That's 2 reads, a calc, a long wait, and 2 writes.
The advantages of this approach are:
- Only the parity drive(s) and the drive being updated need to be spun up.
- Minimises power usage as array drives can be kept spun down when not being accessed
- Does not require all the other array drives to be working perfectly
Turbo write mode
More recently Unraid introduced the Turbo write mode (often called "reconstruct write")
We start with that same block of new data to be saved, but this time we don't care about the existing data or the existing parity block. So we can immediately write out the data block, but how do we know what the parity block should be? We issue a read of the same block on all of the *other* data drives, and once we have them, we combine all of them plus our new data block to give us the new parity block, which we then write out! Done!
To summarize, for the "reconstruct write" method, you need to:
- write out the data block while simultaneously reading in the data blocks of all other data drives
- calculate the new parity block from all of the data blocks, including the new one (very short)
- write out the parity block
That's a write and a bunch of simultaneous reads, a calc, and a write, but no platter rotation wait! The upside is it can be much faster.
The downside is:
- ALL of the array drives must be spinning, because they ALL are involved in EVERY write.
- Increased power draw due to the need to keep all drives spinning
- All drives must be reading without error.
So what are the ramifications of this?
- For some operations, like parity checks and parity builds and drive rebuilds, it doesn't matter, because all of the drives are spinning anyway.
- For large write operations, like large transfers to the array, it can make a big difference in speed!
- For a small write, especially at an odd time when the drives are normally sleeping, all of the drives have to be spun up before the small write can proceed.
- And what about those little writes that go on in the background, like file system housekeeping operations? EVERY write at any time forces EVERY array drive to spin up. So you are likely to be surprised at odd times when checking on your array, and expecting all of your drives to be spun down, and finding every one of them spun up, for no discernible reason.
- So one of the questions to be faced is, how do you want your various write operations to be handled. Take a small scheduled backup of your phone at 4 in the morning. The backup tool determines there's a new picture to back up, so tries to write it to your unRAID server. If you are using the old method, the data drive and the parity drive have to spin up, then this small amount of data is written, possibly taking a couple more seconds than Turbo write would take. It's 4am, do you care? If you were using Turbo write, then all of the drives will spin up, which probably takes somewhat longer spinning them up than any time saved by using Turbo write to save that picture (but a couple of seconds faster in the save). Plus, all of the drives are now spinning, uselessly.
- Another possible problem if you were in Turbo mode, and you are watching a movie streaming to your player, then a write kicks into the server and starts spinning up ALL of the drives, causing that well-known pause and stuttering in your movie. Who wants to deal with the whining that starts then?
Currently, you only have the option to use the old method or the new (currently the Auto option means the old method). The plan is to add the true Auto option that will use the old method by default, *unless* all of the drives are currently spinning. If the drives are all spinning, then it slips into Turbo. This should be enough for many users. It would normally use the old method, but if you planned a large transfer or a bunch of writes, then you would spin up all of the drives - and enjoy faster writing.
The Auto method has been reserved for the potential of the system automatically switching modes depending on current array activity, but this has not happened so far. The problem is knowing when a drive is spinning, and being able to detect it without noticeably affecting write performance, ruining the very benefits we were trying to achieve. If on every write you have to query each drive for its status, then you will noticeably impact I/O performance. So to maintain good performance, you need another function working in the background keeping near-instantaneous track of spin status, and providing a single flag for the writer to check, whether they are all spun up or not, to know which method to use.
Many users would like tighter and smarter control of which write mode is in use. There is currently no official way of doing this but you could try searching for "Turbo Write" on the Apps tab for unofficial ways to get better control.
Using a Cache Drive
It is possible to use a Cache Drive/Pool to improve the perceived speed of writing to the array. This can be done on a share-by-share basis using the Use Cache setting available for each share by clicking on the share name on the Shares tab in the GUI. It is important to realize that using the cache has not really sped up writing files to the array - it is just that such writes now occur when the user is not waiting on them.
Points to note are:
- The Yes setting for Use Cache causes new files for the share to initially be written to the cache and later moved to the parity protected array when mover runs.
- Writes to the cache run at the full speed the cache is capable of.
- It is not uncommon to use SSDs in the cache to get maximum performance.
- Moves from cache to array are still comparatively slow, but since mover is normally scheduled to run when the system is otherwise idle this is not visible to the end-user.
- There is a Minimum Free Space setting under Settings->Global Share Settings, and if the free space on the cache falls below this value Unraid will stop trying to write new files to the cache. Since Unraid does not know the final size of a file when it first creates it, it is recommended that the value for this setting be as large as (or larger than) the biggest file you expect to write to the share, as you want to stop Unraid selecting the cache for a file that will not fit in the space available. This will stop the write failing with an 'out of space' error when the free space gets exhausted.
- If there is not sufficient free space on the cache then writes will start by-passing the cache and revert to the speeds that would be obtained when not using the cache.
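The selection logic can be sketched in shell arithmetic; the numbers are made up for illustration and Unraid's actual implementation differs:

```shell
# Unraid picks a target drive before it knows the final file size, so the
# Minimum Free Space setting should be at least as large as the biggest
# file you expect to write. Hypothetical values, in GiB:
min_free_gib=50     # Minimum Free Space setting
cache_free_gib=40   # current free space on the cache

if [ "$cache_free_gib" -lt "$min_free_gib" ]; then
  echo "cache skipped: write goes straight to the array"
else
  echo "cache selected for this write"
fi
```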
Read Performance
Normally read performance is determined by the maximum speed at which a file can be read off a drive. Unlike some other forms of RAID, an Unraid system does not utilize striping techniques to improve performance, as every file is constrained to a single drive.
If a disk is marked as disabled and being emulated then Unraid needs to reconstruct its contents on the fly by reading the appropriate sectors of all the good drives and the parity drive(s). In such a case the read performance is going to be determined primarily by the slowest drives in the system.
It is also worth emphasizing that if there is any array operation going on such as a parity check or a disk rebuild then read performance will be degraded significantly due to drive head movements caused by disk contention between the two operations.
THIS SECTION IS STILL UNDER CONSTRUCTION
A lot more detail still needs to be added
Once you have assigned some devices to Unraid and started the array, you can create shares to simplify how you store data across multiple disks in the array. Unraid will automatically create a handful of shares for you that it needs to support common plugins, containers, and virtual machines, but you can also create your own shares for storing other types of data. Unraid supports 2 types of share:
- User Shares
- Disk Shares
You can control which of these types of shares are to be used under Settings->Global Share Settings. The default on Unraid is to have User Shares enabled but Disk Shares disabled.
It is sometimes important to realize that these are two different views of the same underlying file system. Every file/folder that appears under a User Share will also appear under the Disk Share for the physical drive that is storing the file/folder.
User Shares can be enabled/disabled via Settings->Global Share Settings.
From the Shares tab, you can either create a new share or edit an existing share. Click the Help icon in the top-right of the Unraid webGui when configuring shares for more information on the settings available.
User Shares are implemented by using Linux Fuse file system support. What they do is provide an aggregated view of all top level folders of the same name across the cache and the array drives. The name of this top level folder is used as the share name. From a user perspective this gives a view that can span multiple drives when viewed at the network level. Note that no individual file will span multiple drives - it is just the directory level that is given a unified view.
When viewed at the Linux level, User Shares will appear under the path /mnt/user. It is important to note that a User Share is just a logical view imposed on top of the underlying physical file system, so you can see the same files if you look at the physical level (as described below for Disk Shares).
- Current releases of Unraid also include the mount point /mnt/user0 that shows the files in User Shares OMITTING any files for a share that are on the cache drive. However, this mount point is now deprecated and likely to stop being available in a future Unraid release.
Normally one creates User Shares using the Shares tab. However, if you manually create a top level folder on any drive the system will automatically consider this to be a User Share and give it default settings.
Which physical drive in the main array is used to store a physical file is controlled by a number of settings for the share:
- Allocation method: This has various options:
- Most Free: This option means that new files go to the disk with the most free space. It has the downside that one is continually switching drives, which keeps the drives involved spun up.
- Fill Up: This option means simply fill up drives in disk order until the free space falls below the Minimum Free Space setting, and when that happens move on to the next disk. Many users like this setting because their content is static in nature, so they find this a simple way to manage their storage.
- High Water: (default) This option attempts to provide a compromise between continually switching drives (as is caused by the Most Free setting) and filling up disks in a sensible manner, without filling each drive to capacity before using the next one. The aim is to allow related files to be kept together on the same drive and to let unused drives be spun down.
- It works with switch points derived by continually halving the size of the largest drive in the array.
- Many people find this confusing (particularly in an array with drives of varying size), so as an example, if you had an array consisting of drives of 8TB, 3TB and 2TB:
- The largest drive is 8TB, so the switch points are 4TB, 2TB, 1TB etc.
- The 4TB switch point is active first, so the 8TB drive is filled until it has 4TB of free space left.
- The 2TB switch point then becomes active, so the 8TB and 3TB drives each get used in disk order until they have 2TB of free space left.
- The 1TB switch point then becomes active, so each drive now gets used in disk order until it only has 1TB of free space left.
- Included or excluded drives: These settings allow you to control which array drives can hold files for the share. Never set both values; set only the one that is most convenient for you. If no drives are specified under these settings then all drives allowed under Settings->Global Share Settings are allowed.
- Split level: This setting controls how files should be grouped.
- Important: in the event of there being contention between the Minimum Free Space, Split Level and Allocation Method settings in deciding which would be an appropriate drive to use, the Split Level setting always wins. This means that you can get an out-of-space error even though there is plenty of space on other array drives that the share can logically use.
Important: The Linux file systems used by Unraid are case-sensitive, while the SMB share system is not. This means that at the Linux level a folder called 'media' is different from one called 'Media'. At the network level, however, case is ignored, so 'media', 'Media' and 'MEDIA' would all be the same share. Taking this example further, only the content of one of the underlying 'media' or 'Media' folders will appear at the network share level - and it can be non-obvious which one this will be.
The following sections provide more detail on how these settings work:
When a new User share is created, or when any object (file or directory) is created within a User share, the system must determine which data disk the User share or object will be created on. In general, a new User share, or object within a User share, will be created on the data disk with the most free space. However there are a set of share configuration parameters available to fine tune disk allocation.
The basic allocation strategy for a share is defined by the Allocation method configuration parameter. You may select one of three allocation methods for the system to use.
The high water allocation method attempts to fill each disk in steps, so that at the end of each step there is an equal amount of free space left on each disk. The idea is to progressively fill each disk but not constantly go back and forth between disks each time new data is written to the array. Most of the time only a single disk will be needed when writing a series of files to the array, so the array will only spin up the needed disk. The high water level is initially set equal to one-half of the size of the largest disk. Once all the disks have less free space than the current high water level, a new high water level is set to one-half of the previous level.
The following worked example shows what will occur when there is a mix of 4 disks varying in size from 500GB to 2TB.
First Pass - The high water level is set to one-half of the size of the 2TB drive, or 1TB. Each disk will be filled until it has <1TB of free space remaining. This means no data is stored on disk1 or disk2, since both already have <1TB of free space. 500GB of data will be stored on disk3, followed by 1TB of data being stored on disk4.
Second Pass - The high water level is reset to one-half of the previous level, or 500GB. Each disk will be filled until it has <500GB of free space remaining. This means no data is stored on disk1, since it already has <500GB of free space. 500GB of data will be stored on disk2, then 500GB on disk3, and finally 500GB on disk4.
Third Pass - The high water level is again reset to one-half of the previous level, or 250GB. Each disk will be filled until it has <250GB of free space remaining. 250GB of data will be stored on disk1, then 250GB on disk2, then 250GB on disk3, and finally 250GB on disk4. An interesting note is that the 500GB disk does not get used at all until the third pass. Don't be concerned if the smaller sized disks don't immediately get used with this method.
This pattern will continue with progressively smaller high water levels until the disks are full.
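The pass-by-pass behaviour above can be sketched in a few lines of code. This is an illustrative simulation only, not Unraid's actual implementation; the disk sizes match the worked example (500GB, 1TB, 1.5TB and 2TB disks, in GB).

```python
# Illustrative sketch (NOT Unraid's actual code) of the high-water
# allocation method: the level starts at half the largest disk and
# halves again once every disk's free space is at or below the level.

def high_water_pick(free_space, largest_disk):
    """Return the index of the disk to write to, or None if all full.

    free_space   -- list of free GB per disk, in disk order
    largest_disk -- size in GB of the largest disk in the array
    """
    level = largest_disk // 2
    while level > 0:
        # use the first disk (in disk order) still above the level
        for i, free in enumerate(free_space):
            if free > level:
                return i
        level //= 2  # all disks at or below the level: halve it
    return None

# The 4-disk example from the text, all disks empty: the first write
# goes to disk3 (index 2), the first disk above the 1000GB level.
free = [500, 1000, 1500, 2000]
print(high_water_pick(free, 2000))  # -> 2
```

Feeding successive free-space states into this function reproduces the first, second and third passes described above.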
The most free allocation method simply picks the disk with the most free space and writes the data to that disk. Each time a file is written unRAID will check the free space on the disks and pick the one with the most free space.
The fill-up allocation method simply attempts to fill each disk in order from the lowest numbered disk to the highest numbered disk. The fill-up allocation method must be used in conjunction with the minimum free space setting. Otherwise, unRAID will begin to give disk full errors and not allow any more transfers once the first disk gets close to being full.
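The other two allocation methods are simpler; here is a minimal sketch of both (illustrative only, not Unraid's code). Note how fill-up depends on a minimum free space value to know when to move to the next disk:

```python
# Illustrative sketch (NOT Unraid's actual code) of the most-free
# and fill-up allocation methods.

def most_free_pick(free_space):
    """Pick the disk with the most free space."""
    return max(range(len(free_space)), key=lambda i: free_space[i])

def fill_up_pick(free_space, min_free):
    """Fill disks in disk order, skipping any whose free space has
    fallen to the minimum free space setting or below. Returns None
    when every disk is below it (a disk-full error in practice)."""
    for i, free in enumerate(free_space):
        if free > min_free:
            return i
    return None

print(most_free_pick([500, 1000, 1500, 2000]))  # -> 3
print(fill_up_pick([10, 900, 2000], 100))       # -> 1
```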
Min. Free Space
The minimum free space setting is used together with the allocation method and split level. It tells unRAID to stop putting new content onto a disk once its free space falls below this value (provided the split level allows the content to go to another disk). It must be used with the fill-up allocation method or disk full errors will occur.
First, a brief explanation of how unRAID typically receives a file. unRAID first receives the request to store a file, named for example "file.eg". At this time, unRAID has no idea how big "file.eg" is, so it will pick a spot to place the file and begin to store the file data as it is transferred over the network. This is important because unRAID may pick a storage disk that does not have enough space to store the complete "file.eg". unRAID doesn't know there is not enough space when it first places the file, so it will only find out when the disk is full. At this point, the transfer will fail with a disk full error.
So, unRAID will write to a different disk if the minimum free space is set to a value larger than the biggest file size you will ever transfer. The recommended setting is 2 times the largest file size you will ever transfer. For example, if the largest file you have is 8GB in size then set the minimum free space to 16GB. This allows you to transfer files that may vary in size somewhat and not accidentally transfer one too large. The minimum free space is set in kilobytes.
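The recommended sizing amounts to a simple calculation; here is an illustrative helper (the function name is hypothetical, not an Unraid API, and the setting itself is entered in kilobytes):

```python
# Rough helper for sizing the Minimum Free Space setting: twice the
# largest file you expect to transfer, expressed in kilobytes.
# Purely illustrative - the name is not part of any Unraid API.

def min_free_space_kb(largest_file_bytes):
    return (2 * largest_file_bytes) // 1024

# An 8GiB largest file suggests a 16GiB minimum free space setting:
print(min_free_space_kb(8 * 1024**3))  # -> 16777216 (i.e. 16GiB in KB)
```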
Note that unRAID will still place files on the disk if the split level does not allow the files to be placed on another disk with more free space.
Also note that unRAID will typically not move a file onto a new disk if you're over-writing or updating it. For example, a backup file that grows in size over time could end up filling a disk and causing a disk full error.
The Split Level setting is one that many users find confusing, so here is a more detailed description of how it works.
IMPORTANT: in the event of there being contention between the various settings for a share over which array drive to select for a file, the Split Level setting always takes precedence. This can mean unRAID chooses a drive which does not have enough space for the file, so that an out-of-space error subsequently occurs for the file.
The split level setting tells unRAID how many folder levels are allowed to be created on multiple disks. The split level can be used to ensure that the contents of a folder are kept on the same disk. The split level numbering starts with the user share being the top level and given the number 1.
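A minimal sketch of this numbering rule (illustrative only, not Unraid's code): a folder may still be created on additional disks only while its depth, counting the share itself as level 1, does not exceed the split level.

```python
# Illustrative sketch (NOT Unraid's actual code) of the split-level
# test: count the share itself as level 1 and compare the depth of
# the folder being created against the split level setting.

def may_split(path_in_share, split_level):
    """path_in_share -- path relative to /mnt/user, e.g.
    'Media/TV Shows/Some Show'. Returns True if the final folder of
    the path is still allowed to appear on multiple disks."""
    depth = len(path_in_share.strip('/').split('/'))
    return depth <= split_level

# With split level 2, 'TV Shows' may span disks but a show may not:
print(may_split('Media/TV Shows', 2))            # -> True
print(may_split('Media/TV Shows/Some Show', 2))  # -> False
```

The folder names used here are hypothetical and simply mirror the example share discussed below.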
Here is an example showing a possible directory structure for a user share called "Media" (the levels referenced below count the share itself as level 1):
- Level 1: Media
- Level 2: SD Movies, HD Movies, Kids Movies, TV Shows
- Level 3: the individual Movie Folders and TV Show Folders
- Level 4: the Season Folders inside each TV Show Folder
Note: I (the original author of this section) consider combining media types into a single large share a poor way to store media. I use a share for each media type: Movies is a share and TV Shows is a share. I combined the movies and TV shows here to show the pitfalls in the split levels when doing this, as explained below.
Here is an explanation of the different split levels, referenced to the folder structure above;
- Level 1
- The top level Media share can be created on every disk.
- Every other folder under the Media share must remain on a single disk.
- This setting does not allow the SD Movies, HD Movies, Kids Movies or TV Shows folders to spread to multiple disks.
- This setting is too low for all the media.
- Level 2
- The top level Media share can be created on every disk.
- The SD Movies, HD Movies, Kids Movies and TV Shows folders can be created on every disk.
- Each Movie Folder and TV Show Folder must remain on a single disk.
- This setting may work well. It will keep each movie and each TV series together on a single disk.
- This setting may give issues because it keeps each TV series on a single disk, so a disk may fill up as new seasons are added to a TV show stored on a disk that is already close to full.
- Level 3
- The top level Media share can be created on every disk.
- The SD Movies, HD Movies, Kids Movies and TV Shows folders can be created on every disk.
- Each Movie Folder and TV Show Folder can be created on every disk.
- Each Season Folder must remain on a single disk.
- This setting will allow the contents stored in each Movie Folder to be spread out onto multiple disks.
- This setting is too high for the different movie types.
- Level 4
- The top level Media share can be created on every disk.
- The SD Movies, HD Movies, Kids Movies and TV Shows folders can be created on every disk.
- Each Movie Folder and TV Show Folder can be created on every disk.
- Each Season Folder can be created on every disk.
- This setting is too high because it will allow the contents of every folder to be spread out onto multiple disks. The split level is not being used to keep similar content together.
The only valid split level for the above example is 2. This causes a split level limitation which forces each complete TV series to a single disk. This can force a new TV season to be placed on a disk which is almost full and result in out of space errors once new episodes completely fill the disk. The split level can't be increased to 3 because each individual movie would not be contained to a single disk.
The first way to fix this split level mismatch issue is to create separate shares for the movies and the TV shows. This way, the movies can be set to use a split level of 2 and the TV shows can use a split level of 3.
For Movies use a split level = 2. This allows the "SD Movies", "HD Movies" and "Kids Movies" folders to be placed on every disk and it keeps each individual movie folder on a single disk. This way, any single movie folder and the contents of the movie folder will remain on a single disk.
For TV_Shows use a split level of either 1 or 2. A split level of 1 will keep each TV series on a single disk and split level of 2 will keep each season on a single disk. The split level of 2 means that the complete TV series can be stored on multiple disks, however each individual season of that TV series will be on a single disk.
The second way to fix the issue is to add another folder level to the movies, starting first with a Movies folder in the Media share and then placing the different movie types below this.
This user share structure must use split level = 3. SD Movies, HD Movies, Kids Movies and each TV series can exist on multiple disks. This structure means each TV season can be on a different disk. This has the opposite issue compared to the first example: you cannot use split level 2 to force each complete TV series to remain on a single disk without losing the ability of the movies to split to every disk.
Some things to keep in mind.
- The above examples are to demonstrate the use of the split level. It is not necessary to store your media sorted in the same format as the above example illustrates. You may want to use a Movies share and then just place a "Movie Name" folder for each movie directly into the share without sorting the movies by type.
- It is completely valid to force each complete TV series to stay on a single disk. Just understand that a continuing TV series will keep filling the disk where it is first placed. This may require manual intervention to shift some TV series from an almost full disk to an empty disk. Using the Most Free allocation method can help eliminate the issue since a completely new TV series would be placed on the disk with the most free space.
- The above TV example applies to any similar share. It could apply to a Pictures share where you store the pictures in folders based on the year (2010, 2011, 2012 etc) or it could apply to a Music share where you store the music in a folder for each artist. In these cases, a split level of 1 would keep a whole year of pictures on a single disk or it would keep all the music by an artist on a single disk.
Disable Split Level
It is also possible to disable the split level by setting a high split level. A file copy or move will fail if a folder is locked to a full disk and an attempt is made to add more files into that folder. Setting a high split level will ensure each file will get written to the server as long as a disk has space for it.
Split Level = 1 Example
The following example demonstrates how the share behaves when the split level is set to 1. The Share name is New_Movies. Each movie stored in this share has its own folder. Inside the movie folder is the movie file as well as some metadata files used by MediaBrowser.
The above Windows Explorer screen shot shows the file structure of the New_Movies share on the left and the contents of the A History of Violence movie folder on the right. The levels for this share are labeled on the example. This is what split level = 1 means:
- A New_Movies folder can be created on each disk allowed by the include and exclude disk settings. A new New_Movies folder will be created on the next disk in line when the allocation method calls for unRAID to begin filling the next disk. Note that the New_Movies folder will only be created on the next disk in line when it is necessary and not when the share is created.
- The A History of Violence folder can only exist on one disk. Once it is created on the disk, all of the contents will remain on the same disk. Any changes or additions to this folder will remain on the same disk. For example, a new file called movie.nfo for the XBMC metadata might be created in this folder in the future. The movie.nfo file will be created in the existing A History of Violence folder. A duplicate A History of Violence folder will not be created on another disk to store this new file.
You will notice that the movie folders 500 Days of Summer (2009) and 2 Fast 2 Furious (2003) both appear in the New_Movies share. The next screen shot will show how each of these files is stored on a separate disk.
The above screen shot shows side by side Windows Explorer views of the file structure stored on disk1 and disk2. On the left is disk1 and on the right is disk2. The left Explorer window shows the contents of disk1. The New_Movies share is a folder stored at the top level or the root of disk1 with the individual movie directories stored in this directory. The right Explorer window shows the contents of disk2. The New_Movies share is a folder stored at the top level or the root of disk2 with the individual movie folders stored in this directory. As files were being moved into the New_Movies share, unRAID created the New_Movies folder on both disk1 and disk2 to store these files.
The windows side by side can be used to examine the contents of the New_Movies share on a disk by disk basis. You will notice that the movie folder 500 Days of Summer (2009) is stored on disk1 and the movie folder 2 Fast 2 Furious (2003) is stored on disk2. As previously noted, unRAID combines the movies stored on disk1 and disk2 into one network share called New_Movies and both movies appear in the New_Movies network share.
Take note that a share called Movies is also visible on disk2.
Split Level 0
Split level 0 is a special case. Split level 0 requires you to create the desired top level or parent folder structure. unRAID will unconditionally create an object on the disk that contains the parent folders. unRAID will choose which disk to use according to the allocation method if the parent folders exist on multiple disks.
If you set the Split level to 0, then all directories/files created under that share will be on the same disk where the directory within that share was originally created. In other words, use level 0 to not allow the share to split automatically across disks.
NOTE: If you create the same folder structure on multiple disks then Unraid will apply the other share settings to decide which disk to use.
The server has 4 disks. A user share called Media is desired. Different types of media will be stored in this share. The desired structure is;
- disk1 - will hold the DVD movies.
- disk2 - will hold the BluRay movies.
- disk3 - will hold the BluRay movies.
- disk4 - will hold the TV series.
To achieve this, the user goes to each disk and creates the required folders by hand: the Media folder, plus the appropriate DVD Movies, BluRay Movies or TV Shows sub-folder. These then become the parent folders for everything stored in the Media share. The media will be sorted by disk as follows;
- Movies placed in the DVD Movies folder will go to disk1.
- Movies placed in the BluRay Movies folder will go to disk2 or disk3. The disk is selected by the allocation method.
- TV shows placed in the TV Shows folder will go to disk4.
Say one day that disk1 is full and disk5 is added to the server to hold new DVD Movies. The same folders on disk1 must be created on the new disk5. In other words, the folder Media and sub-folder DVD Movies must be created on disk5. Then, unRAID can use either disk1 or disk5 to store DVD Movies.
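The split level 0 behaviour described above amounts to restricting the candidate disks to those where the parent folder path already exists; among those, the normal allocation method decides. A small illustrative sketch (paths and names are hypothetical, and this is not Unraid's actual code):

```python
# Illustrative sketch (NOT Unraid's actual code) of split level 0:
# a new object may only go to disks where its parent folder path
# already exists; the allocation method then chooses among them.

def level0_candidates(parent_path, disks):
    """disks -- dict mapping disk name to the set of folder paths
    that exist on that disk. Returns the eligible disks."""
    return [d for d, folders in disks.items() if parent_path in folders]

# The 4-disk Media example from the text:
disks = {
    'disk1': {'Media', 'Media/DVD Movies'},
    'disk2': {'Media', 'Media/BluRay Movies'},
    'disk3': {'Media', 'Media/BluRay Movies'},
    'disk4': {'Media', 'Media/TV Shows'},
}
print(level0_candidates('Media/BluRay Movies', disks))  # -> ['disk2', 'disk3']
```

Adding `'Media/DVD Movies'` to a new disk5's folder set, as in the expansion scenario above, would make disk5 a candidate for new DVD movies alongside disk1.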
Split By Character
Specify a character in the split level box to use this method. Then, unRAID will not allow any folder whose name contains that character to split. For example, set the split level to an opening square bracket ( [ ) instead of a number. Then, create each movie folder with the year encased in square brackets after the title, e.g. Iron Man 2 [2010]. unRAID will see the opening square bracket ( [ ) and it will not split this folder or any content stored inside this folder.
This type of split level can allow different levels of sub-folders to be specified as not splitting simply by inserting the character into the folder name which should not split. This can overcome the limitation of having a fixed split level for a share.
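The check behind this method can be sketched as follows (illustrative only, not Unraid's code): a path may still spread across disks only if no folder along it contains the chosen character.

```python
# Illustrative sketch (NOT Unraid's actual code) of split-by-character:
# any folder whose name contains the chosen character is never split,
# so everything below it stays on one disk.

def may_split_by_char(path_in_share, split_char='['):
    """True if no folder in the path contains the split character,
    i.e. the path may still spread across disks."""
    return all(split_char not in part for part in path_in_share.split('/'))

print(may_split_by_char('Movies/Iron Man 2 [2010]'))  # -> False
print(may_split_by_char('Movies'))                    # -> True
```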
Included and Excluded disk(s)
The included disk(s) and excluded disk(s) parameters control which disks are allowed to be used by each user share. These parameters can be used separately or together to define the group of disks allowed for writing files to each user share. The disks are entered by disk number with a comma separating each disk, for example "disk2,disk5".
unRAID will first check the included disk(s) set and then the excluded disk(s) set when deciding which disk to place a file on. Then, unRAID will use the split level and allocation method to pick a disk which is allowed to hold the file.
Note: The Include/Exclude settings at the individual share level only control which disks new files can be written to. Files on other disks that are in a folder corresponding to the share name will still show up under that share for read purposes.
The included disk(s) parameter defines the set of disks which are candidates for allocation to that share. All disks may be used by the user share when the included disk(s) parameter is left blank. Specify the disks to include here. For example, set the included disk(s) to "disk1,disk2,disk3" to allow the share to only use disk1, disk2 and disk3.
The excluded disk(s) parameter defines the set of disks which are excluded from use by the user share. No disks are excluded from use by the user share when the excluded disk(s) parameter is left blank. Specify the disks to exclude here. For example, set the excluded disk(s) to "disk1,disk2" to restrict a share from using disk1 and disk2.
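The filtering described above can be sketched as follows (illustrative only, not Unraid's code); the split level and allocation method then choose among whatever disks remain:

```python
# Illustrative sketch (NOT Unraid's actual code) of how the
# Included/Excluded disk(s) settings narrow the candidate disks.

def allowed_disks(all_disks, included='', excluded=''):
    """included/excluded are comma-separated lists, e.g. 'disk2,disk5'.
    A blank included list means every disk is a candidate."""
    inc = set(included.replace(' ', '').split(',')) - {''}
    exc = set(excluded.replace(' ', '').split(',')) - {''}
    disks = [d for d in all_disks if not inc or d in inc]
    return [d for d in disks if d not in exc]

disks = ['disk1', 'disk2', 'disk3', 'disk4']
print(allowed_disks(disks, included='disk1,disk2,disk3'))  # -> ['disk1', 'disk2', 'disk3']
print(allowed_disks(disks, excluded='disk1,disk2'))        # -> ['disk3', 'disk4']
```

As the text advises, you would normally set only one of the two parameters, though the sketch handles both.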
If you have Docker or VMs enabled then a number of default shares are set up to support their use. It is not mandated that you use these shares (and the system will let you remove them if you do not want to use them for their standard purpose) but it is recommended as it tends to make it easier to support users who encounter problems.
The shares that fall into this category are:
- appdata: this is the default location for storing working files associated with docker containers. Typically there will be a sub-folder for each docker container.
- system: this is the default location for storing the docker application binaries, and VM XML templates
- domains: this is the default location for storing virtual disk images (vdisks) that are used by VMs.
- isos: this is the default location for storing CD iso images for use with VMs.
Note: Starting with Unraid 6.9.0, multiple pools can exist and they can have any name the user chooses. Any of these pools can act as a cache in the way Unraid uses the term. The word cache therefore refers to this functionality and not necessarily to the pool name.
Unraid includes an application called mover that is used in conjunction with User Shares. Its behavior is controlled by the "Use Cache for new files" setting under each User Share. The way these different settings operate is as follows:
- Yes: Write new files to the cache as long as the free space on the cache is above the Minimum free space value. If the free space is below that then by-pass the cache and write the files directly to the main array.
- When mover runs it will attempt to move files to the main array as long as they are not currently open. Which array drive will get the file is controlled by the combination of the Allocation method, Split level, and Minimum Free Space setting for the share.
- No: Write new files directly to the array. Which array drive will get the file is controlled by the combination of the Allocation method, Split level, and Minimum Free Space setting for the share.
- When mover runs it will take no action on files for this share even if there are files on the cache that logically belong to this share.
- Only: Write new files directly to the cache. If the free space on the cache is below the Minimum free space setting for the cache then the write will fail with an out-of-space error.
- When mover runs it will take no action on files for this share even if there are files on the main array that logically belong to this share.
- Prefer: Write new files to the cache if the free space on the cache is above the Minimum free space setting for the share, and if the free space falls below that value then write the files to the main array instead.
- When mover runs it will attempt to move any files for this share that are on the main array back to the cache as long as the free space on the cache is above the Minimum free space setting for the cache
- It is the default setting for the appdata and system shares that are used to support the Docker and VM sub-systems. In typical use you want the files/folders belonging to these shares to reside on the cache, as you get much better performance from Docker containers and VMs if their files are not on the main array (because the cost of maintaining parity on the main array significantly slows down write operations).
- This setting works for a share even if you do not (yet) have a physical cache drive, as files will then simply be written directly to the array. If at a later date you add a cache drive, mover will automatically try to move the files in any share set to Prefer to the pool defined as the cache for that share, to improve performance. This is why it is the default (rather than Only) for shares that are typically located on the cache, as it caters for those who do not (yet) have a cache drive.
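The four settings above can be summarised as a small decision table. This is an illustrative sketch only, not Unraid's actual logic, and it simplifies by using a single minimum free space value:

```python
# Illustrative sketch (NOT Unraid's actual logic) of where a new file
# lands for each "Use Cache for new files" setting, given the free
# space on the cache pool and the relevant Minimum free space value.

def write_target(use_cache, cache_free, cache_min_free):
    if use_cache == 'No':
        return 'array'
    if use_cache == 'Only':
        # below the minimum free space the write fails outright
        return 'cache' if cache_free > cache_min_free else 'error'
    # 'Yes' and 'Prefer' both overflow to the array when the cache is low
    return 'cache' if cache_free > cache_min_free else 'array'

print(write_target('Yes', cache_free=50, cache_min_free=100))      # -> array
print(write_target('Only', cache_free=50, cache_min_free=100))     # -> error
print(write_target('Prefer', cache_free=500, cache_min_free=100))  # -> cache
```

What distinguishes Yes from Prefer is not where new files land but what mover later does with them, as described in the bullets above.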
Moving Files from a Pool (cache) to the Array
This is the more traditional usage of a pool for caching where one wants the files for a particular share initially written to a pool acting as a cache to maximise write speed, but later you want it to be moved to the main array for long term storage. Most of the time all that is required is to set the Use Cache setting for the share to Yes and the default behaviour handles the rest with no further user interaction.
Sometimes for one reason or another users find that the files seem to be 'stuck' on a pool. The way to proceed in such a case to get the files belonging to a share from a pool onto the main array is:
- Disable Docker/VM services if they are enabled (as files open in these services cannot be moved).
- Change the Use Cache setting for the share to Yes
- Manually run mover from the Main tab to get it to move files for Yes type shares from the pool (cache) to the array.
- When mover finishes you can re-enable the Docker and/or VM services you use if you disabled them earlier.
- (optional) change the Use Cache setting to No to say new files for this share should never be written to the cache.
Moving Files from the Array to a Pool (cache)
One typically wants files associated with running Docker containers or VMs on a pool to maximise performance. It is not unusual for one reason or another to find that one has files on the main array which you really want to be on a pool.
The way to proceed to get the files belonging to a share from the main array onto a pool is:
- Disable Docker/VM services if they are enabled (as files open in these services cannot be moved)
- Change the Use Cache setting for the share to Prefer
- Manually run mover from the Main tab to get it to move Prefer type shares from array to the pool (cache).
- When mover finishes you can re-enable the Docker and/or VM services you use.
- (optional) change the Use Cache setting to Only to say files for this share should never be written to the array.
These are shares that relate to individual drives within the Unraid system. By default, if User Shares are enabled then Disk Shares are not. If you want them, they can be enabled under Settings->Global Share Settings. They will then appear under a new section on the Shares tab.
When viewed at the Linux level, Disk Shares appear directly under /mnt with a name corresponding to the drive name (e.g. /mnt/disk1 or /mnt/cache).
If you have both Disk Shares and User Shares enabled then there is an important restriction that you must observe if you want to avoid potential data loss: NEVER copy between a User Share and a Disk Share in the same copy operation where the folder name on the Disk Share corresponds to the User Share name. This is because at the base system level Linux does not understand User Shares, and therefore does not know that a file on a Disk Share and a User Share can be different views of the same file. If you mix the share types in the same copy command you can end up trying to copy the file to itself, which results in the file being truncated to zero length and its content being lost.
There is no problem if the copy is between shares of the same type, or copying to/from a disk mounted as an Unassigned Device.
You can control which protocols should be supported for accessing the Unraid server across the network. Click on Settings->Network Services to see the various options available. These options are:
- SMB: This is the standard protocol used by Windows systems. It is also widely implemented on other operating systems.
- NFS: Network File System. This is a protocol widely used on Unix-compatible systems.
- AFP: Apple Filing Protocol. This is the protocol that has historically been used on Apple Mac systems. It is now a deprecated option, as the latest versions of macOS use SMB as the preferred protocol for accessing files and folders over the network.
- FTP: File Transfer Protocol.
When you click on the name of a share on the Shares tab then there is a section that allows you to control the visibility of the share on the network for each of the protocols you have enabled. The setting is labelled Export and has the following options:
- Yes: With this setting the share will be visible across the network.
- Yes (Hidden): With this setting the share can be accessed across the network but will not be listed when browsing the shares on the server. Users can still access the share as long as they know its name and are prepared to enter it manually.
- No: With this option selected, it is not possible to access the share across the network.
When you click on the name of a share on the Shares tab then there is a section that allows you to control the access rights of the share on the network for each of the protocols you have enabled. The setting is labelled Security and has the following options:
- Public: All users, including guests, have both read and write access to the contents of the share.
- Secure: All users, including guests, have read access; you select which of your users have write access.
- Private: You select which of your users have access, and for each such user whether they have read/write or read-only access (guests have no access).
There is an issue with the way Windows handles network shares that many users fall foul of:
- Windows only allows a single username to be used to connect to a specific server at any given time. Attempts to then connect to a different, non-public share on the same server under another username put up a Username/Password prompt, which fails as though you had entered an incorrect password for that username. If you have any shares on the server set to Private or Secure access, it can therefore be important to connect to such a share first, before any shares set for Public access, which may connect as a guest user and make subsequent attempts to connect as a specific user fail.
- A workaround that can help avoid this issue: if you access a server both by its network name and by its IP address, Windows treats it as two separate servers as far as authentication is concerned.