UnRAID 6/Storage Management

==== Removing parity disk(s) ====

# Stop the array.
# Set the slot for the parity disk you wish to remove to ''Unassigned''.
# Start the array to commit the change and 'forget' the previously assigned parity drive.

'''CAUTION:'''  If you already have any failed data drives in the array, be aware that removing a parity drive reduces the number of failed drives Unraid can handle without potential data loss.
If you need to restore the previous array configuration (having earlier renamed ''super.dat'' to ''super.old''):

# Rename the file ''super.old'' back to ''super.dat''.
# Refresh the browser on the '''Main''' page and your array configuration will be restored.
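If you prefer to do the rename from a console session, a minimal sketch (this assumes the USB flash device is mounted at ''/boot'', which is where Unraid keeps its ''config'' folder and the ''super.dat'' file):

  # rename the saved copy back so Unraid picks it up again
  mv /boot/config/super.old /boot/config/super.dat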
== Notifications ==

TBD

=== Status Reports ===

Unraid can be configured to send you status reports about the state of the array.

Important points about these reports are:
*  They only tell you if the array currently has any disks disabled or showing read/write errors.
*  The status is reset when you reboot the system, so it does not tell you what the status was in the past.
*  '''IMPORTANT''': The status report does not take into account the SMART status of the drives.  You can therefore get a status report indicating that the array appears to be healthy even though the SMART information might indicate that a disk is not healthy.

== SMART Monitoring ==

Unraid can be configured to report whether the SMART attributes for a drive are changing.  The idea is to warn you in advance that a drive might be experiencing problems even though it has not yet caused read/write errors, so that you can take pre-emptive action before a problem becomes serious and potentially leads to data loss.  You should have notifications enabled so that you can see these warnings even when you are not running the Unraid GUI.

SMART monitoring is currently only supported for SATA drives, and is not available for SAS drives.

Which SMART attributes are monitored can be configured by the user, but the default ones are:

* 5: Reallocated sectors count
* 187: Reported uncorrected errors
* 188: Command timeout
* 197: Current pending sector count
* 198: Uncorrectable sector count
* 199: UDMA CRC error count

If any of these attributes change value then this will be indicated on the Dashboard by the icon against the drive turning orange.  You can click on this icon and a menu will appear that allows you to acknowledge that you have seen the attribute change, after which Unraid will stop telling you about it unless it changes again.

You can manually see all the current SMART information for a drive by clicking on its name on the Main tab in the Unraid GUI.
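The same information can also be obtained from a console session using the ''smartctl'' tool (part of smartmontools, which Unraid includes); a minimal sketch, where sdX is a placeholder for the device identifier shown on the Main tab:

  # show all SMART information, including the attribute table, for a drive
  smartctl -a /dev/sdX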
  
 
= Cache Operations =

<br>

== Removing disks from a cache pool ==

Notes:
<br>

== Remove a disk from a cache pool ==

There have been times when users have indicated they would like to remove a disk from a cache pool they have set up while keeping all the data intact.  This cannot be done from the Unraid GUI but is easy enough to do from the command line in a console session.

'''Note''': You need to maintain the minimum number of devices for the profile in use, i.e., you can remove a device from a 3+ device raid0 pool but you can't remove one from a 2 device raid0 pool (unless it's converted to single profile first).

With the array running, type on the console:

  btrfs dev del /dev/mapper/sdX1 /mnt/cache

Replace X with the correct letter for the drive you want to remove from the system as shown on the Main tab (don't forget the 1 after it).

Wait for the device to be deleted (i.e., until the command completes and you get the cursor back).

The device is now removed from the pool.  You don't need to stop the array now, but at the next array stop you need to make Unraid forget the now-deleted member, and to achieve that:

* Stop the array
* Unassign all pool devices
* Start the array to make Unraid "forget" the pool config
: If the docker and/or VM services were using that pool it is best to disable those services before starting, or Unraid will recreate the images somewhere else (assuming they are using /mnt/user paths)
* Stop array (re-enable docker/VM services if disabled above)
* Re-assign all pool members except the removed device
* Start array

Done.

You can also remove multiple devices with a single command (as long as the above rule is observed):

  btrfs dev del /dev/mapper/sdX1 /dev/mapper/sdY1 /mnt/cache

but in practice this does the same as removing one device, then the other, as they are still removed one at a time, just one after the other with no further input from you.
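If you want to confirm which devices are members of the pool (and how much data sits on each) before and after the removal, the standard btrfs tools can show this; a minimal sketch, assuming the pool is mounted at /mnt/cache as above:

  # list the devices that currently make up the pool
  btrfs filesystem show /mnt/cache
  # show how data is spread across those devices
  btrfs device usage /mnt/cache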
  
 
= File System Management =

* '''XFS''':  This is the default format for array drives on a new system.  It is a well-tried Linux file system and deemed to be the most robust.
** XFS is better at recovering from file system corruption than BTRFS (such corruption can happen after unclean shutdowns or system crashes).
* '''BTRFS''':  This is a newer file system that supports advanced features not available with XFS.  It is considered not quite as stable as XFS, but many Unraid users have reported it seems as robust as XFS when used on array drives where each drive is a self-contained file system.  Some of its features are:
** It supports detecting file content corruption (often colloquially known as bit-rot) by internally using checksumming techniques (see the sketch after this list for checking a file system against these checksums).
** It can support a single file system spanning multiple drives, and in such a case it is not necessary that the drives all be of the same size.
** In multi-drive mode various levels of RAID can be supported (although these are a BTRFS-specific implementation and not necessarily what one expects).  The default in Unraid for a cache pool is RAID1 so that data is stored redundantly to protect against drive failure.
** It is the only option supported when using a cache pool spanning multiple drives that need to run as a single logical drive, as this needs the multi-drive support.
** In multi-drive mode in the cache pool it is not always obvious how much usable space you will end up with.  The [https://carfax.org.uk/btrfs-usage/ BTRFS Space Calculator] can help with this.
* '''ReiserFS''':  This is supported for legacy reasons for those migrating from earlier versions of Unraid where it was the only supported file system type.
** It has a hard limit of 16TB on a ReiserFS file system and commercial grade hard drives have now reached this limit.
** Write performance can degrade significantly as the file system starts getting full.
** It is extremely good at recovering from even extreme levels of file system corruption.
** It is now deprecated for use with Unraid and should not be used by new users.
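As referenced in the BTRFS bullet above, a BTRFS file system can be checked against its stored checksums with a ''scrub''; a minimal sketch from a console session, assuming a BTRFS pool mounted at /mnt/cache:

  # read every block and verify it against its checksum (runs in the background)
  btrfs scrub start /mnt/cache
  # check progress and any checksum errors found
  btrfs scrub status /mnt/cache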
  
Once the format has completed the drive is ready to start being used to store files.

== Drive shows as unmountable ==

A drive can show as '''unmountable''' in the Unraid GUI for two reasons:
# The disk has never been used in Unraid and you have just added it to a new disk slot in the array.  In this case you want to follow the format procedure shown above to create a new empty file system on the drive so it is ready to receive files.
# File system corruption has occurred.  This is not infrequent if a write to a disk fails for any reason and Unraid marks the disk as disabled, although it can occur at other times as well.  In such a case you want to use the file system check/repair process documented below to get the disk back into a state where you can mount it again and see all its data.  Note that this process can be carried out on a disk that is being 'emulated' by Unraid prior to carrying out any rebuild process.
  
 
== Checking a File System ==

If a disk that was previously mounting fine suddenly starts showing as '''''unmountable''''' then this normally means that there is some sort of corruption at the file system level.  This most commonly occurs after an unclean shutdown, but could happen any time a write to a drive fails or the drive ends up being marked as '''''disabled''''' (i.e. with a red 'x' in the Unraid GUI).  If the drive is marked as disabled and being emulated then the check is run against the emulated drive and not the physical drive.

'''IMPORTANT:'''
At this point the Unraid GUI will be offering an option to format unmountable drives.  This will '''erase''' all content on the drive and '''update parity''' to reflect this, making recovering the data impossible/very difficult, so do '''NOT''' do this unless you are happy to lose the contents of the drive.

To recover from file system corruption one needs to run the tool that is appropriate to the file system on the disk.  Points to note that users new to Unraid often misunderstand are:

# Information on the check progress is now displayed.  You may need to use the ''Refresh'' button to get it to update.
# If you are not sure what the results of the check mean you should copy the progress information so you can ask a question in the forum.  When including this information as part of a forum post, mark it as ''code'' (using the '''<?>''' icon) to preserve the formatting as otherwise it becomes difficult to read.

If you ever need to run a check on a drive that is not part of the array then you need to run the appropriate command from a console/terminal session.  As an example, for an XFS disk you would use a command of the form:

  xfs_repair /dev/sdX1

where X corresponds to the device identifier shown in the Unraid GUI.  Points to note are:
* The value of X can change when Unraid is rebooted, so make sure it is correct for the current boot.
* Note the presence of the '1' on the end to indicate the partition to be checked.
* The reason for not doing it this way on array drives is that although the disk would be repaired, parity would be invalidated, which can prejudice the chances of recovering a failed drive until valid parity has been re-established.
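If you only want to check such a drive without changing anything on it, ''xfs_repair'' has a no-modify mode; a minimal sketch, with sdX1 again being a placeholder for the actual partition:

  # no-modify mode: report any problems found but make no changes to the disk
  xfs_repair -n /dev/sdX1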
  
 
== Repairing a File System ==

You typically run this just after running a check as outlined above, but if you are skipping that, follow steps 1-4 of the check procedure to get to the point of being ready to run the repair.  It is a good idea to enable the Help built into the GUI to get more information on this process.

If the drive is marked as disabled and being emulated then the repair is run against the emulated drive and not the physical drive.  It is frequently done before attempting to rebuild a drive, as it is the contents of the emulated drive that is used by the rebuild process.

# Remove any parameters from the ''Options'' field that would cause the tool to run in ''check-only'' mode.
#* The Help built into the GUI can provide guidance on what options might be applicable.
# Press the Check button to start the repair process.  You can now periodically use the ''Refresh'' button to update the progress information.
# If the repair does not complete for any reason, or you are not sure what the results mean, then ask in the forum for advice on how best to proceed.
#* If repairing an XFS formatted drive then it is quite normal for the ''xfs_repair'' process to give you a warning saying you need to provide the '''''-L''''' option to proceed.  Despite this ominous warning message this is virtually always the right thing to do and does not result in data loss.
#* When asking a question in the forum and including the output from the repair attempt as part of your post, use the [[File:Code-icon.jpg|Code]] option to preserve the formatting as otherwise it becomes difficult to read.
# If the repair completes without error then stop the array and restart in normal mode.  The drive should now mount correctly.
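For completeness, if you were repairing an XFS drive that is ''not'' part of the array from the command line (as described in the previous section) and xfs_repair asked for the '''-L''' option, the equivalent would look like the sketch below; for array drives use the GUI procedure above so that parity stays valid:

  # force log zeroing; only do this when xfs_repair has explicitly asked for -L
  xfs_repair -L /dev/sdX1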
  
If by any chance you want to reformat a drive to erase its contents while keeping the existing file system type, then many users find that it may not be obvious how to do this from the Unraid GUI.

The way to do this is to follow the above process for [https://wiki.unraid.net/UnRAID_6/Storage_Management#Changing_a_File_System_type changing the file system type] twice.  The first time you change it to any other type, and then once it has been formatted to the new type you repeat the process, this time setting the type back to the one you started with.

This process will only take a few minutes, and as you go parity is updated accordingly.

== Reformatting a cache drive ==

There may be times when you want to change the format used on the cache drive (or some similar operation) and preserve as much of its existing contents as possible.  In such cases the recommended way to proceed that is least likely to go wrong is:
# Stop the array.
# Disable the docker and VM services under Settings.
# Start the array. If you have correctly disabled these services there will be NO Docker or VMs tab in the GUI.
# Set all shares that have files on the cache and do not currently have ''Use Cache: Yes'' to ''Use Cache: Yes''. Make a note of which shares you changed and what setting they had before the change.
# Run mover from the Main tab; wait for completion (which can take some time if there are a lot of files); check the cache drive contents, which should now be empty. If it is not, STOP, post diagnostics and ask for help.
# Stop the array.
# Set the cache drive's desired format to XFS or BTRFS; if you only have a single cache disk and are keeping that configuration, then XFS is the recommended format.  XFS is only available as a selection if there is only 1 (one) cache slot shown while the array is stopped.
# Start the array.
# Verify that the cache drive and ONLY the cache drive shows as unformatted. Select the checkbox saying you are sure, and format the drive.
# Set any shares that you changed to ''Use Cache: Yes'' earlier to ''Use Cache: Prefer'' if they were originally ''Cache: Only'' or ''Cache: Prefer''. If any were ''Cache: No'', set them back that way.
# Run mover from the Main tab; wait for completion; check the cache drive contents, which should be back the way they were.
# Change any share that was set to ''Use Cache: Only'' back to that option.
# Stop the array.
# Enable the docker and VM services.
# Start the array.

There are other alternative procedures that might be faster if you are Linux aware, but the one shown above is the one that has proved most likely to succeed without error for the average Unraid user.
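For step 5 above, an easy way to double-check from a console session that nothing has been left behind on the cache drive; a minimal sketch:

  # list anything (including hidden files) still present on the cache drive; ideally this prints nothing
  ls -A /mnt/cache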
  
 
== BTRFS Operations ==
 
User Shares can be enabled/disabled via Settings->Global Share Settings.

From the '''Shares''' tab, you can either ''create'' a new share or ''edit'' an existing share.  Click the '''Help''' icon in the top-right of the Unraid webGui when configuring shares for more information on the settings available.

User Shares are implemented by using Linux Fuse file system support.  What they do is provide an aggregated view of all top level folders of the same name across the cache and the array drives.  The name of this top level folder is used as the share name. From a user perspective this gives a view that can span multiple drives when viewed at the network level. Note that no individual file will span multiple drives - it is just the directory level that is given a unified view.

When viewed at the Linux level, User Shares will appear under the path ''/mnt/user''.  It is important to note that a User Share is just a logical view imposed on top of the underlying physical file system, so you can see the same files if you look at the physical level (as described below for Disk Shares).
* Current releases of Unraid also include the mount point ''/mnt/user0'' that shows the files in User Shares OMITTING any files for a share that are on the cache drive.  ''However, this mount point is now deprecated and likely to stop being available in a future Unraid release.''
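As an illustration of this aggregation (the share name ''Media'' and the disk numbers here are hypothetical), the same top level folder on each drive is merged into a single User Share view; a minimal sketch from a console session:

  # the physical copies of the top level folder, one per drive that holds part of the share
  ls /mnt/disk1/Media /mnt/disk2/Media
  # the aggregated User Share view of the same files
  ls /mnt/user/Media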
  
 
Normally one creates User Shares using the Shares tab. However if you manually create a top level folder on any drive the system will automatically consider this to be a User Share and give it default settings.

Which physical drive in the main array is used to store a given file is controlled by a number of settings for the share:
* '''Included''' or '''excluded''' drives:  These settings allow you to control which array drives can hold files for the share.  Never set both values; set only the one that is most convenient for you.  If no drives are specified under these settings then all drives allowed under ''Settings >> Global Share settings'' are allowed.
* '''Minimum free space''':  Unraid will not pick a drive whose free space has fallen below this value when deciding where to place a new file.
* '''Allocation method''':  This controls how Unraid chooses between the allowed drives when deciding where to place a new file.
* '''Split level''':  This setting controls how files should be grouped.
: '''Important''':  in the event of there being contention between the ''Minimum free space'' and the ''Allocation method'' settings in deciding which would be an appropriate drive to use, the ''Split level'' setting always wins.  This means that you can get an out-of-space error even though there is plenty of space on other array drives that the share can logically use.

'''Important''': The Linux file systems used by Unraid are case-sensitive while the SMB share system is not.  As an example this means that at the Linux level a folder called 'media' is different to one called 'Media'.  However at the network level case is ignored, so for example 'media', 'Media' and 'MEDIA' would all be the same share.  To take this example further, you would only get the content of one of the underlying 'media' or 'Media' folders to appear at the network share level - and it can be non-obvious which one this would be.

=== Mover Behavior with User Shares ===

Unraid includes an application called '''mover''' that is used in conjunction with User Shares.  Its behavior is controlled by the "Use Cache for new files" setting under each User Share.  The way these different settings operate is as follows:
* '''Yes''':  Write new files to the cache as long as the free space on the cache is above the ''Minimum free space'' value.  If the free space is below that then bypass the cache and write the files directly to the main array.
: When ''mover'' runs it will attempt to move files to the main array as long as they are not currently open.  Which array drive will get the file is controlled by the combination of the ''Allocation method'' and ''Split level'' settings for the share.
* '''No''':  Write new files directly to the array.
: When ''mover'' runs it will take '''no''' action on files for this share, even if there are files on the cache that logically belong to this share.
* '''Only''': Write new files directly to the cache.  If the free space on the cache is below the ''Minimum free space'' setting for the cache then the write will fail with an out-of-space error.
: When ''mover'' runs it will take '''no''' action on files for this share, even if there are files on the main array that logically belong to this share.
* '''Prefer''':  Write new files to the cache if the free space on the cache is above the ''Minimum free space'' setting for the share, and if the free space falls below that value then write the files to the main array instead.
: When ''mover'' runs it will attempt to move any files for this share that are on the main array back to the cache, as long as the free space on the cache is above the ''Minimum free space'' setting for the cache.
: This is the default setting for the ''appdata'' and ''system'' shares that are used to support the Docker and VM sub-systems.  In typical use you want the files/folders belonging to these shares to reside on the cache, as you get much better performance from Docker containers and VMs if their files are not on the main array (due to the cost of maintaining parity on the main array significantly slowing down write operations).
: This setting works for the share even if you do not (yet) have a physical cache drive.  This is why it is the default for these shares rather than ''Only''.
  
 
== Disk Shares ==
 
== Disk Shares ==
Line 888: Line 1,021:
 
When viewed at the Linux level then disk shares will appear directly under ''/mnt'' with a name corresponding to the drive name (e.g. ''/mnt/disk1'' or ''/mnt/cache'').
 
When viewed at the Linux level then disk shares will appear directly under ''/mnt'' with a name corresponding to the drive name (e.g. ''/mnt/disk1'' or ''/mnt/cache'').
  
<center>'''IMPORTANT'''</center>
+
{| border=1
If you have both Disk Shares and User Shares enabled then there is an important restriction that you must observe if you want to avoid potential data loss.  What you must NEVER do is copy between a User Share and a Disk Share where the folder name on the Disk Share corresponds to the User Share name.  This is because at the base system level Linux does not understand User Shares and therefore that a file on a Disk Share and a User Share can be different views of the same file.  If you mix the share types in the same copy command you can end up trying to copy the file to itself which results in the file being truncated to zero length and its content thus being lost. There is no problem if the copy is between shares of the same type.
+
! '''IMPORTANT'''
 
+
|-
 +
| If you have both ''Disk Shares'' and ''User Shares'' enabled then there is an important restriction that you must observe if you want to avoid potential data loss.  What you must '''NEVER''' do is copy between a '''User Share''' and a '''Disk Share''' in the same copy operation where the folder name on the Disk Share corresponds to the User Share name.  This is because at the base system level Linux does not understand ''User Shares'' and therefore that a file on a ''Disk Share'' and a ''User Share'' can be different views of the '''same''' file.  If you mix the share types in the same copy command you can end up trying to copy the file to itself which results in the file being truncated to zero length and its content thus being lost.<br><br>There is no problem if the copy is between shares of the same type, or copying to/from a disk mounted as an Unassigned Device..
 +
|-
 +
|}
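As a concrete illustration (the share, file and disk names here are hypothetical), the following stay within a single share type and are therefore safe, whereas mixing the two path styles for the same share name in one command is what must be avoided:

  # safe: both paths are User Shares
  cp -a /mnt/user/Media/film.mkv /mnt/user/Backups/
  # safe: both paths are Disk Shares
  cp -a /mnt/disk1/Media/film.mkv /mnt/disk2/Media/
  # NOT safe: mixing /mnt/user/Media and /mnt/diskN/Media in one command risks
  # copying a file onto itself and truncating it to zero length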
  
 
== Network access ==
 

Assigning storage devices

(screenshot: Configuringarray1.png)

To assign devices to the array and/or cache, first login to the server's webGui. Click on the Main tab and select the devices to assign to slots for parity, data, and cache disks. Assigning devices to Unraid is easy! Just remember these guidelines:

  • Always pick the largest storage device available to act as your parity device(s). When expanding your array in the future (adding more devices to data disk slots), you cannot assign a data disk that is larger than your parity device(s). For this reason, it is highly recommended to purchase the largest HDD available for use as your initial parity device, so future expansions aren’t limited to small device sizes. If assigning dual parity disks, your two parity disks can vary in size, but the same rule holds true that no disk in the array can be larger than your smallest parity device.
  • SSD support in the array is experimental. Some SSDs may not be ideal for use in the array due to how TRIM/Discard may be implemented. Using SSDs as data/parity devices may have unexpected/undesirable results. This does NOT apply to the cache / cache pool. Most modern SSDs will work fine in the array, and even NVMe devices are now supported, but know that until these devices are in wider use, we only have limited testing experience using them in this setting.
  • Using a cache will improve array performance. It does this by redirecting write operations to a dedicated disk (or pool of disks in Unraid 6) and moves that data to the array on a schedule that you define (by default, once per day at 3:40AM). Data written to the cache is still presented through your user shares, making use of this function completely transparent.
  • Creating a cache-pool adds protection for cached data. If you only assign one cache device to the system, data residing there before being moved to the array on a schedule is not protected from data loss. To ensure data remains protected at all times (both on data and cache disks), you must assign more than one device to the cache function, creating what is called a cache-pool. Cache pools can be expanded on demand, similar to the array.
  • SSD-based cache devices are ideal for applications and virtual machines. Apps and VMs benefit from SSDs as they can leverage their raw IO potential to perform faster when interacting with them. Use SSDs in a cache pool for the ultimate combination of functionality, performance, and protection.
  • Encryption is disabled by default. If you wish to use this feature on your system, you can do so by adjusting the file system for the devices you wish to have encrypted. Click on each disk you wish to have encrypted and toggle the filesystem to one of the encrypted options.

NOTE: Your array will not start if you assign or attach more devices than your license key allows.

Starting and stopping the array

Normally following system boot up the array (complete set of disks) is automatically started (brought on-line and exported as a set of shares). But if there's been a change in disk configuration, such as a new disk added, the array is left stopped so that you can confirm the configuration is correct. This means that any time you have made a disk configuration change you must log into the webGui and manually start the array. When you wish to make changes to disks in your array, you will need to stop the array to do this. Stopping the array means all of your applications/services are stopped, and your storage devices are unmounted, making all data and applications unavailable until you once again start the array. To start or stop the array, perform the following steps:

  1. Log into the Unraid webGui using a browser (e.g. http://tower; http://tower.local from Mac)
  2. Click on Main
  3. Go to the Array Operation section
  4. Click Start or Stop (you may first need to click the "Yes I want to do this" checkbox)

Help! I can't start my array!

If the array can't be started, it may be for one of a few reasons which will be reported under the Array Operation section:

  • Too many wrong and/or missing disks
  • Too many attached devices
  • Invalid or missing registration key
  • Cannot contact key-server
  • This Unraid Server OS release has been withdrawn

Too many disks missing from the array

(screenshot: indication that you have too many devices missing or incorrectly assigned)

If you have no parity disks, this message won't appear.

If you have a single parity disk, you can only have up to one disk missing and still start the array, as parity will then help simulate the contents of the missing disk until you can replace it.

If you have two parity disks, you can have up to two disks missing and still start the array.

If more than two disks are missing / wrong due to a catastrophic failure, you will need to perform the New Config procedure.

Too many attached devices

(screenshot: indication that you have too many storage devices attached)

Storage devices are any devices which present themselves as a block storage device EXCLUDING the USB flash device used to boot Unraid Server OS. Storage devices can be attached via any of the following storage protocols: IDE/SATA/SAS/SCSI/USB. This rule only applies prior to starting the array. Once the array is started, you are free to attach additional storage devices and make use of them (such as USB flash devices for assignment to virtual machines). In Unraid Server OS 6, the attached storage device limits are as follows:

{| border=1
|+ Attached Storage Device Limits by Registration Key
! Trial
! Basic
! Plus
! Pro
|-
| Unlimited
| 6
| 12
| Unlimited
|}

NOTE: The attached device limits do NOT refer to how many devices you can assign to the array or cache. Those limits are imposed by the software, not the license policy.

Invalid or missing key

(screenshot: indication that your key is missing or invalid)

Missing key

A valid registration key is required in order to start the array. To purchase or get a trial key, perform the following steps:

  1. Log into the Unraid webGui using a browser (e.g. http://tower from most devices, http://tower.local from Mac devices)
  2. Click on Tools
  3. Click on Registration
  4. Click to Purchase Key or Get Trial Key and complete the steps presented there
  5. Once you have your key file link, return to the Registration page and paste it in the field, then click Install Key.

Expired trial

If the word "expired" is visible at the top left of the webGui, this means your trial key has expired. Visit the registration cage to request either an extension to your trial or purchase a valid registration key.

Blacklisted USB flash device

If your server is connected to the Internet and your trial hasn't expired yet, it is also possible that your USB flash device contains a GUID that is prohibited from registering for a key. This could be because the GUID is not truly unique to your device or has already been registered by another user. It could also be because you are using an SD card reader through a USB interface, which also tend to be provisioned with a generic GUID. If a USB flash device is listed as blacklisted, this is a permanent state and you will need to seek an alternative device to use for your Unraid Server OS installation.

Cannot contact key-server

This message will only occur if you are using a Trial licence. If you are using a paid-for licence then the array can be started without the need to contact the Unraid licence server.

If your server is unable to contact our key server to validate your Trial license, you will not be able to start the array. The server will attempt to validate upon first boot with a timeout of 30 sec. If it can't validate upon first boot, then the array won't start, but each time you navigate or refresh the webGui it will attempt validation again (with a very short timeout). Once validated, it won't phone-home for validation again unless rebooted.

This Unraid Server OS release has been withdrawn

If you receive this message, it means you are running a beta or release candidate version of Unraid that has been marked disabled from active use. Upgrade the OS to the latest stable, beta, or release candidate version in order to start your array.

Array operations

There are a number of operations you can perform against your array:

  • Add disks
  • Replace disks
  • Remove disks
  • Check disks
  • Spin disks up/down
  • Reset the array configuration

NOTE: In cases where devices are added/replaced/removed, etc., the instructions say "Power down" ... "Power up". If your server's hardware is designed for hot/warm plug, Power cycling is not necessary and Unraid is designed specifically to handle this. All servers built by LimeTech since the beginning are like this: no power cycle necessary.

Adding disks

Data Disks

This is the normal case of expanding the capacity of the system by adding one or more new hard drives.

The capacity of any new disk(s) added must be the same size or smaller than your parity disk. If you wish to add a new disk which is larger than your parity disk, then you must instead first replace your parity disk. (You could use your new disk to replace parity, and then use your old parity disk as a new data disk).

The procedure is:

  1. Stop the array.
  2. Power down the server.
  3. Install your new disk(s).
  4. Power up the server.
  5. Assign the new storage device(s) to a disk slot(s) using the Unraid webGui.
  6. Start the array.
  7. Unraid will now automatically begin to clear the disk which is required before it can be added to the array.
    • If a disk has been pre-cleared before adding it Unraid will recognize this and go straight to the next step.
    • The clearing phase is necessary to preserve the fault tolerance characteristic of the array. If at any time while the new disk(s) is being cleared, one of the other disks fails, you will still be able to recover the data of the failed disk.
    • The clearing phase can take several hours depending on the size of the new disk(s), and although the array is available during this process Unraid will not be able to use the new disk(s) for storing files until the clear has completed and the new disk has been formatted.
  8. Once the disk has been cleared, an option to format the disk will appear in the webGui. At this point the disk is added to the array and shows as unmountable and the option to format unmountable disks is shown.
    • Check that the serial number of the disk(s) is what you expect. You do not want to format a different disk (thus erasing its contents) by accident.
  9. Click the check box to confirm that you want to proceed with the format procedure.
    • A warning dialog will be given warning you of the consequences as once you start the format the disks listed will have any existing contents erased and there is no going back. This warning may seem a bit like over-kill but there have been times that users have used the format option when it was not the appropriate action.
  10. The format button will now be enabled so you can click on it to start the formatting process.
  11. The format should only take a few minutes and after the format completes the disk will show as mounted and ready for use.
    • You will see that a small amount of space will already show as used which is due to the overheads of creating the empty file system on the drive.

You can add as many new disks to the array as you desire at one time, but none of them will be available for use until they are both cleared and formatted with a filesystem.

Parity Disks

It is not mandatory for an Unraid system to have a parity disk, but it is normal to have one to provide redundancy. A parity disk can be added at any time. Each parity disk provides redundancy against one data drive failing.

Any parity disk you add must be at least as large as the largest data drive (although it can be larger). If you have two parity drives then it is not required that they be the same size although it is required that they both follow the rule of being at least as large as the largest data drive.

The process for adding a parity disk is identical to that for adding a data disk except that when you start the array after adding it Unraid will start to build parity on the drive that you have just added.


Upgrading parity disk(s)

If you wish to upgrade your parity device(s) to a larger one(s) so you can start using larger sized disks in the array or to add an additional parity drive, the procedure is as follows:

  1. Stop the array.
  2. Power down the unit.
  3. Install new larger parity disks. Note if you do this as your first step then steps 2 & 4 listed here are not needed.
  4. Power up the unit.
  5. Assign a larger disk to the parity slot (replacing the former parity device).
  6. Start the array.

When you start the array, the system will once again perform a parity sync to the new parity device and when it completes the array will once again be in a protected state. It is recommended that you keep the old parity drive's contents intact until the above procedure completes; if an array drive fails during this procedure so that you cannot finish building the contents of the new parity disk, it is then possible to use the old parity drive for recovery purposes (ask on the forum for the steps involved). If you have a dual parity system and wish to upgrade both of your parity disks, it is recommended to perform this procedure one parity disk at a time, as this will allow for your array to still be in a protected state throughout the entire upgrade process.

Once you've completed the upgrade process for a parity disk, the former parity disk can be considered for assignment and use in the array as an additional data disk (depending on age and durability).

Replacing disks

There are two primary reasons why you may wish to replace disks in the array:

  • A disk needs to be replaced due to failure or scheduled retirement (out of warranty / support / serviceability).
  • The array is nearly full and you wish to replace existing data disk(s) with larger ones (out of capacity).

In either of these cases, the procedure to replace a disk is roughly the same, but one should be aware of the risk to data loss during a disk replacement activity. Parity device(s) protect the array from data loss in the event a disk failure. A single parity device protects against a single failure, whereas two parity devices can protect against losing data when two disks in the array fail. This chart will help you better understand your level of protection when various disk replacement scenarios occur.

{| border=1
|+ Data Protection During Disk Replacements
!
! With Single Parity
! With Dual Parity
|-
! Replacing a single disk
| Array cannot tolerate a disk failure without potential data loss to both the disk being replaced and the additional disk that has failed.
| Array can tolerate up to one additional disk failure without potential data loss
|-
! Replacing two disks
| Not possible!
| Array cannot tolerate a disk failure without potential data loss to both the disk(s) being replaced and the additional disk that has failed.
|}

Replacing failed disk(s)

(screenshot: a red X indicates that a disk has suffered a write error and should be replaced)
(screenshot: if notifications are enabled, this additional alert will appear)

As noted previously, with a single parity disk, you can replace up to one disk at a time, but during the replacement process, you are at risk for data loss should an additional disk failure occur. With two parity disks, you can replace either one or two disks at a time, but during a two disk replacement process, you are also at risk for data loss. Another way to visualize the previous chart:

{| border=1
|+ Array Tolerance to Disk Failure Events
!
! Without Parity
! With Single Parity
! With Dual Parity
|-
! A single disk failure
| Data from that disk is lost
| Data is still available and the disk can be replaced
| Data is still available and the disk can be replaced
|-
! A dual disk failure
| Data on both disks are lost
| Data on both disks are lost
| Data is still available and the disks can be replaced
|}
(screenshot: confirming you wish to start the array and rebuild the contents of the failed disk on a new disk)
(screenshot: notification indicating that a disk rebuild is occurring)
(screenshot: the progress and time remaining for the rebuild will be displayed under the array operation section)

NOTE: If more disk failures have occurred than your parity protection can allow for, you are advised to post in the General Support forum for assistance with data recovery on the data devices that have failed.

What is a 'failed' drive

It is important to realize what is meant by the term failed drive:

  • It is typically used to refer to a drive that is marked with a red 'x' in the Unraid GUI.
  • It does NOT necessarily mean that there is a physical problem with the drive (although that is always a possibility). More often than not the drive is OK and an external factor caused the write to fail.
  • If the syslog shows that resets are occurring on the drive then this is a good indication of a connection problem.
  • The SMART report for the drive is a good place to start.
  • The SMART attributes can indicate a drive is healthy when in fact it is not. A better indication of health is whether the drive can successfully complete the SMART extended test without error. If it cannot complete this test error free then there is a high likelihood that the drive is not healthy.
  • CRC errors are almost invariably cabling issues. It is important to realize that this SMART attribute is never reset to 0 so if it stops increasing that is what you should be aiming to achieve.
  • If you have sufficient parity drives then Unraid will emulate the failed drive using the combination of the parity drive(s) and the remaining 'good' drives. From a user perspective this results in the system reacting as if the failed drive is still present.
This is one reason why it is important that you have notifications enabled, so that you get alerted to such a failure. From the end user perspective the system continues to operate and the data remains available. Without notifications enabled the user may blithely continue using their Unraid server, not realizing that their data may now be at risk and that they need to take some corrective action.

When a disk is marked as disabled and Unraid indicates it is being emulated then the following points apply:

  • Unraid will stop writing to the physical drive. Any writes to the 'emulated' drive will not be reflected on the physical drive but will be reflected in parity so from the end-user perspective then the array seems to be updating data as normal.
  • When you rebuild a disabled drive the process will make the physical drive correspond to the emulated drive. You can, therefore, check that the emulated drive contains the content that you expect before starting the rebuild process
  • If a drive is being emulated then you can carry out recovery actions on the emulated drive before starting the rebuild process. This can be important as it keeps the physical drive untouched for potential data recovery processes if the emulated drive cannot be recovered.

A replacement drive does not need to be the same size as the disk it is replacing. It cannot be smaller but it can be larger. If the replacement drive is not larger than any of your parity drives then the simpler procedure below can be used. In the special case where you want to use a new disk that is larger than at least one of your parity drives then please refer to the Parity Swap procedure that follows instead.

If you have purchased a replacement drive, many users like to pre-clear the drive to stress test it first, to make sure it's a good drive that won't fail for a few years at least. Preclearing is not strictly necessary, as replacement drives don't have to be cleared since they are going to be completely overwritten, but preclearing new drives one to three times provides a thorough test of the drive and eliminates 'infant mortality' failures. You can also carry out stress tests in other ways, such as running an extended SMART test or using tools supplied by the disk manufacturer that run on Windows or MacOS.
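If you want to run an extended SMART test from a console session rather than the GUI, the smartctl tool included with Unraid can do it; a minimal sketch, where sdX is a placeholder for the device identifier shown on the Main tab (the test runs inside the drive and can take many hours):

  # start a long (extended) self-test in the background
  smartctl -t long /dev/sdX
  # check progress and, once finished, the result of the self-test
  smartctl -l selftest /dev/sdX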

Normal replacement

This is the normal case of replacing a failed drive where the replacement drive is not larger than your current parity drive(s).

It is worth emphasising that Unraid must be able to reliably read every bit of parity PLUS every bit of ALL other disks in order to reliably rebuild a missing or disabled disk. This is one reason why you want to fix any disk related issues with your Unraid server as soon as possible.

To replace a failed disk or disks:

  1. Stop the array.
  2. Power down the unit.
  3. Replace the failed disk(s) with a new one(s).
  4. Power up the unit.
  5. Assign the replacement disk(s) using the Unraid webGui.
  6. Click the checkbox that says Yes I want to do this and then click Start.

When you start the array in normal mode after replacing a failed disk or disks, the system will reconstruct the contents onto the new disk(s) and, if the new disk(s) is/are bigger, expand the file system. If you start the array in Maintenance mode you will need to press the Sync button to trigger the rebuild.

You must replace a failed disk with a disk which is as big or bigger than the original and not bigger than the parity disk.

There can be cases where it is determined that the reason a disk was disabled is due to an external factor and the disk drive appears to be fine. In such a case you need to take a slightly modified process to cause Unraid to rebuild a 'disabled' drive back onto the same drive.

  1. Stop array
  2. Unassign disabled disk
  3. Start array so the missing disk is registered
  4. Stop array
  5. Reassign disabled disk
  6. Start array to begin rebuild. If you start the array in Maintenance mode you will need to press the Sync button to start the rebuild.

Parity Swap

This is a special case of replacing a failed drive where the replacement drive is larger than your current parity drive.

Why would you want to do this? To replace a data drive with a larger one, that is even larger than the Parity drive.

Unraid does not require a replacement drive to be the same size as the drive being replaced. The replacement drive CANNOT be smaller than the old drive, but it CAN be larger, much larger in fact. If the replacement drive is the same size or larger, UP TO the same size as the parity drive, then the simple procedure above can be used. If the replacement drive is LARGER than the Parity drive, then a special two-step procedure is required as described here. It works in two phases:
  • The larger-than-parity drive is first used to upgrade the parity drive
  • Then the old parity drive replaces the old data drive.
As an example, you have a 1TB data drive that you want to replace (the reason does not matter). You have a 2TB parity drive. You buy a 4TB drive as a replacement. The 'Parity Swap' procedure will copy the parity info from the current 2TB parity drive to the 4TB drive, zero the rest, make it the new parity drive, then use the old 2TB parity drive to replace the 1TB data drive. Now you can do as you wish with the removed 1TB drive.


Important Notes

  • If you have purchased a replacement drive, many users like to pre-clear the drive to stress test it first, to make sure it's a good drive that won't fail for a few years at least. Preclearing is not strictly necessary, as replacement drives don't have to be cleared since they are going to be completely overwritten, but Preclearing new drives one to three times provides a thorough test of the drive and eliminates 'infant mortality' failures.
  • If your replacement drive is the same size or smaller than your current Parity drive, then you don't need this procedure. Proceed with the Replacing a Data Drive procedure.
  • This procedure is strictly for replacing data drives in an Unraid array. If all you want to do is replace your Parity drive with a larger one, then you don't need the Parity Swap procedure. Just remove the old parity drive and add the new one, and start the array. The process of building parity will immediately begin. (If something goes wrong, you still have the old parity drive that you can put back!)
  • IMPORTANT!!! This procedure REQUIRES that the data drive being replaced MUST be disabled first. If the drive failed (has a red ball), then it is already 'disabled', but if the drive is OK but you want to replace it anyway, then you have to force it to be 'failed', by unassigning it and starting and stopping the array. Unraid only forgets a drive when the array is started without the drive, otherwise it still associates it with the slot (but 'Missing'). The array must be started once with the drive unassigned or disabled. Yes, it may seem odd, but is required before Unraid will recognize that you are trying to do a 'Parity Swap'. It needs to see a disabled data disk with forgotten ID, a new disk assigned to its slot that used to be the parity disk, and a new disk assigned to the parity slot.
  • Obviously, it's very important to identify the drives for assignment correctly! Have a list of the drive models that will be taking part in this procedure, with the last 4 characters of their serial numbers. If the drives are recent Toshiba models, then they may all end in GS or S, so you will want to note the preceding 4 characters instead.


The steps to carry out this procedure are:

Note: these steps are the general steps needed. The steps you take may differ depending on your situation. If the drive to be replaced has failed and Unraid has disabled it, then you may not need steps 1 and 2, and possibly not steps 3 and 4. If you have already installed the new replacement drive (perhaps because you have been Preclearing it), then you would skip steps 5 through 8. Revise the steps as needed.
  1. Stop the array (if it's started)
  2. Unassign the old drive (if it's still assigned)

    If the drive was a good drive and notifications are enabled, you will get error notifications for a missing drive! This is normal.

  3. Start the array (put a check in the Yes I want to do this checkbox if it appears (older versions: Yes, I'm sure))

    Yes, you need to do this. Your data drive should be showing as Not installed.

  4. Stop the array again
  5. Power down
  6. [ Optional ] Pull the old drive

    You may want to leave it installed, for Preclearing or testing or reassignment.

  7. Install the new drive (preclear STRONGLY suggested, but formatting not needed)
  8. Power on
  9. Stop the array
    *If you get an "Array Stopping•Retry unmounting disk share(s)..." message, try disabling Docker and/or VM in Settings and stopping the array again after rebooting.
  10. Unassign the parity drive
  11. Assign the new drive in the parity slot

    You may see more error notifications! This is normal.

  12. Assign the old parity drive in the slot of the old data drive being replaced

    You should now have blue drive status indicators for both the parity drive and the drive being replaced.

  13. Go to the Main -> Array Operation section

    You should now have a Copy button, with a statement indicating "Copy will copy the parity information to the new parity disk".

  14. Put a check in the Yes I want to do this checkbox (older versions: Yes, I'm sure), and click the Copy button

    Now patiently watch the copy progress, takes a long time (~20 hours for 4TB on a 3GHz Core 2 Duo). All of the contents of the old parity drive are being copied onto the new drive, then the remainder of the new parity drive will be zeroed.
    The array will NOT be available during this operation!
    *If you disabled Docker and/or VM in settings earlier, go ahead and re-enable now.
    When the copy completes, the array will still be stopped ("Stopped. Upgrading disk/swapping parity.").
    The Start button will now be present, and the description will now indicate that it is ready to start a Data-Rebuild.

  15. Put a check in the Yes I want to do this checkbox (older versions: Yes, I'm sure), and click the Start button

    The data drive rebuild begins. Parity is now valid, and the array is started.
    Because the array is started, you can use the array as normal, but for best performance, we recommend you limit your usage.
    Once again, you can patiently watch the progress, takes a long time too! All of the contents of the old data drive are now being reconstructed on what used to be your parity drive, but is now assigned as the replacement data drive.

That's it! Once done, you have an array with a larger parity drive and a replaced data drive that may also be larger!
Note: many users like to follow up with a parity check, just to check everything. It's a good confidence builder (although not strictly necessary)!


A disk failed while I was rebuilding another

If you only have a single parity device in your system and a disk failure occurs during a data-rebuild event, the data rebuild will be cancelled as parity will no longer be valid. However, if you have dual parity disks assigned in your array, you have options. You can either

  • let the first disk rebuild complete before starting the second, or
  • you can cancel the first rebuild, stop the array, replace the second failed disk, then start the array again

If the first disk being rebuilt is nearly complete, it's probably better to let that finish, but if you only just began rebuilding the first disk when the second disk failure occurred, you may decide rebuilding both at the same time is a better solution.

Removing disks

There may be times when you wish to remove drives from the system.

Removing parity disk(s)

If for some reason you decide you do not need the level of parity protection that you have in place then it is always possible to easily remove a parity disk.

  1. Stop the array.
  2. Set the slot for the parity disk you wish to remove to Unassigned.
  3. Start the array to commit the change and 'forget' the previously assigned parity drive.

CAUTION: If you already have any failed data drives in the array be aware that removing a parity drive reduces the number of failed drives Unraid can handle without potential data loss.

  • If you started with dual parity you can still handle a single failed drive, but you would not then be able to handle another drive failing while trying to rebuild the already failed drive without potential data loss.
  • If you started with single parity you will no longer be able to handle any array drive failing without potential data loss.

Removing data disk(s)

Removing a disk from the array is possible, but normally requires you to once again sync your parity disk(s) after doing so. This means that until the parity sync completes, the array is vulnerable to data loss should any disk in the array fail.

To remove a disk from your array, perform the following steps:

  1. Stop the array
  2. (optional) Make a note of your disk assignments under the Main tab (for both the array and cache; some find it helpful to take a screenshot)
  3. Perform the Reset the array configuration procedure. When doing this it is a good idea to use the option to preserve all current assignments to avoid you having to re-enter them (and possibly make a mistake doing so).
  4. Make sure all your previously assigned disks are there and set the drive you want removed to be Unassigned
  5. Start the array without checking the 'Parity is valid' box.

A parity-sync will occur if at least one parity disk is assigned and until that operation completes, the array is vulnerable to data loss should a disk failure occur.

Alternative method

It is also possible to remove a disk without invalidating parity if special action is taken to make sure that the disk only contains zeroes, as a disk that is all zeroes does not affect parity. There is no support for this method built into the Unraid GUI, so it requires manual steps to carry out the zeroing process. It also takes much longer than the simpler procedure above.

There is no official support from Limetech for using this method so you are doing it at your own risk.

Notes:

  1. This method preserves parity protection at all times.
  2. This method can only be used if the drive to be removed is a good drive that is completely empty, is mounted and can be completely cleared without errors occurring
  3. This method is limited to removing only one drive at a time (technically this is not true, but trying to do multiple drives in parallel is slower than doing them sequentially due to the contention that arises for updating the parity drive)
  4. As stated above, the drive must be completely empty as this process will erase all existing content. If there are still any files on it (including hidden ones), they must be moved to another drive, or deleted.
    • One quick way to clear a drive of files is reformat it! To format an array drive, you stop the array and then on the Main page click on the link for the drive and change the file system type to something different than it currently is, then restart the array. You will then be presented with an option to format it. Formatting a drive removes all of its data, and the parity drive is updated accordingly, so the data cannot be easily recovered.
    • Explanatory note: "Since you are going to clear the drive anyway, why do I have to empty it? And what is the purpose of this strange clear-me folder?" Yes it seems a bit draconian to require the drive to be empty since we're about to clear and empty it in the script, but we're trying to be absolutely certain we don't cause data loss. In the past, some users misunderstood the procedure, and somehow thought we would preserve their data while clearing the drive! This way, by requiring the user to remove all data, and then add an odd marker, there cannot be any accidents or misunderstandings and data loss.

The procedure is as follows:

  1. Make sure that the drive you are removing has been removed from any inclusions or exclusions for all shares, including in the global share settings.
  2. Make sure the array is started, with the drive assigned and mounted.
  3. Make sure you have a copy of your array assignments, especially the parity drive.
    • In theory you should not need this but it is a useful safety net in case the "Retain current configuration" option under New Config doesn't work correctly (or you make a mistake using it).
  4. It is highly recommended to turn on the reconstruct write method (sometimes called 'Turbo write'). With it on, the script can run 2 to 3 times as fast, saving hours!
    • However when using 'Turbo write' all drives must read without error, so do not use it unless you are sure no other drive is having issues.
    • To enable 'Turbo write' go to Settings->Disk Settings and change Tunable (md_write_method) to reconstruct write
  5. Make sure ALL data has been copied off the drive; drive MUST be completely empty for the clearing script to work.
  6. Double check that there are no files or folders left on the drive.
    • Note: one quick way to clean a drive is reformat it! (once you're sure nothing of importance is left of course!)
  7. Create a single folder on the drive with the name clear-me - exactly 7 lowercase letters and one hyphen
  8. Run the clear an array drive script from the User Scripts plugin (or run it standalone, at a command prompt).
    • If you prepared the drive correctly, it will completely and safely zero out the drive. If you didn't prepare the drive correctly, the script will refuse to run, in order to avoid any chance of data loss.
    • If the script refuses to run, indicating it did not find a marked and empty drive, then very likely there are still files on your drive. Check for hidden files. ALL files must be removed!
    • Clearing takes a loooong time! Progress info will be displayed.
    • If running in User Scripts, the browser tab will hang for the entire clearing process.
    • While the script is running, the Main screen may show invalid numbers for the drive, ignore them. Important! Do not try to access the drive, at all!
  9. When the clearing is complete, stop the array
  10. Follow the procedure for resetting the array making sure you elect to retain all current assignments.
  11. Return to the Main page, and check all assignments. If any are missing, correct them. Unassign the drive(s) you are removing. Double check all of the assignments, especially the parity drive(s)!
  12. Click the check box for Parity is already valid, make sure it is checked!
  13. Start the array! Click the Start button then the Proceed button (on the warning popup that will pop up)
  14. (Optional) Start a correcting parity check to ensure parity really is valid and you did not make a mistake in the procedure. If everything was done correctly this should return zero errors.

Alternate Procedure steps for Linux proficient users

If you are happy to use the Linux command line then you can replace steps 7 and 8 by performing the clearing commands yourself at a command prompt. (Clearing takes just as long though!) If you would rather do that than run the script in steps 7 and 8, then here are the 2 commands to perform:

 umount /mnt/diskX                                  # unmount the user file system on the disk being removed
 dd bs=1M if=/dev/zero of=/dev/mdX status=progress  # write zeroes via the md device so parity stays in sync

(where X in both lines is the number of the data drive being removed) Important!!! It is VITAL you use the correct drive number, or you will wipe clean the wrong drive! That's why using the script is recommended, because it's designed to protect you from accidentally clearing the wrong drive.

Checking array devices

the check button lets you perform parity and read checks

When the array is started, there is a button under Array Operations labeled Check. Depending on whether or not you have any parity devices assigned, one of two operations will be performed when clicking this button.

It is also possible to schedule checks to be run automatically at User defined intervals under Settings->Scheduler. It is a good idea to do this as an automated check on array health so that problems can be noticed and fixed before the array can deteriorate beyond repair. Typical periods for such automated checks are monthly or quarterly and it is recommended that such checks should be non-correcting.

Parity check

If you have at least one parity device assigned, clicking Check will initiate a Parity-check. This will march through all data disks in parallel, computing parity and checking it against stored parity on the parity disk(s).

By default, if an error is found during a Parity-check the parity disk will be updated (written) with the computed data and the Sync Errors counter will be incremented. If you wish to run purely a check without writing corrections, uncheck the checkbox that says Write corrections to parity before starting the check. In this mode, parity errors will be noted but not actually fixed during the check operation.

A correcting parity check is started automatically when starting the array after an "Unsafe Shutdown". An "Unsafe Shutdown" is defined as any time that the Unraid server was restarted without having previously successfully stopped the array. The most common cause of Sync Errors is an unexpected power loss, which prevents buffered write data being written to disk. It is highly recommended that users consider purchasing a UPS (uninterruptable power supply) for their systems so that Unraid can be set to shut down tidily on power loss, especially if frequent offsite backups aren't being performed.

It is also recommended that you run an automatic parity check periodically and this can be done under Settings->Scheduler. The frequency is up to the user but monthly or quarterly are typical choices. It is also recommended that such a check is set as non-correcting, since if a disk is having problems a correcting check has a chance of corrupting your parity. The only acceptable result from such a check is to have 0 errors reported. If you do have errors reported then you should take pre-emptive action to try and find out what is causing them. If in doubt ask questions in the forum.

Read check

history lets you review stats on your previous check operations

If you configure an array without any parity devices assigned, the Check option will start a Read check against all the devices in the array. You can use this to check disks in the array for unrecoverable read errors, but know that without a parity device, data may be lost if errors are detected.

A Read Check is also the type of check started if you have disabled drives present and the number of disabled drives is larger than the number of parity drives.

Check history

Any time a parity or read check is performed, the system will log the details of the operation and you can review them by clicking the History button under Array Operations. These are stored in a text file under the config directory on your Unraid USB flash device.

Spin up and down disks

If you wish to manually control the spin state of your rotational storage devices or toggle your SSDs between active and stand-by mode, these buttons provide that control. Know that if files are in the process of being accessed while using these controls, the disk(s) in use will remain in an active state, ignoring your request.

When disks are in a spun-down state, they will not report their temperature through the webGui.

Reset the array configuration

If you wish to remove a disk from the array or you simply wish to start from scratch to build your array configuration, there is a tool in Unraid that will do this for you. To reset the array configuration, perform the following steps:

you can reset your disk configuration from the new config page
  1. Navigate to the Tools page and click New Config
  2. You can (optionally) elect to have the system preserve some of the current assignments while resetting the array. This can be very useful if you only intend to make a small change as it avoids you having to re-enter the details of the disks you want to leave unchanged.
  3. Click the checkbox confirming that you want to do this and then click apply to perform the operation
  4. Return to the Main tab and your configuration will have been reset
  5. Make any adjustments to the configuration that you want.
  6. Start the array to commit the configuration. You can start in Normal or Maintenance mode.

Notes:

  • Unraid will recognise if any drives have been previously used by Unraid, and when you start the array as part of this procedure the contents of such disks will be left intact.
  • There is a checkbox next to the Start button that you can use to say 'Parity is Valid'. Do not check this unless you are sure it is the correct thing to do, or unless advised to do so by an experienced Unraid user as part of a data recovery procedure.
  • Removing a data drive from the array will always invalidate parity unless special action has been taken to ensure the disk being removed only contains zeroes
  • Reordering disks after doing the New Config without removing drives does not invalidate parity1, but it DOES invalidate parity2.

Undoing a reset

If for any reason after performing a reset, you wish to undo it, perform the following steps:

  1. Browse to your flash device over the network (SMB)
  2. Open the Config folder
  3. Rename the file super.old to super.dat
  4. Refresh the browser on the Main page and your array configuration will be restored
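
If you prefer working at a console, the rename in step 3 can also be done there. A minimal sketch, assuming the flash device is mounted at /boot (the Unraid default); using cp rather than mv keeps the original file as a fallback:

 cp /boot/config/super.old /boot/config/super.dat   # restore the saved array configuration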

Notifications

TBD

Status Reports

Unraid can be configured to send you status reports about the state of the array.

An important point about these reports is:

  • They only tell you if the array currently has any disks disabled or showing read/write errors.
  • The status is reset when you reboot the system, so it does not tell you what the status was in the past.
  • IMPORTANT: The status report does not take into account the SMART status of the drive. You can therefore get a status report indicating that the array appears to be healthy even though the SMART information might indicate that a disk might not be too healthy.

SMART Monitoring

Unraid can be configured to report whether SMART attributes for a drive are changing. The idea is to warn you in advance that a drive might be experiencing problems, even though it has not yet caused read/write errors, so that you can take pre-emptive action before a problem becomes serious and potentially leads to data loss. You should have notifications enabled so that you can see these warnings even when you are not running the Unraid GUI.

SMART monitoring is currently only supported for SATA drives, and is not available for SAS drives.

Which SMART attributes are monitored can be configured by the user, but the default ones are:

  • 5: Reallocated Sectors count
  • 187: Reported uncorrected errors
  • 188: Command timeout
  • 197: Current Pending Sector Count
  • 198: Uncorrectable sector count
  • 199: UDMA CRC error count

If any of these attributes change value then this will be indicated on the Dashboard by the icon against the drive turning orange. You can click on this icon and a menu will appear that allows you to acknowledge that you have seen the attribute change, and then Unraid will stop telling you about it unless it changes again.

You can manually see all the current SMART information for a drive by clicking on its name on the Main tab in the Unraid GUI.
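
If you prefer a console session, the same information can be obtained there; a quick sketch (assuming the drive shows as /dev/sdX on the Main tab):

 smartctl -A /dev/sdX   # print just the SMART attribute table, including the monitored attributes listed above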

Cache Operations

There are two primary modes of operating the cache in Unraid:

Single device mode

When the number of disk slots for the cache is set to one, this is referred to as running in single device mode. In this mode, you will have no protection for any data that exists on the cache, which is why pool mode is recommended. However, unlike in pool mode, while in single device mode, you are able to adjust the filesystem for the cache device to something other than btrfs. It is for this reason that there are no special operations for single mode. You can only add or remove the device from the system.

NOTE: If you choose to use a non-btrfs file system for your cache device operating in single mode, you will not be able to expand to a cache pool without first reformatting the device with btrfs. It is for this reason that btrfs is the default filesystem for the cache, even when operating in single device mode.

Cache pool mode

When more than one disk is assigned to the cache, this is referred to as running in cache pool mode. This mode utilizes btrfs RAID 1 in order to allow for any number of devices to be grouped together in a pool. Unlike a traditional RAID 1, a btrfs RAID 1 can mix and match devices of different sizes and speeds and can even be expanded and contracted as your needs change. To calculate how much capacity your btrfs pool will have, check out this handy btrfs disk usage calculator. Set the Preset RAID level to RAID-1, select the number of devices you have, and set the size for each. The tool will automatically calculate how much space you will have available.

Here are the typical operations you are likely to want to carry out on the cache:

  • Back up the cache to the array
  • Switch the cache to run in pool mode
  • Add disks
  • Replace a disk

Backing up the cache to the array

The procedure shown assumes that there are at least some Docker and/or VM related files on the cache disk; some of these steps are unnecessary if there aren't.

  1. Stop all running Dockers/VMs
  2. Settings -> VM Manager: disable VMs and click apply
  3. Settings -> Docker: disable Docker and click apply
  4. Click on Shares and change to "Yes" all cache shares with "Use cache disk:" set to "Only" or "Prefer"
  5. Check that there's enough free space on the array and invoke the mover by clicking "Move Now" on the Main page
  6. When the mover finishes check that your cache is empty
Note that any files on the cache root will not be moved as they are not part of any share and will need manual attention

You can then later restore files to the cache by effectively reversing the above steps:

  1. Click on all shares whose content you want on the cache and set "Use cache disk:" option to "Only" or "Prefer" as appropriate.
  2. Check that there's enough free space on the cache and invoke the mover by clicking "Move Now" on the Main page
  3. When the mover finishes check that your cache now has the expected content and that the shares in question no longer have files on the main array
  4. Settings -> Docker: enable Docker and click apply
  5. Settings -> VM Manager: enable VMs and click apply
  6. Start any Dockers/VMs that you want to be running

Switching the cache to pool mode

If you want a cache pool (i.e. a multi-drive cache) then the only supported format for this is BTRFS. If it is already in BTRFS format then you can follow the procedure below for adding an additional drive to a cache pool

If the cache is NOT in BTRFS format then you will need to do the following:

  1. Use the procedure above for backing up any existing content you want to keep to the array.
  2. Stop the array
  3. Click on the cache on the Main tab and change the format to be BTRFS
  4. Start the array
  5. The cache should now show as unmountable and offer the option to format the cache.
  6. Confirm that you want to do this and click the format button
  7. When the format finishes you now have a cache pool (albeit with only one drive in it)
  8. If you want additional drives in the cache pool you can (optionally) add them now.
  9. Use the restore part of the previous procedure to restore any content you want on the cache

Adding disks to a cache pool

Notes:

  • You can only do this if the cache is already formatted as BTRFS
If it is not then you will need to first follow the steps in the previous section to create a cache pool in BTRFS format.

To add disks to the BTRFS cache (pool) in your array, perform the following steps:

  1. Stop the array.
  2. Navigate to the Main tab.
  3. Scroll down to the section labelled Cache Devices.
  4. Change the number of Slots to be at least as many as the number of devices you wish to assign.
  5. Assign the devices you wish to the cache slot(s).
  6. Start the array.
  7. Click the checkbox and then the button under Array Operations to format the devices.
Make sure that the devices shown are those you expect - you do not want to accidentally format a device that contains data you want to keep.


Removing disks from a cache pool

Notes:

  • You can only do this if your cache is configured for redundancy at both the data and metadata level.
You can check what RAID level your cache is currently set to by clicking on it on the Main tab and scrolling down to the Balance Status section.
  • You can only remove one drive at a time
  1. Stop the array
  2. Unassign a cache drive.
  3. Start the array
  4. Click on the cache drive
  5. If you still have more than one drive in the cache pool then you can simply run a Balance operation
  6. If you only have one drive left in the pool then switch the cache pool raid level to single as described below


Change Cache Pool RAID Levels

BTRFS can add and remove devices online, and freely convert between RAID levels after the file system has been created.

BTRFS supports raid0, raid1, raid10, raid5 and raid6 (but see the section below about raid5/6), and it can also duplicate metadata or data on a single spindle or multiple disks. When blocks are read in, checksums are verified. If there are any errors, BTRFS tries to read from an alternate copy and will repair the broken copy if the alternative copy succeeds.

By default Unraid creates BTRFS volumes in a cache pool with data=raid1 and metadata=raid1 to give redundancy.

For more information about the BTRFS options when using multiple devices see the BTRFS wiki article.

You can change the BTRFS raid levels for a cache pool from the Unraid GUI by:

  • If the array is not started then start it in normal mode
  • Click on the Cache on the Main tab
  • Scroll down to the Balance section
  • At this point information (including current RAID levels) will be displayed.
  • Add the appropriate additional parameters to the Options field (a console equivalent is sketched below).
As an example, the following screenshot shows how you might convert the cache from the RAID1 profile to the single profile.
Screenshot: Balance options used to convert the pool profile
  • Start the Balance operation.
  • Wait for the Balance to complete
  • The new RAID level will now be fully operational.
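
For reference, the console equivalent of such a Balance would look roughly like the sketch below. The exact convert filters here are illustrative assumptions and depend on the profile you are converting to (check the built-in Help for your case); /mnt/cache is the default mount point for the cache pool:

 btrfs balance start -dconvert=single -mconvert=single /mnt/cache   # convert data and metadata chunks to the single profile
 btrfs balance status /mnt/cache                                    # check progress of a running balance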


Replace a disk in a cache pool

Notes:

  • You can only do this if the cache is formatted as BTRFS AND it is set up to be redundant.
  • You can only replace up to one disk at a time from your cache pool.

To replace a disk in the redundant pool, perform the following steps:

  1. Stop the array.
  2. Physically detach the disk from your system you wish to remove.
  3. Attach the replacement disk (must be equal to or larger than the disk being replaced).
  4. Refresh the Unraid webGui while on the Main tab.
  5. Select the cache slot that previously was set to the old disk and assign the new disk to the slot.
  6. Start the array.
  7. If presented with an option to Format the device, click the checkbox and button to do so.


Remove a disk from a cache pool

There have been times when users have indicated they would like to remove a disk from a cache pool they have set up while keeping all the data intact. This cannot be done from the Unraid GUI but is easy enough to do from the command line in a console session.

Note: You need to maintain the minimum number of devices for the profile in use, i.e., you can remove a device from a 3+ device raid0 pool but you can't remove one from a 2 device raid0 pool (unless it's converted to the single profile first).

With the array running type on the console:

 btrfs dev del /dev/mapper/sdX1 /mnt/cache

Replace X with correct letter for the drive you want to remove from the system as shown on the Main tab (don't forget the 1 after it).

Wait for the device to be deleted (i.e., until the command completes and you get the cursor back).

The device is now removed from the pool. You don't need to stop the array now, but at the next array stop you need to make Unraid forget the now-deleted member. To achieve that:

  • Stop the array
  • Unassign all pool devices
  • Start the array to make Unraid "forget" the pool config
If the Docker and/or VM services were using that pool it is best to disable those services before starting, or Unraid will recreate the images somewhere else (assuming they are using /mnt/user paths)
  • Stop array (re-enable docker/VM services if disabled above)
  • Re-assign all pool members except the removed device
  • Start array

Done

You can also remove multiple devices with a single command (as long as the above rule is observed):

 btrfs dev del /dev/mapper/sdX1 /dev/mapper/sdY1 /mnt/cache

but in practice this does the same as removing one device and then the other, as they are still removed one at a time, just one after the other with no further input from you.

File System Management

Selecting a File System type

Each array drive in an Unraid system is set up as a self contained file system. Unraid currently supports the following file system types:

  • XFS: This is the default format for array drives on a new system. It is a well tried Linux file system and deemed to be the most robust.
    • XFS is better at recovering from file system corruption than BTRFS (which can happen after unclean shutdowns or system crashes).
  • BTRFS: This is a newer file system that supports advanced features not available with XFS. It is considered not quite as stable as XFS, but many Unraid users have reported it seems as robust as XFS when used on array drives where each drive is a self-contained file system. Some of its features are:
    • It supports detecting file content corruption (often colloquially known as bit-rot) by internally using check summing techniques
    • It can support a single file system spanning multiple drives, and in such a case it is not necessary that the drives all be of the same size.
    • In multi-drive mode various levels of RAID can be supported (although these are a BTRFS specific implementation and not necessarily what one expects). The default in Unraid for a cache pool is RAID1 so that data is stored redundantly to protect against drive failure.
    • It is the only option supported when using a cache pool spanning multiple drives that need to run as a single logical drive, as this needs the multi-drive support.
    • In multi-drive mode in the cache pool it is not always obvious how much usable space you will end up with. The BTRFS Space Calculator can help with this.
  • ReiserFS: This is supported for legacy reasons for those migrating from earlier versions of Unraid where it was the only supported file system type.
    • The original developer is in jail for murdering his wife. As a result there is only minimal involvement from Linux kernel developers in maintaining the ReiserFS drivers on new Linux kernel versions, so the chance of a new kernel causing problems with ReiserFS is higher than for other Linux file system types.
    • There is a hard limit of 16TB on a ReiserFS file system and commercial grade hard drives have now reached this limit.
    • Write performance can degrade significantly as the file system starts getting full.
    • It is extremely good at recovering from even extreme levels of file system corruption.
    • It is now deprecated for use with Unraid and should not be used by new users.

These formats are standard Linux formats and as such any array drive can easily be removed from the array and read on any Linux system. This can be very useful in any data recovery scenario. Note, however, that the initial format needs to be done on the Unraid system as Unraid has specific requirements around how the disk is partitioned that are unlikely to be met if the partitioning is not done on Unraid. Unfortunately these formats cannot be read as easily on Windows or MacOS systems as those OSes do not recognise the file system formats without additional software being installed that is not freely obtainable.
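
As a brief illustration of reading an array data disk on another Linux system, here is a sketch assuming the disk appears there as /dev/sdX, is not encrypted, and that /mnt/recovery is a hypothetical mount point created for the purpose:

 mkdir -p /mnt/recovery
 mount -o ro /dev/sdX1 /mnt/recovery   # mount the data partition read-only for recovery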

A user can use a mixture of these file system types in their Unraid system without it causing any specific issues. In particular the Unraid parity system is file system agnostic as it works at the physical sector level and is not even aware of the file system that is in use on any particular drive.

In addition drives can be encrypted.

If using a cache pool (i.e. multiple drives) then the only supported type is BTRFS and the pool is formatted as a single entity. By default this will be the BTRFS version of RAID1 to give redundancy, but other BTRFS options can be achieved by running the appropriate btrfs command.

Setting a File System type

The File System type for a new drive can be set in 2 ways:

  1. Under Settings->Disk Settings the default type for array drives and the cache pool can be set.
    • On a new Unraid system this will be XFS for array drives and BTRFS for the cache.
  2. Explicitly for individual drives by clicking on a drive on the Main tab (with the array stopped) and selecting a type from those offered.
    • When a drive is first added the file system type will show as auto which means use the setting specified under Settings->Disk Settings.
    • Setting an explicit type over-rides the global setting
    • The only supported format for a cache containing more than one drive is BTRFS.


Creating a File System (Format)

Before a disk can be used in Unraid, an empty file system of the desired type needs to be created on the disk. This is the operation commonly known as "format" and it erases any existing content on the disk.

WARNING:
If a drive has already been formatted by Unraid then if it now shows as unmountable you probably do NOT want to format it again unless you want to erase its contents. In such cases the appropriate action is usually instead to use the File System check/repair process detailed later.

The basic process to format a drive once the file system type has been set is:

  • Start the array
  • Any drives where Unraid does not recognize the format will be shown as unmountable and there will be an option to format unmountable drives
  • Check that ALL the drives shown as unmountable are ones you want to format. You do not want to accidentally format another drive and erase its contents
  • Click the check box to say you really want to format the drive.
  • Carefully read the resulting dialog that outlines the consequences
  • The Format button will now be enabled so if you want to go ahead with the format click on it.
  • The format process will start running for the specified disks.
    • If the disk has not previously been used by Unraid then it will start by rewriting the partition table on the drive to conform to the standard Unraid expects.
  • The format should only take a few minutes but if the progress does not automatically update you might need to refresh the Main tab.

Once the format has completed then the drive is ready to start being used to store files.

Drive shows as unmountable

A drive can show as unmountable in the Unraid GUI for two reasons:

  1. The disk has never been used in Unraid and you have just added it to a new disk slot in the array. In this case you want to follow the format procedure shown above to create a new empty file system on the drive so it is ready to receive files.
  2. File system corruption has occurred. This is not infrequent if a write to a disk fails for any reason and Unraid marks the disk as disabled, although it can occur at other times as well. In such a case you want to use the file system check/repair process documented below to get the disk back into a state where you can mount it again and see all its data. Note that this process can be carried out on a disk that is being ‘emulated’ by Unraid prior to carrying out any rebuild process.

Checking a File System

If a disk that was previously mounting fine suddenly starts showing as unmountable then this normally means that there is some sort of corruption at the file system level. This most commonly occurs after an unclean shutdown but could happen any time a write to a drive fails or if the drive ends up being marked as 'disabled' (i.e. with a red 'x' in the Unraid GUI). If the drive is marked as disabled and being emulated then the check is run against the emulated drive and not the physical drive.

IMPORTANT: At this point the Unraid GUI will be offering an option to format unmountable drives. This will erase all content on the drive and update parity to reflect this making recovering the data impossible/very difficult so do NOT do this unless you are happy to lose the contents of the drive.

To recover from file system corruption then one needs to run the tool that is appropriate to the file system on the disk. Points to note that users new to Unraid often misunderstand are:

  • Rebuilding a disk does not repair file system corruption
  • If a disk is showing as being emulated then the file system check and/or repair are run against the emulated drive and not the physical drive.

The process for checking a file system using the Unraid GUI is as follows:

  1. Stop the array
  2. Start the array in Maintenance mode
  3. Click on the drive in the Main tab
  4. Go to the section on the resulting dialog labelled Check Filesystem Status
  5. The tool that will be run is shown and the status at this point will show as Not available. The Options field may include a parameter that causes the selected tool to run in check-only mode so that the underlying drive is not actually changed.
  6. Click on the Check button to run the file system check
  7. Information on the check progress is now displayed. You may need to use the Refresh button to get it to update.
  8. If you are not sure what the results of the check mean you should copy the progress information so you can ask a question in the forum. When including this information as part of a forum post, mark it as code (using the <?> icon) to preserve the formatting, as otherwise it becomes difficult to read.

If you ever need to run a check on a drive that is not part of the array then you need to run the appropriate command from a console/terminal session. As an example for an XFS disk you would use a command of the form:

 xfs_repair /dev/sdX1

where X corresponds to the device identifier shown in the Unraid GUI. Points to note are:

  • The value of X can change when Unraid is rebooted so make sure it is correct for the current boot
  • Note the presence of the '1' on the end to indicate the partition to be checked.
  • The reason for not doing it this way on array drives is that although the disk would be repaired, parity would be invalidated, which can prejudice the chances of recovering a failed drive until valid parity has been re-established.

Repairing a File System

You typically run this just after running a check as outlined above, but if skipping that follow steps 1-4 to get to the point of being ready to run the repair. It is a good idea to enable the Help built into the GUI to get more information on this process.

If the drive is marked as disabled and being emulated then the repair is run against the emulated drive and not the physical drive. It is frequently done before attempting to rebuild a drive as it is the contents of the emulated drive that is used by the rebuild process.

  1. Remove any parameters from the Options field that would cause the tool to run in check-only mode.
  2. Add any additional parameters to the Options field required that are suggested from the check phase. If not sure then ask in the forum.
    • The Help build into the GUI can provide guidance on what options might be applicable.
  3. Press the Check button to start the repair process. You can now periodically use the Refresh button to update the progress information
  4. If the repair does not complete for any reason then ask in the forum for advice on how to best proceed if you are not sure.
    • If repairing an XFS formatted drive then it is quite normal for the xfs_repair process to give you a warning saying you need to provide the -L option to proceed (see the sketch after this list). Despite this ominous warning message this is virtually always the right thing to do and does not result in data loss.
    • When asking a question in the forum and including the output from the repair attempt as part of your post, use the Code option to preserve the formatting as otherwise it becomes difficult to read
  5. If the repair completes without error then stop the array and restart in normal mode. The drive should now mount correctly.
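
As a minimal sketch of what the check/repair looks like from a console session for an XFS array drive (assuming array disk X and addressing it via its /dev/mdX device so that parity stays in sync; the GUI drives the same tool for you):

 xfs_repair -n /dev/mdX   # check-only mode: report problems but change nothing
 xfs_repair /dev/mdX      # carry out the actual repair
 xfs_repair -L /dev/mdX   # only if xfs_repair asks for it: zero the log, then repair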

If at any point you do not understand what is happening then ask in the forum.

Changing a File System type

There may be times when you wish to change the file system type on a particular drive. The steps are outlined below.

IMPORTANT: These steps will erase any existing content on the drive so make sure you have first copied it elsewhere before attempting to change the file system type if you do not want to lose it.

  1. Stop the array
  2. Click on the drive whose format you want to change
  3. Change the format to the new one you want to use. Repeat if necessary for each drive to be changed
  4. Start the array
  5. There will now be an option on the main tab to format unmountable drives and showing what drives these will be. Check that only the drive(s) you expect show.
  6. Check the box to confirm the format and then press the Format button.
  7. The format will now start. It typically only takes a few minutes. There have been occasions where the status does not update but refreshing the Main tab normally fixes this.

If anything appears to go wrong then ask in the forum, adding your system diagnostics zip file (obtained via Tools->Diagnostics) to your post.

Notes:

  • For SSDs you can erase the current contents using
 blkdiscard /dev/sdX 
at the console where 'X' corresponds to what is currently shown in the Unraid GUI for the device. Be careful that you get it right as you do not want to accidentally erase the contents of the wrong drive.


Converting to a new File System type

There is the special case of changing a file system where you want to keep the contents of the drive. The commonest reason for doing this is those users who ran an older version of Unraid where the only supported file system type was reiserFS (which is now deprecated) and they want to switch the drive to using either XFS or BTRFS file system instead. However there may be users who want to convert between file system types for other reasons.

In simplistic terms the process is:

  1. Copy the data off the drive in question to another location. This can be elsewhere on the array or anywhere else suitable.
    • You do have to have enough free space to temporarily hold this data
    • Many users do such a conversion just after adding a new drive to the array as this gives them the free space required.
  2. Follow the procedure above for changing the file system type of the drive. This will leave you with an empty drive that is now in the correct format but that has no files on it.
  3. Copy the files you saved in step 1 back to this drive
  4. If you have multiple drives that need to be converted then do them one at a time.

This is a time-consuming process as you are copying large amounts of data. However most of this is computer time as the user does not need to be continually present closely watching the actual copying steps.


Reformatting a drive

If you want to reformat a drive to erase its contents while keeping the existing file system type, many users find that it is not obvious how to do this from the Unraid GUI.

The way to do this is to follow the above process for changing the file system type twice. The first time you change it to any other type, and then once it has been formatted to the new type repeat the process this time setting the type back to the one you started with.

This process will only take a few minutes, and as you go parity is updated accordingly.

Reformatting a cache drive

There may be times when you want to change the format used on the cache drive (or some similar operation) and preserve as much of its existing contents as possible. In such cases the recommended way to proceed that is least likely to go wrong is:

  1. Stop array.
  2. Disable docker and VM services under Settings
  3. Start array. If you have correctly disabled these services there will be no Docker or VMs tab in the GUI.
  4. Set all shares that have files on the cache and do not currently have "Use cache disk:" set to Yes to be Cache: Yes. Make a note of which shares you changed and what setting they had before the change
  5. Run mover from the Main tab; wait for completion (which can take some time to complete if there are a lot of files); check cache drive contents, should be empty. If it's not, STOP, post diagnostics and ask for help.
  6. Stop array.
  7. Set the cache drive's desired format to XFS or BTRFS. If you only have a single cache disk and are keeping that configuration, then XFS is the recommended format. XFS is only available as a selection if there is only one cache slot shown while the array is stopped.
  8. Start array.
  9. Verify that the cache drive and ONLY the cache drive shows unformatted. Select the checkbox saying you are sure, and format the drive.
  10. Set any shares that you changed to be Cache: Yes earlier to Cache: Prefer if they were originally Cache: Only or Cache: Prefer. If any were Cache: No, set them back that way.
  11. Run mover from the Main tab; wait for completion; check cache drive contents which should be back the way it was.
  12. Change any shares that were set to Use Cache: Only back to that option
  13. Stop array.
  14. Enable docker and VM services.
  15. Start array

There are other alternative procedures that might be faster if you are Linux aware, but the one shown above is the one that has proved most likely to succeed without error for the average Unraid user.

BTRFS Operations

There are a number of operations that are specific to BTRFS formatted drives that do not have a direct equivalent in the other formats.

Balance

Unlike most conventional filesystems, BTRFS uses a two-stage allocator. The first stage allocates large regions of space known as chunks for specific types of data, then the second stage allocates blocks like a regular filesystem within these larger regions. There are three different types of chunks:

  • Data Chunks: These store regular file data.
  • Metadata Chunks: These store metadata about files, including among other things timestamps, checksums, file names, ownership, permissions, and extended attributes.
  • System Chunks: These are a special type of chunk which stores data about where all the other chunks are located.

Only the type of data that the chunk is allocated for can be stored in that chunk. The most common case these days when you get a -ENOSPC error on BTRFS is that the filesystem has run out of room for data or metadata in existing chunks, and can't allocate a new chunk. You can verify that this is the case by running btrfs fi df on the filesystem that threw the error. If the Data or Metadata line shows a Total value that is significantly different from the Used value, then this is probably the cause.

What btrfs balance does is to send things back through the allocator, which results in space usage in the chunks being compacted. For example, if you have two metadata chunks that are both 40% full, a balance will result in them becoming one metadata chunk that's 80% full. By compacting space usage like this, the balance operation is then able to delete the now empty chunks, and thus frees up room for the allocation of new chunks. If you again run btrfs fi df after you run the balance, you should see that the Total and Used values are much closer to each other, since balance deleted chunks that weren't needed anymore.
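
A quick way to see these numbers for the cache pool from a console session (assuming the pool is mounted at the default /mnt/cache):

 btrfs fi df /mnt/cache   # show Total vs Used for the Data, Metadata and System chunk types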

The BTRFS balance operation can be run from the Unraid GUI by clicking on the pool on the Main tab and going to the Balance section of the resulting dialog. The current status information for the volume is displayed. You can optionally add parameters to be passed to the balance operation and then start it by pressing the Balance button.
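
A balance can also be started from a console session; a minimal sketch, where the usage filter value is an illustrative assumption (it restricts the balance to data chunks that are at most 75% full, which is usually enough to free empty chunks):

 btrfs balance start -dusage=75 /mnt/cache   # compact data chunks that are no more than 75% full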

Scrub

Scrubbing involves reading all the data from all the disks and verifying checksums. If any values are not correct, the data can be corrected by reading a good copy of the block from another drive. The scrubbing code also scans on read automatically. It is recommended that you scrub high-usage file systems once a week and all other file systems once a month.

You can initiate a check of the entire file system by triggering a file system scrub job. The scrub job scans the entire file system for integrity. It automatically attempts to report and repair any bad blocks that it finds along the way. Instead of going through the entire disk drive, the scrub job deals only with data that is actually allocated. Depending on the allocated disk space, this is much faster than performing an entire surface scan of the disk.

The BTRFS scrub operation can be run from the Unraid GUI by clicking on the drive on the Main tab and running scrub from the resulting dialog.
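
The equivalent console commands, assuming the default /mnt/cache mount point for the pool, would be:

 btrfs scrub start /mnt/cache    # start a scrub in the background
 btrfs scrub status /mnt/cache   # report progress and any checksum errors found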

Performance

THIS SECTION IS STILL UNDER CONSTRUCTION

A lot more detail still needs to be added

Array Write Modes

Unraid maintains real-time parity and the performance of writing to the parity protected array in Unraid is strongly affected by the method that is used to update parity.

There are fundamentally 2 methods supported:

  • Read/Modify/Write
  • Turbo Mode (also known as reconstruct write)

These are discussed in more detail below to help users decide which modes are appropriate to how they currently want their array to operate.

Setting the Write mode

The write mode is set by going to Settings->Disk Settings and looking for the Tunable (md_write_method) setting. The 3 options are:

  • Auto: Currently this operates just like setting the read/modify/write option but is reserved for future enhancement
  • read/modify/write
  • reconstruct write (a.k.a. Turbo write)

To change it, click on the option you want, then the Apply button. The effect should be immediate so you can change it at any time.
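
For those comfortable with a console session, the same tunable can also be changed on the fly with the mdcmd utility. Treat this as a sketch to verify on your own system: the value-to-mode mapping shown is an assumption, and the Settings page remains the supported way to change it:

 mdcmd set md_write_method 1   # assumed: 1 selects reconstruct write ('Turbo write')
 mdcmd set md_write_method 0   # assumed: 0 selects the auto / read-modify-write behaviour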

The different modes and their implications are discussed in more detail below

Read/Modify/Write mode

Historically, Unraid has used the "read/modify/write" method to update parity and to keep parity correct for all data drives.

Say you have a block of data to write to a drive in your array, and naturally you want parity to be updated too. In order to know how to update parity for that block, you have to know what is the difference between this new block of data and the existing block of data currently on the drive. So you start by reading in the existing block, and comparing it with the new block. That allows you to figure out what is different, so now you know what changes you need to make to the parity block, but first you need to read in the existing parity block. So you apply the changes you figured out to the parity block, resulting in a new parity block to be written out. Now you want to write out the new data block, and the parity block, but the drive head is just past the end of the blocks because you just read them. So you have to wait a long time (in computer time) for the disk platters to rotate all the way back around, until they are positioned to write to that same block. That platter rotation time is the part that makes this method take so long. It's the main reason why parity writes are so much slower than regular writes.

To summarize, for the "read/modify/write" method, you need to:

  • read in the parity block and read in the existing data block (can be done simultaneously)
  • compare the data blocks, then use the difference to change the parity block to produce a new parity block (very short)
  • wait for platter rotation (very long!)
  • write out the parity block and write out the data block (can be done simultaneously)

That's 2 reads, a calc, a long wait, and 2 writes.
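For single parity the underlying arithmetic is just XOR: the new parity is the old parity with the changed bits flipped. A minimal sketch, using made-up byte values purely for illustration:

  # Illustration only - the byte values are invented for the example
  old_data=0xA5; new_data=0x3C; old_parity=0x17
  new_parity=$(( old_parity ^ old_data ^ new_data ))   # flip only the bits that changed
  printf 'new parity byte: 0x%02X\n' $new_parity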

The advantages of this approach are:

  • Only the parity drive(s) and the drive being updated need to be spun up.
  • Minimises power usage as array drives can be kept spun down when not being accessed
  • Does not require all the other array drives to be working perfectly

Turbo write mode

More recently Unraid introduced the Turbo write mode (often called "reconstruct write").

We start with that same block of new data to be saved, but this time we don't care about the existing data or the existing parity block. So we can immediately write out the data block, but how do we know what the parity block should be? We issue a read of the same block on all of the *other* data drives, and once we have them, we combine all of them plus our new data block to give us the new parity block, which we then write out! Done!

To summarize, for the "reconstruct write" method, you need to:

  • write out the data block while simultaneously reading in the data blocks of all other data drives
  • calculate the new parity block from all of the data blocks, including the new one (very short)
  • write out the parity block

That's a write and a bunch of simultaneous reads, a calc, and a write, but no platter rotation wait! The upside is it can be much faster.
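Again, for single parity the calculation is a plain XOR, this time across the corresponding block from every data drive rather than a delta against the old data. A minimal sketch with invented values for an array with three data drives:

  # Illustration only - invented byte values for an array with 3 data drives
  disk1_new=0x3C; disk2=0x42; disk3=0x99
  new_parity=$(( disk1_new ^ disk2 ^ disk3 ))   # XOR of the block from every data drive
  printf 'new parity byte: 0x%02X\n' $new_parity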

The downside is:

  • ALL of the array drives must be spinning, because they ALL are involved in EVERY write.
  • Increased power draw due to the need to keep all drives spinning
  • All drives must be reading without error.

Ramifications

So what are the ramifications of this?

  • For some operations, like parity checks and parity builds and drive rebuilds, it doesn't matter, because all of the drives are spinning anyway.
  • For large write operations, like large transfers to the array, it can make a big difference in speed!
  • For a small write, especially at an odd time when the drives are normally sleeping, all of the drives have to be spun up before the small write can proceed.
  • And what about those little writes that go on in the background, like file system housekeeping operations? EVERY write at any time forces EVERY array drive to spin up. So you are likely to be surprised at odd times when checking on your array, and expecting all of your drives to be spun down, and finding every one of them spun up, for no discernible reason.
  • So one of the questions to be faced is, how do you want your various write operations to be handled. Take a small scheduled backup of your phone at 4 in the morning. The backup tool determines there's a new picture to back up, so tries to write it to your unRAID server. If you are using the old method, the data drive and the parity drive have to spin up, then this small amount of data is written, possibly taking a couple more seconds than Turbo write would take. It's 4am, do you care? If you were using Turbo write, then all of the drives will spin up, which probably takes somewhat longer spinning them up than any time saved by using Turbo write to save that picture (but a couple of seconds faster in the save). Plus, all of the drives are now spinning, uselessly.
  • Another possible problem: if you are in Turbo mode and you are watching a movie streaming to your player, a write kicks in to the server and starts spinning up ALL of the drives, causing that well-known pause and stuttering in your movie. Who wants to deal with the whining that starts then?

Currently, you only have the option to use the old method or the new (currently the Auto option means the old method). The plan is to add the true Auto option that will use the old method by default, *unless* all of the drives are currently spinning. If the drives are all spinning, then it slips into Turbo. This should be enough for many users. It would normally use the old method, but if you planned a large transfer or a bunch of writes, then you would spin up all of the drives - and enjoy faster writing.

The Auto option was added with the intention that the system could automatically switch modes depending on current array activity, but this has not been implemented so far. The problem is knowing when a drive is spinning, and being able to detect it without noticeably affecting write performance, which would ruin the very benefit we are trying to achieve. If on every write you have to query each drive for its status, you will noticeably impact I/O performance. So to maintain good performance, you need another function working in the background keeping near-instantaneous track of spin status, and providing a single flag for the writer to check (whether the drives are all spun up or not) to know which method to use.

Many users would like tighter and smarter control of which write mode is in use. There is currently no official way of doing this, but you could try searching for "Turbo Write" on the Apps tab for unofficial ways to get better control.

Using a Cache Drive

It is possible to use a Cache Drive/Pool to improve the perceived speed of writing to the array. This can be done on a share-by-share basis using the Use Cache setting available for each share by clicking on the share name on the Shares tab in the GUI. It is important to realize that using the cache has not really sped up writing files to the array - it is just that such writes now occur when the user is not waiting for them.

Points to note are:

  • The Yes setting for Use Cache causes new files for the share to initially be written to the cache and later moved to the parity protected array when mover runs.
  • Writes to the cache run at the full speed the cache is capable of.
  • It is not uncommon to use SSDs in the cache to get maximum performance.
  • Moves from cache to array are still comparatively slow, but since mover is normally scheduled to run when the system is otherwise idle this is not visible to the end-user.
  • There is a Minimum Free Space setting under Settings->Global Share Settings, and if the free space on the cache falls below this value Unraid will stop trying to write new files to the cache. Since Unraid does not know the final size of a file when it first creates it, it is recommended that this setting be at least as large as the biggest file you expect to write to the share, so that Unraid does not select the cache for a file that will not fit in the space available. For example, if the largest file you ever expect to copy is 40GB, set Minimum Free Space to at least 40GB. This stops the write failing with an 'out of space' error when the free space gets exhausted. (A quick way to check the current free space on the cache is shown after this list.)
  • If there is not sufficient free space on the cache then writes will start by-passing the cache and revert to the speeds that would be obtained when not using the cache.
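A quick way to see how much space is currently free on the cache when choosing a Minimum Free Space value (the mount point is an example; a named pool would appear under its own name):

  # Show free space on the cache pool (example mount point)
  df -h /mnt/cache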

Read Modes

Normally read performance is determined by the maximum speed that a file can be read off a drive. Unlike some other forms of RAID an Unraid system does not utilise striping techniques to improve performance as every file is constrained to a single drive.

If a disk is marked as disabled and being emulated then Unraid needs to reconstruct its contents on the fly by reading the appropriate sectors of all the good drives and the parity drive(s). In such a case the read performance is going to be determined primarily by the slowest drives in the system.

It is also worth emphasising that if there is any array operation going on such as a parity check or a disk rebuild then read performance will be degraded significantly due to drive head movements caused by disk contention between the two operations.

Share Management

THIS SECTION IS STILL UNDER CONSTRUCTION

A lot more detail still needs to be added

Once you have assigned some devices to Unraid and started the array, you can create shares to simplify how you store data across multiple disks in the array. Unraid will automatically create a handful of shares for you that it needs to support common plugins, containers, and virtual machines, but you can also create your own shares for storing other types of data. Unraid supports 2 types of share:

  • User Shares
  • Disk Shares

You can control which of these types of shares are to be used under Settings->Global Share Settings. The default on Unraid is to have User Shares enabled but Disk Shares disabled.

It is important to realize that these are two different views of the same underlying file system. Every file/folder that appears under a User Share will also appear under the Disk Share for the physical drive that is storing the file/folder.

User Shares

User Shares can be enabled/disabled via Settings->Global Share Settings.

From the Shares tab, you can either create a new share or edit an existing share. Click the Help icon in the top-right of the Unraid webGui when configuring shares for more information on the settings available.

User Shares are implemented by using Linux Fuse file system support. What they do is provide an aggregated view of all top level folders of the same name across the cache and the array drives. The name of this top level folder is used as the share name. From a user perspective this gives a view that can span multiple drives when viewed at the network level. Note that no individual file will span multiple drives - it is just the directory level that is given a unified view.

When viewed at the Linux level, User Shares appear under the path /mnt/user. It is important to note that a User Share is just a logical view imposed on top of the underlying physical file system, so you can see the same files if you look at the physical level (as described below for Disk Shares).
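As an illustration of the two views, the share, disk and file names below are hypothetical; the point is simply that the same physical file is reachable through both paths:

  # Hypothetical example: a file stored physically on disk1 in the 'Media' share
  ls -l /mnt/disk1/Media/holiday.mkv   # the physical (Disk Share) view
  ls -l /mnt/user/Media/holiday.mkv    # the same file seen through the User Share view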

  • Current releases of Unraid also include the mount point /mnt/user0 that shows the files in User Shares OMITTING any files for a share that are on the cache drive. However, this mount point is now deprecated and likely to stop being available in a future Unraid release.

Normally one creates User Shares using the Shares tab. However, if you manually create a top level folder on any drive the system will automatically consider this to be a User Share and give it default settings.

Which physical drive in the main array is used to store a physical file is controlled by a number of settings for the share:

  • Included or excluded drives: These settings allow you to control which array drives can hold files for the share. Never set both values; set only the one that is most convenient for you. If no drives are specified under these settings then all the drives allowed under Settings->Global Share Settings can be used.
  • Minimum free space: The amount of free space that must remain on a drive for it to be eligible to receive new files for the share.
  • Allocation method: Controls how Unraid chooses between the eligible drives when deciding where to place a new file (e.g. High-water, Fill-up, Most-free).
  • Split level: This setting controls how files should be grouped.
Important: if the Split level setting conflicts with the Minimum free space or Allocation method settings when deciding which drive should be used, the Split level setting always wins. This means that you can get an out-of-space error even though there is plenty of space on other array drives that the share could logically use.

Important: The Linux file systems used by Unraid are case-sensitive, while the SMB share system is not. For example, at the Linux level a folder called 'media' is different from one called 'Media'. At the network level case is ignored, so 'media', 'Media' and 'MEDIA' would all be the same share. Taking this example further, only the contents of one of the underlying 'media' or 'Media' folders will appear at the network share level - and it can be non-obvious which one this will be.
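A hypothetical illustration of the point (the paths, server name and share names are invented):

  # At the Linux level these are two different, unrelated folders
  mkdir /mnt/disk1/media /mnt/disk1/Media
  # Over the network, \\tower\media and \\tower\Media refer to the same share name,
  # and only the contents of one of the two folders will be visible through it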

Mover Behavior with User Shares

Unraid includes an application called mover that is used in conjunction with User Shares. Its behavior is controlled by the "Use Cache for new files" setting under each User Share. The way these different settings operate is as follows:

  • Yes: Write new files to the cache as long as the free space on the cache is above the Minimum free space value. If the free space is below that then by-pass the cache and write the files directly to the main array.
When mover runs it will attempt to move files to the main array as long as they are not currently open. Which array drive will get the file is controlled by the combination of the Allocation method and Split level setting for the share.
  • No: Write new files directly to the array.
When mover runs it will take no action on files for this share even if there are files on the cache that logically belong to this share.
  • Only: Write new files directly to the cache. If the free space on the cache is below the Minimum free space setting for the cache then the write will fail with an out-of-space error.
When mover runs it will take no action on files for this share even if there are files on the main array that logically belong to this share.
  • Prefer: Write new files to the cache if the free space on the cache is above the Minimum free space setting for the share, and if the free space falls below that value then write the files to the main array instead.
When mover runs it will attempt to move any files for this share that are on the main array back to the cache as long as the free space on the cache is above the Minimum free space setting for the cache
Prefer is the default setting for the appdata and System shares that are used to support the Docker and VM sub-systems. In typical use you want the files/folders belonging to these shares to reside on the cache, as you get much better performance from Docker containers and VMs if their files are not on the main array (due to the cost of maintaining parity on the main array significantly slowing down write operations).
This setting also works for the share even if you do not (yet) have a physical cache drive, which is why it is the default for these shares rather than Only.
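Mover normally runs on its configured schedule, but it can also be invoked by hand when you want cached files flushed to the array immediately. The path below is where the script lives on current releases; the exact invocation may differ between Unraid versions.

  # Run mover manually (it skips any files that are currently open)
  /usr/local/sbin/mover
  # On some newer releases the equivalent is: /usr/local/sbin/mover start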

Disk Shares

These are shares that relate to individual drives within the Unraid system. By default, if User Shares are enabled then Disk Shares are not enabled. If you want them, they can be enabled under Settings->Global Share Settings. They will then appear under a new section on the Shares tab.

When viewed at the Linux level, Disk Shares appear directly under /mnt with a name corresponding to the drive name (e.g. /mnt/disk1 or /mnt/cache).

IMPORTANT
If you have both Disk Shares and User Shares enabled then there is an important restriction that you must observe if you want to avoid potential data loss. What you must NEVER do is copy between a User Share and a Disk Share in the same copy operation where the folder name on the Disk Share corresponds to the User Share name. This is because at the base system level Linux does not understand User Shares, and therefore does not know that a file on a Disk Share and a file on a User Share can be different views of the same physical file. If you mix the share types in the same copy command you can end up trying to copy the file to itself, which results in the file being truncated to zero length and its content being lost.

There is no problem if the copy is between shares of the same type, or copying to/from a disk mounted as an Unassigned Device.
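To make the warning concrete, here is a hypothetical example (the share and file names are invented) of the kind of command that can destroy data, together with safe alternatives:

  # DANGEROUS - mixes a Disk Share path with the matching User Share path;
  # 'Media' on disk1 and the 'Media' User Share can be the SAME file underneath
  cp /mnt/disk1/Media/film.mkv /mnt/user/Media/

  # Safe - both paths are User Share paths
  cp /mnt/user/Media/film.mkv /mnt/user/Backups/

  # Safe - both paths are Disk Share paths
  cp /mnt/disk1/Media/film.mkv /mnt/disk2/Media/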

Network access

You can control which protocols should be supported for accessing the Unraid server across the network. Click on Settings->Network Services to see the various options available. These options are:

  • SMB: This is the standard protocol used by Windows systems. It is also widely implemented on other operating systems.
  • NFS: Network File System. This is a protocol widely used on Unix-compatible systems.
  • AFP: Apple Filing Protocol. This is the protocol that has historically been used on Apple Mac systems. It is now a deprecated option, as the latest versions of macOS use SMB as the preferred protocol for accessing files and folders over the network.
  • FTP: File Transfer Protocol.

When you click on the name of a share on the Shares tab then there is a section that allows you to control the visibility of the share on the network for each of the protocols you have enabled. The setting is labelled Export and has the following options:

  • Yes: With this setting the share will be visible across the network.
  • Yes (Hidden): With this setting the share can be accessed across the network but will not be listed when browsing the shares on the server. Users can still access the share as long as they know its name and are prepared to enter it manually.
  • No: With this option selected then it is not possible to access the share across the network.


Access Permissions

When you click on the name of a share on the Shares tab then there is a section that allows you to control the access rights of the share on the network for each of the protocols you have enabled. The setting is labelled Security and has the following options:

  • Public: All users have both read and write access to the contents of the share
  • Private: You select which of your users have access, and for each user whether that user has read/write or read-only access (guests have no access).
  • Secure: All users, including guests, have read access; you select which of your users have write access.

Windows 'Gotcha'

There is an issue with the way Windows handles network shares that many users fall foul of:

  • Windows only allows a single username to be used to connect to a specific server at any given time. Attempts to then connect to a different, non-public share on the same server put up a Username/Password prompt, and this fails as though you have entered an incorrect password for that username. If you have any shares on the server set to Private or Secure access, it can therefore be important to connect to such a share first, before any shares set for Public access, which may connect as a guest user and make subsequent attempts to connect with a specific user fail.
  • A workaround that can help avoid this issue is that if you access a server both by its network name and by its IP address, Windows will treat it as two separate servers as far as authentication is concerned.