Unraid OS 6.9.0
Summary of New Features
- 1 Multiple Pools
- 2 Better Module/Third Party Driver Support
- 3 Docker
- 4 Virtualization
- 5 Language Translation
- 6 Other
Multiple Pools
This feature permits you to define up to 35 named pools, of up to 30 storage devices per pool. Pools are created and managed via the Main page.
- Note: A pre-6.9.0 cache disk/pool is now simply a pool named "cache". When you upgrade a server which has a cache disk/pool defined, a backup of config/disk.cfg will be saved to config/disk.cfg.bak, and then cache device assignment settings are moved out of config/disk.cfg and into a new file, config/pools/cache.cfg. If you later revert to a pre-6.9.0 Unraid OS release you will lose your cache device assignments and will have to manually re-assign devices to cache. As long as you reassign the correct devices, data should remain intact.
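The file moves described above can be sketched as shell commands, relative to the config directory on the USB flash device. The key name 'cacheId' below is a placeholder for illustration, not necessarily the real setting name:

```shell
# Sketch of the one-time migration performed on upgrade. Run here in a
# temp dir standing in for 'config' on the flash device. 'cacheId' is a
# placeholder key name, not the real setting.
cd "$(mktemp -d)"
mkdir -p config/pools
printf 'startArray="yes"\ncacheId="example-device-id"\n' > config/disk.cfg
cp config/disk.cfg config/disk.cfg.bak                     # backup saved first
grep    '^cache' config/disk.cfg > config/pools/cache.cfg  # cache keys move out
grep -v '^cache' config/disk.cfg > config/disk.cfg.new     # everything else stays
mv config/disk.cfg.new config/disk.cfg
```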
When you create a user share, or edit an existing user share, you can specify which pool should be associated with that share. The assigned pool functions identically to current cache pool operation.
Something to be aware of: when a directory listing is obtained for a share, the unRAID array disk volumes and all pools which contain that share are merged in this order:
- the pool assigned to the share
- the unRAID array disks
- all other pools, in strverscmp() order
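GNU `sort -V` applies the same version-aware comparison as strverscmp(), so you can preview how a set of pool names would be ordered (the pool names here are hypothetical):

```shell
# Preview strverscmp()-style ordering of pool names.
# 'sort -V' (GNU coreutils) compares numeric runs numerically,
# so "pool10" sorts after "pool2" rather than before it.
printf '%s\n' pool10 pool2 cache pool1 | sort -V
```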
A single-device pool may be formatted with either xfs, btrfs, or (deprecated) reiserfs. A multiple-device pool may only be formatted with btrfs. A future release will include support for multiple "unRAID array" pools, as well as a number of other pool types.
- Note: Something else to be aware of: let's say you have a 2-device btrfs pool. This is what btrfs calls "raid1", and what most people would understand to be "mirrored disks". This is mostly true, in that the same data exists on both disks, but not necessarily at the block level. Now let's say you create another pool, and you un-assign one of the devices from the existing 2-device btrfs pool and assign it to the new pool. You now have two single-device btrfs pools. Upon array Start a user might understandably assume there are now two pools with exactly the same data. However, this is not the case. Instead, when Unraid OS sees that a btrfs device has been removed from an existing multi-device pool, upon array Start it will run wipefs on that device so that upon mount it will not be included in the old pool. This effectively deletes all the data on the moved device.
Additional btrfs balance options
Multiple-device pools are still created using the btrfs raid1 profile by default. If you have 3 or more devices in a pool you may now rebalance to the raid1c3 profile (3 copies of data on separate devices). If you have 4 or more devices in a pool you may now rebalance to raid1c4 (4 copies of data on separate devices). We also modified the raid6 balance operation to set metadata to raid1c3 (previously raid1).
However, we have noticed that applying one of these balance filters to a completely empty volume leaves some data extents with the previous profile. The solution is to simply run the same balance again. We consider this to be a btrfs bug and if no solution is forthcoming we'll add the second balance to the code by default. For now, it's left as-is.
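If balancing from a terminal rather than the GUI, the conversions above correspond to commands like the following (the mount point /mnt/cache is an example):

```shell
# Convert a 3-or-more-device pool to raid1c3 (3 copies on separate devices).
btrfs balance start -dconvert=raid1c3 -mconvert=raid1c3 /mnt/cache

# Per the note above, inspect the profiles afterward and re-run the
# same balance if any extents still report the previous profile:
btrfs filesystem df /mnt/cache
btrfs balance start -dconvert=raid1c3 -mconvert=raid1c3 /mnt/cache
```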
SSD 1 MiB Partition Alignment
We have added another partition layout where the start of partition 1 is aligned on a 1 MiB boundary. That is, for devices which present 512-byte sectors, partition 1 will start in sector 2048; for devices with 4096-byte sectors, in sector 256. This partition type is now used when formatting all unformatted non-rotational storage (only).
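The sector numbers follow directly from dividing the 1 MiB offset by the sector size:

```shell
# 1 MiB = 1048576 bytes; partition 1 starts at that byte offset.
echo $((1048576 / 512))    # 512-byte sectors: partition 1 starts at sector 2048
echo $((1048576 / 4096))   # 4096-byte sectors: partition 1 starts at sector 256
```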
It is not clear what benefit 1 MiB alignment offers. For some SSD devices you won't see any difference; for others, perhaps a big performance difference. LimeTech does not recommend re-partitioning an existing SSD device unless you have a compelling reason to do so (or your OCD just won't let it be).
To re-partition an SSD it is necessary to first wipe out any existing partition structure on the device. Of course this will erase all data on the device. Probably the easiest way to accomplish this is, with the array Stopped, identify the device to be erased and use the 'blkdiscard' command:
blkdiscard /dev/xxx # for example /dev/sdb or /dev/nvme0n1 etc
WARNING: be sure you type the correct device identifier because all data will be lost on that device!
Upon next array Start the device will appear Unformatted, and since there is now no partition structure, Unraid OS will create one using the new layout.
SMART handling and Storage Threshold Warnings
There is a configuration file named config/smart-one.cfg which stores information related to SMART, for example the controller type to be passed to smartctl for purposes of fetching SMART information. Also stored in that file are volume warning and critical free-space thresholds. Starting with this release, these configuration settings are handled differently.
In the case of SMART configuration, settings are saved by device-ID instead of by slot-ID. This permits us to manage SMART for unassigned devices. It also permits SMART configuration to "follow the device" no matter which slot it is assigned to. The implication, however, is that you must manually reconfigure SMART settings for all devices which vary from the default.
The volume warning and critical free-space threshold settings have been moved out of this configuration file and are now saved in config/disk.cfg (for the unRAID array) and in the pool configuration files for each pool. The implication is that you must manually reconfigure these settings for all volumes which vary from the default.
Better Module/Third Party Driver Support
Recall that we distribute Linux modules and firmware in separate squashfs files which are read-only mounted at /lib/modules and /lib/firmware. We now set up an overlayfs on each of these mount points, making it possible to install 3rd party modules using the plugin system, provided those modules are built against the currently running kernel version. In addition, we define a new directory on the USB flash boot device called config/modprobe.d, the contents of which are copied to /etc/modprobe.d early in the boot sequence, before the Linux kernel loads any modules.
This technique is used to install the Nvidia driver (see below) and may be used by Community Developers to provide an easier way to add modules not included in base Unraid OS: no need to build custom bzimage, bzmodules, bzfirmware and bzroot files!
Passing Parameters to Modules
Conf files placed in config/modprobe.d may be used to specify options and pass arguments to modules.
As an example: at present we do not have UI support for specifying which network interface should be "primary" in a bond; the bonding driver simply selects the first member by default. In some configurations it may be useful to specify an explicit preferred interface, for example if you have a bond with a 1Gbit/s (eth0) and 10Gbit/sec (eth1) interface.
Since setting up the bond involves loading the bonding kernel module, you can specify which interface to set as primary using this method:
Create a file on the flash named config/modprobe.d/bonding.conf containing this single line, and then reboot:
options bonding primary=eth1
After reboot you can check if it worked by typing this command:
cat /proc/net/bonding/bond0
where you should see the selected interface show up as "Primary Slave".
The goal of creating squashfs overlays mounted at /lib/firmware, along with providing a mechanism for defining custom module parameters, is to provide a way of integrating third-party drivers into Unraid OS without requiring custom builds of the bz* files. One of the most popular third-party drivers requested for Unraid OS is Nvidia's GPU Linux driver. This driver is required for transcoding capability in Docker containers. Providing this driver as a plugin for Unraid OS has previously required a lot of work: setting up a dev environment, compiling the driver and tools, unpacking bzmodules, adding the driver, creating new bzmodules, and then finally replacing it in the USB flash root directory. This work has been accomplished by members @chbmb, @bassrock, and others. Building on their work, along with member @ich777, we now create separate Nvidia driver packages built against each new Unraid OS release that uses a new kernel, but not directly included in the base bz* distribution.
A JSON file describing the driver version(s) supported with each kernel can be downloaded here:
Each driver package includes the Nvidia Linux GPU driver along with a set of container tools. The container tools include: nvidia-container-runtime, nvidia-container-toolkit and libnvidia-container. These tools are useful in facilitating accelerated transcoding in Docker containers. A big Thank You! to Community member @ich777 for help and providing the tools. @ich777 has also provided a handy plugin to facilitate installing the correct driver.
Inclusion of third-party modules into Unraid OS using the plugin system is still a work-in-progress. For example, another candidate would be to replace the Linux in-tree Intel ethernet drivers with Intel's custom Linux drivers.
Docker
It's now possible to select different icons for multiple containers of the same type. This change necessitates a re-download of the icons for all your installed docker applications. Expect a delay when initially loading the Dashboard or the Docker tab while this happens, before the containers show up.
We also made some changes to add flexibility in assigning storage for the Docker engine. First, 'rc.docker' will detect the filesystem type of /var/lib/docker. We now support either btrfs or xfs, and the docker storage driver is set appropriately.
Next, 'mount_image' is modified to support a loopback file formatted with either btrfs or xfs, depending on the suffix of the loopback file name. For example, if the file name ends with ".img", as in "docker.img", then we use mkfs.btrfs. If the file name ends with "-xfs.img", as in "docker-xfs.img", then we use mkfs.xfs.
In addition, we added the ability to bind-mount a directory instead of using a loopback. If the file name does not end with ".img", then the code assumes it is the name of a directory (presumably on a share) which is bind-mounted onto /var/lib/docker. For example, if the setting is "/mnt/user/system/docker/docker", then we first create, if necessary, the directory "/mnt/user/system/docker/docker". If this path is on a user share, we then "de-reference" the path to get the disk path, which is then bind-mounted onto /var/lib/docker. For example, if "/mnt/user/system/docker/docker" is on "disk1", then we would bind-mount "/mnt/disk1/system/docker/docker". Caution: the share should be cache-only or cache-no so that 'mover' will not attempt to move the directory; the script does not check this.
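The decision logic described above can be sketched as a case on the configured path. The function name and structure here are a simplification for illustration, not the actual 'mount_image' script:

```shell
# Sketch of how the Docker storage setting is interpreted, per the
# rules above (simplified; not the real Unraid script).
docker_storage_mode() {
    case "$1" in
        *-xfs.img) echo "loopback formatted with mkfs.xfs" ;;
        *.img)     echo "loopback formatted with mkfs.btrfs" ;;
        *)         echo "directory bind-mounted onto /var/lib/docker" ;;
    esac
}

docker_storage_mode "/mnt/user/system/docker/docker-xfs.img"
docker_storage_mode "/mnt/user/system/docker/docker.img"
docker_storage_mode "/mnt/user/system/docker/docker"
```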
Virtualization
We integrated changes to the System Devices page by user @Skitals with refinements by user @ljm42. You can now select PCI devices to isolate from Linux upon boot simply by checking some boxes. This makes it easier to reserve those devices for assignment to VM's. This technique is known as stubbing, because a stub, or dummy driver, is assigned to the device at boot, preventing the real Linux driver from being assigned.
One might wonder: if we can blacklist individual drivers, why do we need to stub those devices in order to assign them to VM's? The answer is that blacklisting alone can suffice. But if you have multiple devices of the same type, where some need to be passed to a VM and some need to have the host Linux driver installed, then you must use stubbing for the devices to be passed to VM's.
Note: If you had the VFIO-PCI Config plugin installed, you should remove it as that functionality is now built-in to Unraid OS 6.9. Refer also @ljm42's excellent guide.
Language Translation
A huge amount of work and effort has been contributed by @bonienl to provide multiple-language support in the Unraid OS Management Utility, aka webGUI. Several language packs are now available, with several more in the works. Thanks to @Squid, language packs are installed via the Community Applications plugin - look for a new category entitled Language.
Note: Community Applications must be up to date to install languages. See also here.
Each language pack exists in public Unraid organization github repos. Interested users are encouraged to clone them and issue Pull Requests to correct translation errors. Language translations and PR merging are managed by @SpencerJ.
GPU Driver Integration
Unraid OS now includes selected in-tree GPU drivers: ast (Aspeed), i915 (Intel), amdgpu and radeon (AMD). For backward compatibility, these drivers are blacklisted by default via corresponding conf files in /etc/modprobe.d:
- /etc/modprobe.d/ast.conf
- /etc/modprobe.d/i915.conf
- /etc/modprobe.d/amdgpu.conf
- /etc/modprobe.d/radeon.conf
Each of these files has a single line which blacklists the driver, preventing it from being loaded by the Linux kernel.
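For example, /etc/modprobe.d/amdgpu.conf consists of the standard modprobe blacklist directive (shown here for illustration):

```
blacklist amdgpu
```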
It is possible to override the settings in these files by creating a custom conf file in the config/modprobe.d directory on your USB flash boot device. For example, to un-blacklist the amdgpu driver, type this command in a Terminal session:
touch /boot/config/modprobe.d/amdgpu.conf # create an empty file
When Unraid OS boots, before the Linux kernel executes device discovery, we copy any files from /boot/config/modprobe.d to /etc/modprobe.d. Since amdgpu.conf on the flash is an empty file, it will effectively cancel the driver from being blacklisted.
These out-of-tree drivers are currently included:
- QLogic QLGE 10Gb Ethernet Driver Support (from staging)
- RealTek r8125: version 9.003.05 (included for newer r8125)
- HighPoint rr272x_1x: version v1.10.6-19_12_05 (per user request)
Note that as we update Linux kernel, if an out-of-tree driver no longer builds, it will be omitted.
These drivers are currently omitted:
- Highpoint RocketRaid r750 (does not build)
- Highpoint RocketRaid rr3740a (does not build)
- Tehuti Networks tn40xx (does not build)
If you require one of these drivers, please create a Bug Report and we'll spend some time looking for alternatives. Better yet, pester the manufacturer of the controller and get them to update their drivers.
Other
Base packages have all been updated to their latest versions. In addition, Linux PAM has been integrated. This will permit us to install 2-factor authentication packages in a future release.
There are changes in /etc/ssh/sshd_config to improve security (thanks to @Mihai and @ljm42 for suggestions):
- only root user is permitted to login via ssh (remember: no traditional users in Unraid OS - just 'root')
- non-null password is now required
- non-root tunneling is disabled
In addition, upon upgrade we ensure the 'config/ssh/root' directory exists on the USB flash boot device; and, we have set up a symlink: /root/.ssh to this directory. This means any files you might put into /root/.ssh will be persistent across reboots.
Note: if you examine the sshd startup script (/etc/rc.d/rc.sshd), upon boot all files from the 'config/ssh' directory are copied to /etc/ssh (but not subdirectories). The purpose is to restore the host ssh keys; however, this mechanism can also be used to define custom ssh_config and sshd_config files.
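The files-only copy described above can be sketched as follows. The directories are parameterized so the sketch can run anywhere; the exact command used by rc.sshd may differ:

```shell
# Sketch of the boot-time copy done by /etc/rc.d/rc.sshd: regular files
# in config/ssh are copied to /etc/ssh, but subdirectories (such as
# config/ssh/root) are not.
copy_ssh_files() {   # $1 = source dir, $2 = destination dir
    find "$1" -maxdepth 1 -type f -exec cp -p {} "$2"/ \;
}

# demo with temporary directories standing in for the real paths
src=$(mktemp -d); dst=$(mktemp -d)
mkdir -p "$src/root"
touch "$src/sshd_config" "$src/ssh_host_ed25519_key" "$src/root/authorized_keys"
copy_ssh_files "$src" "$dst"
ls "$dst"
```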
"unexpected GSO errors"
If your system log is being flooded with errors such as:
Jun 20 09:09:21 Tower kernel: tun: unexpected GSO type: 0x0, gso_size 31, hdr_len 66
You need to edit each VM and change the model type for the Ethernet bridge from "virtio" to "virtio-net". In most cases this can be accomplished simply by clicking Update in "Form View" on the VM Edit page. For other network configs it may be necessary to directly edit the xml. Example:
<interface type='bridge'>
  <mac address='xx:xx:xx:xx:xx:xx'/>
  <source bridge='br0'/>
  <model type='virtio-net'/>
  <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
</interface>
AFP support has been removed.
Even Apple no longer uses this protocol.