UnRAID 6/Overview

What is Unraid?

Unraid® is an embedded operating system designed to give you the ultimate control over your hardware. In addition to performing the duties of a robust NAS (network-attached storage), Unraid is also capable of acting as an application server and virtual machine host. Unraid installs to and boots from a USB flash device and loads into a root RAM file system. By using a modern Linux kernel with up-to-date hardware drivers, Unraid can operate on nearly any 64-bit (x86_64) system with minimal consumption of system memory. All configuration data relating to the operating system is stored on the flash device and loaded at the same time as the operating system itself. By combining the benefits of both hardware and software agnosticism into a single OS, Unraid provides a wide variety of ways to store, protect, serve, and play the content you download or create.

The capabilities of Unraid are separated into three core parts: software-defined NAS, application server, and localized virtualization.

Network Attached Storage

At its core, Unraid is a hardware-agnostic solution that can turn almost any 64-bit capable system into a NAS. Unraid can manage an array of drives (connected via IDE, SATA, or SAS) that vary in size, speed, brand, and filesystem. In addition, by eliminating the use of traditional RAID-based technologies, it can scale on demand: you add more drives without needing to rebalance existing data. Unraid's NAS functionality consists of a parity-protected array, user shares, and an optional cache pool.

Parity-Protected Array

The primary purpose of an Unraid array is to manage and protect the data of any group of drives (JBOD) by adding a dedicated parity drive. A parity drive provides a way to reconstruct all of the data from a failed drive onto a replacement. Amazing as it seems, a single parity drive can add protection for all of the others! The contents of a hard drive can be thought of as a very long stream of bits, each of which can only be a zero or a one. Sum the values of the nth bit on every data drive: that sum is either even or odd. The nth parity bit is then set to 0 or 1 so that the overall total, parity included, always comes out even. If a data drive fails, that parity information can be used to deduce the exact bit values of the failed drive and perfectly rebuild it on a replacement drive. Here's an example:

Figure 1. Showing bit settings on a set of disks without a parity device.

In the picture above, we have three drives, each with a stream of bits that varies in count based on the device size. By themselves, these devices are unprotected, and should any of them fail, data will be lost. To protect ourselves from failure, we must add a fourth disk to serve as parity. The parity disk must be of equal or greater size than the largest data disk. To calculate the value of each bit on the parity disk, we only need to know the sum of each column. If the sum of a column is an even number, the parity bit should be a 0. If the sum of a column is an odd number, the parity bit should be a 1. Here's the same image as before, but with parity calculated per column:

Figure 2. Showing bit settings on a set of disks with a parity device.

Now let's pretend that drive 2 in our example has suffered a failure and a new drive has been purchased to replace it:

Figure 3. Solve for the missing bits using parity.

To rebuild the data on the newly replaced disk, we use the same method as before, but instead of solving for the parity bit, we solve for the missing bit. For column 1, the sum of the surviving bits (parity included) would be 0, an even number, so the missing bit must be a 0 to keep the total even. For column 6, the sum would be 1, an odd number, so the missing bit must be a 1.
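
To make the arithmetic concrete, here is a minimal Python sketch of the same scheme (toy bit lists, not Unraid's actual implementation): it computes the parity drive column by column, then reconstructs a failed drive from the survivors.

```python
# Toy "drives": each is a short list of bits. Real drives hold billions
# of bits, but the per-column arithmetic is identical.
drives = [
    [1, 0, 1, 1, 0, 0],  # drive 1
    [0, 1, 1, 0, 0, 1],  # drive 2
    [1, 1, 0, 1, 0, 0],  # drive 3
]

# Parity bit per column: 0 if the column's sum is even, 1 if odd.
parity = [sum(column) % 2 for column in zip(*drives)]

# Simulate losing drive 2, then solve each column for the missing bit:
# it is whatever value makes the total (parity included) even again.
failed = 1
survivors = [d for i, d in enumerate(drives) if i != failed] + [parity]
rebuilt = [sum(column) % 2 for column in zip(*survivors)]

assert rebuilt == drives[failed]  # identical to the lost drive
```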

The ability to rebuild a disk using parity provides protection from data loss. Parity protection also provides fault-tolerance by allowing full usage of the system while keeping all data accessible, even when a drive has failed.

User Shares

Unlike most RAID systems, Unraid saves data to individual drives. To simplify manageability, users can create shares that allow files written to them to be spread across multiple drives. Each share can be thought of as a top-level folder on a drive. When browsing through a share, all data from all drives that participate in that share is displayed together. Users do not need to know which disk a file is on in order to access it under a share. Shares can be tuned to include/exclude specific disks and to use various methods for determining how files are allocated across those disks (see the sketch after the figures below). In addition to controlling how data is distributed across drives, users can also control which network protocols the share is visible through and define user-level security policy. When accessing your Unraid server over a network protocol, all shares exported through that protocol will be visible, but you can toggle protocols both per share and globally. Should you have private data on your system that you wish to protect from anonymous access, user accounts can be created and policies defined to limit access to only trusted individuals.

Figure 1. Distribution policies define the disks to use when data is written to a share.
Figure 2. Access policies define the protocols and user-level security to use for a share.
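
As an illustration of one such allocation method, here is a hedged Python sketch in the spirit of Unraid's "most free" method, which targets whichever included disk currently has the most free space; the disk names and free-space figures are invented for the example.

```python
def pick_disk(free_space, included, min_free=0):
    """Pick the included disk with the most free space ('most free' style).
    free_space maps disk name -> bytes free; min_free is a floor to respect."""
    candidates = {d: f for d, f in free_space.items()
                  if d in included and f > min_free}
    if not candidates:
        raise RuntimeError("no eligible disk for this share")
    return max(candidates, key=candidates.get)

# Hypothetical share spanning three disks; disk2 has the most headroom.
free = {"disk1": 120e9, "disk2": 800e9, "disk3": 310e9}
print(pick_disk(free, {"disk1", "disk2", "disk3"}))  # -> disk2
```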

Cache

The cache-drive feature of Unraid provides faster data capture. Generally speaking, by using a cache alongside an array of three or more drives, you can achieve up to 3x write performance. When data is written to a user share that has been configured to use the cache drive, all of that data is initially written directly to the dedicated cache drive. Because this drive is not a part of the array, the write speed is unimpeded by parity updates. Then an Unraid process called “the mover” copies the data from the cache to the array at a time and frequency of your choosing (typically in the middle of the night). Once the mover completes, the space consumed previously on the cache drive is freed up.
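
As a rough sketch of what such a mover pass involves (this is illustrative Python, not Unraid's actual mover script), the loop below walks a share's folder on the cache and moves each file to the matching folder on an array disk; the /mnt/cache and /mnt/disk1 paths follow a typical Unraid layout.

```python
import os
import shutil

CACHE_ROOT = "/mnt/cache/Media"  # share's folder on the cache drive
ARRAY_ROOT = "/mnt/disk1/Media"  # same share's folder on an array disk

# Recreate the directory tree on the array, then move each file across.
# shutil.move copies between filesystems and deletes the source,
# which is what frees up the space on the cache drive.
for dirpath, _dirs, files in os.walk(CACHE_ROOT):
    dest_dir = os.path.join(ARRAY_ROOT, os.path.relpath(dirpath, CACHE_ROOT))
    os.makedirs(dest_dir, exist_ok=True)
    for name in files:
        shutil.move(os.path.join(dirpath, name), os.path.join(dest_dir, name))
```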

With a single cache drive, data captured there is at risk, as a parity drive only protects the array, not the cache. However, you can build a cache with multiple drives, both to increase your cache capacity and to add protection for that data. The grouping of multiple drives in a cache is referred to as building a cache pool. The Unraid cache pool is created through a unique twist on traditional RAID 1, using a BTRFS feature that provides both the data redundancy of RAID 1 and the capacity expansion of RAID 0.
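
As a rough capacity example (assuming the standard btrfs raid1 behavior of keeping two copies of every chunk), the usable space of such a pool is approximately the smaller of half the raw total and the total minus the largest drive:

```python
def raid1_usable(sizes_tb):
    """Approximate usable capacity of a two-copy (raid1-style) btrfs pool."""
    total = sum(sizes_tb)
    return min(total / 2, total - max(sizes_tb))

print(raid1_usable([1, 1, 2]))  # -> 2.0 TB usable out of 4 TB raw
print(raid1_usable([1, 3]))     # -> 1.0 TB: the large drive can't fully pair
```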

Figure. An Unraid array with a cache pool.

Application Server

Traditional NAS solutions to application support come with three primary limitations:

  1. They cannot support applications written for other operating systems.
  2. They can be cumbersome to install and even more difficult to remove.
  3. They don’t always “play nice” with other applications in the same OS.

Docker addresses these problems in a number of key ways:

  • It allows for the use of any Linux operating system to empower a given application (no longer limited by the operating system of the host itself).
  • It removes the traditional “installation” process: applications are delivered as pre-built images that ensure a consistent run-time experience for the user and are easy to remove when the user is done with them.
  • It enables applications that would normally have issues with coexistence to live in harmony in the same operating environment.

Docker is made up of three primary components: the Engine, the Hub, and Containers.

The Engine

The Docker Engine represents the management component that is built into Unraid 6. Using the engine, we can control application access to vital system resources, interact with the Docker Hub, and isolate applications from conflicting with each other or our operating system.

From a storage perspective, the engine leverages the copy-on-write capabilities of the BTRFS filesystem combined with Docker images provided through the hub. Images are essentially layered tar archives, arranged so that images which depend upon a common layer don’t need to replicate the storage for the layer they share. Shared layers are kept in a read-only state; changes are recorded in a copy-on-write layer visible only to the application that made them. In simpler terms, this means that applications can be efficient in their use of both system performance and storage capacity.
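
As a loose illustration of that layering model (a deliberate simplification, not Docker's actual on-disk format), the Python sketch below resolves reads through a stack of shared read-only layers and records writes in a private per-container layer.

```python
class Container:
    """Resolve paths through shared read-only layers; write via copy-on-write."""

    def __init__(self, image_layers):
        self.image_layers = image_layers  # shared, read-only, base layer first
        self.cow_layer = {}               # this container's private writes

    def read(self, path):
        if path in self.cow_layer:                 # a local change wins
            return self.cow_layer[path]
        for layer in reversed(self.image_layers):  # then topmost image layer
            if path in layer:
                return layer[path]
        raise FileNotFoundError(path)

    def write(self, path, data):
        self.cow_layer[path] = data  # shared image layers stay untouched

base = {"/etc/os-release": "Debian"}  # layer shared by both containers
app = {"/usr/bin/app": "binary"}
c1, c2 = Container([base, app]), Container([base, app])
c1.write("/etc/os-release", "patched")
print(c1.read("/etc/os-release"))  # patched (c1's copy-on-write view)
print(c2.read("/etc/os-release"))  # Debian  (shared layer unaffected)
```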

Figure 1. Visualizing how different apps can share read-only access to a common base image, storing modifications to it in a copy-on-write data store.

The Hub

One of the biggest advantages Docker provides over both traditional Linux containers (LXCs) and virtual machines (VMs) is its application repository: the Docker Hub. Most traditional Linux operating systems come with a component known as a package manager. The job of the package manager is to let people easily install applications written for a particular operating system from catalogs known as repositories. While package managers do their job fairly well, they come with all the limitations mentioned earlier. Linux containers and virtual machines, while competent at providing a way to control the resources allocated to an application, still rely on traditional package managers for software retrieval and installation into their run-time environments.

In contrast, the Docker Hub provides all the benefits without the limitations of a traditional package manager. Using the Docker engine, pre-built applications can be downloaded automatically and, thanks to the copy-on-write benefits we’ve already covered, the only data that is actually downloaded is data not already present on your system. The hub contains over 14,000 Dockerized apps, so finding what you’re looking for shouldn’t be a problem. In addition, thanks to some of our loyal community members, users can quickly add many of the most popular containers through the use of templates in Unraid 6. These forum members have taken it upon themselves to build and maintain these templates and the list of available templates continues to grow.

The Docker Hub can be accessed at http://registry.hub.docker.com.
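
If you want to script against the hub rather than use the webGui, the official Docker SDK for Python can perform the same pull the engine does; a minimal sketch (assuming the docker Python package is installed and a Docker engine is running locally):

```python
import docker  # pip install docker

client = docker.from_env()  # connect to the local Docker engine
image = client.images.pull("library/nginx", tag="latest")

# Layers already present on disk are reused rather than re-downloaded,
# thanks to the copy-on-write image format described above.
print(image.id)
```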

Containers

The cornerstone of Docker is its ability to use Linux control groups, namespace isolation, and images to create isolated execution environments in the form of Docker containers. Docker controls the resources allocated to containers and isolates them from conflicting with other applications on the same system. This provides all the benefits of traditional virtual machines, but with none of the overhead associated with emulating hardware, making containers remarkably efficient; in some studies they are barely distinguishable from bare-metal equivalents.

Docker works by giving applications access to the system resources of the host operating system, such as CPU, memory, disk, and network, while isolating them in their own run-time environments. Unlike virtual machines, containers do not require hardware emulation, which eliminates that overhead and its hardware requirements and provides near bare-metal performance.

Figure 2. Containers can be assigned common system resources and remain isolated from negatively impacting each other on the same system.
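
To make the resource-control point concrete, here is a hedged sketch using the Docker SDK for Python to start a container capped on memory and pinned to specific CPU cores; the image, name, and limits are arbitrary examples.

```python
import docker  # pip install docker

client = docker.from_env()
container = client.containers.run(
    "library/redis",        # example image
    detach=True,
    name="redis-demo",      # hypothetical container name
    mem_limit="512m",       # cap memory at 512 MiB via cgroups
    cpuset_cpus="0,1",      # pin execution to the first two cores
)
print(container.status)
```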

Virtualization Host

Virtualization technology has advanced greatly since it was first introduced and provides a wealth of benefits to users. By supporting the use of virtual machines on Unraid 6, we can run an even wider array of applications in isolated environments. While Docker containers are the preferred method for running Linux-based headless applications, virtual machines offer these unique benefits:

  1. Run non-Linux operating systems (e.g. Windows).
  2. Support drivers for physical devices independently of Unraid OS.
  3. Customize and tune the guest operating systems.

Unraid Server OS is designed to run as a virtualization host, leveraging a hypervisor to partition resources to virtualized guests in a secure and isolated manner. Put simply, virtual machines can be assigned a wider array of resources than Docker containers while still offering the same benefits of isolated access to those resources. This enables Unraid servers to handle a variety of tasks beyond network-attached storage.

Assignable Devices

Our implementation of KVM includes modern versions of QEMU, libvirt, VFIO, VirtIO, and VirtFS. We also support Open Virtual Machine Firmware (OVMF), which enables UEFI support for virtual machines (adding SecureBoot support as well as simplified GPU pass through). This allows a wide array of resources to be assigned to virtual machines, ranging from the basics (storage, compute, network, and memory) to the advanced (full PCI / USB devices). We can emulate multiple machine types (i440fx and Q35), support CPU pinning, optimize for SSDs, and much more. Best of all, these virtualization technologies are designed so that using them does not impact the reliability of the host operating system.

Figure 1. Assigning both shared resources and host devices to virtual machine guests.
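
Because the stack is standard KVM/libvirt, it can also be inspected programmatically; here is a minimal sketch using the libvirt Python bindings (assuming the libvirt-python package is available on the host) that lists defined VMs and whether they are running.

```python
import libvirt  # pip install libvirt-python

conn = libvirt.open("qemu:///system")  # connect to the local hypervisor
for dom in conn.listAllDomains():
    state, _reason = dom.state()
    running = state == libvirt.VIR_DOMAIN_RUNNING
    print(f"{dom.name()}: {'running' if running else 'stopped'}")
conn.close()
```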

Simplified Management

Management of your Unraid system is accomplished through an intuitive web interface that offers basic controls for common tasks as well as advanced tuning options for the more savvy user. Unraid automatically chooses default settings that should work for most people’s needs, but also allows you to tweak settings to your liking. This makes Unraid intuitive where you want it, and tunable where you need it.

  • Dashboard View. With indicators for disk health, temperatures, resource utilization, and application states, the dashboard provides a 50,000-foot view of what’s happening on your system.
  • Array Operation. Assign devices for use in either the array or cache, spin up and down individual disks, start and stop the array, and even perform an on-the-fly parity check, all from a single page.
  • Share Management. Setting up shares on Unraid is easy. Give the share a name, optionally apply policies to access and distribution controls, and click create!
  • Disk Tuning. Control how and when devices spin down, the default file system format, and other advanced settings.
  • Network Controls. Enable NIC bonding and bridging, set time servers, and more.
  • APC UPS Safe Shutdown. When connected to an APC UPS, Unraid can safely shut down the system in the event of a power loss.
  • System Notifications. Unraid can alert you to important events happening on your system through the web management console as well as e-mail and other notification systems.
  • Task Scheduler. Choose if and when you want an automatic parity check to occur, as well as how often the mover script should transfer files from the cache to the array.
  • Docker Containers. Manage application controls from a single pane of glass. Add applications with minimal effort using community-provided templates.
  • Virtual Machines. Choose between pre-created virtual machine images or create your own custom VM from scratch.

System Requirements

In general, if it’s supported by Linux, it’s supported by unRAID. CPU usage will be minimal, so even something as low-end as a Celeron or Atom processor would be fine to use. However, should you wish to run more performance-demanding applications through Docker containers or virtual machines, a processor with extra clock speed and support for hyper-threading can be very beneficial. If you use virtual machines, your CPU must support hardware virtualization (Intel VT-x / AMD-V), and if you pass through PCI devices, IOMMU support is also required (Intel VT-d / AMD-Vi). For running localized virtual desktops, you will need to be able to assign a graphics device (GPU) to a VM, which will require more specific component selection for compatibility.

Boot Device

unRAID installs to and boots from a quality USB flash storage device[1]. The device must be at least 512MB, no larger than 32GB, and contain a unique GUID (serial number).

Network Attached Storage

If the sole purpose of your unRAID system is to act as a traditional NAS (no plugins, virtual machines, or Docker containers), system requirements are minimal:

  • 1GB of RAM
  • 64-bit capable processor
  • Linux hardware driver support
  • At least 1 SATA/IDE/SAS HDD

Application Server

If you intend to use your system as an application server with Docker Containers, you will need to ensure you have enough memory to support the amount of concurrency you intend for your system. Most users will find it difficult to utilize more than 8GB of RAM on Docker alone, but usage may vary from application to application. General recommendations for running an application server are as follows:

  • General services (FTP, Databases, VoIP, etc.): 2GB of RAM, 1 CPU Core
  • Content download and extraction (SABnzbd, NZBget, etc.): 4GB of RAM, 2 CPU cores
  • Media servers (Plex, Logitech Media Server, Universal Media Server, etc.): 4GB of RAM, Intel i5/i7/E3 processor

Virtualization Host

To create virtual machines on unRAID, you will need HVM hardware support (Intel VT-x or AMD-V). To assign host-based PCI devices to those VMs, your hardware must also support IOMMU (Intel VT-d or AMD-Vi). Lastly, all virtualization features must be enabled in your motherboard BIOS (typically found in the CPU or System Agent sections). NOTE: Not all hardware that claims support for this has been proven to work effectively, so see the "tested hardware" section for known working component combinations. Virtual machines can also drive a need for much more RAM/CPU cores depending on the type. Here are some general recommendations on how much RAM should be allocated per virtual machine:

  • Virtual servers (Windows, Arch, etc.): 256MB - 1GB, 1-2 CPU cores
  • Virtual desktops (Windows, Ubuntu, etc.): 512MB - 8GB, 2-4 CPU cores
  • Hybrid VMs (GPU assignment, gaming, etc.): 1GB - 12GB, 2-6 CPU cores

Keep in mind that memory usage for virtual machines only occurs while they are running, so it's important to think about these requirements in terms of peak concurrent usage on your system.

Determining HVM/IOMMU Hardware Support

To determine if hardware has support for HVM or IOMMU, there are two primary methods available:

Online Research

  • To check if your Intel processor has support for VT-x or VT-d, visit http://ark.intel.com/Search/Advanced. On the left-hand filter panel, you can filter by processors that have support for VT-x, VT-d, or both.
  • For guidance with AMD processors, there is not an equivalent to the ARK site, but this Wikipedia article may assist you.
  • Motherboard support for virtualization is usually available as part of the product documentation or user manual.

Through the unRAID webGui

  • When accessing your unRAID system through the web interface, you can determine if your system is virtualization compatible by clicking the Info button on the right side of the top menu bar.
    • HVM Support refers to Intel VT-x or AMD-V
      • Not Available means that your hardware is not HVM capable.
      • Disabled means that your hardware is HVM capable, but the settings in your motherboard BIOS are not enabled.
      • Enabled means that your hardware is both HVM capable and the appropriate settings in your motherboard BIOS are also enabled.
    • IOMMU Support refers to Intel VT-d or AMD-Vi
      • Not Available only displays if your system is not HVM capable.
      • Disabled means that either your hardware is not capable of IOMMU or the appropriate settings in your motherboard BIOS are not enabled.
      • Enabled means that your hardware is both IOMMU capable and the appropriate settings in your motherboard BIOS are also enabled.
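
Outside the webGui, a similar check can be performed on any Linux system using standard kernel interfaces (this is generic Linux, not an Unraid-specific tool): the CPU flags in /proc/cpuinfo reveal HVM capability, and /sys/class/iommu is populated when the kernel has an IOMMU active.

```python
import os

def hvm_capable():
    """Intel CPUs expose the 'vmx' flag, AMD CPUs 'svm', when HVM-capable."""
    flags = set()
    with open("/proc/cpuinfo") as f:
        for line in f:
            if line.startswith("flags"):
                flags.update(line.split(":", 1)[1].split())
    return bool({"vmx", "svm"} & flags)

def iommu_active():
    """/sys/class/iommu gains entries when the kernel enables an IOMMU."""
    try:
        return len(os.listdir("/sys/class/iommu")) > 0
    except FileNotFoundError:
        return False

print("HVM capable:", hvm_capable())
print("IOMMU active:", iommu_active())
```

Note that a capable CPU can still have virtualization switched off in the motherboard BIOS, which is what the webGui's Disabled states indicate.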

Assigning Graphics Devices

Unlike other PCI devices, graphics devices can be more difficult to pass through to a VM for control. With unRAID 6, we've implemented a number of tweaks to maximize success with graphics pass through for our users. Here are the currently known limitations associated with GPU pass through on unRAID 6:

  • NVIDIA GTX-series GPUs should work well from the 600 series onward, but not all models have been tested.
  • AMD cards have had some issues depending on the make or model and which guest operating system is attached.
  • Some devices may work better for pass through to specific guest operating systems.
  • With OVMF-based virtual machines, if your GPU has UEFI support, it should work fine, but some users still report card-specific issues.
  • In addition to the Lime Technology Tested Components, you can review a community-maintained spreadsheet of tested hardware configurations for GPU assignment.
  • More information on assigning graphics devices to VMs can be found here.

Lime Technology Tested Components

For those looking to purchase a new system for unRAID, the following components are used in Lime Technology's lab for testing the features and capabilities of unRAID 6. These systems do not represent the only hardware that can be used with unRAID 6, but the list does represent the full extent of what the Lime Technology R&D team has access to for testing in the lab.

Motherboards / Processors

Small Form Factor

Desktop/Media Player

Server

Workstation (1 of 3)

Workstation (2 of 3)

Workstation (3 of 3)

Graphics Devices (GPUs)