UnRAID Manual 6

From Unraid | Docs

THIS PAGE IS AN ORPHAN PAGE THAT IS NOT CONNECTED TO THE MAIN UNRAID DOCUMENTATION.

IT IS TEMPORARILY KEPT SO THAT USEFUL PARTS OF IT CAN BE MOVED INTO THE MAIN DOCUMENTATION.



System Requirements

In general, if it’s supported by Linux, it’s supported by unRAID. CPU usage will be minimal, so even something as low end as a Celeron or Atom processor would be fine to use. However, should you wish to run more performance-demanding applications through Docker containers or virtual machines, a processor with extra clock speed and support for hyper-threading can be very beneficial. And as mentioned before, if using virtual machines, virtualization support on your CPU will be required (Intel VT-x / AMD-V), and if passing through PCI devices, IOMMU support will also be required (Intel VT-d / AMD-Vi). For running localized virtual desktops, you will need to be able to assign a graphics device (GPU) to a VM, which will require more specific component selection for compatibility.

Boot Device

unRAID installs to and boots from a quality USB flash storage device[1]. The device must be at least 512MB, no larger than 32GB, and contain a unique GUID (serial number).
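If you want to check the serial number a candidate flash device reports before dedicating it to unRAID, a quick sketch from any Linux shell (the /dev/sda device name is an assumption; check lsblk for your actual device):

```shell
# Print the serial number (GUID) a USB flash device reports to the kernel.
# DEV is an illustrative assumption; substitute your own device node.
DEV=/dev/sda
udevadm info --query=property --name="$DEV" 2>/dev/null | grep '^ID_SERIAL' \
  || echo "no serial found for $DEV"
```

A device that reports no serial at all cannot satisfy the unique-GUID requirement.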

Network Attached Storage

If the sole purpose of your unRAID system is to act as a traditional NAS, system requirements are minimal:

  • 1GB of RAM
  • 64-bit capable processor
  • Linux hardware driver support
  • At least 1 SATA/IDE/SAS HDD

Application Server

If you intend to use your system as an application server with Docker Containers, you will need to ensure you have enough memory to support the applications you intend to run concurrently. Most users will find it difficult to utilize more than 8GB of RAM on Docker alone, but usage varies from application to application. General recommendations for running an application server are as follows:

  • General services (FTP, Databases, VoIP, etc.): 2GB of RAM, 1 CPU Core
  • Content download and extraction (SABnzbd, NZBget, etc.): 4GB of RAM, 2 CPU cores
  • Media servers (Plex, Logitech Media Server, Universal Media Server, etc.): 4GB of RAM, Intel Core i5/i7 or Xeon E3 processor

Virtualization Host

To create virtual machines on unRAID, you will need HVM hardware support (Intel VT-x or AMD-V). To assign host-based PCI devices to those VMs, your hardware must also support IOMMU (Intel VT-d or AMD-Vi). Lastly, all virtualization features must be enabled in your motherboard BIOS (typically found in the CPU or System Agent sections). NOTE: Not all hardware that claims support for this has been proven to work effectively, so see the "tested hardware" section for known working component combinations. Virtual machines can also drive a need for much more RAM/CPU cores depending on the type. Here are some general recommendations on how much RAM should be allocated per virtual machine:

  • Virtual servers (Windows, Arch, etc.): 256MB - 1GB, 1-2 CPU cores
  • Virtual desktops (Windows, Ubuntu, etc.): 512MB - 8GB, 2-4 CPU cores
  • Hybrid VMs (GPU assignment, gaming, etc.): 1GB - 12GB, 2-6 CPU cores

Keep in mind that virtual machines consume memory only while they are running, so think about these requirements in terms of peak concurrent usage on your system.

Determining HVM/IOMMU Hardware Support

To determine if hardware has support for HVM or IOMMU, there are two primary methods available:

Online Research

  • To check if your Intel processor has support for VT-x or VT-d, visit http://ark.intel.com/Search/Advanced. On the left-hand filter panel, you can filter by processors that have support for VT-x, VT-d, or both.
  • For guidance with AMD processors, there is not an equivalent to the ARK site, but this Wikipedia article may assist you.
  • Motherboard support for virtualization is usually available as part of the product documentation or user manual.
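Both checks can also be run from any Linux shell (including the unRAID console). A minimal sketch using standard kernel interfaces:

```shell
# vmx = Intel VT-x, svm = AMD-V. The CPU flag indicates hardware capability;
# the matching BIOS setting must still be enabled for KVM to use it.
if grep -qE 'vmx|svm' /proc/cpuinfo; then
  echo "HVM: capable"
else
  echo "HVM: not capable"
fi

# A populated /sys/class/iommu means the kernel actually activated an IOMMU
# (Intel VT-d / AMD-Vi) at boot; empty means not available or disabled.
if [ -n "$(ls -A /sys/class/iommu 2>/dev/null)" ]; then
  echo "IOMMU: enabled"
else
  echo "IOMMU: not available or disabled"
fi
```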

Through the unRAID webGui

  • When accessing your unRAID system through the web interface, you can determine if your system is virtualization compatible by clicking the Info button on the right side of the top menu bar.
    • HVM Support refers to Intel VT-x or AMD-V
      • Not Available means that your hardware is not HVM capable.
      • Disabled means that your hardware is HVM capable, but the settings in your motherboard BIOS are not enabled.
      • Enabled means that your hardware is both HVM capable and the appropriate settings in your motherboard BIOS are also enabled.
    • IOMMU Support refers to Intel VT-d or AMD-Vi
      • Not Available only displays if your system is not HVM capable.
      • Disabled means that either your hardware is not capable of IOMMU or the appropriate settings in your motherboard BIOS are not enabled.
      • Enabled means that your hardware is both IOMMU capable and the appropriate settings in your motherboard BIOS are also enabled.

Assigning Graphics Devices

Unlike other PCI devices, graphics devices can be more difficult to pass through to a VM for control. With unRAID 6, we've implemented a number of tweaks to maximize success with graphics pass through for our users. Here are the currently known limitations associated with GPU pass through on unRAID 6:

  • NVIDIA GTX-series GPUs should work fine as of the 600 series or newer, but not all models have been tested.
  • AMD cards have had some issues depending on the make or model and which guest operating system is attached.
  • Some devices may work better for pass through to specific guest operating systems.
  • With OVMF-based virtual machines, if your GPU has UEFI support, it should work fine, but some users still report card-specific issues.
  • In addition to the Lime Technology Tested Components, you can review a community-maintained spreadsheet of tested hardware configurations for GPU assignment.
  • More information on assigning graphics devices to VMs can be found here.

Lime Technology Tested Components

For those looking to purchase a new system for unRAID, the following components are used in Lime Technology's lab for testing features and capabilities for unRAID 6. Note that these systems do not represent the only hardware that can be used with unRAID 6, but this list does represent the limit to what the Lime Technology R&D team has access to for testing in the lab.

Motherboards / Processors

  • Small Form Factor
  • Desktop/Media Player
  • Server
  • Workstation (three builds)

Graphics Devices (GPUs)



Using Docker

With unRAID 6, we can now run any Linux application on unRAID, regardless of the distribution format. That means whether an app was written for Ubuntu, CentOS, Arch, Red Hat, or any other variant, unRAID can run it. This is accomplished through the use of Docker Containers, which provide each application with its own isolated operating environment in which it cannot create software compatibility or coexistence conflicts with other applications. This guide will show you how to get started with Docker on unRAID 6 to install media servers, file sharing software, backup solutions, gaming servers, and much more.

If you want more information on Docker and its underlying technology than is provided in this guide, visit the Docker home page.

Prerequisites

  • A system up and running with unRAID 6.0, connected via a web browser to the unRAID webGui (e.g., “http://tower” or “http://tower.local” from a Mac by default).
  • A share created called “appdata” that will be used to store application metadata.

NOTE: Applications are made available and supported by both the Docker and unRAID user communities respectively.

Creating Your Docker Virtual Disk

The first step on your Docker journey will be to create your Docker virtual disk image where the service and all the application images will live.

  • Open a web browser on your Mac or PC and navigate to the unRAID webGui.
  • Click on the Docker tab at the top of the screen.
  • Set Enable Docker to Yes.
  • Specify an initial virtual disk image size (it is recommended that beginners start with at least a 10GB image size). This can be enlarged later, but can never be reduced in size once set.
  • Pick a location for your Docker virtual disk.
    • The path must be device-specific (you cannot specify a path through the user share file system; e.g., “/mnt/user/docker.img” is not a valid path).
    • It is recommended to store the virtual disk on the root of the cache disk or on the root of a data disk if no cache disk is available.
  • Click Apply to create the virtual disk and start the Docker service (this may take some time).
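If the service started successfully, you can confirm it from a terminal or SSH session on the host. A small sketch:

```shell
# Ask the Docker daemon for its version; a reply confirms the service is
# running and the virtual disk image mounted correctly.
docker version --format '{{.Server.Version}}' 2>/dev/null \
  || echo "Docker daemon not reachable"
```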

Adding Template Repositories

Once the service has started, the web page will refresh and a new “Docker Containers” section will appear. The easiest way to add Dockerized applications to unRAID is through the use of template repositories which act as a catalog for installing and configuring applications with ease through the unRAID web interface. These templates and their respective applications are maintained by the unRAID user community.

  • Check out the complete list of available applications and repositories in our community forums.
  • For each repository you want to add, copy the link of the repository and paste it into the “Template repositories” field on the Docker Settings page.
  • Separate multiple entries in the list by pressing Enter on your keyboard.
  • When you’re done adding repositories, click the Save button.

Adding Your First Container

With your template repositories added, you can now begin creating application “Containers” using Docker. Containers prevent software from causing conflicts with other applications and services running on unRAID.

  • Click Add Container on the Docker Containers page to begin adding your first application.
  • Now click the Template drop down to select an application from one of the repositories we added previously.
  • After selecting, the page will refresh and new fields will be presented for configuring the container’s network and storage access.
  • Be sure to read the Description section for any special instructions.

Network Type

If the Bridge type is selected, the application's network access is restricted to the ports specified in the port mappings section. If the Host type is selected, the application may communicate using any port on the host that isn't already mapped to another in-use application/service. Generally speaking, it is recommended to leave this setting at its default value as specified in the application template.

Volume Mappings

Applications can be given read and write access to your data by mapping a directory path from the container to a directory path on the host. In the volume mappings section, the Container volume represents the path inside the container that will be mapped, and the Host path represents the path on your unRAID system that the Container volume maps to. Every application requires at least one volume mapping to store application metadata (e.g., media libraries, application settings, user profile data, etc.). Clicking inside these fields provides a “picker” that lets you navigate to where the mapping should point. Additional mappings can be created manually by clicking the Add Path button. Most applications need additional mappings in order to interact with other data on the system (e.g., with Plex Media Server, you should specify an additional mapping to give it access to your media files). When naming Container volumes, it is important to specify a path that won't conflict with folders already present in the container. If you are unfamiliar with Linux, using a prefix such as “unraid_” for the volume name is a safe bet (e.g., “/unraid_media” is a valid Container volume name).

Port Mappings

When the network type is set to Bridge, you will be given the option of customizing what ports the container will use. While applications may be configured to talk to a specific port by default, we can remap those to different ports on our host with Docker. This means that while three different apps may all want to use port 8000, we can map each app to a unique port on the host (e.g., 8000, 8001, and 8002). When the network type is set to Host, the container will be allowed to use any available port on your system. Additional port mappings can be created, similar to Volumes, although this is not typically necessary when working with templates as port mappings should already be specified.

IMPORTANT NOTE: If adjusting port mappings, do not modify the settings for the Container port as only the Host port can be adjusted.
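Behind the scenes, a template's port and volume mappings become ordinary docker run flags. This sketch only prints the equivalent command; the image name and paths are placeholders, not real template values:

```shell
HOST_PORT=8001                     # the side you are free to change
CONTAINER_PORT=8000                # fixed by the application; do not modify
APPDATA=/mnt/cache/appdata/myapp   # hypothetical host path for app metadata

# -p maps host:container ports; -v maps host:container paths.
echo docker run -d --name=myapp \
  -p "${HOST_PORT}:${CONTAINER_PORT}" \
  -v "${APPDATA}:/config" \
  example/image
```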

Container Creation Process

With your volume and port mappings configured, you are now ready to create your first Docker container. Click the Create button and the download process will begin. A few things worth noting while the image is downloading:

  • After clicking Create, do not close your browser window or attempt to navigate to other tabs using the browser until the download is complete.
  • Initial downloads per template repository may take longer than subsequent downloads per repository.
  • When the download process completes, you can click the Done button to return to the Docker page and continue adding applications.

Controlling Your Application

Once the download is complete, the application is started automatically. To interact with your application, click its icon on the Docker page of the unRAID web interface. Doing so will make a context menu appear with multiple options:


  • WebUI
    • Most apps added through Docker will have a web interface that you can access to configure and use them, but not all.
    • Clicking this option will launch a new browser tab/window directly to the applications web interface.
    • For apps that do NOT have a web interface, read the description when adding the container for instructions on how to make use of the app once it’s running.
  • Update
    • This option only appears after clicking Check for Updates (if available).
  • Start/Stop
    • This will toggle the active state of the container.
  • Logs
    • If you are having difficulties with your application, useful information may be present in the application's log.
    • Logs for applications are stored separately from unRAID’s system log itself.
  • Edit
    • Container settings such as port and volume mappings can be changed by clicking this option.
    • Once changes are applied, the container will start automatically, even if it was stopped initially.
  • Enable/Disable autostart
    • Toggling this will change the default behavior of the application when the Docker service is started.
  • Remove
    • Allows you to remove either the entire application, or just the container.
    • Removing a container without its “image” will make adding the application again later a much faster process (as it will not need to be redownloaded).

Accessing a Volume Mapping Inside a Container

One of the first things you will do after your application is running is configure it. Configuration typically involves specifying storage locations from within the application's web interface. When doing so, remember to look for the volume mappings you defined when adding your container. For example, if I needed to specify a folder path in the BT Sync app that pointed to my Media share, I would specify the path “/unraid_media” in the application's interface.

Other Tips and Tricks

Using Docker containers to run applications on unRAID is incredibly easy and very powerful. Here are some additional tips to improve your experience:

  • Using a cache device for storing your Docker virtual disk image and application data can improve performance.
  • Run multiple instances of the same app at the same time, which is useful for testing out alternate versions before upgrading.
  • Click the Advanced View toggle on the top right when viewing the Docker page or adding applications to see additional configuration options.
  • Learn more about Docker containers from our helpful user community.

Using Virtual Machines

While Docker Containers are the preferred mechanism for running Linux-based applications such as media servers, backup software, and file sharing solutions, virtual machines add support for non-Linux workloads and the ability to provide driver support for assigned PCI devices. Localized Virtualization is our method of supporting VMs where all resources assigned to the guest are local to the host.

NOTE: This guide applies to KVM boot mode only.

Technology Stack

unRAID 6 features a number of key technologies to simplify creation and management of localized VMs:

  • KVM
    • A hypervisor is responsible for monitoring and managing the resources allocated to virtual machines.
    • Unlike other hypervisors, KVM is the only one that is built directly into and supported by the Linux kernel itself.
    • Other type-1 hypervisors load before Linux does, leaving Linux to run in a state underprivileged relative to that hypervisor.
    • By leveraging a hypervisor that is part of the Linux kernel itself, it means better support, less complexity, and more room for optimization improvements.
  • QEMU
    • KVM is the component in the kernel that manages / monitors resources allocated to virtual machines.
    • QEMU is responsible for the emulation of hardware components such as a motherboard, CPU, and various controllers that make up a virtual machine.
    • KVM can't work without QEMU, so you'll often see it referred to as KVM/QEMU.
  • VirtIO
    • A virtualization standard for network and disk device drivers where just the guest's device driver "knows" it is running in a virtual environment, and cooperates with the hypervisor.
    • This enables guests to get high performance network and disk operations, and gives most of the performance benefits of paravirtualization[2].
  • VirtFS
    • Also referred to as the 9p filesystem, VirtFS allows us to easily pass through file system access from a virtualization host to a guest.
    • VirtFS is the equivalent of Docker Volumes for KVM, but requires a mount command to be issued from within the guest[3]. VirtFS works with Linux-based virtual machines only.
  • VFIO
    • Virtual function IO allows us to assign a physical device, such as a graphics card, directly to a virtual machine that in turn will provide driver support for the device directly.
    • VFIO prevents assigned devices from accessing spaces in memory that are outside of the VM to which they are assigned.
    • This limits the impact of issues pertaining to device drivers and memory space, shielding unRAID OS from being exposed to unnecessary risk.
    • VFIO usage requires IOMMU capable hardware (your CPU must have Intel VT-d or AMD-Vi support)[4].
  • Libvirt
    • Libvirt is a collection of software that provides a convenient way to manage virtual machines and other virtualization functionality, such as storage and network interface management.
    • These software pieces include an API library, a daemon (libvirtd), and a command line utility (virsh)[5].
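As an example of the VirtFS mount mentioned above, a Linux guest with a mount tag of hostshare (an assumed name; use the tag configured in your VM template) could mount it with `mount -t 9p -o trans=virtio,version=9p2000.L hostshare /mnt/hostshare`, or persist it via /etc/fstab:

```
# /etc/fstab entry inside the Linux guest; 'hostshare' is the mount tag
# configured for the VM, and /mnt/hostshare must already exist.
hostshare  /mnt/hostshare  9p  trans=virtio,version=9p2000.L,_netdev  0  0
```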

System Preparation

Before you can get started with creating virtual machines, there are a few preparatory tasks that must be completed.

Adjust BIOS Settings

In order to utilize all the virtualization features of unRAID 6, you must ensure your BIOS is configured properly for hardware-assisted virtualization as well as IO memory mapping (HVM / IOMMU support). In your BIOS settings, look for anything marked with Virtualization, Intel VT-x, Intel VT-d, AMD-V, or AMD-Vi and set it to Enabled.

Virtualization settings can be found in various locations depending on the motherboard BIOS.

Configure a Network Bridge

There are two methods by which your virtual machines can get access to host-based networking: through a private NAT bridge managed by libvirt or through a public bridge managed by unRAID directly. The private bridge (virbr0) is automatically configured when libvirt starts. The public bridge can be created through the Network Settings page on the unRAID webGui.

The private bridge generates an internal DHCP server/address pool to assign IPs to VMs automatically, but those VMs sit on a subnet that cannot be reached by other devices, or even by other services on unRAID. This type of bridge is ideal if you want your VM to be completely isolated from all other network services, except for internet access and the host's network file sharing protocols. VM management can be performed through a VNC session provided by the browser.

The public bridge provides VMs with an IP address from your router, while internally bridging communications between the VMs as well as with the host. This type of bridge is ideal if you want your VMs to act just like any other device on your network, where you manage their network access at the LAN router instead of inside the VM. We persist MAC address settings for the virtual interfaces you create, so VMs should get the same IP address each time they connect, as long as your router-managed DHCP pool doesn't run out of addresses. If you want to connect to your VM from another PC, laptop, tablet, or other device, use the public bridge.

Whichever bridge you prefer can be defined as the Default Network Bridge on the VM Settings page.
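From the unRAID console you can check which bridges currently exist; virbr0 (the private NAT bridge) and br0 (a typical public bridge name, an assumption here) are what you would expect to see:

```shell
# List bridge interfaces known to the kernel. On unRAID this typically
# shows virbr0 (libvirt's NAT bridge) and, if configured, br0.
ip -o link show type bridge 2>/dev/null | awk -F': ' '{print $2}'
```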

Create User Shares for Virtualization

At a minimum, you should create two user shares for use with virtualization on unRAID: one share to store your installation media files (ISOs) and another to store the virtual machines themselves. If you don't already have a share you use for backups, consider adding one as well for backing up your virtual machines.

Recommendations for Share Configuration

  • Virtual machines will perform best when their primary vDisk is stored on a cache-only share.
  • While SSDs are not required for virtual machines to function, performance gains are substantial with their use.
  • For your ISO library share (containing your installation media), cache usage is optional.

IMPORTANT: Do NOT store your active virtual machines on a share where the Use Cache setting is set to Yes. Doing so will cause your VMs to be moved to the array when the mover is invoked.

Setup Virtualization Preferences

Before you can get started creating virtual machines, we need to perform a few configuration steps:

  • Use your web browser to navigate to the VM Manager Settings page (Settings -> VM Manager)
  • Set Enable VMs to Yes
  • Select the share you previously created in the ISO Library Share (optional)
  • For Windows VMs, you will need to download virtual drivers for storage, network, and memory.
    • Download the latest 'stable' VirtIO Windows drivers ISO found here: https://fedoraproject.org/wiki/Windows_Virtio_Drivers#Direct_download
    • Copy the ISO file for the drivers to the ISO Library Share that you created earlier
    • Use the file picker for VirtIO Windows Drivers ISO to select the ISO file you copied
    • You can override the default driver ISO on a per-VM basis (under Advanced View).
  • Select virbr0 (default) for a private network bridge or select a public network bridge that you created on the Network Settings page.
    • You can override the default network bridge on a per-VM basis (under Advanced View).
  • Toggle PCIe ACS Override to On if you wish to assign multiple PCI devices to disparate virtual machines
    • The override breaks apart IOMMU groups so that individual devices can be assigned to different virtual machines
    • Without this setting enabled, you may not be able to pass through devices to multiple virtual machines simultaneously
    • WARNING: This setting is experimental! Take caution when using. [6]
  • Click Apply when done to apply your settings and start the libvirt service
  • A new VMs tab will appear on the unRAID task bar when complete
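To see why the ACS override matters, you can inspect the IOMMU groups the kernel built; devices sharing a group must normally be assigned to the same VM. A sketch:

```shell
# Print each IOMMU group and the PCI devices inside it. An empty result
# means no IOMMU is active (check BIOS settings and HVM/IOMMU status).
for dev in /sys/kernel/iommu_groups/*/devices/*; do
  [ -e "$dev" ] || continue               # glob did not match anything
  group=${dev#/sys/kernel/iommu_groups/}  # e.g. "13/devices/0000:01:00.0"
  echo "group ${group%%/*}: ${dev##*/}"
done
```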

Creating Your Own Virtual Machines

With the preparation steps completed, you can create your first VM by clicking Add VM from the Virtual Machines page.

Basic VM Creation

The webGui will by default present the minimum number of fields required in order for you to create a VM.

  • Set the Template type to Custom
  • Give the VM a Name and a Description
  • Toggle the Autostart setting if you want the VM to start with the array automatically
  • Select the Operating System you wish to use, which will also adjust the icon used for the VM
  • Select which CPUs you wish to assign the VM
    • You can select up to as many physical CPUs as are present on your host
  • Specify how much Initial Memory you wish to assign the VM
  • Select an OS Install ISO for your installation media
  • Specify the vDisks you wish to create (or select an existing vDisk)
    • The Primary vDisk is used to store your VM's operating system
    • Additional vDisks can be added by clicking the add device button
  • Specify a Graphics Card to use to interact with the VM
    • If you are NOT assigning a physical graphics card, specify VNC
    • If you ARE assigning a physical graphics card, select it from the list
    • VNC can only be assigned as the primary graphics display; it cannot be assigned as a secondary one
    • A password can be optionally specified for the VNC connection
    • Not all graphics cards will work as a secondary display
    • If you assign a physical graphics device, be sure to assign a USB keyboard and mouse as well
    • Additional graphics devices can be assigned by clicking the add device button
  • Assign a Sound Card if you're assigning a graphics card to get audio support in your VM
    • Most GPUs have their own built-in sound card as a function of the graphics card for HDMI audio
    • Additional sound cards can be assigned by clicking the add device button
  • USB Devices can be assigned to the VM that are plugged into the host
    • USB hot plugging is not currently supported, so devices must be attached before the VM is started in order for USB pass through to function
    • Some USB devices may not work properly when passed through to a guest (though most do work fine)
    • The unRAID USB flash device is not displayed here, to prevent accidental assignment
  • Click Create VM to create your virtual disks (if necessary), which will start automatically unless you unchecked the Start VM after creation checkbox.

Advanced Options

If you wish to toggle other advanced settings for the VM, you can toggle from Basic to Advanced View (switch located on the Template Settings section bar from the Add VM page).

  • You can adjust the CPU Mode setting
    • Host Passthrough will expose the guest to all the capabilities of the host CPU (this can significantly improve performance)
    • Emulated will use the QEMU emulated CPU and not expose the guest to all of the host processor's features
  • Specifying a Max Memory value will enable memory ballooning, allowing KVM to shrink/expand memory assignments dynamically as needed.
    • This feature does not apply to VMs where a physical PCI device has been assigned (GPU, sound, etc.)
  • The Machine type presented to your VM can be toggled between QEMU's i440fx or Q35 chipsets
    • For Windows-based VMs, i440fx is the default setting and should only be changed if you are having difficulty passing through a PCI-based graphics card (this may prompt Windows to reactivate)
    • For Linux-based VMs, Q35 is the default setting and should not be changed if passing through a GPU
  • The BIOS can only be adjusted when adding a new VM (existing VMs cannot modify this setting).
    • SeaBIOS is a traditional VGA BIOS for creating most virtual machines
    • OVMF utilizes a UEFI BIOS interface, eliminating the use of traditional VGA
    • OVMF requires that the VM's operating system supports UEFI (Windows 8 or newer, most modern Linux distros) and if you wish to assign a physical graphics device, it too must support UEFI
  • If you specify Windows as the guest operating system, you can toggle the exposure of Hyper-V extensions to the VM
    • This is disabled automatically if an NVIDIA-based graphics card is selected for assignment to the VM
    • See this post about 3D gaming performance with NVIDIA-based GPUs, Hyper-V settings, and various driver versions
  • You can choose to override the default VirtIO Drivers ISO should you so desire
  • You can toggle the vDisk Type between RAW and QCOW2 (RAW is recommended for best performance)
  • With Linux-based VMs, you can add multiple VirtFS mappings to your guest
  • If you desire, you can modify the Network MAC address for the virtual network interface of the VM as well as specify an alternate Network Bridge.
    • You can click the blue refresh symbol to auto-generate a new MAC address for the virtual network interface.
    • Additional virtual network interfaces can be assigned by clicking the add device button

Assigning Graphics Devices to Virtual Machines (GPU Pass Through)

The ability to assign a GPU to a virtual machine for direct I/O control comes with some additional provisions:

  1. Not all motherboard/GPU combinations will work for GPU assignment.
  2. Integrated graphics devices (on-board GPUs) are not assignable to virtual machines at this time.
  3. Additional community-tested configurations can be found in this spreadsheet.
  4. Lime Technology provides a list of validated and tested hardware combinations within the wiki.
  5. You can also discuss hardware selection in the Lime Technology community forums.

Warning: Passing through a GPU to a SeaBIOS-based VM will disable console VGA access

If you rely upon a locally-attached monitor and keyboard to interact with the unRAID terminal directly, you will lose this ability once you create a SeaBIOS VM with a GPU assigned. This is due to a bug with VGA arbitration and cannot be solved. It does NOT affect your ability to access the console over a telnet or SSH session, but the local console will appear frozen (blinking cursor, but no visible response to keyboard input). This applies whether the console uses on-board graphics or a discrete GPU separate from the one passed through to the VM. With OVMF, however, VGA isn't utilized, so arbitration isn't needed and your console graphics remain intact. Note that not all GPUs support OVMF, as OVMF requires UEFI support on your GPU.

Help! I can start my VM with a GPU assigned, but all I get is a black screen on my monitor!

If you aren't receiving an error message, but the display doesn't "light up" when your VM is started, it means that while the device is being assigned properly, you may have an issue with your motherboard or GPU preventing proper VGA arbitration from occurring. There are a few things you can attempt to fix this:

  • Ensure your motherboard BIOS and video card BIOS are up to date.
  • Try adjusting the BIOS under Advanced View when adding a new VM from SeaBIOS to OVMF (existing VMs cannot have this setting changed once created).
  • Try adjusting the Machine Type from i440fx to Q35 under Advanced View when editing or adding a VM.
  • As a last resort, you can attempt to manually provide the ROM file for your video card by editing the XML for your VM (see below procedure).

Edit XML for VM to supply GPU ROM manually

  • From another PC, navigate to this webpage: http://www.techpowerup.com/vgabios/
  • Use the Refine Search Parameters section to locate your GPU from the database.
  • Download the appropriate ROM file for your video card and store the file on any user share in unRAID.
  • With your VM stopped, click the icon for your VM, then select Edit XML from the context menu.

For SeaBIOS-based VMs

  • Scroll to the bottom of the XML and locate this line (the host=##:##.# part may look different for you than from the example below):

<qemu:arg value='vfio-pci,host=01:00.0,bus=root.1,addr=00.0,multifunction=on,x-vga=on'/>

  • Modify this line to supply the ROM file to the VM, like so:

<qemu:arg value='vfio-pci,host=01:00.0,bus=root.1,addr=00.0,multifunction=on,x-vga=on,romfile=/mnt/user/sharename/foldername/rom.bin'/>

  • Change the path after /mnt/user/ to the actual user share / sub-folder path to your romfile.

For OVMF-based VMs

  • Scroll to the bottom of the XML and locate this section (the <address> parts may look different for you than from the example below):

    <hostdev mode='subsystem' type='pci' managed='yes'>
     <driver name='vfio'/>
     <source>
       <address domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
     </source>
     <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
   </hostdev>
  • After the </source> tag, add the following code:

<rom file='/mnt/user/sharename/foldername/rom.bin'/>

  • Change the path after /mnt/user/ to the actual user share / sub-folder path to your romfile.
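Applied to the OVMF example above, the edited section would look like this (your <address> values and the romfile path will differ):

```xml
<hostdev mode='subsystem' type='pci' managed='yes'>
  <driver name='vfio'/>
  <source>
    <address domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
  </source>
  <rom file='/mnt/user/sharename/foldername/rom.bin'/>
  <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
</hostdev>
```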

Once done editing the XML, click Update and try starting your VM again to see if GPU assignment works properly.

Installing a Windows VM

With Windows-based guests, the installation process is slightly different than with other operating systems, as you will need to load the virtual drivers for I/O devices (disk, network, etc.). To do this, you will need to perform the following steps:

Obtaining Your Installation Media (ISO)

Depending on which version of Windows you wish to install, the process for obtaining the installation media will differ slightly. To obtain installation media for Windows 7 or later, you will need to enter a valid Microsoft Windows product key for whichever version of the software you are attempting to download. Product keys can be obtained from either the Microsoft Store or authorized resellers. If you have a product key, but do not have the installation media, please see the following links:

It is important that you do not choose to install the OS to a USB flash device. If prompted, select Install by creating media and select ISO file (this will let you save the media to an ISO file). Once you've obtained your ISO file, copy it to a share on your unRAID server.

Obtaining Virtual Hardware Drivers (VirtIO) for Windows

In order to maximize VM performance, unRAID utilizes VirtIO, which eliminates much of the overhead associated with I/O related to virtualization. These virtual devices will need to have their drivers loaded during the Windows installation process, or the process will not complete.

  1. Download the latest 'stable' VirtIO Windows drivers ISO found here: [7]
  2. Copy the ISO file for the drivers to the ISO Library Share you created earlier
  3. On the VM Settings page, use the file picker for VirtIO Windows Drivers ISO to select the ISO file you copied, then click Apply on that page.
  4. You can override the default driver ISO on a per-VM basis (under Advanced View).

Creating Your Windows VM

Follow the documented procedure for creating a VM, but alter the following settings:

  1. Select the appropriate version of Windows from the Operating System field.
  2. Select the Windows ISO you downloaded and copied to unRAID for the OS Install ISO.
  3. Be sure to select a minimum of 1GB of Initial Memory and specify at least 20GB for the Primary vDisk Size (as required by Windows 7, 8, and 8.1).
  4. For Windows 7, make sure the BIOS setting is left at SeaBIOS.
  5. For Windows 8/8.1, you can select either SeaBIOS or OVMF, but to assign a graphics device to OVMF, it must support UEFI.

Loading the VirtIO Drivers During Installation

  1. During the Windows installation process, you will reach a point where "no disks are found"; this is expected behavior.
  2. Click Browse on this screen, then navigate to the virtio-win CD-ROM.
  3. You will need to load the following drivers in the following order:
    1. Balloon
    2. NetKVM
    3. vioserial
    4. viostor (be sure to load this one last)
  4. For each driver that needs to be loaded, you will navigate to the driver folder, then the OS version, then the amd64 subfolder (never click to load the x86 folder)
  5. After each driver is loaded, you will need to click the Browse button again to load the next driver.
  6. After loading the viostor driver, your virtual disk will then appear to select for installation and you can continue installing Windows as normal.
  7. After Windows is completely installed, you can install the guest agent, which improves host to guest management
    1. Open Windows File Explorer
    2. Browse to the virtual CD-ROM for virtio-win again, and then open the guest-agent folder
    3. Double-click to launch the qemu-ga-x64.msi installer (this process will be rather quick)

And that's all there is to it! If you have questions on this procedure, please post in the Lime Technology forums.

Converting VMs from Xen to KVM

Converting virtual machines from Xen to KVM requires different procedures depending on whether they were created as paravirtualized (PV) or hardware-virtualized (HVM) guests. Regardless of your conversion scenario, it is highly recommended that you create a copy of your existing Xen virtual disk before proceeding. Use the copy to test your conversion process; if successful, you can remove your original Xen-based virtual disk should you so desire. In addition, you should ensure your hardware supports hardware-assisted virtualization (Intel VT-x / AMD-V), as this is a requirement for KVM. Xen PV guests do not leverage hardware-virtualization extensions, which makes their conversion more involved than that of Xen HVM guests (it is not documented at the time of this writing).

Windows 7 Conversion Procedure

To convert a Windows 7 virtual machine from Xen to KVM, the process is fairly simple and takes about 10 minutes to perform. Remove any PCI device pass through that you are doing via your Xen domain cfg file before you begin. These devices can be re-added after the conversion process is complete.

Step 1: Determine if your VM is using Xen's GPLPV drivers

  1. From within your Xen VM, open Windows Device Manager (click Start -> right-click on Computer -> click Manage)
  2. Expand the node for Network adapters and note the name. If the name of the network device contains "Xen", then you are using GPLPV drivers. Anything else means you are not.

NOTE: IF YOU ARE NOT USING GPLPV DRIVERS, YOU CAN SKIP THE NEXT SEVERAL STEPS AND RESUME THE PROCEDURE FROM REBOOTING INTO KVM MODE.

Step 2: Prepare Windows 7 for GPLPV driver removal

  1. Open a command prompt, running it as administrator (click Start -> click All Programs -> click Accessories -> right-click Command Prompt -> click Run as administrator)
  2. Type the following command from the prompt: bcdedit -set loadoptions nogplpv
  3. Reboot your VM

Step 3: Download the uninstaller and remove the GPLPV drivers

  1. Once rebooted, open a browser and download the following zip file: gplpv_uninstall_bat.zip
  2. Extract the uninstall_0.10.x.bat file to your desktop
  3. Right-click on the file and click Run as administrator (this will happen very quickly)
  4. Reboot your VM
  5. After rebooting, open up Windows Device Manager again.
  6. Under the System Devices section, right-click on Xen PCI Device Driver and select Uninstall; in the confirmation dialog, check the box to Delete the device driver software for this device.
  7. Shut down the VM

Step 4: Reboot your server into KVM mode

  1. Navigate your browser to the unRAID webGui, click on Main, then click on Flash from under the devices column.
  2. Under Syslinux Configuration, move the line menu default from under label Xen/unRAID OS to be under label unRAID OS.
  3. Click Apply
  4. Reboot your unRAID server
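As an illustration, the relevant portion of the Syslinux Configuration would then read something like the following; the kernel and append lines here reflect a typical unRAID 6 flash device and may differ on yours:

```
label unRAID OS
  menu default
  kernel /bzimage
  append initrd=/bzroot
label Xen/unRAID OS
  kernel /syslinux/mboot.c32
  append /xen --- /bzimage --- /bzroot
```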

Step 5: Create a new VM with the VM Manager

  1. If you haven't already, follow the procedure documented here to enable VM Manager
  2. Click on the VMs tab and click Add VM
  3. Give the VM a name and if you haven't already, download the VirtIO drivers ISO and specify it
  4. Under Operating System be sure Windows is selected
  5. Under Primary vDisk Location, browse and select your Xen virtual disk
  6. Add a 2nd vdisk and give it a size of 1M (you can put this vdisk anywhere, it is only temporary)
  7. Leave graphics, sound, etc. all to defaults and click Create
  8. Upon creation, immediately force shutdown the VM (click the eject symbol from the VMs page)
  9. Click the </> symbol from the VMs page next to the VM to edit the XML
  10. Locate the <disk> section for your primary virtual disk.
  11. Remove the <address> line completely.
  12. In the <target> section, change bus='virtio' to bus='ide'
  13. Click Update

Step 6: Starting your new VM and loading the VirtIO drivers

  1. From the VMs page, click the play symbol to start the VM.
  2. Click the eye symbol to open a VNC connection through the browser.
  3. When the VM boots up, it will install several drivers and prompt for a reboot; select Reboot later
  4. Open Windows Device Manager again and you'll notice 4 warnings under Other devices (Ethernet Controller, PCI Device, SCSI Controller, Serial controller)
  5. For each device, double click the device, click Update Driver, then select Browse my computer for driver software
    1. For Ethernet Controller, specify a path of d:\NetKVM\w7\amd64 (or browse to it) and click Next
    2. For PCI Device, specify a path of d:\Balloon\w7\amd64 (or browse to it) and click Next
    3. For SCSI Controller, specify a path of d:\viostor\w7\amd64 (or browse to it) and click Next
    4. For Serial Controller, specify a path of d:\vioserial\w7\amd64 (or browse to it) and click Next
  6. Select to Always trust Red Hat if prompted.
  7. When all drivers have been loaded, shut down your VM

Step 7: Remove the temporary vdisk and start the VM

  1. Click to edit the VM using the form-based editor (the pencil symbol)
  2. Remove the secondary vdisk
  3. Ensure the primary vdisk is pointing to your original vdisk file (it may be pointing to the secondary vdisk, and if so, update it to point to your actual vdisk)
  4. When completed, click Update
  5. Start your VM
  6. Verify your device manager shows no warnings
  7. DONE!

Notes on Windows-based VMs

There are a few things worth mentioning about creating Windows-based virtual machines on unRAID 6 using KVM.

General Notes

  • Before activating your Windows license, we highly encourage thorough testing of your VM first.
  • Changing the machine type between i440fx and Q35 under advanced mode will prompt Windows for reactivation of the license.
  • Windows 7 and earlier OS variants may not work with host-based graphics assignment correctly. Use Windows 8.1 or newer for the best experience.
  • If using OVMF, you must use Windows 8 or newer. UEFI is not directly supported by Windows 7 and therefore, OVMF will not work.

Enable MSI for Interrupts to Fix HDMI Audio Support

If you are assigning a graphics device to your Windows guest that uses an HDMI connection and you wish to push audio through that connection, you will need to perform a registry modification in Windows to ensure the audio driver remains working properly. For a comprehensive explanation of MSI and VFIO interrupts, you can visit Alex Williamson's blog[8]. Here's the procedure for doing this:

  • Shut down your VM and make a copy of your virtual disk before proceeding (as a backup).
  • Start your VM with the GPU device assigned.
  • Access your server using SSH or telnet.
  • For the device you wish to assign, locate its PCI address identifier (this can be found when selecting the device from within the VM creation tool)
  • From the command line, type the following: lspci -v -s 1:00.0 (replace 1:00.0 with your GPU device)
  • Look for a line that looks like this: Capabilities: [68] MSI: Enable+ Count=1/1 Maskable- 64bit+

If the Enable setting is set to +, that means your device claims it is MSI capable and it is enabled by the guest VM that is using it. If you cannot find a line that mentions MSI as a capability, it means your device does not support this. If the Enable setting is set to -, this means your device claims it is MSI capable, but that the guest VM is NOT using it. The procedure for enabling MSI support from Windows is documented here: http://forums.guru3d.com/showthread.php?t=378044
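If you want to check the flag from a script, the capability line can be filtered with standard tools. This is a minimal sketch using the sample line quoted above; on a live system you would pipe real lspci -v output through the same filter:

```shell
# Extract the MSI Enable flag (+ or -) from an lspci -v capability line.
# On a live system:  lspci -v -s 1:00.0 | grep -o 'MSI: Enable[+-]' | sed 's/.*Enable//'
line='Capabilities: [68] MSI: Enable+ Count=1/1 Maskable- 64bit+'
echo "$line" | grep -o 'MSI: Enable[+-]' | sed 's/.*Enable//'
# prints "+"
```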

Physical to Virtual Machine Conversion Process

If you have an existing physical PC or server that you wish to convert to a virtual machine for use on unRAID 6, the process is fairly simple. Steps 1-3 apply for almost any modern Linux-based guest. Steps 4-6 apply for Windows-based guests.

Prerequisites

  • Your system must meet the hardware requirements and complete these preparation steps before utilizing virtual machines on unRAID Server OS 6.
  • You must have free space available on a single storage device in your array, or in the cache pool, equal to or greater than the size of the physical disk you wish to convert.
  • It is highly encouraged to make a complete backup of your most important files before attempting a conversion.

Step 1: Identify the disk to be converted using the unRAID webGui

  • With the array stopped, attach the physical disk you wish to convert to your server (SATA and power)
  • Login to your webGui for unRAID 6 using a browser (http://tower or http://tower.local from a Mac OS X device by default)
  • Click the Main tab.
  • If the array hasn’t been started yet, start it by clicking Start.
  • Locate your disk device from the Unassigned Devices section on the Main tab.
  • Under the identification column, note the device's letter handle (e.g. sdb, sdc, sdd, sde, …)
  • Also make note of the size, as you will need at least this much space free on an available array device or the cache (pool) to create your new virtual disk.

Step 2: Add a new Virtual Machine from the VMs tab

  • Login to your webGui for unRAID 6 using a browser (http://tower or http://tower.local from a Mac OS X device by default)
  • Click on the VMs tab (if the tab isn’t visible, you haven’t completed these preparation steps or may not meet these hardware requirements; post in general support for further assistance)
  • Click the Add VM button.
  • Follow this guide to create your VM, making sure to adhere to these specific settings:
    • Leave the BIOS setting to SeaBIOS.
    • Leave OS Install ISO blank.
    • Be sure to have the VirtIO Drivers ISO specified; you will need it in a later step.
    • Make the primary virtual disk large enough for the physical disk you are copying.
    • If converting a disk containing a Windows OS
      • Add a second virtual disk by clicking the green plus symbol
      • Make the size of this second virtual disk 1M.
      • Uncheck the option to Start VM after creation

Step 3: Connect to your unRAID server via Telnet or SSH

  • Utilizing a telnet or SSH capable client, connect to your unRAID system over a Local Area Network. The default username is root and there is no password by default.
  • Enter the following command to begin the conversion of your physical disk to a virtual image:

qemu-img convert -p -O raw /dev/sdX /mnt/user/vdisk_share/vmname/vdisk1.img

  • Replace sdX with the device letter handle you noted in step 1, replace vdisk_share with the share you created to store your virtual disks, and replace vmname with the name you gave it when you created it in step 2.
  • The -p flag will output progress in the form of a percentage while the conversion is occurring.

Step 4: Edit the XML for your virtual machine (Windows Guests Only)

  • From the VMs tab, click the VM icon and select Edit XML from the context menu.
  • Scroll down the XML and locate the <target> tag for the <disk> with a <source> file set to vdisk1.img, which will look like this:

   <disk type='file' device='disk'>
     <driver name='qemu' type='raw' cache='writeback'/>
     <source file='/mnt/cache/vdisk_share/vmname/vdisk1.img'/>
     <backingStore/>
     <target dev='hda' bus='virtio'/>
     <boot order='1'/>
     <alias name='virtio-disk0'/>
     <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
   </disk>
  • In the <target> tag for that <disk>, change the bus attribute from virtio to ide.
  • Delete the entire <address> line for that <disk>.
  • Corrected XML example below:

   <disk type='file' device='disk'>
     <driver name='qemu' type='raw' cache='writeback'/>
     <source file='/mnt/cache/vdisk_share/vmname/vdisk1.img'/>
     <backingStore/>
     <target dev='hda' bus='ide'/>
     <boot order='1'/>
   </disk>
  • Click Update to update the virtual machine XML.

Step 5: Install the VirtIO drivers from inside the VM (Windows Guests Only)

  • Using Windows File Explorer, navigate to the VirtIO virtual cd-rom to browse its contents.
    • Navigate inside the Balloon folder.
    • Navigate to the subfolder named after your Windows OS version (e.g. w8.1)
    • Navigate to the amd64 subfolder
    • Right-click on the balloon.inf file inside and click Install from the context menu (you may need to enable viewing of file extensions to do this)
  • Repeat the above process for each of the following folders:
    • NetKVM
    • vioserial
    • viostor
  • When done installing drivers, navigate inside the virtual cd-rom one more time and open the guest-agent folder.
  • Double-click on qemu-ga-x64.msi to install the QEMU/KVM guest agent.

Step 6: Remove the secondary vdisk from your VM (Windows Guests Only)

  • Shutdown your VM if it isn’t already.
  • From the VMs tab, click the VM icon and select Edit from the context menu.
  • Remove the vdisk2.img virtual disk by clicking the red minus symbol.
  • Click Update to update the VM.
  • Start your newly converted virtual machine!

Extra: HELP! Stuck at SeaBIOS with "Booting from Hard Disk"

If your OS was installed using UEFI (as opposed to traditional VGA BIOS), start over from step 3, but select OVMF as the BIOS type instead of SeaBIOS. Most OS installations install using a traditional VGA BIOS, but it is possible to have a UEFI installation, in which case SeaBIOS will not work. The remainder of the conversion procedure is identical.

Using the unRAID webGui

To take control of and manage your unRAID system, you will need to connect to the unRAID webGui (also referred to as Dynamix webGui).

To connect to the webGui, simply type the name of your server (or its IP address) into your browser's address bar (by default this would be http://tower, or http://tower.local if using a Mac OS X device).

To adjust the default server name or IP address prior to booting your system, you can insert the USB flash device into your Mac or PC and edit the config/ident.cfg and config/network.cfg files respectively.
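For reference, the relevant keys look something like the following; the exact contents of your files may differ and the IP values below are purely illustrative, so keep any other lines your files already contain:

```
# config/ident.cfg -- server name (default shown)
NAME="tower"

# config/network.cfg -- static IP example
USE_DHCP="no"
IPADDR="192.168.1.100"
NETMASK="255.255.255.0"
GATEWAY="192.168.1.1"
```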

Dashboard

The first tab on the unRAID webGui is the dashboard. The dashboard is designed to provide you with a summary view of your system and allows you to quickly jump to common tasks that are relevant to the running state of the system.

Main

The Main tab is used to assign storage devices when the array is stopped, and provides a summary view of disk usage when the array is running. Parity checks can also be manually invoked from this page. The page itself is broken into 5 sections:

  1. Array devices show all disks assigned to the parity or data function in the array.
  2. Cache devices show all disks assigned to the cache function.
  3. Unassigned devices show disks that are not assigned for use with unRAID but are physically attached to the system.
  4. Boot device shows your USB flash device used to boot unRAID.
  5. Array operation provides the controls to start and stop the array, initiate a parity check, and pre-clear / format disk devices.

Devices

Devices are organized into tables per section with columns to represent various relevant information.

Colored Status Indicator

The significance of the color indicator at the beginning of each line is as follows:

  • Green: the hard drive status is Normal
  • Yellow: the data contents of the actual hard drive are invalid. The parity disk has this status when Parity-Sync is taking place. A data disk has this status during Reconstruction.
  • Red: the disk is disabled.
  • Blue: a new disk not currently part of the array.
  • Grey: indicates the corresponding disk has been spun-down.

Identification

This data is read directly from the hard drive and contains the device serial number / model number as well as the drive letter assigned under /dev.

Temperature

This is the temperature reported by the hard drive via S.M.A.R.T. When the disk is spun down, there will be an asterisk (*) displayed here instead. This is because sending the command to a hard drive to obtain S.M.A.R.T. information would cause it to spin up.

Size

This is the raw capacity of the disk expressed as the number of 1024-byte blocks.

Used

This represents the amount of capacity used on the disk. Parity / Unassigned Devices will not display a value here.

Free

This is the amount of free space in the disk's file system, expressed as the number of 1024-byte blocks. The free space of a freshly formatted disk will always be less than the disk's raw size because of file system overhead. Parity / Unassigned Devices will not display a value here.

Reads, Writes, Errors

The Read and Write statistics display the number of 4096-byte read and write operations that have been performed by the disk.

The Error statistic displays the number of read and write operations which have failed. In a protected array, any single-disk read error will be corrected on-the-fly (using parity reconstruction). The Error counter will increment for every such occurrence. Any single-disk write error will result in the Error counter being incremented and that disk being disabled.

Upon system boot, the statistics start out cleared; they may also be manually cleared at any time (refer to the Settings page).

Filesystem

The filesystem assigned to the device will be listed here.

Array Operation

Starting and Stopping the Array

Normally, following system boot-up, the array (the complete set of disks) is automatically started (brought on-line and exported as a set of shares). But if there has been a change in disk configuration, such as a new disk being added, the array is left stopped so that you can confirm the configuration is correct. This means that any time you make a disk configuration change, you must log into the unRAID webGui and manually start the array.

Disk Configuration Changes

Here are the normal configuration changes you can make:

  • add one or more new disks
  • replace a single disk with a bigger one
  • replace a failed disk
  • remove one or more data disks
  • reset the array to an unconfigured state

Add One or More New Disks

This is the normal case of expanding the capacity of the system by adding one or more new hard drives:

  1. Stop the array.
  2. Power down the server.
  3. Install your new hard drive(s).
  4. Power up the unit.
  5. Start the array.

When you Start the array, the system will first format the new disk(s). When this operation finishes, all the data disks, including the new one(s), will be exported and be available for use.

The format operation consists of two phases. First, the entire contents of the new disk(s) are cleared (written with zeros), and the disk is then marked active in the array. Next, a file system is created (either ReiserFS, XFS, or BTRFS). By default, unRAID prefers XFS for new array devices and BTRFS for new cache devices. These settings can be altered on the Disk Settings page.

The clearing phase is necessary to preserve the fault-tolerance characteristic of the array. If one of the other disks fails while the new disk(s) are being cleared, you will still be able to recover the data of the failed disk. Unfortunately, the clearing phase can take several hours depending on the size of the new disk(s).

The capacity of any new disk(s) added must be the same size or smaller than your parity disk. If you wish to add a new disk which is larger than your parity disk, then you must instead first replace your parity disk. (You could use your new disk to replace parity, and then use your old parity disk as a new data disk.)

Replace a Single Disk with a Bigger One

This is the case where you are replacing a single small disk with a bigger one:

  1. Stop the array.
  2. Power down the unit.
  3. Replace smaller disk with new bigger disk.
  4. Power up the unit.
  5. Start the array.

When you start the array, the system will reconstruct the contents of the original smaller disk onto the new disk. Upon completion, the disk's file system will be expanded to reflect the new size. You can only expand one disk at a time.

If you are replacing your existing Parity disk with a bigger one, then when you Start the array, the system will simply start a parity sync onto the new Parity disk.

A special case exists when the new bigger disk is also bigger than the existing parity disk. In this case you must use your new disk to first replace parity, and then replace your small disk with your old parity disk:

  1. Stop the array.
  2. Power down the unit.
  3. Replace smaller parity disk with new bigger disk.
  4. Power up the unit.
  5. Start the array.
  6. Wait for Parity-Sync to complete.
  7. Stop the array.
  8. Power down the unit.
  9. Replace smaller data disk with your old parity disk.
  10. Power up the unit.
  11. Start the array.

Replace a Failed Disk

This is the case where you have replaced a failed disk with a new disk:

  1. Stop the array.
  2. Power down the unit.
  3. Replace the failed hard disk with a new one.
  4. Power up the unit.
  5. Start the array.

When you Start the array after replacing a failed disk, the system will reconstruct the contents of the failed disk onto the new disk; and, if the new disk is bigger, expand the file system.

You must replace a failed disk with a disk which is as big or bigger than the original and not bigger than the parity disk. If the replacement disk is larger than your parity disk, then the system permits a special configuration change called swap-disable.

For swap-disable, you use your existing parity disk to replace the failed disk, and you install your new big disk as the parity disk:

  1. Stop the array.
  2. Power down the unit.
  3. Replace the parity hard disk with a new bigger one.
  4. Replace the failed hard disk with your old parity disk.
  5. Power up the unit.
  6. Start the array.

When you start the array, the system will first copy the parity information to the new parity disk, and then reconstruct the contents of the failed disk.

Remove One or More Data Disks

In this case, the missing disk(s) will be identified. If only one disk is missing when you start the array, it will be marked as failed. All data disks will be exported (including the missing one), but the system will be running unprotected; that is, if another disk fails you will lose data.

If there are two or more missing disks, you cannot start the array. In this case you must either put the disks back, or reset the array to an unconfigured state.

Reset Array to an Unconfigured State

When the array is Stopped, you can navigate to the Tools tab at the top of the webGui and click New Config. This function clears the array configuration data so that the system treats it as brand new, with all new hard drives. When you Start the array, the system will start a background process to generate the parity information.

In the special case where all the hard drives are new, the format operation will not clear the data areas; it simply generates parity. This can be used when you've added new disk(s) and you don't want to wait around for the clear phase to complete. In this case you could first Reset the array configuration, and then simply Start the array, and the system will re-sync parity, incorporating the new disk(s).

CAUTION: if a disk fails during the operation, you will not be able to rebuild it.

The array configuration data is stored in the file config/super.dat on the Flash. For this reason, you must always have the Flash installed in your server.

Check Parity

When the array is Started and parity is already valid, there is a button in the Array Operation section labeled Check, which will initiate a background Parity-Check function. Parity-Check will march through all data disks in parallel, computing parity and checking it against stored parity on the parity disk. If a mismatch occurs, the parity disk will be updated (written) with the computed data and the Sync Errors counter will be incremented.

The most common cause of Sync Errors is power loss, which prevents buffered write data from being written to disk. Any time the array is Started, if the system detects that a previous unsafe shutdown occurred, it automatically initiates a Parity-Check.

Shares

The Shares page is used to configure shares and share access.

User Shares

This section lists all of the configured User shares.

Note: if User shares are not enabled, then this section is not present.

User shares are a feature of unRAID OS which provides a unified namespace across multiple data disks. User shares simplify storage management by presenting a view of all unRAID storage as if it were one large file system.

When 'User Shares' are enabled, unRAID OS will automatically create a set of shares named after the top-level directories found on each data disk. If the same top-level directory exists on more than one disk, then the exported share will contain all directories/files under that top-level directory on all the disks.

For example, suppose each disk has the following structure:

  • disk1
    • Movies
      • Alien
        • folder.jpg
        • VIDEO_TS
          • VIDEO_TS.IFO
          • VTS_01_1.VOB
          • VTS_01_2.VOB
      • Basic
        • folder.jpg
        • VIDEO_TS
          • VIDEO_TS.IFO
          • VTS_01_1.VOB
          • VTS_01_2.VOB
  • disk2
    • Movies
      • Cars
        • folder.jpg
        • VIDEO_TS
          • VIDEO_TS.IFO
          • VTS_01_1.VOB
          • VTS_01_2.VOB
  • disk3
    • Movies
      • Dejavu
        • folder.jpg
        • VIDEO_TS
          • VIDEO_TS.IFO
          • VTS_01_1.VOB
          • VTS_01_2.VOB


With User Shares enabled, for the above tree we would see this share under 'My Network Places':

//tower/Movies

And it would have the following structure:

  • Movies
    • Alien
      • folder.jpg
      • VIDEO_TS
        • VIDEO_TS.IFO
        • VTS_01_1.VOB
        • VTS_01_2.VOB
    • Basic
      • folder.jpg
      • VIDEO_TS
        • VIDEO_TS.IFO
        • VTS_01_1.VOB
        • VTS_01_2.VOB
    • Cars
      • folder.jpg
      • VIDEO_TS
        • VIDEO_TS.IFO
        • VTS_01_1.VOB
        • VTS_01_2.VOB
    • Dejavu
      • folder.jpg
      • VIDEO_TS
        • VIDEO_TS.IFO
        • VTS_01_1.VOB
        • VTS_01_2.VOB

In the case where the same object (directory or file) exists at the same hierarchy on multiple disks, the User Share will reference the object on the lowest numbered disk. For example, if Movies/Cars existed on both disk1 and disk2, then Cars under the Movies User Share would refer to the version on disk1.

Each time the array is Started, if User Shares are enabled, unRAID OS will regenerate and re-export each top-level directory as a network share.

Allocation Method

When a new User share is created, or when any object (file or directory) is created within a User share, the system must determine which data disk the User share or object will be created on. In general, a new User share, or object within a User share, will be created on the data disk with the most free space. However there are a set of share configuration parameters available to fine tune disk allocation.

The basic allocation strategy for a share is defined by the Allocation method configuration parameter. You may select one of two Allocation methods for the system to use:

Most-Free

In this method, the system will simply pick the disk which currently has the most free space.

High-Water

In this method, the system will pick the disk which currently has the least free space that is still above a certain minimum (called the "high water" mark). Suppose in our example above, we have this situation:

  disk     size    free
  disk1    80GB    75GB
  disk2   120GB   110GB
  disk3    80GB    70GB


The initial high water mark is set to 1/2 the size of the largest disk; in this case, it will be set to 60GB. In this state, disk1 has 15GB of free space above the "high water" mark, disk2 has 50GB, and disk3 has 10GB.

As new objects are created, the system will choose disk3 until the amount of free space on disk3 falls below 60GB. Subsequently, the system will start allocating from disk1 until its free space falls below 60GB. Then it will allocate from disk2 until its free space also falls below 60GB. Once the amount of free space on all disks is below 60GB, a new high water mark is established by dividing the old high water mark by 2.

The advantage of the High-water method is that when writing a series of files, most of the time only one data disk will need to be spun up.
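The selection logic above can be sketched as follows (a simplified Python illustration of the documented behaviour, not unRAID's actual code; minimum-free-space and allocation-limit handling are omitted):

```python
def high_water_pick(free, high_water):
    """Pick a disk per the High-water method described above.

    free: dict mapping disk name -> free space in GB.
    high_water: current high-water mark in GB.
    Returns (chosen_disk, new_high_water).  The mark is halved
    whenever no disk has free space above it.
    """
    while True:
        above = {d: f for d, f in free.items() if f > high_water}
        if above:
            # Least free space among the disks still above the mark.
            return min(above, key=above.get), high_water
        high_water /= 2  # all disks are below the mark: halve it

# The worked example above: disk3 (70GB free) is the disk with the
# least free space above the 60GB mark, so it is chosen first.
disk, mark = high_water_pick({"disk1": 75, "disk2": 110, "disk3": 70}, 60)
print(disk)  # disk3
```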

Split Level

Often media data will be consolidated under a single directory, or directory tree, and during playback the files will be accessed one after another. This is the case with the set of VOB files which make up a DVD movie. In this situation we want all the associated media files to be stored on the same physical disk if at all possible, because we don't want media playback to pause while the disk containing the next file spins up. unRAID OS solves this problem with a configurable allocation parameter called "Split level".

Split level defines the highest level in the share directory hierarchy which can be split among multiple disks. In the Movies share example above, setting Split level to 1 only permits objects created directly under the Movies directory to be allocated to any disk according to the Allocation method. Thus, when we create the Alien subdirectory, it may reside on any of the data disks; however, when we create a file or another directory within the Movies/Alien directory, that object is at level 2, and will be created on whatever disk the Movies/Alien directory actually resides on.

If the share were organized differently, for example according to genre:

  • Movies
    • SciFi
      • Alien
    • Action
      • Basic
      • Dejavu
    • Kids
      • Cars

Then you would set 'Split Level' to 2. This lets the genres spread among all disks, but still ensures that the contents of each actual movie directory stay on the same disk.

If you set the 'Split Level' to 0 for a share, then all directories/files created under that share will be on the same disk where the share was originally created.

If you set the 'Split Level' high, e.g., 999 for a share, then every directory/file created under that share will get placed on a disk according to Allocation method.
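The Split level rule can be sketched in Python (a simplified illustration of the behaviour described above, not unRAID's implementation; the function names and the `allocate` callback are hypothetical):

```python
def choose_disk(rel_path, split_level, parent_disk, allocate):
    """Choose a disk for a new object per the Split level rule.

    rel_path: path relative to the share root, e.g. "Alien/VIDEO_TS".
    split_level: highest level allowed to split across disks
                 (0 = never split, 999 = effectively always split).
    parent_disk: disk holding the object's parent directory.
    allocate: callable implementing the Allocation method.
    """
    level = rel_path.count("/") + 1  # "Alien" is level 1, "Alien/x" level 2
    if level <= split_level:
        return allocate()   # free to be placed on any disk
    return parent_disk      # must stay with its parent directory

# With Split level 1: "Alien" may go anywhere, but anything created
# inside "Alien" stays on the disk where Alien already resides.
print(choose_disk("Alien", 1, "disk1", lambda: "disk2"))               # disk2
print(choose_disk("Alien/VIDEO_TS.IFO", 1, "disk1", lambda: "disk2"))  # disk1
```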

Included and Excluded disk(s)

The last way to control which disks are used by a share is through the Included disk(s) and Excluded disk(s) configuration parameters.

The Included disk(s) parameter defines the set of disks which are candidates for allocation to that share. If Included disk(s) is blank, then all present data disks are candidates. For example, to restrict a share to using only disk1, disk2, and disk3, you would set Included disk(s) to disk1,disk2,disk3.

The Excluded disk(s) parameter defines the set of disks which are excluded from consideration for allocation. If Excluded disk(s) is blank, then no disks are excluded.

When considering which disk to allocate space on for a new object, unRAID OS first checks whether the disk is in the Included disk(s) set, and then checks whether it is in the Excluded disk(s) set.
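Combined, the two parameters act as a simple filter on the candidate set, which can be sketched as (an illustrative Python snippet, not unRAID's code):

```python
def candidate_disks(all_disks, included, excluded):
    """Filter disks per Included/Excluded disk(s), as described above.

    A blank (empty) Included list means all present data disks are
    candidates; any disk in Excluded is then removed from the set.
    """
    candidates = included if included else all_disks
    return [d for d in candidates if d not in excluded]

# Blank Included, disk2 Excluded: disk1 and disk3 remain candidates.
print(candidate_disks(["disk1", "disk2", "disk3"], [], ["disk2"]))
```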

Creating User Shares

To create a new User share:

  1. Click the Add Share button at the bottom of the User shares list.
  2. Enter the new Share name and other configuration and click the Add Share button.
  3. Once a share is created you can set Export and User Level Security parameters under SMB Security Settings.

unRAID OS will select the disk to create the initial top-level share directory according to the configured Allocation method.

User Level Security

User level security is a feature that lets you restrict access to shares according to user name.

If a share security level is set to Public (not enabled), then you do not need to create additional Users. Any user that attempts to connect to a share on your unRAID server is granted access, subject to the Export mode setting on the share.

If a share security level is set to Private (enabled), you will need to enter the list of users who may access the share. When a user attempts to connect to a share on your unRAID server, a dialog box will appear asking them to enter their user name and password before being granted access to shares. In addition, you can specify which users may access each share, as well as restrict access to read-only.

Examples

Suppose we have a share called Movies for which we want everyone on the network to be able to read, but only larry can read/write:

Export: Yes
Security: Secure

User Access
larry Read/Write

Suppose we have a share called Finances which only mom and dad can access:

Export: Yes (hidden)
Security: Secure

User Access
mom Read/Write
dad Read/Write

Further, suppose only mom should be able to change the files:

Export: Yes (hidden)
Security: Secure

User Access
mom Read/Write
dad Read-only

Deleting User Shares

To delete a User Share:

  1. Move or delete the contents of the user share.
  2. Check the 'Delete' box next to the Apply button under Share Settings.
  3. Click the Delete button.

Note: Some operating systems add hidden files to the user share which will prevent you from deleting it. You can identify these files by executing the command below from a console (replace <sharename> with your user share name):

ls -a /mnt/user/<sharename>

On unRAID version 4.7, to delete a User Share, simply clear the Share name field and click Apply. Only entirely empty User Shares may be deleted.

Renaming User Shares

To rename a User share:

  1. Click in the Share name field of the share.
  2. Type its new name, and then click Apply.

Technical Notes:

  • A user share configuration file called config/shares/<sharename>.cfg is stored on the Flash for each User Share (where <sharename> is the Share name). If this file does not exist, then a set of default values is used for the User Share. Whenever a User Share parameter is changed, its configuration file is also updated, or created if it does not exist.
  • Adding a new User Share or changing the configuration parameters of an existing User Share will not break any current connections on other shares. Renaming or deleting a User Share will break all outstanding connections, however. This is because Samba must be stopped in order to rename or delete the top-level directory which is associated with the share.
  • User Shares are implemented using proprietary code which builds a composite directory hierarchy of all the data disks. This is created on a tmpfs file system mounted on /mnt/tmp. User Shares are exported using a proprietary FUSE pseudo-file system called 'shfs' which is mounted on /mnt/user.
  • When an object needs to be created on a selected disk, first the directory hierarchy is created on the disk (if it isn't already in place). When the last file of a particular directory on a disk is removed, the unused part of the directory hierarchy on that disk remains in place.
  • With User Shares enabled, files may still be accessed via the individual disk shares. However, depending on the disk directory hierarchy and user share settings, some operations on a disk share may not be reflected in the user share which includes this disk.

Parameters

  • Share Name
    This is the name of the share. Use only the characters: a-z, A-Z, 0-9, - (dash), and . (dot).
  • Comments
    Optional descriptive text that will appear in the Comments column under My Network Places.
  • Allocation Method
    The method by which the system will select the disk to use when creating a User share, directory, or file. See Allocation method above.
  • Split Level
    The maximum depth in the directory tree which may be split across multiple disks. See Split level above.
  • Included Disk(s)
    The set of disks which will be considered for allocation. Blank means all disks. Disks are specified using the identifier disk1,disk2, etc. Separate each identifier with a comma.
  • Excluded Disk(s)
    The set of disks which will be excluded from consideration for allocation. Blank means no disks.
  • Export Mode
    Specifies the basic export mode of the share. See Export Mode above.
  • Exceptions
    A list of users who are exceptions to the basic export mode of the share: If the export mode of the share is read/write, then this lists users who will have read-only access. If the export mode of the share is read-only, then this lists users who will have read/write access. Separate multiple user names with commas.
    Note: this parameter is present only when User level security is enabled.
  • Valid Users
    A list of users who can exclusively access the share. Blank means all users.
    Note: this parameter is present only when User level security is enabled.
  • Invalid users
    A list of users who may not access the share at all. Blank means no users.
    Note: this parameter is present only when User level security is enabled.

Users

The Users page is used to set a password for the root user and to add/remove Users for your server. User level security is a feature that lets you restrict access to shares according to user name.

If User level security is not enabled, then you do not need to enter a list of users. Any user that attempts to connect to a share on your unRAID server is granted access, subject to the Export mode setting on the share.

When User level security is enabled for a share, you will need to enter the list of users who may access the share. When a user attempts to connect to a share on your unRAID server, a dialog box will appear asking them to enter their user name and password before being granted access to shares. In addition, you can specify which users may access each share, as well as restrict access to read-only.

Users

This section lists each configured user name.

Regardless of whether 'User level security' is enabled, the built-in user name root always appears atop the Users list. If you enter a non-blank password for the root user, then your browser will also prompt you for the password when you attempt to open the webGui. In addition, you will be prompted for a password to log into the console or telnet session.

Add User

To create a new user, scroll to the end of the Users list, enter the new User name and Password (and Retype password), and then click Add User.

Change Password

To change the password of an existing user, just type the new Password (and Retype password) for the user and click Apply.

Remove User

To delete a user, change the User name to blank and click Apply. Note that you can not delete the root user.

Technical Notes

  • All user access restrictions on shares are defined for each share on the Shares page.
  • Each new user is automatically given a unique uid, and unique gid (group name same as the user name). However, all objects (files/directories) created in shares will be owned by root.
  • Only root can access the unRAID webGui, and log in to the system console or telnet session. The configured users do not have actual home accounts on the server.
  • The following files are maintained in the config directory on the Flash when User level security is enabled:

config/passwd - contains user names and encrypted passwords
config/group - contains groups created for users
config/smbpasswd - contains user names and SMB encrypted passwords

User Name

The name should only consist of the characters a-z, 0-9, - (dash), _ (underscore), and . (dot). Please do not use any uppercase letters.

Password

Type anything you want here. Blank is also ok.

Retype Password

Must be the same as what you typed for Password.

Common Procedures

The following section represents common tasks that you will need to perform throughout the life of your unRAID system. This includes expanding your array, replacing failed devices, and adding/removing devices from your cache pool.

Pool Operations

Creating a Cache Pool

Without a pool, data living in the cache is in an unprotected state until moved to the array. Pooling multiple storage devices together ensures that data protection is maintained at all times, whether data is in the cache or the array.

Before we begin

  • You must have at least two storage devices in order to create a cache pool. Most people find that SSDs are ideal for a cache pool.
  • A multi-device cache pool is implemented with BTRFS only.
  • You must have as many device slots available for assignment to the cache as you wish to use with the pool.

Creating your pool

  • Stop the array, if it is not already stopped.
  • Increase the number of cache slots to as many as you have disk devices you wish to assign to your pool (you may have to lower your array slot count to be able to increase your cache slots).
  • Select each device you wish to participate in the pool and assign it to a slot in the cache.
  • With all devices assigned appropriately, start the array.
  • A Format option will appear after the array is started; click the checkbox and button to approve the procedure and initialize your new cache pool.
  • When completed, the pool will be identified on the Main tab along with total available space in it.

Removing a Device from a Cache Pool

  • Stop the array if it is started (go to the Main tab, click the checkbox and button to stop the array).
  • Unassign the device you wish to remove from the slot it's assigned to in the cache pool.
  • Physically unplug the device from the system (detach SATA and power).
  • Start the array.
  • A balance operation will be automatically executed; when this operation completes, an entry will be printed to the log stating 'disk deleted missing'.
  • You can once again stop the array, physically reattach the previously removed device, and assign it for another purpose, such as to the array.