2020 NAS – Plex, nomachine, Crashplan

After about a year and a half with my previous NAS (see here), I decided it was time for an upgrade. The previous NAS had served dutifully, but it was no match for 4K video (I don't have a lot of it), it took forever to transcode files when I wanted to synchronize them with a device using Plex, and it couldn't transcode much of anything in real time. For playing up to four files simultaneously that didn't need to be transcoded, it worked like a champ. But it just couldn't handle all the scenarios I was throwing at it.

Hardware

The major change, of course, was the hardware. Here are the specs for the new NAS:

Case: Fractal Design Define R5 – Mid Tower ATX case
RAM: Corsair LPX 32GB (2x16GB) 3200MHz C16 DDR4 DRAM
GPU: EVGA GeForce GTX 1060 3GB
Motherboard: ASUS AM4 TUF Gaming X570
CPU: AMD Ryzen 5 3600 6-Core
Power supply: Fractal Design Ion+ Platinum 860W PSU
SSD: Samsung 850 EVO 250GB
Hard Drives: 4 WD Blue 4TB 5400 RPM Hard Drives

Since I was repurposing the SSD and hard drives from my previous NAS, I didn’t buy those for this build. But here is the estimated cost of the NAS assuming those were included: $1,357.47 + tax.

(NOTE: Full disclosure, the links above are affiliate links to Amazon.)

A couple of notes on my hardware choices…

The case is amazing. It's big, roomy, and extremely well-designed. It's absolutely silent and does a great job with cable management. It has plenty of room for hard drives, which I absolutely love. It's also understated. I don't need flashing lights on my NAS. I need a discreet box that makes no noise.

My new fileserver. It’s absolutely silent. It may be a little large, but that gives me plenty of space for the components I want.

I had originally considered a different GPU, the PNY Quadro P2000. A lot of Plex forums suggested it was a real beast when it came to hardware transcoding for video. By the time I put this build together, though, they were really hard to find and very expensive. I looked around and the GeForce GTX 1060 had higher benchmarks at a much lower price. My testing data below shows how well it works and whether I made the right choice.

I also considered an AMD Threadripper over a standard Ryzen, but I realized that since this build passes video transcoding off to a GPU (my last one used the CPU), I really didn't need that much CPU power. Again, you can see in the benchmarks below how well this paid off.

The remaining hardware choices were pretty straightforward. The SSD is small because it only houses the OS. All the storage is on the hard drives. I kept the ZFS raid (see details below) and can upgrade it by simply buying bigger hard drives in the future and swapping the smaller ones out, making this NAS fairly future proof for at least the next few years.

Software

I changed very little from the previous NAS when it comes to software. I installed the latest Kubuntu LTS, 20.04. I am running the latest versions of Plex, Crashplan, tinyMediaManager, kTorrent, and nomachine. I am using NFS to access my server directly from other computers within the network. These are all detailed in my previous build.

I also kept my ZFS file system. Conveniently, I was able to transfer the raid right over to my new NAS, which I detail below.

Test Results

To test how well my new build performs, I ran a couple of tests. First, I tried to simultaneously stream a 4K video to 5 devices – my TV, two Google Pixel 3 phones, a Google Pixel phone, and a laptop (all on my home network). The Google Pixel couldn't handle the 4K video, so I ended up streaming a different 1080p video to it instead. My previous NAS couldn't stream even a single 4K video. This one handled all 5 streams like a champ, as shown in the video below:

I also wanted to make sure that my Plex server was using the GPU for converting videos, so I took that same 4K video and set it to synchronize with my phone at 720p. On my previous NAS, converting a 31 GB 4K video to 720p would have taken hours. Plex passed the conversion off to the GPU and it took less than 10 minutes.

To make sure your Plex server is using your GPU, you need to enable it in the settings, under Settings->Transcoder. Check "Use hardware acceleration when available" and "Use hardware-accelerated video encoding."

NOTE: To monitor the CPU, I used Ksysguard, which ships with Kubuntu. To monitor the GPU, I used “nvtop,” which can be installed from most distribution repositories.
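If you want to replicate the monitoring setup, here is a minimal sketch, assuming a Debian/Ubuntu-based system (package names may differ on other distributions):

sudo apt-get install nvtop    # terminal GPU monitor
nvtop                         # watch GPU load while Plex transcodes
nvidia-smi                    # one-off snapshot; a Plex transcoder process should show up here during hardware transcodes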

Headless Monitor Issue

One of the problems I have been struggling with for months now (and had a similar issue with my previous NAS) was setting up a virtual screen/monitor. Here’s the problem…

I actually do the initial installation of the operating system with a monitor plugged into the NAS. But I don't want to keep a monitor sitting out near my NAS/fileserver all the time. I want the NAS to be "headless" – meaning no monitor is attached. Once I get the operating system installed, I first install SSH (just in case I screw something up, 'cause you're always going to screw something up). Then I install my remote management software, nomachine. Once I have nomachine installed, I can unplug the monitor, mouse, and keyboard and take care of everything else remotely. However, I found out with my previous NAS build that the X server that manages sessions on Kubuntu doesn't set up a monitor if it doesn't detect one physically attached. In other words, if I restart the computer with no monitor attached, the X server correctly detects that there is no monitor and doesn't set one up in the X session.

As a result, when I would try to remotely connect to my NAS using a VNC client, the client would have to try to generate a monitor itself, and the result was always problematic – either nothing would show up or it would be really slow and sluggish. I get that the X server doesn't want to spend resources running a monitor that isn't connected. That's a feature, not a bug. But it makes running a headless desktop a real challenge. With my new NAS, it took me about 2 hours to finally figure out a solution. So I don't have to go through this again, I'm going to post the solution here in detail.

My previous NAS didn't have a graphics card installed; I let the CPU do all the processing, which called for a different solution back then (here's the old solution). With my new NAS, I wanted a graphics card so I could pass some of the video processing off to the GPU instead. As noted above, I bought an NVIDIA graphics card, which Kubuntu recognized when I installed the OS, automatically adding the recommended NVIDIA drivers. The graphics card also drives the monitor, and that turned out to be the key to solving my headless monitor issue.

Basically, you need to tell the X server to create a monitor even though one isn't attached. You do this by creating an "xorg.conf" file in the /etc/X11/ directory. Once the NAS is up and running, you can let the NVIDIA X Server Settings application create most of the necessary xorg.conf file for you. Here's what you do. With the NVIDIA drivers and the NVIDIA X Server Settings application installed, launch NVIDIA X Server Settings and go to the second option in that window, "X Server Display Configuration."

Near the bottom of that window is a button that reads, “Save to X Configuration File.” This is nearly the answer to all our problems.

When you click on that, it will pop up a window that will allow you to preview the code it is going to add to /etc/X11/xorg.conf, which is the X server’s configuration file for the monitor. That code will be based on your current monitor – whatever is currently physically attached to the computer. Go ahead and follow the prompt and it will generate the xorg.conf file.
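If you would rather skip the GUI, the nvidia-xconfig utility that ships with the proprietary driver can generate a similar file from the command line. This is just a sketch of that alternative (I used the GUI route described above); it should write /etc/X11/xorg.conf with the first of the two options below already set:

sudo nvidia-xconfig --allow-empty-initial-configuration

You will still want to add the Virtual line described below by hand.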

However, you now need to make two modifications to the file. I use “nano” to edit text files:

sudo nano /etc/X11/xorg.conf

First, you need to add the following to the Device Section of the xorg.conf file:

Option "AllowEmptyInitialConfiguration"

This code tells the X Server that it’s okay to create a screen/monitor without a physical monitor connected. (Got this information from here).

Second, you need to add the dimensions of the virtual screen you want to set up to the “Display” SubSection of the “Screen” Section of the xorg.conf file, like this:

Virtual 1440 900

This creates a virtual monitor/screen with the dimensions indicated. (Got this information from here). Here is the complete xorg.conf file I am using for my NAS:

# nvidia-settings: X configuration file generated by nvidia-settings
# nvidia-settings:  version 440.64

Section "ServerLayout"
    Identifier     "Layout0"
    Screen      0  "Screen0" 0 0
    InputDevice    "Keyboard0" "CoreKeyboard"
    InputDevice    "Mouse0" "CorePointer"
    Option         "Xinerama" "0"
EndSection

Section "Files"
EndSection

Section "Module"
    Load           "dbe"
    Load           "extmod"
    Load           "type1"
    Load           "freetype"
    Load           "glx"
EndSection

Section "InputDevice"
    # generated from default
    Identifier     "Mouse0"
    Driver         "mouse"
    Option         "Protocol" "auto"
    Option         "Device" "/dev/psaux"
    Option         "Emulate3Buttons" "no"
    Option         "ZAxisMapping" "4 5"
EndSection

Section "InputDevice"
    # generated from default
    Identifier     "Keyboard0"
    Driver         "kbd"
EndSection

Section "Monitor"
    # HorizSync source: edid, VertRefresh source: edid
    Identifier     "Monitor0"
    VendorName     "Unknown"
    ModelName      "Acer AL1917W"
    HorizSync       31.0 - 84.0
    VertRefresh     56.0 - 76.0
    Option         "DPMS"
EndSection

Section "Device"
    Identifier     "Device0"
    Driver         "nvidia"
    VendorName     "NVIDIA Corporation"
    BoardName      "GeForce GTX 1060 3GB"
    Option	   "AllowEmptyInitialConfiguration"
EndSection

Section "Screen"
    Identifier     "Screen0"
    Device         "Device0"
    Monitor        "Monitor0"
    DefaultDepth    24
    Option         "Stereo" "0"
    Option         "nvidiaXineramaInfoOrder" "DFP-0"
    Option         "metamodes" "nvidia-auto-select +0+0"
    Option         "SLI" "Off"
    Option         "MultiGPU" "Off"
    Option         "BaseMosaic" "off"
    SubSection     "Display"
        Depth       24
	Virtual 1680 1050
    EndSubSection
EndSection

With the above xorg.conf file (which is located in /etc/X11/, so, /etc/X11/xorg.conf), I can then shutdown my NAS, unplug the mouse, keyboard, and monitor, then restart it and use nomachine to completely control the NAS. Here’s how it looks when I pull it up:

This approach will only work if you have an NVIDIA graphics card. If you don’t, you’ll need to take a different approach. This site may point you in the right direction.
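If you want to double-check from an SSH session that the virtual screen actually came up after a headless reboot, the X server log is a quick place to look. Treat this as a rough check – the exact wording of the log line varies by driver version, and on some setups the log lives under ~/.local/share/xorg/ instead:

grep -i "virtual screen" /var/log/Xorg.0.log
DISPLAY=:0 xrandr --query | head -n 5   # assumes the X session runs on display :0 and you are the session's user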

UPDATE 2020-07-13: Back to XFCE

Well, I'm back to using XFCE as my desktop environment on my NAS/file server. An update completely broke KDE: it threw errors every time I tried to reboot the file server and wouldn't even show the desktop. KDE is just not the right option for a NAS/file server. I had the same issue with my previous file server. Lesson learned. From now on, ultra-lightweight XFCE will be my desktop environment of choice for my file server.

How to install XFCE. First, install tasksel:

sudo apt-get install tasksel

Then run tasksel and select Xubuntu desktop:

sudo tasksel

When prompted, select “lightdm” as your display manager of choice:

Go ahead and restart and you’ll be good to go. (NOTE: I’m still using the same xorg.conf file and it’s working.)

Also, I had to manually remove the NVIDIA drivers then reinstall them with the monitor connected.
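For reference, this is roughly what that removal and reinstall looks like on Kubuntu 20.04; treat it as a sketch, since the recommended driver package on your system may differ:

sudo apt-get purge '^nvidia-.*'    # remove the existing driver packages
sudo ubuntu-drivers autoinstall    # reinstall the recommended driver (with the monitor attached)
sudo reboot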

ZFS Transfer

As I detailed in my previous NAS build, I opted to go with ZFS for storing my files using a raidz2 array (4 x 4 TB drives). It provides, IMO, the optimal balance of speed and redundancy. With just over 4 TB of files on the ZFS array in my old NAS, I really didn't want to copy all of those files over to the new one. As it turns out, ZFS lets you move the physical disks to a new machine and re-import the pool there. Hallelujah!

On the old NAS, I shut down all connections to my NAS from external devices, restarted the machine, and then used the following command:

sudo zpool export ZFSNAS

This basically closes out the ZFS pool so it is ready to be transferred to another machine. I then physically removed all four 4 TB drives from the old NAS, moved them to the new one, and got everything ready to import the pool on the new NAS. I installed the following packages first:

sudo apt-get install zfsutils-linux nfs-kernel-server zfs-initramfs watchdog

The above packages also installed the following packages:

libnvpair1linux libuutil1linux libzfs2linux libzpool2linux zfs-zed keyutils libnfsidmap2 libtirpc-common libtirpc3 nfs-common rpcbind

With those installed, I checked to see if I had any pools (I didn’t, of course, I hadn’t set any up):

zpool status

Then, I used the following command to look for the pool that was spread across the four hard drives:

sudo zpool import

Beautifully, it immediately found the pool (ZFSNAS) and gave me a bunch of useful information:

   pool: ZFSNAS
     id: 11715369608980340525
  state: ONLINE
 status: The pool was last accessed by another system.
 action: The pool can be imported using its name or numeric identifier and the '-f' flag.
    see: http://zfsonlinux.org/msg/ZFS-8000-EY
 config:

        ZFSNAS      ONLINE
          raidz2-0  ONLINE
            sdd     ONLINE
            sdb     ONLINE
            sde     ONLINE
            sdc     ONLINE

Honestly, I cheered when I saw this. It was going to work. I then used the following command to import my pool:

sudo zpool import -f ZFSNAS

Then, just to be sure everything worked, I checked my pool status:

zpool status

And up popped my zpool. It did tell me that I needed to upgrade my pool, which I proceeded to do. And, with that, my pool from my old NAS was transferred to my new NAS intact. No need to copy over the files. ZFS is definitely the way to go when it comes to NAS file storage. I’m increasingly happy with ZFS!
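For anyone following along, the pool upgrade it asked for is a single command. One caveat worth knowing: once upgraded, the pool may no longer be importable on systems running older ZFS versions.

sudo zpool upgrade ZFSNAS
zpool status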

NOTE: I bought a 10 TB external drive and copied everything onto it first, just to be extra cautious. Turns out, I didn't need to do that. But, better safe than sorry.

Mounting Fileserver Across the Internet

This trick blew my mind. I am using EasyDNS so I can VNC into my fileserver from wherever I am in the world (since I don't have a static IP from my ISP). Nomachine works like a champ across the internet. But I had a crazy thought the other day: "Wouldn't it be nice if I could actually mount my fileserver directly onto my work computer?" I don't typically need to do this, but I have been working with some files on my fileserver for a project and wanted direct access. Turns out, you can (thank you StackExchange). And it's so easy it will blow your mind. Linux rocks!

Since I have already set up SSH access on my fileserver, all I needed to do was learn how sshfs works, as I already had all the packages installed. (If you haven't set up SSH on your fileserver and opened the relevant ports on your home network, do that first.) Then, on my computer at work, I created a new folder in my home directory (/home/ryan/fileserver/). I then used the following command at the terminal to mount my fileshares on my work desktop:

sshfs ryan@myfileserver.easydns.org:/directory/to/share /home/ryan/fileserver

You obviously need to change the username for your fileserver (what comes before the "@") and the address of your fileserver – a dynamic DNS hostname or IP – (what comes after the "@"). You also need to know which directory on your fileserver you want to mount (what comes after the ":") and the local folder where you want to mount it (the last piece). Once you have everything set up correctly, hit enter and it will ask for your password. Once I entered my password, all of the files on my fileserver were securely shared over SSH to my work desktop:

Those are the main directories from my fileserver at home.

Note, this isn't a permanent share. Once you shut down your work computer, the share will go away. But it's just a quick repeat of that command and you'll have the files mounted right back on your computer. Also, the speed with which you can access the files will depend on your home and work networks. Luckily, mine are both very fast, so it's basically like being on my home network as I work with the files I need. Absolutely amazing!
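When you're done, unmounting the sshfs share is just as easy. sshfs is FUSE-based, so either of these should work, depending on your distribution:

fusermount -u /home/ryan/fileserver
# or
sudo umount /home/ryan/fileserver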


Building My Own NAS (with Plex, Crashplan, NFS file sharing, bitTorrent, etc.)

NOTE: As of 2020-06-22, I have a new NAS build. You can read about it here.

For about the last seven years (since 2012), I've been using a Synology NAS (Network Attached Storage) device in my house as a central repository for files, photos, music, and movies. It has generally worked well. However, there have been a number of serious problems with the Synology NAS I bought (a DS413j). First, the amount of RAM is limited and cannot be upgraded. Second, the CPU (a Marvell Kirkwood ARM processor) is in the same situation. While the box is small and draws very little power, the inability to upgrade the hardware (other than hard drives) means that I'm basically stuck with what was considered cutting edge back in 2012 when I bought it. In practical terms, these limitations have meant that I have not been able to run Crashplan on my Synology box since the first year I owned it, because I have more than a terabyte of files I am backing up and the 512 MB of memory cannot handle that (I created a workaround where I run Crashplan on a different computer but back up the files on the Synology box over the network). It also means that I haven't been able to run the latest version of Plex for the last 4 years because Plex stopped supporting the CPU in my Synology box. This eventually came to a head about a month ago when the latest Plex client on my Roku stopped working with the very outdated version of Plex server on my Synology NAS. As a result, I was no longer able to serve videos to my Roku, which was one of the primary reasons I even have a NAS in the house.

The convenience of having a pre-built NAS with a web interface has been nice. There is a lot to like about Synology products. However, you are locked into their hardware and their software and are restricted by their timelines for upgrading to the latest software. Additionally, my Synology NAS, which is a 4-bay device, has a problem with one of the bays that actually ended up destroying 2 hard drives, so I only have 3 usable hard drive bays. And, Synology devices are crazy expensive. Given my use scenario, paying as much as they now charge for a high-end NAS that might temporarily meet my needs doesn’t make a lot of sense.

So, I finally decided that it’s time to go back to building my own NAS (I had one for a short while before). As I started researching what I wanted for my Do It Yourself (DIY) NAS, I basically went down a rabbit hole of options: Which Operating System (OS)? What hard drives? What file system for the hard drives? Do I use a RAID? What software do I need to install? What CPU and motherboard? How much memory? In this rather lengthy post, I will detail what I ultimately decided to do and why.

DIY NAS: OS (Operating System) Options

As a long-time Linux user, I was never going to consider anything but a Unix-based system. That means I never even considered Windows as an option. Those who use Windows could certainly consider it, but I have no interest in using Windows for my NAS. That, however, didn't narrow my options that much. With Unix-based systems, all of the following are real contenders: Unraid, FreeNAS, Amahi, Open Media Vault, Rockstor, Openfiler, NAS4Free, or just the Linux distribution of my choice (currently, Kubuntu 18.10). I spent quite a bit of time considering these OSes, all but the last being designed specifically as OSes for NAS boxes. The more I thought about these pre-packaged options, the more I realized that they fall in between the proprietary OS of my Synology box and the OS I use on all my other computers (Kubuntu), and they are all crippled (to varying degrees) by the same problem I had with Synology – I am beholden to the companies/people who maintain the software to release new packages for the software I need to run: Crashplan and Plex. This is a particular concern with Amahi, Rockstor, Openfiler, and NAS4Free, since they all require the installation of software through their "packages." That would not be the case if I just went with a standard Linux distribution that gets regular updates (e.g., Kubuntu). I can install pretty much whatever I want on such an OS, which means the NAS will be whatever I want it to be.

Unraid (which isn't free) does let you install other software on top of its OS, so that might not be a problem. But I'm also not convinced that I need Unraid for drive management (as I'll detail below when I discuss how I organized my hard drives). FreeNAS is probably the most appealing OS for this, as it really is just an OS and you can install what you want on top of it. My biggest concern is that FreeNAS is BSD-based, which really shouldn't be a concern, but I have limited experience with BSD (tons with Linux), and I wasn't certain what FreeNAS would give me over a standard distribution.

There very well may be advantages to one of these specific OSes for NAS devices that I am missing. But, after having suffered under the proprietary lock-in and inability to upgrade my software under Synology, I realized that I was very wary of getting locked into a pre-packaged OS that would mean I couldn’t install what I want to install. Ultimately, I decided to just install Kubuntu 18.04 on what would become my new, DIY NAS box. Some might note that I should have just gone with Ubuntu server, as it reduces the amount of CPU power and memory since I wouldn’t have a graphical front end (KDE). I considered it. But that would also mean that I would have to manage the entire device through the command line or try to find some other software (e.g., webmin, which won’t do everything I need) that would allow me to monitor my device via a web browser. Since I’m most comfortable with a graphical desktop environment, why not just go with what I know and what works for me?

Final choice on OS: Kubuntu 18.04, which is a long-term support release (important for future upgrades to the OS).

DIY NAS: Hardware Options (CPU, motherboard, RAM, etc.)

When Plex stopped working with our Roku, my wife quickly noticed. We all use Plex on a regular basis to watch our movie and TV collection. When I told her what the problem was and said that I thought I was going to need to replace our NAS, she asked me how much it was going to cost. Being honest, I told her it could be fairly expensive, depending on what I decided to build. Luckily, we are in a position financially where I could spend upwards of $1,000 on a new NAS if I needed to, and she said that would be fine.

I spent quite a bit of time considering hardware. The biggest question was really whether I wanted to go with dual Xeon processors to build out a real server or go with what I am most familiar with, AMD CPUs like Ryzen (I build all my own desktop computers). The dual Xeon approach made a lot of sense as they can manage a lot of concurrent threads. The latest processors from AMD (and Intel, but I'm an AMD guy), like Threadripper, also handle a lot of simultaneous threads. However, these are all pretty expensive processors, and even somewhat older Xeons can be a little pricey if I'm trying to get something that was made in the last few years. Additionally, the motherboards that go with these can be almost as expensive. I debated these options for quite a while.

Then, I had an idea. I had an older computer lying around from my latest upgrade (I usually upgrade my desktop, give my immediate past desktop to my wife; during the last one, I ended up upgrading the entire device, case included, which left a third desktop computer sitting unused). I wanted to test out Plex and Crashplan on that computer just to make sure it worked. It’s an older AMD Athlon II X4 620 that has 24 gb of RAM and room for 8 SATA devices on the motherboard. Once I got everything up and running on this test system, I realized that, given my use scenario, I actually didn’t need the latest Threadripper AMD CPU or even dual Xeon processors. I don’t need concurrent ripping of 4k or 8k video files. I don’t even have any 4k video files (my main TV has 4k capabilities, but I rarely use them). Most of my video is 1080p, which looks great. I tested the system out streaming a video to my TV and backing up files and it worked great. So, I decided to re-purpose this older computer and make it my new NAS.

Once I got everything set up (see below for details), I wanted to see just how much my NAS could handle, especially since the old Synology NAS struggled with streaming just a single HD stream. To run my real world test, I started streaming audio from three Amazon Echo devices in three different rooms in the house via Plex, started streaming a 1080p video file to my phone on the home network, and started streaming a 1080p video file to my main TV. This screenshot shows the Plex server with 5 simultaneous streams:

Simultaneously streaming to five devices on the home network.

The big question was whether my 4 cores could handle this. Here’s how things looked:

All four CPUs were never maxed out simultaneously.

Conclusion: My older AMD Athlon X4 620 was more than up to the task. With two simultaneous 1080p video streams and three mp3 streams, the server was working, but it was far from maxing out. Since we have just one TV in the house, the odds of us ever needing to simultaneously stream more than two videos are almost zero, and that’s true even considering that I am allowing some of my siblings access to my Plex server.

What does this mean for the average person building a DIY NAS? Unless you have, let’s say, 10 4k TVs in your house and you want to simultaneously stream 4k videos to all of them, you probably don’t need the latest and greatest CPU in your NAS. In all likelihood, I’ll let the system I have run for a year or two (unless there is a problem), and then upgrade my box, my power supply, my motherboard, and my CPU (which will also require upgrading my RAM). By that time, Ryzen Threadrippers will have dropped in price enough that it won’t cost me $2,000 to build a beast of a NAS that can serve multiple 4K streams at the same time. I’m not sure I’ll ever need that much bandwidth or power given my use scenario, but I can imagine needing a bit more power in the future.

Where you probably do not want to skimp is on RAM. As I have been transferring my files and media to my new NAS from the old one, I have seen my RAM usage go up to as high as about 16 gb at different times. That has been the result of large bandwidth file transfers, Plex indexing the media files (video and audio), and simultaneous streaming of files. In short, you can go with an older, not crazy expensive, multi-core CPU for your NAS and be fine, but make sure you’ve got at least 16 gb of RAM, maybe more.

DIY NAS: Hard Drive Options

Where I got the most bogged down in my research was in deciding on how many hard drives to use, what file system to use, and whether or not I should use a RAID for the drives. In my Synology box, I had just two 4 terabyte hard drives in a RAID 1 (or mirror) arrangement. While I am getting close to filling up the 4 terabytes (I have about 3 terabytes of photos, movies, files, music), I was more concerned with not losing data. With a RAID 1, all of my data was backed up between the two hard drives. Thus, if I lost a drive, I would still have a copy of all of the data.

Additionally, I pay for unlimited storage through Crashplan. Everything on my NAS is backed up off-site. This way, I have a local copy of all of my files (the mirrored drive) and an off-site copy of all my files (in case there is a fire or catastrophic failure). Since I do back up everything off-site, I could theoretically just go for speed with, for instance, a RAID 0 that stripes all my data between drives. But a failure would mean I would lose all my local data and I would have to restore from the off-site backup (which would take quite some time given the amount of data I have).

As I considered the options for hard drives, my initial thought was to use the same system but increase the size of the hard drives. There are now hard drives with capacities in excess of 10 tb. They are expensive, but two 10 tb hard drives in a RAID 1 would basically replicate what I was doing but give me plenty of room to grow my photo, video, and file collection. I was just about to pull the trigger on this plan until I realized that another option might make more sense.

RAID 6 requires at least 4 hard drives, but offers a number of advantages over RAID 1 (e.g., similar speed but better redundancy, as well as the possibility of hot-swapping hard drives). And, if I went with RAID 6, I could actually buy cheaper 4 or 6 TB hard drives instead of 10 TB hard drives and end up with just as much or more space than I would get with two 10 TB hard drives, for less money. From the many articles I read on this, it seems that lots of corporations use RAID 6 given the redundancy and speed advantages. I ultimately decided to go with a RAID 6 of four 4 TB hard drives. Effectively, that meant I would be doubling my storage (8 TB) while improving my redundancy.

I still needed to figure out what file system to use. Having worked with Linux for over a decade, I typically use EXT4 on all of my computers. It doesn’t have any file size limitations, obviously, and does everything I need. However, I had been hearing about ZFS for a while (as well as BTRFS) and what I had heard made me think that ZFS was really what I should be running on my NAS. While it may slow down my NAS a little bit, the benefits to preventing bit rot and the redundancy it includes meant the impact on performance would likely be worth it. However, ZFS doesn’t come as a standard file system option in Linux distributions. I have used various disk partitioning programs enough to know what the standard file systems are that ship with Linux distributions and ZFS is not one of those. I was a little worried about what might be entailed in installing ZFS and setting up ZFS in a RAID6 (in ZFS terms, it’s called RAIDZ2). Once I found this handy guide, I realized that it wasn’t that difficult and was something that I could easily do. Before I headed down this path, I tested the guide with a spare drive I had lying around and it really was simple to set up ZFS as the file system on a drive. That test convinced me that ZFS was going to be the route to go for my file system on my drives. Thus, my four 4 tb RAID6 is actually a RAIDZ2 with a functional capacity of 8 tb of storage, but is actually expandable should I want to do so. (Also useful for ZFS is this post on scheduling regular scrubs on the drives.)
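For anyone curious what that setup looks like in practice, here is a minimal sketch of creating a RAIDZ2 pool on an Ubuntu-based system. The pool name (ZFSNAS) and device names are the ones that show up elsewhere in this post, but on a real build you should use the stable /dev/disk/by-id/ paths instead of sdb–sde, and triple-check the device names first – zpool create will happily wipe the wrong disks:

sudo apt-get install zfsutils-linux
sudo zpool create ZFSNAS raidz2 sdb sdc sdd sde
zpool status ZFSNAS
# optional: schedule a monthly scrub via cron, e.g.
# 0 3 1 * * /sbin/zpool scrub ZFSNAS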

Update: 2019-08-18 – I restarted my file server after installing system updates and my ZFS pool was missing. That was terrifying. I finally found a solution that was a little nerve-wracking but worked. Somehow, the mount point where my ZFS pool was supposed to mount either got corrupted or had something in it (for me, it was /ZFSNAS). I renamed that folder:

mv /ZFSNAS /ZFSNAS-temp

I was then able to import my ZFS pool with the following command:

sudo zpool import ZFSNAS

Apparently, this is the result of an upgrade to the ZFS software. This happened again right after I rebooted. There is a systemd target that has to be enabled in order for pools to mount automatically after reboots:

sudo systemctl enable zfs-import.target

Now, when I reboot, my ZFS pool comes right back up.

Update: 2019-11-13 – I installed an update and restarted, and now my ZFS pool has disappeared again. The solution above brought it back, but enabling zfs-import.target no longer brings it back up on the next reboot. I tried enabling all of the following (from here):

sudo systemctl enable zfs.target
sudo systemctl enable zfs-import-cache
sudo systemctl enable zfs-mount
sudo systemctl enable zfs-import.target

None of those worked. Not sure what is going on but I’m pretty sure it’s tied to ZFS changes in the latest Ubuntu/Debian kernel. Argh! I also had to do the following after I brought the pool back up to make sure it was shared across the network:

sudo systemctl restart nfs-kernel-server

NOTE: It also looks like I set up my ZFS pool in a problematic fashion. I created one pool but no datasets, whereas I should have had one pool with multiple datasets. Datasets are where the files should be stored, not the pool directly. I'm now struggling to set up automated snapshots.
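For completeness, creating datasets inside an existing pool is straightforward; the dataset names below are just hypothetical examples, and existing files would still need to be moved into them:

sudo zfs create ZFSNAS/media
sudo zfs create ZFSNAS/photos
zfs list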

DIY NAS: What software to install?

I have been running Plex for about 6 years to manage my media collection. With my Synology box, the limited RAM and not very good CPU meant that it actually had a pretty hard time managing my media collection. It would transmit videos across my network to my ROKU device, but only if they were in a specific format (mp4, which was probably a limitation of the Synology box as that is not a problem with Plex or ROKU). Setting up a video slide show was basically impossible on my ROKU from my Synology box as the CPU and RAM just couldn’t cut it. While I could play my music across the network, the Synology box would also not play nice with my music in Plex. As a result, I used Plex just to watch videos but not for anything else, even though it is a great way to manage all sorts of media – video, photos, and music. Thus, one of the requirements of my new DIY NAS would be that it has to take advantage of all the features that Plex has to offer (I have a lifetime Plex Pass). So, Plex was the core requirement for my NAS.

But I also wanted my NAS to run Crashplan. As noted above, my Synology box didn’t have enough RAM to run Crashplan, which meant I had to run it on my desktop to backup the files on my NAS to Crashplan. It was a hack to get around the limitations of the Synology NAS (FYI, you need about a gigabyte of RAM for every terabyte of files you want to back up to Crashplan). Plex and Crashplan were minimum software requirements. My DIY NAS had to be able to run both of those and run them well.

I do occasionally download stuff via bitTorrent (mostly Linux distributions). So, having a bitTorrent client installed would be nice. Kubuntu comes with one, kTorrent, which was fine.

The last piece of software I really needed was a way to control my NAS remotely. The goal was to basically have it run headless, stick it in a corner in my office, and just let it do its thing. I can control the Plex server through the Plex website, but to do everything else I would need VNC software. I was actually surprised at how difficult it was to find software that would let me control my NAS. I tried Remmina first and had no luck. The interface was clunky and not intuitive, and I was only able to connect successfully about 1 time in 10. I finally went with nomachine. (I initially went with TeamViewer, but they claimed I was using their software for commercial purposes and cut me off. Fuck them.) Also, in order to make this work reasonably well, you need to install a dummy video driver for X11 and create an X configuration file for the dummy driver, as detailed here. Otherwise, the VNC software will be slow as molasses.
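For a CPU-only box like this one, the dummy-driver setup boils down to installing the xserver-xorg-video-dummy package and pointing X at it. The snippet below is only a sketch of the sort of configuration the linked guide walks through – the resolution, identifiers, and file location are illustrative, not exactly what I used:

sudo apt-get install xserver-xorg-video-dummy

# /etc/X11/xorg.conf (or a file under /etc/X11/xorg.conf.d/) – illustrative only
Section "Device"
    Identifier "DummyDevice"
    Driver     "dummy"
    VideoRam   256000
EndSection

Section "Monitor"
    Identifier  "DummyMonitor"
    HorizSync   28.0-80.0
    VertRefresh 48.0-75.0
EndSection

Section "Screen"
    Identifier   "DummyScreen"
    Device       "DummyDevice"
    Monitor      "DummyMonitor"
    DefaultDepth 24
    SubSection "Display"
        Depth   24
        Virtual 1440 900
    EndSubSection
EndSection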

I also installed SSH so I could access the NAS remotely in case the GUI VNC programs were having issues. I followed this guide.
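On Kubuntu/Ubuntu, that guide essentially boils down to a couple of commands (a sketch, assuming the stock openssh-server package):

sudo apt-get install openssh-server
sudo systemctl enable --now ssh
sudo systemctl status ssh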

(Update: 12/7/2019. I strongly encourage the use of tinyMediaManager for managing your videos and TV shows. It’s the slickest software I’ve found for doing this and runs great on Linux.)

DIY NAS: NFS Fileserver

This guide made it very easy.

DIY NAS: Specifications

Here’s the final rundown of what I put together:
OS: Kubuntu 18.04 (long term support)
Hard Drives: ZFS filesystem; four 4 terabyte drives in a RAIDZ2 for 8 tb of storage
CPU: AMD Athlon X4 620
Motherboard:
RAM: 24 GB
Software: Plex, Crashplan, NFS, nomachine, kTorrent

UPDATE – 2019-04-24

It’s been almost 4 months running this NAS box. Generally, it’s worked really well. I have, however, run into two problems.

The first one was pretty recent (about 2 weeks ago). I'm not sure exactly what happened, but I'm assuming some update to Kubuntu 18.04 over the last month led to the screen going black when I would VNC into the box. The server was still working, but I couldn't get it to display anything via VNC until I restarted the box manually (SSH on the NAS box would have given me another option for that). I'm still not 100% sure what the problem was, but it was somehow related to KDE. I ended up installing Unity/Gnome as the desktop environment and now the problem is gone. I solved that about a week ago, and the box has been running without a hitch since then.

The only other issue I will note is that there have been two times when I have been streaming shows on my NAS box through ROKU where my box had to actually transcode a 1080p file. With the hardware I have inside it, the 4-core processor was not up to the task. It ended up stopping playback several times and buffering a lot. It can easily stream a 1080p file in almost any format (both Plex and Roku can handle almost every format) and can even manage to stream two of them simultaneously (tested on my TV and phone simultaneously), but the two movies it had issues with were in a weird MKV format that my Roku couldn’t handle. I ended up ripping them into a different MKV format and, voila, problem solved. The point being – my NAS box is a little underpowered. But, a year or two from now, the price of an AMD Threadripper will be half of what it is now. I’ll swap that in and will have all the power I need.

So, 4 months into this, it’s working extremely well. I’d give it a 99 out of 100.

Update 06-21-2020:

As I predicted, after about 1 1/2 years, it was time to upgrade my NAS. I started a new post detailing my new build and what I changed here.


Linuxmint or Ubuntu: Crashplan backup using headless Synology NAS

If you’re using a Synology NAS box and would like to back up your files to offsite storage service Crashplan (which is relatively inexpensive), there is a relatively easy way to do this.  However, you need to think about the Crashplan software as having two components.  There is the backup engine, or the software that communicates with Crashplan’s servers and sends the files you want backed up to their servers.  Then there is the “head” or user interface which tells the engine what to back up and when.  If you’re backing up your Synology box, then the engine will go on there.  You can use any OS for the head, but I’m running Linuxmint and here is how I got it to work.

1) On your Synology box, first you need to download and install Java SE for embedded packages following these instructions.  Then you need to download and install the Crashplan package for Synology NAS following these instructions. Make sure you choose the correct version of Crashplan from the link above.  You don’t want to install CrashPlan Pro if you’re just running CrashPlan – it won’t work.

2) Once you’ve got both of those up and running on your Synology box, you should see something like this:

(Screenshots: crashplan-01, crashplan-02, crashplan-03 – the CrashPlan package up and running on the Synology box.)

3) The next step is to install CrashPlan on your desktop computer so you can control the engine on your Synology box (i.e., tell it what to back up and when). First, download the CrashPlan software for your OS here. Once you've downloaded the .tgz file, uncompress it to a folder (it doesn't really matter where; your desktop or home folder will work). Then open a terminal and navigate to where you unpacked the CrashPlan files. At the terminal, type:

sudo bash install.sh

You’ll then need to follow the prompts, but it should install the software on your computer.

4) Now is the tricky part. If you follow the directions on CrashPlan's website to connect the "head" to the "headless engine," it won't work. Their directions say to edit the file /usr/local/crashplan/conf/ui.properties by uncommenting the line #servicePort=4243 and changing it to servicePort=4200, and then to set up an SSH tunnel. Here are their directions. I tried a lot of variations of this and none of them worked. But you know what did? Editing a different line. In that same file, uncomment the line that says #serviceHost=127.0.0.1 and change it to serviceHost=192.168.2.100 (i.e., the IP of your Synology box). Save the file and close it.
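In other words, this is a sketch of how the relevant line in /usr/local/crashplan/conf/ui.properties ends up looking (substitute your own Synology box's LAN IP):

# /usr/local/crashplan/conf/ui.properties – the one line that needed changing
serviceHost=192.168.2.100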

5) You should now be able to open the CrashPlan GUI and control your Synology box remotely.


LinuxMint or Ubuntu: How to Automount Synology Shares

UPDATE:  As of Ubuntu 13.04, these directions no longer work.  I figured out a way to get this to work, however.  See the updated directions at the bottom of this post.

——————Old Directions————————————

If you’d like to share your network attached storage from a Synology file server with your Linuxmint or Ubuntu machine and have it appear as just another folder, you can set the Synology unit to automount on your computer.  These steps assume that you have already set up your Synology unit and are sharing at least one folder over the network.  It also assumes that you are already connected to the same local network as your Synology unit.

To set up the automount, do the following:

1) Install the package nfs-common, either using synaptic or the command line:

(Screenshot: synology-01 – installing nfs-common with Synaptic.)

(from the command line: sudo apt-get install nfs-common)

2) Open a console or terminal and type “ifconfig” to find out your IP address on your local network.

(Screenshot: synology-02 – ifconfig output showing the local IP address.)

 

Let’s assume your IP on the local network is 192.168.2.1 (as shown in the figure).

3) Open the Synology interface and then open the Control Panel:

(Screenshot: synology-03 – the Synology Control Panel.)

4) Click on "Shared Folder," which will show you a list of your shared folders. Synology comes with the ability to share folders using the NFS protocol. Access is restricted by IP address: you add the IP address of each computer that is allowed to access files on the Synology NAS. Once you see the shared folders, select the folder you want to share, then click on "Privileges" and then "NFS Privileges."

(Screenshot: synology-04 – the Shared Folder list and NFS Privileges option.)

 

5) In the next window, click on “Create” and then add the IP address of the computer with which you want to share that folder.  You should also decide what privileges you want to grant that computer.  If you grant it read/write privileges, that computer can modify files.  If you grant it the read privilege, that computer can only read files.

(Screenshot: synology-05 – adding the client's IP address and choosing its privileges.)

6) Once you’ve done that, you should be able to access the shared folder over your network.  However, what we want to do is make any shared folders automatically mount over the network every time you start your computer.  To do so, you’ll need to do two more things.  First, create a folder on your computer to map the shared folder to.  An ideal location is in your home folder since you already have read/write privileges there.  So, for instance, if you are sharing photos over the network, create a folder in your home directory called “NASphotos” by doing the following from the terminal (or just create it in a file explorer): mkdir /home/user/NASphotos

7)  Next, you’ll need to edit your /etc/fstab file.  To do so, open a terminal and type: sudo kate /etc/fstab

(You could also use gedit or some other text program, like nano.)

8) This should open the /etc/fstab file in a text editing program.  You’ll need to add the following lines to your /etc/fstab file:

Any line that starts with the pound sign “#” is a comment line.  I like to add a comment line so I know what my command is doing.  Here’s the line I add:

# automount file synology

Next is the line that actually does the work:

     192.168.2.100:/volume1/photos /home/user/NASphotos nfs nouser,rsize=8192,wsize=8192,atime,auto,rw,dev,exec,suid 0 0

You'll need to change several parts of that line. The IP is the IP of your Synology unit on the network. If you have a different name for the volume on your Synology unit, change "volume1" to whatever it is. Replace "photos" with the name of the shared folder on your Synology unit. Replace "user" with your username. And replace "NASphotos" with whatever folder you created in step 6.

Save the file and close it.

9) Now, assuming you've done everything correctly, type the following into a terminal to mount the shared folder: sudo mount -a

Your shared folder should now show up in your file explorer (e.g. Dolphin) and should do so every time you start your computer.  Depending on the privileges you granted yourself on the Synology NAS, you should be able to read and/or write whatever files you’ve stored on the Synology unit as if they were on your own computer.

 

——————————————New Directions—————————————

UPDATE: As of Ubuntu 13.04, the directions I gave above stopped working. After tinkering with the settings for a while, I found a way to make it work.  Follow all of the above steps.  However, when you edit the fstab file, try using the following format for each share you want to mount:

192.168.2.100:/volume1/photos /home/user/NASphotos nfs rw,hard,intr,nolock 0 0

This is working for now.  This was based on this article on the ubuntu help site.
