Linux – Setting Up FTP/SFTP Restricted Access for User

I run a server (Ubuntu 18.04) that hosts about a dozen websites using Linode. Most of the sites are run using WordPress and are my own or sites I manage for friends or family. I do, however, host one for a colleague who actively develops online content for that site.

As WordPress has developed, the ability to upload various file types has slowly been removed for security reasons. As a result, certain types of files now have to be uploaded using a different approach. I can do so using SSH, but GUI FTP/SFTP software would be easier in this situation, as the person responsible for managing that site doesn’t have much experience managing a website. I explained to this person, we’ll call her Sharon, that it would be possible for her to upload these files herself using FTP/SFTP. She was worried because she doesn’t know what that is or how to use it, but I explained it and, hopefully, she’ll grow more comfortable with it.

However, I don’t want a novice to gain access to all the files on my server. So, I was faced with the question of how to set up an FTP/SFTP account for someone that is restricted to just one folder – a folder where she can upload stuff and delete files, but with no access to anything else.

Here’s how I did it.

First, you should create a new user group on your server. This can be done with the following command:

sudo addgroup --system GROUPNAME

This will add a new user group called GROUPNAME (I called mine “ftpusers”). If this individual isn’t currently a user on your server, add them as a user as well:

sudo adduser --shell /bin/false USER

Replace “USER” with whatever name you’re using for this individual, for me it was “sharon.” You’ll need to create a password for your USER and fill in some additional information. Then add your USER to your GROUPNAME with the following command:

sudo usermod -a -G GROUPNAME USER

Or my command:

sudo usermod -a -G ftpusers sharon
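
You can confirm the user actually ended up in the group with the id command (using my names here):

id sharon

The new group should show up in the list of groups for that user.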

So, you have now created a new group and a new user and added the new user to the new group. Of course, the next step is to restrict what your new USER can do. In particular, we want the user to have access to just a single directory. Here’s how that is done.

You can create a directory the user can use:

sudo mkdir -p /var/sftp/NEWFOLDER

This folder can be anywhere on your server. I put mine in a subfolder of her WordPress installation:

sudo mkdir -p /var/web/DOMAIN/public/wp-content/uploads/NEWFOLDER

Now, we need to tell the server to restrict USER to this NEWFOLDER when they login. First, let’s give ownership of that folder to the user with the chown command:

sudo chown USER:GROUPNAME /var/sftp/NEWFOLDER

We should also make sure the permissions for the new folder are what we want them to be – full access (read/write/execute) for the owner, and read/execute for the group and everyone else:

sudo chmod 755 /var/sftp/NEWFOLDER

If you navigate to that folder and check the settings, you should see that the owner is now USER and the group is GROUPNAME (you can check with “ls -l”). It’s not a bad idea to also check that the folder above it is owned by “root,” which prevents your new USER from making changes to that folder. One thing to watch out for: sshd requires the directory you later point ChrootDirectory at (see below), along with every directory above it, to be owned by root and not writable by the group or others; if the chroot target is owned by your USER, logins will fail with a “bad ownership or modes for chroot directory” error. If you run into that, chroot to a root-owned parent folder instead and give your USER a writable subfolder inside it.
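
A quick way to check both levels from the command line (the paths here follow the first example above):

ls -ld /var/sftp/NEWFOLDER
ls -ld /var/sftp

The first should list USER and GROUPNAME as owner and group; the second should be owned by root.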

So far, we have a new USER and GROUPNAME, and the user has a folder they can access. However, we still need to tell the server that the user gets SFTP access and then force them into just that one folder when they log in with SFTP.

To grant them SFTP access, you need to change the SSH settings:

sudo nano /etc/ssh/sshd_config

This will open the file “sshd_config” with a text editor (nano) so you can make changes. At the end of the file, you want to add the following text:

Match Group GROUPNAME
ForceCommand internal-sftp
PasswordAuthentication yes
ChrootDirectory /var/sftp/NEWFOLDER
PermitTunnel no
AllowAgentForwarding no
AllowTcpForwarding no
X11Forwarding no

This restricts users in the group GROUPNAME to SFTP only, confined (chrooted) to the folder you created for them, with tunneling and forwarding switched off.

Before you close the nano session with “sshd_config”, you may have to change one other setting. Look for a line that says:

Subsystem sftp /usr/lib/openssh/sftp-server

Mine was not commented out, so that setting was active. However, given the settings we just added to the file, we need to change that. Comment out that line:

#Subsystem sftp /usr/lib/openssh/sftp-server

Below that line, add the following line:

Subsystem sftp internal-sftp

The original line points at the standalone sftp-server binary, while “internal-sftp” runs the SFTP server inside the sshd process itself, which is what you want with a chrooted setup (the chrooted user can’t reach the standalone binary anyway). (Since the “ForceCommand internal-sftp” line in the Match block already forces the internal server for these users, changing the global Subsystem line may not be strictly necessary – I haven’t tried leaving it alone, but changing it doesn’t hurt.)

Anyway, when you’re done editing the “sshd_config” file, save it and exit from nano.
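
One more thing: sshd only reads its configuration at startup, so the changes won’t take effect until you reload or restart the SSH service. On Ubuntu (assuming systemd) that looks like this:

sudo systemctl restart ssh

Existing SSH sessions stay open when you do this, but it’s worth keeping one open until you’ve confirmed you can still log in.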

Finally, to make sure that the new USER lands in the specified folder when they log in, you have to make one more change. This sets the home directory for the user so they are dropped into that directory when they log in. Here’s the command.

sudo usermod -d /var/sftp/NEWFOLDER USER

This makes the folder you created (NEWFOLDER) the home directory for the USER so, when they log in using SFTP, they will be forced directly into that folder.
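
At this point it’s worth testing the account from another machine (or another terminal), substituting your user and your server’s address:

sftp USER@your-server.example.com

If everything is set up correctly, the login should succeed, you should land directly in NEWFOLDER, and you won’t be able to browse anything outside that folder.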

There you have it. You have a new user in a group with restricted SFTP access and the user will be forced directly into the folder you created where they can upload, modify, and delete content. They will not have access to anything else on the server, so the rest of your content will be safe.

Acknowledgments: I figured all of the above out with help from several other guides.

Restarting KDE’s Plasma Shell via Konsole (command line)

As much as I love KDE as my desktop environment (on top of Ubuntu, so Kubuntu), it does occasionally happen that the Plasma Shell freezes up (usually when I’ve been running my computer for quite a while, then boot up a game and begin to push the graphics a bit). Often, I just shut down when I’m done and that resets everything. However, there is a quicker way to shut down and restart the Plasma Shell that will bring everything back up.

In KDE Plasma Shell 5.10+, the command to kill the Plasma Shell is:

kquitapp5 plasmashell

In KDE Plasma Shell 5.10+, the command to restart the Plasma Shell is:

kstart5 plasmashell

In earlier versions of KDE 5, the commands were:

killall plasmashell
kstart plasmashell
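
If the desktop is frozen but you can still get to a terminal (Konsole or a TTY), you can chain the two 5.10+ commands into a single line (assuming the quit completes before the restart; if not, just run them separately):

kquitapp5 plasmashell && kstart5 plasmashell

The first command asks the shell to quit cleanly; the second starts a fresh instance once it has exited.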

Linux – Music File Naming Conventions

I have been collecting music files for decades, like a lot of people. My total collection includes over 10,000 tracks. When I built my own NAS recently, I decided it was time to do a careful audit of my music as I hadn’t organized my music in a very long time. In the process, I realized that I had to decide on a clear naming convention system for my music.

Music File and Folder Naming

There are some tricky components to this. Assuming you want all of your music in one main folder (not everyone does), the next question is what level of folder comes next. I’ve typically used the Artist as the next folder, so the folder structure looks like this:

MUSIC -> ARTIST
Written slightly differently:
[MUSIC]/[ARTIST]

I typically use software to manage the renaming and also add the information to the tags at the same time to make this easier. In the process, I realized that the structure above should really be:

MUSIC -> ALBUM ARTIST
Written slightly differently:
[MUSIC]/[ALBUM ARTIST]

The reason it should be “ALBUM ARTIST” rather than just “ARTIST” is albums that have multiple artists. Albums with multiple artists can get really messy if they aren’t organized by the ALBUM ARTIST, which I usually tag as “VARIOUS ARTISTS”; that way, the multi-artist albums all end up in a single folder called “VARIOUS ARTISTS.”

However, using “ALBUM ARTIST” can also be advantageous when the album is primarily by one person but they have a guest artist on one or two tracks. Using the primary artist as the “ALBUM ARTIST” solves the problem of the separate tracks being organized in a different folder.

The next level below ALBUM ARTIST is the ALBUM TITLE, like this:

MUSIC -> ALBUM ARTIST -> ALBUM TITLE
Written slightly differently:
[MUSIC]/[ALBUM ARTIST]/[ALBUM TITLE]

This structure generally works for most artists and albums to keep things relatively organized. The next issue is the naming convention for the song files themselves. I don’t think this part matters as much, but I have seen different approaches. Some seem to think that including the name of the artist and the album in the name of the file along with the track number and title of the track is necessary, like this:

[TRACK #] – [ARTIST] – [ALBUM TITLE] – [TRACK TITLE].[file extension]

Alternatively, since the file itself is organized within folders that indicate the artist and album, others use the following convention:

[TRACK #] – [TRACK TITLE].[file extension]

This results in shorter names for the files themselves. I’m not sure that there is a better or worse approach. However, including the artist and album title does run a higher risk of running into a character limit for a track (on most operating systems, files can only have a 255 character name).

In light of the file name length concern, I have opted to go with the second, shorter naming convention. Here’s an actual example from a recent album purchase:

[Music]/[The Head and the Heart]/[Living Mirage]/01 – See You Through My Eyes.flac

Finally, there is also the issue of multi-disc albums. I used to put the different discs into separate folders. Now, I add the disc number at the beginning of the file name, like so:

[DISC#]-[TRACK #] – [TRACK TITLE].[file extension]
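
As a concrete illustration of the convention (not any particular tagger’s syntax), here is a small shell sketch that builds a destination path from tag values and moves a file into place. The tag values come from the example above; the disc number and the source filename are just placeholders to show the multi-disc form:

music_root="$HOME/Music"
album_artist="The Head and the Heart"
album="Living Mirage"
disc=1
track=1
title="See You Through My Eyes"

# [MUSIC]/[ALBUM ARTIST]/[ALBUM TITLE]/[DISC#]-[TRACK #] - [TRACK TITLE].flac
dest="$music_root/$album_artist/$album/$(printf '%d-%02d' "$disc" "$track") - $title.flac"
mkdir -p "$(dirname "$dest")"
mv "some-downloaded-file.flac" "$dest"

The printf call zero-pads the track number so files sort correctly in a file manager.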

Organizing and Naming Software

The software I have used for quite some time for playing my music, Clementine, has had some issues with recent releases, leading me to switch to a forked version called Strawberry. However, as I’ve worked with both of these, I’ve come to realize that their ability to manage ID3 tags and look up metadata and tag information is problematic. As a result, I’ve switched to using Picard, which relies on MusicBrainz directly (the other two have implementations of this that are buggy in their latest versions).

Picard also does the best job of actually filling out all of the information in the ID3 container, pulling in as much information as possible. As a result, I’ve switched to using Picard for tagging my music.

As for organizing the files, I sometimes have Clementine/Strawberry organize them when I import them, but once I complete the tag information in Picard, I often have Picard redo the organization just to make sure everything is filed correctly.

Linux – Fixing “apt-get” failed installation

Occasionally, when I try to install an update or install software via the console using apt-get, something goes wrong. To date, I have never had the failure of a package to install ruin my system. However, it isn’t uncommon after such an incident to get an error the next time I run apt-get. The easiest way to fix apt-get is to run the following command:

sudo dpkg --configure -a

A related (but different) command in Debian-based distributions is:

sudo dpkg-reconfigure --all

That command re-runs the configuration prompts for packages that are already installed, whereas “dpkg --configure -a” finishes configuring packages that were unpacked but never configured – which is usually what a failed apt-get run leaves behind. Almost every time I have run into an apt-get error, “dpkg --configure -a” has solved the problem. It’s a useful command to have on hand if you’re running a Debian-based Linux distribution (like Kubuntu, my distribution of choice).
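
If that alone doesn’t clear the error, a couple of other standard commands are worth trying (these are generic apt/dpkg tools, not anything specific to my setup):

sudo apt-get install -f
sudo apt-get clean
sudo apt-get update

The first attempts to fix broken or missing dependencies, the second clears out the local package cache, and the third refreshes the package lists.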

Building My Own NAS (with Plex, Crashplan, NFS file sharing, bitTorrent, etc.)

For about the last seven years (since 2012), I’ve been using a Synology NAS (Network Attached Storage) device in my house as a central repository for files, photos, music, and movies. It has generally worked well. However, there have been a number of serious problems with the Synology NAS I bought (DS413J). First, the amount of RAM is limited and cannot be upgraded. Second, the CPU (Marvell Kirkwood, an ARM processor) is in the same situation. While the box is small and draws very little power, the inability to upgrade the hardware (other than hard drives) means that I’m basically stuck with what was considered cutting edge back in 2012 when I bought it. In practical terms, these limitations have meant that I have not been able to run Crashplan on my Synology box since the first year I owned it, because I have more than a terabyte of files I am backing up and the 512 MB of memory cannot handle that (I created a workaround where I run Crashplan on a different computer but back up the files on the Synology box over the network). It also means that I haven’t been able to run the latest version of Plex for the last 4 years because Plex stopped supporting the CPU in my Synology box. This eventually came to a head about a month ago when the latest Plex client on my Roku stopped working with the very outdated version of Plex server on my Synology NAS. As a result, I was no longer able to serve videos to my Roku, which was one of the primary reasons I even have a NAS in the house.

The convenience of having a pre-built NAS with a web interface has been nice. There is a lot to like about Synology products. However, you are locked into their hardware and their software and are restricted by their timelines for upgrading to the latest software. Additionally, my Synology NAS, which is a 4-bay device, has a problem with one of the bays that actually ended up destroying 2 hard drives, so I only have 3 usable hard drive bays. And, Synology devices are crazy expensive. Given my use scenario, paying as much as they now charge for a high-end NAS that might temporarily meet my needs doesn’t make a lot of sense.

So, I finally decided that it’s time to go back to building my own NAS (I had one for a short while before). As I started researching what I wanted for my Do It Yourself (DIY) NAS, I basically went down a rabbit hole of options: Which Operating System (OS)? What hard drives? What file system for drives? Do I use a RAID? What software do I need to install? What CPU and motherboard? How much memory? In this rather lengthy post, I will detail what I ultimately decided to do and why.

DIY NAS: OS (Operating System) Options

As a long-time Linux user, I was never going to consider anything but a Unix-based system. That means I never even considered Windows as an option. Those who use Windows could certainly consider it, but I have no interest in using Windows for my NAS. That, however, didn’t narrow my options that much. Among Unix-based systems, all of the following are real contenders: Unraid, FreeNAS, Amahi, Open Media Vault, Openfiler, NAS4Free, or just the Linux distribution of my choice (currently, Kubuntu 18.10). I spent quite a bit of time considering these OSes, all but the last being designed specifically as OSes for NAS boxes. The more I thought about these pre-packaged options, the more I realized that they fall in between the proprietary OS of my Synology box and the OS I use on all my other computers (Kubuntu), and they are all crippled (to varying degrees) by the same problem I had with Synology – I am beholden to the companies/people who maintain this software to release new packages for the software I need to run: Crashplan and Plex. This is a particular concern with Amahi, Open Media Vault, Openfiler, and NAS4Free, since they all require the installation of software through their “packages.” That would not be the case if I just went with a standard Linux distribution that gets regular updates (e.g., Kubuntu). I can install pretty much whatever I want on such an OS, which means the NAS will be whatever I want it to be.

Unraid (which isn’t free) will actually allow you to run other software, or even another OS, on top of its own, so that might not be a problem. But I’m also not convinced that I need Unraid for drive management (as I’ll detail below when I discuss how I organized my hard drives). FreeNAS is probably the most appealing OS for this, as it really is just an OS and you can install what you want on top of it. My biggest concern here is that FreeNAS is BSD-based, which really shouldn’t be a concern, but I have limited experience with BSD (tons with Linux), and I wasn’t certain what FreeNAS would give me over a standard distribution.

There very well may be advantages to one of these NAS-specific OSes that I am missing. But, after having suffered under the proprietary lock-in and inability to upgrade my software under Synology, I realized that I was very wary of getting locked into a pre-packaged OS that would mean I couldn’t install what I want to install. Ultimately, I decided to just install Kubuntu 18.04 on what would become my new, DIY NAS box. Some might note that I should have just gone with Ubuntu Server, as it would reduce CPU and memory usage since I wouldn’t have a graphical front end (KDE). I considered it. But that would also mean that I would have to manage the entire device through the command line or try to find some other web-based administration tool (which wouldn’t do everything I need anyway) to monitor the device through a browser. Since I’m most comfortable with a graphical desktop environment, why not just go with what I know and what works for me?

Final choice on OS: Kubuntu 18.04, which is a long-term support release (important for future upgrades to the OS).

DIY NAS: Hardware Options (CPU, motherboard, RAM, etc.)

When Plex stopped working with our Roku, my wife quickly noticed. We all use Plex on a regular basis to watch our movie and TV collection. When I told her what the problem was and said that I thought I was going to need to replace our NAS, she asked me how much it was going to cost. Being honest, I told her it could be fairly expensive, depending on what I decided to build. Luckily, we are in a position financially where I could spend upwards of $1,000 on a new NAS if I needed to, and she said that would be fine.

I spent quite a bit of time considering hardware. The biggest question was really whether I wanted to go with dual Xeon processors to build out a real server or go with what I am most familiar with, AMD CPUs like Ryzen (I build all my own desktop computers). The dual Xeon approach made a lot of sense as those chips can manage a lot of concurrent threads. The latest processors from AMD (and Intel, but I’m an AMD guy), like Threadripper, also handle a lot of simultaneous threads. However, these are all pretty expensive processors, and even somewhat older Xeons can be a little pricey if I’m trying to get something that was made in the last few years. Additionally, the motherboards that go with these can be almost as expensive. I debated these options for quite a while.

Then, I had an idea. I had an older computer lying around from my latest upgrade (I usually upgrade my desktop and give my immediate past desktop to my wife; during the last one, I ended up upgrading the entire device, case included, which left a third desktop computer sitting unused). I wanted to test out Plex and Crashplan on that computer just to make sure everything worked. It’s an older AMD Athlon II X4 620 with 24 GB of RAM and room for 8 SATA devices on the motherboard. Once I got everything up and running on this test system, I realized that, given my use scenario, I actually didn’t need the latest Threadripper CPU or even dual Xeon processors. I don’t need concurrent ripping of 4k or 8k video files. I don’t even have any 4k video files (my main TV has 4k capabilities, but I rarely use them). Most of my video is 1080p, which looks great. I tested the system out streaming a video to my TV and backing up files and it worked great. So, I decided to re-purpose this older computer and make it my new NAS.

Once I got everything set up (see below for details), I wanted to see just how much my NAS could handle, especially since the old Synology NAS struggled with streaming just a single HD stream. To run my test, I started streaming audio from three Amazon Echo devices in three different rooms in the house via Plex, started streaming a 1080p video file to my phone on the home network, and started streaming a 1080p video file to my main TV. This screenshot shows the Plex server with 5 simultaneous streams:

Simultaneously streaming to five devices on the home network.

The big question was whether my 4 cores could handle this. Here’s how things looked:

All four CPUs were never maxed out simultaneously.

Conclusion: My older AMD Athlon X4 620 was more than up to the task. With two simultaneous 1080p video streams and three mp3 streams, the server was working, but it was far from maxing out. Since we have just one TV in the house, the odds of us ever needing to simultaneously stream more than two videos are almost zero, and that’s true even considering that I am allowing some of my siblings access to my Plex server.

What does this mean for the average person building a DIY NAS? Unless you have, let’s say, 10 4k TVs in your house and you want to simultaneously stream 4k videos to all of them, you probably don’t need the latest and greatest CPU in your NAS. In all likelihood, I’ll let the system I have run for a year or two (unless there is a problem), and then upgrade my box, my power supply, my motherboard, and my CPU (which will also require upgrading my RAM). By that time, Ryzen Threadrippers will have dropped in price enough that it won’t cost me $2,000 to build a beast of a NAS that can serve multiple 4k streams at the same time. I’m not sure I’ll ever need that much bandwidth or power given my use scenario, but I can imagine needing a bit more power in the future.

Where you probably do not want to skimp is on RAM. As I have been transferring my files and media to my new NAS from the old one, I have seen my RAM usage go up to as high as about 16 gb at different times. That has been the result of large bandwidth file transfers, Plex indexing the media files (video and audio), and simultaneous streaming of files. In short, you can go with an older, not crazy expensive, multi-core CPU for your NAS and be fine, but make sure you’ve got at least 16 gb of RAM, maybe more.

DIY NAS: Hard Drive Options

Where I got the most bogged down in my research was in deciding on how many hard drives to use, what file system to use, and whether or not I should use a RAID for the drives. In my Synology box, I had just two 4 terabyte hard drives in a RAID 1 (or mirror) arrangement. While I am getting close to filling up the 4 terabytes (I have about 3 terabytes of photos, movies, files, music), I was more concerned with not losing data. With a RAID 1, all of my data was backed up between the two hard drives. Thus, if I lost a drive, I would still have a copy of all of the data.

Additionally, I pay for unlimited storage through Crashplan, so everything on my NAS is backed up off-site. This way, I have a local copy of all of my files (the mirrored drive) and an off-site copy of all my files (in case there is a fire or catastrophic failure). Since I do back up everything off-site, I could theoretically just go for speed with, for instance, a RAID 0 that stripes all my data between drives. But a single drive failure would then mean losing all my local data and having to restore from the off-site backup (which would take quite some time given the amount of data I have).

As I considered the options for hard drives, my initial thought was to use the same system but increase the size of the hard drives. There are now hard drives with capacities in excess of 10 tb. They are expensive, but two 10 tb hard drives in a RAID 1 would basically replicate what I was doing but give me plenty of room to grow my photo, video, and file collection. I was just about to pull the trigger on this plan until I realized that another option might make more sense.

RAID 6 requires at least 4 hard drives, but offers a number of advantages over RAID 1 (e.g., similar speed but better redundancy, since it can survive two simultaneous drive failures, as well as the possibility of hot swapping hard drives). And, if I went with RAID 6, I could actually buy cheaper 4 or 6 tb hard drives instead of 10 tb hard drives and end up with just as much or more space as I would get with two 10 tb hard drives for less money. From the many articles I read on this, it seems that lots of corporations use RAID 6 given the redundancy and speed advantages that result. I ultimately decided to go with a RAID 6 made up of four 4 tb hard drives. Effectively, that meant I would be doubling my storage (8 tb usable) while improving my redundancy.

I still needed to figure out what file system to use. Having worked with Linux for over a decade, I typically use EXT4 on all of my computers. It doesn’t have any file size limitations that matter for my use and does everything I need. However, I had been hearing about ZFS for a while (as well as BTRFS), and what I had heard made me think that ZFS was really what I should be running on my NAS. While it may slow down my NAS a little bit, the benefits of preventing bit rot and the redundancy it includes meant the impact on performance would likely be worth it. However, ZFS doesn’t come as a standard file system option in Linux distributions. I have used various disk partitioning programs enough to know which file systems ship with Linux distributions, and ZFS is not one of them. I was a little worried about what might be entailed in installing ZFS and setting it up in a RAID 6-style arrangement (in ZFS terms, a RAIDZ2). Once I found a handy guide, I realized that it wasn’t that difficult and was something that I could easily do. Before I headed down this path, I tested the guide with a spare drive I had lying around, and it really was simple to set up ZFS as the file system on a drive. That test convinced me that ZFS was the route to go for my drives. Thus, my four 4 tb drives in a RAID 6 are actually a RAIDZ2 pool with a usable capacity of 8 tb, and the pool can be expanded later should I want to do so. (Also useful for ZFS is scheduling regular scrubs on the drives.)
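
For anyone curious what that looks like in practice, here is a rough sketch of the commands involved. The pool name matches the one I mention in the update below; the device paths are placeholders, so substitute your drives’ /dev/disk/by-id/ names:

sudo apt install zfsutils-linux
sudo zpool create ZFSNAS raidz2 /dev/disk/by-id/DRIVE1 /dev/disk/by-id/DRIVE2 /dev/disk/by-id/DRIVE3 /dev/disk/by-id/DRIVE4
sudo zpool status ZFSNAS
sudo zpool scrub ZFSNAS

The first command installs ZFS on Ubuntu, the second creates a RAIDZ2 pool from the four drives (mounted at /ZFSNAS by default), and the last two check the pool’s health and kick off a manual scrub (which you’ll want to schedule regularly).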

Update: 2019-08-18 – I restarted my file server after installing system updates and my ZFS pool was missing. That was terrifying. I finally found a solution that was a little nerve-wracking but worked. Somehow, the mount point where my ZFS pool was supposed to mount either got corrupted or had something in it (for me, it was /ZFSNAS). I renamed that folder:

sudo mv /ZFSNAS /ZFSNAS-temp

I was then able to import my ZFS pool with the following command:

sudo zpool import ZFSNAS

Apparently, this is the result of an upgrade to the ZFS packages, and it happened again right after I rebooted. There is a systemd target in newer ZFS releases that has to be enabled for pools to be imported (and mounted) automatically after a reboot:

sudo systemctl enable zfs-import.target

Now, when I reboot, my ZFS pool comes right back up.
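
If you want to double-check after a reboot that the pool came back and is healthy, a quick status check does it:

sudo zpool status ZFSNAS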

DIY NAS: What software to install?

I have been running Plex for about 6 years to manage my media collection. With my Synology box, the limited RAM and not very good CPU meant that it actually had a pretty hard time managing my media collection. It would transmit videos across my network to my ROKU device, but only if they were in a specific format (mp4, which was probably a limitation of the Synology box as that is not a problem with Plex or ROKU). Setting up a video slide show was basically impossible on my ROKU from my Synology box as the CPU and RAM just couldn’t cut it. While I could play my music across the network, the Synology box would also not play nice with my music in Plex. As a result, I used Plex just to watch videos but not for anything else, even though it is a great way to manage all sorts of media – video, photos, and music. Thus, one of the requirements of my new DIY NAS would be that it has to take advantage of all the features that Plex has to offer (I have a lifetime Plex Pass). So, Plex was the core requirement for my NAS.

But I also wanted my NAS to run Crashplan. As noted above, my Synology box didn’t have enough RAM to run Crashplan, which meant I had to run it on my desktop to back up the files on my NAS. It was a hack to get around the limitations of the Synology NAS (FYI, you need about a gigabyte of RAM for every terabyte of files you want to back up to Crashplan). Plex and Crashplan were the minimum software requirements: my DIY NAS had to be able to run both of those and run them well.

I do occasionally download stuff via BitTorrent (mostly Linux distributions). So, having a BitTorrent client installed would be nice. Kubuntu comes with one, KTorrent, which was fine.

The last piece of software I really needed was a way to control my NAS remotely. The goal was to basically have it run headless, stick it in a corner of my office, and just let it do its thing. I can control the Plex server through the Plex website, but to do everything else I would need VNC-style remote desktop software. I was actually surprised at how difficult it was to find software that would let me control my NAS this way. I tried Remmina first and had no luck. The interface was clunky and not intuitive, and I was only able to successfully connect about 1 time out of 10. I finally went with TeamViewer. I needed something that would let me reliably remote into my NAS and control whatever I needed from anywhere, and TeamViewer is free for personal use and works reliably. Also, in order to make this work reasonably well on a headless box, you need to install a dummy X11 video driver and create an X configuration file for it (there are guides online detailing this). Otherwise, the remote desktop software will be slow as molasses.

I also set up SSH so I could access the NAS remotely in case the GUI remote desktop programs were having issues, following another online guide.
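
The short version, if you just want the basics on Ubuntu/Kubuntu (assuming systemd):

sudo apt install openssh-server
sudo systemctl enable --now ssh

That installs the OpenSSH server and makes sure it starts at boot.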

DIY NAS: NFS Fileserver

A guide I found online made this very easy.
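
For reference, the basic steps look roughly like this (the exported path and the subnet are placeholders for my setup – adjust them to yours):

sudo apt install nfs-kernel-server
echo '/ZFSNAS 192.168.1.0/24(rw,sync,no_subtree_check)' | sudo tee -a /etc/exports
sudo exportfs -ra
sudo systemctl restart nfs-kernel-server

That installs the NFS server, shares the ZFS pool’s mount point with machines on the local subnet, reloads the export table, and restarts the service. Clients can then mount the share with something like “sudo mount -t nfs NAS-IP:/ZFSNAS /mnt”.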

DIY NAS: Specifications

Here’s the final rundown of what I put together:
OS: Kubuntu 18.04 (long term support)
Hard Drives: ZFS filesystem; four 4 terabyte drives in a RAIDZ2 for 8 TB of usable storage
CPU: AMD Athlon X4 620
Motherboard:
RAM: 24 GB
Software: Plex, Crashplan, NFS, TeamViewer, KTorrent

UPDATE – 2019-04-24

It’s been almost 4 months running this NAS box. Generally, it’s worked really well. I have, however, run into two problems.

The first one was pretty recent (about two weeks ago). I’m not sure exactly what happened, but I’m assuming some update to Kubuntu 18.04 over the last month led to the screen going black when I would VNC into the box. The server was still working, but I couldn’t get it to display anything via VNC until I restarted the box manually (SSH on the NAS box would have given me another option for that). I’m still not 100% sure what the problem was, but it was somehow related to KDE. I ended up installing Unity/GNOME as the desktop environment and the problem is gone. I solved that about a week ago, and the box has been running without a hitch since.

The only other issue I will note is that there have been two times when I have been streaming shows on my NAS box through ROKU where my box had to actually transcode a 1080p file. With the hardware I have inside it, the 4-core processor was not up to the task. It ended up stopping playback several times and buffering a lot. It can easily stream a 1080p file in almost any format (both Plex and Roku can handle almost every format) and can even manage to stream two of them simultaneously (tested on my TV and phone simultaneously), but the two movies it had issues with were in a weird MKV format that my Roku couldn’t handle. I ended up ripping them into a different MKV format and, voila, problem solved. The point being – my NAS box is a little underpowered. But, a year or two from now, the price of an AMD Threadripper will be half of what it is now. I’ll swap that in and will have all the power I need.

So, 4 months into this, it’s working extremely well. I’d give it a 99 out of 100.