Kubuntu – Audio CD Ripping

I mostly buy digital audio these days. My preferred source is bandcamp as they provide files in FLAC (Free Lossless Audio Codec). However, I ended up buying a CD recently (Last Night’s Fun by Scartaglen) as there wasn’t a digital download available and, in the process, I realized that there are lots of options for ripping the audio from a CD on Linux and quite the process to get the files ripped, tagged, properly named, and stored in my library. This is my attempt to summarize my process.

Format/Codec

First, you need to decide in what format you want the audio from the CD. As noted, I prefer FLAC these days. Given how relatively inexpensive storage is, I no longer need to scrimp on space for the most part. If space was an issue, ripping the files to mp3 format at, say, 192 kbps at a variable bit rate would probably be the optimum balance between decent quality and small size. But I prefer the best quality sound with no real regard for the size of the resulting files. It helps that I store all my music on a dedicated file server that runs Plex. That solves two problems: I have lots of space and Plex will transcode the files if I ever need that done (if, for example, I want to store the music on my phone and want it in a different format). So, my preferred file format is FLAC. (Another option is OGG, but I find not as many audio players work as well with OGG.)

There is another issue that I recently ran into: single audio files with cue sheets. Typically, people want their audio in individual files for each song. However, if you want to accurately represent an audio CD, the best approach to do this is to rip the audio as a single file with a corresponding cue sheet. The cue sheet keeps an exact record of the tracks from the CD. With the resulting two files, the audio CD can be recreated and burned back to a CD. I have no real intention of burning the audio back to a CD (I want everything digital so I can store it on my file server), but it’s good to know about this option. Typically, those who opt for this approach use one of two formats, .flac or .ape, for storing the audio and .cue for storing the timing of the tracks. The .ape format is a proprietary format, however, so it is definitely not my preferred approach.

As a quick illustration for how file format is related to size, I ripped my demonstration CD, Last Night’s Fun by Scartaglen into a single FLAC file and a single mp3 file (at 192 kbps using a variable bit rate) and put the resulting files into the same folder so you can see the size difference:

As you can see, the FLAC rip resulted in a file that was 222.9 MB compared to the mp3 file that is only 49.4 MB. The FLAC file is about 4.5 times the size of the mp3 file. A higher-quality mp3 rip at 320 kbps at a constant bit rate resulted in a 54.8 MB file. A pretty good estimate would be that the FLAC format is going to be somewhere between 3 to 5 times the size of an mp3 file. Thus, if space is an issue but you want good quality, ripping your music to the highest quality mp3 (320 kbps) is probably your best option. If space isn’t an issue and you care more about quality, FLAC is the way to go.

NOTE: I also ripped the disc to OGG and the file size was 38 MB.

Ripping Software

First, if you’re planning on ripping to FLAC on Linux, you’ll need to install FLAC. It is not installed in most distributions by default. This can be done easily from the terminal:

sudo apt-get install flac

Without FLAC installed, the software below won’t be able to rip to FLAC.

K3b

K3b is installed in Kubuntu 20.04 by default and is, IMO, a good interface for working with CDs and DVDs. When I inserted my demonstration CD into my drive, Kubuntu gave me the option of opening the disc in K3b. When I did, K3b automatically recognized the CD, grabbed the information from a CDDB, and immediately gave me options for ripping the CD:

When you click on “Start Ripping,” you get a new window:

In this new window, you have a bunch of options. You can change the format (Filetype). With the FLAC codec installed, the options listed are: WAVE, Ogg-Vorbis, MPEG1 Layer III (mp3), Mp3 (LAME), or Flac. You can obviously change the Target Folder as well. K3b also gives you the option of creating an m3u playlist and the option “Create single file” with “Write cue file,” which is where you could create the single file and cue file from the CD as noted above. There are also options for changing the naming structure and, under the Advanced tab, options for how many times you want to retry reading the CD tracks. K3b is pretty fully featured and works well for ripping audio CDs.

Clementine

My preferred music player in Linux is Clementine. I have used a number of music players over the years (e.g., Banshee, Rhythmbox, Amarok), but Clementine has the best combination of features while still being easy to use. Clementine is in the repositories and can easily be installed via synaptic or the terminal:

sudo apt-get install clementine

Clementine also has the ability to rip audio CDs. Once your CD is inserted, click on Tools -> Rip audio CD:

You’ll get this window, which is similar to the ripping window in K3b:

If the information is available in a CDDB, Clementine will pull that information in (as it did for my demonstration CD). You then have a bunch of options for the Audio format: FLAC, M4A, MP3, Ogg Flac, Ogg Opus, Ogg Speex, Ogg Vorbis, Wav, and Windows Media Audio. The settings for each of these can be adjusted in the “Options” box. One clear advantage of Clementine over K3b is that you can readily edit the titles of the tracks. Another advantage of Clementine over K3b is that you could import the files directly into your music library.

Ripping from a Cue Sheet

Another scenario I have run into on Linux is having a single file for the audio from a CD with a corresponding .cue sheet (the file is typically in the FLAC format, but I have also run into this in .ape format). I used to immediately turn to Flacon, a GUI that helped rip the single file into individual tracks. However, I have had mixed success with Flacon working lately (as of Kubuntu 20.04, I couldn’t get it to work). Never fear, of course, because Flacon is really just a GUI for tools that can be used in the terminal.

To split a single FLAC file with a corresponding .cue sheet into the individual tracks, you’ll need to install “shntool”:

sudo apt-get install shntool

(NOTE: It’s also a good idea, though not required, to install the suggested packages “cuetools,” “sox,” and “wavpack.”) Assuming you have already installed “flac” as described above, splitting a single FLAC file into the individual tracks is fairly straightforward. The easiest way is to navigate to the folder where you have the FLAC file (e.g., “audiofile.flac”) and the cue sheet (e.g., “audiofile.cue”). Then use the following command at the terminal:

shnsplit -f audiofile.cue -o flac audiofile.flac 

Breaking the command down, “shnsplit” calls the program “shnsplit,” which is part of the “shntool” package. The “-f” flag specifies the file to read the split points from – in this case, the cue sheet. The “-o” flag indicates that you are going to specify the output format, which here is “flac,” and the last argument is the single FLAC file that you want to split.
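If you also want the split tracks named after the entries in the cue sheet and tagged right away, shntool plus the suggested “cuetools” package can handle that too. Here’s a sketch (the “-t” naming format and the cuetag step are optional extras; cuetag ships with cuetools and is called cuetag.sh on some systems):

# split and name each track "NN - Title.flac" based on the cue sheet
shnsplit -f audiofile.cue -t "%n - %t" -o flac audiofile.flac
# copy the cue sheet metadata into the new files
# (adjust the glob so it only matches the split tracks, not the original file)
cuetag audiofile.cue [0-9]*.flac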

Here’s a screenshot of me splitting the single FLAC file from my demonstration CD into individual FLAC files:

If you happen to run into a single audio file in the .ape format, shntool probably won’t be able to read it, so the above command won’t work. However, a simple workaround is to convert the file to FLAC format using ffmpeg, which can read it. Here’s the command you could use from the terminal:

ffmpeg -i audiofile.ape audiofile.flac

That command will call ffmpeg (which you probably have installed) and convert the .ape file into a .flac file which can then be split using the command above (assuming you have a corresponding cue sheet).
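If you have a whole folder of .ape files to deal with, a simple shell loop along these lines (a sketch, assuming ffmpeg is installed) will convert them all in one go:

for f in *.ape; do
  ffmpeg -i "$f" "${f%.ape}.flac"
done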

Tagging Software

Let’s say I have successfully ripped my files into my desired format and now I want to tag them. There are a number of software packages that can do this, but my preferred software is Picard by MusicBrainz. Picard is open source, which is awesome, but it also interfaces with the MusicBrainz website and pulls in information that way, which means your files will get robust tagging information. If you pull in all the information from MusicBrainz, not only will the artist and album be tagged, but so too will lots of additional information, depending on how much was entered into the database by whoever added the album in the first place. (NOTE: Clementine also interfaces with MusicBrainz, but this broke in 3.1. Once it broke, I started using Picard directly and now I realize that it has a lot more features than Clementine’s implementation, so I just use Picard. However, you could try doing this in Clementine as well.)

Again, I’ll use my demonstration CD to illustrate how this is done. I ripped the tracks into individual FLAC files above. Those tracks are completely devoid of tags – whatever software I use to try to play them won’t know what the audio files are. The screenshot below illustrates this. I used MediaInfo (a GUI for pulling information from audio and video files in Linux) to pull available information from the file. It shows the format and length but provides no information about the artist or album, which it would if the file had tags.

We’ll use Picard to find the album and add all the tags. First, of course, install Picard:

sudo apt-get install picard

Open the program. Now, since my files have no tag information, I’m going to click on Add Files (you can also add a folder with subfolders, which then has Picard run through multiple audio files, a great feature if you are tagging multiple albums at the same time).

You’ll get a new window where you can select the audio files you want to add. Select them and then click “Open.”

If the files have existing tags, Picard will do its best to group the tracks together and may even associate the files with the corresponding albums. In my case, it simply puts the files into the “Unclustered” category:

Since I pulled them all in from the same folder, I can select the tracks and then click on the “Cluster” button in the toolbar and Picard will cluster the files.

Clustering is a first step toward finding the corresponding album information. Once they are grouped together, they will show up in the Cluster category:

Without any tag information, Picard is unlikely to find the corresponding album if you select the cluster and then click on “Lookup.” If there were some tag information in the files, that might be enough for Picard to find the corresponding albums, so you could just select the cluster and then click on “Lookup.” In this case, that won’t work. So, I’m going to use a different approach. If you right-click on the cluster, you can select “Search for similar albums.”

This gives you a window where you can enter search terms to try to find the corresponding album in the MusicBrainz database. Based on the limited information it has, it will try to find a corresponding album automatically. But it likely won’t find it because there are no tags, so there is virtually no information available. Generally, I have found that I have better luck finding albums if I use the album title first followed by the artist, like this: “Last Night’s Fun Scartaglen,” then hit Enter:

Once you find the correct album, select it and then click on “Load into Picard” at the bottom of the window.

Once you do that, the album will move to the right side of the screen. If all of the tracks are included and Picard is able to align them with the corresponding information it has from the database, the CD icon will turn gold. If there is a little red dot on the CD, that means Picard has tag information that can be saved to the individual tracks.

Click on the album and then click the “Save” button in the toolbar and the tag information will be saved to the files.

You can track the progress as tracks will turn green once the information has been successfully added to the tags.

You can also set up Picard to modify the file names when it saves information to the files. Click on “Options” then click the checkmark next to “Rename Files.”
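If you do turn on renaming, Picard builds the new file names from a naming script under Options -> File Naming. Something like the following is typical – this is only a sketch using Picard’s scripting variables, and you can arrange it however you like:

%albumartist%/%album%/$num(%tracknumber%,2) %title%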

I typically let Clementine rename my files when I import them into my Music library, so I don’t worry about this with Picard, but it is a nice option. Finally, here’s that same MediaInfo box with the tagged file showing that information about the artist and track is now included in the file:


LibreOffice – How To Change Icons to a Darker Theme

I prefer darker themes for my desktop environment (Kubuntu 20.04) and browser (Brave). For the most part, this isn’t a problem, but it does cause an issue with some applications, including LibreOffice (6.4.4.2).

One of the first things I do when I install Kubuntu is switch my desktop environment from the default theme (System Settings -> Global Theme), Breeze, which is a lighter theme, to Breeze Dark. You can see the differences in the screenshots below:

This is the Breeze theme that is the default in Kubuntu 20.04
This is the Breeze Dark theme that I typically use in Kubuntu.

The problem is with the icon set in LibreOffice. With the default Breeze theme, the icons are very visible and work great:

These are the default icons in LibreOffice 6.4.4.2 in Kubuntu 20.04 with the default Breeze theme.

The problem comes when I switch the theme to Breeze Dark. Here is how the default Breeze icons look in LibreOffice when I switch the theme:

The default icon set, Breeze, in LibreOffice when the Kubuntu Global Theme is switched to Breeze Dark.

Perhaps it’s just my aging eyes, but those icons are very difficult for me to see. The solution is quite simple, though I always have a hard time remembering where to find it (thus this tutorial). All you need to do is switch the icon set in LibreOffice. There are several icon sets for dark themes that come pre-packaged with the standard version of LibreOffice that ships with Kubuntu and is in the repositories. It’s just a matter of knowing where to look.

In LibreOffice, go to Tools -> Options:

You’ll get this window. You want the third option down under “LibreOffice”, “View”:

Right at the top of this window you can see “Icon style.” That’s the setting you want to change. If you click on the drop down arrow, you’ll see six or so options. Two are specifically for dark themes, Breeze (SVG + dark) and Breeze (dark). Either of those will work:

I typically choose Breeze (SVG + dark). Select the dark theme you want, then click on OK and you’ll get a new icon set in LibreOffice that works much better for dark themes:

These icons are much more visible to my aging eyes.

Et voila! I can now see the icons in the LibreOffice toolbars.


Linux/Kubuntu – Disable Network Printer Auto Discovery

I don’t know when Kubuntu started automatically discovering printers on networks and then adding them to my list of printers, but it is a problematic feature in certain environments – like universities (where I work).

I set up my home printer on my laptop easily enough. But, whenever I open my laptop and connect to my work network, this feature searches for printers on the network and then adds them to my list of printers. I now have hundreds of printers that show up in my printers dialogue:

I didn’t manually add any of those printers. They were added automatically and are causing problems. First, it’s a pain in the ass to find the printer I want. Second, when I shutdown my computer, the OS has to run through all of those printers and make sure they are disconnected, which makes the OS hang for a couple of minutes every time I want to close down.

This is obviously a great idea in principle, but problematic in this environment.

So, how do you turn this off? I found a solution. In a terminal, edit the following file:

sudo nano /etc/cups/cups-browsed.conf

In that file, you should just have to uncomment the following line (remove the hash ‘#’):

BrowseProtocols none

So, from this:

To this:
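In plain text, the change is just that one line going from commented out to active, so the relevant part of /etc/cups/cups-browsed.conf ends up looking like this (a sketch of the single line that matters; the rest of the file stays as it is):

BrowseProtocols none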

Afterward, try running the commands:

sudo service cups-browsed restart
sudo service cups restart

After making this change, my computer no longer automatically adds shared printers on my network. Hooray!

Unfortunately, making this edit did not remove all the shared printers it had already installed. I still had to remove them all manually, which was annoying. But at least they won’t be reinstalled automatically.


Restarting KDE’s Plasma Shell via Konsole (command line)

As much as I love KDE as my desktop environment (on top of Ubuntu, so Kubuntu), it does occasionally happen that the Plasma Shell freezes up (usually when I’ve been running my computer for quite a while, then boot up a game and begin to push the graphics a bit). Often, I just shut down when I’m done and that resets everything. However, there is a quick way to shut down and restart the Plasma Shell that will bring everything back up.

In KDE Plasma Shell 5.10+, the command to kill the Plasma Shell is:

kquitapp5 plasmashell

In KDE Plasma Shell 5.10+, the command to restart the Plasma Shell is:

kstart5 plasmashell

In earlier versions of KDE 5, the commands were:

killall plasmashell
kstart plasmashell
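If you do this often, you can wrap the two commands into a small alias; a sketch for your ~/.bashrc on Plasma 5.10+ might look like this (the alias name is just my own choice):

alias plasma-restart='kquitapp5 plasmashell && kstart5 plasmashell'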


Building My Own NAS (with Plex, Crashplan, NFS file sharing, bitTorrent, etc.)

NOTE: As of 2020-06-22, I have a new NAS build. You can read about it here.

For about the last seven years (since 2012), I’ve been using a Synology NAS (Network Attached Storage) device in my house as a central repository for files, photos, music, and movies. It has generally worked well. However, there have been a number of serious problems with the Synology NAS I bought (DS413J). First, the amount of RAM is limited and cannot be upgraded. Second, the CPU (MARVELL Kirkwood, Arm processor) is in the same situation. While the box is small and draws very little power, the inability to upgrade the hardware (other than hard drives), means that I’m basically stuck with what was considered cutting edge back in 2012 when I bought it. In practical terms, these limitations have meant that I have not been able to run Crashplan on my Synology box since the first year I bought it because I have more than a terabyte of files I am backing up and the 512 mb of memory cannot handle that (I created a workaround where I run Crashplan on a different computer, but am backing up the files on the Synology box over the network). It also means that I haven’t been able to run the latest version of Plex for the last 4 years because Plex stopped supporting the CPU in my Synology box. This eventually came to a head about a month ago when the latest Plex client on my Roku stopped working with the very outdated version of Plex server on my Synology NAS device. As a result, I was no longer able to serve videos to my Roku, which was one of the primary reasons why I even have a NAS in the house.

The convenience of having a pre-built NAS with a web interface has been nice. There is a lot to like about Synology products. However, you are locked into their hardware and their software and are restricted by their timelines for upgrading to the latest software. Additionally, my Synology NAS, which is a 4-bay device, has a problem with one of the bays that actually ended up destroying 2 hard drives, so I only have 3 usable hard drive bays. And, Synology devices are crazy expensive. Given my use scenario, paying as much as they now charge for a high-end NAS that might temporarily meet my needs doesn’t make a lot of sense.

So, I finally decided that it’s time to go back to building my own NAS (I had one for a short while before). As I started researching what I wanted for my Do It Yourself (DIY) NAS, I basically went down a rabbit hole of options: Which Operating System (OS)? What hard drives? What file system for the hard drives? Do I use a RAID? What software do I need to install? What CPU and motherboard? How much memory? In this rather lengthy post, I will detail what I ultimately decided to do and why.

DIY NAS: OS (Operating System) Options

As a long-time Linux user, I was never going to consider anything but a Unix-based system. That means I never even considered Windows as an option. Those who use Windows could certainly consider it, but I have no interest in using Windows for my NAS. That, however, didn’t narrow my options that much. With Unix-based systems, all of the following are real contenders: Unraid, FreeNAS, Amahi, Open Media Vault, Rockstor, Openfiler, NAS4Free, or just the Linux distribution of my choice (currently, Kubuntu 18.10). I spent quite a bit of time considering these OSes, all but the last being designed specifically as OSes for NAS boxes. The more I thought about these pre-packaged options, the more I realized that they fall in between the proprietary OS of my Synology box and the OS I use on all my other computers (Kubuntu), and they are all crippled (to varying degrees) by the same problem I had with Synology – I am beholden to the companies/people who maintain this software to release new packages for the software I need to run: Crashplan and Plex. This is a particular concern with Amahi, Rockstor, Openfiler, and NAS4Free, since they all require the installation of software through their “packages.” That would not be the case if I just went with a standard Linux distribution that gets regular updates (e.g., Kubuntu). I can install pretty much whatever I want on such an OS, which means the NAS will be whatever I want it to be.

Unraid (which isn’t free) does let you install software on top of its OS, so that might not be a problem. But I’m also not convinced that I need Unraid for drive management (as I’ll detail below when I discuss how I organized my hard drives). FreeNAS is probably the most appealing OS for this as it really is just an OS, and you can install what you want on top of it. My biggest concern here is that FreeNAS is BSD-based, which really shouldn’t be a concern, but I have limited experience with BSD (tons with Linux), and I wasn’t certain what FreeNAS would give me over a standard distribution.

There very well may be advantages to one of these specific OSes for NAS devices that I am missing. But, after having suffered under the proprietary lock-in and inability to upgrade my software under Synology, I realized that I was very wary of getting locked into a pre-packaged OS that would mean I couldn’t install what I want to install. Ultimately, I decided to just install Kubuntu 18.04 on what would become my new DIY NAS box. Some might note that I should have just gone with Ubuntu Server, as it reduces the amount of CPU power and memory needed since I wouldn’t have a graphical front end (KDE). I considered it. But that would also mean that I would have to manage the entire device through the command line or try to find some other software (e.g., webmin, which won’t do everything I need) that would allow me to monitor my device via a web browser. Since I’m most comfortable with a graphical desktop environment, why not just go with what I know and what works for me?

Final choice on OS: Kubuntu 18.04, which is a long-term support release (important for future upgrades to the OS).

DIY NAS: Hardware Options (CPU, motherboard, RAM, etc.)

When Plex stopped working with our Roku, my wife quickly noticed. We all use Plex on a regular basis to watch our movie and TV collection. When I told her what the problem was and said that I thought I was going to need to replace our NAS, she asked me how much it was going to be. Being honest, I told her it could be fairly expensive, depending on what I decided to build. Luckily, we are in a position financially where I could spend upwards of $1,000 on a new NAS if I needed to, and she said that would be fine.

I spent quite a bit of time considering hardware. The biggest question was really whether I wanted to go with dual Xeon processors to build out a real server or go with what I am most familiar with, AMD CPUs like Ryzen (I build all my own desktop computers). The dual Xeon approach made a lot of sense as they can manage a lot of concurrent threads. The latest processors from AMD (and Intel, but I’m an AMD guy), Threadripper, also manage a lot of simultaneous threads. However, these are all pretty expensive processors; even somewhat older Xeons can be a little pricey if I’m trying to get something that was made in the last few years. Additionally, the motherboards that go with these can be almost as expensive. I debated these options for quite a while.

Then, I had an idea. I had an older computer lying around from my latest upgrade (I usually upgrade my desktop, give my immediate past desktop to my wife; during the last one, I ended up upgrading the entire device, case included, which left a third desktop computer sitting unused). I wanted to test out Plex and Crashplan on that computer just to make sure it worked. It’s an older AMD Athlon II X4 620 that has 24 gb of RAM and room for 8 SATA devices on the motherboard. Once I got everything up and running on this test system, I realized that, given my use scenario, I actually didn’t need the latest Threadripper AMD CPU or even dual Xeon processors. I don’t need concurrent ripping of 4k or 8k video files. I don’t even have any 4k video files (my main TV has 4k capabilities, but I rarely use them). Most of my video is 1080p, which looks great. I tested the system out streaming a video to my TV and backing up files and it worked great. So, I decided to re-purpose this older computer and make it my new NAS.

Once I got everything set up (see below for details), I wanted to see just how much my NAS could handle, especially since the old Synology NAS struggled with streaming just a single HD stream. To run my real world test, I started streaming audio from three Amazon Echo devices in three different rooms in the house via Plex, started streaming a 1080p video file to my phone on the home network, and started streaming a 1080p video file to my main TV. This screenshot shows the Plex server with 5 simultaneous streams:

Simultaneously streaming to five devices on the home network.

The big question was whether my 4 cores could handle this. Here’s how things looked:

All four CPUs were never maxed out simultaneously.

Conclusion: My older AMD Athlon X4 620 was more than up to the task. With two simultaneous 1080p video streams and three mp3 streams, the server was working, but it was far from maxing out. Since we have just one TV in the house, the odds of us ever needing to simultaneously stream more than two videos are almost zero, and that’s true even considering that I am allowing some of my siblings access to my Plex server.

What does this mean for the average person building a DIY NAS? Unless you have, let’s say, 10 4k TVs in your house and you want to simultaneously stream 4k videos to all of them, you probably don’t need the latest and greatest CPU in your NAS. In all likelihood, I’ll let the system I have run for a year or two (unless there is a problem), and then upgrade my box, my power supply, my motherboard, and my CPU (which will also require upgrading my RAM). By that time, Ryzen Threadrippers will have dropped in price enough that it won’t cost me $2,000 to build a beast of a NAS that can serve multiple 4K streams at the same time. I’m not sure I’ll ever need that much bandwidth or power given my use scenario, but I can imagine needing a bit more power in the future.

Where you probably do not want to skimp is on RAM. As I have been transferring my files and media to my new NAS from the old one, I have seen my RAM usage go up to as high as about 16 gb at different times. That has been the result of large bandwidth file transfers, Plex indexing the media files (video and audio), and simultaneous streaming of files. In short, you can go with an older, not crazy expensive, multi-core CPU for your NAS and be fine, but make sure you’ve got at least 16 gb of RAM, maybe more.

DIY NAS: Hard Drive Options

Where I got the most bogged down in my research was in deciding on how many hard drives to use, what file system to use, and whether or not I should use a RAID for the drives. In my Synology box, I had just two 4 terabyte hard drives in a RAID 1 (or mirror) arrangement. While I am getting close to filling up the 4 terabytes (I have about 3 terabytes of photos, movies, files, music), I was more concerned with not losing data. With a RAID 1, all of my data was backed up between the two hard drives. Thus, if I lost a drive, I would still have a copy of all of the data.

Additionally, I pay for unlimited storage through Crashplan. Everything on my NAS is backed up off-site. This way, I have a local copy of all of my files (the mirrored drive) and an off-site copy of all my files (in case there is a fire or catastrophic failure). Since I do back up everything off-site, I could theoretically just go for speed with, for instance, a RAID 0 that stripes all my data between drives. But a failure would mean I would lose all my local data and I would have to restore from the off-site backup (which would take quite some time given the amount of data I have).

As I considered the options for hard drives, my initial thought was to use the same system but increase the size of the hard drives. There are now hard drives with capacities in excess of 10 tb. They are expensive, but two 10 tb hard drives in a RAID 1 would basically replicate what I was doing but give me plenty of room to grow my photo, video, and file collection. I was just about to pull the trigger on this plan until I realized that another option might make more sense.

RAID 6 requires at least 4 hard drives, but offers a number of advantages over RAID 1 (e.g., similar speed but better redundancy, as well as the possibility of hot swapping hard drives). And, if I went with RAID 6, I could buy cheaper 4 or 6 tb hard drives instead of 10 tb hard drives and end up with just as much or more space than I would get with two 10 tb hard drives, for less money. From the many articles I read on this, it seems that lots of corporations use a RAID 6 given the redundancy and speed advantages that result. I ultimately decided to go with a RAID 6 with four 4 tb hard drives. Effectively, that meant I would be doubling my storage (8 tb) while improving my redundancy.

I still needed to figure out what file system to use. Having worked with Linux for over a decade, I typically use EXT4 on all of my computers. It doesn’t have any file size limitations that matter for my use and does everything I need. However, I had been hearing about ZFS for a while (as well as BTRFS) and what I had heard made me think that ZFS was really what I should be running on my NAS. While it may slow down my NAS a little bit, the benefits of preventing bit rot and the redundancy it includes meant the impact on performance would likely be worth it. However, ZFS doesn’t come as a standard file system option in Linux distributions. I have used various disk partitioning programs enough to know what the standard file systems are that ship with Linux distributions and ZFS is not one of those. I was a little worried about what might be entailed in installing ZFS and setting up ZFS in a RAID6 (in ZFS terms, it’s called RAIDZ2). Once I found this handy guide, I realized that it wasn’t that difficult and was something that I could easily do. Before I headed down this path, I tested the guide with a spare drive I had lying around and it really was simple to set up ZFS as the file system on a drive. That test convinced me that ZFS was going to be the route to go for my file system on my drives. Thus, my RAID6 of four 4 tb drives is actually a RAIDZ2 with a functional capacity of 8 tb of storage, and it is expandable should I want to do so. (Also useful for ZFS is this post on scheduling regular scrubs on the drives.)
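For reference, creating a RAIDZ2 pool like mine only takes a couple of commands on Kubuntu. This is a sketch: the pool name “ZFSNAS” matches what I refer to later in this post, but the device names are placeholders, and for real hardware you would want to use the /dev/disk/by-id/ paths instead:

sudo apt-get install zfsutils-linux
sudo zpool create ZFSNAS raidz2 /dev/sda /dev/sdb /dev/sdc /dev/sdd
sudo zpool status ZFSNAS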

Update: 2019-08-18 – I restarted my file server after installing system updates and my ZFS pool was missing. That was terrifying. I finally found a solution that was a little nerve-wracking but worked. Somehow, the mount point where my ZFS pool was supposed to mount either got corrupted or had something in it (for me, it was /ZFSNAS). I renamed that folder:

mv /ZFSNAS /ZFSNAS-temp

I was then able to import my ZFS pool with the following command:

sudo zpool import ZFSNAS

Apparently, this is the result of an upgrade to the ZFS software. This happened again right after I rebooted. There is a new option in ZFS systems that has to be enabled in order for these to mount after reboots:

systemctl enable zfs-import.target

Now, when I reboot, my ZFS pool comes right back up.

Update: 2019-11-13 – I installed an update and restarted, and now my ZFS pool has disappeared again. The solution above brought it back, but enabling zfs-import.target doesn’t bring it back up anymore on the next reboot. I tried enabling all of the following (from here):

sudo systemctl enable zfs.target
sudo systemctl enable zfs-import-cache
sudo systemctl enable zfs-mount
sudo systemctl enable zfs-import.target

None of those worked. Not sure what is going on but I’m pretty sure it’s tied to ZFS changes in the latest Ubuntu/Debian kernel. Argh! I also had to do the following after I brought the pool back up to make sure it was shared across the network:

sudo systemctl restart nfs-kernel-server

NOTE: It also looks like I set up my ZFS pool in a problematic fashion. I created one pool but no datasets, whereas I should have had one pool with multiple datasets. Datasets are where the files should be stored, not the pool directly. I’m now struggling to set up automated snapshots. For anyone starting fresh, creating datasets is simple enough; see the sketch below.
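A sketch of what I should have done from the start (the dataset names here are hypothetical – use whatever layout makes sense for your files):

sudo zfs create ZFSNAS/media
sudo zfs create ZFSNAS/photos
sudo zfs create ZFSNAS/backups
zfs list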

DIY NAS: What software to install?

I have been running Plex for about 6 years to manage my media collection. With my Synology box, the limited RAM and not very good CPU meant that it actually had a pretty hard time managing my media collection. It would transmit videos across my network to my ROKU device, but only if they were in a specific format (mp4, which was probably a limitation of the Synology box as that is not a problem with Plex or ROKU). Setting up a video slide show was basically impossible on my ROKU from my Synology box as the CPU and RAM just couldn’t cut it. While I could play my music across the network, the Synology box would also not play nice with my music in Plex. As a result, I used Plex just to watch videos but not for anything else, even though it is a great way to manage all sorts of media – video, photos, and music. Thus, one of the requirements of my new DIY NAS would be that it has to take advantage of all the features that Plex has to offer (I have a lifetime Plex Pass). So, Plex was the core requirement for my NAS.

But I also wanted my NAS to run Crashplan. As noted above, my Synology box didn’t have enough RAM to run Crashplan, which meant I had to run it on my desktop to backup the files on my NAS to Crashplan. It was a hack to get around the limitations of the Synology NAS (FYI, you need about a gigabyte of RAM for every terabyte of files you want to back up to Crashplan). Plex and Crashplan were minimum software requirements. My DIY NAS had to be able to run both of those and run them well.

I do occasionally download stuff via BitTorrent (mostly Linux distributions). So, having a BitTorrent client installed would be nice. Kubuntu comes with one, KTorrent, which was fine.

The last piece of software I really needed was a way to control my NAS remotely. The goal was to basically have it run headless, stick it in a corner in my office, and just let it do its thing. I can control the Plex server through the Plex website, but to do everything else I would need VNC software. I was actually surprised at how difficult it was to find software that would let me control my NAS. I tried Remmina first and had no luck. The interface was clunky, not intuitive, and I was only able to connect successfully about 1 time out of 10. I finally went with NoMachine. (I initially went with TeamViewer, but they claimed I was using their software for commercial purposes and cut me off. Fuck them.) Also, in order to make this work reasonably well, you need to install a dummy X11 video driver and create an X configuration file for the dummy driver, as detailed here. Otherwise, the VNC software will be slow as molasses.
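Roughly, the dummy driver setup amounts to installing the package and pointing X at a fake monitor. The snippet below is only a sketch of what the linked guide walks through – the identifiers, resolution, and exact file location are up to you:

sudo apt-get install xserver-xorg-video-dummy

# then something along these lines in /etc/X11/xorg.conf (or a file under xorg.conf.d):
Section "Device"
    Identifier "DummyDevice"
    Driver "dummy"
    VideoRam 256000
EndSection
Section "Monitor"
    Identifier "DummyMonitor"
    HorizSync 28.0-80.0
    VertRefresh 48.0-75.0
EndSection
Section "Screen"
    Identifier "DummyScreen"
    Device "DummyDevice"
    Monitor "DummyMonitor"
    DefaultDepth 24
    SubSection "Display"
        Depth 24
        Virtual 1920 1080
    EndSubSection
EndSection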

I also installed SSH so I could access the NAS remotely in case the GUI VNC programs were having issues. I followed this guide.

(Update: 12/7/2019. I strongly encourage the use of tinyMediaManager for managing your videos and TV shows. It’s the slickest software I’ve found for doing this and runs great on Linux.)

DIY NAS: NFS Fileserver

This guide made it very easy.
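For what it’s worth, the whole thing boils down to a few lines. A sketch, assuming the pool is mounted at /ZFSNAS and the home network is 192.168.1.0/24 (adjust both to your setup):

sudo apt-get install nfs-kernel-server
echo '/ZFSNAS 192.168.1.0/24(rw,sync,no_subtree_check)' | sudo tee -a /etc/exports
sudo exportfs -ra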

DIY NAS: Specifications

Here’s the final rundown of what I put together:
OS: Kubuntu 18.04 (long term support)
Hard Drives: ZFS filesystem; four 4 terabyte drives in a RAIDZ2 for 8 tb of storage
CPU: AMD Athlon X4 620
Motherboard:
RAM: 24 GB
Software: Plex, Crashplan, NFS, TeamViewer, kTorrent

UPDATE – 2019-04-24

It’s been almost 4 months running this NAS box. Generally, it’s worked really well. I have, however, run into two problems.

The first one was pretty recent (about 2 weeks ago). I’m not sure what happened, but I’m assuming some update to Kubuntu 18.04 over the last month led to the screen going black when I would VNC into the box. The server was still working, but I couldn’t get it to display anything via VNC until I restarted the box manually (SSH on the NAS box would have given me another option for that). I’m still not 100% sure what the problem was, but it was somehow related to KDE. I ended up installing Unity/Gnome as the desktop environment and now the problem is gone. I solved the problem about a week ago, and the box has been running without a hitch since then.

The only other issue I will note is that there have been two times when I have been streaming shows on my NAS box through ROKU where my box had to actually transcode a 1080p file. With the hardware I have inside it, the 4-core processor was not up to the task. It ended up stopping playback several times and buffering a lot. It can easily stream a 1080p file in almost any format (both Plex and Roku can handle almost every format) and can even manage to stream two of them simultaneously (tested on my TV and phone simultaneously), but the two movies it had issues with were in a weird MKV format that my Roku couldn’t handle. I ended up ripping them into a different MKV format and, voila, problem solved. The point being – my NAS box is a little underpowered. But, a year or two from now, the price of an AMD Threadripper will be half of what it is now. I’ll swap that in and will have all the power I need.

So, 4 months into this, it’s working extremely well. I’d give it a 99 out of 100.

Update 06-21-2020:

As I predicted, after about 1 1/2 years, it was time to upgrade my NAS. I started a new post detailing my new build and what I changed here.


Linux – tinyMediaManager on Kubuntu 18.04

I run a network attached storage (NAS) device at home to manage all my media (e.g., music, videos, photos, etc.). I have used various programs over the years to manage the naming and organizing of my music files but just recently discovered tinyMediaManager for managing video files. Since it’s written in Java, it works on any OS, including Linux.

Given my large collection of movies, I have been looking for software that would properly name and organize all of them. tinyMediaManager seemed like the perfect solution, but I immediately hit a snag once I tried to get it running on Kubuntu 18.04 (my current distribution of choice). I couldn’t get the GUI to launch. It took some doing, but I eventually figured out how to make this work on Kubuntu 18.04.

First, download the tar.gz file here. (Note: I couldn’t download it using Chrome, as the tinyMediaManager site only lets you download it using a browser that allows Java and, as of Chrome 45, Chrome doesn’t. I used Firefox, which worked fine.)

Untar that file and move the resulting folder wherever you want it to reside (obviously, somewhere you have access to it, but, otherwise, it doesn’t matter).

Now, the tricky part. According to the tinyMediaManager website, all you need to do to launch the program is use a terminal to navigate to the folder you just untarred and use the command:

cd /home/ryan/tinyMediaManager
./tinyMediaManager.sh

When I tried this, it didn’t work. It seemed like it was trying to do something, but then the GUI wouldn’t open and… nothing. Disappointed, I started looking for answers. I eventually found the “launcher.log” file in the tinyMediaManager folder and that gave me the clue I needed to solve the problem. As it was trying to launch, it was running into a problem with a specific thread and library in the version of Java I had installed by default. Here was the error:

Exception in thread "Getdown" java.awt.AWTError: Assistive Technology not found:

It turns out, tinyMediaManager has not been updated to work with newer versions of Java. So, here’s what you can do.

First, install the Open Java Development Kit (OpenJDK) version 8, which is the latest version it works with:

sudo apt-get install openjdk-8-jdk

It turns out you can have multiple OpenJDK versions installed at the same time. I was running OpenJDK 11 as my default. Now, in order to switch to the OpenJDK 8 environment, type in the following command at the terminal:

sudo update-alternatives --config java

You’ll then be given the chance to choose which openjdk you want to use, like in this screenshot:

Choose OpenJDK 8 as your default. Then try running tinyMediaManager again. If the software gods are smiling upon you, the GUI will launch:
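If you’d rather not change the system-wide default Java, another option is to point just tinyMediaManager at OpenJDK 8 when you launch it. A sketch (the JVM path below is the usual Ubuntu location, but check what’s actually on your system):

export JAVA_HOME=/usr/lib/jvm/java-8-openjdk-amd64
export PATH="$JAVA_HOME/bin:$PATH"
cd /home/ryan/tinyMediaManager
./tinyMediaManager.sh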


Kubuntu: How to Map Windows Network Share

My workplace is largely a Windows based institution.  Due to a collaborative project, I was recently asked to access some documents on a Windows Network Drive.  I was sent directions for how to add that drive in Windows.  Those directions were not that helpful as I, of course, run Linux.  As perhaps the only Linux user on my campus, this meant that I had to figure out how to map the Windows Network share to my computer on my own.  No problem.  I got this.

After a little googling, I figured out I needed to do the following:

(1) Install the packages “samba” and “cifs-utils.”  You can do this using synaptic or from the command line (sudo apt-get install samba cifs-utils).

(Installing the software from synaptic.)

(2) Once those packages are installed, it’s a good idea to restart your computer.

(3) Now, create a directory where you want to mount the mapped network drive.  I put the network drive on my desktop just to test this.  I may change that later.

(4) Open a command prompt (e.g., Konsole) and now it’s time to mount the drive to the newly created folder.  Here’s the command:

sudo mount -t cifs -o username=[your.username] //[name.of.network.drive]/[name.of.specific.folder] /home/[your.username]/Desktop/[folder.where.you.want.the.drive.mounted]


(5) After you hit enter, you’ll be asked for your password for the network drive.  Assuming you didn’t have any typos, you should now have the Windows Network Drive mounted to the folder you created and should have access to all the files inside.


(6) NOTE: This is a temporary mapping of the Windows Network Share.  Since I only need to access this Windows Network Share Drive occasionally, I don’t want to set up my computer to map it every time I boot it up.  There is a different process for mapping the drive permanently.

(I found the most helpful directions for this here.)

(NOTE: On LinuxMint 18.0, I was unable to get this to work.  Every time I tried to map the drive using the command above I would get the error: “mount error: could not resolve address for [network.drive.name]: Unknown error”.  It turns out, for some reason Linux doesn’t play well with Windows share names.  However, when I swapped out the name of the drive for the IP address, everything worked great.  Try using the IP address instead of the name of the Windows share if you get this error.)

(SECOND NOTE: A way around the above error (kind of) is to associate the Windows share name with the IP address in your /etc/hosts file.  Using a console, type: sudo nano /etc/hosts.  When the file comes up, add the IP address followed by the share name (e.g., 192.168.1.1   [share_name]).  That will associate the IP address with the share name and you’ll then be able to mount the Windows share with the name and not the IP address.)
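As for the permanent mapping mentioned in note (6), the usual approach is an /etc/fstab entry plus a credentials file so your password isn’t stored in fstab itself.  This is only a sketch with made-up names (server address, share, mount point, and uid) – adapt it to your own setup:

# /etc/fstab entry (all on one line)
//192.168.1.50/shared  /home/ryan/Desktop/networkdrive  cifs  credentials=/home/ryan/.smbcredentials,uid=1000,gid=1000  0  0

# /home/ryan/.smbcredentials (set permissions with: chmod 600 ~/.smbcredentials)
username=your.username
password=your.password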


Linux – Kubuntu 16.04 with Plasma 5.5.5 – unable to change file associations

I just upgraded my laptop (Lenovo ThinkPad T540P) to the latest version of Kubuntu – 16.04 with Plasma 5.5.5.  Everything was running great until I had an issue with Ark, the archiving program that comes with KDE.  It was having an issue unzipping an archive.  It seemed to unzip the archive, but the resulting file should have been a directory and instead was being recognized by the operating system as a PDF file.  In the process of trying to get the extracted zip file open, I set Ark as an option for opening PDF files using the standard approach: right-click on file, select Properties, click on File Type Options, and then add the new option – Ark.

Step 1 – right-click a PDF file

Step 2: click on File Type Options

Step 3: adjust the Application Preference Order or add/remove applications

This didn’t solve my archive problem, but did introduce a new problem with Kubuntu 16.04.  Ark became the default program for opening PDF files, which is absolutely not what I want both because Ark can’t open PDFs and because I prefer Okular for this.  I tried a dozen times or so to change the file association using the same method I had used to add it above (right-click on a PDF, select Properties, click on File Type Options, etc.) and then deleting Ark as an option or moving it down so it isn’t the default option.  Every time I would try this, Ark would re-appear as soon as I hit “Apply” or “OK.”

Since this didn’t work when I was using the quick and easy method of right-clicking, I tried changing the file associations in System Settings.  Open up System Settings and click on “Applications”:

Then click on “File Associations” and enter PDF in the search bar:


I tried doing the same thing here – deleting Ark as an option or moving it down in the preferred order list – and it would just reappear when I hit “Apply.”  This is definitely a bug in the new Plasma/Kubuntu version.

I knew there was another location to change these default settings – a text file that can be edited using something like “kate,” the built-in KDE text editor.  From a terminal/Konsole, type:

sudo kate /home/[user]/.local/share/applications/mimeapps.list

Once you open that file, you can see some default settings as well as my attempt to remove Ark as a program for opening PDFs:


The information in my mimeapps.list file was correct, but it was still having the same problem of Ark being called as the default program to open PDF files.

After a little searching on the internet, I found a different solution that actually worked (again, suggesting this is a bug in KDE/Plasma/Kubuntu).  Apparently, the mimeapps.list in that location is not the one that wins.  There is another mimeapps.list in a different location that takes precedence over the one above, located here:

/home/[user]/.config/mimeapps.list

I opened this file using kate:

sudo kate /home/[user]/.config/mimeapps.list

And removed the Ark association with PDF files by deleting it, so the current version looks like this:

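In case the screenshot isn’t clear, the relevant entries look roughly like this (the .desktop file names below are examples rather than a copy of my file; the exact names depend on what’s installed):

[Default Applications]
application/pdf=okularApplication_pdf.desktop;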

After I did this, the system settings took effect and Ark was no longer the default app called when I tried to open PDF files.  This seems like a serious bug in Plasma/Kubuntu that the developers need to fix.  It seems as though the mimeapps.list in ~/.config is overriding the one in ~/.local/share/applications, which means you cannot change the default file associations through the GUI in KDE using Kubuntu 16.04.

If you run into this problem, please report it to the Kubuntu/KDE/Plasma developers.


Ubuntu Linux (KDE): “Chrome didn’t shut down correctly” error

Every so often, Chrome on my Linux based computers (one running Kubuntu and one running Linux Mint KDE) starts having a problem.  I like having Chrome save my tabs from my previous sessions so I can pick back up where I left off.  But for some reason, and I’m not exactly sure what that reason is, Google’s Chrome eventually starts giving me the following error message:

Chrome didn’t shut down correctly error

The error says, “Chrome didn’t shut down correctly” followed by a button that says “Restore.”  By clicking on the Restore button, I’m able to get my tabs back, but it’s kind of annoying that I have to do that.  Also, there is no indication of what the problem is in the Chrome crash log (chrome://crashes), which means I really have no idea what causes this problem.  I tried a bunch of suggestions from various websites to get this error to go away and finally found one that works.  Here’s what you need to do.

First, click on Chrome’s Settings option:

Chrome settings menu

In the tab that opens up, scroll to the bottom and select “Show advanced settings…”

Chrome advanced settings

Near the bottom of the advanced settings is the option you want: System.  There should be two checkboxes there.  The first one says, “Continue running background apps when Google Chrome is closed.”  I only get the “Chrome didn’t shut down correctly” error when that box is checked.

Continue running background apps when Google Chrome is closed

Uncheck the box next to that option, like this:

Continue running background apps when Google Chrome is closed

Now try restarting Google Chrome.  If whatever is causing this is the same problem for you as it is for me, it should have solved the problem.  If not, sorry.  Keep googling for an answer.   🙁


Linux: Getting “Find” working in Dolphin on KDE (Linux Mint and Kubuntu)

One of the reasons I switched to KDE from Gnome was Dolphin, the file manager that ships with KDE.  When I made the switch a couple of years ago, the Find feature in KDE worked really well.  But some time in the last couple of years, the two distributions I’ve been using – Kubuntu and Linux Mint KDE – haven’t had the Find feature working from the base install.  I’ve muddled along without that feature for about two years (I don’t always need it, but there have been a few times when I really did need it and it didn’t work).  I finally figured out how to get it working.  It has to be one of the most ridiculously broken elements of Linux I’ve ever discovered as the solution is convoluted and counter-intuitive.

To begin with, from the base install in Dolphin, here is the Find button:


If you click it, it will open a find dialogue in the location bar at the top of Dolphin:


If you try to find something, you’ll get an error message that says, “Invalid protocol” that looks like this:


Dolphin has done that for the last two or three years or so, which means I haven’t been able to use this very basic feature of the file manager.

If you look around for advice on how to fix this, you’ll get mired in a bunch of forums that suggest different things about “baloo,” the new search program in KDE (that replaced Nepomuk, the failed, processor-hungry semantic search engine that no one really liked).  Here’s the problem with “baloo”: it’s not installed by default in Linux Mint KDE or Kubuntu.  That’s actually fine if you don’t need this search feature.  But, and here’s the convoluted part of this, you don’t actually use baloo for the search function in Dolphin.  However, you have to install it in order for the search function in Dolphin to work, and then turn baloo off.  Seriously!  It’s rather absurd and broken at the moment.

Here’s what you have to do.  First, install baloo4 from synaptic:


If you try the search function now, it still won’t work.  Dolphin won’t give you the error message anymore, but it also won’t find anything.  It just gives you an empty page of results, regardless of what you search for.  But, installing baloo does something that makes enabling the Find feature possible.  If you open up System Settings, you’ll see a new icon that wasn’t there before – Desktop Search:


We’ll return to that System Setting option in a minute.

Next, go back to Synaptic and install the following packages: kde-baseapps, systemsettings (probably already installed), and kfind (also probably already installed).
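If you prefer the terminal over Synaptic, the same installs look roughly like this (package names as they were in the Kubuntu/Linux Mint KDE releases I was using; they may differ on newer versions):

sudo apt-get install baloo4 kde-baseapps systemsettings kfind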


You can still try searching in Dolphin after you’ve done this, but it won’t work.  There is one more completely counter-intuitive step.  Once you’ve installed kde-baseapps (and the other two packages), go back to the System Settings window and click on the new Desktop Search icon.  Below the box where you can exclude locations, there is a check box that says “Enable Desktop Search.”  Uncheck it and click “Apply”:


Now, try searching in Dolphin and, voila, it works:


This fix for the Find feature in a basic program in KDE is completely counter-intuitive.  In sum, in order to turn on the “search” feature, you have to install a package that you aren’t going to use, install another package that you are going to use, and then turn off the first package (baloo).  Why?  Why?  Why?

KDE programmers – I love your software!  I really, really, do.  But this makes no sense.  Can you please decide on a file/folder search solution, install it by default, and then make it a simple click of a button to turn it on or off?  This should not be anywhere close to this complicated!
