ZFS Snapshot Management

With my new fileserver/NAS, I am using a ZFS raid array for my file storage. One aspect of the ZFS file system that I didn’t really investigate all that closely before I installed it was the snapshot feature. ZFS snapshots basically store a “picture” or “image” of all the files on the system at a specific point in time. The logic behind snapshots is that you can then roll the filesystem back to that point in time, in case there was a problem.

This is, of course, a great feature for a variety of reasons. If you accidentally deleted a file, you could recover it this way. If someone hacked your server and encrypted your files, you could recover them this way (assuming the snapshots aren’t also encrypted).

However, there is a catch. For snapshots to work, the filesystem has to keep every file that existed when the snapshot was made. In other words, as long as you retain a snapshot, you also retain all of the files associated with that snapshot, along with the space they occupy. Once you delete a file, it won’t show up in your file directory, but the filesystem has to retain a copy of it as part of the snapshot.

It was this issue that finally got me to look more closely at snapshot management. My fileserver has four 4TB hard drives in it, which gives me roughly 8TB of storage. I can always upgrade the drives to bigger drives if needed, but I don’t have 8TB of files at this point. I have around 4TB, but my fileserver was reporting less and less available space, even after I deleted some files I didn’t want/need. I was confused until I realized that I had set up some automatic snapshots when I first set up the fileserver. Those snapshots included many of the files I had deleted, so no space was freed up when I deleted those files. I had to delete the snapshots in order to free up the space. It took some doing, but I finally figured out how to manage snapshots a bit better.

First, here’s how to list your existing snapshots:

zfs list -t snapshot -r [ZFSpoolnamehere]

If you have any snapshots, you should see a list like this:

This shows the snapshots I have created on my fileserver.
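
For readers without the screenshot, the output looks roughly like this (a sketch – the names and sizes below are illustrative, not taken from my actual pool):

```shell
# Illustrative `zfs list -t snapshot -r ZFSNAS` output (made-up values):
#
# NAME                      USED  AVAIL  REFER  MOUNTPOINT
# ZFSNAS@2021_01_02          96K      -   104K  -
# ZFSNAS/files@2021_01_02   1.2G      -  3.50T  -
#
# The USED column shows space held exclusively by each snapshot - blocks
# that deleted files still occupy, which is exactly the issue described above.
```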

To create a snapshot, you can use this code:

sudo zfs snapshot -r [ZFSpoolnamehere]@[nameofsnapshot]

The above command runs zfs as root (via sudo) and tells it to create a snapshot. The “-r” part of the command tells zfs to make a recursive snapshot, which means it will create snapshots of the pool and each of the datasets within it. You then give the name of the ZFS pool, followed by the “@” symbol and the name of your snapshot. Here’s my actual command:

sudo zfs snapshot -r ZFSNAS@2021_01_02

This command creates a snapshot of my entire ZFS pool and names it with the date it was created – January 2nd, 2021.

Once I’ve created my snapshot, I can see it in the list of snapshots I have created with the command above.

Of course, there is also the issue of deleting snapshots. This was the issue I needed to address that got me started on ZFS snapshots – I had some very old snapshots that were taking up terabytes of space on my fileserver. To delete a snapshot, you can use the following code:

sudo zfs destroy [ZFSpoolname]/[directoryname]@[nameofsnapshot]

Here’s an example of my actual code used to delete an old snapshot:

sudo zfs destroy ZFSNAS/files@2020_12_25

This command tells zfs to delete my snapshot in the ZFSNAS pool that was made of the “files” directory on December 25, 2020 (@2020_12_25). When I check the list of snapshots after deleting that one, it’s no longer there and the space that was reserved for that snapshot is freed up.
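
Before destroying anything, it can be worth previewing what a destroy will actually remove. This is a sketch using the snapshot name from my setup; zfs destroy’s “-n” flag does a dry run and “-v” describes what would happen, so nothing is actually deleted:

```shell
# Dry run: -n simulates the destroy and -v prints what would be destroyed
# and how much space would be reclaimed, without deleting anything.
sudo zfs destroy -nv ZFSNAS/files@2020_12_25
```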

Luckily, I haven’t needed to roll back to a snapshot at this point. However, it is nice knowing that my filesystem on my NAS has that capability.

For more information on working with ZFS snapshots, see here.

Finally, there is the issue of how many snapshots to keep and when to make them. Given my use case – a fileserver/NAS that stores my music, my video collection, my old photos (not the current year’s photos), and my old files – I don’t need particularly frequent snapshots. I decided that one snapshot per month would be sufficient. You can automate this with a crontab job, but I simply added a monthly reminder to my calendar to create a snapshot. That way, I can also delete the corresponding snapshot from 12 months prior. Basically, I am keeping one year’s worth of snapshots, one per month. This gives me a year’s worth of backups to roll back to should I need it, while regularly freeing up any space held by particularly old snapshots.
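
For anyone who does want to automate it, a single crontab entry can both create the new monthly snapshot and remove the year-old one. This is an untested sketch using my pool name and date-based naming scheme (note that cron requires “%” characters to be escaped):

```shell
# Hypothetical crontab entry (add via "crontab -e" for root): at 02:00 on
# the 1st of each month, take a recursive snapshot named with today's date,
# then destroy the matching snapshot from 12 months earlier.
0 2 1 * * /sbin/zfs snapshot -r ZFSNAS@$(date +\%Y_\%m_\%d) && /sbin/zfs destroy -r ZFSNAS@$(date -d "12 months ago" +\%Y_\%m_\%d)
```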

I can imagine that other use scenarios would require different frequencies of snapshots for a zfs system (e.g., a production server or someone who uses a zfs raid as their primary file storage). Since my server is primarily used for storing old files and relatively unchanging collections, once a month snapshots are sufficient.

Long Term Storage of Gmail/Email using Mozilla’s Thunderbird (on Linux)

I have email going back to 2003. There have been times when I have actually benefited from my email archive. Several times, I have gone back 5 or more years to look for a specific email and my archive has saved me. However, my current approach to backing up my email is, let’s say, a borderline violation of Google’s Terms of Service. Basically, I created a second email account that I use almost exclusively (not exclusively – I use it for a couple of other things) for storing the email from my primary Gmail account. However, Google has recently changed its storage policies for their email accounts, which has made me a little nervous. And, of course, I’m always wary about storing my data with Google or other large software companies.

Since I have already worked out a storage solution for old files that is quite reliable, moving my old email to that storage solution makes sense. (FYI, my storage solution is to run a local file server in my house with a RAID array so I have plenty of space and local backups. Certain folders on the file server are backed up in real-time to an online service so I also have a real-time offsite backup of all my important files. In other words, I have local and offsite redundancy for all my important files.)

I’m also a fan of Open Source Software (OSS) and would prefer an OSS solution to any software needs I have. Enter Mozilla’s Thunderbird email client. I have used Thunderbird for various tasks in the past and like its interface. I was wondering if there was a way to have Mozilla archive my email in a way that I can easily retrieve the email should I need to. Easily might be a bit of a stretch here, but it is retrievable using this solution. And, it’s free, doesn’t violate any terms of service, and works with my existing data backup approach.

So, how does it work? And how did I set it up?

First, install Thunderbird and set up whatever online email account you want to back up. I’m not going to go through those steps, as there are plenty of tutorials for both. I’m using a Gmail account for this.

Once you have your email account set up in Thunderbird, click on the All Mail folder (assuming it’s a Gmail account) and let Thunderbird take the time it needs to synchronize all of your email locally. With the over one hundred thousand emails I had in my online archive, it took the better part of a day to do all the synchronizing.

I had over 167,000 emails in my online archive account.

Once you’ve got your email synchronized, right-click on “Local Folders” and select “New Folder.” I called my new folder “Archive.”

Technically, you could store whatever emails you want to store in that folder. However, you’ll probably want to create a subfolder in that folder with a memorable name (e.g., “2015 Work”). Trust me, it will be beneficial later. I like to organize things by year. So, I created subfolders under the Archive folder for each year of emails I wanted to back up. You can see that I have a folder in the above image for the year 2003, which is the oldest email I have (that was back when I had a Hotmail address… I feel so dirty admitting that!).

The next step is weird, but it’s the only way I’ve been able to get Thunderbird to play “nice” with Gmail. Open a browser, log in to the Gmail account you’re using, and empty your trash. Trust me, you’ll want to have the trash empty for this next step.

Now, returning to Thunderbird, select the emails you want to transfer to your local folder and drag them over the “Trash” folder in your Gmail account in Thunderbird. This won’t delete them but it will assign them the “Trash” tag in Gmail. Once they have all been moved into the Trash, select them again (CTRL+A) and drag them into the folder you created to store your archived emails. In the screenshot below, I’m moving just a few emails from 2003 to the Trash to test this process:

Once the transfer is complete, click on the Local Folder where you transferred the emails to make sure they are there:

And there are the emails. Just where I want them. This also means that you have a local copy of all your emails in a folder exactly where you want them. At this point, you have technically made a backup of all the email you wanted to back up.

To remove them from your Gmail account, you need to do one additional thing. Go back to the browser where you have your Gmail account open, click on the Trash, and empty the Trash. When you do, the emails will no longer be on Gmail’s server. The only copy is on your local computer.

Now, the next tricky part (I didn’t say this was perfectly easy, but it’s pretty easy). Thunderbird doesn’t store the Local Folder emails in an obvious location, but you can find the location easily enough. Right-click your Local Folder where you are archiving the emails and select “Properties.”

You’ll get this window:

Basically, the Location box tells you where to find the Local Folder where your email is stored. On Linux, make sure “view hidden files” is turned on in your file manager (I’m using Dolphin). The location is inside your home folder, in the hidden “.thunderbird” folder, inside a randomly generated profile folder that Thunderbird creates. Inside that folder, look for the “Mail/Local Folders” folder. Or, simply:

/home/ryan/.thunderbird/R$ND0M.default-release/Mail/Local Folders/
I have opened all the relevant folders on my computer in Dolphin so you can see the file structure.

Since I created an additional folder, there are two files in my Archive.sbd folder that contain all the emails I have put into that folder: “2003” and “2003.msf.” You can learn more about the contents of those files here, but, basically, the file with no extension stores the actual contents of the emails, while the .msf file is an index of the emails that Thunderbird can read (I was able to open both in Kate, a text editor, and read them fine). In short, you won’t have a bunch of files in your archive – just these two, and they are the two files you’ll want to back up.

I’ll admit, this was the part that scared me. I needed to know that I could just take those two files, move them to wherever I wanted to ultimately store them and then, when needed, move them back into my Thunderbird profile and read them just fine. So, I tested it repeatedly.

Here’s what I did. First, I clicked on my backup folder in Thunderbird to make sure all of the emails were there:

I then went into the Local Folders directory and moved the files to where I want to back them up.

I then clicked on a different folder in Thunderbird and then hit F5 to make Thunderbird refresh the Local Folders. It took a couple of minutes for Thunderbird to refresh the folder, but eventually, it did. Then, I selected the 2003 Archive folder again and the emails were gone:

This is what I expected. The emails are in the “2003.msf” and “2003” files on my backup server. Now for the real test. I copied the two backed-up files back to the Archive.sbd folder in my Thunderbird profile, selected the parent folder in Thunderbird, and hit F5 to refresh the folders again. It took a minute for the folder to refresh but, eventually, I clicked on the 2003 folder and…

The emails are back!

It worked!!!

What this means is that you can put all of the email you want to back up into a folder; that folder is stored in your Thunderbird profile. You can then find the relevant .msf file and its corresponding data file, move them wherever you want for storage and, if needed, move them back to your Thunderbird profile (or, technically, any Thunderbird profile using the same folder structure) and you’ll still have all of your email.

This may not be the most elegant method for backing up your email, but it’s free, it’s relatively simple and straightforward, and it works reliably. Your email is not in a proprietary format but rather in an open format that can actually be read in a simple text editor. Of course, it’s easiest to read it in Thunderbird, but you have the files in a format that is open and durable.

BONUS:

If you don’t think you’re going to need to access your email on a regular basis, you can always compress the files before storing them. Given that most email is text, you’ll likely see savings of close to 50% if space is at a premium (once I moved all of my 2003 email to storage, I compressed it and saw a savings of 60%). This will add a small amount of time to accessing the email, as you’ll have to extract it from the compressed format first, but it could mean substantial space savings depending on how many emails you’re archiving.
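
As a sketch of the compress-and-restore round trip (using a temporary directory and stand-in files in place of the real Archive.sbd folder):

```shell
cd "$(mktemp -d)"                              # demo directory standing in for Archive.sbd/
printf 'From - demo\n\nHello world\n' > 2003   # stand-in for the mbox data file
: > 2003.msf                                   # stand-in for the index file
tar czf 2003-mail.tar.gz 2003 2003.msf         # bundle and compress both files together
rm 2003 2003.msf                               # simulate moving them off to storage
tar xzf 2003-mail.tar.gz                       # later: restore both for Thunderbird
ls 2003 2003.msf                               # both files are back
```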

EXTRA BONUS:

This is also a way to move emails between computers. I ended up using this approach to move my email archives to my laptop so I could go through my old email while I’m watching TV at night and delete all the emails I’ll never want in the future (you could of course do that with the email online before you archive it). I’m pretty good about deleting useless emails as I go, but I don’t catch them all. With the .msf file and its accompanying data file, I was able to transport my email archives to whichever computer I wanted and modify the file, then return it to my file server for long term storage.

Linux – Bulk UnRAR

If you’re not familiar with RAR files, they are like ZIP files. RAR is an archive format.

I had a collection of more than 150 RAR files in a single folder that I needed to unrar (that is, open and extract from the archive). Doing them one at a time via KDE’s Ark software would work, but it would have taken a long time. Why spend that much time when I could automate the process?

Enter a loop bash script in KDE’s Konsole:

for i in *.rar; do unrar x "$i"; done

Here’s how the command works. The “for i in” part starts the loop (note: you can use any variable name here). The “*.rar” component tells the loop to run through all the RAR files in the directory, regardless of their names. The “do” keyword introduces the command to run on each file. The “unrar x "$i"” component calls unrar to unpack the archive; the “x” tells it to extract using the directory structure stored inside the RAR archive. The final piece after the second semicolon – “done” – closes the loop.
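
If the archives don’t contain their own top-level folders, a slight variation of the loop keeps each archive’s contents separated (this sketch assumes unrar is installed; “unrar x” accepts a target directory as its last argument):

```shell
# Variant: extract each archive into its own subdirectory, named after
# the archive file, so the contents of different archives don't mix.
for i in *.rar; do
  dest="${i%.rar}"        # archive filename without the .rar extension
  mkdir -p "$dest"
  unrar x "$i" "$dest/"   # extract with full paths into that subdirectory
done
```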

It took about 20 minutes to run through all the RAR files, but I used that time to create this tutorial. Doing this by hand would have taken me hours.

Linux: Batch Convert .avi files to .mp4/.mkv

I’ve been trying to clean up my video library since building my latest NAS. In the process, I found a number of .avi files, which is an older file format that isn’t widely used these days. While every time a file is converted it loses some of its quality, I was willing to risk the slight quality reduction to convert the remaining few files I had to convert into more modern file formats.

I initially tried converting the files using HandBrake. But given the number I needed to convert, I decided pretty quickly that I needed a faster method for this. Enter stackoverflow.

Assuming you have all of your .avi video files in a single directory, navigate to that directory in a terminal and you can use the following single line of code to iterate through all of the .avi files and convert them to .mp4 files:

for i in *.avi; do ffmpeg -i "$i" "${i%.*}.mp4"; done

In case you’re interested, the code is a loop. The first part (“for i in *.avi;”) starts the loop, telling the computer to look at every file with the .avi extension. The second part tells the computer what to do with each of those files – convert it to an .mp4 file with the same name. The final “done” closes the loop.
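
The renaming hinges on bash’s “${i%.*}” parameter expansion, which strips the shortest trailing “.extension”. A quick way to see what it produces:

```shell
# "${i%.*}" removes the last dot and everything after it, so only the
# final extension is replaced, even if the name contains other dots.
i="holiday.video.avi"
echo "${i%.*}.mp4"   # prints: holiday.video.mp4
```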

Of course, this code could also be used to convert any other file format into a different format by replacing the .avi or .mp4 file extensions in the appropriate places. For instance, to convert all the .avi files to .mkv, the code would look like this:

for i in *.avi; do ffmpeg -i "$i" "${i%.*}.mkv"; done

Or if you wanted to convert a bunch of .mp4 files into .mkv files, you could do this:

for i in *.mp4; do ffmpeg -i "$i" "${i%.*}.mkv"; done

BONUS:

If you have .avi files in a number of subfolders, you’ll want to use this script:

find . -type f -exec ffmpeg -i {} {}.mp4 \;

To use it, navigate in a terminal to the top-level folder, then execute this command. It will search through all the subfolders, find all the regular files in them (the “-type f” test keeps find from handing directories to ffmpeg), and convert them all to .mp4 files. Note that the new extension is appended to the existing name, so video.avi becomes video.avi.mp4.

Of course, if you have a mixture of file types in the folders, you’ll want a variation of this command that searches for just the files of a certain type. To do that, use this command:

find . -type f -name '*.avi' -exec ffmpeg -i {} {}.mp4 \;

This command will find all the files with the extension .avi and convert them all to .mp4 files using ffmpeg.

And, if you’re feeling particularly adventurous, you could search for multiple file types and convert them all:

find . -type f \( -name '*.avi' -or -name '*.mkv' \) -exec ffmpeg -i {} {}.mp4 \;

This code would find every file with the extension .avi or .mkv and convert it to .mp4 using ffmpeg.

NOTE: This command uses the default conversion settings of ffmpeg. If you want more fine-tuned conversions, you can always check the options available in converting video files using ffmpeg.

BONUS 2

If you want to specify the codec to use in converting the files, that is a little more complicated. For instance, if I want to use H265 instead of H264 as my codec, I could use the following code to convert all of my .avi files in a folder into .mkv files with H265 encoding:

for i in *.avi; do ffmpeg -i "$i" -c:v libx265 -crf 26 -preset fast -c:a aac -b:a 128k "${i%.*}.mkv"; done

The default setting in ffmpeg for audio is to pass it through. Thus, if you wanted to just convert the video to a new codec but leave the audio, you could use the following command:

for i in *.avi; do ffmpeg -i "$i" -c:v libx265 -crf 26 -preset fast "${i%.*}.mkv"; done

This will convert the video to H265 but retain whatever audio was included with the video file (the default is to take the audio with the highest number of channels).

Additional information on the various settings for H.265 is available here. Some quick notes: the number after “-crf” is basically an indicator of quality, but is inverted. Lower numbers improve the quality; higher numbers reduce the quality. Depending on what I’m encoding, I vary this from 24 (higher quality) to 32 (lower quality). This will affect the resulting file size. If time is not a concern, you can also change the variable after “-preset.” The options are ultrafast, superfast, veryfast, faster, fast, medium, slow, slower, and veryslow. Slower encodes will result in better quality output but it will take substantially longer to do the encode.

If you run into the error “find: paths must precede expression” when doing bulk conversions, the culprit is an unquoted pattern: if the pattern matches any files in the directory you run the command from, the shell expands it before find ever sees it. The solution is to put the pattern in single quotes so find receives it intact, like this:

find . -name '*.mkv' -exec ffmpeg -i {} -c:v libx265 -crf 26 -preset fast {}-new.mkv \;

HandBrake – H.265 NVEnc 1080p Ripping Chart and Guidelines

With a Plex server, I want my collection of movies backed up digitally so I can watch them when and where I want to. This involves a two-step process. First, I have to rip the video from Blu Ray, which I do using MakeMKV. Since that process is pretty straightforward, I’m not going to cover how to do it here. It’s the second step that is more complicated – compressing the video using HandBrake.

I’ve been using HandBrake for years, but have typically just used the default settings. That’s a terrible idea for a number of reasons, which I’ll detail in this post. But the primary reason why I’m posting is to detail how different settings translate into different file sizes so people have a better sense of what settings to use.

Format

The first thing you need to decide when ripping a video using HandBrake is the resulting file format. I’m using HandBrake 1.3.3 which includes three options: MPEG-4, Matroska, and WebM. Each of these formats has advantages and disadvantages.

You can choose which format you want on the summary tab of Handbrake

MPEG-4 (or mp4/MP4) is the most widespread format and the most compatible with various devices and programs. This is likely going to be the default for most people. However, MPEG-4 has a major limitation for disc rips – it cannot carry the image-based subtitle tracks found on DVDs and Blu-rays, so you generally can’t keep multiple subtitle tracks in the video file itself. If you don’t care about subtitles (e.g., you never watch foreign films), that may not matter to you. But it is a problem for those of us who enjoy a good foreign film. (NOTE: The workaround for most people is to save the subtitles as an .srt file in the same folder, assuming your playback device can then load a separate SRT file.)

Matroska (or MKV) is kind of odd. It’s actually not a video codec. It’s an open-standard container. Basically, it’s like a zip file or folder for the actual media. Inside, it can have video files in a lot of different formats, including MPEG-4, but it can also include subtitle tracks and other media. Thus, the major advantage is that you can store more inside a Matroska file than you can in MPEG-4 files. The disadvantage is that not every device or app supports Matroska files natively. However, support for the format has increased quite a bit. Matroska is now my preferred format, mostly because all of the devices I use for playback can play MKV files and I can store subtitles in the same file.

WebM is actually a variant of Matroska, meant to be a royalty-free alternative to MP4. It is primarily sponsored by Google, supports Google’s VP8 and VP9 codecs, and is licensed under a BSD license. Basically, MP4 is a patent-encumbered container while WebM is an open, royalty-free one. (NOTE: I don’t have any videos stored in WebM format. However, when I create videos to upload to YouTube, I typically rip them into VP9, which is the preferred format for YouTube.)

Audio

HandBrake searches for the default audio track from the video you are planning on converting but then does something very odd. By default, it sets whatever audio track that is to be converted to stereo audio (i.e., 2.0 or 2 channels – left and right). You can see that in the screenshot below:

HandBrake’s default setting on audio is to convert the default audio track to Stereo (2.0) sound.

If you have a home theater or good headphones, you’ll want 5.1 surround sound at a minimum. If you have a nicer setup, you’ll want 7.1 surround sound. So, make sure that you delete the default Audio option and instead include a 5.1 surround sound option into your new file. Like this:

5.1 audio track instead of stereo.

(NOTE: I haven’t figured out the best options for 5.1 surround sound. I’m not sure whether AC3 or AC3 passthrough is better. And I’m not sure if Dolby Surround, Dolby ProLogic II, or 5.1 Surround is better. Perhaps someone out there will have insights on this.)

(NOTE: Most modern video players are capable of converting 5.1 surround sound into stereo sound [a.k.a. downsampling]. Not including a 2.0 option is perfectly fine for most people as playback devices will compensate.)

Tags

If you haven’t ever used the Tags tab in Handbrake, you really should. By default, the “Title” tag is filled in with whatever information is stored in the video file or DVD you are converting, like this:

Default tag information loaded by HandBrake

However, what you include in the “Title” tag is also what video playback software or devices will think is the name of the video. It is worth taking two minutes to fill in the tags. I typically will change the “Title,” at a minimum, as well as the “Release Date” and “Genre” tags, like this:

I have filled in the tags in this screenshot.

Video

Now on to the primary purpose of this post – video quality settings. To determine the ideal balance between quality and file size, I actually converted the same file using different settings. The original file is a rip of the film Amadeus (Director’s Cut) from the Blu-Ray disc that was 28.4 GB in size. The table below illustrates the results of converting the file using different Constant Quality options – all into 1080p video. For each of these conversions, I used identical settings except I changed the Constant Quality option in HandBrake. Each of these used the H.265 (NVenc) encoder with all the other video options on their default settings. Each conversion took about 1 hour (give or take 10 to 15 minutes).

Setting    Resulting File Size    Compression (% of original file)
CQ 22      14.3 GB                50.35%
CQ 24      8.6 GB                 30.28%
CQ 26      6.5 GB                 22.89%
CQ 28      4.9 GB                 17.23%
CQ 30      3.7 GB                 13.03%
CQ 32      2.9 GB                 10.21%

The big question, of course, is how much degradation there is in quality with the smaller file sizes. To illustrate this, I took a screenshot using VLC from four of the video files: the original, unconverted Blu-Ray rip (28.4 GB), the CQ 22 file, the CQ 28 file, and the CQ 32 file. I uploaded the full resolution files below so you can see and compare the differences.

Screenshot from the original MKV file – 28.4 GB.
Screenshot from the converted file at CQ 22.
Screenshot from the converted file at CQ 28.
Screenshot from the converted file at CQ 32.

To help my old eyes see the differences, I zoomed in on just one part of these photos to see if I could tell if there were any meaningful differences:

I thought I might see differences in the quality of the screenshots with the musical score (that’s why I chose this frame). I thought the lines might get blurred or the notes would become fuzzier. But that isn’t actually the case. Most of the space savings actually came from the detail in the brown section of this frame. In the original, if you look closely at the bottom right of the image, you can see lots of “noise.” This is basically film grain. The compression algorithm must look for that kind of noise and then remove it. If you look closely at the CQ 22 frame, there is less film grain. In CQ 28, large swathes of the film grain have been removed. And in CQ 32, all of the film grain has been removed and converted to blocks of single colors. Where there is fine detail in the video or distinct color differences, that has been retained. I should probably try this same exercise with a more modern movie shot digitally and not on film to see how the compression varies. Even so, this is a good illustration of how compression algorithms save space – they look for areas that can be considered generally the same color and drop detail that is deemed unimportant.

TL;DR: My recommendation for file sizes and CQ settings…

Scenario 1: Storage space is genuinely not an issue and you have a very fast internet connection, file server, and home network. You should rip your Blu-Ray disks using MakeMKV and leave them as MKV files at their full size. That will give you the best resolution with lots of options for future conversion if you want.

Scenario 2: You have a fair amount of storage space, a decent internet connection, a reasonably fast/powerful file server, and a good home network, then I’d suggest a dual approach. If it’s a movie you absolutely love and want the ideal combination of quality while minimizing file size, use a CQ of 22 to 26. That will reduce the quality, but retain much of the detail. If it’s just a movie that you enjoyed and think you might want to watch it again in the future but don’t care so much about the quality, go with a lower quality setting (e.g., CQ of 30 or 32).

Scenario 3: You have very little storage space, your internet connection isn’t great, your file server cannot convert files on the fly, and your home network is meh, then you should probably go for higher levels of compression, like CQ 30 or 32. You’ll still get very good quality videos (compression algorithms are very impressive these days) but in a much smaller sized video. Oh, and you should probably convert your files to MP4.

How to Export Garmin Tracks/Activities to Google Maps

While it’s possible to share your Garmin tracks or activities with people through the Garmin website, I like exporting my tracks to Google Maps so I can embed them on my website. Here’s how I do that.

First, log in to your Garmin Connect account. Once you have logged in, click on the Activities tab.

You’ll see a list of your activities. Look for and find the activity you want to share.

Click on the activity you want and you’ll see all the details for the activity. To export the GPS track of your activity, look for the gear icon on the upper right of the screen:

Click on the gear icon and you’ll see several options. The one you want is “Export to GPX.”

You should then get a prompt to download a GPX file. Download it to your computer (and, of course, remember where you downloaded it to). If you’re curious, you can open the GPX file with a text editor and see that it is basically just a list of points using a markup language.

Now, log in to your Google account. The next part is always the part that takes me the longest when doing this – figuring out the URL where I can upload the GPX file to create a map. Google doesn’t make this easy (and, admittedly, it has changed over time). The link to upload and create maps using Google Maps is this one: https://www.google.com/maps/d/ Assuming you are signed in and everything works, you should see a page called “My Maps” that looks like this:

I’ve made a fair number of maps!

Click on “+CREATE A NEW MAP”, the big red button. On the next screen, you’ll see a checkbox next to “Untitled Layer” and below that a link that says “Import.”

Click on the “Import” link and you’ll get this window:

You can either click on the blue button and select your GPX file or just drag it to the window. Either way, as soon as it has been uploaded, the site will process it and overlay your track onto a map, like this:

(BONUS: Since I have created quite a few of these for my hikes, I actually try to keep them organized in my Google Drive. If you’re going to create a lot of these, you should do the same.)

Now you have lots of options to customize your map. You should obviously click on where it says “Untitled map” and give it a name. You can also add layers, add points of interest, insert pins, etc. You can also change the Base map to terrain or satellite instead of the default map. I added three pins, a title, and a description to my map:

When you’re done modifying your map, you should do two things to share it. First, change the permissions by clicking on the “Share” icon.

By default, maps are restricted and you can only provide access to specific people. You can change that by clicking at the bottom of that window where it says “Change to anyone with the link.”

Now you can embed the map into a different website or just share the link with people. To embed the map, click the three little vertical dots near the title and select “Embed on my site”:

You’ll then get a window with the embed code:

The code should look something like this:

<iframe src="https://www.google.com/maps/d/embed?mid=1hCMJebINYk1cTYWCUEo48YN48oUO1GGK" width="640" height="480"></iframe>

And here’s how it looks actually embedded on your site:


Teaching with a Mask – Headset Solution on Linux

My university, the University of Tampa, is doing what it can to continue teaching in-person classes (many are hybrid) during the COVID-19 pandemic. To facilitate that, our IT folks installed webcams and microphones in all of our classrooms. Unfortunately, the classrooms all have Windows-based desktops that don’t include all the software I need for educating my students (e.g., LibreOffice, Zotero, RStudio, etc.). I have always just plugged my laptop into an HDMI cable and then projected directly from it.

However, now that I’m teaching in a mask that substantially muffles my voice, I need a microphone that projects through the speakers in the classroom so the students in the back can hear me, particularly when the A/C is on. I tried the microphone provided on the first day of class; I ended up having to hold it for the entire class, and it still cut out regularly. Our IT folks suggested we could start a Zoom meeting on the desktop, connect our laptop to it, share our laptop’s screen in the Zoom meeting, and project that onto the screen so we could use our laptop and a microphone. That seemed like a kludgy way to solve the problem.

I figured there had to be a better way. So, I did a little thinking and a little research and found one. The answer – a bluetooth headset designed for truckers! Yep, truckers to the rescue!

If I could get a bluetooth headset to connect to my computer and then project the sound through the classroom’s speakers via bluetooth, I could continue to use my laptop to teach my class while still having a microphone to project my mask-muffled voice. Admittedly, this required a couple of hours of testing and some trial and error, but I got it working. Now, I have my own microphone set up for the classroom (I bring it with me) and can continue to use my laptop instead of the Windows-based PC.

So, how did I do it?

First, get yourself a bluetooth headset. I bought the Mpow M5 from Amazon. This is the perfect style headset as it has just one earphone, meaning I can still hear perfectly fine when students are talking to me in the classroom.

Second, connect the headset to your laptop. I’m going to assume your laptop has built-in bluetooth. Mine, a Dell Latitude 7390, does. Pairing it with my laptop was super easy. (If you don’t have bluetooth built-in, there are cheap USB bluetooth dongles you can buy as well.)
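If your desktop environment doesn’t pop up a pairing dialog, pairing can also be done from the terminal with bluetoothctl. This is just a sketch of the interactive session – the MAC address below is a placeholder; substitute the address your headset reports when it shows up during scanning:

```shell
bluetoothctl
# then, at the bluetoothctl prompt:
#   power on
#   scan on                    # wait for the headset to appear, note its address
#   pair 00:11:22:33:44:55
#   connect 00:11:22:33:44:55
#   trust 00:11:22:33:44:55    # reconnect automatically next time
```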

Third, the Linux part. I installed the package “blueman,” which provides a GUI interface for working with bluetooth devices. I didn’t know if this would be necessary, but, it turns out, it definitely was. Once you have your headset connected, open the blueman GUI and you’ll see this:

The next part stymied me for a while. Initially, my computer detected the headset as just headphones and not a headset with a microphone. I didn’t know why. Eventually, I got lucky and right-clicked on the Mpow M5 device in blueman and got a context window with the option I needed:

When you right-click on the device, you can select “Audio Profile” and then “Headset Head Unit”. The default, for some reason, was “High Fidelity Playback.” Once I did that, Linux detected the microphone.
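If you prefer the terminal, the same profile switch can be made with pactl. The card name below is a placeholder – get the real one from the first command:

```shell
# Find the bluetooth card name (something like bluez_card.XX_XX_XX_XX_XX_XX)
pactl list cards short

# Switch from the playback-only profile ("High Fidelity Playback", a2dp_sink)
# to the headset profile so the microphone becomes available:
pactl set-card-profile bluez_card.00_11_22_33_44_55 headset_head_unit
```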

Before you continue, make sure you have plugged in the HDMI cable as you’ll need that connected for the next part.

Next up was making sure all my audio settings were correct. This, too, took some trial and error. The settings window you need is the PulseAudio volume control (the package is typically called “pavucontrol,” though it goes by different names in various versions of Linux). Regardless, here’s what the window looks like:

I’ll go over the settings for each tab, though the first two – Playback and Recording – won’t have anything in them until you start up OBS Studio, which I’ll cover shortly.

In the Configuration tab, you should now see your Mpow M5 connected as a Headset and you should see Built-in Audio. This may not say “Digital Stereo (HDMI) Output” to begin with. There is a drop down menu there. Click on it and you’ll see various options:

The default is “Analog Stereo Duplex”. Click on that drop down and select the HDMI Output. (NOTE: I typically use just “Digital Stereo (HDMI) Output” without the “+ Analog Stereo Input”. I have the wrong one highlighted above, but it should still work.)

Here’s what the Input Devices tab should look like:

And here’s how the Output Devices tab should look:

You probably will need to change one thing on the Output Devices tab. Make the HDMI output the default (click the little blue icon). You may also need to mute the Mpow M5 on this screen. Either way, you want to make sure that the HDMI output is where the sound is going.
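Making the HDMI output the default can also be done with pactl – the sink name below is an example; list your sinks first to find the HDMI one:

```shell
# List available output sinks and note the HDMI one
pactl list sinks short

# Make the HDMI output the default sink, same as clicking the little
# blue icon in the Output Devices tab (sink name is a placeholder):
pactl set-default-sink alsa_output.pci-0000_00_1f.3.hdmi-stereo
```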

Now, we need another piece of software. (NOTE: For those using Windows or Mac who want to do this as well, the software below works there too, with a fairly similar setup.) The software is OBS Studio, which is free and open-source software that works on all platforms. Install OBS Studio, then open it up.

The software is very good at detecting everything on your computer. Here are the settings I had to adjust. In the bottom right corner of the software, click on “Settings” and you’ll get a window with various tabs (tabs are on the right). The one you need is “Audio”. Click on that and you’ll see this:

In the Devices section, you’ll need to change the following: “Desktop Audio” should be set to “Built-in Audio Digital Stereo (HDMI)”. “Desktop Audio 2” should be disabled. “Mic/Auxiliary Audio” should be set to “Mpow M5.” All the others should be disabled. Then, under “Advanced,” where it says “Monitoring Device,” select “Monitor of Mpow M5.” Then click Apply or OK.

Close that window then click on “Edit” -> “Advanced Audio Properties” and you’ll get this window:

In that window, where you see “Audio Monitoring,” click on the drop down option for “Mic/Aux” and set it to “Monitor and Output.” This tells the operating system that you want to monitor the audio from your Mpow M5 headset and output it through the speakers. Select “Close” and, assuming you’ve done everything correctly, you should now hear your voice coming out of the speakers. Woot!

A little more detail may be helpful, though. Back to Linux. If you return to the Pulse Audio window, you’ll now see that there is information in the remaining two tabs. Here’s what you should see in the Recording tab:

And here’s what you should see in the Playback tab:

And here is how I look with my headset and a mask:

Some notes:

I haven’t tested this with Zoom yet. I probably will, to make sure the audio also goes through Zoom.

OBS Studio can actually be used to record your presentation as well. It’s designed for streaming gamers, but works just as well for screen capture. So, if you need to record your class, just use OBS Studio to record your audio and your screen during the class.


Kubuntu – Audio CD Ripping

I mostly buy digital audio these days. My preferred source is bandcamp as they provide files in FLAC (Free Lossless Audio Codec). However, I ended up buying a CD recently (Last Night’s Fun by Scartaglen) as there wasn’t a digital download available and, in the process, I realized that there are lots of options for ripping the audio from a CD on Linux and quite the process to get the files ripped, tagged, properly named, and stored in my library. This is my attempt to summarize my process.

Format/Codec

First, you need to decide in what format you want the audio from the CD. As noted, I prefer FLAC these days. Given how relatively inexpensive storage is, I no longer need to scrimp on space for the most part. If space was an issue, ripping the files to mp3 format at, say, 192 kbps at a variable bit rate would probably be the optimum balance between decent quality and small size. But I prefer the best quality sound with no real regard for the size of the resulting files. It helps that I store all my music on a dedicated file server that runs Plex. That solves two problems: I have lots of space and Plex will transcode the files if I ever need that done (if, for example, I want to store the music on my phone and want it in a different format). So, my preferred file format is FLAC. (Another option is OGG, but I find not as many audio players work as well with OGG.)

There is another issue that I recently ran into: single audio files with cue sheets. Typically, people want their audio in individual files for each song. However, if you want to accurately represent an audio CD, the best approach to do this is to rip the audio as a single file with a corresponding cue sheet. The cue sheet keeps an exact record of the tracks from the CD. With the resulting two files, the audio CD can be recreated and burned back to a CD. I have no real intention of burning the audio back to a CD (I want everything digital so I can store it on my file server), but it’s good to know about this option. Typically, those who opt for this approach use one of two formats, .flac or .ape, for storing the audio and .cue for storing the timing of the tracks. The .ape format is a proprietary format, however, so it is definitely not my preferred approach.

As a quick illustration of how file format is related to size, I ripped my demonstration CD, Last Night’s Fun by Scartaglen, into a single FLAC file and a single mp3 file (at 192 kbps using a variable bit rate) and put the resulting files into the same folder so you can see the size difference:

As you can see, the FLAC rip resulted in a file that was 222.9 MB compared to the mp3 file that is only 49.4 MB. The FLAC file is about 4.5 times the size of the mp3 file. A higher-quality mp3 rip at 320 kbps at a constant bit rate resulted in a 54.8 MB file. A pretty good estimate would be that the FLAC format is going to be somewhere between 3 and 5 times the size of an mp3 file. Thus, if space is an issue but you want good quality, ripping your music to the highest quality mp3 (320 kbps) is probably your best option. If space isn’t an issue and you care more about quality, FLAC is the way to go.
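The 4.5 figure is easy to check from the file sizes above with a quick shell snippet:

```shell
# File sizes (MB) from the rips of "Last Night's Fun" above
flac_mb=222.9
mp3_192_mb=49.4

# FLAC-to-mp3 size ratio, rounded to one decimal place
ratio=$(awk -v f="$flac_mb" -v m="$mp3_192_mb" 'BEGIN { printf "%.1f", f / m }')
echo "FLAC is ${ratio}x the size of the 192 kbps mp3"   # → 4.5x
```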

NOTE: I also ripped the disc to OGG and the file size was 38 MB.

Ripping Software

First, if you’re planning on ripping to FLAC on Linux, you’ll need to install FLAC. It is not installed in most distributions by default. This can be done easily from the terminal:

sudo apt-get install flac

Without FLAC installed, the software below won’t be able to rip to FLAC.

K3b

K3b is installed in Kubuntu 20.04 by default and is, IMO, a good interface for working with CDs and DVDs. When I inserted my demonstration CD into my drive, Kubuntu gave me the option of opening the disk in K3b. When I did, K3b automatically recognized the CD, grabbed the information from a CDDB, and immediately gave me options for ripping the CD:

When you click on “Start Ripping,” you get a new window:

In this new window, you have a bunch of options. You can change the format (Filetype). With the FLAC codec installed, the options listed are: WAVE, Ogg-Vorbis, MPEG1 Layer III (mp3), Mp3 (LAME), or Flac. You can obviously change the Target Folder as well. K3b also gives you the option of creating an m3u playlist and the option “Create single file” with “Write cue file” which is where you could create the single file and cue file from the CD as noted above. There are also options for changing the naming structure and, under the Advanced tab, options for how many times you want to retry reading the CD tracks. K3b is pretty fully featured and works well for ripping audio CDs.

Clementine

My preferred music player in Linux is Clementine. I have used a number of music players over the years (e.g., Banshee, Rhythmbox, Amarok), but Clementine has the best combination of features while still being easy to use. Clementine is in the repositories and can easily be installed via synaptic or the terminal:

sudo apt-get install clementine

Clementine also has the ability to rip audio CDs. Once your CD is inserted, click on Tools -> Rip audio CD:

You’ll get this window, which is similar to the ripping window in K3b:

If the information is available in a CDDB, Clementine will pull that information in (as it did for my demonstration CD). You then have a bunch of options for the Audio format: FLAC, M4A, MP3, Ogg Flac, Ogg Opus, Ogg Speex, Ogg Vorbis, Wav, and Windows Media Audio. The settings for each of these can be adjusted in the “Options” box. One clear advantage of Clementine over K3b is that you can readily edit the titles of the tracks. Another advantage of Clementine over K3b is that you could import the files directly into your music library.

Ripping from a Cue Sheet

Another scenario I have run into on Linux is having a single file for the audio from a CD with a corresponding .cue sheet (the file is typically in the FLAC format, but I have also run into this in .ape format). I used to immediately turn to Flacon, a GUI that helped rip the single file into individual tracks. However, I have had mixed success with Flacon working lately (as of Kubuntu 20.04, I couldn’t get it to work). Never fear, of course, because Flacon is really just a GUI for tools that can be used in the terminal.

To split a single FLAC file with a corresponding .cue sheet into the individual tracks, you’ll need to install “shntool“:

sudo apt-get install shntool

(NOTE: It’s also a good idea, though not required, to install the suggested packages: “cuetools,” “sox,” and “wavpack.”) Assuming you have already installed “flac” as described above, splitting a single FLAC file into the individual tracks is fairly straightforward. The easiest way is to navigate to the folder where you have the FLAC file (e.g., “audiofile.flac”) and the cue sheet (e.g., “audiofile.cue”). Then use the following command at the terminal:

shnsplit -f audiofile.cue -o flac audiofile.flac 

Breaking the command down: “shnsplit” calls the program “shnsplit,” which is part of the “shntool” package. The “-f” flag specifies the file to read split points from – the cue sheet. The “-o” flag indicates that you are going to specify the output file format, in this case “flac.” The last argument is the single FLAC file that you want to split.
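One caveat: shnsplit only splits the audio – the resulting track files come out untagged. If you installed the suggested “cuetools” package, you can copy the track information from the cue sheet into the split files (depending on your distribution, the command may be “cuetag” or “cuetag.sh”; the file names assume shnsplit’s default output naming):

```shell
# Copy track titles/artist info from the cue sheet into the split FLAC files
cuetag audiofile.cue split-track*.flac
```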

Here’s a screenshot of me ripping the single FLAC file from my demonstration CD into individual FLAC files:

If you happen to run into a single audio file in the .ape format, shntool probably won’t be able to read it so the above command won’t work. However, a simple workaround is to convert the file to flac format using ffmpeg, which can read it. Here’s the command you could use from the terminal:

ffmpeg -i audiofile.ape audiofile.flac

That command will call ffmpeg (which you probably have installed) and convert the .ape file into a .flac file which can then be split using the command above (assuming you have a corresponding cue sheet).
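If you have several .ape files to deal with, a small shell loop run in the folder containing them converts the whole batch:

```shell
# Convert every .ape file in the current folder to .flac with ffmpeg;
# "${f%.ape}.flac" swaps the extension while keeping the base name
for f in *.ape; do
    ffmpeg -i "$f" "${f%.ape}.flac"
done
```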

Tagging Software

Let’s say I have successfully ripped my files into my desired format and now I want to tag them. There are a number of software packages that can do this, but my preferred software is Picard by MusicBrainz. Picard is open source, which is awesome, but it also interfaces with the MusicBrainz website and pulls in information that way, which means your files will get robust tagging information. If you pull in all the information from MusicBrainz, not only will the artist and album be tagged, but so too will lots of additional information, depending on how much was entered into the database by whoever added the album in the first place. (NOTE: Clementine also interfaced with MusicBrainz, but this broke in 3.1. Once it broke, I started using Picard directly and realized that it has a lot more features than Clementine’s implementation, so now I just use Picard. However, you could try doing this in Clementine as well.)

Again, I’ll use my demonstration CD to illustrate how this is done. I ripped the tracks into individual FLAC files above. Those tracks are completely devoid of tags – whatever software I use to play them won’t know what the audio files are. The screenshot below illustrates this. I used MediaInfo (a GUI for pulling information from audio and video files in Linux) to pull available information from the file. It shows the format and length but provides no information about the artist or album, which it would if the file had tags.

We’ll use Picard to find the album and add all the tags. First, of course, install Picard:

sudo apt-get install picard

Open the program. Now, since my files have no tag information, I’m going to click on Add Files (you can also add a folder with subfolders, which then has Picard run through multiple audio files, a great feature if you are tagging multiple albums at the same time).

You’ll get a new window where you can select the audio files you want to add. Select them and then click “Open.”

If the files have existing tags, Picard will do its best to group the tracks together and may even associate the files with the corresponding albums. In my case, it simply puts the files into the “Unclustered” category:

Since I pulled them all in from the same folder, I can select the tracks and then click on the “Cluster” button in the toolbar and Picard will cluster the files.

Clustering is a first step toward finding the corresponding album information. Once they are grouped together, they will show up in the Cluster category:

Without any tag information, Picard is unlikely to find the corresponding album if you select the cluster and then click on “Lookup.” If the files had some existing tag information, that might be enough for Picard to find the corresponding album, but in this case it won’t work. So, I’m going to use a different approach. If you right-click on the cluster, you can select “Search for similar albums.”

This gives you a window where you can enter search terms to try to find the corresponding album in the Music Brainz database. Based on the limited information it has, it will try to find a corresponding album automatically. But it likely won’t find it because there are no tags, so there is virtually no information available. Generally, I have found that I have better luck finding albums if I use the album title first followed by the artist, like this “Last Night’s Fun Scartaglen” then hit enter:

Once you find the correct album, select it and then click on “Load into Picard” at the bottom of the window.

Once you do that, the album will move to the right side of the screen. If all of the tracks are included and Picard is able to align them with the corresponding information it has from the database, the CD icon will turn gold. If there is a little red dot on the CD, that means Picard has tag information that can be saved to the individual tracks.

Click on the album and then click the “Save” button in the toolbar and the tag information will be saved to the files.

You can track the progress as tracks will turn green once the information has been successfully added to the tags.

You can also set up Picard to modify the file names when it saves information to the files. Click on “Options” then click the checkmark next to “Rename Files.”

I typically let Clementine rename my files when I import them into my Music library, so I don’t worry about this with Picard, but it is a nice option. Finally, here’s that same MediaInfo box with the tagged file showing that information about the artist and track is now included in the file:


LibreOffice – How To Change Icons to a Darker Theme

I prefer darker themes for my desktop environment (Kubuntu 20.04) and browser (Brave). For the most part, this isn’t a problem, but it does cause an issue with some applications, including LibreOffice (6.4.4.2).

One of the first things I do when I install Kubuntu is switch my desktop environment from the default theme (System Settings -> Global Theme), Breeze, which is a lighter theme, to Breeze Dark. You can see the differences in the screenshots below:

This is the Breeze theme that is the default in Kubuntu 20.04
This is the Breeze Dark theme that I typically use in Kubuntu.

The problem is with the icon set in LibreOffice. With the default Breeze theme, the icons are very visible and work great:

These are the default icons in LibreOffice 6.4.4.2 in Kubuntu 20.04 with the default Breeze theme.

The problem comes when I switch the theme to Breeze Dark. Here is how the default Breeze icons look in LibreOffice when I switch the theme:

The default icon set, Breeze, in LibreOffice when the Kubuntu Global Theme is switched to Breeze Dark.

Perhaps it’s just my aging eyes, but those icons are very difficult for me to see. The solution is quite simple, though finding it is always hard for me to remember (thus this tutorial). All you need to do is switch the icon set in LibreOffice. There are several icon sets for dark themes that come pre-packaged with the standard version of LibreOffice that ships with Kubuntu and is in the repositories. It’s just a matter of knowing where to look.

In LibreOffice, go to Tools -> Options:

You’ll get this window. You want the third option down under “LibreOffice”, “View”:

Right at the top of this window you can see “Icon style.” That’s the setting you want to change. If you click on the drop down arrow, you’ll see six or so options. Two are specifically for dark themes, Breeze (SVG + dark) and Breeze (dark). Either of those will work:

I typically choose Breeze (SVG + dark). Select the dark theme you want, then click on OK and you’ll get a new icon set in LibreOffice that works much better for dark themes:

These icons are much more visible to my aging eyes.

Et voila! I can now see the icons in the LibreOffice toolbars.


HandBrake – Convert Files with GPU/Nvenc Rather than CPU

I don’t know exactly when HandBrake added the capability of using the GPU for encoding, but it was somewhere between 1.3.1 (current version in the Ubuntu repositories) and 1.3.3 (current version on PPA). Regardless, this option offers dramatic speed improvements, particularly when working with 4K videos. In this post, I’ll show how to use this feature in Handbrake and show some comparisons to illustrate the benefits and tradeoffs that result.

HandBrake 1.3.1 is the current option in the Ubuntu 20.04 repositories.
Version of HandBrake available from the PPA that has NVENC/GPU encoding capabilities.

How to Use NVENC GPU Encoding

First, make sure you have the latest version of Handbrake installed (as of 6/26/2020 that is version 1.3.3). Also, to use Nvenc encoding, you’ll need the Nvidia Graphics Driver 418.81 or later and an Nvidia GeForce GTX 1050+ series GPU or better per HandBrake’s documentation. I’m using a GTX 1060 and have Driver version 440.100 installed. My CPU is an AMD Ryzen 5 3600 6-Core, 12-Thread processor.

My GPU (GeForce GTX 1060) and my driver version: 440.100.
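You can check both your GPU model and installed driver version from the terminal with nvidia-smi:

```shell
# Prints the GPU name and driver version (requires the NVIDIA driver)
nvidia-smi --query-gpu=name,driver_version --format=csv
```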

The video I’m using to illustrate how to use GPU encoding is a Blu-Ray rip of Pan’s Labyrinth. I’m making a backup copy of the video to store on my file server. I used MakeMKV to pull the video off the Blu-Ray disc, resulting in a 31.7GB file. I’m going to convert that to a 1080p H265 video file that is substantially smaller in size.

Open up HandBrake and click on “Open Source.” Find the file you want to convert and select “Open.”

HandBrake will run through the file, gathering information about the codec, subtitles, audio tracks, etc. Once it’s done, you’ll need to select what format you want to convert it to. For this tutorial, I’m just going to use a General Preset, but I want to illustrate the difference in encoding speed, so I’m going to select Super HQ 1080p30 Surround rather than Fast 1080p30.

My mouse is on Super HQ 480p 30 in the image, but I selected Super HQ 1080p.

To change from CPU encoding to GPU encoding, click on the Video tab:

In the middle of the screen, you’ll see a drop-down menu labeled “Video Encoder.” Click on that drop-down menu and you should see two NVENC options: H.264 (NVenc) and H.265 (NVenc). Those are the two options for using your GPU for the encoding versus using your CPU. The H.265 codec is newer and has somewhat better compression algorithms, but I’m not going to opine as to which of these you should choose. That choice should really be driven by what device you’re going to be playing your videos on. Regardless of which you choose, those two will push the encoding to your GPU instead of your CPU.

Again, my mouse is on the wrong option here. The two options you want are: H.264 (NVenc) or H.265 (NVenc).

Make sure you’ve adjusted your Audio, Subtitles, and Tags to your preferences, then click “Start.” That’s all there is to it – you now have GPU encoding.
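For what it’s worth, the same encode can be scripted with HandBrakeCLI, the command-line version of HandBrake. I used the GUI rather than this exact command, and the file names are placeholders, but the flags below mirror the GUI walkthrough:

```shell
# GPU (NVENC) H.265 encode using the same preset as above;
# options given after --preset override the preset's defaults
HandBrakeCLI --preset "Super HQ 1080p30 Surround" \
    -e nvenc_h265 \
    -i input.mkv -o output.m4v
```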

Benchmarks: Encoding Speed

Using the H.265 (NVenc) encoder, it took about 17 minutes to convert the 31.7GB MKV file into an 8.4GB m4v file.

My mouse is next to the estimated time for encoding the video: 16m50s.

You can also see that even using the H.265 (NVenc) encoder, much of the processing is still handled by the CPU. This screenshot shows that my GPU is working, but it’s certainly not being stressed:

GPU load is on the left. CPU load is top right.

Using all the same options but CPU encoding instead, HandBrake took 1 hour and 15 minutes to encode the file – about 4.5 times as long.

When I grabbed this screenshot, HandBrake was estimating 1h07m, but it ended up taking about 1h15m for the entire encode.

Here’s the same resource utilization illustration showing that HandBrake is drawing exclusively on the CPU and not the GPU:

GPU utilization is on the left; CPU utilization is upper right.

Other tests I ran encoding 4K video illustrated that this difference increases with 4K video. I converted one 4K movie that was about 1 hour, 40 minutes using H.265 (NVenc) and it took about 1 hour. Using the CPU alone, HandBrake estimated it would take 18 hours (I didn’t wait to see if that was accurate). Thus, there is a dramatic difference in encoding speed with higher resolution and larger video files.

Benchmarks: Video Size and Quality

What about file size and video quality? I’m probably not going to do justice to the differences because I don’t have the most discerning eye for pixellation and resolution differences, but I will try to use some objective measures. The video encoded with the GPU was 8.39GB in size. The video encoded with the CPU was 3.55GB. I’m not exactly sure why the file sizes are so different given that I chose the same settings for both encodes, but this next screenshot illustrates that the NVENC encode resulted in a higher bitrate (9,419 kb/s) than the CPU encode (3,453 kb/s). Strange.

On the left are the specs for the NVENC encoded video; the CPU encoded video specs are on the right.
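The file sizes track the reported bitrates, since size ≈ bitrate × duration ÷ 8. Assuming a runtime of roughly two hours (my assumption for illustration – use your video’s actual duration), the estimates land close to the measured sizes:

```shell
# Rough file-size estimate (GB) from bitrate (kb/s) and duration (s):
# size = bitrate * duration / 8, converted from kilobits to gigabytes
estimate_gb() {
    awk -v kbps="$1" -v secs="$2" 'BEGIN { printf "%.1f", kbps * secs / 8 / 1e6 }'
}

estimate_gb 9419 7200   # NVENC bitrate → prints 8.5 (measured: 8.39GB)
echo
estimate_gb 3453 7200   # CPU bitrate   → prints 3.1 (measured: 3.55GB)
```

The small gaps between the estimates and the measured sizes are expected – the audio tracks and container overhead aren’t included in the video bitrate.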

I also wanted to see if I could tell if there was a noticeable difference in quality between the two videos. I navigated to the same scene in both videos and used VLC to take a screenshot. First, the NVENC encoded screenshot:

Click for full resolution. This is the NVENC encoded video.
Click for full resolution. This is the CPU encoded video.

My 43-year-old eyes don’t see much of a difference at all.

Conclusion

If you’ve got the hardware and want to save time, using GPU encoding with Handbrake is a nice option. The end result is a much faster encode, particularly with higher resolution videos. Some of the forums where I was reading about this option suggested there are problems with using GPU encoding. I certainly won’t challenge those assertions, but I can’t tell the difference.
