ASUS Router – VPN with Android phone

I have a specific use scenario where I need to be on my home network to control a device but may want to control that device when I’m not home. After considering several options, I realized that my newly purchased ASUS router – ASUS AX6000/RT-AX88U – has the ability to create a VPN. Rather than pay extra for an additional service, I thought I’d try the built-in VPN option to see if I could get it to work. It took a little doing, especially since ASUS’s directions were vague, but I got it working. Here’s how…

Setting Up Your Router

Note before starting: I'm doing this on Firmware Version 3.0.0.4.386_42820. Menus and options may change over time.

First, log in to the router and navigate to Advanced Settings where it says VPN:

In that tab, you’ll see three buttons. I ended up getting it to work with OpenVPN (I tried the other two to no avail). Click on OpenVPN and select the button to turn it on and you’ll see this:

I set my port to 1025. Remember what port you select as you’ll need that in the optional section below. If you’ll be connecting from outside your local network (which, obviously, you are), select “Internet and local network.” Finally, at the bottom, create a username and password that you’ll use to connect.

Once you’ve done all of that, hit “Apply.”

Now, where it says “Export OpenVPN configuration file,” click “Export” and you’ll get an “.ovpn” file. You can open that file with a text editor and you’ll see content like this:

remote 192.168.1.23 1025
float
nobind
proto udp
....
-----BEGIN CERTIFICATE-----
[lots of letters and numbers]
-----END CERTIFICATE-----

If your router is connected directly to the internet, you can move on to the section below where you set up your phone. However, I recommend you continue reading as you may need the directions that follow.

In my case, my router sits behind my ISP's gateway, so it isn't connected directly to the internet. That means I have to forward a port – 1025 – from the gateway to my ASUS router. Log in to your ISP gateway; mine is an Arris NVG468MQ. On mine, I select Firewall -> Port Forwarding to get to the screen where I can open a port to the ASUS router. Here's what I added to forward the port to my router: I selected my device from the drop-down list (it's the only device connected to my gateway, which makes it an easy choice), named the rule "VPN", selected TCP/UDP (technically, you only need UDP for this), and entered the port: 1025. Once that's done, select "Add."

Once it’s added, you’ll get this line in your forwarded ports:

If your ASUS router is set up like mine, behind a gateway, the IP address in your OVPN file is wrong: it's a private address behind your gateway, not your external-facing IP address (a.k.a. your WAN IP). You can find your WAN IP in your gateway's settings, or use a website (Google: "What's my IP"). An even better approach, though, is to set up a dynamic DNS service – on your router or on a computer on your home network – that keeps a hostname pointed at your WAN IP. I use entryDNS with my fileserver, so I have a DNS address that is always current, and that's what I put in my OVPN file. Swap the IP address in the OVPN file for either your WAN IP or your dynamic DNS address. Here's how my OVPN file looks now with a fake DNS address (that's not actually my DNS address)…

remote ryananddebi.randomdns.org 1025
float
nobind
proto udp
....
-----BEGIN CERTIFICATE-----
[lots of letters and numbers]
-----END CERTIFICATE-----
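If you'd rather script that swap than hand-edit the file, it's a one-line sed. This is a sketch only – the file name client.ovpn and the demo contents are placeholders, and the hostname is the fake one from this post:

```shell
# Demo stand-in for the exported profile (the real one has more lines plus certificates).
printf 'remote 192.168.1.23 1025\nfloat\nnobind\nproto udp\n' > client.ovpn
# Replace the private IP in the "remote" line with your DDNS name (or WAN IP),
# leaving the port untouched.
sed -i 's/^remote [^ ]* /remote ryananddebi.randomdns.org /' client.ovpn
head -n 1 client.ovpn    # -> remote ryananddebi.randomdns.org 1025
```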

With your port forwarded to your ASUS router and the correct external IP address in the OVPN file, you can now move on to setting up your phone.

Setting Up Your Phone

I installed the OpenVPN app from the Google Play Store on my Pixel 4a:

I then emailed myself the OVPN file, opened that email on my phone, and downloaded the file to my downloads. In the OpenVPN app, click on the + to add a new profile. Find the OVPN file you downloaded:

Then add your username and password you set up on your router.

Once done, you’ll have a profile set up:

Assuming everything was done correctly, select the “activate” button and you should see this:

You can also see that I’m connected in the Router’s VPN screen:

You can now use your phone as though you were connected to your home network.


Plex – Export Playlists to M3U

I spent the last year or so cleaning up my music library, and it is now organized how I want it to be. I use two software packages to play my music: at home, on my primary computer, Clementine (which is also partly how I organized my library), and when I'm away from home, Plex. With my library cleaned up, I have started building playlists. Many of my playlists are automated; this tutorial doesn't apply to those. But I have also built a handful of very specific playlists by hand. I was building these in both Clementine and Plex and realized that, if I built a playlist in one of the programs, I'd likely want it in the other as well. The problem I ran into is that there is no native way to import or export playlists in Plex.

This sent me down a weird, tangled path. Let me see if I can summarize the situation briefly. Before September 2018, there were some plugins that allowed some playlist import/export functionality. (NOTE: Until writing this post, I didn’t even know there were Plex plugins.) In September of 2018, the Plex developers announced they were discontinuing plugins. That announcement obviously wasn’t that popular with the Plex users who relied on the plugins. Since then, many of the plugins that previously worked have been discontinued and are no longer updated.

Some programmers have switched from developing plugins to developing stand-alone applications that now interface with the Plex Media Server. These applications are no longer installed in the now-defunct “plugins” directory of Plex. One of these applications, WebTools-NG, has a feature that helps partially solve the playlist import/export problem.

(ASIDE: If you google for exporting playlists from Plex, you’ll likely find a few posts or pages from before 2018 indicating that you could click on the context menu inside a playlist in Plex and find an export option. As of the latest version of Plex – I’m currently running 1.22.3.4392 – that option does not exist. So, don’t bother looking for it.)

As I'm on Linux, the WebTools-NG application is an AppImage – nothing to install, just run it. Download it and put it somewhere you'll remember. You'll also want to right-click on it and select "Properties."

In permissions, select “Allow this file to run as a program.”

Once you’ve done that, double-click the application and you’ll get a log in screen that requires your Plex credentials. Fill those out and log in. Assuming everything works, you’ll get a screen like this:

A couple of things to do before you start messing with the playlists. At the top, select your Plex server. You can see I have selected mine:

If you’re like me, you’re probably wondering why you’d need to select your server. It took me a second to realize that some people have multiple servers. That feature exists so people who have multiple servers can select the one they want to work with. Now, go to Global Settings and change two options. First, set your Export Directory and change the “Timeout when requesting info from PMS in sec” to 99. Without changing that, the application was timing out with some of my requests.

Now, to work with the playlists… click on ExportTools on the left and you’ll get this window:

Since I’m working with an audio playlist, in the Select Export Type, I selected “Playlist.”

In Sub Type, I selected “Audio.”

Then you need to Select Media Library. The one I chose for this example is “Favorite Chill.”

Finally, for Export Level, I chose “All” (FYI, I’m not sure what this does. I’ll mess around with it later.)

Then click “Export Media” and you’ll see the Status box light up with your export:

When it's done, you'll have a CSV file with all the relevant information from your Plex playlist. You'll open that with a spreadsheet program (I'm using LibreOffice Calc) for the next step. LibreOffice asks for the delimiter (the character that separates the data fields). By default in WebTools-NG, the delimiter is "|", so make sure you set that when you open the CSV file. Here's how in LibreOffice:

Once you’ve done that, open the file and you’ll see lots of columns with lots of information. To create an M3U playlist, we actually just need one column: “Part File Combined.” That has the location of the file:

To make this as easy as possible, I'd suggest deleting all of the other columns. Once you've deleted them, there's one additional step. The file locations in the export are absolute paths on my fileserver (i.e., starting from the root of that machine's filesystem), not relative paths (i.e., relative to where the M3U file is stored), so you'll need to adjust them for wherever you're going to import your M3U file. Let me be a bit more specific. On my fileserver, my music is stored in a folder called "music" (keeping it simple), but that folder sits on my ZFS raid, which is mounted at "ZFSNAS." The absolute path to my music folder is therefore "/ZFSNAS/music/". That's different on my desktop computer, where I'll actually play the music: there, my music folder is mounted over NFS at a different absolute path, "/home/ryan/Music/". So the final, and perhaps most complicated, step in creating the M3U file is to replace the fileserver's absolute path with the desktop's. That's easy enough with a Find and Replace command:

Once you've done that, creating the M3U file is super easy. Save your CSV file, which now has just one column with a column header. Open that file with a text editor (I'm using Kate) and replace the column header "Part File Combined" with "#EXTM3U". The M3U format is pretty simple: the file has to start with #EXTM3U and can then literally just consist of the absolute path of each file, one per line.

Then either save the file with the extension ".m3u" or close it and rename it from ".csv" to ".m3u". You now have an M3U playlist!
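For what it's worth, the whole spreadsheet detour can be collapsed into a short pipeline. This is a sketch assuming GNU awk and sed; the demo CSV contents and the two path prefixes are just this post's examples, so adjust them for your setup:

```shell
# Demo stand-in for the WebTools-NG export ("|"-delimited, header row first).
printf 'Title|Part File Combined\nSong One|/ZFSNAS/music/a.mp3\nSong Two|/ZFSNAS/music/b.mp3\n' > export.csv

{
  echo '#EXTM3U'
  # Pick out the "Part File Combined" column by its header name...
  awk -F'|' 'NR==1 { for (i=1; i<=NF; i++) if ($i=="Part File Combined") c=i; next }
             { print $c }' export.csv \
    | sed 's|^/ZFSNAS/music/|/home/ryan/Music/|'   # ...and swap the path prefix.
} > playlist.m3u

cat playlist.m3u
```

Selecting the column by its header name (rather than by position) means the pipeline keeps working even if the column order of the export changes.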

The last step is to import the file into your other application. In my case, this is Clementine. First, open a new Playlist in Clementine. This is important because, when you import the M3U playlist, it will get added to whichever playlist you currently have active.

Then click on “Music” -> “Open File.” At the bottom of that window, choose “M3U playlists.” Find your playlist and select it. Then hit “Open.”

And there you have it – your playlist exported from Plex into Clementine:

If you want to save that playlist in Clementine, just click the “Star” at the top of the Playlist and it will get added to your Playlists. You can also save the playlist as an M3U file and Clementine will format it nice and proper.

Some notes…

This is a bit cumbersome. It sure would be nice of the folks at Plex to add a playlist import/export feature, and it really wouldn't be that hard: a simple walk-through that imports an M3U playlist and then either (a) searches the audio library for matching files or (b) strips the absolute path off each entry and replaces it with the audio library's own path would do it.

I’ve noticed that not every song gets properly imported. I’ll probe that issue but I’m guessing it’s songs that have weird characters in their names that are problematic.

This is obviously just a one-way export feature. Right now, I have only figured out a way to export a playlist from Plex using a third application. I’d really love to be able to import playlists into Plex. If anyone has any thoughts on that, I welcome them. I typically create all my playlists in Clementine. For now, I have to go the other way for custom-built playlists that I want in both locations.


Linux Server – Adding a New Domain and WordPress Site – Linode VPS – Ubuntu 18.04

I have a VPS server with Linode that I use to host about a dozen different websites. All but one of them run on WordPress. Occasionally, I get a request to add another domain and website to the server. It’s not terribly time consuming to do, but it does require a number of specific steps and I never remember all of them. To help me remember them (and perhaps to help someone else), I’m putting together this tutorial.

Step 1: Purchase the new domain. For this tutorial, I’m going to be adding a domain my brother-in-law requested: flyingyoga.us. He’s a pilot but is getting certified as a yoga instructor and wanted to set up a simple website. I use Google Domains to purchase and manage all my domains. So, step 1, decide on what company you want to use to purchase your domains and purchase your domain.

My domains in Google Domains.

Step 2: Change the DNS settings on the new domain to point to Linode’s nameservers. If using Google domains, click on the domain then click on DNS:

Select “DNS” to change the nameservers.

Under Name servers, select “Use custom name servers” and enter “ns1.linode.com” for the first Name server then add a second and enter “ns2.linode.com.” Hit Save.

Here are the custom name servers in Google Domains.

Step 3: You now need to add the domain, and its domain records, to your Linode account. Log in to your account and select Domains.

Click on “Create Domain”:

click “Create Domain” at the top right.

Enter the domain and the admin email address. Unless you need to do something special with the records, select "Insert default records from one of my Linodes," then select your Linode:

Basic domain creation information.

Assuming you don’t need anything special, the defaults should take care of this step and you’re done.

Step 4: Since I already have about a dozen websites running on the server, I’m not going to go into detail on how to install a LAMP stack – Apache, MySQL, and PHP. There are a number of tutorials for doing so. Instead, my next step is to SSH into my server (obviously replace “user” and the IP address with your own) and create the directories where the files for the new website will be hosted.

ssh user@192.168.0.1

Whenever I log into my server, I use that as an opportunity to run updates.

sudo apt-get update
sudo apt-get upgrade

Next, navigate to the directory where you store your public-facing web files. On my server, it’s /var/www/

cd /var/www/

In that directory, I’m going to create a new folder for the domain:

mkdir flyingyoga.us

I’m then going to navigate inside that folder and create two additional folders: (1) a “public” folder where the actual files go and (2) a “logs” folder for access and error logs.

cd flyingyoga.us
mkdir logs public

Now, navigate back to the main directory where you store all your website files and change the ownership of the directories:

cd ..
sudo chown -R www-data:www-data flyingyoga.us/

This allows Apache to access the files in those folders; since Apache is effectively the web server, that's important. Don't skip this step.

Step 5: Download the latest version of WordPress and untar it into the public folder. Where you download it and untar isn’t actually all that important as we’re going to move it to the public folder shortly.

wget https://wordpress.org/latest.tar.gz
tar -xvf latest.tar.gz
sudo mv wordpress/* /var/www/flyingyoga.us/public/
sudo chown -R www-data:www-data /var/www/flyingyoga.us/public/
rmdir wordpress/
rm latest.tar.gz

Just to clarify the above commands: the first line downloads the latest version of WordPress (no sudo needed for the download itself). The second unpacks it into a folder called "wordpress." The third moves the unpacked files into the newly created public folder for the domain – with sudo, because that folder is owned by www-data – and the fourth resets ownership so Apache owns the moved files. The last two lines delete the now-empty "wordpress" folder and the downloaded tar.gz (nice and clean server).

Step 6: It would be nice if we were done, but we've got a ways to go yet. Next up, let's create a MySQL database with a corresponding user. This can be done from the command line as well, but I prefer using phpMyAdmin.

You’ll need to look up where to find phpMyAdmin on your server.

Navigate to “User accounts” and scroll down to “Add user account.” Click on that and you’ll get this screen:

Click “add user account” to set up a new database and user.

Obviously, choose an appropriate user name. I typically let phpMyAdmin generate a nice strong password. Just below the "Login Information" box is a box labeled "Database for user account." Check "Create database with same name and grant all privileges." Don't check "Global privileges – Check all" below it; that would give this user access to every database on the server – not a good security choice. Write down or copy the username and password to a text file, as you'll need them later. When you've got all that done, scroll down to the bottom and select "Go." That creates the database and the user with the password you wrote down (you did write it down or copy it to a text file, right?). You now have the database WordPress is going to use for your website.
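If you'd rather skip phpMyAdmin, the same database and user can be created from the MySQL command line. A sketch – the database name, user name, and password below are placeholders, not values from this tutorial:

```shell
# Placeholder credentials -- substitute your own (and a genuinely strong password).
DB=flyingyoga_db
DBUSER=flyingyoga_user
DBPASS='choose-a-strong-password'

# Build the SQL so you can inspect it before running anything.
SQL="CREATE DATABASE $DB;
CREATE USER '$DBUSER'@'localhost' IDENTIFIED BY '$DBPASS';
GRANT ALL PRIVILEGES ON $DB.* TO '$DBUSER'@'localhost';
FLUSH PRIVILEGES;"

printf '%s\n' "$SQL"                  # review the statements first...
# printf '%s\n' "$SQL" | sudo mysql   # ...then uncomment to run them
```

As with the phpMyAdmin route, the grant is limited to the one database rather than global privileges.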

Here’s where you can create a new user and database in PHPMyAdmin

Step 7: Next up is creating the website information for Apache. Back to the SSH shell. Navigate to where the Apache websites are stored on your server:

cd /etc/apache2/sites-available

In there, you should see the configuration files for all the websites on your server. Since I already have sites configured, I typically just copy one of the existing configuration files and then edit it according to the new domain:

cp otherdomain.com.conf flyingyoga.us.conf
cp otherdomain.com-le-ssl.conf flyingyoga.us-le-ssl.conf

Since I’m using SSL on all my domains, I have two configuration files per domain. The above commands copy existing configuration files and create new ones for my new domain. Here’s the contents for the first one: flyingyoga.us.conf:

<Directory /var/www/flyingyoga.us/public>
    Require all granted
</Directory>
<VirtualHost *:80>
        ServerName flyingyoga.us
        ServerAlias www.flyingyoga.us
        ServerAdmin ryantcragun@gmail.com
        DocumentRoot /var/www/flyingyoga.us/public

        ErrorLog /var/www/flyingyoga.us/logs/error.log
        CustomLog /var/www/flyingyoga.us/logs/access.log combined

RewriteEngine on
RewriteCond %{SERVER_NAME} =www.flyingyoga.us [OR]
RewriteCond %{SERVER_NAME} =flyingyoga.us
RewriteRule ^ https://%{SERVER_NAME}%{REQUEST_URI} [END,NE,R=permanent]
</VirtualHost>

And the contents for the second one – flyingyoga.us-le-ssl.conf:

<IfModule mod_ssl.c>
<Directory /var/www/flyingyoga.us/public>
    Require all granted
</Directory>
<VirtualHost *:443>
        ServerName flyingyoga.us
        ServerAlias www.flyingyoga.us
        ServerAdmin ryantcragun@gmail.com
        DocumentRoot /var/www/flyingyoga.us/public

        ErrorLog /var/www/flyingyoga.us/logs/error.log
        CustomLog /var/www/flyingyoga.us/logs/access.log combined

Include /etc/letsencrypt/options-ssl-apache.conf
SSLCertificateFile /etc/letsencrypt/live/ryantcragun.com/fullchain.pem
SSLCertificateKeyFile /etc/letsencrypt/live/ryantcragun.com/privkey.pem
</VirtualHost>
</IfModule>

Once you have these updated, you can then tell Apache to load the site:

sudo a2ensite flyingyoga.us.conf
sudo systemctl reload apache2

The first line tells Apache to enable the site; the second reloads Apache so the change takes effect. NOTE: You don't have to enable the SSL configuration file (i.e., flyingyoga.us-le-ssl.conf) at this point.

Step 8: Since I have SSL encryption on all of my websites using LetsEncrypt, there is an extra step. This is always the one I forget. Since I’m adding a domain, I have to use the following commands to add a domain to my existing domains on the single SSL certificate that I use for all of my domains. First, let me find the name of my current certificate:

certbot certificates

That provides the name of my current certificate as well as a list of all of my other domains. Next, I copy the list of existing domains so I can update the certificate, adding the two new entries (the bare domain and the www variant). The command to get a new certificate with the added domains is:

certbot --expand -d existing.com,www.existing.com,flyingyoga.us,www.flyingyoga.us

Assuming everything works, this will expand the existing certificate with the new domain and issue a new SSL certificate with all the domains. (NOTE: no spaces between the domains.)

Step 9: Now you can test your server. I always do this by creating a simple html file with the classic “Hello World” in it and putting that into the public directory for the new website:

<!DOCTYPE html>
<html>
    <head>
        <title>Test Page</title>
    </head>
    <body>
        <p>Hello World!</p>
    </body>
</html>

Save that as “index.html” and put it in the public folder. Now, navigate to the new domain in your browser and, hopefully, you’ll see “Hello World!”

Yeah. Website is working!

If you saw “Hello World!” in your browser, everything is working. It’s always a good idea to check that the https redirect is working as well – so you know that your SSL certificate is good and working. The easiest way to do that is to click on the lock icon in your browser and then check the certificate information.

Step 10: Now, the final step – installing WordPress. Change the name of the index.html file to something else (e.g., "index.html-test"), then reload the page. You should now see the WordPress installation screen, which will ask for your database name, username, and password:

This is the last step to install WordPress on a new domain.

Enter the database information from Step 6 above. Assuming everything goes according to plan, WordPress will populate the database with the relevant fields and your site will be ready:

Here’s the backend of my new wordpress installation.


GPRENAME – removing text up to a space using regular expressions

For a recent project, I came into possession of hundreds of photos, each of which was named according to whatever camera settings the person who took the photo had in place, but ended with " – FirstName LastName.jpg."

The photos had varied characters before the names, which required some creative thinking about how to batch rename them.

I wanted to keep the users’ names, but everything before that could go. With hundreds of photos, I realized that it would be way too time-consuming to edit each file name by hand. Enter “gprename,” an app for bulk renaming files in Linux, and regular expressions.

Regular expressions are strings of characters that can be used to search for patterns. I'm no expert, but I'm familiar with them and realized they could be a solution to my problem. A quick google search brought up this Stack Overflow post with the answer. I then applied it using gprename, and a two-hour task was cut down to five minutes.

Here’s how I did it…

Open gprename and navigate to the folder where the files are located.

This is one of about 14 folders I needed to clean up.

On the “Replace / Remove” tab, in the “Replace” box, enter: “^.*?\s+”. In the “with” box that follows, you can just leave that blank. Make sure you check the box next to “Regular expression.”

Here’s where you enter the regular expression.

Here's what the expression means:
^: match from the beginning of the file name
.*?\s+: match as little as possible until the first run of whitespace, then consume that whitespace too
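You can sanity-check what the pattern will strip before touching any files. Here's a quick shell approximation – bash's "${f#* }" removes the shortest prefix ending at the first space, which is what ^.*?\s+ matches when there's a single space (the file name below is made up):

```shell
# Strip everything up to and including the first space, like the regex does.
f='20200815 134522 - FirstName LastName.jpg'
echo "${f#* }"    # -> 134522 - FirstName LastName.jpg
```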

Hit “Preview” and you should see the result:

A preview of the changes.

When you’re ready, click “Rename” and all of the characters prior to the first space will be wiped out:

The cleaned up files.

In my case, I then did some additional renaming (removing the ” – ” before the names and adding dates after), but this should get you close to your goal.


ZFS Snapshot Management

With my new fileserver/NAS, I am using a ZFS raid array for my file storage. One aspect of the ZFS file system that I didn’t really investigate all that closely before I installed it was the snapshot feature. ZFS snapshots basically store a “picture” or “image” of all the files on the system at a specific point in time. The logic behind snapshots is that you can then roll the filesystem back to that point in time, in case there was a problem.

This is, of course, a great feature for a variety of reasons. If you accidentally deleted a file, you could recover it this way. If someone hacked your server and encrypted your files, you could recover them this way (assuming the snapshots aren’t also encrypted).

However, there is a small issue here. In order for the snapshots to work, they basically have to store all of the files that were in existence when the snapshot was made. In other words, so long as you retain a snapshot, you will also have to retain all of the files associated with that snapshot, which will take up all the corresponding space as well. Technically, once you delete a file, it won’t show up in your file directory, but the file system has to retain a copy of it as part of the snapshot.

It was this issue that finally got me to look more closely at snapshot management. My fileserver has four 4TB hard drives in it, which gives me roughly 8TB of storage. I can always upgrade the drives to bigger drives if needed, but I don’t have 8TB of files at this point. I have around 4TB, but my fileserver was reporting less and less available space, even after I deleted some files I didn’t want/need. I was confused until I realized that I had set up some automatic snapshots when I first set up the fileserver. Those snapshots included many of the files I had deleted, so no space was freed up when I deleted those files. I had to delete the snapshots in order to free up the space. It took some doing, but I finally figured out how to manage snapshots a bit better.

First, here’s how to list your existing snapshots:

zfs list -t snapshot -r [ZFSpoolnamehere]

If you have any snapshots, you should see a list like this:

This shows the snapshots I have created on my fileserver.

To create a snapshot, you can use this command:

sudo zfs snapshot -r [ZFSpoolnamehere]@[nameofsnapshot]

The above command runs as root (sudo) and calls zfs, telling it to create a snapshot. The "-r" flag tells zfs to make a recursive snapshot, meaning it will snapshot each of the top-level directories in your ZFS pool. You then give the ZFS pool name followed by the "@" symbol and the name of your snapshot. Here's my actual command:

sudo zfs snapshot -r ZFSNAS@2021_01_02

This command creates a snapshot of my entire ZFS pool and names it with the date it was created – January 2nd, 2021.

Once I’ve created my snapshot, I can see it in the list of snapshots I have created with the command above.

Of course, there is also the issue of deleting snapshots. This was the problem that got me started on ZFS snapshot management – I had some very old snapshots taking up terabytes of space on my fileserver. To delete a snapshot, you can use the following command:

sudo zfs destroy [ZFSpoolname]/[directoryname]@[nameofsnapshot]

Here’s an example of my actual code used to delete an old snapshot:

sudo zfs destroy ZFSNAS/files@2020_12_25

This command tells zfs to delete my snapshot in the ZFSNAS pool that was made of the “files” directory on December 25, 2020 (@2020_12_25). When I check the list of snapshots after deleting that one, it’s no longer there and the space that was reserved for that snapshot is freed up.

Luckily, I haven’t needed to roll back to a snapshot at this point. However, it is nice knowing that my filesystem on my NAS has that capability.

For more information on working with ZFS snapshots, see here.

Finally, there is the issue of how many snapshots to keep and when to make them. Given my use case – a fileserver/NAS that stores my music, my video collection, my old photos (not the current year’s photos), and my old files, I actually don’t need particularly frequent snapshots. I decided that one snapshot per month would be sufficient. You can automate this by putting it into a crontab job. However, I simply added this to my calendar to create a snapshot once a month. That way, I can also delete the corresponding snapshot from 12 months prior. Basically, I am keeping one year’s worth of snapshots with one snapshot per month. This gives me a year’s worth of backups to which I can roll back should I need to but also regularly frees up any space that might be taken up by particularly old backups.
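That monthly routine can be wrapped in a small script suitable for cron. A sketch under a few assumptions: the pool name ZFSNAS comes from this post, the date arithmetic requires GNU date, and the actual zfs commands are commented out so you can review what would run before arming it:

```shell
#!/bin/sh
# Monthly ZFS snapshot rotation: create today's snapshot and drop the one
# from 12 months ago. Pool name is from this post; adjust for your system.
POOL=ZFSNAS
NOW=$(date +%Y_%m_%d)
OLD=$(date -d '12 months ago' +%Y_%m_%d)    # GNU date syntax

echo "would create: ${POOL}@${NOW}"
echo "would destroy: ${POOL}@${OLD}"
# sudo zfs snapshot -r "${POOL}@${NOW}"
# sudo zfs destroy -r "${POOL}@${OLD}"
```

Note that destroying with -r mirrors the recursive creation: it removes the same-named snapshot from each child dataset, rather than from one directory at a time.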

I can imagine that other use scenarios would require different frequencies of snapshots for a zfs system (e.g., a production server or someone who uses a zfs raid as their primary file storage). Since my server is primarily used for storing old files and relatively unchanging collections, once a month snapshots are sufficient.


PVC paintball gun holder

I play paintball frequently enough that I finally broke down and made myself a paintball gun holder. Is it a requirement for paintball? Of course not! Is it a convenient place to put your gun between games while you fill the hopper and work on it? Yes!

I initially tried a different design but didn’t like it. This was my second attempt and I’m quite pleased with it. I found several youtube videos showing how to make PVC paintball gun holders. They showed what I needed to see, but in almost all of them, the people creating the videos didn’t measure anything – they were just winging it. I’m not one for winging it. So, after successfully building my own, here are detailed instructions.

First, supplies. You’ll need:

  • PVC cutting tool (you can use a hacksaw if needed)
  • dry erase marker
  • ruler or tape measure
  • PVC fittings – all in 3/4 inch PVC (unless otherwise indicated):
    • 4 – 90 degree elbows
    • 4 – 45 degree elbows
    • 3 – tees
    • 1 – 3/4″ to 1″ tee
  • 3/4 PVC pipe cut to the following lengths:
    • 1 – 24 centimeter piece (9 1/2 inches)
    • 4 – 22 centimeter pieces (8 2/3 inches)
    • 1 – 16 centimeter piece (6 1/3 inches)
    • 2 – 11 centimeter pieces (4 1/3 inches)
    • 4 – 10 centimeter pieces (4 inches)
    • 1 – 5.5 centimeter piece (2 1/4 inches)
    • 2 – 4.5 centimeter pieces (1 3/4 inches)

Here’s a photo of all the pieces labeled:

Once you cut all of the pieces of PVC, then it’s just a matter of assembling them in the right way. Here are the pieces assembled and labeled:

The hardest part of this build was cutting the 3/4″ x 1″ tee in half. That isn’t actually required. You could also just use another 3/4″ tee and turn it sideways, placing it under the barrel. If you do, the length of PVC that holds it up will need to be slightly shorter.


Long Term Storage of Gmail/Email using Mozilla’s Thunderbird (on Linux)

I have email going back to 2003. There have been times when I have actually benefited from my email archive. Several times, I have gone back 5 or more years to look for a specific email and my archive has saved me. However, my current approach to backing up my email is, let’s say, a borderline violation of Google’s Terms of Service. Basically, I created a second email account that I use almost exclusively (not exclusively – I use it for a couple of other things) for storing the email from my primary Gmail account. However, Google has recently changed its storage policies for their email accounts, which has made me a little nervous. And, of course, I’m always wary about storing my data with Google or other large software companies.

Since I have already worked out a storage solution for old files that is quite reliable, moving my old email to that storage solution makes sense. (FYI, my storage solution is to run a local file server in my house with a RAID array so I have plenty of space and local backups. Certain folders on the file server are backed up in real-time to an online service so I also have a real-time offsite backup of all my important files. In other words, I have local and offsite redundancy for all my important files.)

I’m also a fan of Open Source Software (OSS) and would prefer an OSS solution to any software needs I have. Enter Mozilla’s Thunderbird email client. I have used Thunderbird for various tasks in the past and like its interface. I was wondering if there was a way to have Thunderbird archive my email in a way that I can easily retrieve it should I need to. Easily might be a bit of a stretch here, but it is retrievable using this solution. And, it’s free, doesn’t violate any terms of service, and works with my existing data backup approach.

So, how does it work? And how did I set it up?

First, install Thunderbird and set up whatever online email account you want to back up. I’m not going to go through those steps, as there are plenty of tutorials for both of them. I’m using a Gmail account for this.

Once you have your email account set up in Thunderbird, click on the All Mail folder (assuming it’s a Gmail account) and let Thunderbird take the time it needs to synchronize all of your email locally. With over one hundred thousand emails in my online archive, it took the better part of a day to synchronize everything.

I had over 167,000 emails in my online archive account.

Once you’ve got your email synchronized, right-click on “Local Folders” and select “New Folder.” I called my new folder “Archive.”

Technically, you could store whatever emails you want to store in that folder. However, you’ll probably want to create a subfolder in that folder with a memorable name (e.g., “2015 Work”). Trust me, it will be beneficial later. I like to organize things by year. So, I created subfolders under the Archive folder for each year of emails I wanted to back up. You can see that I have a folder in the above image for the year 2003, which is the oldest email I have (that was back when I had a Hotmail address… I feel so dirty admitting that!).

The next step is weird, but it’s the only way I’ve been able to get Thunderbird to play “nice” with Gmail. Open a browser, log in to the Gmail account you’re using, and empty your trash. Trust me, you’ll want to have the trash empty for this next step.

Now, returning to Thunderbird, select the emails you want to transfer to your local folder and drag them over the “Trash” folder in your Gmail account in Thunderbird. This won’t delete them but it will assign them the “Trash” tag in Gmail. Once they have all been moved into the Trash, select them again (CTRL+A) and drag them into the folder you created to store your archived emails. In the screenshot below, I’m moving just a few emails from 2003 to the Trash to test this process:

Once the transfer is complete, click on the Local Folder where you transferred the emails to make sure they are there:

And there are the emails. Just where I want them. This also means that you have a local copy of all your emails in a folder exactly where you want them. At this point, you have technically made a backup of all the email you wanted to back up.

To remove them from your Gmail account, you need to do one additional thing. Go back to the browser where you have your Gmail account open, click on the Trash, and empty the Trash. When you do, the emails will no longer be on Gmail’s server. The only copy is on your local computer.

Now for the next tricky part (I didn’t say this was perfectly easy, but it’s pretty easy). Thunderbird doesn’t store the Local Folder emails in an obvious location, but you can find the location easily enough. Right-click the Local Folder where you are archiving the emails and select “Properties.”

You’ll get this window:

Basically, the Location box shows you where to find the Local Folder where your email is stored. On Linux, make sure that you have “view hidden files” turned on in your file manager (I’m using Dolphin). The location is your home folder, then the hidden “.thunderbird” folder, then a randomly generated profile folder that Thunderbird creates. Inside that profile folder, look for the “Mail/Local Folders” folder. Or, simply:

/home/ryan/.thunderbird/R$ND0M.default-release/Mail/Local Folders/
I have opened all the relevant folders on my computer in Dolphin so you can see the file structure.
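If you prefer the terminal, you can also locate the folder without clicking through Dolphin. Here’s a minimal sketch; the profile tree is fabricated with mktemp just to make the example self-contained, and the profile name “abc123.default-release” is made up (on a real system you would run the find command directly against ~/.thunderbird):

```shell
# Simulate a Thunderbird profile tree (the profile name is fabricated)
tmp=$(mktemp -d)
mkdir -p "$tmp/.thunderbird/abc123.default-release/Mail/Local Folders"

# Locate the "Local Folders" directory regardless of the random profile name;
# on a real system: find ~/.thunderbird -type d -name 'Local Folders'
find "$tmp/.thunderbird" -type d -name 'Local Folders'

rm -rf "$tmp"
```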

Since I created an additional folder, there are two files in my Archive.sbd folder that contain all the emails I have put into that folder: “2003” and “2003.msf.” You can learn more about the contents of those files here, but, basically, the file with no file extension stores the actual contents of the emails. The .msf file is basically an index of the emails (I was able to open them both in Kate, a text editor, and read them fine). In short, you won’t have a bunch of files in your archive. You’ll have two files – one with the contents of the emails and one that serves as an index of the emails that Thunderbird can read. These are the two files that you’ll want to backup.
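Because the extension-less file is a plain mbox file, every message in it begins with a “From ” separator line, which means you can even count the archived messages from a terminal. Here’s a small sketch using a fabricated two-message mbox (the addresses, dates, and subjects are made up):

```shell
# Build a tiny two-message mbox file for demonstration
tmp=$(mktemp -d)
cat > "$tmp/2003" <<'EOF'
From alice@example.com Mon Jan  6 10:00:00 2003
Subject: Hello

First message body.

From bob@example.com Tue Jan  7 11:00:00 2003
Subject: Re: Hello

Second message body.
EOF

# Every message in mbox format starts a line with "From "
grep -c '^From ' "$tmp/2003"   # prints 2

rm -rf "$tmp"
```

The same grep against your real “2003” file gives a quick sanity check that the archive contains the number of emails you expect.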

I’ll admit, this was the part that scared me. I needed to know that I could just take those two files, move them to wherever I wanted to ultimately store them and then, when needed, move them back into my Thunderbird profile and read them just fine. So, I tested it repeatedly.

Here’s what I did. First, I clicked on my backup folder in Thunderbird to make sure all of the emails were there:

I then went into the Local Folders directory and moved the files to where I want to back them up.

I then clicked on a different folder in Thunderbird and then hit F5 to make Thunderbird refresh the Local Folders. It took a couple of minutes for Thunderbird to refresh the folder, but eventually, it did. Then, I selected the 2003 Archive folder again and the emails were gone:

This is what I expected. The emails are in the “2003.msf” and “2003” files on my backup server. Now for the real test. I copied the two backed-up files back to the Archive.sbd folder in my Thunderbird profile, selected the parent folder in Thunderbird, and hit F5 to refresh the folders again. It took a minute for the folder to refresh, but eventually it did. I clicked on the 2003 folder and…

The emails are back!

It worked!!!

What this means is that you can put all of the email you want to back up into a folder; that folder is stored in your Thunderbird profile. You can then find the relevant .msf file and its corresponding data file, move them wherever you want for storage and, if needed, move them back to your Thunderbird profile (or, technically, any Thunderbird profile using the same folder structure) and you’ll still have all of your email.

This may not be the most elegant method for backing up your email, but it’s free, it’s relatively simple and straightforward, and it works reliably. Your email is not in a proprietary format but rather in an open format that can actually be read in a simple text editor. Of course, it’s easiest to read it in Thunderbird, but you have the files in a format that is open and portable.

BONUS:

If you don’t think you’re going to need to access your email on a regular basis, you can always compress the files before storing them. Given that most email is text, you’ll likely realize a savings of close to 50% or more if space is at a premium (once I moved all of my 2003 email to storage, I compressed it and saw a savings of 60%). This will add a small amount of time to accessing the email, as you’ll have to extract it from the compressed format, but it could mean pretty substantial space savings depending on how many emails you’re archiving.
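For example, you could bundle and compress the archive folder with tar and gzip before moving it to storage. A sketch with stand-in files (the real “2003” and “2003.msf” live in your Thunderbird profile, as described above):

```shell
# Stand-in archive files; on a real system these are the mbox and index files
tmp=$(mktemp -d)
mkdir -p "$tmp/Archive.sbd"
printf 'From a@example.com Mon Jan  6 2003\nSubject: test\n\nbody\n' > "$tmp/Archive.sbd/2003"
: > "$tmp/Archive.sbd/2003.msf"

# Bundle and gzip-compress the folder for long-term storage
tar -czf "$tmp/archive-2003.tar.gz" -C "$tmp" Archive.sbd

# List the contents to confirm both files made it in
tar -tzf "$tmp/archive-2003.tar.gz"

rm -rf "$tmp"
```

To read the email again later, extract the tarball with tar -xzf and drop the Archive.sbd folder back into your profile’s Local Folders directory.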

EXTRA BONUS:

This is also a way to move emails between computers. I ended up using this approach to move my email archives to my laptop so I could go through my old email while I’m watching TV at night and delete all the emails I’ll never want in the future (you could of course do that with the email online before you archive it). I’m pretty good about deleting useless emails as I go, but I don’t catch them all. With the .msf file and its accompanying data file, I was able to transport my email archives to whichever computer I wanted and modify the file, then return it to my file server for long term storage.


Linux – Bulk UnRAR

If you’re not familiar with RAR files, they are like ZIP files. RAR is an archive format.

I had a collection of more than 150 RAR files in a single folder I needed to unrar (that is, open and extract from the archive). Doing them one at a time via KDE’s Ark software would work, but it would have taken a long time. Why spend that much time when I could automate the process?

Enter a loop bash script in KDE’s Konsole:

for i in *.rar; do unrar x "$i"; done

Here’s how the command works. The “for i in” part starts the loop (note: you can use any variable name here). The “*.rar” component indicates that we want the loop to run through all the RAR files in the directory, regardless of the name of the file. The “do” keyword introduces the command to run on each file. The “unrar x “$i”” component extracts each archive; the “x” switch tells unrar to extract with full paths, recreating the directory structure stored inside the RAR archive. The final piece of the loop after the second semicolon – “done” – closes the loop.
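If you want to see what the loop will do before actually extracting anything, you can prefix the command with echo to do a dry run. A sketch with two fabricated archive names:

```shell
# Create two empty stand-in .rar files (the names are made up)
tmp=$(mktemp -d)
touch "$tmp/a.rar" "$tmp/b.rar"
cd "$tmp"

# Dry run: print each unrar command instead of executing it
for i in *.rar; do echo unrar x "$i"; done
# prints:
# unrar x a.rar
# unrar x b.rar
```

Once the output looks right, remove the echo to run the real extraction.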

It took about 20 minutes to run through all the RAR files, but I used that time to create this tutorial. Doing this by hand would have taken me hours.


Linux: Batch Convert .avi files to .mp4/.mkv

I’ve been trying to clean up my video library since building my latest NAS. In the process, I found a number of .avi files, an older container format that isn’t widely used these days. While a file loses some of its quality every time it is converted, I was willing to risk the slight quality reduction to convert my remaining few .avi files into more modern formats.

I initially tried converting the files using HandBrake. But given the number I needed to convert, I decided pretty quickly that I needed a faster method for this. Enter stackoverflow.

Assuming you have all of your .avi video files in a single directory, navigate to that directory in a terminal and you can use the following single line of code to iterate through all of the .avi files and convert them to .mp4 files:

for i in *.avi; do ffmpeg -i "$i" "${i%.*}.mp4"; done

In case you’re interested, the code is a loop. The first part (“for i in *.avi;”) starts the loop, telling the computer to look for every file with the file extension .avi. The second part tells the computer what to do with every file with that extension – convert it to a .mp4 file with the same name (the “${i%.*}” expansion strips the .avi extension so the .mp4 extension can be added in its place). The last piece – “done” – marks the end of the loop.
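The “${i%.*}” piece is standard shell parameter expansion: it removes the shortest trailing “.” plus extension from the end of the filename. A quick sketch (the filename is made up):

```shell
i="holiday.video.avi"   # hypothetical filename
# ${i%.*} strips only the final ".avi", leaving "holiday.video"
echo "${i%.*}.mp4"      # prints holiday.video.mp4
```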

Of course, this code could also be used to convert any other file format into a different format by replacing the .avi or .mp4 file extensions in the appropriate places. For instance, to convert all the .avi files to .mkv, the code would look like this:

for i in *.avi; do ffmpeg -i "$i" "${i%.*}.mkv"; done

Or if you wanted to convert a bunch of .mp4 files into .mkv files, you could do this:

for i in *.mp4; do ffmpeg -i "$i" "${i%.*}.mkv"; done

BONUS:

If you have .avi files in a number of subfolders, you’ll want to use this script:

find . -type f -exec ffmpeg -i {} {}.mp4 \;

To use it, navigate in a terminal to the top-level folder, then execute this command. It will search through all the subfolders, find all the files in those subfolders, and convert them all to .mp4 files.

Of course, if you have a mixture of file types in the folders, you’ll want a variation of this command that searches for just the files of a certain type. To do that, use this command:

find . -name '*.avi' -exec ffmpeg -i {} {}.mp4 \;

This command will find all the files with the extension .avi and convert them all to .mp4 files using ffmpeg.
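As with the loop version, you can dry-run the find command by echoing the ffmpeg invocations first. A sketch with a fabricated directory tree; note that because find hands over the full filename, the output names come out as file.avi.mp4:

```shell
# Fabricated tree: two .avi files (one nested) and one non-video file
tmp=$(mktemp -d)
mkdir "$tmp/sub"
touch "$tmp/a.avi" "$tmp/sub/b.avi" "$tmp/notes.txt"
cd "$tmp"

# Dry run: print the ffmpeg command for each matching file
find . -name '*.avi' -exec echo ffmpeg -i {} {}.mp4 \; | sort
# prints:
# ffmpeg -i ./a.avi ./a.avi.mp4
# ffmpeg -i ./sub/b.avi ./sub/b.avi.mp4
```

The .txt file is correctly ignored, and once the output looks right, remove the echo to do the real conversions.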

And, if you’re feeling particularly adventurous, you could search for multiple file types and convert them all:

find . \( -name '*.avi' -or -name '*.mkv' \) -exec ffmpeg -i {} {}.mp4 \;

This code would find every file with the extension .avi or .mkv and convert it to .mp4 using ffmpeg.

NOTE: This command uses the default conversion settings of ffmpeg. If you want more fine-tuned conversions, you can always check the options available in converting video files using ffmpeg.

BONUS 2

If you want to specify the codec to use in converting the files, that is a little more complicated. For instance, if I want to use H265 instead of H264 as my codec, I could use the following code to convert all of my .avi files in a folder into .mkv files with H265 encoding:

for i in *.avi; do ffmpeg -i "$i" -c:v libx265 -crf 26 -preset fast -c:a aac -b:a 128k "${i%.*}.mkv"; done

By default, ffmpeg does not pass audio through – if you don’t specify an audio codec, it re-encodes the audio into the container’s default codec. If you want to convert the video to a new codec but leave the audio untouched, add “-c:a copy” to copy the audio stream as-is:

for i in *.avi; do ffmpeg -i "$i" -c:v libx265 -crf 26 -preset fast -c:a copy "${i%.*}.mkv"; done

This will convert the video to H265 but retain whatever audio was included with the video file (by default, ffmpeg selects the audio stream with the highest number of channels).

Additional information on the various settings for H.265 is available here. Some quick notes: the number after “-crf” is basically an indicator of quality, but is inverted. Lower numbers improve the quality; higher numbers reduce the quality. Depending on what I’m encoding, I vary this from 24 (higher quality) to 32 (lower quality). This will affect the resulting file size. If time is not a concern, you can also change the variable after “-preset.” The options are ultrafast, superfast, veryfast, faster, fast, medium, slow, slower, and veryslow. Slower encodes will result in better quality output but it will take substantially longer to do the encode.

If you run into the problem where you are trying to do bulk conversions and the current directory contains files matching the pattern, the shell will expand the unquoted “*.mkv” into a list of filenames before find ever sees it, and you may get the error: “find: paths must precede expression.” The solution is to put the pattern in single quotes, like this:

find . -name '*.mkv' -exec ffmpeg -i {} -c:v libx265 -crf 26 -preset fast {}-new.mkv \;


HandBrake – H.265 NVEnc 1080p Ripping Chart and Guidelines

With a Plex server, I want my collection of movies backed up digitally so I can watch them when and where I want to. This involves a two-step process. First, I have to rip the video from Blu Ray, which I do using MakeMKV. Since that process is pretty straightforward, I’m not going to cover how to do it here. It’s the second step that is more complicated – compressing the video using HandBrake.

I’ve been using HandBrake for years, but have typically just used the default settings. That’s a terrible idea for a number of reasons, which I’ll detail in this post. But the primary reason why I’m posting is to detail how different settings translate into different file sizes so people have a better sense of what settings to use.

Format

The first thing you need to decide when ripping a video using HandBrake is the resulting file format. I’m using HandBrake 1.3.3 which includes three options: MPEG-4, Matroska, and WebM. Each of these formats has advantages and disadvantages.

You can choose which format you want on the summary tab of Handbrake

MPEG-4 (or mp4/MP4) is the most widespread format and the most compatible with various devices and programs. This is likely going to be the default for most people. However, MPEG-4 has a major limitation – you cannot store multiple subtitles in the video file itself. If you don’t care about subtitles (e.g., you never watch foreign films), that may not matter to you. But it is a problem for those of us who enjoy a good foreign film. (NOTE: The workaround for most people is to save the subtitles as an .srt file in the same folder, assuming your playback device can then load a separate SRT file.)

Matroska (or MKV) is kind of odd. It’s actually not a video codec. It’s an open-standard container. Basically, it’s like a zip file or folder for the actual media. Inside, it can have video files in a lot of different formats, including MPEG-4, but it can also include subtitle tracks and other media. Thus, the major advantage is that you can store more inside a Matroska file than you can in MPEG-4 files. The disadvantage is that not every device or app supports Matroska files natively. However, support for the format has increased quite a bit. Matroska is now my preferred format, mostly because all of the devices I use for playback can play MKV files and I can store subtitles in the same file.

WebM is actually a variant of Matroska but is meant to be a royalty-free alternative to MP4. It is primarily sponsored by Google, supports Google’s VP8 and VP9 codecs, and is licensed under a BSD license. Basically, MP4 relies on patent-encumbered codecs while WebM is royalty-free and open source. (NOTE: I don’t have any videos stored in WebM format. However, when I create videos to upload to YouTube, I typically rip them into VP9, which is the preferred format for YouTube.)

Audio

HandBrake searches for the default audio track from the video you are planning on converting but then does something very odd. By default, it sets whatever audio track that is to be converted to stereo audio (i.e., 2.0 or 2 channels – left and right). You can see that in the screenshot below:

HandBrake’s default setting on audio is to convert the default audio track to Stereo (2.0) sound.

If you have a home theater or good headphones, you’ll want 5.1 surround sound at a minimum. If you have a nicer setup, you’ll want 7.1 surround sound. So, make sure that you delete the default Audio option and instead include a 5.1 surround sound option into your new file. Like this:

5.1 audio track instead of stereo.

(NOTE: I haven’t figured out the best options for 5.1 surround sound. I’m not sure whether AC3 or AC3 passthrough is better. And I’m not sure if Dolby Surround, Dolby ProLogic II, or 5.1 Surround is better. Perhaps someone out there will have insights on this.)

(NOTE: Most modern video players are capable of converting 5.1 surround sound into stereo sound [a.k.a. downsampling]. Not including a 2.0 option is perfectly fine for most people as playback devices will compensate.)

Tags

If you haven’t ever used the Tags tab in Handbrake, you really should. By default, the “Title” tag is filled in with whatever information is stored in the video file or DVD you are converting, like this:

Default tag information loaded by HandBrake

However, what you include in the “Title” tag is also what video playback software or devices will think is the name of the video. It is worth taking two minutes to fill in the tags. I typically will change the “Title,” at a minimum, as well as the “Release Date” and “Genre” tags, like this:

I have filled in the tags in this screenshot.

Video

Now on to the primary purpose of this post – video quality settings. To determine the ideal balance between quality and file size, I actually converted the same file using different settings. The original file is a rip of the film Amadeus (Director’s Cut) from the Blu-Ray disc that was 28.4 GB in size. The table below illustrates the results of converting the file using different Constant Quality options – all into 1080p video. For each of these conversions, I used identical settings except I changed the Constant Quality option in HandBrake. Each of these used the H.265 (NVenc) encoder with all the other video options on their default settings. Each conversion took about 1 hour (give or take 10 to 15 minutes).

Setting   Resulting File Size   Compression (% of original file)
CQ 22     14.3 GB               50.35%
CQ 24     8.6 GB                30.28%
CQ 26     6.5 GB                22.89%
CQ 28     4.9 GB                17.23%
CQ 30     3.7 GB                13.03%
CQ 32     2.9 GB                10.21%
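The compression column is simply the converted file size divided by the original 28.4 GB. You can check any row with a one-liner, e.g. the CQ 22 row:

```shell
# 14.3 GB converted file out of a 28.4 GB original
awk 'BEGIN { printf "%.2f%%\n", 14.3 / 28.4 * 100 }'   # prints 50.35%
```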

The big question, of course, is how much degradation there is in quality with the smaller file sizes. To illustrate this, I took a screenshot using VLC from four of the video files: the original, unconverted Blu-Ray rip (28.4 GB file), from the CQ 22 rip, the CQ 28 file, and the CQ 32 file. I uploaded the full resolution files below so you can see and compare the differences.

Screenshot from the original MKV file – 28.4 GB.
Screenshot from the converted file at CQ 22.
Screenshot from the converted file at CQ 28.
Screenshot from the converted file at CQ 32.

To help my old eyes see the differences, I zoomed in on just one part of these photos to see if I could tell if there were any meaningful differences:

I thought I might see differences in the quality of the screenshots with the musical score (that’s why I chose this frame). I thought the lines might get blurred or the notes would become fuzzier. But that isn’t actually the case. Most of the space savings actually came from the detail in the brown section of this frame. In the original, if you look closely at the bottom left of the image, you can see lots of “noise.” This is basically film grain. The compression algorithm must look for that kind of noise and then remove it. If you look closely at the CQ 22 frame, there is less film grain. In CQ 28, large swathes of the film grain have been removed. And in CQ 32, all of the film grain has been removed and converted to blocks of single colors. Where there is fine detail in the video or distinct color differences, that has been retained. I should probably try this same exercise with a more modern movie shot digitally and not on film to see how the compression varies. Even so, this is a good illustration of how compression algorithms save space – they look for areas that can be considered generally the same color and drop detail that is deemed unimportant.

TL;DR: My recommendation for file sizes and CQ settings…

Scenario 1: Storage space is genuinely not an issue and you have a very fast internet connection, file server, and home network. You should rip your BluRay disks using MakeMKV and leave them as an MKV file with their full size. That will give you the best resolution with lots of options for future conversion if you want.

Scenario 2: You have a fair amount of storage space, a decent internet connection, a reasonably fast/powerful file server, and a good home network, then I’d suggest a dual approach. If it’s a movie you absolutely love and want the ideal combination of quality while minimizing file size, use a CQ of 22 to 26. That will reduce the quality, but retain much of the detail. If it’s just a movie that you enjoyed and think you might want to watch it again in the future but don’t care so much about the quality, go with a lower quality setting (e.g., CQ of 30 or 32).

Scenario 3: You have very little storage space, your internet connection isn’t great, your file server cannot convert files on the fly, and your home network is meh, then you should probably go for higher levels of compression, like CQ 30 or 32. You’ll still get very good quality videos (compression algorithms are very impressive these days) but in a much smaller sized video. Oh, and you should probably convert your files to MP4.
