Linux – Batch Convert .wav to .flac

I ran into a weird issue the other day where SoundConverter, a GUI for converting audio files, was generating FLAC files that my audio player couldn’t see. I’m still not exactly sure what the problem was, but while trying to solve it, I went ahead and wrote a command to batch convert a folder of .wav (WAV) files to .flac (FLAC) files using FFmpeg. I figured I’d put it up here for future me and in case anyone else finds it useful.

First, navigate in a terminal/console to the folder containing the audio files you want to convert to FLAC. Then run the following command:

for i in *.wav; do ffmpeg -i "$i" -c:a flac "${i%.*}.flac"; done

Breaking this code down… The first part, “for i in *.wav”, starts the loop by telling the computer to loop over every .wav file in that folder. The second part tells the computer what to do (“do”) with each of those files: run the ffmpeg software with each file “$i” as the input (“-i”), encode the audio to FLAC (“-c:a flac”), and write the output to a file with the same name as before but with the .flac extension (“${i%.*}.flac”). When every file has been processed, the loop is done.
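
If the “${i%.*}” part looks like magic, you can test it on its own in a shell. It is standard parameter expansion: it strips the shortest trailing “.*” match (the extension) from the variable before the new extension is appended:

```shell
# ${i%.*} removes the shortest suffix matching ".*" -- i.e., the extension.
i="my song.wav"
echo "${i%.*}.flac"   # prints: my song.flac
```

The quoting around “$i” in the ffmpeg command matters for the same reason it matters here: it keeps file names with spaces intact.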


LibreOffice – KDE Integration Package for Skinning/Color Management

Super simple note for future reference and for anyone else running into this issue. I upgraded my version of LibreOffice in Kubuntu 20.10 to a pre-release version to fix a bug; after the upgrade I was running version 7.1.4.2.

When I purged the version of LibreOffice that shipped with Kubuntu 20.10 (6.X – I don’t remember exactly which version), that also purged the KDE integration package that helps LibreOffice interface with the window manager and makes skinning LibreOffice much easier. As a result, when I would change my Global theme in System Settings, only part of the LibreOffice windows would change to reflect that theme. In particular, the sidebar on the right side of LibreOffice wasn’t changing colors with the rest of the window. This was making it impossible for me to see the names of different styles and also looked really weird.

The solution was easy. In Synaptic, search for the “libreoffice-kde” integration package and install it. Now, when I change my Global Theme, the LibreOffice windows change to reflect that.

In short, if you purge the version of LibreOffice in the repositories in KDE and install a newer version, make sure you also install the libreoffice-kde integration package or your LibreOffice windows will behave strangely.


Linux Server – tar files and directories

I recently had to move all my websites from one virtual private server (VPS) to another. When I only had a few such websites, I was okay with using SFTP (via Filezilla) to download all of the files and then upload them to the VPS. It took a while but I was okay with that. With about a dozen websites I host on my VPS, that was just not an option. It was time to finally try to figure out how to use tar more effectively on my server.

Why tar? Tar is similar to zip in that it combines lots of files and/or directories into a single file, a tarball or archive, which makes it much easier and way faster to move. Computers can move one large file faster than they can move lots of small files that, combined, make up the same size. When you have to move roughly 8 gigabytes and hundreds of thousands of files, it is far easier to do so by putting all of those files into a single tarball than moving the files individually. That’s why tar is way better for what I was trying to accomplish.

Sidenote: Tar is just a format for packaging all of the files/directories together. Typically, the files are also compressed using something like gzip, leading to a tarball with the extension “tar.gz.” It is possible to just combine the files without compressing them as will be detailed below.

Why was I so reticent to use tar to move my files? Because my prior experience with tar had resulted in several tarbombs, which are a nightmare. Basically, I had unpacked a tarball into the wrong directory, which resulted in thousands of files being in the wrong place, necessitating me having to figure out which files I should keep and which I should get rid of individually. That took more time than I saved by using tar. And since I was doing everything via SSH on a remote VPS, there was no easy way to clean up the mess. Even so, it was time to use tar. So, I bit the bullet and figured out how to do this. This post is my guide on how to carefully use tar but avoid tarbombs, which suck. Note, GNU has an entire manual on tar that goes into much greater detail than my post.

What do all those letters mean?

Since we’re using a console or terminal to package all these files and not a GUI, we have to specify what we want to do with the files using some letters. That’s what the letters do. Not to worry, though – I’m not going to go through all of them. There are literally dozens of options. I’ll keep this simple. The basic structure of the tar command is as follows:

tar [letters to specify what to do] [output tarball] [files or directories to add]

I’ll give some specific examples of tar commands below, but, first, let’s cover what the letters we’ll be using mean.

  • c – “create”: This tells tar to create an archive, in contrast to modifying a tarball. (Note: This letter alone cannot create an archive.)
  • f – “file”: This tells tar that the next argument is the name of the archive file to read or write. (Note: Just c and f are enough to create a tarball, as will be shown below.)
  • v – “verbose”: This tells tar to show the progress of the archiving process by showing which files have been added to the archive.
  • z – “gzip”: This tells tar to compress the files in the tarball using gzip.
  • j – “bzip2”: This tells tar to compress the files in the tarball using bzip2.
  • x – “extract”: This tells tar to extract the files in a tarball.

A couple of important notes here regarding the letters. First, in the old-style, dashless form used here, the order of the letters does not matter. Second, letters that do the same thing should not be combined in one command (e.g., “z” and “j” should not both be used).

Basic Examples:

Creating Tarballs

I put a couple of my papers and some images into a folder to use to demonstrate basic uses of tar. Here’s a screenshot so you can see what we’re working with:

First up, I’ll create a tarball of the entire test folder but with no compression and use that to show a couple of important elements of the process. Here’s the code:

cd /home/ryan/Desktop
tar cf test.tar tar.test.directory/

The code above changes my directory (“cd”) to the parent folder (Desktop). Then it calls the “tar” software and tells it to create (“c”) an archive file (“f”) called “test.tar” containing the entire test directory, “tar.test.directory/”. What this code doesn’t do is compress the files. This can be seen when comparing the size of the folder – 5.4 mb – with the size of the newly created tarball – also 5.4 mb.

Quickly, try switching the order of the letters, from “cf” to “fc”, and you’ll see that the outcome is the same. Also, if you re-run the same command, you’ll notice that tar will not warn you that it is going to overwrite the previous tarball; it simply does it.

One more item to note that is actually really important, particularly when thinking about extracting an archive, is the folder structure inside the archive. If I open the archive using Ark, you’ll see that, because I navigated to the parent directory in my terminal before creating the tarball, the folder structure inside the archive is from the folder where I created the tarball (in this case, the Desktop directory).

I’m going to create the same tarball but I’m not going to navigate to the parent folder and instead will tell tar which folder to compress and where to store the new tarball:

tar cf /home/ryan/Desktop/test.tar /home/ryan/Desktop/tar.test.directory/

Functionally, this is the exact same command and archives all the same content. However, look at the directory structure inside the tarball when I open it with Ark:

That tar creates a different directory structure depending on the code you use is important, particularly if you want to avoid tarbombing. Why is this important? When you extract the tarball, the same directory structure that is in the archive will be created. If you don’t know what the directory structure is inside the tarball and extract it, that can result in all sorts of problems, some of which I’ll address at the end of this post.

The lesson here: the folder structure inside the tarball is based on the path you enter into the command that creates it. So, if you don’t want lots of nested folders in your tarball, navigate to the parent folder of the directory you’re archiving and create the tarball there. Otherwise, the full path will be reproduced inside the archive.
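
There is a middle ground worth knowing about. GNU tar’s -C option changes directory before archiving, so you can run the command from anywhere and still keep the archive paths short. A minimal sketch using throwaway files so it runs standalone:

```shell
# Sketch (GNU tar): -C makes tar change into the given directory first,
# so the stored paths start at tar.test.directory/, not at /.
demo=$(mktemp -d)
mkdir -p "$demo/tar.test.directory"
touch "$demo/tar.test.directory/example.txt"

tar cf "$demo/test.tar" -C "$demo" tar.test.directory/
tar --list --file="$demo/test.tar"   # paths begin with tar.test.directory/
```

This gives the same internal structure as the cd-first approach, without having to leave your current directory.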

Let’s add two additional letters: “v” for verbose so we can see what is added to the tarball and “z” to compress the files that are being added to the tarball. Here’s the code (remember, the order of the letters doesn’t matter):

tar cfvz test.compressed.tar.gz tar.test.directory/

This command calls the tar software, tells it to create (“c”) a tarball and add files (“f”), show the progress (“v”) and compress the files (“z”), saving the resulting tarball as “test.compressed.tar.gz” and the last piece is what should be put into the tarball. Note the modification to the file name to indicate that the tarball has been compressed – “.gz”. This extension is usually a reflection of the compression format. So, if you go with bzip2, it would be “.bz2” instead of “.gz”. Here’s how it looks in the terminal.

The resulting tarball is only 3.4 mb, illustrating that the contents were compressed as they were added to the archive.

Variations on the above command might include replacing the “z” with “j” to compress using bzip2 or “J” to compress using “xz.” Additionally, appending a “p” to the letters will preserve the permissions of the files and directories that are added to the tarball (though that is done by default, so it isn’t necessary to include it).

Modifying a Tarball

After creating a tarball, it’s possible you may need to change the tarball by either adding files to it or deleting files inside it. Here’s how to do each of those.

To add a file to a tarball, use “r” and, of course, “f”, like this:

tar rf test.tar file-to-add.odt

This command calls the tar software and tells it to append (“r”) a file to the archive (the “f” is necessary to tell the software which archive file to change). The tarball being modified comes next, “test.tar”, and the file to add is last, “file-to-add.odt”.

If the tarball you want to add a file to is compressed, files cannot be added directly. Attempting this will likely give you the error, “tar: Cannot update compressed archives.” Instead, you would need to extract the archive, make whatever changes you want, then create a new compressed tarball.
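
A slightly lighter workaround is to decompress just the archive (not extract it), append, and recompress. A sketch, with throwaway fixture files created first so it runs standalone:

```shell
# Workaround sketch: decompress, append, recompress.
demo=$(mktemp -d); cd "$demo"
echo "demo" > first-file.txt
tar czf test.compressed.tar.gz first-file.txt
echo "demo" > file-to-add.odt

gunzip test.compressed.tar.gz               # leaves test.compressed.tar
tar rf test.compressed.tar file-to-add.odt  # appending now works
gzip test.compressed.tar                    # back to test.compressed.tar.gz
tar --list --file=test.compressed.tar.gz    # both files are listed
```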

It is also possible to delete a file from a tarball. This doesn’t involve a letter but a word, “--delete”. The structure of the command is a bit different. Here’s what your code might look like:

tar --delete --file=test.tar file-to-delete.odt

This command calls the tar software and tells it you want to delete a file (“--delete”). You then have to tell it which tarball, which is done with the “--file=” option, and the file to delete goes at the end.

If you aren’t sure what files you have in your tarball, you can always list them using the “--list” option:

tar --list --file=test.tar

This is particularly helpful if you are looking for a specific file to remove from a tarball as it will also tell you if the file is in a subfolder inside the archive. If so, you would need to modify the code to take that into account:

tar --delete --file=test.tar "folder1/folder2/folder with space/file-to-delete.odt"

Do note that, just like adding files to a compressed tarball, deleting files from a compressed archive isn’t possible.

Extracting a Tarball

Here’s how to extract the files inside a tarball. The basic structure is the same, though there are some things to consider. First, to extract a tarball, replace the “c” used when creating it with an “x”, which means “extract.” So, with our uncompressed tarball, we would extract it using the following command:

tar xf test.tar

This will call the tar software; the “x” tells it to extract the files from the archive file (“f”), and “test.tar” is the name of the tarball being extracted. Note that this code doesn’t specify where to extract the tarball; that can be added with the “-C” option. If it’s left off, the tarball will be extracted in whichever directory the console/terminal is currently in. Extracting to a specific directory would look like this:

tar xf test.tar -C /extract/into/this/directory
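
One caveat worth knowing about -C: tar will not create the target directory for you; it must already exist. A runnable sketch with throwaway files:

```shell
# Sketch: the -C target directory must exist before extracting.
demo=$(mktemp -d); cd "$demo"
echo "demo" > file.txt
tar cf test.tar file.txt

mkdir target                 # without this, tar exits with "Cannot chdir"
tar xf test.tar -C target
ls target/                   # prints: file.txt
```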

Of course, if your tarball is compressed and/or you want to see the progress of extracting it, add the corresponding letters. If the tarball is a “tar.gz”, add the “z” to decompress the files (recent versions of GNU tar detect the compression automatically on extraction, but being explicit doesn’t hurt) and “v” to see the progress, like this:

tar xfzv test.tar.gz

Lastly, keep in mind that the folder structure inside the tarball will be replicated when the files are extracted. What does that mean? If you extract a tarball into a folder called “home” but inside the tarball the files you want to extract are stored inside nested folders like “home/ryan/Desktop/archive”, when you extract the tarball, the files will end up in “home/home/ryan/Desktop/archive.” See below for what to do in such a situation.
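
If you know how deep the nesting goes, GNU tar’s --strip-components option can drop those leading folders during extraction. A sketch with throwaway files reproducing the nested example above:

```shell
# Sketch (GNU tar): --strip-components drops leading folders on extraction.
demo=$(mktemp -d); cd "$demo"
mkdir -p home/ryan/Desktop/archive
echo "demo" > home/ryan/Desktop/archive/file.txt
tar cf nested.tar home/

# Strip the three leading folders (home/ryan/Desktop) so only
# archive/file.txt is created inside the target directory.
mkdir extracted
tar xf nested.tar -C extracted --strip-components=3
ls extracted/                # prints: archive
```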

Rules for Using tar:

Rule #1: Pay attention to the folder or directory structure when creating and extracting a tarball.

Rule #2: You cannot add or delete files inside a compressed tarball, only in an uncompressed tarball.

Bonus:

Let me give a specific situation that will illustrate this for a Linux server (I may be speaking from experience here). Imagine you just downloaded the latest version of WordPress and want to extract it into the following directory: /var/www/example.com/public/. First, check the directory structure of the tarball. In this case, the fine folks who package WordPress archive the files inside a folder called “wordpress.” As a result, when you extract the tarball, it is going to create a folder called “wordpress” inside the “public” folder when, in fact, you want the files to be stored directly in “public.” That’s a problem. How do you move those files? The Linux move command, “mv”, can do it, but the way to do it is a bit tricky:

mv /var/www/example.com/public/wordpress/* .

This tells the OS to move all the files in the wordpress directory (the asterisk “*” matches them all) into the current directory (the period “.” refers to wherever your terminal currently is, so run this from inside “public”). One caveat: the asterisk does not match hidden files (names beginning with a dot), so check for any with “ls -A wordpress/” first. Once you do this, you should then remove the now-empty wordpress directory:

rm -R wordpress/
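
Alternatively, if you know the tarball wraps everything in a single “wordpress” folder, you can skip the mv/rm cleanup entirely by stripping that leading component while extracting (GNU tar; the paths here match the example above):

```shell
# Assumes GNU tar and that the archive wraps everything in "wordpress/".
tar xzf latest.tar.gz --strip-components=1 -C /var/www/example.com/public/
```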


ASUS Router – VPN with Android phone

I have a specific use scenario where I need to be on my home network to control a device but may want to control that device when I’m not home. After considering several options, I realized that my newly purchased ASUS router – ASUS AX6000/RT-AX88U – has the ability to create a VPN. Rather than pay extra for an additional service, I thought I’d try the built-in VPN option to see if I could get it to work. It took a little doing, especially since ASUS’s directions were vague, but I got it working. Here’s how…

Setting Up Your Router

Note that I’m doing this on Firmware Version 3.0.0.4.386_42820; the interface may change over time.

First, log in to the router and navigate to Advanced Settings where it says VPN:

In that tab, you’ll see three buttons. I ended up getting it to work with OpenVPN (I tried the other two to no avail). Click on OpenVPN and select the button to turn it on and you’ll see this:

I set my port to 1025. Remember what port you select as you’ll need that in the optional section below. If you’ll be connecting from outside your local network (which, obviously, you are), select “Internet and local network.” Finally, at the bottom, create a username and password that you’ll use to connect.

Once you’ve done all of that, hit “Apply.”

Now, where it says “Export OpenVPN configuration file,” click “Export” and you’ll get an “.ovpn” file. You can open that file with a text editor and you’ll see content like this:

remote 192.168.1.23 1025
float
nobind
proto udp
....
-----BEGIN CERTIFICATE-----
[lots of letters and numbers]
-----END CERTIFICATE-----

If your router is connected directly to the internet, you can move on to the section below where you set up your phone. However, I recommend you continue reading as you may need the directions that follow.

In my case, my router is behind my ISP’s gateway, so it isn’t connected directly to the internet. What that means for me is that I have to forward a port – port 1025 – from my ISP’s gateway to my ASUS router. Log in to your ISP gateway. Mine is an Arris NVG468MQ Gateway; I have to select Firewall -> Port Forwarding to get to where I need to open a port to my ASUS router. Here’s what I added to forward my port to my router: I selected my device from the drop-down list (it’s the only device connected to my gateway, which makes it an easy choice), named the rule “VPN”, selected TCP/UDP (technically, you only need UDP for this), and put in the port: 1025. Once that’s done, select “Add.”

Once it’s added, you’ll get this line in your forwarded ports:

If your ASUS router is set up like mine, behind a gateway, the IP address in your OVPN file is wrong. You’ll need to set it to your external-facing IP address (a.k.a. your WAN IP). You can find that in the settings of your gateway, or use a website (Google: “What’s my IP”). An even better approach, however, is to set up a dynamic DNS service, either on your router or on a computer on your home network, that keeps track of your WAN IP. I use entryDNS with my fileserver. As a result, I have a DNS address that is always current, which I used in my OVPN file. Swap out the IP address that was in that OVPN file for either your WAN IP or your DNS address. Here’s how my OVPN file looks now with a fake DNS address (that’s not actually my DNS address)…

remote ryananddebi.randomdns.org 1025
float
nobind
proto udp
....
-----BEGIN CERTIFICATE-----
[lots of letters and numbers]
-----END CERTIFICATE-----

With your port forwarded to your ASUS router and the correct external IP address in the OVPN file, you can now move on to setting up your phone.

Setting Up Your Phone

I installed the OpenVPN app from the Google Play Store on my Pixel 4a:

I then emailed myself the OVPN file, opened that email on my phone, and downloaded the file to my downloads. In the OpenVPN app, click on the + to add a new profile. Find the OVPN file you downloaded:

Then add your username and password you set up on your router.

Once done, you’ll have a profile set up:

Assuming everything was done correctly, select the “activate” button and you should see this:

You can also see that I’m connected in the Router’s VPN screen:

You can now use your phone as though you were connected to your home network.


Plex – Export Playlists to M3U

I spent the last year or so cleaning up my music library. It is now organized how I want it to be. I use two software packages to play my music. At home, on my primary computer, I use Clementine (which is also partly how I organized my library). When I’m away from home, I use Plex. With my music library cleaned up, I have started building a few playlists. Many of my playlists are automated; this tutorial doesn’t apply to those. It applies to the handful of very specific playlists I build by hand. I was building these in both Clementine and Plex and realized that, if I built a playlist in one of the programs, I’d likely want it in the other program as well. The problem I ran into is that there is no native way to import or export playlists in Plex.

This sent me down a weird, tangled path. Let me see if I can summarize the situation briefly. Before September 2018, there were some plugins that allowed some playlist import/export functionality. (NOTE: Until writing this post, I didn’t even know there were Plex plugins.) In September of 2018, the Plex developers announced they were discontinuing plugins. That announcement obviously wasn’t that popular with the Plex users who relied on the plugins. Since then, many of the plugins that previously worked have been discontinued and are no longer updated.

Some programmers have switched from developing plugins to developing stand-alone applications that now interface with the Plex Media Server. These applications are no longer installed in the now-defunct “plugins” directory of Plex. One of these applications, WebTools-NG, has a feature that helps partially solve the playlist import/export problem.

(ASIDE: If you google for exporting playlists from Plex, you’ll likely find a few posts or pages from before 2018 indicating that you could click on the context menu inside a playlist in Plex and find an export option. As of the latest version of Plex – I’m currently running 1.22.3.4392 – that option does not exist. So, don’t bother looking for it.)

As I’m on Linux, the WebTools-NG application is an AppImage – nothing to install, just run the AppImage. Download it and put it somewhere where you will remember. You’ll also want to right-click on it and select “Properties.”

In permissions, select “Allow this file to run as a program.”

Once you’ve done that, double-click the application and you’ll get a login screen that requires your Plex credentials. Fill those out and log in. Assuming everything works, you’ll get a screen like this:

A couple of things to do before you start messing with the playlists. At the top, select your Plex server. You can see I have selected mine:

If you’re like me, you’re probably wondering why you’d need to select your server. It took me a second to realize that the feature exists because some people run multiple servers and need to pick the one they want to work with. Now, go to Global Settings and change two options. First, set your Export Directory. Second, change the “Timeout when requesting info from PMS in sec” to 99; without that change, the application was timing out on some of my requests.

Now, to work with the playlists… click on ExportTools on the left and you’ll get this window:

Since I’m working with an audio playlist, in the Select Export Type, I selected “Playlist.”

In Sub Type, I selected “Audio.”

Then you need to Select Media Library. The one I chose for this example is “Favorite Chill.”

Finally, for Export Level, I chose “All” (FYI, I’m not sure what this does. I’ll mess around with it later.)

Then click “Export Media” and you’ll see the Status box light up with your export:

When it’s done, you’ll have a CSV file with all the relevant information from your playlist in Plex. You’ll open that with a spreadsheet program (I’m using LibreOffice Calc) for the next step. LibreOffice asks for the delimiter (the character that separates the data). By default in WebTools-NG, the delimiter is “|”. So, make sure you set that when you open the CSV file. Here’s how in LibreOffice:

Once you’ve done that, open the file and you’ll see lots of columns with lots of information. To create an M3U playlist, we actually just need one column: “Part File Combined.” That has the location of the file:
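
If you’d rather skip the spreadsheet entirely, a sketch of pulling out just that column from a terminal with awk (this assumes the “|” delimiter and the exact header name “Part File Combined”; the file name “export.csv” is just a placeholder for whatever WebTools-NG produced):

```shell
# Find the "Part File Combined" column from the header row, then print
# only that column for every row (header included).
awk -F'|' 'NR==1 {for (i=1; i<=NF; i++) if ($i == "Part File Combined") c = i}
           {print $c}' export.csv > column.csv
```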

To make this as easy as possible, I’d suggest deleting all of the other columns. Once you’ve deleted them, you’ll need to do one additional step. The locations in that column are absolute paths on my file server (i.e., full paths from the root of the filesystem), not relative paths (i.e., relative to where the M3U file is stored), so you’ll need to adjust them for wherever you are going to use your M3U file.

Let me be a bit more specific. On my fileserver, my music is stored in a folder called “music” (keeping it simple). But that folder is stored on my ZFS raid, which is located in the folder “ZFSNAS.” Thus, the absolute location of my music folder is “/ZFSNAS/music/”. That is different on my desktop computer, where I am going to be playing the music. On my desktop, my music folder is mounted using NFS into a folder called “Music,” which has a different absolute location: “/home/ryan/Music/”. So, the final and perhaps most complicated step in creating the M3U file is to replace the absolute location from my fileserver with the absolute location on my desktop computer. That’s easy enough with a Find and Replace command:

Once you’ve done that, creating the M3U file is super easy. Save your CSV file that now has just one column with a column header. Now, open that file with a text editor (I’m using Kate) and replace the column header “Part File Combined” with “#EXTM3U”. The M3U format is pretty simple: the file has to start with #EXTM3U and then can literally just consist of the file locations, one per line.
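
Both of those edits, the path swap and the header swap, can also be done in one go with sed. A sketch assuming the paths from my setup and a hypothetical input file name “playlist.csv”:

```shell
# Replace the fileserver prefix with the desktop prefix, and turn the
# column header into the M3U header, writing the result as an .m3u file.
sed -e 's|^/ZFSNAS/music/|/home/ryan/Music/|' \
    -e 's/^Part File Combined$/#EXTM3U/' \
    playlist.csv > playlist.m3u
```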

Then either save the file with the extension “.m3u” or close the file and replace the “.csv” with “.m3u”. You now have an M3U playlist!

The last step is to import the file into your other application. In my case, this is Clementine. First, open a new Playlist in Clementine. This is important because, when you import the M3U playlist, it will get added to whichever playlist you currently have active.

Then click on “Music” -> “Open File.” At the bottom of that window, choose “M3U playlists.” Find your playlist and select it. Then hit “Open.”

And there you have it – your playlist exported from Plex into Clementine:

If you want to save that playlist in Clementine, just click the “Star” at the top of the Playlist and it will get added to your Playlists. You can also save the playlist as an M3U file and Clementine will format it nice and proper.

Some notes…

This is a bit cumbersome. It sure would be nice of the folks at Plex to add a playlist import/export feature; it really wouldn’t be that hard. A simple walk-through process that first imports the M3U playlist and then either (a) searches the audio library for matching files or (b) cuts off the absolute location of the audio files and replaces it with the location of the audio library would do it.

I’ve noticed that not every song gets properly imported. I’ll probe that issue but I’m guessing it’s songs that have weird characters in their names that are problematic.

This is obviously just a one-way export feature. Right now, I have only figured out a way to export a playlist from Plex using a third application. I’d really love to be able to import playlists into Plex. If anyone has any thoughts on that, I welcome them. I typically create all my playlists in Clementine. For now, I have to go the other way for custom-built playlists that I want in both locations.


Linux Server – Adding a New Domain and WordPress Site – Linode VPS – Ubuntu 18.04

I have a VPS server with Linode that I use to host about a dozen different websites. All but one of them run on WordPress. Occasionally, I get a request to add another domain and website to the server. It’s not terribly time consuming to do, but it does require a number of specific steps and I never remember all of them. To help me remember them (and perhaps to help someone else), I’m putting together this tutorial.

Step 1: Purchase the new domain. For this tutorial, I’m going to be adding a domain my brother-in-law requested: flyingyoga.us. He’s a pilot but is getting certified as a yoga instructor and wanted to set up a simple website. I use Google Domains to purchase and manage all my domains. So, step 1, decide on what company you want to use to purchase your domains and purchase your domain.

My domains in Google Domains.

Step 2: Change the DNS settings on the new domain to point to Linode’s nameservers. If using Google domains, click on the domain then click on DNS:

Select “DNS” to change the nameservers.

Under Name servers, select “Use custom name servers” and enter “ns1.linode.com” for the first Name server then add a second and enter “ns2.linode.com.” Hit Save.

Here are the custom name servers in Google Domains.

Step 3: You now need to add the domain and then add domain records to your Linode account. Login to your account and select Domains.

Click on “Create Domain”:

click “Create Domain” at the top right.

Enter the domain and the admin email address. Unless you need to do something special with the Records, select “Insert default records from one of my Linodes,” then select your Linode:

Basic domain creation information.

Assuming you don’t need anything special, the defaults should take care of this step and you’re done.

Step 4: Since I already have about a dozen websites running on the server, I’m not going to go into detail on how to install a LAMP stack – Apache, MySQL, and PHP. There are a number of tutorials for doing so. Instead, my next step is to SSH into my server (obviously replace “user” and the IP address with your own) and create the directories where the files for the new website will be hosted.

ssh user@192.168.0.1

Whenever I log into my server, I use that as an opportunity to run updates.

sudo apt-get update
sudo apt-get upgrade

Next, navigate to the directory where you store your public-facing web files. On my server, it’s /var/www/

cd /var/www/

In that directory, I’m going to create a new folder for the domain:

mkdir flyingyoga.us

I’m then going to navigate inside that folder and create two additional folders: (1) a “public” folder where the actual files go and (2) a “logs” folder for access and error logs.

cd flyingyoga.us
mkdir logs public
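
As a small shortcut, the last two steps can be collapsed into a single command with mkdir’s -p flag, which also creates the parent folder if it doesn’t exist yet:

```shell
# -p creates the parent folder and both subfolders in one command
# (and doesn't error if any of them already exist).
mkdir -p flyingyoga.us/logs flyingyoga.us/public
```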

Now, navigate back to the main directory where you store all your website files and change the ownership of the directories:

cd ..
sudo chown -R www-data:www-data flyingyoga.us/

This allows Apache to access the information in the folders and since Apache is effectively the web server, that’s important. Don’t skip this step.

Step 5: Download the latest version of WordPress, untar it, and move the files into the public folder. Where you download and untar it isn’t actually all that important, as we’re going to move the files to the public folder right after.

sudo wget http://wordpress.org/latest.tar.gz
tar -xvf latest.tar.gz
mv wordpress/* /var/www/flyingyoga.us/public/
rmdir wordpress/
rm latest.tar.gz

Just to clarify the above commands: the first line downloads the latest version of WordPress. The second unpacks it into a folder called “wordpress.” The third line moves all of the files that were just unpacked into the newly created public folder for the domain. The fourth line deletes the now-empty “wordpress” folder, and the fifth line deletes the tar.gz download (nice and clean server).

Step 6: It would be nice if we were done, but we’ve got a ways to go yet. Next up, let’s create a MySQL database and a corresponding user. This can be done from the command line as well, but I prefer using phpMyAdmin.

You’ll need to look up where to find phpMyAdmin on your server.

Navigate to “User accounts” and scroll down to “Add user account.” Click on that and you’ll get this screen:

Click “add user account” to set up a new database and user.

Obviously, choose an appropriate user name. I typically let phpMyAdmin generate a nice strong password. Just below the “Login Information” box is a box that says “Database for user account.” Check “Create database with same name and grant all privileges.” Don’t check the “Global privileges – Check all” option below that; it would give this user access to all databases on the server, which is not a good security choice. Write down or copy the username and password to a text file, as you’ll need them later. When you’ve got all that done, scroll down to the bottom and select “Go.” That will create your database and the user with the password you wrote down (you wrote it down or copied it to a text file, right?). You now have the database WordPress is going to use for your website.

Here’s where you can create a new user and database in PHPMyAdmin

Step 7: Next up is creating the website information for Apache. Back to the SSH shell. Navigate to where the Apache websites are stored on your server:

cd /etc/apache2/sites-available

In there, you should see the configuration files for all the websites on your server. Since I already have sites configured, I typically just copy one of the existing configuration files and then edit it according to the new domain:

cp otherdomain.com.conf flyingyoga.us.conf
cp otherdomain.com-le-ssl.conf flyingyoga.us-le-ssl.conf

Since I’m using SSL on all my domains, I have two configuration files per domain. The above commands copy existing configuration files and create new ones for my new domain. Here’s the contents for the first one: flyingyoga.us.conf:

<Directory /var/www/flyingyoga.us/public>
    Require all granted
</Directory>
<VirtualHost *:80>
        ServerName flyingyoga.us
        ServerAlias www.flyingyoga.us
        ServerAdmin ryantcragun@gmail.com
        DocumentRoot /var/www/flyingyoga.us/public

        ErrorLog /var/www/flyingyoga.us/logs/error.log
        CustomLog /var/www/flyingyoga.us/logs/access.log combined

RewriteEngine on
RewriteCond %{SERVER_NAME} =www.flyingyoga.us [OR]
RewriteCond %{SERVER_NAME} =flyingyoga.us
RewriteRule ^ https://%{SERVER_NAME}%{REQUEST_URI} [END,NE,R=permanent]
</VirtualHost>

And the contents for the second one – flyingyoga.us-le-ssl.conf:

<IfModule mod_ssl.c>
<Directory /var/www/flyingyoga.us/public>
    Require all granted
</Directory>
<VirtualHost *:443>
        ServerName flyingyoga.us
        ServerAlias www.flyingyoga.us
        ServerAdmin ryantcragun@gmail.com
        DocumentRoot /var/www/flyingyoga.us/public

        ErrorLog /var/www/flyingyoga.us/logs/error.log
        CustomLog /var/www/flyingyoga.us/logs/access.log combined

Include /etc/letsencrypt/options-ssl-apache.conf
SSLCertificateFile /etc/letsencrypt/live/ryantcragun.com/fullchain.pem
SSLCertificateKeyFile /etc/letsencrypt/live/ryantcragun.com/privkey.pem
</VirtualHost>
</IfModule>

Once you have these updated, you can then tell Apache to load the site:

sudo a2ensite flyingyoga.us.conf
sudo systemctl reload apache2

The first line tells Apache to enable the site. The second line reloads Apache so the change takes effect. NOTE: You don’t have to enable the SSL configuration file (i.e., flyingyoga.us-le-ssl.conf).

Step 8: Since I have SSL encryption on all of my websites using LetsEncrypt, there is an extra step. This is always the one I forget. Since I’m adding a domain, I have to use the following commands to add a domain to my existing domains on the single SSL certificate that I use for all of my domains. First, let me find the name of my current certificate:

sudo certbot certificates

That provides me the name of my current certificate as well as a list of all of my other domains. Next, I copy all of the existing domains so I can update the certificate and add the two new ones I need to add. The command to then get a new certificate with the added domains is:

sudo certbot --expand -d existing.com,www.existing.com,flyingyoga.us,www.flyingyoga.us

Assuming everything works, this will expand the existing certificate with the new domain and issue a new SSL certificate with all the domains. (NOTE: no spaces between the domains.)
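One easy way to guarantee there are no stray spaces is to build the -d list from a shell array and join it with commas. Here’s a small bash sketch using the domain names from the example above (the echo makes this a dry run – remove it to actually invoke certbot):

```shell
# Domains to include on the certificate (from the example above)
domains=(existing.com www.existing.com flyingyoga.us www.flyingyoga.us)

# Join the array with commas; "${domains[*]}" joins on the first char of IFS
IFS=,
dlist="${domains[*]}"
unset IFS

echo sudo certbot --expand -d "$dlist"   # dry run; remove the echo to execute
```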

Step 9: Now you can test your server. I always do this by creating a simple HTML file with the classic “Hello World” in it and putting that into the public directory for the new website:

<!DOCTYPE html>
<html>
    <head>
        <title>Test Page</title>
    </head>
    <body>
        <p>Hello World!</p>
    </body>
</html>

Save that as “index.html” and put it in the public folder. Now, navigate to the new domain in your browser and, hopefully, you’ll see “Hello World!”

Yeah. Website is working!

If you saw “Hello World!” in your browser, everything is working. It’s always a good idea to check that the https redirect is working as well – so you know that your SSL certificate is good and working. The easiest way to do that is to click on the lock icon in your browser and then check the certificate information.
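You can also check the redirect from the terminal. Sending a HEAD request with curl -sI http://flyingyoga.us should come back with a 301 and a Location header pointing at the https URL. As a sketch, here’s the grep filter applied to a canned response (the header text below is illustrative, not captured from a live server):

```shell
# What a healthy HTTP-to-HTTPS redirect response looks like (illustrative):
response='HTTP/1.1 301 Moved Permanently
Location: https://flyingyoga.us/'

# In practice you would pipe curl's output instead:
#   curl -sI http://flyingyoga.us | grep -i '^location'
printf '%s\n' "$response" | grep -i '^location'
```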

Step 10: Now, the final step to install WordPress – change the name of the index.html file to something else (e.g., “index.html-test”), then reload the page. You should now see the installation guide for WordPress that will ask for your database name, username, and password:

This is the last step to install WordPress on a new domain.

Enter the database information from Step 6 above. Assuming everything goes according to plan, WordPress will populate the database with the relevant fields and your site will be ready:

Here’s the backend of my new WordPress installation.


GPRENAME – removing text up to a space using regular expressions

For a recent project, I came into possession of hundreds of photos, each named according to whatever naming scheme the photographer’s phone had in place, but ending with ” – FirstName LastName.jpg.”

The photos had varying characters before the names, which required some creative thinking about how to batch rename them.

I wanted to keep the users’ names, but everything before that could go. With hundreds of photos, I realized that it would be way too time-consuming to edit each file name by hand. Enter “gprename,” an app for bulk renaming files in Linux, and regular expressions.

Regular expressions are strings of characters that can be used to search for patterns. I’m no expert, but I’m familiar with them and realized that this could be a solution to my problem. A quick Google search brought up this Stack Overflow post with the answer. I then applied it using gprename, and a two-hour task was cut down to five minutes.

Here’s how I did it…

Open gprename and navigate to the folder where the files are located.

This is one of about 14 folders I needed to clean up.

On the “Replace / Remove” tab, in the “Replace” box, enter: “^.*?\s+”. In the “with” box that follows, you can just leave that blank. Make sure you check the box next to “Regular expression.”

Here’s where you enter the regular expression.

Here’s what the expression means:
^: match from the beginning of the line
.*?\s+: lazily match as few characters as possible, up to and including the first run of whitespace

Hit “Preview” and you should see the result:

A preview of the changes.

When you’re ready, click “Rename” and all of the characters prior to the first space will be wiped out:

The cleaned up files.

In my case, I then did some additional renaming (removing the ” – ” before the names and adding dates after), but this should get you close to your goal.


ZFS Snapshot Management

With my new fileserver/NAS, I am using a ZFS raid array for my file storage. One aspect of the ZFS file system that I didn’t really investigate all that closely before I installed it was the snapshot feature. ZFS snapshots basically store a “picture” or “image” of all the files on the system at a specific point in time. The logic behind snapshots is that you can then roll the filesystem back to that point in time, in case there was a problem.

This is, of course, a great feature for a variety of reasons. If you accidentally deleted a file, you could recover it this way. If someone hacked your server and encrypted your files, you could recover them this way (assuming the snapshots aren’t also encrypted).

However, there is a small issue here. In order for the snapshots to work, they basically have to store all of the files that were in existence when the snapshot was made. In other words, so long as you retain a snapshot, you will also have to retain all of the files associated with that snapshot, which will take up all the corresponding space as well. Technically, once you delete a file, it won’t show up in your file directory, but the file system has to retain a copy of it as part of the snapshot.

It was this issue that finally got me to look more closely at snapshot management. My fileserver has four 4TB hard drives in it, which gives me roughly 8TB of storage. I can always upgrade the drives to bigger drives if needed, but I don’t have 8TB of files at this point. I have around 4TB, but my fileserver was reporting less and less available space, even after I deleted some files I didn’t want/need. I was confused until I realized that I had set up some automatic snapshots when I first set up the fileserver. Those snapshots included many of the files I had deleted, so no space was freed up when I deleted those files. I had to delete the snapshots in order to free up the space. It took some doing, but I finally figured out how to manage snapshots a bit better.

First, here’s how to list your existing snapshots:

zfs list -t snapshot -r [ZFSpoolnamehere]

If you have any snapshots, you should see a list like this:

This shows the snapshots I have created on my fileserver.

To create a snapshot, you can use this code:

sudo zfs snapshot -r [ZFSpoolnamehere]@[nameofsnapshot]

The above command runs as root (sudo) and calls zfs, telling it to create a snapshot. The “-r” flag tells zfs to make a recursive snapshot, which means it will snapshot each dataset in your ZFS pool. You then name the ZFS pool, followed by the “@” symbol and the name of your snapshot. Here’s my actual command:

sudo zfs snapshot -r ZFSNAS@2021_01_02

This command creates a snapshot of my entire ZFS pool and names it with the date it was created – January 2nd, 2021.

Once I’ve created my snapshot, I can see it in the list of snapshots I have created with the command above.

Of course, there is also the issue of deleting snapshots. This was the issue I needed to address that got me started on ZFS snapshots – I had some very old snapshots that were taking up terabytes of space on my fileserver. To delete a snapshot, you can use the following code:

sudo zfs destroy [ZFSpoolname]/[directoryname]@[nameofsnapshot]

Here’s an example of my actual code used to delete an old snapshot:

sudo zfs destroy ZFSNAS/files@2020_12_25

This command tells zfs to delete the snapshot of the “files” dataset in the ZFSNAS pool that was made on December 25, 2020 (@2020_12_25). When I check the list of snapshots after deleting that one, it’s no longer there and the space that was reserved for it is freed up.

Luckily, I haven’t needed to roll back to a snapshot at this point. However, it is nice knowing that my filesystem on my NAS has that capability.
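A full rollback also isn’t the only recovery option: each snapshot is browsable, read-only, through the hidden “.zfs/snapshot” directory at the root of the dataset, so a single deleted file can simply be copied back out. Here’s a sketch – the helper function and the paths in the example are my own illustration, and it assumes the dataset is mounted in the usual place:

```shell
# Copy one file back out of a snapshot instead of rolling the whole
# dataset back. Snapshots live under <mountpoint>/.zfs/snapshot/<name>/.
restore_from_snapshot() {
    local mnt="$1" snap="$2" file="$3"   # mountpoint, snapshot name, relative path
    cp "$mnt/.zfs/snapshot/$snap/$file" "$mnt/$file"
}

# e.g. (hypothetical paths):
#   restore_from_snapshot /ZFSNAS/files 2020_12_25 photos/img001.jpg
```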

For more information on working with ZFS snapshots, see here.

Finally, there is the issue of how many snapshots to keep and when to make them. Given my use case – a fileserver/NAS that stores my music, my video collection, my old photos (not the current year’s photos), and my old files – I actually don’t need particularly frequent snapshots. I decided that one snapshot per month would be sufficient. You can automate this with a cron job, but I simply added it to my calendar to create a snapshot once a month. That way, I can also delete the corresponding snapshot from 12 months prior. Basically, I am keeping one year’s worth of snapshots, with one snapshot per month. This gives me a year’s worth of backups to roll back to should I need to, while regularly freeing up any space taken up by particularly old snapshots.
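If you do go the cron route, the monthly routine can be sketched as a short script. This is a dry run – the echos print the zfs commands instead of running them – and it assumes GNU date, the “ZFSNAS” pool from the examples above, and date-stamped snapshot names created recursively:

```shell
# Monthly rotation: take today's snapshot, drop the one from a year ago.
POOL=ZFSNAS
new=$(date +%Y_%m_%d)
old=$(date -d "12 months ago" +%Y_%m_%d)

echo sudo zfs snapshot -r "$POOL@$new"    # dry run; remove the echos
echo sudo zfs destroy -r "$POOL@$old"     # to actually run the commands

# A crontab entry for 3am on the 1st of each month might look like
# (hypothetical script path):
#   0 3 1 * * /usr/local/bin/zfs-snapshot-rotate.sh
```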

I can imagine that other use scenarios would require different frequencies of snapshots for a zfs system (e.g., a production server or someone who uses a zfs raid as their primary file storage). Since my server is primarily used for storing old files and relatively unchanging collections, once a month snapshots are sufficient.


PVC paintball gun holder

I play paintball frequently enough that I finally broke down and made myself a paintball gun holder. Is it a requirement for paintball? Of course not! Is it a convenient place to put your gun between games while you fill the hopper and work on it? Yes!

I initially tried a different design but didn’t like it. This was my second attempt and I’m quite pleased with it. I found several YouTube videos showing how to make PVC paintball gun holders. They showed what I needed to see, but in almost all of them, the people creating the videos didn’t measure anything – they were just winging it. I’m not one for winging it. So, after successfully building my own, here are detailed instructions.

First, supplies. You’ll need:

  • PVC cutting tool (you can use a hacksaw if needed)
  • dry erase marker
  • ruler or tape measure
  • PVC fittings – all in 3/4 inch PVC (unless otherwise indicated):
    • 4 – 90 degree elbows
    • 4 – 45 degree elbows
    • 3 – tees
    • 1 – 3/4″ to 1″ tee
  • 3/4 PVC pipe cut to the following lengths:
    • 1 – 24 centimeter piece (9 1/2 inches)
    • 4 – 22 centimeter pieces (8 2/3 inches)
    • 1 – 16 centimeter piece (6 1/3 inches)
    • 2 – 11 centimeter pieces (4 1/3 inches)
    • 4 – 10 centimeter pieces (4 inches)
    • 1 – 5.5 centimeter piece (2 1/4 inches)
    • 2 – 4.5 centimeter pieces (1 3/4 inches)
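If you’d like to double-check the metric-to-imperial conversions in the cut list yourself (1 inch = 2.54 cm), a quick shell loop will do it:

```shell
# Print the cut-list lengths in both units (1 in = 2.54 cm)
for cm in 24 22 16 11 10 5.5 4.5; do
    awk -v c="$cm" 'BEGIN { printf "%g cm = %.2f in\n", c, c/2.54 }'
done
# first line: 24 cm = 9.45 in
```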

Here’s a photo of all the pieces labeled:

Once you cut all of the pieces of PVC, then it’s just a matter of assembling them in the right way. Here are the pieces assembled and labeled:

The hardest part of this build was cutting the 3/4″ x 1″ tee in half. That isn’t actually required. You could also just use another 3/4″ tee and turn it sideways, placing it under the barrel. If you do, the length of PVC that holds it up will need to be slightly shorter.


Long Term Storage of Gmail/Email using Mozilla’s Thunderbird (on Linux)

I have email going back to 2003. There have been times when I have actually benefited from my email archive. Several times, I have gone back 5 or more years to look for a specific email and my archive has saved me. However, my current approach to backing up my email is, let’s say, a borderline violation of Google’s Terms of Service. Basically, I created a second email account that I use almost exclusively (not exclusively – I use it for a couple of other things) for storing the email from my primary Gmail account. However, Google has recently changed its storage policies for their email accounts, which has made me a little nervous. And, of course, I’m always wary about storing my data with Google or other large software companies.

Since I have already worked out a storage solution for old files that is quite reliable, moving my old email to that storage solution makes sense. (FYI, my storage solution is to run a local file server in my house with a RAID array so I have plenty of space and local backups. Certain folders on the file server are backed up in real-time to an online service so I also have a real-time offsite backup of all my important files. In other words, I have local and offsite redundancy for all my important files.)

I’m also a fan of Open Source Software (OSS) and would prefer an OSS solution to any software needs I have. Enter Mozilla’s Thunderbird email client. I have used Thunderbird for various tasks in the past and like its interface. I wondered whether there was a way to have Thunderbird archive my email so that I could easily retrieve it should I need to. “Easily” might be a bit of a stretch here, but the email is retrievable using this solution. And it’s free, doesn’t violate any terms of service, and works with my existing data backup approach.

So, how does it work? And how did I set it up?

First, install Thunderbird and set up whatever online email account you want to backup. I’m not going to go through those steps as there are plenty of tutorials for both of them. I’m using a Gmail account for this.

Once you have your email account set up in Thunderbird, click on the All Mail folder (assuming it’s a Gmail account) and let Thunderbird take the time it needs to synchronize all of your email locally. With well over one hundred thousand emails in my online archive, it took the better part of a day to synchronize everything.

I had over 167,000 emails in my online archive account.

Once you’ve got your email synchronized, right-click on “Local Folders” and select “New Folder.” I called my new folder “Archive.”

Technically, you could store whatever emails you want to store in that folder. However, you’ll probably want to create a subfolder in that folder with a memorable name (e.g., “2015 Work”). Trust me, it will be beneficial later. I like to organize things by year. So, I created subfolders under the Archive folder for each year of emails I wanted to back up. You can see that I have a folder in the above image for the year 2003, which is the oldest email I have (that was back when I had a Hotmail address… I feel so dirty admitting that!).

The next step is weird, but it’s the only way I’ve been able to get Thunderbird to play “nice” with Gmail. Open a browser, log in to the Gmail account you’re using, and empty your trash. Trust me, you’ll want to have the trash empty for this next step.

Now, returning to Thunderbird, select the emails you want to transfer to your local folder and drag them over the “Trash” folder in your Gmail account in Thunderbird. This won’t delete them but it will assign them the “Trash” tag in Gmail. Once they have all been moved into the Trash, select them again (CTRL+A) and drag them into the folder you created to store your archived emails. In the screenshot below, I’m moving just a few emails from 2003 to the Trash to test this process:

Once the transfer is complete, click on the Local Folder where you transferred the emails to make sure they are there:

And there are the emails. Just where I want them. This also means that you have a local copy of all your emails in a folder exactly where you want them. At this point, you have technically made a backup of all the email you wanted to backup.

To remove them from your Gmail account, you need to do one additional thing. Go back to the browser where you have your Gmail account open, click on the Trash, and empty the Trash. When you do, the emails will no longer be on Gmail’s server. The only copy is on your local computer.

Now for the next tricky part (I didn’t say this was perfectly easy, but it’s pretty easy): Thunderbird doesn’t store the Local Folders emails in an obvious location, but you can find the location easily enough. Right-click the Local Folder where you are archiving the emails and select “Properties.”

You’ll get this window:

The Location box shows where the folder lives on disk. On Linux, make sure you have “view hidden files” turned on in your file manager (I’m using Dolphin). The path starts in your home folder, goes into the hidden “.thunderbird” folder, then into a randomly generated profile folder that Thunderbird creates. Inside that folder, look for the “Mail/Local Folders” folder. Or, simply:

/home/ryan/.thunderbird/R$ND0M.default-release/Mail/Local Folders/
I have opened all the relevant folders on my computer in Dolphin so you can see the file structure.

Since I created an additional folder, there are two files in my Archive.sbd folder that contain all the emails I have put into that folder: “2003” and “2003.msf.” You can learn more about the contents of those files here, but, basically, the file with no file extension stores the actual contents of the emails. The .msf file is basically an index of the emails (I was able to open them both in Kate, a text editor, and read them fine). In short, you won’t have a bunch of files in your archive. You’ll have two files – one with the contents of the emails and one that serves as an index of the emails that Thunderbird can read. These are the two files that you’ll want to backup.
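If you’d rather skip clicking through Properties, you can also locate the profile’s Local Folders directory from a terminal. A small sketch – the function name is my own; it just wraps find:

```shell
# Find the "Local Folders" directory inside a Thunderbird profile tree.
# The profile folder name is randomly generated, so search for it instead.
find_local_folders() {
    find "$1" -type d -name 'Local Folders' 2>/dev/null
}

# e.g.: find_local_folders "$HOME/.thunderbird"
```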

I’ll admit, this was the part that scared me. I needed to know that I could just take those two files, move them to wherever I wanted to ultimately store them and then, when needed, move them back into my Thunderbird profile and read them just fine. So, I tested it repeatedly.

Here’s what I did. First, I clicked on my backup folder in Thunderbird to make sure all of the emails were there:

I then went into the Local Folders directory and moved the files to where I want to back them up.

I then clicked on a different folder in Thunderbird and then hit F5 to make Thunderbird refresh the Local Folders. It took a couple of minutes for Thunderbird to refresh the folder, but eventually, it did. Then, I selected the 2003 Archive folder again and the emails were gone:

This is what I expected. The emails are in the “2003.msf” and “2003” files on my backup server. Now for the real test. I copied the two backed-up files back to the Archive.sbd folder in my Thunderbird profile, selected the parent folder in Thunderbird, and hit F5 to refresh the folders again. It took a minute for the folder to refresh, but eventually I clicked on the 2003 folder and…

The emails are back!

It worked!!!

What this means is that you can put all of the email you want to back up into a folder; that folder is stored in your Thunderbird profile. You can then find the relevant .msf file and its corresponding data file, move them wherever you want for storage and, if needed, move them back to your Thunderbird profile (or, technically, any Thunderbird profile using the same folder structure) and you’ll still have all of your email.

This may not be the most elegant method for backing up your email, but it’s free, it’s relatively simple and straightforward, and it works reliably. Your email is not in a proprietary format but rather in an open format that can actually be read in a simple text editor. Of course, it’s easiest to read it in Thunderbird, but you have the files in a format that is open and secure.

BONUS:

If you don’t think you’re going to need to access your email on a regular basis, you can always compress the files before storing them. Given that most email is text, you’ll likely see a savings of close to 50% if space is at a premium (once I moved all of my 2003 email to storage, I compressed it and saw a savings of 60%). This will add a small amount of time to accessing the email, as you’ll have to extract it from the compressed format, but it could mean pretty substantial space savings depending on how many emails you’re archiving.
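The compression step can be sketched as a small helper. The function name is my own, and it assumes the pair of files described above (e.g., “2003” and “2003.msf”) sit together in one directory:

```shell
# Bundle one year's mail data file and its .msf index into a tar.gz.
compress_year() {
    local dir="$1" year="$2"
    tar -czf "$dir/$year-mail.tar.gz" -C "$dir" "$year" "$year.msf"
}

# e.g.: compress_year "/path/to/Local Folders/Archive.sbd" 2003
# restore later with: tar -xzf 2003-mail.tar.gz -C "<Local Folders path>"
```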

EXTRA BONUS:

This is also a way to move emails between computers. I ended up using this approach to move my email archives to my laptop so I could go through my old email while I’m watching TV at night and delete all the emails I’ll never want in the future (you could of course do that with the email online before you archive it). I’m pretty good about deleting useless emails as I go, but I don’t catch them all. With the .msf file and its accompanying data file, I was able to transport my email archives to whichever computer I wanted and modify the file, then return it to my file server for long term storage.
