Linux – Batch Convert .wav to .flac

I ran into a weird issue the other day where SoundConverter, a GUI for converting audio files, was generating FLAC files that my audio player couldn’t see. I’m still not sure exactly what the problem was, but in trying to solve it, I wrote a command to batch convert a folder of .wav (WAV) files to .flac (FLAC) files using FFmpeg. I figured I’d put it up here for future me and in case anyone else finds it useful.

First, navigate in a terminal/console to the folder containing the audio files you want to convert to FLAC. Then run the following command:

for i in *.wav; do ffmpeg -i "$i" -c:a flac "${i%.*}.flac"; done

Breaking this code down… The first part, “for i in *.wav”, starts the loop by telling the computer to loop through every .wav file in the folder. The second part tells the computer what to do (“do”) with each of those files: run ffmpeg on each file (“$i”), convert it to FLAC (“-c:a flac”), and save the output under the same name as before but with the .flac extension (“${i%.*}.flac”). When every file has been processed, the loop is done.
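The only cryptic piece is “${i%.*}”, a shell parameter expansion that strips the shortest trailing match of “.*” – i.e., the file extension. A minimal sketch you can run to see what it does (the filename here is just a made-up example):

```shell
#!/bin/sh
# ${i%.*} removes the shortest suffix matching ".*" (the extension),
# so appending ".flac" swaps the extension. "my song.wav" is a
# hypothetical filename; the quotes keep the space intact.
i="my song.wav"
echo "${i%.*}.flac"   # -> my song.flac
```

Because “$i” is quoted everywhere in the loop, filenames with spaces are handled correctly too.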


LibreOffice – KDE Integration Package for Skinning/Color Management

Super simple note for future reference and for anyone else running into this issue. I upgraded my version of LibreOffice in Kubuntu 20.10 to a pre-release version to fix a bug, which put me on version 7.1.4.2.

When I purged the version of LibreOffice that shipped with Kubuntu 20.10 (6.X – I don’t remember exactly which version), that also purged the KDE integration package that helps LibreOffice interface with the window manager and makes skinning LibreOffice much easier. As a result, when I would change my Global theme in System Settings, only part of the LibreOffice windows would change to reflect that theme. In particular, the sidebar on the right side of LibreOffice wasn’t changing colors with the rest of the window. This was making it impossible for me to see the names of different styles and also looked really weird.

The solution was easy. In Synaptic, search for the “libreoffice-kde” integration package and install it. Now, when I change my Global Theme, the LibreOffice windows change to reflect that.

In short, if you purge the version of LibreOffice in the repositories in KDE and install a newer version, make sure you also install the libreoffice-kde integration package or your LibreOffice windows will behave strangely.


Linux Server – tar files and directories

I recently had to move all my websites from one virtual private server (VPS) to another. When I only had a few such websites, I was okay with using SFTP (via Filezilla) to download all of the files and then upload them to the VPS. It took a while but I was okay with that. With about a dozen websites I host on my VPS, that was just not an option. It was time to finally try to figure out how to use tar more effectively on my server.

Why tar? Tar is similar to zip in that it combines lots of files and/or directories into a single file – a tarball or archive – which makes it much easier and faster to move. Computers can move one large file faster than lots of small files of the same combined size, because every individual file transfer carries its own overhead. When you have to move roughly 8 gigabytes and hundreds of thousands of files, it is far easier to put all of those files into a single tarball than to move them individually. That’s why tar was way better for what I was trying to accomplish.

Sidenote: Tar is just a format for packaging all of the files/directories together. Typically, the files are also compressed using something like gzip, leading to a tarball with the extension “tar.gz.” It is possible to just combine the files without compressing them as will be detailed below.

Why was I so reluctant to use tar to move my files? Because my prior experience with tar had resulted in several tarbombs, which are a nightmare. Basically, I had unpacked a tarball into the wrong directory, which scattered thousands of files in the wrong place and forced me to figure out, file by file, which to keep and which to get rid of. That took more time than I saved by using tar. And since I was doing everything via SSH on a remote VPS, there was no easy way to clean up the mess. Even so, it was time to use tar. So, I bit the bullet and figured out how to do this. This post is my guide on how to use tar carefully and avoid tarbombs, which suck. Note, GNU has an entire manual on tar that goes into much greater detail than my post.

What do all those letters mean?

Since we’re using a console or terminal to package all these files and not a GUI, we have to tell tar what to do with single-letter options. That’s what the letters are for. Not to worry, though – I’m not going to go through all of them; there are literally dozens of options. I’ll keep this simple. The basic structure of the tar command is as follows:

tar [letters to specify what to do] [output tarball] [files or directories to add]

I’ll give some specific examples of tar commands below, but, first, let’s cover what the letters we’ll be using mean.

  • c – “create”: This tells tar to create an archive, in contrast to modifying a tarball. (Note: This letter alone cannot create an archive.)
  • f – “file”: This tells tar that the next argument is the archive file to read from or write to. (Note: Just c and f are enough to create a tarball, as will be shown below.)
  • v – “verbose”: This tells tar to show the progress of the archiving process by showing which files have been added to the archive.
  • z – “gzip”: This tells tar to compress the files in the tarball using gzip.
  • j – “bzip2”: This tells tar to compress the files in the tarball using bzip2.
  • x – “extract”: This tells tar to extract the files in a tarball.

A couple of important notes here regarding the letters. First, with GNU tar the order of the letters does not matter (only “f” takes an argument – the archive name – so it is conventional, but not required, to put it last). Second, letters that do conflicting things should not be combined in the same command (e.g., “z” and “j” should not both be used).

Basic Examples:

Creating Tarballs

I put a couple of my papers and some images into a folder to use to demonstrate basic uses of tar. Here’s a screenshot so you can see what we’re working with:

First up, I’ll create a tarball of the entire test folder but with no compression and use that to show a couple of important elements of the process. Here’s the code:

cd /home/ryan/Desktop
tar cf test.tar tar.test.directory/

The code above changes my directory (“cd”) to the parent folder (Desktop). Then it calls tar and tells it to create (“c”) an archive file (“f”) called “test.tar” containing the entire test directory, “tar.test.directory/”. What this code doesn’t do is compress the files. This can be seen by comparing the size of the folder – 5.4 MB – with the size of the newly created tarball – also 5.4 MB.

Quickly, try switching the order of the letters from “cf” to “fc” and you’ll see that the outcome is the same. Also, if you re-run the same command, you’ll notice that tar will not warn you that it is going to overwrite the previous tarball – it simply does it.

One more item to note that is actually really important, particularly when thinking about extracting an archive, is the folder structure inside the archive. If I open the archive using Ark, you’ll see that, because I navigated to the parent directory in my terminal before creating the tarball, the folder structure inside the archive is from the folder where I created the tarball (in this case, the Desktop directory).

I’m going to create the same tarball but I’m not going to navigate to the parent folder and instead will tell tar which folder to compress and where to store the new tarball:

tar cf /home/ryan/Desktop/test.tar /home/ryan/Desktop/tar.test.directory/

Functionally, this is the exact same command and archives all the same content. However, look at the directory structure inside the tarball when I open it with Ark:

That tar creates a different directory structure depending on the code you use is important, particularly if you want to avoid tarbombing. Why is this important? When you extract the tarball, the same directory structure that is in the archive will be created. If you don’t know what the directory structure is inside the tarball and extract it, that can result in all sorts of problems, some of which I’ll address at the end of this post.

The lesson here: the folder structure inside the tarball is based on the path you enter in the command that creates it. So, if you don’t want lots of nested folders in your tarball, navigate to the folder directly above the one you’re archiving and create the tarball from there. Otherwise, the tarball will reproduce the full folder structure of the path you specified.
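One way to stay out of trouble is to list a tarball’s contents before extracting it, using the “t” (list) letter. Here’s a minimal sketch with throwaway names showing that the stored paths match what you typed at creation time:

```shell
#!/bin/sh
# Create a small test directory and archive it with a relative path,
# then list the archive's contents ("t") without extracting anything.
mkdir -p demo.dir
echo "hello" > demo.dir/file.txt
tar cf demo.tar demo.dir/
tar tf demo.tar   # prints demo.dir/ and demo.dir/file.txt
```

If the listing had shown absolute or deeply nested paths instead, you’d know to extract somewhere safe first.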

Let’s add two additional letters: “v” for verbose so we can see what is added to the tarball and “z” to compress the files that are being added to the tarball. Here’s the code (remember, the order of the letters doesn’t matter):

tar cfvz test.compressed.tar.gz tar.test.directory/

This command calls the tar software, tells it to create (“c”) a tarball and add files (“f”), show the progress (“v”) and compress the files (“z”), saving the resulting tarball as “test.compressed.tar.gz” and the last piece is what should be put into the tarball. Note the modification to the file name to indicate that the tarball has been compressed – “.gz”. This extension is usually a reflection of the compression format. So, if you go with bzip2, it would be “.bz2” instead of “.gz”. Here’s how it looks in the terminal.

The resulting tarball is only 3.4 MB, illustrating that the contents were compressed as they were added to the archive.

Variations on the above command might include replacing the “z” with “j” to compress using bzip2 or “J” to compress using “xz.” Additionally, appending a “p” to the letters will preserve the permissions of the files and directories that are added to the tarball (though that is done by default, so it isn’t necessary to include it).

Modifying a Tarball

After creating a tarball, it’s possible you may need to change the tarball by either adding files to it or deleting files inside it. Here’s how to do each of those.

To add a file to a tarball, use “r” and, of course, “f”, like this:

tar rf test.tar file-to-add.odt

This command calls tar and tells it to append (“r”) a file to the archive (the “f” is again needed to specify the archive file). The tarball being modified comes next, “test.tar”, and the file to be added comes last, “file-to-add.odt”.

If the tarball you want to add a file to is compressed, files cannot be added. Attempting this will give you the error “tar: Cannot update compressed archives.” Instead, you need to decompress the archive, make whatever changes you want, then recompress it.
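If you really do need to modify a compressed tarball, the workaround is a three-step round trip: decompress, append, recompress. A sketch, assuming a gzip-compressed tarball and hypothetical file names:

```shell
#!/bin/sh
# Workaround sketch: tar cannot update a compressed archive directly,
# so decompress, append, and recompress. File names are hypothetical.
gunzip test.compressed.tar.gz              # leaves test.compressed.tar
tar rf test.compressed.tar file-to-add.odt # append works on the plain tar
gzip test.compressed.tar                   # back to test.compressed.tar.gz
```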

It is also possible to delete a file from a tarball. This doesn’t involve a letter but a long option, “--delete”. The structure of the command is a bit different. Here’s what your code might look like:

tar --delete --file=test.tar file-to-delete.odt

This command calls tar and tells it you want to delete a file (“--delete”). You then have to tell it which tarball, which is done with the “--file=” option, and which file, which is added at the end.

If you aren’t sure what files you have in your tarball, you can always list them using the “--list” option:

tar --list --file=test.tar

This is particularly helpful if you are looking for a specific file to remove from a tarball as it will also tell you if the file is in a subfolder inside the archive. If so, you would need to modify the code to take that into account:

tar --delete --file=test.tar "folder1/folder2/folder with space/file-to-delete.odt"

Do note that, just like adding files to a compressed tarball, deleting files from a compressed archive isn’t possible.

Extracting a Tarball

Here’s how to extract the files inside a tarball. The basic structure is the same, though there are some things to consider. First, to extract a tarball, replace the “c” used above when creating it with an “x”, which means “extract.” So, with our uncompressed tarball, we would extract it using the following command:

tar xf test.tar

This calls tar; the “x” tells it to extract the files from the archive (“f”), and “test.tar” is the name of the tarball being extracted. Note that this code doesn’t specify where to extract the tarball. A target directory can be specified with the “-C” option; if it’s left off, the tarball will be extracted into whichever directory the console/terminal is currently in. Extracting to a specific directory would look like this:

tar xf test.tar -C /extract/into/this/directory
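One caveat: tar will not create the target directory for you – if it doesn’t exist, GNU tar exits with an error. A small sketch (the directory name here is just an example):

```shell
#!/bin/sh
# "-C" changes into the target directory before extracting, but the
# directory must already exist, so create it first. Names are examples.
mkdir -p extracted
tar xf test.tar -C extracted
```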

Of course, if your tarball is compressed and/or you want to see the progress of extracting the tarball, add the corresponding letters. If the tarball is a “tar.gz” you’ll need to add the “z” to decompress the files and “v” to see the progress, like this:

tar xfzv test.tar.gz

Lastly, keep in mind that the folder structure inside the tarball will be replicated when the files are extracted. What does that mean? If you extract a tarball into a folder called “home” but inside the tarball the files you want to extract are stored inside nested folders like “home/ryan/Desktop/archive”, when you extract the tarball, the files will end up in “home/home/ryan/Desktop/archive.” See below for what to do in such a situation.

Rules for Using tar:

Rule #1: Pay attention to the folder or directory structure when creating and extracting a tarball.

Rule #2: You cannot add or delete files inside a compressed tarball, only in an uncompressed tarball.

Bonus:

Let me give a specific situation that will illustrate this for a Linux server (I may be speaking from experience here). Imagine you just downloaded the latest version of WordPress and want to extract it into the following directory: /var/www/example.com/public/. First, check the directory structure of the tarball. In this case, the fine folks who package WordPress archive the files inside a folder called “wordpress.” As a result, when you extract the tarball, it is going to create a folder inside the “public” folder called “wordpress” when, in fact, you want the files to be directly stored into the folder “public.” That’s a problem. How do you move those files? The Linux move command, “mv” can do it, but it’s kind of tricky how:

mv /var/www/example.com/public/wordpress/* .

This tells the OS to move all the files in the wordpress directory (the asterisk “*” matches everything inside it) into the current directory (the period “.”). Note that this assumes your terminal is already sitting in the “public” folder – the period means “the directory I’m in right now.” Once you do this, you should then remove the now-empty wordpress directory:

rm -R wordpress/
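As an alternative, GNU tar can drop the archive’s top-level folder during extraction with the “--strip-components” option, which avoids the move-and-remove dance entirely. A sketch, with “public/” standing in for the real /var/www/example.com/public/ path:

```shell
#!/bin/sh
# --strip-components=1 removes the leading "wordpress/" path component
# from every file as it is extracted, so the files land directly in
# the target directory. Paths here are stand-ins for the real ones.
mkdir -p public
tar xzf latest.tar.gz -C public/ --strip-components=1
```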


Linux Server – Adding a New Domain and WordPress Site – Linode VPS – Ubuntu 18.04

I have a VPS server with Linode that I use to host about a dozen different websites. All but one of them run on WordPress. Occasionally, I get a request to add another domain and website to the server. It’s not terribly time consuming to do, but it does require a number of specific steps and I never remember all of them. To help me remember them (and perhaps to help someone else), I’m putting together this tutorial.

Step 1: Purchase the new domain. For this tutorial, I’m going to be adding a domain my brother-in-law requested: flyingyoga.us. He’s a pilot but is getting certified as a yoga instructor and wanted to set up a simple website. I use Google Domains to purchase and manage all my domains. So, step 1, decide on what company you want to use to purchase your domains and purchase your domain.

My domains in Google Domains.

Step 2: Change the DNS settings on the new domain to point to Linode’s nameservers. If using Google domains, click on the domain then click on DNS:

Select “DNS” to change the nameservers.

Under Name servers, select “Use custom name servers” and enter “ns1.linode.com” for the first Name server then add a second and enter “ns2.linode.com.” Hit Save.

Here are the custom name servers in Google Domains.

Step 3: You now need to add the domain and then add domain records to your Linode account. Login to your account and select Domains.

Click on “Create Domain”:

click “Create Domain” at the top right.

Enter the domain and the admin email address. Unless you need to do something special with the Records, select “Insert default records from one of my Linodes.” then select your Linode:

Basic domain creation information.

Assuming you don’t need anything special, the defaults should take care of this step and you’re done.

Step 4: Since I already have about a dozen websites running on the server, I’m not going to go into detail on how to install a LAMP stack (Linux, Apache, MySQL, and PHP) – there are a number of tutorials for doing so. Instead, my next step is to SSH into my server (obviously, replace “user” and the IP address with your own) and create the directories where the files for the new website will be hosted.

ssh user@192.168.0.1

Whenever I log into my server, I use that as an opportunity to run updates.

sudo apt-get update
sudo apt-get upgrade

Next, navigate to the directory where you store your public-facing web files. On my server, it’s /var/www/

cd /var/www/

In that directory, I’m going to create a new folder for the domain:

mkdir flyingyoga.us

I’m then going to navigate inside that folder and create two additional folders: (1) a “public” folder where the actual files go and (2) a “logs” folder for access and error logs.

cd flyingyoga.us
mkdir logs public

Now, navigate back to the main directory where you store all your website files and change the ownership of the directories:

cd ..
sudo chown -R www-data:www-data flyingyoga.us/

This allows Apache to access the information in the folders and since Apache is effectively the web server, that’s important. Don’t skip this step.

Step 5: Download the latest version of WordPress and untar it into the public folder. Where you download it and untar isn’t actually all that important as we’re going to move it to the public folder shortly.

sudo wget https://wordpress.org/latest.tar.gz
tar -xvf latest.tar.gz
mv wordpress/* /var/www/flyingyoga.us/public/
rmdir wordpress/
rm latest.tar.gz

Just to clarify the above commands: the first line downloads the latest version of WordPress. The second unpacks it into a folder called “wordpress.” The third line moves all of the files that were just unpacked into the newly created public folder for the domain. The fourth line deletes the now-empty “wordpress” folder, and the fifth deletes the downloaded tar.gz file (nice and clean server).

Step 6: It would be nice if we were done, but we’ve got a ways to go yet. Next up, let’s create a MySQL database with a corresponding user. This can be done from the command line as well, but I prefer using phpMyAdmin.

You’ll need to look up where to find phpMyAdmin on your server.

Navigate to “User accounts” and scroll down to “Add user account.” Click on that and you’ll get this screen:

Click “add user account” to set up a new database and user.

Obviously, choose an appropriate user name. I typically let phpMyAdmin generate a nice strong password. Just below the “Login Information” box is a box that says “Database for user account.” Check “Create database with same name and grant all privileges.” Don’t check below that where it says “Global privileges – Check all.” That would give this user access to all databases you have on the server. Not a good security choice. Write down or copy the username and password to a text file as you’ll need it later. When you’ve got all that done, scroll down to the bottom and select “Go.” That will create your database, the user, with the password you wrote down (you wrote it down or copied it to a text file, right?). You now have the database WordPress is going to use for your website.

Here’s where you can create a new user and database in PHPMyAdmin

Step 7: Next up is creating the website information for Apache. Back to the SSH shell. Navigate to where the Apache websites are stored on your server:

cd /etc/apache2/sites-available

In there, you should see the configuration files for all the websites on your server. Since I already have sites configured, I typically just copy one of the existing configuration files and then edit it according to the new domain:

cp otherdomain.com.conf flyingyoga.us.conf
cp otherdomain.com-le-ssl.conf flyingyoga.us-le-ssl.conf

Since I’m using SSL on all my domains, I have two configuration files per domain. The above commands copy existing configuration files and create new ones for my new domain. Here’s the contents for the first one: flyingyoga.us.conf:

<Directory /var/www/flyingyoga.us/public>
    Require all granted
</Directory>
<VirtualHost *:80>
        ServerName flyingyoga.us
        ServerAlias www.flyingyoga.us
        ServerAdmin ryantcragun@gmail.com
        DocumentRoot /var/www/flyingyoga.us/public

        ErrorLog /var/www/flyingyoga.us/logs/error.log
        CustomLog /var/www/flyingyoga.us/logs/access.log combined

RewriteEngine on
RewriteCond %{SERVER_NAME} =www.flyingyoga.us [OR]
RewriteCond %{SERVER_NAME} =flyingyoga.us
RewriteRule ^ https://%{SERVER_NAME}%{REQUEST_URI} [END,NE,R=permanent]
</VirtualHost>

And the contents for the second one – flyingyoga.us-le-ssl.conf:

<IfModule mod_ssl.c>
<Directory /var/www/flyingyoga.us/public>
    Require all granted
</Directory>
<VirtualHost *:443>
        ServerName flyingyoga.us
        ServerAlias www.flyingyoga.us
        ServerAdmin ryantcragun@gmail.com
        DocumentRoot /var/www/flyingyoga.us/public

        ErrorLog /var/www/flyingyoga.us/logs/error.log
        CustomLog /var/www/flyingyoga.us/logs/access.log combined

Include /etc/letsencrypt/options-ssl-apache.conf
SSLCertificateFile /etc/letsencrypt/live/ryantcragun.com/fullchain.pem
SSLCertificateKeyFile /etc/letsencrypt/live/ryantcragun.com/privkey.pem
</VirtualHost>
</IfModule>

Once you have these updated, you can then tell Apache to load the site:

sudo a2ensite flyingyoga.us.conf
sudo systemctl reload apache2

The first line tells Apache to enable the site. The second line reloads Apache so the change takes effect. NOTE: You don’t have to enable the SSL configuration file (i.e., flyingyoga.us-le-ssl.conf) yourself.

Step 8: Since I have SSL encryption on all of my websites using LetsEncrypt, there is an extra step. This is always the one I forget. Since I’m adding a domain, I have to use the following commands to add a domain to my existing domains on the single SSL certificate that I use for all of my domains. First, let me find the name of my current certificate:

certbot certificates

That provides me the name of my current certificate as well as a list of all of my other domains. Next, I copy all of the existing domains so I can update the certificate and add the two new ones I need to add. The command to then get a new certificate with the added domains is:

certbot --expand -d existing.com,www.existing.com,flyingyoga.us,www.flyingyoga.us

Assuming everything works, this will expand the existing certificate with the new domain and issue a new SSL certificate with all the domains. (NOTE: no spaces between the domains.)

Step 9: Now you can test your server. I always do this by creating a simple html file with the classic “Hello World” in it and putting that into the public directory for the new website:

<!DOCTYPE html>
<html>
    <head>
        <title>Test Page</title>
    </head>
    <body>
        <p>Hello World!</p>
    </body>
</html>

Save that as “index.html” and put it in the public folder. Now, navigate to the new domain in your browser and, hopefully, you’ll see “Hello World!”

Yeah. Website is working!

If you saw “Hello World!” in your browser, everything is working. It’s always a good idea to check that the https redirect is working as well – so you know that your SSL certificate is good and working. The easiest way to do that is to click on the lock icon in your browser and then check the certificate information.

Step 10: Now, the final step to install WordPress – change the name of the index.html file to something else (e.g. “index.html-test”) then reload the page. You should now see the installation guide for WordPress that will ask for your database name, username, and password:

This is the last step to install WordPress on a new domain.

Enter the database information from Step 6 above. Assuming everything goes according to plan, WordPress will populate the database with the relevant fields and your site will be ready:

Here’s the backend of my new wordpress installation.


Long Term Storage of Gmail/Email using Mozilla’s Thunderbird (on Linux)

I have email going back to 2003. There have been times when I have actually benefited from my email archive. Several times, I have gone back 5 or more years to look for a specific email and my archive has saved me. However, my current approach to backing up my email is, let’s say, a borderline violation of Google’s Terms of Service. Basically, I created a second email account that I use almost exclusively (not exclusively – I use it for a couple of other things) for storing the email from my primary Gmail account. However, Google has recently changed its storage policies for their email accounts, which has made me a little nervous. And, of course, I’m always wary about storing my data with Google or other large software companies.

Since I have already worked out a storage solution for old files that is quite reliable, moving my old email to that storage solution makes sense. (FYI, my storage solution is to run a local file server in my house with a RAID array so I have plenty of space and local backups. Certain folders on the file server are backed up in real-time to an online service so I also have a real-time offsite backup of all my important files. In other words, I have local and offsite redundancy for all my important files.)

I’m also a fan of Open Source Software (OSS) and would prefer an OSS solution to any software needs I have. Enter Mozilla’s Thunderbird email client. I have used Thunderbird for various tasks in the past and like its interface. I was wondering if there was a way to have Mozilla archive my email in a way that I can easily retrieve the email should I need to. Easily might be a bit of a stretch here, but it is retrievable using this solution. And, it’s free, doesn’t violate any terms of service, and works with my existing data backup approach.

So, how does it work? And how did I set it up?

First, install Thunderbird and set up whatever online email account you want to backup. I’m not going to go through those steps as there are plenty of tutorials for both of them. I’m using a Gmail account for this.

Once you have your email account set up in Thunderbird, click on the All Mail folder (assuming it’s a Gmail account) and let Thunderbird take the time it needs to synchronize all of your email locally. With the over one hundred thousand emails I had in my online archive, it took the better part of a day to do all the synchronizing.

I had over 167,000 emails in my online archive account.

Once you’ve got your email synchronized, right-click on “Local Folders” and select “New Folder.” I called my new folder “Archive.”

Technically, you could store whatever emails you want to store in that folder. However, you’ll probably want to create a subfolder in that folder with a memorable name (e.g., “2015 Work”). Trust me, it will be beneficial later. I like to organize things by year. So, I created subfolders under the Archive folder for each year of emails I wanted to back up. You can see that I have a folder in the above image for the year 2003, which is the oldest email I have (that was back when I had a Hotmail address… I feel so dirty admitting that!).

The next step is weird, but it’s the only way I’ve been able to get Thunderbird to play “nice” with Gmail. Open a browser, log in to the Gmail account you’re using, and empty your trash. Trust me, you’ll want to have the trash empty for this next step.

Now, returning to Thunderbird, select the emails you want to transfer to your local folder and drag them over the “Trash” folder in your Gmail account in Thunderbird. This won’t delete them but it will assign them the “Trash” tag in Gmail. Once they have all been moved into the Trash, select them again (CTRL+A) and drag them into the folder you created to store your archived emails. In the screenshot below, I’m moving just a few emails from 2003 to the Trash to test this process:

Once the transfer is complete, click on the Local Folder where you transferred the emails to make sure they are there:

And there are the emails. Just where I want them. This also means that you have a local copy of all your emails in a folder exactly where you want them. At this point, you have technically made a backup of all the email you wanted to backup.

To remove them from your Gmail account, you need to do one additional thing. Go back to the browser where you have your Gmail account open, click on the Trash, and empty the Trash. When you do, the emails will no longer be on Gmail’s server. The only copy is on your local computer.

Now, the next tricky part (I didn’t say this was perfectly easy, but it’s pretty easy). Thunderbird doesn’t store the Local Folders emails in an obvious location, but you can find the location easily enough. Right-click the Local Folder where you are archiving the emails and select “Properties.”

You’ll get this window:

Basically, the Location box shows the folder’s location – that is, where to find the Local Folder where your email is stored. On Linux, make sure that you have “view hidden files” turned on in your file manager (I’m using Dolphin). The location is the home folder, followed by your user folder, then the hidden “.thunderbird” folder, followed by a randomly generated profile folder that Thunderbird creates. Inside that folder, look for the “Mail/Local Folders” folder. Or, simply:

/home/ryan/.thunderbird/R$ND0M.default-release/Mail/Local Folders/
I have opened all the relevant folders on my computer in Dolphin so you can see the file structure.

Since I created an additional folder, there are two files in my Archive.sbd folder that contain all the emails I have put into that folder: “2003” and “2003.msf.” Basically, the file with no extension stores the actual contents of the emails, and the .msf file is an index of them (I was able to open both in Kate, a text editor, and read them fine). In short, you won’t have a bunch of files in your archive – just two: one with the contents of the emails and one that serves as an index Thunderbird can read. These are the two files that you’ll want to back up.
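The extension-less file is in mbox format, in which each message begins with a “From ” separator line. That means you can sanity-check an archive from the command line – a sketch, with “2003” as the archive file name from above:

```shell
#!/bin/sh
# Each message in an mbox file begins with a line starting "From ",
# so counting those lines counts the messages. "2003" is the
# extension-less archive file described above.
grep -c '^From ' 2003
```

Comparing that count to what Thunderbird shows is a quick way to confirm the backup is complete.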

I’ll admit, this was the part that scared me. I needed to know that I could just take those two files, move them to wherever I wanted to ultimately store them and then, when needed, move them back into my Thunderbird profile and read them just fine. So, I tested it repeatedly.

Here’s what I did. First, I clicked on my backup folder in Thunderbird to make sure all of the emails were there:

I then went into the Local Folders directory and moved the files to where I want to back them up.

I then clicked on a different folder in Thunderbird and then hit F5 to make Thunderbird refresh the Local Folders. It took a couple of minutes for Thunderbird to refresh the folder, but eventually, it did. Then, I selected the 2003 Archive folder again and the emails were gone:

This is what I expected. The emails are in the “2003.msf” and “2003” files on my backup server. Now for the real test. I copied the two backed-up files back to the Archive.sbd folder in my Thunderbird profile, selected the parent folder in Thunderbird, and hit F5 to refresh the folders again. It took a minute for the folder to refresh, but eventually I clicked on the 2003 folder and…

The emails are back!

It worked!!!

What this means is that you can put all of the email you want to back up into a folder; that folder is stored in your Thunderbird profile. You can then find the relevant .msf file and its corresponding data file, move them wherever you want for storage and, if needed, move them back to your Thunderbird profile (or, technically, any Thunderbird profile using the same folder structure) and you’ll still have all of your email.

This may not be the most elegant method for backing up your email, but it’s free, it’s relatively simple and straightforward, and it works reliably. Your email is not locked in a proprietary format but stored in an open format that can actually be read in a simple text editor. Of course, it’s easiest to read it in Thunderbird, but you have the files in a format that is open and future-proof.

BONUS:

If you don’t think you’re going to need to access your email on a regular basis, you can always compress the files before storing them. Given that most email is text, you’ll likely see savings of close to 50% if space is at a premium (once I moved all of my 2003 email to storage, I compressed it and saw a savings of 60%). This adds a small amount of time to accessing the email, as you’ll have to extract it from the compressed format first, but it could mean pretty substantial space savings depending on how many emails you’re archiving.
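Here’s roughly what that looks like with tar, using the file names from my 2003 example. (The first two lines just create dummy stand-ins for the real mbox/index pair so the commands can be run as-is – with your actual archive, skip them.)

```shell
# Stand-in files for the real mbox/index pair from the example above
printf 'From - Wed Jan 01 00:00:00 2003\nSubject: Hello\n' > 2003
printf 'dummy index data\n' > 2003.msf

# Compress the pair into a single archive for storage...
tar -czf 2003-archive.tar.gz 2003 2003.msf

# ...and extract it again when you need the email back
tar -xzf 2003-archive.tar.gz
```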

EXTRA BONUS:

This is also a way to move emails between computers. I ended up using this approach to move my email archives to my laptop so I could go through my old email while I’m watching TV at night and delete all the emails I’ll never want in the future (you could of course do that with the email online before you archive it). I’m pretty good about deleting useless emails as I go, but I don’t catch them all. With the .msf file and its accompanying data file, I was able to transport my email archives to whichever computer I wanted and modify the file, then return it to my file server for long term storage.


Linux – Bulk UnRAR

If you’re not familiar with RAR files, they are like ZIP files. RAR is an archive format.

I had a collection of more than 150 RAR files in a single folder I needed to unrar (that is, open and extract from the archive). Doing them one at a time via KDE’s Ark software would work, but it would have taken a long time. Why spend that much time when I could automate the process?

Enter a loop bash script in KDE’s Konsole:

for i in *.rar; do unrar x "$i"; done

Here’s how the command works. The “for i in” part starts the loop (note: you can use any variable name here). The “*.rar” component indicates that we want the loop to run through all the RAR files in the directory, regardless of their names. The “do” keyword tells the loop what command to run. The “unrar x "$i"” component runs unrar on each file; the “x” switch extracts the files with their full paths, preserving the directory structure stored inside the archive (as opposed to “e”, which dumps everything into the current directory). The final piece of the loop after the second semi-colon – “done” – closes the loop.
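If your archives don’t contain their own top-level folders, a small variation keeps the contents of each archive separate by extracting into a folder named after it. This is just a sketch – it relies on unrar accepting a destination directory as its final argument:

```shell
for i in *.rar; do
  [ -e "$i" ] || continue     # skip if no .rar files matched
  mkdir -p "${i%.rar}"        # a folder named after the archive
  unrar x "$i" "${i%.rar}/"   # extract into that folder
done
```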

It took about 20 minutes to run through all the RAR files, but I used that time to create this tutorial. Doing this by hand would have taken me hours.


Linux: Batch Convert .avi files to .mp4/.mkv

I’ve been trying to clean up my video library since building my latest NAS. In the process, I found a number of .avi files; AVI is an older container format that isn’t widely used these days. While a video loses some quality every time it is re-encoded, I was willing to risk the slight reduction to convert the few remaining files I had into more modern formats.

I initially tried converting the files using HandBrake. But given the number I needed to convert, I decided pretty quickly that I needed a faster method for this. Enter stackoverflow.

Assuming you have all of your .avi video files in a single directory, navigate to that directory in a terminal and you can use the following single line of code to iterate through all of the .avi files and convert them to .mp4 files:

for i in *.avi; do ffmpeg -i "$i" "${i%.*}.mp4"; done

In case you’re interested, the code is a loop. The first part (“for i in *.avi;”) starts the loop, telling the computer to look for every file with the file extension .avi. The second part tells the computer what to do with each file with that extension – convert it to a .mp4 file with the same name. The last piece – “done” – indicates the end of the loop.
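One tweak I find useful when converting a big folder: skip files that already have a converted counterpart, so the loop can safely be re-run after an interruption. A sketch using the same pattern as above:

```shell
for i in *.avi; do
  [ -e "$i" ] || continue           # nothing matched – skip
  [ -f "${i%.*}.mp4" ] && continue  # already converted on a previous run
  ffmpeg -i "$i" "${i%.*}.mp4"
done
```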

Of course, this code could also be used to convert any other file format into a different format by replacing the .avi or .mp4 file extensions in the appropriate places. For instance, to convert all the .avi files to .mkv, the code would look like this:

for i in *.avi; do ffmpeg -i "$i" "${i%.*}.mkv"; done

Or if you wanted to convert a bunch of .mp4 files into .mkv files, you could do this:

for i in *.mp4; do ffmpeg -i "$i" "${i%.*}.mkv"; done

BONUS:

If you have .avi files in a number of subfolders, you’ll want to use this script:

find . -type f -exec ffmpeg -i {} {}.mp4 \;

To use it, navigate in a terminal to the top-level folder, then execute this command. It will search through all the subfolders, find all the files in those subfolders, and convert them all to .mp4 files (note that the output name simply has .mp4 appended, so video.avi becomes video.avi.mp4).

Of course, if you have a mixture of file types in the folders, you’ll want a variation of this command that searches for just the files of a certain type. To do that, use this command:

find . -name '*.avi' -exec ffmpeg -i {} {}.mp4 \;

This command will find all the files with the extension .avi and convert them all to .mp4 files using ffmpeg.

And, if you’re feeling particularly adventurous, you could search for multiple file types and convert them all:

find . \( -name '*.avi' -or -name '*.mkv' \) -exec ffmpeg -i {} {}.mp4 \;

This code would find every file with the extension .avi or .mkv and convert it to .mp4 using ffmpeg.

NOTE: This command uses the default conversion settings of ffmpeg. If you want more fine-tuned conversions, you can always check the options available in converting video files using ffmpeg.

BONUS 2

If you want to specify the codec to use in converting the files, that is a little more complicated. For instance, if I want to use H265 instead of H264 as my codec, I could use the following code to convert all of my .avi files in a folder into .mkv files with H265 encoding:

for i in *.avi; do ffmpeg -i "$i" -c:v libx265 -crf 26 -preset fast -c:a aac -b:a 128k "${i%.*}.mkv"; done

By default, ffmpeg does not pass the audio through – it re-encodes it to the output container’s default audio codec. If you want to convert the video to a new codec but keep the original audio stream untouched, add “-c:a copy”:

for i in *.avi; do ffmpeg -i "$i" -c:v libx265 -crf 26 -preset fast -c:a copy "${i%.*}.mkv"; done

This will convert the video to H.265 but retain whatever audio was included with the video file (with ffmpeg’s default stream selection, that’s the audio stream with the highest number of channels).

Additional information on the various settings for H.265 is available here. Some quick notes: the number after “-crf” is basically an indicator of quality, but it is inverted. Lower numbers improve the quality; higher numbers reduce it. Depending on what I’m encoding, I vary this from 24 (higher quality) to 32 (lower quality). This will affect the resulting file size. If time is not a concern, you can also change the value after “-preset.” The options are ultrafast, superfast, veryfast, faster, fast, medium, slow, slower, and veryslow. Slower presets compress more efficiently – smaller files at the same quality – but take substantially longer to encode.
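If you’re not sure which CRF value suits your material, one approach is to encode a short sample at a few values and compare the resulting sizes and quality yourself. This is only a sketch – “sample.mkv” is a placeholder for a clip of your own, and the guard just keeps the loop from erroring if the file isn’t there:

```shell
sample="sample.mkv"   # placeholder – point this at a short clip of your own
if [ -f "$sample" ]; then
  for crf in 24 28 32; do
    # encode the first 60 seconds at each CRF value
    ffmpeg -i "$sample" -t 60 -c:v libx265 -crf "$crf" -preset fast \
      "${sample%.*}-crf${crf}.mkv"
  done
  ls -lh "${sample%.*}"-crf*.mkv   # compare the resulting file sizes
fi
```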

If you are doing bulk conversions and the pattern after “-name” is unquoted, the shell may expand it before find sees it (especially once matching files exist in the current directory), and you may get the error: “find: paths must precede expression.” The solution is to put the pattern in single quotes, like this:

find . -name '*.mkv' -exec ffmpeg -i {} -c:v libx265 -crf 26 -preset fast {}-new.mkv \;


Teaching with a Mask – Headset Solution on Linux

My university, the University of Tampa, is doing what it can to continue teaching in-person classes (many are hybrid) during the COVID-19 pandemic. To facilitate that, our IT folks installed webcams and microphones in all of our classrooms. Unfortunately, the classrooms all have Windows-based desktops that don’t include all the software I need for educating my students (e.g., LibreOffice, Zotero, RStudio, etc.). I have always just plugged my laptop into an HDMI cable and then projected directly from it.

However, now that I’m teaching in a mask that substantially muffles my voice, I need a microphone that projects through the speakers in the classroom so the students in the back can hear me, particularly when the A/C is on. I tried using the provided microphone on the first day of class and ended up having to hold it for the entire class, and it still cut out regularly. Our IT folks suggested we could start a Zoom meeting on the desktop, connect our laptop to it, and then display our laptop in the Zoom meeting and project that onto the screen so we can use our laptop and a microphone. That seemed like a kludgy way to solve the problem.

I figured there had to be a better way. So, I did a little thinking and a little research and found one. The answer – a bluetooth headset designed for truckers! Yep, truckers to the rescue!

If I could get a bluetooth headset to connect to my computer and then project the sound through the classroom’s speakers via bluetooth, I could continue to use my laptop to teach my class while still having a microphone to project my mask-muffled voice. Admittedly, this required a couple of hours of testing and some trial and error, but I got it working. Now, I have my own microphone set up for the classroom (I bring it with me) and can continue to use my laptop instead of the Windows-based PC.

So, how did I do it?

First, get yourself a bluetooth headset. I bought the Mpow M5 from Amazon. This is the perfect style headset as it has just one earphone, meaning I can still hear perfectly fine when students are talking to me in the classroom.

Second, connect the headset to your laptop. I’m going to assume your laptop has built-in bluetooth. Mine, a Dell Latitude 7390, does. Pairing it with my laptop was super easy. (If you don’t have bluetooth built-in, there are cheap USB bluetooth dongles you can buy as well.)

Third, the Linux part. I installed the package “blueman,” which provides a GUI interface for working with bluetooth devices. I didn’t know if this would be necessary, but, it turns out, it definitely was. Once you have your headset connected, open the blueman GUI and you’ll see this:

The next part stymied me for a while. Initially, my computer detected the headset as just headphones and not a headset with a microphone. I didn’t know why. Eventually, I got lucky and right-clicked on the Mpow M5 device in blueman and got a context window with the option I needed:

When you right-click on the device, you can select “Audio Profile” and then “Headset Head Unit”. The default, for some reason, was “High Fidelity Playback.” Once I did that, Linux detected the microphone.
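For what it’s worth, the same profile switch can also be done from the terminal with pactl, PulseAudio’s command-line tool. This is only a sketch of machine-specific configuration – the bluez card name below is a placeholder you’d replace with the real name from the first command’s output, and the commands are guarded so they’re skipped on systems without pactl:

```shell
if command -v pactl >/dev/null 2>&1; then
  # List the cards PulseAudio knows about – the headset shows up as a "bluez_card"
  pactl list cards short

  # Switch the card to the headset profile so the microphone becomes available
  # (placeholder card name – this will fail until you substitute your own)
  pactl set-card-profile bluez_card.XX_XX_XX_XX_XX_XX headset_head_unit || true
fi
```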

Before you continue, make sure you have plugged in the HDMI cable as you’ll need that connected for the next part.

Next up was making sure all my audio settings were correct. This, too, took some trial and error. The settings window you need is PulseAudio Volume Control (pavucontrol), which goes by different names in various versions of Linux. Regardless, here’s what the window looks like:

I’ll go over the settings for each tab, though the first two – Playback and Recording – won’t have anything in them until you start up OBS Studio, which I’ll cover shortly.

In the Configuration tab, you should now see your Mpow M5 connected as a Headset and you should see Built-in Audio. This may not say “Digital Stereo (HDMI) Output” to begin with. There is a drop down menu there. Click on it and you’ll see various options:

The default is “Analog Stereo Duplex”. Click on that drop down and select the HDMI Output. (NOTE: I typically use just “Digital Stereo (HDMI) Output” without the “+ Analog Stereo Input”. I have the wrong one highlighted above, but it should still work.)

Here’s what the Input Devices tab should look like:

And here’s how the Output Devices tab should look:

You probably will need to change one thing on the Output Devices tab. Make the HDMI output the default (click the little blue icon). You may also need to mute the Mpow M5 on this screen. Either way, you want to make sure that the HDMI output is where the sound is going.

Now, we need another piece of software: OBS Studio, which is free and open-source and works on all platforms (so Windows and Mac users who want to do this can follow a fairly similar setup). Install OBS Studio, then open it up.

The software is very good at detecting everything on your computer. Here are the settings I had to adjust. In the bottom right corner of the software, click on “Settings” and you’ll get a window with various tabs (tabs are on the right). The one you need is “Audio”. Click on that and you’ll see this:

In the Devices section, you’ll need to change the following: “Desktop Audio” should be set to “Built-in Audio Digital Stereo (HDMI)”. “Desktop Audio 2” should be disabled. “Mic/Auxiliary Audio” should be set to “Mpow M5.” All the others should be disabled. Then, under “Advanced,” where it says “Monitoring Device,” select “Monitor of Mpow M5.” Then click Apply or OK.

Close that window then click on “Edit” -> “Advanced Audio Properties” and you’ll get this window:

In that window, where you see “Audio Monitoring,” click on the drop down option for “Mic/Aux” and set it to “Monitor and Output.” This tells the operating system that you want to monitor the audio from your Mpow M5 headset and output it through the speakers. Select “Close” and, assuming you’ve done everything correctly, you should now hear your voice coming out of the speakers. Woot!

A little more detail may be helpful, though. Back to Linux. If you return to the Pulse Audio window, you’ll now see that there is information in the remaining two tabs. Here’s what you should see in the Recording tab:

And here’s what you should see in the Playback tab:

And here is how I look with my headset and a mask:

Some notes:

I haven’t tested this with Zoom yet. I probably will to make sure that the audio also goes through Zoom.

OBS Studio can actually be used to record your presentation as well. It’s designed for streaming gamers, but works just as well for screen capture. So, if you need to record your class, just use OBS Studio to record your audio and your screen during the class.


Kubuntu – Audio CD Ripping

I mostly buy digital audio these days. My preferred source is bandcamp as they provide files in FLAC (Free Lossless Audio Codec). However, I ended up buying a CD recently (Last Night’s Fun by Scartaglen) as there wasn’t a digital download available and, in the process, I realized that there are lots of options for ripping the audio from a CD on Linux and quite the process to get the files ripped, tagged, properly named, and stored in my library. This is my attempt to summarize my process.

Format/Codec

First, you need to decide in what format you want the audio from the CD. As noted, I prefer FLAC these days. Given how relatively inexpensive storage is, I no longer need to scrimp on space for the most part. If space was an issue, ripping the files to mp3 format at, say, 192 kbps at a variable bit rate would probably be the optimum balance between decent quality and small size. But I prefer the best quality sound with no real regard for the size of the resulting files. It helps that I store all my music on a dedicated file server that runs Plex. That solves two problems: I have lots of space and Plex will transcode the files if I ever need that done (if, for example, I want to store the music on my phone and want it in a different format). So, my preferred file format is FLAC. (Another option is OGG, but I find not as many audio players work as well with OGG.)

There is another issue that I recently ran into: single audio files with cue sheets. Typically, people want their audio in individual files for each song. However, if you want to accurately represent an audio CD, the best approach is to rip the audio as a single file with a corresponding cue sheet. The cue sheet keeps an exact record of the tracks from the CD. With the resulting two files, the audio CD can be recreated and burned back to a CD. I have no real intention of burning the audio back to a CD (I want everything digital so I can store it on my file server), but it’s good to know about this option. Typically, those who opt for this approach use one of two formats, .flac or .ape, for storing the audio and .cue for storing the timing of the tracks. The .ape format is proprietary, however, so it is definitely not my preferred approach.

As a quick illustration of how file format is related to size, I ripped my demonstration CD, Last Night’s Fun by Scartaglen, into a single FLAC file and a single mp3 file (at 192 kbps using a variable bit rate) and put the resulting files into the same folder so you can see the size difference:

As you can see, the FLAC rip resulted in a file that was 222.9 MB compared to the mp3 file that is only 49.4 MB. The FLAC file is about 4.5 times the size of the mp3 file. A higher-quality mp3 rip at 320 kbps at a constant bit rate resulted in a 54.8 MB file. A pretty good estimate would be that the FLAC format is going to be somewhere between 3 to 5 times the size of an mp3 file. Thus, if space is an issue but you want good quality, ripping your music to the highest quality mp3 (320 kbps) is probably your best option. If space isn’t an issue and you care more about quality, FLAC is the way to go.

NOTE: I also ripped the disc to OGG and the file size was 38 MB.

Ripping Software

First, if you’re planning on ripping to FLAC on Linux, you’ll need to install FLAC. It is not installed in most distributions by default. This can be done easily from the terminal:

sudo apt-get install flac

Without FLAC installed, the software below won’t be able to rip to FLAC.

K3b

K3b is installed in Kubuntu 20.04 by default and is, IMO, a good interface for working with CDs and DVDs. When I inserted my demonstration CD into my drive, Kubuntu gave me the option of opening the disk in K3b. When I did, K3b automatically recognized the CD, grabbed the information from a CDDB, and immediately gave me options for ripping the CD:

When you click on “Start Ripping,” you get a new window:

In this new window, you have a bunch of options. You can change the format (Filetype). With the FLAC codec installed, the options listed are: WAVE, Ogg-Vorbis, MPEG1 Layer III (mp3), Mp3 (LAME), or Flac. You can obviously change the Target Folder as well. K3b also gives you the option of creating an m3u playlist and the option “Create single file” with “Write cue file,” which is where you can create the single file and cue file from the CD as noted above. There are also options for changing the naming structure and, under the Advanced tab, options for how many times you want to retry reading the CD tracks. K3b is pretty fully featured and works well for ripping audio CDs.

Clementine

My preferred music player in Linux is Clementine. I have used a number of music players over the years (e.g., Banshee, Rhythmbox, Amarok), but Clementine has the best combination of features while still being easy to use. Clementine is in the repositories and can easily be installed via synaptic or the terminal:

sudo apt-get install clementine

Clementine also has the ability to rip audio CDs. Once your CD is inserted, click on Tools -> Rip audio CD:

You’ll get this window, which is similar to the ripping window in K3b:

If the information is available in a CDDB, Clementine will pull that information in (as it did for my demonstration CD). You then have a bunch of options for the Audio format: FLAC, M4A, MP3, Ogg Flac, Ogg Opus, Ogg Speex, Ogg Vorbis, Wav, and Windows Media Audio. The settings for each of these can be adjusted in the “Options” box. One clear advantage of Clementine over K3b is that you can readily edit the titles of the tracks. Another advantage of Clementine over K3b is that you could import the files directly into your music library.

Ripping from a Cue Sheet

Another scenario I have run into on Linux is having a single file for the audio from a CD with a corresponding .cue sheet (the file is typically in the FLAC format, but I have also run into this in .ape format). I used to immediately turn to Flacon, a GUI that helped rip the single file into individual tracks. However, I have had mixed success with Flacon working lately (as of Kubuntu 20.04, I couldn’t get it to work). Never fear, of course, because Flacon is really just a GUI for tools that can be used in the terminal.

To split a single FLAC file with a corresponding .cue sheet into the individual tracks, you’ll need to install “shntool“:

sudo apt-get install shntool

(NOTE: It’s also a good idea to install the suggested packages, “cuetools,” “sox,” and “wavpack” but not required.) Assuming you have already installed “flac” as described above, ripping a single FLAC file into the individual tracks is fairly straightforward. The easiest way is to navigate to the folder where you have the FLAC file (e.g., “audiofile.flac”) and the cue sheet (e.g., “audiofile.cue”). Then use the following command at the terminal:

shnsplit -f audiofile.cue -o flac audiofile.flac 

Breaking the command down, “shnsplit” calls the split mode of the “shntool” package. The “-f” option specifies a file from which to read the split points – in this case, the cue sheet. The “-o” option specifies the output format (here, “flac”), and the last argument is the single FLAC file that you want to split.
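A couple of optional extras, which I’ll hedge since they go beyond what I’ve tested above: shntool’s “-t” option lets you name each output track from the cue sheet (“%n” is the track number, “%t” the title), and “cuetag.sh” – part of the suggested “cuetools” package – can then copy the tag data from the cue sheet onto the split files. The guard just keeps the commands from erroring if the files aren’t present:

```shell
if [ -f audiofile.cue ] && [ -f audiofile.flac ]; then
  # Split into tracks named "NN - Title" based on the cue sheet
  shnsplit -f audiofile.cue -o flac -t "%n - %t" audiofile.flac

  # Copy the tag data from the cue sheet onto the new tracks
  cuetag.sh audiofile.cue [0-9]*.flac
fi
```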

Here’s a screenshot of me splitting the single FLAC file from my demonstration CD into individual FLAC files:

If you happen to run into a single audio file in the .ape format, shntool probably won’t be able to read it so the above command won’t work. However, a simple workaround is to convert the file to flac format using ffmpeg, which can read it. Here’s the command you could use from the terminal:

ffmpeg -i audiofile.ape audiofile.flac

That command will call ffmpeg (which you probably have installed) and convert the .ape file into a .flac file which can then be split using the command above (assuming you have a corresponding cue sheet).
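And if you have several .ape files to deal with, the same loop pattern from the earlier posts works here too:

```shell
for i in *.ape; do
  [ -e "$i" ] || continue       # skip if no .ape files matched
  ffmpeg -i "$i" "${i%.*}.flac" # convert each .ape file to .flac
done
```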

Tagging Software

Let’s say I have successfully ripped my files into my desired format and now I want to tag them. There are a number of software packages that can do this, but my preferred software is Picard by MusicBrainz. Picard is open source, which is awesome, but it also interfaces with the MusicBrainz website and pulls in information that way, which means your files will get robust tagging information. If you pull in all the information from MusicBrainz, not only will the artist and album be tagged, but so too will lots of additional information, depending on how much was entered into the database by whoever added the album in the first place. (NOTE: Clementine also interfaced with MusicBrainz, but this broke in 3.1. Once it broke, I started using Picard directly, and now I realize that it has a lot more features than Clementine’s implementation, so I just use Picard. However, you could try doing this in Clementine as well.)

Again, I’ll use my demonstration CD to illustrate how this is done. I ripped the tracks into individual FLAC files above. Those tracks are completely devoid of tags – whatever software I use to play them won’t know what the audio files are. The screenshot below illustrates this. I used MediaInfo (a GUI for pulling information from audio and video files in Linux) to pull available information from the file. It shows the format and length but provides no information about the artist or album, which it would if the file had tags.

We’ll use Picard to find the album and add all the tags. First, of course, install Picard:

sudo apt-get install picard

Open the program. Now, since my files have no tag information, I’m going to click on Add Files (you can also add a folder with subfolders, which then has Picard run through multiple audio files, a great feature if you are tagging multiple albums at the same time).

You’ll get a new window where you can select the audio files you want to add. Select them and then click “Open.”

If the files have existing tags, Picard will do its best to group the tracks together and may even associate the files with the corresponding albums. In my case, it simply puts the files into the “Unclustered” category:

Since I pulled them all in from the same folder, I can select the tracks and then click on the “Cluster” button in the toolbar and Picard will cluster the files.

Clustering is a first step toward finding the corresponding album information. Once they are grouped together, the files will show up in the Cluster category:

Without any tag information, Picard is unlikely to find the corresponding album if you select the cluster and then click on “Lookup.” If there was some tag information in the files, that might be enough for Picard to find the corresponding albums, so you could just select the album and then click on “Lookup.” In this case, that won’t work. So, I’m going to use a different approach. If you right-click on the cluster, you can select “Search for similar albums.”

This gives you a window where you can enter search terms to try to find the corresponding album in the Music Brainz database. Based on the limited information it has, it will try to find a corresponding album automatically. But it likely won’t find it because there are no tags, so there is virtually no information available. Generally, I have found that I have better luck finding albums if I use the album title first followed by the artist, like this “Last Night’s Fun Scartaglen” then hit enter:

Once you find the correct album, select it and then click on “Load into Picard” at the bottom of the window.

Once you do that, the album will move to the right side of the screen. If all of the tracks are included and Picard is able to align them with the corresponding information it has from the database, the CD icon will turn gold. If there is a little red dot on the CD, that means Picard has tag information that can be saved to the individual tracks.

Click on the album and then click the “Save” button in the toolbar and the tag information will be saved to the files.

You can track the progress as tracks will turn green once the information has been successfully added to the tags.

You can also set up Picard to modify the file names when it saves information to the files. Click on “Options” then click the checkmark next to “Rename Files.”

I typically let Clementine rename my files when I import them into my Music library, so I don’t worry about this with Picard, but it is a nice option. Finally, here’s that same Mediainfo box with the tagged file showing that information about the artist and track is now included in the file:


Linux – Video Tag Editing

Not everyone may be as particular as I am about having my files organized, but I like to make sure everything is where it’s supposed to be. I make sure my music is tagged accurately. I also like to have my video files tagged correctly. What does that mean? Just like with audio files, video container formats include as part of the file some tags that provide information about the file. Those tags can include the name of the video, the year, and other information (e.g., genre, performers, etc.). If you rip files or have digital copies, it’s not really necessary to update the information in the tags. However, depending on the software you use to play your video files, having that information included in the tags substantially increases the odds that your video player will be able to figure out what the video is and will then be able to retrieve any other relevant data. Thus, having accurate metadata in your video files is nice. It’s not necessary, but nice.

I was cleaning up some video files the other day and realized that I didn’t have accurate tags in some of them. I opened the video in VLC and then clicked on “Tools” -> “Media Information”:

I wanted to see the tags in the video file.

Here’s what VLC saw:

Yep, I’m working with Frozen!

As you can see, it didn’t have any tags filled except “Encoded by.” It actually filled the title by pulling the name of the video file itself. The minimum tags that should be included in a video file are title and year, but including genre and some of the performers is always nice.

While there are a number of music file tag editors that work very well on Linux (e.g., Picard), I have struggled to find a good video metatag editor for Linux. I had one that was working for a while, Puddletag, which actually worked quite well even though it only billed itself as a tag editor for music files. However, Puddletag does not appear to be maintained anymore and, as of Kubuntu 20.04, it is no longer in Ubuntu’s repositories and the PPA does not contain the correct release file. I could try building it from source, but I wanted to see if there was a good alternative.

After googling around, I found one that seems to work quite well – Tag Editor. (You have to love the Linux community: call the software exactly what it does!) Here’s the GitHub site. And here’s where you can download an AppImage (I went with “tageditor-latest-x86_64.AppImage”), which worked great on Kubuntu 20.04.

Once you’ve downloaded the AppImage, you can set it to be executable (right-click and select “properties” then, on the “permissions” tab, select “executable”) or just double-click it and allow it to be executed. It should load.
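From the terminal, marking it executable looks like this. The file name matches the AppImage I downloaded; the guard just skips the launch if the file isn’t in the current folder:

```shell
app="tageditor-latest-x86_64.AppImage"
if [ -f "$app" ]; then
  chmod +x "$app"   # mark the AppImage as executable
  "./$app" &        # launch Tag Editor
fi
```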

In the left pane, navigate to your video file:

Once you find the file, you can see all of the tags that can be edited. Fill in the information:

Once you’ve filled in the tags you want to add or modify, click on “Save” at the bottom of the screen:

I particularly like this next feature. Once you click save, it shows the progress and actually tells you what stage it is at in saving the tags in the file:

Progress is in the circle with robust information on what it is doing next to it.

Tag Editor also does something that I actually questioned at first until it saved my bacon – it makes a backup of the file before it writes the new file. The backup file is named the same as the original file but with a new file extension: “.bak”.

You can see the backup copy of Frozen (“Frozen.m4v.bak”) just below the updated copy.

I initially thought this was just going to be annoying as I’d have to go through and delete all the backup copies once I was done. However, I did run into a couple of files that, for whatever reason, could not be modified. Partway through the tag saving process, I got an error message. Sure enough, Tag Editor, in writing the file, had stopped midway through. If a backup file wasn’t made, I would have lost the video. I don’t know exactly what caused the errors, but I quickly learned to appreciate this feature.

Just to illustrate that the tags were updated, I opened the new file in VLC and went back to the media information:

As you can see, the Title, Date, and Genre fields are now filled with accurate information.

Unlike, say, mp3 audio files, video files can take quite some time to update because the file has to be re-written. With a very fast computer, this won’t take an exorbitant amount of time. But it is a much lengthier process than updating tags in mp3 audio files.
