Linux – Bulk UnRAR

If you’re not familiar with RAR files, they are like ZIP files. RAR is an archive format.

I had a collection of more than 150 RAR files in a single folder that I needed to unrar (that is, open and extract from the archive). Doing them one at a time via KDE’s Ark software would work, but it would have taken a long time. Why spend that much time when I could automate the process?

Enter a bash loop in KDE’s Konsole:

for i in *.rar; do unrar x "$i"; done

Here’s how the command works. The “for i in” part starts the loop (you can use any variable name here, not just “i”). The “*.rar” pattern makes the loop run through every RAR file in the directory, regardless of its name. The “do” keyword introduces the command to run on each file. The “unrar x “$i”” part calls unrar on the current file; the “x” switch extracts the archive while preserving the directory structure stored inside it. The final piece after the second semicolon – “done” – simply marks the end of the loop body.
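If you’d rather keep each archive’s contents in its own folder instead of dumping everything into the current directory, a small variation works too. This is just a sketch: “${i%.rar}” strips the .rar extension to get the folder name, and unrar extracts into whatever path you give it last (note the trailing slash).

for i in *.rar; do mkdir -p "${i%.rar}" && unrar x "$i" "${i%.rar}/"; done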

It took about 20 minutes to run through all the RAR files, but I used that time to create this tutorial. Doing this by hand would have taken me hours.


Linux: Batch Convert .avi files to .mp4/.mkv

I’ve been trying to clean up my video library since building my latest NAS. In the process, I found a number of .avi files – an older container format that isn’t widely used these days. Re-encoding a video always costs some quality, but I was willing to accept a slight reduction to get the remaining files converted into more modern formats.

I initially tried converting the files using HandBrake. But given the number I needed to convert, I decided pretty quickly that I needed a faster method for this. Enter stackoverflow.

Assuming you have all of your .avi video files in a single directory, navigate to that directory in a terminal and you can use the following single line of code to iterate through all of the .avi files and convert them to .mp4 files:

for i in *.avi; do ffmpeg -i "$i" "${i%.*}.mp4"; done

In case you’re interested, the code is a loop. The first part (“for i in *.avi;”) starts the loop, telling the shell to work through every file with the .avi extension. The second part tells it what to do with each of those files – run ffmpeg to convert it to an .mp4 file with the same name (“${i%.*}” strips the old extension). The final “done” simply closes the loop.

Of course, this code could also be used to convert any other file format into a different format by replacing the .avi or .mp4 file extensions in the appropriate places. For instance, to convert all the .avi files to .mkv, the code would look like this:

for i in *.avi; do ffmpeg -i "$i" "${i%.*}.mkv"; done

Or if you wanted to convert a bunch of .mp4 files into .mkv files, you could do this:

for i in *.mp4; do ffmpeg -i "$i" "${i%.*}.mkv"; done

BONUS:

If you have .avi files in a number of subfolders, you’ll want to use this script:

find . -type f -exec ffmpeg -i {} {}.mp4 \;

To use it, navigate in a terminal to the top-level folder, then execute this command. It will search through all the subfolders, find every file in them (the “-type f” part keeps find from handing directories to ffmpeg), and convert each one to an .mp4 file. Note that the output name is simply the original name with .mp4 tacked on, so movie.avi becomes movie.avi.mp4.

Of course, if you have a mixture of file types in the folders, you’ll want a variation of this command that searches for just the files of a certain type. To do that, use this command:

find . -name "*.avi" -exec ffmpeg -i {} {}.mp4 \;

This command will find all the files with the extension .avi and convert them all to .mp4 files using ffmpeg. (The quotes around “*.avi” keep the shell from expanding the pattern itself before find sees it.)

And, if you’re feeling particularly adventurous, you could search for multiple file types and convert them all:

find . \( -name "*.avi" -o -name "*.mkv" \) -exec ffmpeg -i {} {}.mp4 \;

This code would find every file with the extension .avi or .mkv and convert it to .mp4 using ffmpeg. (The parentheses group the two -name tests so that the -exec applies to both.)
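One caveat with these find variants: because they just tack “.mp4” onto whatever find returns, you end up with names like movie.avi.mp4. If that bothers you, a small shell wrapper inside find can strip the old extension first – a sketch, so test it on a copy of a few files before turning it loose on a whole library:

find . -type f \( -name "*.avi" -o -name "*.mkv" \) -exec sh -c 'ffmpeg -i "$1" "${1%.*}.mp4"' _ {} \;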

NOTE: These commands use ffmpeg’s default conversion settings. If you want more fine-tuned conversions, check ffmpeg’s documentation for the available video conversion options.


HandBrake – H.265 NVEnc 1080p Ripping Chart and Guidelines

With a Plex server, I want my collection of movies backed up digitally so I can watch them when and where I want to. This involves a two-step process. First, I have to rip the video from Blu Ray, which I do using MakeMKV. Since that process is pretty straightforward, I’m not going to cover how to do it here. It’s the second step that is more complicated – compressing the video using HandBrake.

I’ve been using HandBrake for years, but have typically just used the default settings. That’s a terrible idea for a number of reasons, which I’ll detail in this post. But the primary reason why I’m posting is to detail how different settings translate into different file sizes so people have a better sense of what settings to use.

Format

The first thing you need to decide when ripping a video using HandBrake is the resulting file format. I’m using HandBrake 1.3.3 which includes three options: MPEG-4, Matroska, and WebM. Each of these formats has advantages and disadvantages.

You can choose which format you want on the summary tab of Handbrake

MPEG-4 (or mp4/MP4) is the most widespread format and the most compatible with various devices and programs. This is likely going to be the default for most people. However, MPEG-4 has a major limitation – HandBrake generally can’t store the image-based subtitle tracks from a Blu-ray inside the file itself (they have to be burned in). If you don’t care about subtitles (e.g., you never watch foreign films), that may not matter to you. But it is a problem for those of us who enjoy a good foreign film. (NOTE: The workaround for most people is to save the subtitles as an .srt file in the same folder, assuming your playback device can then load a separate SRT file.)

Matroska (or MKV) is kind of odd. It’s actually not a video codec. It’s an open-standard container. Basically, it’s like a zip file or folder for the actual media. Inside, it can have video files in a lot of different formats, including MPEG-4, but it can also include subtitle tracks and other media. Thus, the major advantage is that you can store more inside a Matroska file than you can in MPEG-4 files. The disadvantage is that not every device or app supports Matroska files natively. However, support for the format has increased quite a bit. Matroska is now my preferred format, mostly because all of the devices I use for playback can play MKV files and I can store subtitles in the same file.

WebM is actually a subset of Matroska, meant to be a royalty-free alternative to MP4. It is primarily sponsored by Google, supports Google’s VP8 and VP9 codecs, and is licensed under a BSD-style license. Basically, MP4 is a patent-encumbered format while WebM is an open, royalty-free one. (NOTE: I don’t have any videos stored in WebM format. However, when I create videos to upload to YouTube, I typically encode them with VP9, which is YouTube’s preferred codec.)

Audio

HandBrake picks up the default audio track from the video you are converting but then does something very odd. By default, it converts that track to stereo audio (i.e., 2.0, or 2 channels – left and right). You can see that in the screenshot below:

HandBrake’s default setting on audio is to convert the default audio track to Stereo (2.0) sound.

If you have a home theater or good headphones, you’ll want 5.1 surround sound at a minimum. If you have a nicer setup, you’ll want 7.1 surround sound. So, make sure you delete the default audio option and instead include a 5.1 surround sound track in your new file. Like this:

5.1 audio track instead of stereo.

(NOTE: I haven’t figured out the best options for 5.1 surround sound. I’m not sure whether AC3 or AC3 passthrough is better. And I’m not sure if Dolby Surround, Dolby ProLogic II, or 5.1 Surround is better. Perhaps someone out there will have insights on this.)

(NOTE: Most modern video players are capable of converting 5.1 surround sound into stereo sound [a.k.a. downsampling]. Not including a 2.0 option is perfectly fine for most people as playback devices will compensate.)

Tags

If you haven’t ever used the Tags tab in Handbrake, you really should. By default, the “Title” tag is filled in with whatever information is stored in the video file or DVD you are converting, like this:

Default tag information loaded by HandBrake

However, what you include in the “Title” tag is also what video playback software or devices will think is the name of the video. It is worth taking two minutes to fill in the tags. I typically will change the “Title,” at a minimum, as well as the “Release Date” and “Genre” tags, like this:

I have filled in the tags in this screenshot.

Video

Now on to the primary purpose of this post – video quality settings. To determine the ideal balance between quality and file size, I actually converted the same file using different settings. The original file is a rip of the film Amadeus (Director’s Cut) from the Blu-Ray disc that was 28.4 GB in size. The table below illustrates the results of converting the file using different Constant Quality options – all into 1080p video. For each of these conversions, I used identical settings except I changed the Constant Quality option in HandBrake. Each of these used the H.265 (NVenc) encoder with all the other video options on their default settings. Each conversion took about 1 hour (give or take 10 to 15 minutes).

Setting    Resulting File Size    Compression (% of original file)
CQ 22      14.3 GB                50.35%
CQ 24      8.6 GB                 30.28%
CQ 26      6.5 GB                 22.89%
CQ 28      4.9 GB                 17.23%
CQ 30      3.7 GB                 13.03%
CQ 32      2.9 GB                 10.21%
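If you’d rather not click through the GUI for each test, the same sweep can be scripted with HandBrakeCLI. This is a rough sketch, assuming HandBrakeCLI 1.3+ with the NVENC encoders available; the input and output names are placeholders, and it’s worth double-checking the encoder name against “HandBrakeCLI --help” on your install:

for cq in 22 24 26 28 30 32; do HandBrakeCLI -i input.mkv -o "output-cq${cq}.mkv" -e nvenc_h265 -q "$cq" --all-audio; done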

The big question, of course, is how much degradation there is in quality with the smaller file sizes. To illustrate this, I took a screenshot using VLC from four of the video files: the original, unconverted Blu-ray rip (the 28.4 GB file), the CQ 22 file, the CQ 28 file, and the CQ 32 file. I uploaded the full resolution files below so you can see and compare the differences.

Screenshot from the original MKV file – 28.4 GB.
Screenshot from the converted file at CQ 22.
Screenshot from the converted file at CQ 28.
Screenshot from the converted file at CQ 32.

To help my old eyes see the differences, I zoomed in on just one part of these photos to see if I could tell if there were any meaningful differences:

I thought I might see differences in the quality of the screenshots with the musical score (that’s why I chose this frame). I thought the lines might get blurred or the notes would become fuzzier. But that isn’t actually the case. Most of the space savings actually came from the detail in the brown section of this frame. In the original, if you look closely at the bottom right of the image, you can see lots of “noise.” This is basically film grain. The compression algorithm must look for that kind of noise and then remove it. If you look closely at the CQ 22 frame, there is less film grain. In CQ 28, large swathes of the film grain have been removed. And in CQ 32, all of the film grain has been removed and converted to blocks of single colors. Where there is fine detail in the video or distinct color differences, that has been retained. I should probably try this same exercise with a more modern movie shot digitally and not on film to see how the compression varies. Even so, this is a good illustration of how compression algorithms save space – they look for areas that can be considered generally the same color and drop detail that is deemed unimportant.

TL;DR: My recommendation for file sizes and CQ settings…

Scenario 1: Storage space is genuinely not an issue and you have a very fast internet connection, file server, and home network. You should rip your Blu-ray discs using MakeMKV and leave them as full-size MKV files. That will give you the best quality with lots of options for future conversion if you want.

Scenario 2: You have a fair amount of storage space, a decent internet connection, a reasonably fast/powerful file server, and a good home network. In that case, I’d suggest a dual approach. If it’s a movie you absolutely love and you want the ideal combination of quality and minimized file size, use a CQ of 22 to 26. That will reduce the quality, but retain much of the detail. If it’s just a movie that you enjoyed and think you might want to watch again in the future but don’t care so much about the quality, go with a lower quality setting (e.g., CQ 30 or 32).

Scenario 3: You have very little storage space, your internet connection isn’t great, your file server cannot convert files on the fly, and your home network is meh. In that case, you should probably go for higher levels of compression, like CQ 30 or 32. You’ll still get very good quality video (compression algorithms are very impressive these days) in a much smaller file. Oh, and you should probably convert your files to MP4.


How to Export Garmin Tracks/Activities to Google Maps

While it’s possible to share your Garmin tracks or activities with people through the Garmin website, I like exporting my tracks to Google Maps so I can embed them on my website. Here’s how I do that.

First, log in to your Garmin Connect account. Once you have logged in, click on the Activities tab.

You’ll see a list of your activities. Find the activity you want to share.

Click on the activity you want and you’ll see all the details for the activity. To export the GPS track of your activity, look for the gear icon on the upper right of the screen:

Click on the gear icon and you’ll see several options. The one you want is “Export to GPX.”

You should then get a prompt to download a GPX file. Download it to your computer (and, of course, remember where you downloaded it to). If you’re curious, you can open the GPX file with a text editor and see that it is basically just a list of track points in an XML-based markup language.
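Here’s roughly what the inside of a GPX file looks like (a trimmed, hypothetical example – real files contain many more track points and extra attributes):

<gpx version="1.1" creator="Garmin Connect">
  <trk>
    <name>Morning Hike</name>
    <trkseg>
      <trkpt lat="27.9506" lon="-82.4572">
        <ele>12.0</ele>
        <time>2020-07-01T13:05:02Z</time>
      </trkpt>
      <!-- ...more track points... -->
    </trkseg>
  </trk>
</gpx>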

Now, log in to your Google account. The next part is always the part that takes me the longest when doing this – figuring out the URL where I can upload the GPX file to create a map. Google doesn’t make this easy (and, admittedly, it has changed over time). The link to upload and create maps using Google Maps is this one: https://www.google.com/maps/d/. Assuming you are signed in and everything works, you should see a page called “My Maps” that looks like this:

I’ve made a fair number of maps!

Click on “+CREATE A NEW MAP”, the big red button. On the next screen, you’ll see a checkbox next to “Untitled Layer” and below that a link that says “Import.”

Click on the “Import” link and you’ll get this window:

You can either click on the blue button and select your GPX file or just drag it to the window. Either way, as soon as it has been uploaded, the site will process it and overlay your track onto a map, like this:

(BONUS: Since I have created quite a few of these for my hikes, I actually try to keep them organized in my Google Drive. If you’re going to create a lot of these, you should do the same.)

Now you have lots of options to customize your map. You should obviously click on where it says “Untitled map” and give it a name. You can also add layers, add points of interest, insert pins, etc. You can also change the Base map to terrain or satellite instead of the default map. I added three pins, a title, and a description to my map:

When you’re done modifying your map, you should do two things to share it. First, change the permissions by clicking on the “Share” icon.

By default, maps are restricted and you can only provide access to specific people. You can change that by clicking at the bottom of that window where it says “Change to anyone with the link.”

Now you can embed the map into a different website or just share the link with people. To embed the map, click the three little vertical dots near the title and select “Embed on my site”:

You’ll then get a window with the embed code:

The code should look something like this:

<iframe src="https://www.google.com/maps/d/embed?mid=1hCMJebINYk1cTYWCUEo48YN48oUO1GGK" width="640" height="480"></iframe>

And here’s how it looks actually embedded on your site:


Teaching with a Mask – Headset Solution on Linux

My university, the University of Tampa, is doing what it can to continue teaching in-person classes (many are hybrid) during the COVID-19 pandemic. To facilitate that, our IT folks installed webcams and microphones in all of our classrooms. Unfortunately, the classrooms all have Windows-based desktops that don’t include all the software I need for educating my students (e.g., LibreOffice, Zotero, RStudio, etc.). I have always just plugged my laptop into an HDMI cable and then projected directly from it.

However, now that I’m teaching in a mask that substantially muffles my voice, I need a microphone that projects through the speakers in the classroom so the students in the back can hear me, particularly when the A/C is on. I tried using the microphone provided in the classroom on the first day of class, but I ended up having to hold it for the entire class and it still cut out regularly. Our IT folks suggested we could start a Zoom meeting on the desktop, connect our laptop to it, and then share our laptop in the Zoom meeting and project that onto the screen so we could use our laptop and a microphone. That seemed like a kludgy way to solve the problem.

I figured there had to be a better way. So, I did a little thinking and a little research and found one. The answer – a bluetooth headset designed for truckers! Yep, truckers to the rescue!

If I could get a bluetooth headset to connect to my computer and then project the sound through the classroom’s speakers via bluetooth, I could continue to use my laptop to teach my class while still having a microphone to project my mask-muffled voice. Admittedly, this required a couple of hours of testing and some trial and error, but I got it working. Now, I have my own microphone set up for the classroom (I bring it with me) and can continue to use my laptop instead of the Windows-based PC.

So, how did I do it?

First, get yourself a bluetooth headset. I bought the Mpow M5 from Amazon. This is the perfect style headset as it has just one earphone, meaning I can still hear perfectly fine when students are talking to me in the classroom.

Second, connect the headset to your laptop. I’m going to assume your laptop has built-in bluetooth. Mine, a Dell Latitude 7390, does. Pairing it with my laptop was super easy. (If you don’t have bluetooth built-in, there are cheap USB bluetooth dongles you can buy as well.)

Third, the Linux part. I installed the package “blueman,” which provides a GUI interface for working with bluetooth devices. I didn’t know if this would be necessary, but, it turns out, it definitely was. Once you have your headset connected, open the blueman GUI and you’ll see this:

The next part stymied me for a while. Initially, my computer detected the headset as just headphones and not a headset with a microphone. I didn’t know why. Eventually, I got lucky and right-clicked on the Mpow M5 device in blueman and got a context window with the option I needed:

When you right-click on the device, you can select “Audio Profile” and then “Headset Head Unit”. The default, for some reason, was “High Fidelity Playback.” Once I did that, Linux detected the microphone.
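If you’d rather do this from the terminal (or the right-click menu isn’t cooperating), PulseAudio can switch the profile directly with pactl. This is just a sketch – the card name below is a placeholder, and the exact bluez card name on your machine will include the headset’s MAC address (the first command lists it):

pactl list cards short
pactl set-card-profile bluez_card.XX_XX_XX_XX_XX_XX headset_head_unit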

Before you continue, make sure you have plugged in the HDMI cable as you’ll need that connected for the next part.

Next up was making sure all my audio settings were correct. This, too, took some trial and error. The settings window you need is PulseAudio Volume Control (the package is usually called “pavucontrol”), which goes by slightly different names in various versions of Linux. Regardless, here’s what the window looks like:

I’ll go over the settings for each tab, though the first two – Playback and Recording – won’t have anything in them until you start up OBS Studio, which I’ll cover shortly.

In the Configuration tab, you should now see your Mpow M5 connected as a Headset and you should see Built-in Audio. This may not say “Digital Stereo (HDMI) Output” to begin with. There is a drop down menu there. Click on it and you’ll see various options:

The default is “Analog Stereo Duplex”. Click on that drop down and select the HDMI Output. (NOTE: I typically use just “Digital Stereo (HDMI) Output” without the “+ Analog Stereo Input”. I have the wrong one highlighted above, but it should still work.)

Here’s what the Input Devices tab should look like:

And here’s how the Output Devices tab should look:

You probably will need to change one thing on the Output Devices tab. Make the HDMI output the default (click the little blue icon). You may also need to mute the Mpow M5 on this screen. Either way, you want to make sure that the HDMI output is where the sound is going.

Now, we need another piece of software: OBS Studio, which is free and open-source and works on all platforms. (NOTE: For those using Windows or Mac who want to do this as well, the same software is available there and the setup should be fairly similar.) Install OBS Studio, then open it up.

The software is very good at detecting everything on your computer. Here are the settings I had to adjust. In the bottom right corner of the software, click on “Settings” and you’ll get a window with various tabs (tabs are on the right). The one you need is “Audio”. Click on that and you’ll see this:

In the Devices section, you’ll need to change the following: “Desktop Audio” should be set to “Built-in Audio Digital Stereo (HDMI)”. “Desktop Audio 2” should be disabled. “Mic/Auxiliary Audio” should be set to “Mpow M5.” All the others should be disabled. Then, under “Advanced,” where it says “Monitoring Device,” select “Monitor of Mpow M5.” Then click Apply or OK.

Close that window then click on “Edit” -> “Advanced Audio Properties” and you’ll get this window:

In that window, where you see “Audio Monitoring,” click on the drop down option for “Mic/Aux” and set it to “Monitor and Output.” What this does is tell the software that you want to monitor the audio from your Mpow M5 headset and output it through the speakers. Select “Close” and, assuming you’ve done everything correctly, you should now hear your voice coming out of the speakers. Woot!

A little more detail may be helpful, though. Back to Linux. If you return to the Pulse Audio window, you’ll now see that there is information in the remaining two tabs. Here’s what you should see in the Recording tab:

And here’s what you should see in the Playback tab:

And here is how I look with my headset and a mask:

Some notes:

I haven’t tested this with Zoom yet. I probably will to make sure that the audio also goes through Zoom.

OBS Studio can actually be used to record your presentation as well. It’s designed for streaming gamers, but works just as well for screen capture. So, if you need to record your class, just use OBS Studio to record your audio and your screen during the class.


Kubuntu – Audio CD Ripping

I mostly buy digital audio these days. My preferred source is Bandcamp, as they provide files in FLAC (Free Lossless Audio Codec). However, I recently bought a CD (Last Night’s Fun by Scartaglen) because there wasn’t a digital download available. In the process, I realized that there are lots of options for ripping the audio from a CD on Linux, and quite a process to get the files ripped, tagged, properly named, and stored in my library. This is my attempt to summarize my process.

Format/Codec

First, you need to decide in what format you want the audio from the CD. As noted, I prefer FLAC these days. Given how relatively inexpensive storage is, I no longer need to scrimp on space for the most part. If space was an issue, ripping the files to mp3 format at, say, 192 kbps at a variable bit rate would probably be the optimum balance between decent quality and small size. But I prefer the best quality sound with no real regard for the size of the resulting files. It helps that I store all my music on a dedicated file server that runs Plex. That solves two problems: I have lots of space and Plex will transcode the files if I ever need that done (if, for example, I want to store the music on my phone and want it in a different format). So, my preferred file format is FLAC. (Another option is OGG, but I find not as many audio players work as well with OGG.)

There is another issue that I recently ran into: single audio files with cue sheets. Typically, people want their audio in individual files for each song. However, if you want to accurately represent an audio CD, the best approach to do this is to rip the audio as a single file with a corresponding cue sheet. The cue sheet keeps an exact record of the tracks from the CD. With the resulting two files, the audio CD can be recreated and burned back to a CD. I have no real intention of burning the audio back to a CD (I want everything digital so I can store it on my file server), but it’s good to know about this option. Typically, those who opt for this approach use one of two formats, .flac or .ape, for storing the audio and .cue for storing the timing of the tracks. The .ape format is a proprietary format, however, so it is definitely not my preferred approach.

As a quick illustration of how file format is related to size, I ripped my demonstration CD, Last Night’s Fun by Scartaglen, into a single FLAC file and a single mp3 file (at 192 kbps using a variable bit rate) and put the resulting files into the same folder so you can see the size difference:

As you can see, the FLAC rip resulted in a file that was 222.9 MB compared to the mp3 file that is only 49.4 MB. The FLAC file is about 4.5 times the size of the mp3 file. A higher-quality mp3 rip at 320 kbps at a constant bit rate resulted in a 54.8 MB file. A pretty good estimate is that a FLAC rip is going to be somewhere between 3 and 5 times the size of an mp3 file. Thus, if space is an issue but you want good quality, ripping your music to the highest quality mp3 (320 kbps) is probably your best option. If space isn’t an issue and you care more about quality, FLAC is the way to go.

NOTE: I also ripped the disc to OGG and the file size was 38 MB.
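As an aside, if you ever need a smaller copy of a FLAC rip for a space-constrained device, ffmpeg can downconvert it. Here’s a minimal sketch using LAME’s variable bit rate scale (-qscale:a 2 averages roughly 190 kbps); “input.flac” and “output.mp3” are placeholder names:

ffmpeg -i input.flac -codec:a libmp3lame -qscale:a 2 output.mp3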

Ripping Software

First, if you’re planning on ripping to FLAC on Linux, you’ll need to install FLAC. It is not installed in most distributions by default. This can be done easily from the terminal:

sudo apt-get install flac

Without FLAC installed, the software below won’t be able to rip to FLAC.

K3b

K3b is installed in Kubuntu 20.04 by default and is, IMO, a good interface for working with CDs and DVDs. When I inserted my demonstration CD into my drive, Kubuntu gave me the option of opening the disk in K3b. When I did, K3b automatically recognized the CD, grabbed the information from a CDDB, and immediately gave me options for ripping the CD:

When you click on “Start Ripping,” you get a new window:

In this new window, you have a bunch of options. You can change the format (Filetype). With the FLAC codec installed, the options listed are: WAVE, Ogg-Vorbis, MPEG1 Layer III (mp3), Mp3 (LAME), or Flac. You can obviously change the Target Folder as well. K3b also gives you the option of creating an m3u playlist and the option “Create single file” with “Write cue file,” which is how you would create the single audio file and cue sheet from the CD as noted above. There are also options for changing the naming structure and, under the Advanced tab, for how many times you want to retry reading the CD tracks. K3b is pretty fully featured and works well for ripping audio CDs.

Clementine

My preferred music player in Linux is Clementine. I have used a number of music players over the years (e.g., Banshee, Rhythmbox, Amarok), but Clementine has the best combination of features while still being easy to use. Clementine is in the repositories and can easily be installed via synaptic or the terminal:

sudo apt-get install clementine

Clementine also has the ability to rip audio CDs. Once your CD is inserted, click on Tools -> Rip audio CD:

You’ll get this window, which is similar to the ripping window in K3b:

If the information is available in a CDDB, Clementine will pull that information in (as it did for my demonstration CD). You then have a bunch of options for the Audio format: FLAC, M4A, MP3, Ogg Flac, Ogg Opus, Ogg Speex, Ogg Vorbis, Wav, and Windows Media Audio. The settings for each of these can be adjusted in the “Options” box. One clear advantage of Clementine over K3b is that you can readily edit the titles of the tracks. Another advantage of Clementine over K3b is that you could import the files directly into your music library.

Ripping from a Cue Sheet

Another scenario I have run into on Linux is having a single file for the audio from a CD with a corresponding .cue sheet (the file is typically in the FLAC format, but I have also run into this in .ape format). I used to immediately turn to Flacon, a GUI that helped rip the single file into individual tracks. However, I have had mixed success with Flacon working lately (as of Kubuntu 20.04, I couldn’t get it to work). Never fear, of course, because Flacon is really just a GUI for tools that can be used in the terminal.

To split a single FLAC file with a corresponding .cue sheet into the individual tracks, you’ll need to install “shntool“:

sudo apt-get install shntool

(NOTE: It’s also a good idea, though not required, to install the suggested packages “cuetools,” “sox,” and “wavpack.”) Assuming you have already installed “flac” as described above, splitting a single FLAC file into the individual tracks is fairly straightforward. The easiest way is to navigate to the folder where you have the FLAC file (e.g., “audiofile.flac”) and the cue sheet (e.g., “audiofile.cue”). Then use the following command at the terminal:

shnsplit -f audiofile.cue -o flac audiofile.flac 

Breaking the command down, “shnsplit” calls the program “shnsplit,” which is part of the “shntool” package. The “-f” flag tells it to read the split points from the file that follows – the cue sheet. The “-o” flag specifies the output format, in this case “flac.” The last argument is the single FLAC file that you want to split.

Here’s a screenshot of me splitting the single FLAC file from my demonstration CD into individual FLAC files:
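One note: the tracks that shnsplit produces come out untagged, with generic names like split-track01.flac. If you installed the suggested “cuetools” package, you can copy the tags from the cue sheet straight onto the new files. A sketch – depending on the distribution the command is installed as cuetag or cuetag.sh:

cuetag.sh audiofile.cue split-track*.flac

I cover a more thorough tagging approach with Picard below, so this step is optional.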

If you happen to run into a single audio file in the .ape format, shntool probably won’t be able to read it so the above command won’t work. However, a simple workaround is to convert the file to flac format using ffmpeg, which can read it. Here’s the command you could use from the terminal:

ffmpeg -i audiofile.ape audiofile.flac

That command will call ffmpeg (which you probably have installed) and convert the .ape file into a .flac file which can then be split using the command above (assuming you have a corresponding cue sheet).

Tagging Software

Let’s say I have successfully ripped my files into my desired format and now I want to tag them. There are a number of software packages that can do this, but my preferred software is Picard by MusicBrainz. Picard is open source, which is awesome, but it also interfaces with the MusicBrainz website and pulls in information that way, which means your files will get robust tagging information. If you pull in all the information from MusicBrainz, not only will the artist and album be tagged, but so too will lots of additional information, depending on how much was entered into the database by whoever added the album in the first place. (NOTE: Clementine also interfaced with MusicBrainz but this broke in 3.1. Once it broke, I started using Picard directly and realized that it has a lot more features than Clementine’s implementation, so now I just use Picard. However, you could try doing this in Clementine as well.)

Again, I’ll use my demonstration CD to illustrate how this is done. I ripped the tracks into individual FLAC files above. Those tracks are completely devoid of tags – whatever software I use to try to play them won’t know what the audio files are. The screenshot below illustrates this. I used MediaInfo (a gui for pulling information from audio and video files in Linux) to pull available information from the file. It shows the format and length but provides no information about the artist or album, which it would if the file had tags.

We’ll use Picard to find the album and add all the tags. First, of course, install Picard:

sudo apt-get install picard

Open the program. Now, since my files have no tag information, I’m going to click on Add Files (you can also add a folder with subfolders, which then has Picard run through multiple audio files, a great feature if you are tagging multiple albums at the same time).

You’ll get a new window where you can select the audio files you want to add. Select them and then click “Open.”

If the files have existing tags, Picard will do its best to group the tracks together and may even associate the files with the corresponding albums. In my case, it simply puts the files into the “Unclustered” category:

Since I pulled them all in from the same folder, I can select the tracks and then click on the “Cluster” button in the toolbar and Picard will cluster the files.

Clustering is a first step toward finding the corresponding album information. Once the tracks are grouped together, they will show up in the Cluster category:

Without any tag information, Picard is unlikely to find the corresponding album if you select the cluster and then click on “Lookup.” If there were some tag information in the files, that might be enough for Picard to find the corresponding album, and you could just select the cluster and then click on “Lookup.” In this case, that won’t work. So, I’m going to use a different approach. If you right-click on the cluster, you can select “Search for similar albums.”

This gives you a window where you can enter search terms to try to find the corresponding album in the MusicBrainz database. Based on the limited information it has, Picard will try to find a matching album automatically. But it likely won’t find one here because there are no tags, so there is virtually no information available. Generally, I have found that I have better luck finding albums if I use the album title first followed by the artist, like this: “Last Night’s Fun Scartaglen.” Then hit enter:

Once you find the correct album, select it and then click on “Load into Picard” at the bottom of the window.

Once you do that, the album will move to the right side of the screen. If all of the tracks are included and Picard is able to align them with the corresponding information it has from the database, the CD icon will turn gold. If there is a little red dot on the CD, that means Picard has tag information that can be saved to the individual tracks.

Click on the album and then click the “Save” button in the toolbar and the tag information will be saved to the files.

You can track the progress as tracks will turn green once the information has been successfully added to the tags.

You can also set up Picard to modify the file names when it saves information to the files. Click on “Options” then click the checkmark next to “Rename Files.”

I typically let Clementine rename my files when I import them into my Music library, so I don’t worry about this with Picard, but it is a nice option. Finally, here’s that same Mediainfo box with the tagged file showing that information about the artist and track is now included in the file:


LibreOffice – How To Change Icons to a Darker Theme

I prefer darker themes for my desktop environment (Kubuntu 20.04) and browser (Brave). For the most part, this isn’t a problem, but it does cause an issue with some applications, including LibreOffice (6.4.4.2).

One of the first things I do when I install Kubuntu is switch my desktop environment from the default theme (System Settings -> Global Theme), Breeze, which is a lighter theme, to Breeze Dark. You can see the differences in the screenshots below:

This is the Breeze theme that is the default in Kubuntu 20.04
This is the Breeze Dark theme that I typically use in Kubuntu.

The problem is with the icon set in LibreOffice. With the default Breeze theme, the icons are very visible and work great:

These are the default icons in LibreOffice 6.4.4.2 in Kubuntu 20.04 with the default Breeze theme.

The problem comes when I switch the theme to Breeze Dark. Here is how the default Breeze icons look in LibreOffice when I switch the theme:

The default icon set, Breeze, in LibreOffice when the Kubuntu Global Theme is switched to Breeze Dark.

Perhaps it’s just my aging eyes, but those icons are very difficult for me to see. The solution is quite simple, though where to find it is always hard for me to remember (thus this tutorial). All you need to do is switch the icon set in LibreOffice. Several icon sets for dark themes come pre-packaged with the standard version of LibreOffice that ships with Kubuntu (and is in the repositories). It’s just a matter of knowing where to look.

In LibreOffice, go to Tools -> Options:

You’ll get this window. You want the third option down under “LibreOffice”, “View”:

Right at the top of this window you can see “Icon style.” That’s the setting you want to change. If you click on the drop down arrow, you’ll see six or so options. Two are specifically for dark themes, Breeze (SVG + dark) and Breeze (dark). Either of those will work:

I typically choose Breeze (SVG + dark). Select the dark theme you want, then click on OK and you’ll get a new icon set in LibreOffice that works much better for dark themes:

These icons are much more visible to my aging eyes.

Et voila! I can now see the icons in the LibreOffice toolbars.


HandBrake – Convert Files with GPU/Nvenc Rather than CPU

I don’t know exactly when HandBrake added the capability of using the GPU for encoding, but it was somewhere between 1.3.1 (current version in the Ubuntu repositories) and 1.3.3 (current version on PPA). Regardless, this option offers dramatic speed improvements, particularly when working with 4K videos. In this post, I’ll show how to use this feature in Handbrake and show some comparisons to illustrate the benefits and tradeoffs that result.

HandBrake 1.3.1 is the current option in the Ubuntu 20.04 repositories.
Version of HandBrake available from the PPA that has NVENC/GPU encoding capabilities.

How to Use NVENC GPU Encoding

First, make sure you have the latest version of Handbrake installed (as of 6/26/2020 that is version 1.3.3). Also, to use Nvenc encoding, you’ll need the Nvidia Graphics Driver 418.81 or later and an Nvidia GeForce GTX 1050+ series GPU or better per HandBrake’s documentation. I’m using a GTX 1060 and have Driver version 440.100 installed. My CPU is an AMD Ryzen 5 3600 6-Core, 12-Thread processor.
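You can check your GPU model and driver version from a terminal with the utility that ships with the Nvidia driver:

nvidia-smi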

My GPU (GeForce GTX 1060) and my driver version: 440.100.

The video I’m using to illustrate how to use GPU encoding is a Blu-Ray rip of Pan’s Labyrinth. I’m making a backup copy of the video to store on my file server. I used MakeMKV to pull the video off the BluRay disc, resulting in a 31.7GB file. I’m going to convert that to a 1080p H265 video file that is substantially smaller in size.

Open up HandBrake and load your file you want to convert by clicking on “Open Source.” Find the file you want to convert and select “Open.”

HandBrake will run through the file, gathering information about the codec, subtitles, audio tracks, etc. Once it’s done, you’ll need to select what format you want to convert it to. For this tutorial, I’m just going to use a General Preset, but I want to illustrate the difference in encoding speed, so I’m going to select Super HQ 1080p30 Surround rather than Fast 1080p30.

My mouse is on Super HQ 480p 30 in the image, but I selected Super HQ 1080p.

To change from CPU encoding to GPU encoding, click on the Video tab:

In the middle of the screen, you’ll see a drop-down menu labeled “Video Encoder.” Click on that drop-down menu and you should see two NVENC options: H.264 (NVenc) and H.265 (NVenc). Those are the two options for using your GPU for the encoding versus using your CPU. The H.265 codec is newer and has somewhat better compression algorithms, but I’m not going to opine as to which of these you should choose. That choice should really be driven by what device you’re going to be playing your videos on. Regardless of which you choose, those two will push the encoding to your GPU instead of your CPU.

Again, my mouse is on the wrong option here. The two options you want are: H.264 (NVenc) or H.265 (NVenc).

Make sure you’ve adjusted your Audio, Subtitles, and Tags to your preferences, then click “Start.” That’s all there is to it – you now have GPU encoding.

Benchmarks: Encoding Speed

Using the H.265 (NVenc) encoder, it took about 17 minutes to convert the 31.7GB MKV file into an 8.4GB m4v file.

My mouse is next to the estimated time for encoding the video: 16m50s.

You can also see that, even using the H.265 (NVenc) encoder, much of the processing still falls to the CPU. The screenshot below shows my GPU is working, but it’s certainly not being stressed:

GPU load is on the left. CPU load is top right.

Using all the same options but CPU encoding instead, HandBrake took 1 hour and 15 minutes to encode the file, so about four and a half times as long.

When I grabbed this screenshot, HandBrake was estimating 1h07m, but it ended up taking about 1h15m for the entire encode.

Here’s the same resource utilization illustration showing that HandBrake is drawing exclusively on the CPU and not the GPU:

GPU utilization is on the left; CPU utilization is upper right.

Other tests I ran encoding 4K video illustrated that this difference increases with 4K video. I converted one 4K movie that was about 1 hour, 40 minutes using H.265 (NVenc) and it took about 1 hour. Using the CPU alone, HandBrake estimated it would take 18 hours (I didn’t wait to see if that was accurate). Thus, there is a dramatic difference in encoding speed with higher resolution and larger video files.

Benchmarks: Video Size and Quality

What about file size and video quality? I’m probably not going to do justice to the differences because I don’t have the most discerning eye for pixellation and resolution differences, but I will try to use some objective measures. The video encoded with the GPU was 8.39GB in size. The video encoded with the CPU was 3.55GB. I’m not exactly sure why the file sizes are so different given that I chose the same setting for both encodes, but this next screenshot illustrates that the NVENC encode resulted in a higher bitrate (9,419 kb/s) versus the CPU encode with a lower bitrate (3,453 kb/s). Strange.

On the left are the specs for the NVENC encoded video; the CPU encoded video specs are on the right.

I also wanted to see if I could tell if there was a noticeable difference in quality between the two videos. I navigated to the same scene in both videos and used VLC to take a screenshot. First, the NVENC encoded screenshot:

Click for full resolution. This is the NVENC encoded video.
Click for full resolution. This is the CPU encoded video.

My 43-year-old eyes don’t see much of a difference at all.

Conclusion

If you’ve got the hardware and want to save time, using GPU encoding with Handbrake is a nice option. The end result is a much faster encode, particularly with higher resolution videos. Some of the forums where I was reading about this option suggested there are problems with using GPU encoding. I certainly won’t challenge those assertions, but I can’t tell the difference.


2020 NAS – Plex, nomachine, Crashplan

After about a year and a half with my previous NAS (see here), I decided it was time for an upgrade. The previous NAS had served dutifully, but it was no match for 4K video (I don’t have a lot of it), it took forever to transcode files when I wanted to synchronize them with a device using Plex, and it couldn’t transcode pretty much any videos in real-time. For playing up to four files simultaneously that didn’t need to be transcoded, it worked like a champ. But it just couldn’t handle all the scenarios I was throwing at it. So, it was time for an upgrade.

Hardware

The major change, of course, was the hardware. Here are the specs for the new NAS:

Case: Fractal Design Define R5 – Mid Tower ATX case
RAM: Corsair LPX 32GB (2x16GB) 3200MHz C16 DDR4 DRAM
GPU: EVGA GeForce GTX 1060 3GB
Motherboard: ASUS AM4 TUF Gaming X570
CPU: AMD Ryzen 5 3600 6-Core
Powersupply: Fractal Design Ion+ Platinum 860W PSU
SSD: Samsung 850 EVO 250GB
Hard Drives: 4 WD Blue 4TB 5400 RPM Hard Drives

Since I was repurposing the SSD and hard drives from my previous NAS, I didn’t buy those for this build. But here is the estimated cost of the NAS assuming those were included: $1,357.47 + tax.

(NOTE: Full disclosure, the links above are affiliate links to Amazon.)

A couple of notes on my hardware choices…

The case is amazing. It’s big, roomy, and extremely well-designed. It is absolutely silent and does a great job with cable management. It also has plenty of room for hard drives, which I absolutely love. And it’s understated. I don’t need flashing lights on my NAS. I need a discreet box that makes no noise.

My new fileserver. It’s absolutely silent. It may be a little large, but that gives me plenty of space for the components I want.

I had originally considered a different GPU, the PNY Quadro P2000. A lot of Plex forums suggested it was a real beast when it came to hardware transcoding for video. By the time I put this together, these were really hard to find and very expensive. I looked around and the GeForce GTX 1060 had higher benchmarks at a much lower price. You can see my testing data below to see how well it works and whether I made the right choice.

I also considered an AMD Threadripper over a standard Ryzen, but I also realized that I was passing the video transcoding to a GPU with this build (my last one used the CPU), which meant I really didn’t need that much CPU power. Again, you can see in the benchmarks below how well this paid off.

The remaining hardware choices were pretty straightforward. The SSD is small because it only houses the OS. All the storage is on the hard drives. I kept the ZFS raid (see details below) and can upgrade it by simply buying bigger hard drives in the future and swapping the smaller ones out, making this NAS fairly future proof for at least the next few years.

Software

I changed very little from the previous NAS when it comes to software. I installed the latest Kubuntu LTS, 20.04. I am running the latest versions of Plex, Crashplan, tinyMediaManager, kTorrent, and nomachine. I am using NFS to access my server directly from other computers within the network. These are all detailed in my previous build.

I also kept my ZFS file system. Conveniently, I was able to transfer the raid right over to my new NAS, which I detail below.

Test Results

To test how well my new build did, I ran a couple of tests. First, I tried to simultaneously stream a 4K video to 5 devices – my TV, two Google Pixel 3 phones, a Google Pixel phone, and a laptop (all on my home network). The Google Pixel couldn’t handle the 4K video, so I ended up streaming a different 1080p video instead. My previous NAS couldn’t stream a single 4K video. This one handled all 5 streams like a champ, as shown in the video below:

I also wanted to make sure that my Plex server was using the GPU for converting videos, so I took that same 4K video and set it to synchronize with my phone at 720p. On my previous NAS, converting a 31gb 4K video to a 720p video would have taken hours. Plex passed the conversion on to the GPU and it took less than 10 minutes.

To make sure your Plex server is using your GPU, you need to set it to do so in the settings. It’s under Settings->Transcoder. Click on “Use hardware acceleration when available” and “Use hardware-accelerated video encoding.”

NOTE: To monitor the CPU, I used Ksysguard, which ships with Kubuntu. To monitor the GPU, I used “nvtop,” which can be installed from most distribution repositories.
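On Kubuntu/Ubuntu 20.04, nvtop should be a one-line install from the standard repositories (it lives in universe, so that repository needs to be enabled):

sudo apt-get install nvtop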

Headless Monitor Issue

One of the problems I have been struggling with for months now (and had a similar issue with my previous NAS) was setting up a virtual screen/monitor. Here’s the problem…

I actually do the initial installation of the operating system using a monitor that I plug into the NAS. But I don’t want to keep a monitor out, sitting near my NAS/fileserver all the time. I want the NAS to be “headless” – meaning no monitor is attached. Once I get the operating system installed, I first install SSH (just in case I screw something up, ’cause you’re always going to screw something up). Then I install my remote management software, nomachine. Once I have nomachine installed, I can unplug the monitor, mouse, and keyboard and take care of everything else remotely. However, I found out with my previous NAS build that the X server that manages sessions on Kubuntu doesn’t set up a monitor if it doesn’t detect one physically attached. In other words, if I restart the computer with no monitor attached, the X server correctly detects that there is no monitor and doesn’t set one up in the X session.

As a result, when I would try to remotely connect to my NAS using a VNC client, the client would have to try to generate a monitor and the result was always problematic – either nothing would show up or it would be really slow and sluggish. I get that the X server doesn’t want to use resources to run a monitor if there isn’t one connected. That’s a feature, not a bug. But it makes this problem a challenge to solve. With my new NAS, it took me about 2 hours to finally figure out a solution. So that I don’t have to go through this again, I’m going to post the solution here in detail.

With my previous NAS, I didn’t have a graphics card installed. I let the CPU do all the processing, which meant I needed a different solution than the one I had employed before (here’s the old solution). With my new NAS, I wanted a graphics card so I could pass off some of the video processing to the GPU instead. As noted above, I bought an NVIDIA graphics card, which Kubuntu recognized when I installed the OS and appropriately added the recommended NVIDIA drivers. My graphics card also runs the monitor. This was the key to solving my headless monitor issue.

Basically, what you need to do is tell the X Server to create a monitor even though one isn’t attached. You do this by creating an “xorg.conf” file that is stored in the /etc/X11/ directory. Once the NAS is up and running, you can let the NVIDIA X Server Settings application create the necessary xorg.conf file, for the most part. Here’s what you do. With the NVIDIA drivers and the NVIDIA X Server Settings application installed, launch NVIDIA X Server Settings and go to the second option down in that window, “X Server Display Configuration.”

Near the bottom of that window is a button that reads, “Save to X Configuration File.” This is nearly the answer to all our problems.

When you click on that, it will pop up a window that will allow you to preview the code it is going to add to /etc/X11/xorg.conf, which is the X server’s configuration file for the monitor. That code will be based on your current monitor – whatever is currently physically attached to the computer. Go ahead and follow the prompt and it will generate the xorg.conf file.

However, you now need to make two modifications to the file. I use “nano” to edit text files:

sudo nano /etc/X11/xorg.conf

First, you need to add the following to the Device Section of the xorg.conf file:

Option "AllowEmptyInitialConfiguration"

This code tells the X Server that it’s okay to create a screen/monitor without a physical monitor connected. (Got this information from here).

Second, you need to add the dimensions of the virtual screen you want to set up to the “Display” SubSection of the “Screen” Section of the xorg.conf file, like this:

Virtual 1440 900

This creates a virtual monitor/screen with the dimensions indicated. (Got this information from here). Here is the complete xorg.conf file I am using for my NAS:

# nvidia-settings: X configuration file generated by nvidia-settings
# nvidia-settings:  version 440.64

Section "ServerLayout"
    Identifier     "Layout0"
    Screen      0  "Screen0" 0 0
    InputDevice    "Keyboard0" "CoreKeyboard"
    InputDevice    "Mouse0" "CorePointer"
    Option         "Xinerama" "0"
EndSection

Section "Files"
EndSection

Section "Module"
    Load           "dbe"
    Load           "extmod"
    Load           "type1"
    Load           "freetype"
    Load           "glx"
EndSection

Section "InputDevice"
    # generated from default
    Identifier     "Mouse0"
    Driver         "mouse"
    Option         "Protocol" "auto"
    Option         "Device" "/dev/psaux"
    Option         "Emulate3Buttons" "no"
    Option         "ZAxisMapping" "4 5"
EndSection

Section "InputDevice"
    # generated from default
    Identifier     "Keyboard0"
    Driver         "kbd"
EndSection

Section "Monitor"
    # HorizSync source: edid, VertRefresh source: edid
    Identifier     "Monitor0"
    VendorName     "Unknown"
    ModelName      "Acer AL1917W"
    HorizSync       31.0 - 84.0
    VertRefresh     56.0 - 76.0
    Option         "DPMS"
EndSection

Section "Device"
    Identifier     "Device0"
    Driver         "nvidia"
    VendorName     "NVIDIA Corporation"
    BoardName      "GeForce GTX 1060 3GB"
    Option	   "AllowEmptyInitialConfiguration"
EndSection

Section "Screen"
    Identifier     "Screen0"
    Device         "Device0"
    Monitor        "Monitor0"
    DefaultDepth    24
    Option         "Stereo" "0"
    Option         "nvidiaXineramaInfoOrder" "DFP-0"
    Option         "metamodes" "nvidia-auto-select +0+0"
    Option         "SLI" "Off"
    Option         "MultiGPU" "Off"
    Option         "BaseMosaic" "off"
    SubSection     "Display"
        Depth       24
	Virtual 1680 1050
    EndSubSection
EndSection

With the above xorg.conf file (which is located in /etc/X11/, so, /etc/X11/xorg.conf), I can then shutdown my NAS, unplug the mouse, keyboard, and monitor, then restart it and use nomachine to completely control the NAS. Here’s how it looks when I pull it up:

This approach will only work if you have an NVIDIA graphics card. If you don’t, you’ll need to take a different approach. This site may point you in the right direction.

UPDATE 2020-07-13: Back to XFCE

Well, I’m back to using XFCE as my desktop environment on my NAS/file server. An update completely broke KDE, making it throw errors every time I tried to reboot the file server and never even showing the desktop. KDE is just not the right option for a NAS/file server. I had the same issue with my previous file server. Lesson learned. From now on, ultra-lightweight XFCE will be my desktop environment of choice for my file server.

How to install XFCE. First, install tasksel:

sudo apt-get install tasksel

Then run tasksel and select Xubuntu desktop:

sudo tasksel

When prompted, select “lightdm” as your display manager of choice:

Go ahead and restart and you’ll be good to go. (NOTE: I’m still using the same xorg.conf file and it’s working.)

Also, I had to manually remove the NVIDIA drivers then reinstall them with the monitor connected.
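In case it helps, this is roughly the sequence I mean on an Ubuntu-based system – a sketch, so check which driver packages you actually have installed before purging anything:

sudo apt-get remove --purge '^nvidia-.*'
sudo ubuntu-drivers autoinstall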

ZFS Transfer

As I detailed in my previous NAS build, I opted to go with ZFS for storing my files, using a raidz2 array (4 x 4tb drives). It provides, IMO, the optimal balance of speed and redundancy. With just over 4tb of files on the ZFS array in my old NAS, I really didn’t want to have to copy all of those files over to the new one. As it turns out, ZFS has the ability to move the physical disks to a new machine and recreate the ZFS pool there. Hallelujah!

On the old NAS, I shut down all connections to my NAS from external devices, restarted the machine, and then used the following command:

sudo zpool export ZFSNAS

This basically sets up the ZFS pool so it is ready to be transferred to another machine. I then physically removed all four 4tb drives from the old NAS, moved them to the new one, and got everything ready to import the pool on the new NAS. I installed the following packages first:

sudo apt-get install zfsutils-linux nfs-kernel-server zfs-initramfs watchdog

Installing those also pulled in the following dependency packages:

libnvpair1linux libuutil1linux libzfs2linux libzpool2linux zfs-zed keyutils libnfsidmap2 libtirpc-common libtirpc3 nfs-common rpcbind

With those installed, I checked to see if I had any pools (I didn’t, of course, since I hadn’t set any up yet):

zpool status

Then, I used the following command to look for the pool that was spread across the four hard drives:

sudo zpool import

Beautifully, it immediately found the pool (ZFSNAS) and gave me a bunch of useful information:

  pool: ZFSNAS
    id: 11715369608980340525
 state: ONLINE
status: The pool was last accessed by another system.
action: The pool can be imported using its name or numeric identifier and the '-f' flag.
   see: http://zfsonlinux.org/msg/ZFS-8000-EY
config:

        ZFSNAS      ONLINE
          raidz2-0  ONLINE
            sdd     ONLINE
            sdb     ONLINE
            sde     ONLINE
            sdc     ONLINE

Honestly, I cheered when I saw this. It was going to work. I then used the following command to import my pool:

sudo zpool import -f ZFSNAS

Then, just to be sure everything worked, I checked my pool status:

zpool status

And up popped my zpool. It did tell me that I needed to upgrade my pool, which I proceeded to do. And, with that, my pool from my old NAS was transferred to my new NAS intact. No need to copy over the files. ZFS is definitely the way to go when it comes to NAS file storage. I’m increasingly happy with ZFS!
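
For reference, the pool upgrade it suggested is a single command (shown here with my pool name; substitute your own):

sudo zpool upgrade ZFSNAS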

NOTE: I bought a 10TB external drive and copied everything onto it first, just to be extra cautious. Turns out, I didn’t need to do that. But, better safe than sorry.
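
If you want to make a similar just-in-case copy, rsync is well suited to a one-off backup like this. A sketch, assuming the pool is mounted at /ZFSNAS and the external drive at /mnt/backup (both paths are placeholders):

sudo rsync -avh --progress /ZFSNAS/ /mnt/backup/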

Mounting Fileserver Across the Internet

This trick blew my mind. I am using EasyDNS to be able to remote into my fileserver from wherever I am in the world (since I don’t have a static IP from my ISP), and NoMachine works like a champ across the internet. But I had a crazy thought the other day: “Wouldn’t it be nice if I could actually mount my fileserver directly onto my work computer?” I don’t typically need to do this, but I have been working with some files on my fileserver for a project and wanted direct access. Turns out, you can (thank you, StackExchange), and it’s so easy it will blow your mind. Linux rocks!
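
The tool that makes this possible is sshfs, which mounts a remote directory over an ordinary SSH connection. If it isn’t already installed on the machine you’re sitting at, on Ubuntu/Debian it should just be:

sudo apt-get install sshfs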

Since I had already set up SSH access on my fileserver, all I needed to do was learn about sshfs, as I already had all the necessary packages installed. (If you haven’t set up SSH on your fileserver and opened the relevant ports on your home network, do that first.) Then, on my computer at work, I created a new folder in my home directory (/home/ryan/fileserver/). I then used the following command in the terminal to mount my fileserver’s shares on my work desktop:

sshfs ryan@myfileserver.easydns.org:/directory/to/share /home/ryan/fileserver

You obviously need to change the username for your fileserver (what comes before the “@”) and the address of your fileserver (the hostname or IP that comes after the “@” sign). You also need to know which directory on your fileserver you want to mount (what comes after the “:/”) and the local folder where you want to mount it (the last piece). Once you have everything set up correctly, hit Enter and it will ask for your password. Once I entered mine, all of the files on my fileserver were securely shared over SSH to my work desktop:

Those are the main directories from my fileserver at home.

Note, this isn’t a permanent share. Once you shut down your work computer, the share will go away, but it’s just a quick repeat of that command to get the files mounted right back. Also, the speed with which you can access the files will depend on your home and work networks. Luckily, mine are both very fast, so it’s basically like being on my home network as I work with the files I need. Absolutely amazing!
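
If you want to disconnect the share without shutting down, you can likely unmount it with fusermount, pointing it at the same local folder you mounted to:

fusermount -u /home/ryan/fileserver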


LibreOffice – exporting high-resolution TIFF/TIF files

As a scholar who regularly publishes work with charts and graphs, I’m often confronted with varied requirements from publishers for the format in which they want those figures. Most often, the required format is a TIFF/TIF file, typically at least 300 dpi and somewhere around 1500×1500 pixels. I make most of my charts in LibreOffice Calc, though occasionally I make some in R as well.

I have recently been editing several volumes in which I had to manage other scholars’ charts and graphs as well. As the editor, I had to make sure the final images met the criteria detailed above. Since scholars often make charts and graphs in Word, it took a little finagling to come up with a quick and easy way to export the images in the format the publisher needed. Now that I’ve finally figured it out, I’m posting it here so I remember how to do this in the future. Luckily, LibreOffice handles these formats extremely well (for the most part), which makes this quite easy.

LibreOffice Calc

Assuming you have created your chart/graph in LibreOffice Calc, exporting it into a TIF format should be fairly easy, though it requires an unfortunate extra step. Here’s a chart I created in LibreOffice Calc:

The LibreOffice developers make it so you can just right-click on the chart and select “Export as Image.”

When you do this, you’ll get this pop-up window asking where you want to save the image and, more importantly, the format:

Here’s where you run into a problem. If you select TIFF, you’ll get a .tif file, but the resolution will be basically the same as what you see on your screen, like this:

Ideally, LibreOffice would ask you what DPI and resolution you want once you select the TIFF format and would then export the chart in that resolution and you’d be done in one simple step. Alas, that’s not an option when you export from LibreOffice Calc.

What you can do instead is copy your chart, open an empty LibreOffice Writer document, and paste it into the document, like this:

Then, go up to File -> Export, like this:

You’ll get the same prompt as before, asking for the file name and format. Name the file, select the PNG format, and click “Save.” What you’re looking for is the window that pops up next:

In this window, you can change the DPI to 300 (do this first) and then change the width and height (they are typically linked, so changing one automatically changes the other). When you’re done, click “OK.” The file you’ll get will be 300 DPI and whatever pixel dimensions you chose:

Now, open that file with any image editing software (I’m using Gwenview on KDE for this example) and simply select File->Save As:

Now select the TIFF format. Once you save it, you’ll have a TIFF file with the proper DPI and resolution per the publisher’s instructions. The resulting file will be huge, but it will meet those requirements:
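
If you’d rather skip the image-editor step entirely, ImageMagick (assuming you have it installed) can likely convert the PNG to TIFF and set the 300 DPI metadata in one command; the file names here are just placeholders:

convert chart.png -units PixelsPerInch -density 300 chart.tif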

NOTE:

The other way to do this is to copy your chart into a LibreOffice Draw file whose page has been enlarged to a huge area (e.g., 4000×4000 pixels). You can then expand your chart to fill the area in the LibreOffice Draw document and then export the image. Depending on the original format of your chart/graph, you may have to resize the text if you do this, which is a pain, but it will give you a much larger image file. Still, the approach above is much easier.

The tutorial above used LibreOffice 6.4.4.2.
