Kubuntu – Audio CD Ripping

I mostly buy digital audio these days. My preferred source is Bandcamp, as they provide files in FLAC (Free Lossless Audio Codec). However, I recently bought a CD (Last Night’s Fun by Scartaglen) because no digital download was available, and in the process I realized two things: there are lots of options for ripping the audio from a CD on Linux, and it is quite a process to get the files ripped, tagged, properly named, and stored in my library. This is my attempt to summarize that process.

Format/Codec

First, you need to decide what format you want the audio from the CD in. As noted, I prefer FLAC these days. Given how relatively inexpensive storage is, I no longer need to scrimp on space for the most part. If space were an issue, ripping the files to mp3 at a variable bit rate averaging around 192 kbps would probably be the optimum balance between decent quality and small size. But I prefer the best quality sound with no real regard for the size of the resulting files. It helps that I store all my music on a dedicated file server that runs Plex. That solves two problems: I have lots of space, and Plex will transcode the files if I ever need that done (if, for example, I want to store the music on my phone in a different format). So, my preferred file format is FLAC. (Another option is OGG, but I find that not as many audio players work well with OGG.)

There is another issue that I recently ran into: single audio files with cue sheets. Typically, people want their audio in individual files, one per song. However, if you want to accurately represent an audio CD, the best approach is to rip the audio as a single file with a corresponding cue sheet. The cue sheet keeps an exact record of the track boundaries on the CD, so with the resulting two files the audio CD can be recreated and burned back to a disc. I have no real intention of burning the audio back to a CD (I want everything digital so I can store it on my file server), but it’s good to know about this option. Those who opt for this approach typically store the audio in one of two formats, .flac or .ape, with a .cue file storing the timing of the tracks. The .ape format is proprietary, however, so it is definitely not my preferred approach.

As a quick illustration of how file format is related to size, I ripped my demonstration CD, Last Night’s Fun by Scartaglen, into a single FLAC file and a single mp3 file (at a variable bit rate averaging 192 kbps) and put the resulting files into the same folder so you can see the size difference:

As you can see, the FLAC rip resulted in a 222.9 MB file compared to the mp3 file’s 49.4 MB. The FLAC file is about 4.5 times the size of the mp3 file. A higher-quality mp3 rip at a constant 320 kbps resulted in a 54.8 MB file. A good rule of thumb is that a FLAC file will be somewhere between 3 and 5 times the size of an mp3 file. Thus, if space is an issue but you want good quality, ripping your music to the highest-quality mp3 (320 kbps) is probably your best option. If space isn’t an issue and you care more about quality, FLAC is the way to go.
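If you want to check the ratio for your own rips, the comparison is easy to do from the terminal. This is just a sketch: the file names are placeholders for your own files, and `stat -c %s` is the GNU coreutils form found on Kubuntu.

```shell
# Show human-readable sizes side by side (file names are placeholders
# for your own rips):
ls -lh audiofile.flac audiofile.mp3

# Or compute the ratio directly; "stat -c %s" prints a file's size in
# bytes (GNU coreutils syntax):
flac_bytes=$(stat -c %s audiofile.flac)
mp3_bytes=$(stat -c %s audiofile.mp3)
awk "BEGIN {printf \"FLAC is %.1fx the mp3 size\n\", $flac_bytes / $mp3_bytes}"
```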

NOTE: I also ripped the disc to OGG and the file size was 38 MB.

Ripping Software

First, if you’re planning on ripping to FLAC on Linux, you’ll need to install FLAC. It is not installed in most distributions by default. This can be done easily from the terminal:

sudo apt-get install flac

Without FLAC installed, the software below won’t be able to rip to FLAC.

K3b

K3b is installed in Kubuntu 20.04 by default and is, IMO, a good interface for working with CDs and DVDs. When I inserted my demonstration CD into my drive, Kubuntu gave me the option of opening the disc in K3b. When I did, K3b automatically recognized the CD, grabbed its information from a CDDB, and immediately gave me options for ripping the CD:

When you click on “Start Ripping,” you get a new window:

In this new window, you have a bunch of options. You can change the format (Filetype). With the FLAC codec installed, the options listed are: WAVE, Ogg-Vorbis, MPEG1 Layer III (mp3), Mp3 (LAME), and Flac. You can obviously change the Target Folder as well. K3b also gives you the option of creating an m3u playlist, and the “Create single file” and “Write cue file” options are where you could create the single file and cue sheet from the CD as noted above. There are also options for changing the naming structure and, under the Advanced tab, for how many times you want to retry reading the CD tracks. K3b is pretty fully featured and works well for ripping audio CDs.

Clementine

My preferred music player in Linux is Clementine. I have used a number of music players over the years (e.g., Banshee, Rhythmbox, Amarok), but Clementine has the best combination of features while still being easy to use. Clementine is in the repositories and can easily be installed via synaptic or the terminal:

sudo apt-get install clementine

Clementine also has the ability to rip audio CDs. Once your CD is inserted, click on Tools -> Rip audio CD:

You’ll get this window, which is similar to the ripping window in K3b:

If the information is available in a CDDB, Clementine will pull it in (as it did for my demonstration CD). You then have a bunch of options for the Audio format: FLAC, M4A, MP3, Ogg Flac, Ogg Opus, Ogg Speex, Ogg Vorbis, Wav, and Windows Media Audio. The settings for each of these can be adjusted in the “Options” box. One clear advantage of Clementine over K3b is that you can readily edit the titles of the tracks. Another is that Clementine can import the ripped files directly into your music library.

Ripping from a Cue Sheet

Another scenario I have run into on Linux is having a single file for the audio from a CD with a corresponding .cue sheet (the file is typically in the FLAC format, but I have also run into this in .ape format). I used to immediately turn to Flacon, a GUI that helped rip the single file into individual tracks. However, I have had mixed success with Flacon working lately (as of Kubuntu 20.04, I couldn’t get it to work). Never fear, of course, because Flacon is really just a GUI for tools that can be used in the terminal.

To split a single FLAC file with a corresponding .cue sheet into individual tracks, you’ll need to install “shntool”:

sudo apt-get install shntool

(NOTE: It’s also a good idea, though not required, to install the suggested packages “cuetools,” “sox,” and “wavpack.”) Assuming you have already installed “flac” as described above, splitting a single FLAC file into individual tracks is fairly straightforward. The easiest way is to navigate to the folder containing the FLAC file (e.g., “audiofile.flac”) and the cue sheet (e.g., “audiofile.cue”), then use the following command at the terminal:

shnsplit -f audiofile.cue -o flac audiofile.flac 

Breaking the command down: “shnsplit” calls the shnsplit program, which is part of the “shntool” package. The “-f” option specifies the file to read split points from (here, the cue sheet). The “-o” option specifies the output file format, in this case “flac,” and the final argument is the single FLAC file that you want to split.
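A couple of optional refinements on the same command, assuming the suggested “cuetools” package is installed (adjust the name template to taste):

```shell
# Split as before, but name each track "NN - Title" using the cue
# sheet's metadata (-t sets the output name template; %n expands to
# the track number, %t to the track title):
shnsplit -f audiofile.cue -t "%n - %t" -o flac audiofile.flac

# Optionally copy the cue sheet's artist/album/title fields into the
# new files' tags (cuetag.sh ships with the cuetools package; the glob
# matches the numbered tracks but not the original file):
cuetag.sh audiofile.cue [0-9]*.flac
```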

Here’s a screenshot of me splitting the single FLAC file from my demonstration CD into individual FLAC files:

If you happen to run into a single audio file in the .ape format, shntool probably won’t be able to read it, so the above command won’t work. However, a simple workaround is to convert the file to FLAC format using ffmpeg, which can read it. Here’s the command you could use from the terminal:

ffmpeg -i audiofile.ape audiofile.flac

That command calls ffmpeg (which you probably already have installed) and converts the .ape file into a .flac file, which can then be split using the command above (assuming you have a corresponding cue sheet).
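If you have several .ape files to deal with, the same conversion can be run in a loop; this is a minimal sketch using the shell’s built-in suffix stripping:

```shell
# Convert every .ape file in the current folder to FLAC; ${f%.ape}
# strips the old extension, so "audiofile.ape" becomes "audiofile.flac":
for f in *.ape; do
    ffmpeg -i "$f" "${f%.ape}.flac"
done
```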

Tagging Software

Let’s say I have successfully ripped my files into my desired format and now I want to tag them. There are a number of software packages that can do this, but my preferred software is Picard by MusicBrainz. Picard is open source, which is awesome, but it also interfaces with the MusicBrainz website and pulls in information that way, which means your files will get robust tagging information. If you pull in all the information from MusicBrainz, not only will the artist and album be tagged, but so too will lots of additional information, depending on how much was entered into the database by whoever added the album in the first place. (NOTE: Clementine also interfaced with MusicBrainz, but this broke in 3.1. Once it broke, I started using Picard directly and realized that it has a lot more features than Clementine’s implementation, so now I just use Picard. However, you could try doing this in Clementine as well.)

Again, I’ll use my demonstration CD to illustrate how this is done. I ripped the tracks into individual FLAC files above. Those tracks are completely devoid of tags: whatever software I use to play them won’t know what the audio files are. The screenshot below illustrates this. I used MediaInfo (a GUI for pulling information from audio and video files in Linux) to pull the available information from one file. It shows the format and length but provides no information about the artist or album, which it would if the file had tags.

We’ll use Picard to find the album and add all the tags. First, of course, install Picard:

sudo apt-get install picard

Open the program. Now, since my files have no tag information, I’m going to click on Add Files (you can also add a folder with subfolders, which then has Picard run through multiple audio files, a great feature if you are tagging multiple albums at the same time).

You’ll get a new window where you can select the audio files you want to add. Select them and then click “Open.”

If the files have existing tags, Picard will do its best to group the tracks together and may even associate the files with the corresponding albums. In my case, it simply puts the files into the “Unclustered” category:

Since I pulled them all in from the same folder, I can select the tracks and then click on the “Cluster” button in the toolbar and Picard will cluster the files.

Clustering is a first step toward finding the corresponding album information. Once the tracks are grouped together, they show up in the Cluster category:

Without any tag information, Picard is unlikely to find the corresponding album if you select the cluster and then click on “Lookup.” (If the files had some existing tag information, that might be enough, and “Lookup” would be worth trying first.) In this case, that won’t work, so I’m going to use a different approach: right-click on the cluster and select “Search for similar albums.”

This gives you a window where you can enter search terms to try to find the corresponding album in the MusicBrainz database. Based on the limited information it has, Picard will try to find a corresponding album automatically, but with no tags there is virtually no information to work from, so it likely won’t succeed. Generally, I have better luck finding albums if I search for the album title followed by the artist, like “Last Night’s Fun Scartaglen,” then hit Enter:

Once you find the correct album, select it and then click on “Load into Picard” at the bottom of the window.

Once you do that, the album will move to the right side of the screen. If all of the tracks are included and Picard is able to align them with the corresponding information it has from the database, the CD icon will turn gold. If there is a little red dot on the CD, that means Picard has tag information that can be saved to the individual tracks.

Click on the album and then click the “Save” button in the toolbar and the tag information will be saved to the files.

You can track the progress as tracks will turn green once the information has been successfully added to the tags.

You can also set up Picard to modify the file names when it saves information to the files. Click on “Options” then click the checkmark next to “Rename Files.”

I typically let Clementine rename my files when I import them into my music library, so I don’t worry about this in Picard, but it is a nice option. Finally, here’s that same MediaInfo box with the tagged file, showing that information about the artist and track is now included in the file:
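If you do enable renaming, the pattern Picard uses lives under Options -> File Naming and is written in Picard’s %variable% script syntax. As one illustration (my own example, not a Picard default; adjust to taste), this pattern files tracks into artist/album folders with zero-padded track numbers:

```
%albumartist%/%album%/$num(%tracknumber%,2) %title%
```

Here $num() pads the track number to two digits, so track 3 of an album becomes “03 Track Title.flac”.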


LibreOffice – How To Change Icons to a Darker Theme

I prefer darker themes for my desktop environment (Kubuntu 20.04) and browser (Brave). For the most part, this isn’t a problem, but it does cause an issue with some applications, including LibreOffice (6.4.4.2).

One of the first things I do when I install Kubuntu is switch my desktop environment from the default theme (System Settings -> Global Theme), Breeze, which is a lighter theme, to Breeze Dark. You can see the differences in the screenshots below:

This is the Breeze theme that is the default in Kubuntu 20.04
This is the Breeze Dark theme that I typically use in Kubuntu.

The problem is with the icon set in LibreOffice. With the default Breeze theme, the icons are very visible and work great:

These are the default icons in LibreOffice 6.4.4.2 in Kubuntu 20.04 with the default Breeze theme.

The problem comes when I switch the theme to Breeze Dark. Here is how the default Breeze icons look in LibreOffice when I switch the theme:

The default icon set, Breeze, in LibreOffice when the Kubuntu Global Theme is switched to Breeze Dark.

Perhaps it’s just my aging eyes, but those icons are very difficult for me to see. The solution is quite simple, though I always have a hard time remembering where to find it (thus this tutorial). All you need to do is switch the icon set in LibreOffice. Several icon sets for dark themes come pre-packaged with the standard version of LibreOffice that ships with Kubuntu and is in the repositories. It’s just a matter of knowing where to look.

In LibreOffice, go to Tools -> Options:

You’ll get this window. You want the third option down under “LibreOffice”, “View”:

Right at the top of this window you’ll see “Icon style.” That’s the setting you want to change. If you click on the drop-down arrow, you’ll see six or so options. Two are specifically for dark themes, Breeze (SVG + dark) and Breeze (dark). Either of those will work:

I typically choose Breeze (SVG + dark). Select the dark theme you want, then click on OK and you’ll get a new icon set in LibreOffice that works much better for dark themes:

These icons are much more visible to my aging eyes.

Et voila! I can now see the icons in the LibreOffice toolbars.



Hillsborough County, FL – Fall 2020 Primaries and Elections

In researching candidates for elections, I have taken to posting links to the information I find on my website to help others. Note: I’m a registered Democrat only so that I can vote in the Democratic primaries. I would prefer to be considered an Independent voter, as I vote by the candidate, not by party. Here’s what I’ve found…

Update 07-27-2020: The Tampa Bay Times has put together a nice voter information guide that is pretty comprehensive. It doesn’t include links to candidates’ websites, but provides a fair amount of information. I’ve been wishing for something like this for a long time. I highly recommend it.

Clerk of Circuit Court and Comptroller

Tampa Bay Times article on the race. Tampa Bay Times on Stuart entering the race.

Kevin Beckner

Party: Democrat
Background Information: Hillsborough County Commissioner from 2008 to 2016; Executive Director of the Hillsborough County Civil Service Board (which heard appeals of employee discipline and termination) until the CSB was abolished in 2019; BA in Criminal Justice from Indiana University; Indiana Law Enforcement Academy degree in 1990; (hypes his credentials by referencing a Harvard Leadership program that isn’t anything meaningful); financial planner by profession
Finances (per voterfocus.com)
Websites: election website, Twitter,
Endorsements: lots of labor unions,

Cindy Stuart

Party: Democrat
Background Information: Hillsborough County School Board member from District 3; has a degree in business from Florida International University; worked in insurance before running for the school board
Endorsements: outgoing clerk, Pat Frank (possibly because Beckner ran against her in an earlier campaign)
Finances (per voterfocus.com)
Websites: election website, Twitter, Facebook,

Tax Collector

Tampa Bay Times endorsed Nancy C. Millan. Tampa Bay Times article on April Griffin entering the race.

April Griffin

Party: Democrat
Background: Served on the Hillsborough County School Board for 3 terms; was chair of Hillsborough County School Board twice; BA in Organizational Studies from Eckerd College; Hillsborough County native; owns a software development company
Finances (per voterfocus.com)
Websites: election website, Facebook, Twitter, Instagram

Nancy C. Millan

Party: Democrat
Background: Has worked for the tax collector’s office for decades (31 years), rising through the ranks; served on a number of boards, including the Board of Trinity School for Children from 2005-2008 (full disclosure – my son attended Trinity School for Children from about 2012 until 2020);
Finances (per voterfocus.com)
Websites: election website, Facebook, Twitter, Instagram
Personal Commentary: I received a flyer for Nancy Millan that was riddled with typos. That lack of attention to detail bothers me. Millan is also purchasing ads on Google’s search engine. The Tax Collector’s website, hillstax.org, does use Cloudflare to protect against DDoS attacks, though the protection’s settings wouldn’t let me access the website, so someone set it up wrong.

Board of County Commissioners District 3

Tampa Bay Times story on Rick Fernandez entering the race. Tampa Bay Times endorsed Thomas Scott.

Ricardo “Rick” Fernandez

Party: Democrat
Background: Hillsborough County native (born in District 3); Navy veteran; Lawyer and former Tampa Heights Civic Association president; currently a career coach and recruiter for law firms;
Finances (per voterfocus.com)
Websites: Twitter, LinkedIn

Gwen Myers

Party: Democrat
Background: Tampa native; graduate of FAMU; worked for Hillsborough County for 25 years; now retired; served on a variety of local boards (e.g., Salvation Army Adult Rehabilitation Council)
Endorsements: County Commissioner Pat Kemp among others
Finances (per voterfocus.com)
Websites: election website, Facebook, Twitter, Instagram
Personal Commentary: Likes the color green. Platform seems to be focused on public transit.

Frank Reddick

Party: Democrat
Background: Hillsborough County native; graduated from Paine College in Augusta, GA; served on Tampa City Council for 8 years; President and CEO of Sickle Cell Association, served on lots of local boards
Finances (per voterfocus.com)
Websites: election website

Thomas Scott

Party: Democrat
Background: native of Macon, Georgia; graduate of UNF with a degree in Criminal Justice and a minor in Sociology; has an MA in Biblical Studies from the Assemblies of God Theological Seminary; Army veteran; moved to Tampa in 1980 to be a pastor (22nd Street Church of God); served as Hillsborough County Commissioner from 1996 to 2006; served on Tampa City Council from 2007 to 2011; was appointed by Rick Scott to the State of Florida Elections Commission in 2015
Endorsements: Chad Chronister – Hillsborough County Sheriff; Les Miller – Hillsborough County Commissioner District 3 (whom he would be replacing); Tampa Bay Times
Finances (per voterfocus.com)
Websites: election website, Facebook, Twitter, Instagram
Personal Commentary: He sent a flyer in the mail.

Sky U. White

Party: Democrat
Background: Hillsborough County native; nurse; community organizer; owns REVIVED magazine; served on a lot of boards
Finances (per voterfocus.com)
Websites: election website, Facebook, Twitter, Instagram
Personal Commentary: Youngest candidate (per her website); most progressive candidate as well; probably the only candidate who would bring fresh ideas to the Hillsborough County Commission.

Before considering candidates for the various court benches, you should check out the Judicial Candidate Forum video and information hosted by the North Tampa Bar Association.

Circuit Judge, 13th Judicial Circuit Group 9

Tampa Bay Times endorsed John Schifino.

Kelly Ayers

Party: nonpartisan race
Background: BS in Journalism and Communication from University of Florida; law degree from Stetson; practicing law for 26 years; owns three law firms; visiting professor at USF; editor of Stetson Law Journal
Endorsements:
Personal Commentary: She misspelled Communication (she put an “s” on the end) – personal pet peeve;
Judicial Candidate Forum responses
Websites: election website,

John Schifino

Party: nonpartisan race
Background: father was a lawyer; lived in Tampa for a long time; on a number of boards;
Endorsements: Firefighter unions; Janet Cruz; Bob Buckhorn; Tampa Bay Times
Personal Commentary: Not much about his views on his website; sent a flyer
Judicial Candidate Forum responses
Websites: election website, Facebook, LinkedIn

Circuit Judge, 13th Judicial Circuit Group 19

Tampa Bay Times endorsed Michael J. Scionti.

Ashley Ivanov

Party: nonpartisan race
Background: from Charleston, SC; graduated from The George Washington University; probate and estate planning attorney; admitted to bar in Florida in 2015; founded her own firm in 2018; limited experience; clerked for the federal government; volunteers with her church in Brandon
Endorsements:
Personal Commentary:
Judicial Candidate Forum responses
Websites: election website

Michael J. Scionti

Party: nonpartisan race
Background: Tampa native; served as a state representative, army officer, US diplomat; originally elected in 2014;
Endorsements: Tampa Bay Times
Personal Commentary: He has a Wikipedia page!
Judicial Candidate Forum responses
Websites: election website, Wikipedia

Circuit Judge, 13th Judicial Circuit Group 30

Tampa Bay Times recommends Helene Daniel.

Danny Alvarez

Party: nonpartisan race
Background: BA in Journalism from University of Florida; family is from Cuba; played sports in college and was in the ROTC; admitted to Florida Bar in 2008; Army infantry officer; worked for Hillsborough County Sheriff’s office since 2017 as a special projects manager and communications chief; motivational speaker
Endorsements:
Personal Commentary: I have never trusted anyone who calls themselves a “motivational speaker”. “His family escaped communist Cuba” – phrasing from his website strongly suggests very conservative views.
Judicial Candidate Forum responses
Websites: election website, Facebook, Instagram

Helene Daniel

Party: nonpartisan race
Background: born in France; admitted to Florida Bar in 1986; experience with family law, juvenile law, and insurance; AV Preeminent Rating by Martindale-Hubbell
Endorsements: Tampa Bay Times, La Gaceta
Personal Commentary: Loves her dogs
Judicial Candidate Forum responses
Websites: election website, Facebook, Twitter, Instagram, LinkedIn

Circuit Judge, 13th Judicial Circuit Group 31

Tampa Bay Times endorsed Greg Green.

Scott Bonavita

Party: nonpartisan race
Background: born in Glens Falls, NY; moved to Tampa at 9; Gaither High School graduate; played soccer; attended the University of Tampa to play soccer (full disclosure, I’m a professor at UT), then left to pursue a career with the Rowdies – he played only one game for them; BA in Psychology from USF; JD from St. Thomas University School of Law in Miami; former prosecutor; has owned his own firm since 2012; handles business law; certified court mediator; emphasizes that he is a single father; is also a personal trainer and teaches CrossFit
Endorsements:
Personal Commentary: really proud of his son
Judicial Candidate Forum responses
Websites: election website, Facebook, Instagram, Law Firm

Gary Dolgin

Party: nonpartisan race
Background: Florida native; lived in Tampa since he was 2; attended Emory University; law degree from University of Florida; admitted to Florida Bar in 1990; was an assistant state attorney and public defender; owned his own firm focusing on family law since 1993; board-certified in his field; author of a book on family law; volunteers regularly; member of Congregation Schaarai Zedek
Endorsements:
Personal Commentary: Seems like the most humble candidate.
Judicial Candidate Forum responses
Websites: election website,

Greg Green

Party: nonpartisan race
Background: Tampa native; played basketball and football at Chamberlain High; got his law degree in 1999; worked as an assistant state attorney; his current law practice focuses on divorce and child custody cases; volunteer flag football coach at Robinson High School; biblical counselor at the Crossing Church;
Endorsements: various firefighter unions; Tampa Bay Times
Judicial Candidate Forum responses
Websites: election website, Facebook

Circuit Judge, 13th Judicial Circuit Group 39

Tampa Bay Times endorsed Steven Scott Stephens.

Wendy Joy DePaul

Party: nonpartisan race
Background: graduated from FSU and Stetson University College of Law; admitted to the Florida Bar in 1997; has a CPA; managing partner of a law firm; practice is a mix of family law, bankruptcy, and foreclosure; provides free legal assistance to the poor; active with her congregation
Endorsements:
Personal Commentary: loves dogs
Judicial Candidate Forum responses
Websites: election website,

Steven Scott Stephens

Party: nonpartisan race
Background: incumbent – has been a judge for 15 years; was appointed in 2005 by Jeb Bush; has a PhD in business as well as advanced degrees in computer science and engineering; former faculty member at USF, UT, and Stetson University; published author on trial court and family law
Endorsements: Tampa Bay Times
Personal Commentary: Who gives their kid the same first name as their last name?; He has ads on Google; Definitely playing up the fact that he is the incumbent.
Judicial Candidate Forum responses
Websites: election website;

County Court Judge Group 7

Tampa Bay Times endorsed Bill Yanger.

Nancy L. Jacobs

Party: nonpartisan race
Background: from Miami; University of Florida for her undergraduate degree; JD from University of Miami; former Hillsborough prosecutor; has owned her own firm since 1993, handling family law, criminal defense, and estates; provides free legal services on behalf of veterans, animal welfare groups, and youth organizations
Endorsements: a number of judges and attorneys
Personal Commentary: rescues dogs
Finances (per voterfocus.com)
Judicial Candidate Forum responses
Websites: election website, Facebook

Monique Scott

Party: nonpartisan race
Background: BA in Criminology and Psychology from USF; former Tampa police officer (left for health reasons) and public school teacher; worked as an assistant state attorney; accident attorney; volunteers with epilepsy groups; married to a chiropractor
Endorsements:
Personal Commentary:
Finances (per voterfocus.com)
Judicial Candidate Forum responses
Websites: election website, Franchi Law firm

Rickey “Rick” Silverman

Party: nonpartisan race
Background: from New York; grew up in a blue-collar family; first practiced in Miami then moved to Tampa (in 1995); wife has a PhD in Microbiology; top-rated traffic attorney;
Endorsements:
Personal Commentary:
Finances (per voterfocus.com)
Judicial Candidate Forum responses
Websites: election website,

Bill Yanger

Party: nonpartisan race
Background: Admitted to Florida Bar in 1989; admitted to Texas bar in 1986; graduate of Jesuit High School; went to University of Florida for undergrad; South Texas College of Law for law degree; founder of Yanger Law Group; works on complex business litigation; former Chair of the Tampa Chamber of Commerce; Board of Fellows member at the University of Tampa (full disclosure, I am a professor at the University of Tampa); Presbyterian – goes to Palma Ceia Presbyterian Church
Endorsements: local firefighters unions, Tampa City Council members Guido Maniscalco, Charlie Miranda, and Luis Viera; Tampa Bay Times
Personal Commentary: He drives a truck, per his flyer he circulated (seems important to him).
Finances (per voterfocus.com)
Judicial Candidate Forum responses
Websites: election website, Facebook, Instagram

School Board Member District 1

The Tampa Bay Times recently announced their endorsements for School Board in Hillsborough County. Good Tampa Bay Times article with information on the candidates in this race.

Nadia Combs

Party: nonpartisan race
Background: BA in Social Studies Education and MA in Educational Leadership from USF; taught in Japan; taught in Hillsborough County Schools for 10 years; founded a company in 2005 as part of Supplemental Educational Services, providing free tutoring to students in Hillsborough County; opened the Brighton Learning tutoring center in 2014
Endorsements: Tampa Bay Times
Personal Commentary:
Finances (per voterfocus.com)
Websites: election website, Brighton Learning

Steve Cona

Party: nonpartisan race
Background: Tampa native; Bachelor’s from USF; CEO of Associated Builders and Contractors Florida Gulf Coast Chapter; on the Board of Trustees of Hillsborough Community College; platform is to improve Florida’s skilled labor force; incumbent
Endorsements:
Personal Commentary: Platform is three-fold: fiscal accountability, school security, and addressing maintenance problems in schools. Raised almost 10 times as much money as all the other candidates combined. Pretty telling that the Tampa Bay Times didn’t endorse him as the incumbent.
Finances (per voterfocus.com)
Websites: election website,

Ben “Floridaman” Greene

Party: nonpartisan race
Background: No website. Not a lot of information about him.
Endorsements:
Personal Commentary: Appears to be running a protest campaign. Was thrown out of a School Board meeting. Here’s a video of him talking to the School Board.
Finances (per voterfocus.com)
Websites:

Bill Person

Party: nonpartisan race
Background: Retired educator in Hillsborough County; Vietnam War veteran
Endorsements:
Personal Commentary: Not much about him available.
Finances (per voterfocus.com)
Websites: LinkedIn

School Board Member District 7

Tampa Bay Times article about the candidates in this race.

Lynn Gray

Party: nonpartisan race
Background: worked for over 20 years as a teacher in Tampa; incumbent on the school board; platform – healthier kids (healthier foods and recess); more support for students; improved literacy
Endorsements: Tampa Bay Times
Personal Commentary:
Finances (per voterfocus.com)
Websites: election website, Facebook, Twitter, Instagram

Sally A. Harris

Party: nonpartisan race
Background: South Tampa native; owner of Circle C Ranch Academy – early care and education company; focus is on safety, discipline, and management
Endorsements:
Personal Commentary: She seems to care more about policing the kids than educating them.
Finances (per voterfocus.com)
Websites: election website

Jeffery Alex James Johnson

Party: nonpartisan race
Background: From Jacksonville, FL; has degrees from Warner University (private Christian university) and St. Thomas Christian University (fake online university with no accreditation) – the degrees are all highly suspect; runs a Girls Summit; works as a Senior Manager of Neighborhood Initiatives for United Way Suncoast
Endorsements: County Commissioner Les Miller; State Representatives Wengay Newton and Dianne Hart
Personal Commentary: If he isn’t knowledgeable enough to know that a doctorate from an online diploma mill with no accreditation isn’t a real degree, he has no business being on the school board. Website is filled with typos. (I try not to endorse candidates on this page, but I oppose this candidate.)
Finances (per voterfocus.com)
Websites: election website, Facebook, Twitter, Instagram

Angela Schroden

Party: nonpartisan race
Background: Tampa native; lives on Davis Islands; EdD from USF; literacy consultant and adjunct professor at USF in Educational Leadership and Policy Studies; volunteers at Hyde Park United Methodist Church and South Tampa Community Bible Study; worked in Hillsborough County Schools for 15 years
Endorsements: local teachers and principals
Personal Commentary:
Finances (per voterfocus.com)
Websites: election website, Facebook, Twitter, Instagram


Richard Bartlett’s claimed COVID-19 cure – A Skeptical Response

A family member sent me this video interviewing Richard Bartlett, MD, a family practice doctor in Odessa, TX. In the video, Dr. Bartlett claims to have found the cure for COVID-19 – an inhaled steroid, Budesonide. Here is my response to my family member:

TL;DR version: This guy’s claims are not credible and his proposed treatment does not have sufficient evidence to support it.

Here’s the long version of my response:

In the sciences, responsible scholars are unwilling to make any claims, let alone really bold claims, until other scholars have verified their claims. You may recall the cold fusion debacle at the University of Utah in the 1980s in which Stanley Pons claimed (with Martin Fleischmann of the University of Southampton) that he had discovered cold fusion. He was forced to retract those claims when no other scientists could replicate his research. Basically, it is highly irresponsible (and, in a pandemic it is reckless, unethical, and dishonest) to make claims that have not been tested, verified, and validated by other experts. As I watched the video, a number of red flags popped up for me. I will detail them in turn. But, before I do, I will note one thing that makes this claim seem almost credible.

The current evidence we have suggests that two drugs may help people with COVID-19, dexamethasone and Remdesivir. Dexamethasone is a steroid and it has been shown to help but is absolutely not a cure for everyone who takes it. It cuts the risk of death by about a third for patients on ventilators (it cuts the risk of death by about a fifth for those on oxygen). Remdesivir is NOT a steroid. It is an antiviral that interferes with the production of viral RNA (as opposed to DNA). Thus, it could seem credible that inhaled steroids like Budesonide would be effective, especially since our early understanding of COVID-19 was that it was a respiratory virus. This also seems plausible because COVID-19 is often contracted by breathing in particles from infected individuals into the lungs where the virus is able to infect cells. However, more research has revealed that COVID-19 spreads to other parts of the body and causes damage in other locations (e.g., kidneys, cardiovascular system, etc.). Thus, the claim that inhaled steroids work seems plausible. But just because something “seems” plausible doesn’t mean it actually works.

UPDATE 7/20/2020: A new study suggests inhaling interferon beta may reduce the risk of developing severe disease from COVID-19 by as much as 79%. However, appropriate caution about the results is warranted as the study has not been peer-reviewed, has a small sample size, and needs replication. This study is a good illustration of how research should be done, in contrast to Dr. Bartlett’s claims.

Now, on to the red flags… 

Red Flags

As I watched the interview, a number of very serious problems surfaced. Here they are in detail.

1) The most outlandish problem with Dr. Richard Bartlett’s interview was that he was claiming things that are demonstrably untrue. He suggested that the very low death rates in Taiwan, Japan, and Iceland (among other countries) are due to the medical experts in those countries using inhaled steroids. That is demonstrably false. Iceland, for instance, closed all of its borders, tracked down every single case of COVID-19, isolated them, and eventually, stopped the virus. They also force anyone coming into the country to quarantine for two weeks – everyone! You can read about their efforts here. Similar approaches were taken in Taiwan, New Zealand, Vietnam, and South Korea. None of these countries attribute their low death rates to the use of inhaled steroids to treat patients. They used contact tracing and quarantine to minimize the number of cases. Dr. Bartlett is being dishonest and misrepresenting the facts when he claims that these countries used inhaled steroids to treat these patients when there is no evidence to support his claims. This was a major red flag suggesting he is being dishonest.

2) When Dr. Bartlett was asked how many patients he had treated, he didn’t give a direct answer. A scientist with compelling evidence would know exactly what their sample size is. I have published dozens of research articles and I make it very clear in all of them what my sample size is. Sample sizes are a basic component of any research study because other researchers need to know the research design so they can replicate it. Instead, he just kept saying that he’s treated lots of people and has had a 100% success rate. He provided no more information about the patients: How severe were these cases (we know COVID-19 cases vary in severity)? How old were they? What other comorbidities did they have? Nor did he provide any of this information in a credible format. These are serious red flags to me.

3) As noted above, responsible scientists submit their research for publication before they make claims, particularly bold claims. Dr. Bartlett’s evidence is entirely “anecdotal,” which is to say he has no real evidence at all. Unless he has kept detailed records for every single patient he has treated with clear information about their diagnosis with COVID-19, the length of time they had the disease before they were treated, other medical interventions involved, all underlying comorbidities, and can rule out all other possible medical interventions that would have helped, and can aggregate that information into a clear pattern of success, he would not be able to publish these claims. Stories are powerful. We like them. And we find them convincing. But scientists don’t find them compelling. We want evidence. Lots of it. And we need to have it verified, ideally by 2 or more experts. Dr. Bartlett’s claims are extraordinary. Extraordinary claims require extraordinary evidence. He provides none.

4) These claims have all the hallmarks of a conspiracy theory. The video was posted on July 3rd. If this was the cure, major news outlets around the world would have picked this up. So far, none of them are touching this. Conspiracy theorists will point to this and say that it is evidence that there is a conspiracy against Dr. Bartlett. But that is the problem with conspiracy theorists – when something does happen, it supports their conspiracy; and when nothing happens, it also supports their conspiracy. It’s virtually impossible to convince conspiracy theorists that they are wrong because all the evidence, including the absence of evidence, is seen to support their conspiracy. Yet, doesn’t it seem far more reasonable to conclude that, if someone had found a cure nearly two weeks ago, every major news source on the planet would have put this on the front page or made this the headline in their broadcasts? Only a conspiracy theorist would look at the lack of media coverage and see a conspiracy to hide a cure.

5) Dr. Richard Bartlett has not, to my knowledge, ever published a single research article in the scientific literature. There is one Richard Bartlett with a user profile on Google Scholar – a law professor at the University of Western Australia (who, no doubt, is going to be pissed that someone with his same name is going to get a lot of negative publicity). There are some other “R Bartletts” who have published research, but those individuals do not appear to be Dr. Richard Bartlett from Texas. So you can see what the Google Scholar profile of an actual scholar looks like, here is my Google Scholar profile. The nice thing about Google Scholar is that it is publicly accessible. There are other ways to find research by scholars, but they are behind paywalls and the public cannot see them. But Google Scholar makes it quite easy to see whether someone is a recognized scholar. Dr. Richard Bartlett is not. Our most basic criterion for determining whether someone is an expert in the sciences is to see if they have published research in their stated area of expertise. In this case, Dr. Bartlett should have published research in medicine-related journals, particularly on the uses of inhaled steroids or on the treatment of viral respiratory infections. He has not. He is NOT an expert. Just because he is a medical doctor does not mean he is an expert on these topics. There are lots of MDs who push treatments that are completely ineffective and even harmful.

So, those are the red flags. I did some additional digging on this topic and here’s what I found:

a) I found two review articles by actual experts on the efficacy of inhaled steroids for treating COVID-19 (article 1 and article 2). Neither claims that this is the cure for COVID-19. Here is the summary from one of those studies, “At present, there is no evidence as to whether pre-morbid use or continued administration of ICS [inhaled corticosteroids] is a factor for adverse or beneficial outcomes in acute respiratory infections due to coronavirus.”

b) Further digging by a local news channel called Dr. Bartlett’s claims into question as well. 

So, the long answer to your question is: Dr. Bartlett is, at a minimum, not being honest (as detailed above). He is also being irresponsible in making claims that have not been verified through peer review. He is not an expert on respiratory infections or inhaled steroids. He is dishonest about his claims and evasive with his answers. The scientific literature does not support his claims, though responsible scientists admit that more research is needed.

My verdict: There is no compelling evidence that Dr. Bartlett has found “THE CURE” for COVID-19. Maybe this will help; maybe not. The only way to know for certain is to conduct rigorous clinical trials.


Examples of Religious Syncretism

I’m always on the lookout for good examples of religious syncretism and wanted a good place to store these.

Lôtān -> Leviathan

In Psalm 74, verse 14, Yahweh is described as having defeated a sea monster called Leviathan. This sea creature, its name, and its mythology derive from a Ugaritic sea monster named Lôtān, who was similarly defeated by Hadad, a god in Canaanite and Mesopotamian religions. This is a clear case of Jewish religion incorporating earlier Canaanite and Mesopotamian beliefs.

Sargon of Akkad birth story -> Moses birth story

Sargon of Akkad was the first ruler of the Akkadian Empire in the 24th to 23rd centuries BCE. Per a 7th-century BCE Neo-Assyrian text purporting to be Sargon’s autobiography, Sargon was claimed to be the illegitimate son of a priestess who put him as an infant in a basket of rushes sealed with tar and set him afloat in a river. He, of course, was found and raised, eventually becoming a great leader. This text may not be a direct antecedent of the biblical myth of Moses (Exodus 2), who was similarly put into a basket of reeds that was sealed with tar and set afloat in the river; it may instead reflect a common archetype from the time period that served as the basis for multiple origin stories. Either way, it is a clear example of religious syncretism.

Virgin Births (or god impregnations) – Perseus, Oenopion, Romulus and Remus -> Jesus

The suggestion that a human was born to a virgin or that the individual had divine heritage was not uncommon in the ancient world. The following are some of the individuals who were claimed to have been fathered by a deity: Perseus (fathered by Zeus), Oenopion (fathered by Dionysus), and Romulus and Remus (fathered by Mars).

This is another scenario in which there may not be one specific belief, myth, or story that became the virgin birth story of Jesus, but the archetype of virgin/divine births was widespread and then incorporated into Christianity.


HandBrake – Convert Files with GPU/Nvenc Rather than CPU

I don’t know exactly when HandBrake added the capability of using the GPU for encoding, but it was somewhere between 1.3.1 (the current version in the Ubuntu repositories) and 1.3.3 (the current version in the PPA). Regardless, this option offers dramatic speed improvements, particularly when working with 4K videos. In this post, I’ll explain how to use this feature in HandBrake and share some comparisons to illustrate the benefits and tradeoffs that result.

HandBrake 1.3.1 is the current option in the Ubuntu 20.04 repositories.
Version of HandBrake available from the PPA that has NVENC/GPU encoding capabilities.

How to Use NVENC GPU Encoding

First, make sure you have the latest version of HandBrake installed (as of 6/26/2020 that is version 1.3.3). Also, to use NVENC encoding, you’ll need the NVIDIA Graphics Driver 418.81 or later and an NVIDIA GeForce GTX 1050-series GPU or better, per HandBrake’s documentation. I’m using a GTX 1060 and have driver version 440.100 installed. My CPU is an AMD Ryzen 5 3600 6-core, 12-thread processor.

My GPU (GeForce GTX 1060) and my driver version: 440.100.
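If you want to check your GPU and driver from a terminal rather than the NVIDIA settings GUI, nvidia-smi (which ships with the driver) will report both. A minimal sketch:

```shell
# Print the GPU model and installed driver version. nvidia-smi ships with
# the NVIDIA driver; the fallback keeps this safe on machines without one.
if command -v nvidia-smi >/dev/null 2>&1; then
  gpu_info=$(nvidia-smi --query-gpu=name,driver_version --format=csv,noheader)
else
  gpu_info="nvidia-smi not found (NVIDIA driver not installed)"
fi
echo "$gpu_info"
```

On my machine this reports the GTX 1060 and driver 440.100 mentioned above.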

The video I’m using to illustrate how to use GPU encoding is a Blu-ray rip of Pan’s Labyrinth. I’m making a backup copy of the video to store on my file server. I used MakeMKV to pull the video off the Blu-ray disc, resulting in a 31.7GB file. I’m going to convert that to a 1080p H.265 video file that is substantially smaller in size.

Open up HandBrake and load the file you want to convert by clicking “Open Source.” Navigate to the file and select “Open.”

HandBrake will run through the file, gathering information about the codec, subtitles, audio tracks, etc. Once it’s done, you’ll need to select what format you want to convert it to. For this tutorial, I’m just going to use a General Preset, but I want to illustrate the difference in encoding speed, so I’m going to select Super HQ 1080p30 Surround rather than Fast 1080p30.

My mouse is on Super HQ 480p 30 in the image, but I selected Super HQ 1080p.

To change from CPU encoding to GPU encoding, click on the Video tab:

In the middle of the screen, you’ll see a drop-down menu labeled “Video Encoder.” Click on that drop-down menu and you should see two NVENC options: H.264 (NVenc) and H.265 (NVenc). Those are the two options for using your GPU for the encoding versus using your CPU. The H.265 codec is newer and has somewhat better compression algorithms, but I’m not going to opine as to which of these you should choose. That choice should really be driven by what device you’re going to be playing your videos on. Regardless of which you choose, those two will push the encoding to your GPU instead of your CPU.

Again, my mouse is on the wrong option here. The two options you want are: H.264 (NVenc) or H.265 (NVenc).

Make sure you’ve adjusted your Audio, Subtitles, and Tags to your preferences, then click “Start.” That’s all there is to it – you now have GPU encoding.
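If you’d rather script the encode than click through the GUI, HandBrake also ships a command-line front end. Here’s a sketch of what the equivalent command might look like – the filenames are hypothetical and the flags are worth verifying against HandBrakeCLI --help before relying on them:

```shell
# Sketch of the equivalent command-line encode (assumes HandBrakeCLI 1.3.x).
# -Z selects a built-in preset; -e overrides its encoder with the NVENC one.
input="pans_labyrinth.mkv"     # hypothetical input filename
output="pans_labyrinth.m4v"    # hypothetical output filename
cmd="HandBrakeCLI -i $input -o $output -Z 'Super HQ 1080p30 Surround' -e nvenc_h265"
echo "$cmd"
# Only run it if HandBrakeCLI is actually installed and the input exists:
if command -v HandBrakeCLI >/dev/null 2>&1 && [ -f "$input" ]; then
  eval "$cmd"
fi
```

The preset name matches the GUI label used above; swap in nvenc_h264 if you prefer the H.264 encoder.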

Benchmarks: Encoding Speed

Using the H.265 (NVenc) encoder, it took about 17 minutes to convert the 31.7GB MKV file into an 8.4GB m4v file.

My mouse is next to the estimated time for encoding the video: 16m50s.

You can also see that, even using the H.265 (NVenc) encoder, much of the processing is still passed to the CPU. This screenshot shows that my GPU is working, but it’s certainly not being stressed:

GPU load is on the left. CPU load is top right.

Using all the same options but CPU encoding instead, HandBrake took 1 hour and 15 minutes to encode the file – roughly four and a half times as long.

When I grabbed this screenshot, HandBrake was estimating 1h07m, but it ended up taking about 1h15m for the entire encode.

Here’s the same resource utilization illustration showing that HandBrake is drawing exclusively on the CPU and not the GPU:

GPU utilization is on the left; CPU utilization is upper right.

Other tests I ran showed that this difference grows with 4K video. I converted one 4K movie (about 1 hour and 40 minutes long) using H.265 (NVenc) and it took about 1 hour. Using the CPU alone, HandBrake estimated it would take 18 hours (I didn’t wait to see if that was accurate). Thus, there is a dramatic difference in encoding speed with higher-resolution and larger video files.

Benchmarks: Video Size and Quality

What about file size and video quality? I’m probably not going to do justice to the differences because I don’t have the most discerning eye for pixelation and resolution differences, but I will try to use some objective measures. The video encoded with the GPU was 8.39GB in size. The video encoded with the CPU was 3.55GB. I’m not exactly sure why the file sizes are so different given that I chose the same settings for both encodes, but this next screenshot illustrates that the NVENC encode resulted in a higher bitrate (9,419 kb/s) versus the CPU encode with a lower bitrate (3,453 kb/s). Strange.

On the left are the specs for the NVENC encoded video; the CPU encoded video specs are on the right.
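If you’d rather pull those bitrate numbers from a terminal than from a properties dialog, ffprobe (part of FFmpeg) can report them. A sketch, with a hypothetical filename:

```shell
# Report a file's overall bitrate with ffprobe (part of FFmpeg).
# The filename is hypothetical; substitute your own encode.
video="pans_labyrinth.m4v"
probe_cmd="ffprobe -v error -show_entries format=bit_rate -of default=noprint_wrappers=1 $video"
echo "$probe_cmd"
# Only run it if ffprobe is installed and the file exists:
if command -v ffprobe >/dev/null 2>&1 && [ -f "$video" ]; then
  eval "$probe_cmd"
fi
```

Running this on both encodes gives the same kb/s comparison shown in the screenshot.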

I also wanted to see if I could tell if there was a noticeable difference in quality between the two videos. I navigated to the same scene in both videos and used VLC to take a screenshot. First, the NVENC encoded screenshot:

Click for full resolution. This is the NVENC encoded video.
Click for full resolution. This is the CPU encoded video.

My 43-year-old eyes don’t see much of a difference at all.

Conclusion

If you’ve got the hardware and want to save time, using GPU encoding with HandBrake is a nice option. The end result is a much faster encode, particularly with higher-resolution videos. Some of the forums where I read about this option suggested that GPU encoding can cause quality problems. I certainly won’t challenge those assertions, but I can’t tell the difference.


2020 NAS – Plex, nomachine, Crashplan

After about a year and a half with my previous NAS (see here), I decided it was time for an upgrade. The previous NAS had served dutifully, but it was no match for 4K video (I don’t have a lot of it), it took forever to transcode files when I wanted to synchronize them with a device using Plex, and it couldn’t transcode pretty much any videos in real-time. For playing up to four files simultaneously that didn’t need to be transcoded, it worked like a champ. But it just couldn’t handle all the scenarios I was throwing at it. So, it was time for an upgrade.

Hardware

The major change, of course, was the hardware. Here are the specs for the new NAS:

Case: Fractal Design Define R5 – Mid Tower ATX case
RAM: Corsair LPX 32GB (2x16GB) 3200MHz C16 DDR4 DRAM
GPU: EVGA GeForce GTX 1060 3GB
Motherboard: ASUS AM4 TUF Gaming X570
CPU: AMD Ryzen 5 3600 6-Core
Power supply: Fractal Design Ion+ Platinum 860W PSU
SSD: Samsung 850 EVO 250GB
Hard Drives: 4 WD Blue 4TB 5400 RPM Hard Drives

Since I was repurposing the SSD and hard drives from my previous NAS, I didn’t buy those for this build. But here is the estimated cost of the NAS assuming those were included: $1,357.47 + tax.

(NOTE: Full disclosure, the links above are affiliate links to Amazon.)

A couple of notes on my hardware choices…

The case is amazing: big, roomy, and extremely well designed. It is absolutely silent, does a great job with cable management, and has plenty of room for hard drives, which I love. It’s also understated. I don’t need flashing lights on my NAS. I need a discreet box that makes no noise.

My new fileserver. It’s absolutely silent. It may be a little large, but that gives me plenty of space for the components I want.

I had originally considered a different GPU, the PNY Quadro P2000. A lot of Plex forums suggested it was a real beast when it came to hardware transcoding for video. By the time I put this together, these were really hard to find and very expensive. I looked around and the GeForce GTX 1060 had higher benchmarks at a much lower price. You can see my testing data below to see how well it works and whether I made the right choice.

I also considered an AMD Threadripper over a standard Ryzen, but I also realized that I was passing the video transcoding to a GPU with this build (my last one used the CPU), which meant I really didn’t need that much CPU power. Again, you can see in the benchmarks below how well this paid off.

The remaining hardware choices were pretty straightforward. The SSD is small because it only houses the OS. All the storage is on the hard drives. I kept the ZFS raid (see details below) and can upgrade it by simply buying bigger hard drives in the future and swapping the smaller ones out, making this NAS fairly future proof for at least the next few years.

Software

I changed very little from the previous NAS when it comes to software. I installed the latest Kubuntu LTS, 20.04. I am running the latest versions of Plex, Crashplan, tinyMediaManager, kTorrent, and nomachine. I am using NFS to access my server directly from other computers within the network. These are all detailed in my previous build.

I also kept my ZFS file system. Conveniently, I was able to transfer the raid right over to my new NAS, which I detail below.

Test Results

To test how well my new build did, I ran a couple of tests. First, I tried to simultaneously stream a 4K video to 5 devices – my TV, two Google Pixel 3 phones, a Google Pixel phone, and a laptop (all on my home network). The Google Pixel couldn’t handle the 4K video, so I ended up streaming a different 1080p video instead. My previous NAS couldn’t stream a single 4K video. This one handled all 5 streams like a champ, as shown in the video below:

I also wanted to make sure that my Plex server was using the GPU for converting videos, so I took that same 4K video and set it to synchronize with my phone at 720p. On my previous NAS, converting a 31GB 4K video to a 720p video would have taken hours. Plex passed the conversion on to the GPU and it took less than 10 minutes.

To make sure your Plex server is using your GPU, you need to set it to do so in the settings. It’s under Settings->Transcoder. Click on “Use hardware acceleration when available” and “Use hardware-accelerated video encoding.”

NOTE: To monitor the CPU, I used Ksysguard, which ships with Kubuntu. To monitor the GPU, I used “nvtop,” which can be installed from most distribution repositories.
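For reference, nvtop is a one-command install on Ubuntu-based distributions (this assumes it is in your repositories, as noted above):

```shell
# Install the nvtop GPU monitor from the distribution repositories.
# ksysguard (the CPU monitor) already ships with Kubuntu.
install_cmd="sudo apt-get install -y nvtop"
echo "$install_cmd"
# Both tools are interactive: run `nvtop` in one terminal and `ksysguard &`
# from the desktop for a side-by-side CPU/GPU view.
```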

Headless Monitor Issue

One of the problems I have been struggling with for months now (and had a similar issue with my previous NAS) was setting up a virtual screen/monitor. Here’s the problem…

I actually do the initial installation of the operating system using a monitor that I plug into the NAS. But I don’t want to keep a monitor out, sitting near my NAS/fileserver all the time. I want the NAS to be “headless” – meaning no monitor is attached. Once I get the operating system installed, I first install SSH (just in case I screw something up, ’cause you’re always going to screw something up). Then I install my remote management software, nomachine. Once I have nomachine installed, I can unplug the monitor, mouse, and keyboard and take care of everything else remotely. However, I found out with my previous NAS build that the X server that manages sessions on Kubuntu doesn’t set up a monitor if it doesn’t detect one physically attached. In other words, if I restart the computer with no monitor attached, the X server correctly detects that there is no monitor and doesn’t set one up in the X session.

As a result, when I would try to remotely connect to my NAS using a VNC client, the client would have to try to generate a monitor and the result was always problematic – either nothing would show up or it would be really slow and sluggish. I get that the X server doesn’t want to use resources to run a monitor if there isn’t a monitor connected. That’s a feature, not a bug. But it is a challenge to solve this problem. With my new NAS, it took me about 2 hours to finally figure out a solution. So I don’t have to go through this again, I’m going to post the solution here in detail.

With my previous NAS, I didn’t have a graphics card installed. I let the CPU do all the processing, which meant I needed a different solution than the one I had employed before (here’s the old solution). With my new NAS, I wanted a graphics card so I could pass off some of the video processing to the GPU instead. As noted above, I bought an NVIDIA graphics card, which Kubuntu recognized when I installed the OS and appropriately added the recommended NVIDIA drivers. My graphics card also runs the monitor. This was the key to solving my headless monitor issue.

Basically, what you need to do is tell the X Server to create a monitor even though one isn’t attached. You do this by creating a “xorg.conf” file that is stored in the /etc/X11/ directory. Once the NAS is up and running, you can actually let the NVIDIA X Server Settings application create the necessary xorg.conf file, for the most part. Here’s what you do. With the NVIDIA drivers installed and the NVIDIA X Server Settings application installed, launch the NVIDIA X Server Settings application. Go to the second option down on that window that reads, “X Server Display Configuration.”

Near the bottom of that window is a button that reads, “Save to X Configuration File.” This is nearly the answer to all our problems.

When you click on that, it will pop up a window that will allow you to preview the code it is going to add to /etc/X11/xorg.conf, which is the X server’s configuration file for the monitor. That code will be based on your current monitor – whatever is currently physically attached to the computer. Go ahead and follow the prompt and it will generate the xorg.conf file.

However, you now need to make two modifications to the file. I use “nano” to edit text files:

sudo nano /etc/X11/xorg.conf

First, you need to add the following to the Device Section of the xorg.conf file:

Option "AllowEmptyInitialConfiguration"

This code tells the X Server that it’s okay to create a screen/monitor without a physical monitor connected. (Got this information from here).

Second, you need to add the dimensions of the virtual screen you want to set up to the “Display” SubSection of the “Screen” Section of the xorg.conf file, like this:

Virtual 1440 900

This creates a virtual monitor/screen with the dimensions indicated. (Got this information from here). Here is the complete xorg.conf file I am using for my NAS:

# nvidia-settings: X configuration file generated by nvidia-settings
# nvidia-settings:  version 440.64

Section "ServerLayout"
    Identifier     "Layout0"
    Screen      0  "Screen0" 0 0
    InputDevice    "Keyboard0" "CoreKeyboard"
    InputDevice    "Mouse0" "CorePointer"
    Option         "Xinerama" "0"
EndSection

Section "Files"
EndSection

Section "Module"
    Load           "dbe"
    Load           "extmod"
    Load           "type1"
    Load           "freetype"
    Load           "glx"
EndSection

Section "InputDevice"
    # generated from default
    Identifier     "Mouse0"
    Driver         "mouse"
    Option         "Protocol" "auto"
    Option         "Device" "/dev/psaux"
    Option         "Emulate3Buttons" "no"
    Option         "ZAxisMapping" "4 5"
EndSection

Section "InputDevice"
    # generated from default
    Identifier     "Keyboard0"
    Driver         "kbd"
EndSection

Section "Monitor"
    # HorizSync source: edid, VertRefresh source: edid
    Identifier     "Monitor0"
    VendorName     "Unknown"
    ModelName      "Acer AL1917W"
    HorizSync       31.0 - 84.0
    VertRefresh     56.0 - 76.0
    Option         "DPMS"
EndSection

Section "Device"
    Identifier     "Device0"
    Driver         "nvidia"
    VendorName     "NVIDIA Corporation"
    BoardName      "GeForce GTX 1060 3GB"
    Option	   "AllowEmptyInitialConfiguration"
EndSection

Section "Screen"
    Identifier     "Screen0"
    Device         "Device0"
    Monitor        "Monitor0"
    DefaultDepth    24
    Option         "Stereo" "0"
    Option         "nvidiaXineramaInfoOrder" "DFP-0"
    Option         "metamodes" "nvidia-auto-select +0+0"
    Option         "SLI" "Off"
    Option         "MultiGPU" "Off"
    Option         "BaseMosaic" "off"
    SubSection     "Display"
        Depth       24
	Virtual 1680 1050
    EndSubSection
EndSection

With the above xorg.conf file (which is located in /etc/X11/, so, /etc/X11/xorg.conf), I can then shutdown my NAS, unplug the mouse, keyboard, and monitor, then restart it and use nomachine to completely control the NAS. Here’s how it looks when I pull it up:

This approach will only work if you have an NVIDIA graphics card. If you don’t, you’ll need to take a different approach. This site may point you in the right direction.
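If you have the NVIDIA card and want to confirm the virtual screen took effect after a headless reboot, you can query the X server over SSH. This is a sketch – the DISPLAY=:0 value is an assumption that may differ on your setup:

```shell
# From an SSH session, ask the running X server for its screen geometry to
# confirm the virtual monitor exists. DISPLAY=:0 assumes the default display.
screen_info=""
if command -v xrandr >/dev/null 2>&1; then
  screen_info=$(DISPLAY=:0 xrandr --query 2>/dev/null | head -n 1)
fi
msg="${screen_info:-no X server reachable on :0 (or xrandr not installed)}"
echo "$msg"
```

With the xorg.conf above in place, the first line of xrandr output should report the Virtual dimensions you configured.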

UPDATE 2020-07-13: Back to XFCE

Well, I’m back to using XFCE as my desktop environment on my NAS/file server. An update completely broke KDE, making it throw errors every time I tried to reboot the file server and not even show the desktop. KDE is just not the right option for a NAS/file server. I had the same issue with my previous file server. Lesson learned. From now on, the ultra-lightweight XFCE will be my desktop environment of choice for my file server.

How to install XFCE. First, install tasksel:

sudo apt-get install tasksel

Then run tasksel and select Xubuntu desktop:

sudo tasksel

When prompted, select “lightdm” as your display manager of choice:

Go ahead and restart and you’ll be good to go. (NOTE: I’m still using the same xorg.conf file and it’s working.)

Also, I had to manually remove the NVIDIA drivers then reinstall them with the monitor connected.
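For my own future reference, here’s roughly what that remove/reinstall cycle looks like as commands. Treat these as a sketch to double-check first – the purge pattern and the ubuntu-drivers step are assumptions, and they should be run with a monitor attached, as noted above:

```shell
# Remove the NVIDIA driver packages, then reinstall the recommended driver.
# CAUTION: do this with a monitor attached; you will lose the X session.
purge_cmd="sudo apt-get purge 'nvidia-*'"
reinstall_cmd="sudo ubuntu-drivers autoinstall"
echo "$purge_cmd && $reinstall_cmd"
```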

ZFS Transfer

As I detailed in my previous NAS build, I opted to go with ZFS for storing my files using a raidz2 array (4 x 4TB drives). It provides, IMO, the optimal balance of speed and redundancy. With just over 4TB of files on my ZFS array in my old NAS, I didn’t really want to have to copy all of those files over to the new one. As it turns out, ZFS has the ability to move physical disks to a new device and recreate a ZFS pool. Hallelujah!

On the old NAS, I shut down all connections to my NAS from external devices, restarted the machine, and then used the following command:

sudo zpool export ZFSNAS

This basically sets up the ZFS pool so it is ready to be transferred to another machine. I then physically removed all four 4TB drives from the old NAS, moved them to the new one, and got everything ready to import the pool on the new NAS. I installed the following packages first:

sudo apt-get install zfsutils-linux nfs-kernel-server zfs-initramfs watchdog

The above packages also installed the following packages:

libnvpair1linux libuutil1linux libzfs2linux libzpool2linux zfs-zed keyutils libnfsidmap2 libtirpc-common libtirpc3 nfs-common rpcbind

With those installed, I checked to see if I had any pools (I didn’t, of course, since I hadn’t set any up yet):

zpool status

Then, I used the following command to look for the pool that was spread across the four hard drives:

sudo zpool import

Beautifully, it immediately found the pool (ZFSNAS) and gave me a bunch of useful information:

pool: ZFSNAS
id: 11715369608980340525
state: ONLINE
status: The pool was last accessed by another system.
action: The pool can be imported using its name or numeric identifier and the '-f' flag.
see: http://zfsonlinux.org/msg/ZFS-8000-EY
config:
 ZFSNAS      ONLINE
      raidz2-0  ONLINE
        sdd     ONLINE
        sdb     ONLINE
        sde     ONLINE
        sdc     ONLINE

Honestly, I cheered when I saw this. It was going to work. I then used the following command to import my pool:

sudo zpool import -f ZFSNAS

Then, just to be sure everything worked, I checked my pool status:

zpool status

And up popped my zpool. It did tell me that I needed to upgrade my pool, which I proceeded to do with:

sudo zpool upgrade ZFSNAS

And, with that, the pool from my old NAS was transferred to my new NAS intact. No need to copy over the files. ZFS is definitely the way to go when it comes to NAS file storage. I’m increasingly happy with ZFS!

NOTE: I bought a 10tb external drive and copied everything onto it first, just to be extra cautious. Turns out, I didn’t need to do that. But, better safe than sorry.

Mounting Fileserver Across the Internet

This trick blew my mind. I am using EasyDNS to be able to VNC into my fileserver from wherever I am in the world (since I don’t have a static IP from my ISP). Nomachine works like a champ across the internet. But I had a crazy thought the other day, “Wouldn’t it be nice if I could actually mount my fileserver directly onto my work computer?” I don’t typically need to do this but I have been working with some files on my fileserver for a project and wanted direct access. Turns out, you can (thank you StackExchange). And so easily it will blow your mind. Linux rocks!

Since I had already set up SSH access on my fileserver, I already had all the necessary packages installed; all I needed to do was learn the trick itself. (If you haven’t set up SSH on your fileserver and opened the relevant ports on your home network, do that first.) Then, on my computer at work, I created a new folder in my home directory (/home/ryan/fileserver/). I then used the following command at the terminal to mount my fileshares on my work desktop:

sshfs ryan@myfileserver.easydns.org:/directory/to/share /home/ryan/fileserver

You obviously need to change the username to yours on your fileserver (it comes before the “@”) and use the hostname or IP address of your own fileserver (what comes after the “@” sign). You also need to know what directory from your fileserver you want to mount (it comes after the “:/”) and the folder where you want to mount it (the last piece). Once you have everything set up correctly, hit enter and it will ask you for your password. Once I entered my password, all of the files on my fileserver were securely shared over SSH to my work desktop:

Those are the main directories from my fileserver at home.

Note, this isn’t a permanent share. Once you shut down your work computer, the share will go away. But it’s just a quick repeat of that command and you’ll have the files mounted right back on your computer. Also, the speed with which you can access the files will depend on your home and work networks. Luckily, mine are both very fast. So, it’s basically like being on my home network as I work with the files I need. Absolutely amazing!
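For reference, the whole process amounts to just a few commands. The mount and unmount lines are shown commented out since they require a reachable server; the hostname and remote path are the placeholders used above.

```shell
# One-time: create the local mount point
mkdir -p "$HOME/fileserver"

# Mount the remote directory over SSH (prompts for your password);
# substitute your own username, hostname, and paths:
#   sshfs ryan@myfileserver.easydns.org:/directory/to/share "$HOME/fileserver"

# Unmount cleanly when you're done, rather than waiting for a shutdown:
#   fusermount -u "$HOME/fileserver"
```

fusermount -u is the standard way to detach a FUSE filesystem like sshfs without needing root privileges.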


LibreOffice – exporting high-resolution TIFF/TIF files

As a scholar who regularly publishes work with charts and graphs, I’m often confronted with varied requirements from publishers for the format in which they want the charts and graphs. Most often, the format is as a TIFF/TIF file, typically with at least 300 dpi and somewhere around 1500×1500 pixels. I make most of my charts in LibreOffice Calc, though occasionally I make some in R as well.

I have recently been editing several volumes in which I had to manage the charts and graphs of other scholars as well. As the editor, I had to make sure the final images met the criteria detailed above. Since scholars often make charts and graphs in Word, it took a little finagling to come up with a quick and easy way to export the images in the format needed by the publisher. Since I did finally figure this out, I figured I’d post it here so I remember how to do this in the future. Luckily, LibreOffice works extremely well with these formats (for the most part), which makes this quite easy.

LibreOffice Calc

Assuming you have created your chart/graph in LibreOffice Calc, exporting it to TIFF format should be fairly easy, though it requires an unfortunate extra step. Here’s a chart I created in LibreOffice Calc:

The LibreOffice programmers make it so you can just right-click on the graph and select “Export as Image.”

When you do this, you’ll get this pop-up window asking where you want to save the image and, more importantly, the format:

Here’s where you run into a problem. If you select TIFF, you’ll get a .tif file, but the resolution will be basically the same as what you see on your screen, like this:

Ideally, LibreOffice would ask you what DPI and resolution you want once you select the TIFF format and would then export the chart in that resolution and you’d be done in one simple step. Alas, that’s not an option when you export from LibreOffice Calc.

What you can do instead is copy your chart, open an empty LibreOffice Writer document, and paste it into the document, like this:

Then, go up to File -> Export, like this:

You’ll get the same prompt as before asking what you want to name the file and what format to use. Name the file, select PNG format, then click “Save.” What you’re looking for is the window that pops up next:

In this window, you can change the DPI to 300 (do this first) and then change the width and height (they are typically linked, so, if you change one, the other automatically changes). When you’re done, click “OK.” The file you’ll get will be 300 DPI and whatever pixels you chose:

Now, open that file with any image editing software (I’m using Gwenview on KDE for this example) and simply select File->Save As:

Now select the TIFF format. Once you save it, you’ll have a TIFF file with the proper DPI and resolution per the publisher’s instructions. The resulting TIFF file will be huge, but it will meet the criteria of the publisher:
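If you’d rather skip the image editor, the same PNG-to-TIFF conversion can be done from the command line, assuming ImageMagick is installed; “chart.png” here stands in for whatever you named the exported file.

```shell
# Convert the exported 300 DPI PNG to TIFF, stamping the resolution
# metadata so the publisher's tools see 300 DPI. The guard makes this
# a no-op if ImageMagick or the input file is missing.
if command -v convert >/dev/null 2>&1 && [ -f chart.png ]; then
    convert chart.png -units PixelsPerInch -density 300 chart.tif
fi
```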

NOTE:

The other way to do this is to copy your chart into a LibreOffice Draw file whose page has been enlarged to a huge area (e.g., 4000×4000 pixels). You can then expand your chart to fill the area in the LibreOffice Draw document and export the image. Depending on the original format of your chart/graph, though, you may have to resize the text, which is a pain. This will give you a much larger image file, but the approach above is much easier.

The tutorial above used LibreOffice 6.4.4.2.


Linux – Video Tag Editing

Not everyone may be as particular as I am about having my files organized, but I like to make sure everything is where it’s supposed to be. I make sure my music is tagged accurately. I also like to have my video files tagged correctly. What does that mean? Just like with audio files, video container formats include as part of the file some tags that provide information about the file. Those tags can include the name of the video, the year, and other information (e.g., genre, performers, etc.). If you rip files or have digital copies, it’s not really necessary to update the information in the tags. However, depending on the software you use to play your video files, having that information included in the tags substantially increases the odds that your video player will be able to figure out what the video is and will then be able to retrieve any other relevant data. Thus, having accurate metadata in your video files is nice. It’s not necessary, but nice.

I was cleaning up some video files the other day and realized that I didn’t have accurate tags in some of them. I opened the video in VLC and then clicked on “Tools” -> “Media Information”:

I wanted to see the tags in the video file.

Here’s what VLC saw:

Yep, I’m working with Frozen!

As you can see, it didn’t have any tags filled in except “Encoded by.” It actually filled in the title by pulling the name of the video file itself. The minimum tags that should be included in a video file are the title and year, but including the genre and some of the performers is always nice.

While there are a number of music file tag editors that work very well on Linux (e.g., Picard), I have struggled to find a good video metatag editor for Linux. I had one that was working for a while, Puddletag, which actually worked quite well even though it only billed itself as a tag editor for music files. However, Puddletag does not appear to be maintained anymore and, as of Kubuntu 20.04, it is no longer in Ubuntu’s repositories and the PPA does not contain the correct release file. I could try building it from source, but I wanted to see if there was a good alternative.

After googling around, I found one that seems to work quite well – Tag Editor. (You have to love the Linux community: call the software exactly what it does!) Here’s the GitHub site. And here’s where you can download an AppImage (I went with “tageditor-latest-x86_64.AppImage”), which worked great on Kubuntu 20.04.

Once you’ve downloaded the AppImage, you can set it to be executable (right-click and select “properties” then, on the “permissions” tab, select “executable”) or just double-click it and allow it to be executed. It should load.
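The same steps can be done from a terminal. This is just a sketch, assuming the AppImage was saved to the current directory with the filename mentioned above:

```shell
# Make the downloaded AppImage executable and launch it; this is the
# command-line equivalent of the Properties -> Permissions checkbox.
APPIMAGE="tageditor-latest-x86_64.AppImage"
if [ -f "$APPIMAGE" ]; then
    chmod +x "$APPIMAGE"
    "./$APPIMAGE"
fi
```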

In the left pane, navigate to your video file:

Once you find the file, you can see all of the tags that can be edited. Fill in the information:

Once you’ve filled in the tags you want to add or modify, click on “Save” at the bottom of the screen:

I particularly like this next feature. Once you click save, it shows the progress and actually tells you what stage it is at in saving the tags in the file:

Progress is in the circle with robust information on what it is doing next to it.

Tag Editor also does something that I actually questioned at first until it saved my bacon – it makes a backup of the file before it writes the new file. The backup file is named the same as the original file but with a new file extension: “.bak”.

You can see the backup copy of Frozen (“Frozen.m4v.bak”) just below the updated copy.

I initially thought this was just going to be annoying as I’d have to go through and delete all the backup copies once I was done. However, I did run into a couple of files that, for whatever reason, could not be modified. Partway through the tag saving process, I got an error message. Sure enough, Tag Editor, in writing the file, had stopped midway through. If a backup file hadn’t been made, I would have lost the video. I don’t know exactly what caused the errors, but I quickly learned to appreciate this feature.
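Once you have verified that the rewritten files play correctly, cleaning up the leftover backups is a one-liner; run it from the top of your video library:

```shell
# Delete every Tag Editor backup (*.bak) under the current directory.
# Only do this after confirming the updated videos play correctly!
find . -type f -name '*.bak' -delete
```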

Just to illustrate that the tags were updated, I opened the new file in VLC and went back to the media information:

As you can see, the Title, Date, and Genre fields are now filled with accurate information.

Unlike, say, mp3 audio files, video files can take quite some time to update because the file has to be re-written. With a very fast computer, this won’t take an exorbitant amount of time. But it is a much lengthier process than updating tags in mp3 audio files.


LibreOffice 6.4.3.2 – Not Showing Greek Letters/Symbols

I ran into an issue the other day that ended up taking me hours to solve, in part because I couldn’t find any other solutions online, which is pretty unusual these days.

Here was the issue: I was evaluating a paper (I’m an academic and read lots of papers) that had a bunch of Greek letters/symbols in it as part of a regression formula. On my computer running Kubuntu 19.10 with LibreOffice 6.3, the Greek letters showed up perfectly fine. On my laptop, which I had just reformatted and on which was a fresh install of Kubuntu 20.04 with LibreOffice 6.4.3.2, the Greek letters were all showing up as something other than Greek letters – odd symbols or dingbats or something. Here’s the version number from a fresh install of Kubuntu 20.04:

And here’s what was being displayed in LibreOffice with the document:

Those familiar with the Greek alphabet will clearly see that these odd dingbats or symbols are definitely not from the Greek alphabet.

I spent about three hours googling for a solution and trying various suggestions. Google is usually a Linux user’s best friend and it’s common that someone else has had the same issue or something similar. Alas, no luck this time. No one, as far as I could tell, had run into this exact issue before. The closest problems seemed to suggest that the problem wasn’t with LibreOffice but with my Linux installation and that I was missing some language packs. Specifically, these semi-related issues suggested I needed to install a language pack with Cyrillic characters. This suggestion seemed reasonable as this version of LibreOffice didn’t seem to ship with support for Cyrillic characters:

Screenshot from LibreOffice for inserting special characters; Greek is not included by default.

I installed a Cyrillic language package from the repositories and restarted my computer. Nothing. I was still getting dingbats instead of Greek letters. I tried about 10 more Cyrillic language packages thinking that maybe I hadn’t found just the right one, searching through the repositories for anything that mentioned Greek or Cyrillic. Haphazardly adding language packages doesn’t seem like a good approach, but I was getting desperate. Even so, it didn’t help. I still couldn’t display the Greek letters in the document.

Next, I tried uninstalling and reinstalling the same version of LibreOffice – 6.4.3.2, which is the version shipping with Kubuntu 20.04. That didn’t work.

After a couple of hours and no solution, I decided that I’d try a different version of LibreOffice. On their website, LibreOffice makes two additional release candidates or development versions available. I could have gone straight to 7.0.0, which was in Alpha, but I opted instead for version 6.4.4.2. To uninstall LibreOffice, I used the following commands (see here):

sudo apt-get remove --purge 'libreoffice*'
sudo apt-get clean
sudo apt-get autoremove

To install the new version, you have to untar the file you downloaded, navigate to the DEBS folder it unpacks, and then run the following:

sudo dpkg -i *.deb
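Put together, the download-and-install sequence looks something like this. The tarball name is an example (yours will depend on the version and architecture you grabbed), and nothing runs unless that file is actually present:

```shell
# Unpack the downloaded LibreOffice tarball and install the .deb packages.
TARBALL="LibreOffice_6.4.4.2_Linux_x86-64_deb.tar.gz"
if [ -f "$TARBALL" ]; then
    tar -xzf "$TARBALL"
    cd "$(basename "$TARBALL" .tar.gz)/DEBS"
    sudo dpkg -i *.deb
fi
```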

After installing LibreOffice 6.4.4.2, I opened the file that was having issues and, lo and behold, it worked just fine:

There are my lovely alphas, betas, sigmas, epsilons, and omegas!

I’m assuming this is a bug in LibreOffice 6.4.3.2 or, at a minimum, the folks who packaged that version left something out of it. Either way, I was frustrated enough at the end that I realized I needed to post a solution for others who may run into this. Since Ubuntu/Kubuntu 20.04 is an LTS (long-term support) release, having a serious bug shipping in the included version of LibreOffice is, no doubt, going to frustrate many users.

I spent a solid three hours on something that was working perfectly fine in LibreOffice 6.3 but broken in 6.4.3.2. That’s annoying. I’m a huge fan of LibreOffice and prefer it far above MS Office. It’s mature enough software now that little regressions like this really shouldn’t happen.
