
Month: November 2022

Podcast Recommendation: A History of Rock Music in 500 Songs

For a long time my podcast listening has been almost exclusively nerdy tech podcasts mixed with nerdy theology podcasts, with the occasional news or true-crime show thrown in to liven things up. Somehow I have almost entirely bypassed anything music-related. (I did listen to a couple episodes of Song Exploder right after it debuted, but it just didn’t hook me.)

Somewhere along the line, Rob Weinert-Kendt on Twitter started linking to A History of Rock Music in 500 Songs by British host Andrew Hickey. It took me only a couple episodes and I was hooked.

The format of the podcast is one song per 30-ish minute episode, but each episode covers far more than just the titular song. Hickey provides background on the artist, the influences that formed that artist, stories about the creation of the song, and so on. You come away from the episode having learned a lot not just about a particular song but also about the developing music scene in the Americas (and, once you get in a ways, in Europe). He starts with the first inklings of what would become rock music as they emerged in the big band scene. (Episode 1: “Flying Home” by the Benny Goodman Sextet.)

500 episodes is a significant feat for any podcast, and setting out that goal in the title of your show seems rather ambitious, but I’m willing to bet that Mr. Hickey has all 500 songs charted out, and the moxie to see it through. He’s currently up through Episode 157 (“See Emily Play” by Pink Floyd), and is publishing a lot of bonus material for Patreon subscribers. I’m learning a lot as I go, so even if some interruption keeps the series from completion, it’s still been an excellent investment of time.

So, if you’re interested in rock music, A History of Rock Music in 500 Songs is highly recommended.

Hosting a Mastodon instance: moving asset storage to S3

I’ve had my Mastodon instance up and running for about a week and a half now. It hasn’t grown very large – 10 users or so last time I checked – but it’s been a great project for learning some more server admin type stuff.

Ten days and ten users in, my Linode instance seems to be quite adequate from a processing standpoint. (I’m using the lowest-level Linode offering, which provides 1 CPU core and 2 GB of RAM.) However, the disk space usage is growing. The database itself isn’t overly large, but the cache directory (which stores things like profile pictures and link preview cards) is up over 15 GB. I could get really aggressive about cache management, but realistically a larger cache is a reasonable thing. Mastodon provides easy hooks for moving that cache out to S3 buckets, so I figured I’d give that a shot.
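(If you do want to tame the cache instead, Mastodon’s tootctl tool has commands for that – run through docker-compose, since everything here lives in containers. The seven-day window below is just an arbitrary example:)

```shell
# Remove cached copies of remote media older than 7 days
docker-compose run --rm web bin/tootctl media remove --days 7
```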

I found Nolan Lawson’s excellent step-by-step instructions and followed them to a T. Well, almost to a T. I first set up an S3 bucket, kicked off the script to copy the cache over from Linode to S3, then went to bed. The next morning I did some more reading and decided that Linode’s very similar Object Storage service (it’s just their S3 clone) might be a better deal cost-wise. Amazon S3 charges a small amount per GB for storage and then a different rate for data access; Linode does it slightly differently – you pay a flat fee per month for a given bucket size, and you get a large amount of transfer every month for free. Since my server is already on Linode, it was simpler to just use the Linode buckets, so I tried again there.
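(The copy step itself amounts to syncing Mastodon’s public/system directory up to the bucket. With the AWS CLI pointed at Linode’s S3-compatible endpoint, it looks roughly like this – bucket name and region are placeholders, and your clone location may differ:)

```shell
cd /home/chris/mastodon
aws s3 sync public/system/ s3://my-bucket/ \
  --endpoint-url=https://us-east-1.linodeobjects.com \
  --acl public-read
```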

One gotcha that’s not obvious when creating the bucket at Linode: if you’re going to put a custom domain name in front of the bucket, you need to name the bucket after that domain if you want their TLS/SSL stuff to work. In my case, I set up a CNAME record pointing at my bucket, so the bucket itself had to be named with that same domain. There’s no renaming buckets, so I had to empty and delete my old one and then create a new one with the correct name. After that the certificate generation went smoothly enough and I once again kicked off the copy job. Then I went to the gym.

A couple hours and 68,000 file copies later, my cache is in the bucket and a quick restart of Mastodon via docker-compose pulled in the configuration updates that now point out to the cloud. It went amazingly smoothly.

Edit: I posted this a little too soon…

All the existing assets were working fine, but new assets weren’t loading properly. Commence some more googling. The correct answer was that in addition to the .env.production settings listed in the instructions above, you also need this one:


In my instance, that looked like this:


Now it seems to be fully working.
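(For anyone recreating this setup: the Mastodon option that covers a custom domain in front of a bucket is S3_ALIAS_HOST, which tells Mastodon to build media URLs on your own domain rather than the bucket’s native hostname. A sketch of the relevant .env.production lines, with the domain as a placeholder:)

```shell
S3_ENABLED=true
S3_BUCKET=files.example.com
S3_ALIAS_HOST=files.example.com   # serve media URLs from the custom domain
```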

Carter Burwell: polymath film composer

I was familiar with Carter Burwell’s name thanks to his score for the Coen brothers’ film True Grit, but I wasn’t aware of the full scope of his film compositions or of his backstory. He’s a brilliant man who just picked up and learned lots of things. Here he is just out of college, trying to make it as a musician while working a lousy warehouse job:

One day, Burwell saw a help-wanted ad in the Times for a computer programmer at Cold Spring Harbor Laboratory, a nonprofit research institution whose director, James D. Watson, had shared the Nobel Prize in 1962 for discovering the structure of DNA. Burwell wrote a jokey letter in which he said that, although he had none of the required skills, he would cost less to employ than someone with a Ph.D. would. Surprisingly, the letter got him the job, and he spent two years as the chief computer scientist on a protein-cataloguing project funded by a grant from the Muscular Dystrophy Association. “Watson let me live at the lab, and he would invite me to his house for breakfast with all these amazing people,” he said. When that job ended, Burwell worked on 3-D modelling and digital audio in the New York Institute of Technology’s Computer Graphics Lab, several of whose principal researchers had just left to start Pixar.

The Polymath Film Composer Known as “the Third Coen Brother” by David Owen in The New Yorker

His royalties from scoring Twilight funded a house on Long Island, where he lives and works from home, composing on a 1947 Steinway D that came from the Columbia Records studio in New York. “I still fret about having replaced the hammers, but they were worn almost to the wood—some say by Dave Brubeck.”

Worth reading the whole profile.

Setting up a Mastodon instance on Linode

With the Muskian shenanigans happening over at Twitter, I’ve been looking into Mastodon as an alternative social media site. It’s an interesting concept – a distributed network of servers that all talk to each other through an open protocol. So, being the nerdy sort, I decided to try setting up my own server. Here’s how it went:

Registering a Domain Name

Just deciding on the domain name was the hardest part, I think. But once I settled on one, I went over to Hover and registered it. Easy peasy.

SMTP email service

Mastodon needs an SMTP mail service to send notification emails. Not wanting to tie it to my personal Fastmail account, I looked around and determined that Amazon AWS Simple Email Service (SES) was a reasonable fit. The pricing is scalable and pretty cheap ($1 per 1,000 emails at this time). It took a little bit of time to get configured, but the AWS website walked me through the steps pretty clearly. Basically, AWS will give you three pieces of info: an SMTP server name (available on the SMTP settings page), plus a username and password. For the latter two it generates a pair that look like random gibberish, and it warns you it will only ever show them to you once, so copy and paste them somewhere safe. If you lose them you’ll have to generate a new pair.

Also note: if you have a new AWS account, AWS will put you in a restricted mode until you request a review and a move into production. Once you verify a domain identity you can also verify individual emails. You need to do this before you set up Mastodon, because Mastodon will want to send you an email or two.
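(Once you have those three pieces of info from AWS, the corresponding .env.production lines look something like this – server name, credentials, and address are all placeholders:)

```shell
SMTP_SERVER=email-smtp.us-east-1.amazonaws.com
SMTP_PORT=587
SMTP_LOGIN=AKIAXXXXXXXXXXXXXXXX
SMTP_PASSWORD=xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
SMTP_FROM_ADDRESS=notifications@example.com
```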


Choosing a server

I decided I didn’t want to go with a hosted setup – most of the custom Mastodon hosting providers are slammed right now, too – so it felt like my best options were either DigitalOcean or Linode. DigitalOcean advertises a pre-configured “droplet” that will give you a Mastodon instance with a minimum of configuration; Linode provides instructions for installing Mastodon on Ubuntu 20.04. I went with Linode.

I started out with a 1 CPU core, 2 GB of RAM, 50 GB disk space Linode. Once I get past the initial credit they give you (go find a referral link… my account is still too new to be able to give you one of my own), I think they’ll charge me $10/mo for it. We’ll see how well it performs if I get more than a few users.

The instructions were good as far as they went, but a few key things seemed to be missing. I’m going to walk through the instruction sections here and comment on them.

“Before You Begin”

This section was pretty straightforward. I followed the instructions for Creating a Compute Instance, Setting up and Securing a Compute Instance, and Adding DNS Records.

“Install Docker and Docker Compose”

I followed the instructions for installing Docker, then for installing Docker Compose. When you clone the Mastodon git repository in the next step, it includes a docker-compose file. It took me a few minutes to realize that the docker-compose file also includes the Postgres database, so there’s no separate database installation required on the Linode server.

“Download Mastodon”

OK, this bit was simple. It’s just a git clone command.

“Configure Docker Compose”

Editing the docker-compose file is a little bit of a pain. Pay attention to the details. For step #3, setting up the Postgres password, db, and user: you can set the password to whatever you want it to be. Write it down, because you will need it in the next section.

Now, step #6, generating SECRET_KEY_BASE and OTP_SECRET. There are four separate commands in the instruction box: two echos to create the values, and two sed lines that I mistakenly assumed would write the values into the config file. Don’t assume that! Follow the instructions exactly and copy/paste the values for SECRET_KEY_BASE and OTP_SECRET into the .env.production file. Same story with VAPID_PRIVATE_KEY and VAPID_PUBLIC_KEY.
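(If you’d rather skip the echo/sed dance, you can generate the values directly and paste them in yourself. Something like this, using the Mastodon rake tasks as they existed at the time I set this up:)

```shell
# Run twice: once for SECRET_KEY_BASE, once for OTP_SECRET
docker-compose run --rm web bundle exec rake secret

# Prints both VAPID_PRIVATE_KEY and VAPID_PUBLIC_KEY
docker-compose run --rm web bundle exec rake mastodon:webpush:generate_vapid_key
```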

“Complete the Docker Compose Setup”

The involved command here is the rake command that does the initial database setup. It will ask you a bunch of questions, many of them duplicating info you just put into .env.production. Such is life.
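(For reference, the interactive task matching this description is mastodon:setup, run through docker-compose like everything else – assuming the guide hasn’t changed since I followed it:)

```shell
docker-compose run --rm web bundle exec rake mastodon:setup
```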

“Initiate the Docker Compose Services”

Two commands. Easy peasy.

“Setup an HTTP/HTTPS Proxy”

Here we install nginx. There’s an nginx.conf file in the Mastodon distro that is a good start, but there are changes you will need to make to it.

There are two lines that say root /home/mastodon/live/public;. You have to change these. Having followed the instructions thus far, the actual location of my public folder was /home/chris/mastodon/public. Update the root lines to point to the actual location of your public folder.

The next edit – I don’t remember where I found it, but it seems to work – is that every instance in that file that says try_files $uri =404; needs to be updated to say try_files $uri @proxy;.

I think that’s all I had to do to the nginx config.
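(Putting those two edits together, the affected bits of the nginx config end up looking something like this – using my clone location; yours may differ:)

```nginx
root /home/chris/mastodon/public;

location / {
    try_files $uri @proxy;
}
```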

“Get an SSL/TLS Certificate”

I followed the steps to install Certbot, but when I got to step #4, where you run certbot to generate a certificate, I got an error saying that there was no ssl in a listen section of my nginx configuration. It turns out there is a “snake oil” certificate you set up first to get things moving; then you can get and install the real certificate. For the snake oil thing, use the steps described here. I then had to clean up a broken renewal with the steps listed here, but that may have been because I was messing around with certificate stuff before I found out about the snake oil. If your renewal dry run works OK, you can ignore that second link.
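(The snake-oil shuffle boils down to installing Ubuntu’s stock self-signed certificate and pointing nginx at it until Certbot can take over. Standard Ubuntu paths shown; adjust if your distro puts them elsewhere:)

```shell
sudo apt install ssl-cert
# then, temporarily, in the nginx server block:
#   listen 443 ssl;
#   ssl_certificate     /etc/ssl/certs/ssl-cert-snakeoil.pem;
#   ssl_certificate_key /etc/ssl/private/ssl-cert-snakeoil.key;
sudo systemctl reload nginx
```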

“Using Mastodon”

The instructions would lead you to believe at this point that the site is ready to use. However, when I browsed to the site, I just got an nginx configuration error. After a bunch of debugging, the key issue I found was this: I had never run any command that did a bundle install – the step where Ruby installs and builds all the gems listed in Mastodon’s gem file. Of course it’s not gonna work that way!

The command you will want to run is bundle exec rails assets:precompile.

When I ran that, it sent me through several rounds of searching to install other libraries on the Linux server. To save yourself some time, run this:

sudo apt install ruby-bundler ruby-dev ruby-full build-essential libicu-dev zlib1g-dev openssl libgmp-dev libxml2-dev libssl-dev libpq-dev

That should give you all the libraries you need to complete the bundle install.

It works!

After all that, sure enough, it worked! My Linode CPU usage had been sitting solidly between 10% and 15% the whole time, apparently because it was in a loop continuously trying to restart my misconfigured Rails app. Once Rails started up cleanly, the CPU usage went down to about 2%.

Adding another admin user

I originally set up an admin account during the setup steps. Then I realized I would want my own account to be an admin once I moved it over. How to do that, I wondered? It turns out Mastodon has a command-line tool called tootctl just for this purpose. But to run tootctl and get it to talk to the database running in Docker, you need to run a special version of the command. What you’re looking for is this:

docker-compose run --rm web bin/tootctl <tootctl command>
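(So, for example, promoting a user to admin looks something like this – “alice” is a made-up username, and the role names vary a bit between Mastodon versions:)

```shell
docker-compose run --rm web bin/tootctl accounts modify alice --role admin
```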



First off: it’s amazing how much snappier Mastodon is on this server than it was on my old instance. The account migration process went smoothly enough, too. I will advertise it to a few friends once I get the Amazon AWS SES account moved to production so it can send emails to people other than me. Then we’ll see if it gets any traction. I’ll follow up later.

Liège Waffles!

On my work trip to Brussels last week I was introduced to the liège waffle, a type of waffle that is denser and sweeter than the common “Belgian waffle” that I’m used to in the States. They were delicious! So when I got home there was no possible option but to find a recipe and try making them myself.

When you search through the multitude of online recipes for liège waffles, you find that the secret ingredient for these things is pearl sugar — a compressed form of sugar in bits smaller than sugar cubes, roughly the size of mini M&Ms. So I ordered some from Amazon, printed off this recipe, and got to work.

Pearl sugar in a bowl
Pearl sugar

With liège waffles you don’t really have a waffle batter – it’s more a waffle dough. It’s sort of a brioche, more similar to cinnamon roll dough that I’ve made than to pancake batter. I threw it all into the KitchenAid mixer with the dough hook, kneaded it for 5 minutes, then put it into a warm oven to rise for an hour.

Nicely risen and ready to add the sugar

Once the dough has risen you knead in the pearl sugar, which gives you a strange mound of dough that crunches when you cut it up into pieces.

Dough with pearl sugar kneaded in
Dough balls

Throw these guys into the waffle maker and they cook up very nicely! I used a medium heat that turned out to be just right to caramelize the sugar, giving the waffles a nice shine and a nice crisp surface.

You can see the shine!

The caramelization is, of course, a little more work to clean up…

Might scale the pearl sugar back by about 20% next time but otherwise this recipe was great! Not quite to the level I got from the pros in Belgium but for sure a good Saturday morning breakfast. These were a hit with the family and we will definitely be making them again.