

On leaving Twitter, embracing Mastodon, and recommending Ivory

I joined Twitter back before it was a social media juggernaut. I don’t remember how many years ago that was, but I blogged about it at least as far back as 2007… which was a long time ago. Twitter was the social network that allowed me to make friends, meet interesting people, learn and grow… and also to get irrationally angry, unduly stressed, and unnecessarily anxious at times. Learning how to use it for my own improvement and not to my own detriment was a continuous process.

Twitter’s decline over the past few months under Elon Musk’s ownership has been well documented. Functionality was axed. Content moderation staff were fired. Longstanding rules were set aside on the boss’s whim. Nazis had their suspensions lifted. Third-party apps (the only way I used the service, anyway) were cut off without any notice or announcement. The reasons to be done with Twitter grow more numerous every day.

I haven’t closed down my Twitter account, since closing it releases the username back to the wild. Not that I’m so well known that my username is desirable for someone else to take up, but still. But I locked the account down, deleted the apps from my devices, and just don’t go to Twitter in my browser any more.

Enter Mastodon

I was an early adopter of Mastodon, too – at least as far as creating an account back in 2019 when it released, then forgetting all about it. When the Twitter shenanigans started I decided to dive into the fediverse in earnest. I’ve written previously about setting up and administering a small Mastodon instance at It’s humming along with about a dozen active users. That doesn’t sound like many (and it isn’t), but thanks to the magic of the fediverse, we interact with hundreds of thousands of other users who are members of other Mastodon servers around the world.

Mastodon has largely replaced Twitter as my daily social media tool. Some of my old Twitter network came across and we reconnected; I’ve also found a bunch of new interesting folks to follow. There are still a few people who haven’t made their way from Twitter to Mastodon who I really miss. Here’s hoping one of these days they also make the jump.

App Recommendation: Ivory

Finally, a practical suggestion if you’re an Apple user. For years I used an app called Tweetbot to browse Twitter. Tweetbot was one of the third-party apps that got summarily dispatched without notice a couple weeks ago when Elon pulled their plug. The developers of Tweetbot cranked up their efforts on a Mastodon app called Ivory, and it released today. I’ve been using it in beta for a few weeks now and it’s excellent. I happily subscribed at the highest level (don’t worry, dear, it’s only $25/year) as soon as I downloaded the released version today. Developers gotta eat, too, and for an app I use every day, 7 cents per day seems like a pittance. If you’re using an iPhone or an iPad and want to use Mastodon, I’d highly recommend it. (There are lower-priced subscription levels if that is a hindrance.)

Here’s to new platforms and new (and old!) friends. If you’re in the fediverse already, you can follow me at You can follow my blog at, too! And if you’d just like to give Mastodon a try, you’re welcome to sign up for an account at!

New on The Bookshelf

I’ve tracked my reading in one way or another here on this site and then using Goodreads since 2007 or so. At some point Goodreads got bought by Amazon and its functionality stagnated; I’m still logging books there, but I’m not interested in investing in it as my long-term logging home. I was casting around for ideas on book logging back during the holidays and ran across some spiffy static site generator ideas, which led me to rolling out The Bookshelf at today.

The vast bulk of the functionality driving The Bookshelf was written by Tobias (aka Rixx), who maintains his own book logging site at He provides the source on GitHub. It was designed to scrape Goodreads for data, assuming that the user would have Goodreads Developer API keys. Goodreads no longer issues new developer API keys (stagnating, remember?), so that path wasn’t available. I ended up writing some Python to parse the Goodreads export CSV file (which contained all of my reading logs since 2007) and process it into a structure of Markdown files with associated metadata. Those files are then the master data that Rixx’s site generator tools use to generate static HTML.
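The conversion script itself is nothing fancy. Here’s a sketch of the kind of thing I wrote – the column names (“Title”, “Author”, “My Rating”, “Date Read”) match my memory of the Goodreads export format, so treat them as assumptions and adjust to your own CSV:

```python
# Sketch: convert a Goodreads export CSV into per-book Markdown files with
# YAML front matter. Column names are assumed from the Goodreads export.
import csv
import re
from pathlib import Path


def slugify(title: str) -> str:
    """Turn a book title into a safe filename slug."""
    return re.sub(r"[^a-z0-9]+", "-", title.lower()).strip("-")


def export_to_markdown(csv_path: str, out_dir: str) -> int:
    """Write one Markdown file per CSV row; return the number written."""
    out = Path(out_dir)
    out.mkdir(parents=True, exist_ok=True)
    count = 0
    with open(csv_path, newline="", encoding="utf-8") as f:
        for row in csv.DictReader(f):
            body = (
                "---\n"
                f"title: \"{row['Title']}\"\n"
                f"author: \"{row['Author']}\"\n"
                f"rating: {row.get('My Rating', '0')}\n"
                f"date_read: {row.get('Date Read', '')}\n"
                "---\n"
            )
            (out / f"{slugify(row['Title'])}.md").write_text(body, encoding="utf-8")
            count += 1
    return count
```

From there the static site generator just walks the output directory and renders each file.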

I love the layout and organization of The Bookshelf. You can look at reading by year, by author, by title, and by series. You can also look at statistics on titles and pages read. There is “to be read” functionality that is a bit raggedy-looking yet; I have plans to add my existing to-read bookshelf (above my dresser in my bedroom!) to The Bookshelf as To Read, but I haven’t gotten that done yet.

If history serves, The Bookshelf will be the most actively updated part of my website. I haven’t done a great job over the years at writing short book reviews, but I think this site and the workflow to update it will encourage me to do that. I’m sure a decade from now it’ll be time for a change to something else, but as an organizer and cataloger, I’m excited to have 15 years worth of reading data here.

My 2022 Reading in Review

Another year full of books! (Previous summaries: 2021, 2020, 2019, 2018, 2017, 2016, 2015, 2014, 2013, 2012, 2011, 2010, 2009, 2008, 2007… argh, how did I miss some of those years?)

I got through 61 books this year, which feels like a bit of a down year. My “one book at a time” practice got me bogged down in some slow theology books, and then I got sucked into a cross-stitch project and a couple web projects at the end of the year which stole some of my reading time. (I finally came to grips with breaking up the long theology slogs with some fiction, and that helps a lot.)

Here’s the full list of reading, with particular standouts noted in bold:


  • Heavy Burdens: Seven Ways LGBTQ Christians Experience Harm in the Church – Bridget Eileen Rivera
  • Happiness and Contemplation – Josef Pieper
  • The Aryan Jesus – Susannah Heschel
  • The Joy of Being Wrong – James Alison
  • Attached to God: A Practical Guide to Deeper Spiritual Experience – Krispin Mayfield
  • The Emergent Christ – Ilia Delio
  • The Beatitudes Through the Ages – Rebekah Ann Eklund
  • Let the Light In: Healing from Distorted Images of God – Colin McCartney
  • In: Incarnation and Inclusion, Abba and Lamb – Brad Jersak
  • Having the Mind of Christ – Matt Tebbe and Ben Sternke
  • The Dark Interval – John Dominic Crossan
  • Love Over Fear – Dan White, Jr.
  • Faith Victorious – Lennart Pinomaa
  • History and Eschatology – N. T. Wright
  • Destined for Joy – Alvin F. Kimel
  • A Thicker Jesus – Glen Harold Stassen
  • Changing Our Mind – David P. Gushee

Dr. Ilia Delio’s The Emergent Christ is the one that had me thinking the most this year, and that will stick with me longer than any of the others. Her approach to thinking about God, evolution, and universal progress within a Christian framework blew my mind, and consistently challenges me to think about God and the universe differently.

Other Non-Fiction

  • Maximum City – Suketu Mehta
  • Music is History – Ahmir “Questlove” Thompson
  • The Argonauts – Maggie Nelson
  • How the Word Is Passed – Clint Smith
  • The New Abolition – Gary Dorrien
  • Reading Evangelicals – Daniel Silliman
  • Fearful Symmetry – A. Zee
  • The Joshua Generation – Rachel Havrelock
  • Belabored – Lyz Lenz
  • The Method – Isaac Butler
  • The Dead Sea Scrolls – John J. Collins
  • Strange Rites – Tara Isabella Burton
  • A Different Kind of Animal – Robert Boyd
  • The Dawn of Everything – David Graeber and David Wengrow
  • Bible Nation – Candida R. Moss and Joel S. Baden
  • Protestants Abroad – David A. Hollinger
  • Do I Make Myself Clear? – Harold Evans
  • White Flight – Kevin M. Kruse
  • How God Becomes Real – T. M. Luhrmann
  • Salty – Alissa Wilkinson
  • Blood In The Garden: The Flagrant History of the 1990s New York Knicks – Chris Herring
  • Searching for the Oldest Stars – Anna Frebel
  • This Here Flesh – Cole Arthur Riley
  • The Invention of Religion – Jan Assmann
  • The Phoenix Project – Gene Kim, George Spafford, and Kevin Behr
  • The Late Medieval English Church – G. W. Bernard
  • The Life of Saint Teresa of Avila – Carlos Eire
  • Strangers in Their Own Land – Arlie Russell Hochschild

Three books by women stand out here: Tara Isabella Burton’s Strange Rites, which looks at how the current generation of young people seek religious experiences in places other than traditional religion; Cole Arthur Riley’s spiritual memoir This Here Flesh; and Arlie Russell Hochschild’s Strangers in Their Own Land, describing a sociologist’s quest to understand Louisianans who have been devastatingly impacted by environmental destruction and yet persistently support the businesses and political causes behind that destruction.

Fiction

  • Unthinkable – Brad Parks
  • Lent – Jo Walton
  • The Last Commandment – Scott Shepherd
  • When We Cease To Understand the World – Benjamin Labatut
  • Everything Sad Is Untrue – Daniel Nayeri
  • Once A Thief – Christopher Reich
  • A Deadly Education – Naomi Novik
  • The Blue Diamond – Leonard Goldberg
  • A Psalm for the Wild-Built – Becky Chambers
  • The Coffin Dancer – Jeffery Deaver
  • Sea of Tranquility – Emily St. John Mandel
  • Small Things Like These – Claire Keegan
  • A Prayer for the Crown-Shy – Becky Chambers
  • A Long Way to a Small, Angry Planet – Becky Chambers (re-read)
  • Slow Horses – Mick Herron
  • The Last Agent – Robert Dugoni

Here the standout was author Becky Chambers. Her little Monk & Robot novellas sucked me in and made me happy. That prompted me to purchase her Small Angry Planet series and start in on a re-read. Chambers works in the best tradition of science fiction: pushing for inclusion and acceptance of The Other, and using the exploration of a very different universe to make you think about how our own could be improved.

Coming Up…

I’ve continued to log on Goodreads this past year but I get the feeling it’s spooling down as it gets absorbed by Amazon. I’m working on a self-hosted book logging site – it’s actually live online right now if you know where to look but I’m going to do some cleanup on it before I publicize it. I’ll post here about it when I do!

Stanley Hauerwas on sin, character formation, and fear

From Chapter 3 of Stanley Hauerwas’ book on Christian ethics The Peaceable Kingdom, this wonderful insight into how we can think about sin as interacting with our own power, control, and self-direction (emphasis mine):

We are rooted in sin just to the extent we think we have the inherent power to claim our life – our character – as our particular achievement. In other words, our sin – our fundamental sin – is the assumption that we are the creators of the history through which we acquire and possess our character. Sin is the form our character takes as a result of our fear that we will be “nobody” if we lose control of our lives.

Moreover our need to be in control is the basis for the violence of our lives. For since our “control” and “power” cannot help but be built on an insufficient basis, we must use force to maintain the illusion that we are in control. We are deeply afraid of losing what unity of self we have achieved. Any idea or person threatening that unity must be either manipulated or eliminated…

This helps us understand why we are so resistant to the training offered by the gospel, for we simply cannot believe that the self might be formed without fear of the other.

This gets to the heart of a lot of the discussions I’ve had with my Dad lately: the first step in making a positive spiritual change (which might be what Hauerwas here calls “the training offered by the gospel”) is to be freed from fear. One needs to be secure in one’s standing with God and with one’s community to be able to change and grow. (The counter-example here is frequently seen: spiritual communities that make any interest in ideas outside the accepted orthodoxy grounds for exclusion and expulsion.)

Hauerwas continues:

Our sin lies precisely in our unbelief – our distrust that we are creatures of a gracious creator known only to the extent we accept the invitation to become part of his kingdom. It is only by learning to make that story – that story of God – our own that we gain the freedom necessary to make our life our own. Only then can I learn to accept what has happened to me (which includes what I have done) without resentment. It is then that I am able to accept my body, my psychological conditioning, my implicit distrust of others and myself, as mine, as part of my story. And the acceptance of myself as a sinner is made possible only because it is an acceptance of God’s acceptance. Thus I am able to see myself as a sinner and yet to go on.

This does not mean that tragedy is eliminated from our lives; rather we have the means to recognize and accept the tragic without turning to violence. For finally our freedom is learning how to exist in the world, a violent world, in peace with ourselves and others. The violence of the world is but the mirror of the violence of our lives. We say we desire peace, but we have not the souls for it. We fear the boredom a commitment to peace would entail. As a result the more we seek to bring “under our control”, the more violent we have to become to protect what we have. And the more violent we allow ourselves to become, the more vulnerable we are to challenges.

This is growth toward wholeness: “the means to recognize and accept the tragic without turning to violence”.

For what does “peace with ourselves” involve? It surely does not mean that we will live untroubled – though it may be true that no one can really harm a just person. Nor does it mean that we are free of self-conflict, for we remain troubled sinners – indeed, that may well be the best description of the redeemed. To be “at peace with ourselves” means we have the confidence, gained through participation in the adventure we call God’s kingdom, to trust ourselves and others. Such confidence becomes the source of our character and our freedom as we are loosed from a debilitating preoccupation with ourselves. Moreover by learning to be at peace with ourselves, we find we can live at peace with one another. And this freedom, after all, is the only freedom worth having.

Moving my Mastodon Instance to a New Server

Continuing my adventures hosting, a very small Mastodon instance.

When I started hosting back in early November, I set it up on Linode. I’d heard lots of advertisements for their services, and they offered $100 of free credits to try them out for a couple months, so hey, why not? There was also a guide for setting up Mastodon on Linode that turned out to be not quite as smooth as it claimed. (Linode has since created a one-click install package, so life may be easier if you’re starting now.) I was not only using Linode’s basic server instance ($10/mo for 1 CPU, 2 GB of RAM, 50 GB of storage) but also their object storage ($5/mo for 250 GB of storage, lots of data transfer allowance).

Six weeks later, the object storage continues to seem like a very good deal. Some basic math with the AWS calculator tells me that if I were hosting this same data with this much transfer on Amazon S3, I’d be in the $30/month neighborhood. Maybe there would be ways to optimize that, but a flat $5/mo seems like a solid choice going forward.
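That back-of-the-envelope math can be sketched in a few lines of Python. The per-GB rates below are roughly-remembered assumptions for illustration, not current AWS quotes – check the real calculator before trusting them:

```python
# Rough cost comparison: flat-rate object storage vs. pay-per-use S3.
# Both rates are illustrative assumptions, NOT current AWS pricing.
S3_STORAGE_PER_GB = 0.023   # $/GB-month, assumed
S3_EGRESS_PER_GB = 0.09     # $/GB transferred out, assumed


def s3_monthly_cost(storage_gb: float, egress_gb: float) -> float:
    """Estimated monthly S3 bill for given storage and outbound transfer."""
    return storage_gb * S3_STORAGE_PER_GB + egress_gb * S3_EGRESS_PER_GB


# ~100 GB of media plus a few hundred GB of monthly transfer lands in the
# tens-of-dollars range, vs. a flat $5/mo for a 250 GB Linode bucket.
flat_rate = 5.00
s3_estimate = s3_monthly_cost(storage_gb=100, egress_gb=300)
```

With those assumed numbers the estimate comes out close to that $30/month figure, and the transfer charge – not storage – is what dominates.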

On the server side, though, $10/mo for the server + $2.50/mo for backups wasn’t awful, but I felt like I could do better. I found a recommendation for, whose “Black Friday” deal is still up and provides a 3 CPU, 4.5 GB RAM, 100 GB SSD server for $49/year. (I think it was $45/year when I signed up…) I went ahead and signed up, and within minutes had an IP address and a username and password to log in and start working. I had some limited experience with Linux server administration already, but there was still some learning curve here. Fortunately I didn’t have a pressing need to move my Mastodon instance immediately, so I could take my time and make sure I was comfortable with how everything was going.

My basic setup steps:

  • Secure the instance using Linode’s recommended steps. This includes things like setting up SSH keys and disabling password logon, setting up a firewall, etc.
  • I’m using Tailscale for VPN connections between my servers, my personal computers, and my Synology NAS. This has been the most useful piece of the whole puzzle. It’s free for my use case, and it provides secure WireGuard VPN connections directly between my devices, making login and secure copy very easy. Highly recommended.
  • I installed Netdata for performance monitoring. It’s free for the tier I’m using, but it gives me plenty of info on CPU and RAM usage, etc, and will email me if anything goes too wrong.
  • I played a little bit with setting up a web server (using Nginx) and a database using Docker images, but that got complicated fairly quickly without significant benefit. So I jettisoned Docker, natively installed Nginx and MariaDB, and set up Adminer as a single-page database admin interface.
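For concreteness, the hardening in that first bullet boils down to commands along these lines. This is a generic Ubuntu sketch from memory, not a copy of Linode’s guide – the username and IP are placeholders:

```shell
# Placeholder user/IP; adapt to your instance.
ssh-copy-id chris@203.0.113.10

# Disable password and root logins over SSH, then restart the daemon.
sudo sed -i 's/^#\?PasswordAuthentication.*/PasswordAuthentication no/' /etc/ssh/sshd_config
sudo sed -i 's/^#\?PermitRootLogin.*/PermitRootLogin no/' /etc/ssh/sshd_config
sudo systemctl restart ssh

# Firewall: allow SSH and web traffic only, then turn it on.
sudo ufw allow OpenSSH
sudo ufw allow 80/tcp
sudo ufw allow 443/tcp
sudo ufw enable
```

Nothing exotic – but getting it done before anything else runs on the box is the point.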

Migrating Mastodon

The Mastodon documentation on this is very good. Basically you do a fresh server install per the Mastodon production guide but then stop short of doing the mastodon:setup step. The two key things that need to come over from the old server are the .env.production file (which includes the secret keys) and the database. (This is where using Linode Object Storage helped – I didn’t need to move ~100 GB of media files over! They just stayed where they were in object storage.) I dumped the database (ended up about 1.4 GB), copied the dump file over, and imported it on the new server. The .env.production file needed minor updates for the new database connection parameters, but otherwise was good to go.
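The dump-and-restore itself was only a few commands. Roughly like this – the database name is Mastodon’s default, and “new-server” is a placeholder Tailscale hostname; if your old Postgres runs inside Docker, the dump needs a `docker-compose exec -T db` prefix:

```shell
# On the old server: dump the database in compressed custom format.
pg_dump -Fc mastodon_production > mastodon.dump

# Copy the dump and the secrets file to the new server (over Tailscale here).
scp mastodon.dump .env.production new-server:

# On the new server: create an empty database owned by the mastodon user,
# then restore into it.
createdb -O mastodon mastodon_production
pg_restore -d mastodon_production mastodon.dump
```

The custom (`-Fc`) format is worth using: it compresses well (my 1.4 GB dump) and `pg_restore` can parallelize the import.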

Everything seemed pretty functional, so then it was time to get DNS pointed over to the new server. I had been pointing at Linode nameservers and managing my DNS settings within Linode, so I had some work to do recreating all the DNS entries back over at Hover (my domain registrar of choice). I belatedly realized that I could’ve edited the /etc/hosts file on my MacBook to point at the new server, letting me test the site before DNS propagated… Next time.
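For the record, that trick is a one-line addition on the testing machine – the IP and hostname here are placeholders:

```
# /etc/hosts on my MacBook: send the site's hostname to the new server
# before the public DNS change. Placeholder IP and domain.
203.0.113.10    example.social
```

Remove the line once real DNS resolves, or you’ll confuse yourself later.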

It’s never quite that easy

At this point, I could see the Mastodon services were up and running, the server looked like it was happily doing work, and my phone app was connecting up fine. But when I tried to load the web interface it was failing. I asked for help on the Mastodon Discord server and was reminded to look at Nginx configuration. Nginx was indeed throwing errors:

2022/12/17 10:39:30 [crit] 411144#411144: *1966 stat() "/home/mastodon/live/public/inbox" failed (13: Permission denied), client:, server:, request: "POST /inbox HTTP/2.0", host: ""

The helpful person on Discord immediately suggested that Nginx wasn’t proxying the request to the Puma socket, which gave me someplace to start looking. But Puma was running fine, and Nginx was configured to pass requests along to it.

After about 90 minutes of head-scratching and increasingly-focused googling, I found the answer on Stack Overflow (of course). Turns out when I had done the very first step, creating the mastodon user on the server, the /home/mastodon folder was set up with 750 permissions (no access for users outside the mastodon group – including the user Nginx runs as). That lack of visibility into the folder was causing the problem. A quick chmod o=rx /home/mastodon solved that, and voilà! The web interface was up and running.


One bit that was left hanging at that point was backups. I had been paying Linode to do nightly backups, which appear to be just a snapshot of the full server instance. But I see now that for the Mastodon server, the only things that are critical to back up (again, since all the media is out in the cloud) are the .env.production file and the database dump.

It turns out I have a Synology NAS in my basement with sufficient empty space on it for backups, and Tailscale made it very easy to just rsync the database dump file to my Synology every night. So, I set up a quick cron job to run a database dump and rsync at 2 AM every day. Miracle of miracles, it worked the first time. (It worked the second, third, and fourth times, too…)
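The whole backup job is just a small script plus a crontab line. A sketch of what mine looks like – the database name, paths, and the “synology” rsync target are placeholders for my setup:

```shell
# Write a nightly backup script into the mastodon user's home directory.
# Database name, paths, and the "synology" host are placeholders.
cat > "$HOME/mastodon-backup.sh" <<'EOF'
#!/bin/bash
set -euo pipefail
BACKUP_DIR="$HOME/backups"
mkdir -p "$BACKUP_DIR"
# Dump the Mastodon database in compressed custom format.
pg_dump -Fc mastodon_production > "$BACKUP_DIR/mastodon-$(date +%F).dump"
# Ship the dumps and the secrets file to the NAS over Tailscale.
rsync -az "$BACKUP_DIR/" synology:/volume1/backup/mastodon/
rsync -az "$HOME/live/.env.production" synology:/volume1/backup/mastodon/
# Keep only the newest 14 local dumps.
ls -1t "$BACKUP_DIR"/*.dump 2>/dev/null | tail -n +15 | xargs -r rm --
EOF
chmod +x "$HOME/mastodon-backup.sh"

# Crontab entry (crontab -e as the mastodon user): run at 2 AM daily.
# 0 2 * * * /home/mastodon/mastodon-backup.sh
```

The retention line at the end is optional, but without it the dumps pile up quietly until the disk fills.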

The aftermath

Several days into the new server I’m happy with the performance. Mastodon is enjoying the extra RAM, the CPU load is quite reasonable (~25% average, maybe), and I have a server instance that I can use for more hosting. I will slowly be moving the several WordPress sites I run over to this server, which will eventually let me get rid of the hosted service I’ve been paying for for a decade. It feels like a good upgrade to me.

Podcast Recommendation: A History of Rock Music in 500 Songs

For a long time my podcast listening has been almost exclusively nerdy tech podcasts mixed with nerdy theology podcasts, with an occasional news or true-crime show thrown in to liven things up. Somehow I have almost entirely bypassed anything music-related. (I did listen to a couple episodes of Song Exploder right after it debuted, but it just didn’t hook me.)

Somewhere along the line, Rob Weinert-Kendt on Twitter started linking to A History of Rock Music in 500 Songs by British host Andrew Hickey. It took me only a couple episodes and I was hooked.

The format of the podcast is one song per 30-ish minute episode, but each episode covers far more than just the titular song. Hickey provides background on the artist, the influences that formed that artist, stories about the creation of the song, and so on. You come away from the episode having learned a lot not just about a particular song but also about the developing music scene in the Americas (and, once you get in a ways, in Europe). He starts with the first inklings of what would become rock music as they emerged in the big band scene. (Episode 1: “Flying Home” by the Benny Goodman Sextet.)

500 episodes is a significant feat for any podcast, and setting out that goal in the title of your show seems rather ambitious, but I’m willing to bet that Mr. Hickey has all 500 songs charted out, and the moxie to see it through. He’s currently up through Episode 157 (“See Emily Play” by Pink Floyd), and is publishing a lot of bonus material for Patreon subscribers. I’m learning a lot as I go, so even if some interruption keeps the series from completion, it’s still been an excellent investment of time.

So, if you’re interested in rock music, A History of Rock Music in 500 Songs is highly recommended.

Hosting a Mastodon instance: moving asset storage to S3

I’ve had my Mastodon instance ( up and running for about a week and a half now. It’s not grown very large – 10 users or so last time I checked – but it’s been a great project to learn some more server admin type stuff.

Ten days and ten users in, my Linode instance seems to be quite adequate from a processing standpoint. (I’m using the lowest-level Linode offering, which provides 1 CPU core and 2 GB of RAM.) However, the disk space usage is growing. The database itself isn’t overly large, but the media cache directory (which stores things like profile pictures and image preview cards from other instances) is up over 15 GB. I could get really aggressive about cache management, but realistically a larger cache is a reasonable thing. Mastodon provides for this with easy hooks to use S3-compatible buckets for the cache, so I figured I’d give that a shot.

I found Nolan Lawson’s excellent step-by-step instructions and followed them to a T. Well, almost to a T. I first went to set up an S3 bucket, kicked off the script to copy the cache over from Linode to S3, then went to bed. The next morning I did some more reading and decided that Linode’s very similar Object Storage service (it’s just their S3 clone) might be a better deal cost-wise. Amazon S3 charges a small amount per GB for storage and then a different rate for data access. Linode does it slightly differently – you pay a flat fee per month for a given bucket size, and then you get a large amount of transfer every month for free. Since my server is already on Linode, it was easier and simpler to just use the Linode buckets, so I tried again there.

One gotcha that’s not obvious when creating the bucket at Linode: if you’re going to put a custom domain name in front of the bucket, you need to name the bucket that domain name if you want their TLS/SSL certificates to work. In my case, I set up a CNAME record to point at my bucket, so I needed to name my bucket There’s no renaming buckets, so I had to empty and delete my old one and then create a new one with the correct name. After that the certificate generation went smoothly enough and I once again kicked off the copy job. Then I went to the gym.

A couple hours and 68,000 file copies later, my cache is in the bucket and a quick restart of Mastodon via docker-compose pulled in the configuration updates that now point out to the cloud. It went amazingly smoothly.

Edit: I posted this a little too soon…

All the existing assets were working fine, but new assets weren’t loading properly. Commence some more googling. The correct answer was that in addition to the .env.production settings listed in the instructions above, you also need this one:


In my instance, that looked like this:


Now it seems to be fully working.

Carter Burwell: polymath film composer

I was familiar with Carter Burwell’s name thanks to his score for the Coen brothers’ film True Grit, but I wasn’t aware of the full scope of his film compositions or of his backstory. A brilliant man who just picked up and learned lots of things. Just out of college and trying to make it as a musician while working a lousy warehouse job:

One day, Burwell saw a help-wanted ad in the Times for a computer programmer at Cold Spring Harbor Laboratory, a nonprofit research institution whose director, James D. Watson, had shared the Nobel Prize in 1962 for discovering the structure of DNA. Burwell wrote a jokey letter in which he said that, although he had none of the required skills, he would cost less to employ than someone with a Ph.D. would. Surprisingly, the letter got him the job, and he spent two years as the chief computer scientist on a protein-cataloguing project funded by a grant from the Muscular Dystrophy Association. “Watson let me live at the lab, and he would invite me to his house for breakfast with all these amazing people,” he said. When that job ended, Burwell worked on 3-D modelling and digital audio in the New York Institute of Technology’s Computer Graphics Lab, several of whose principal researchers had just left to start Pixar.

The Polymath Film Composer Known as “the Third Coen Brother” by David Owen in The New Yorker

His royalties from scoring Twilight funded a house on Long Island, where he lives and works from home, composing on a 1947 Steinway D that came from the Columbia Records studio in New York. “I still fret about having replaced the hammers, but they were worn almost to the wood—some say by Dave Brubeck.”

Worth reading the whole profile.

Setting up a Mastodon instance on Linode

With the Muskian shenanigans happening over at Twitter, I’ve been looking into Mastodon as an alternative social media site. It’s an interesting concept – a distributed network of servers that all talk to each other through an open protocol. So, being the nerdy sort, I decided to try setting up my own server. Here’s how it went:

Registering a Domain Name

Just deciding on the domain name was the hardest part, I think. But once I settled on, I went over to Hover and registered it. Easy peasy.

SMTP email service

Mastodon needs an SMTP mail service to send notification emails. Not wanting to tie it to my personal Fastmail account, I looked around and determined that Amazon AWS Simple Email Service (SES) was a reasonable fit. The pricing is scalable and pretty cheap ($1 / 1000 emails at this time). It took a little bit of time to get configured, but the AWS website walked me through the steps pretty clearly. Basically, AWS will give you three pieces of info: an SMTP server name (available on the SMTP settings page, will be something like and then a username and password. For the latter two it generates a pair that look like guacamole, and it warns you that it will only ever show them to you once, so copy and paste them to a safe location. If you lose them you’ll have to generate a new pair.

Also note: if you have a new AWS account, AWS will put you in a restricted sandbox mode until you request a review and a move into production. Until then you can only send to verified addresses – once you verify a domain identity, you can also verify individual emails. You need to do this before you set up Mastodon, because Mastodon will want to send you an email or two.
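In Mastodon’s .env.production, those three pieces of info end up looking something like this. The values are placeholders (that hostname just shows the usual SES endpoint shape), while the SMTP_* variable names are Mastodon’s standard mail settings:

```ini
# .env.production mail settings -- all values are placeholders.
SMTP_SERVER=email-smtp.us-east-1.amazonaws.com
SMTP_PORT=587
SMTP_LOGIN=AKIAXXXXXXXXXXXXXXXX
SMTP_PASSWORD=BNxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
SMTP_FROM_ADDRESS=notifications@example.social
```

The FROM address needs to be on the domain (or one of the individual addresses) you verified with SES, or sends will bounce.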

Choosing a server host

I decided I didn’t want to go with a hosted setup – most of the managed Mastodon hosting providers are slammed right now, too – so it felt like my best options were either DigitalOcean or Linode. DigitalOcean advertises a pre-configured “droplet” that will give you a Mastodon instance with a minimum of configuration; Linode provides instructions for installing Mastodon on Ubuntu 20.04. I went with Linode.

I started out with a 1 CPU core, 2 GB of RAM, 50 GB disk space Linode. Once I get past the initial credit they give you (go find a referral link… my account is still too new to be able to give you one of my own), I think they’ll charge me $10/mo for it. We’ll see how well it performs if I get more than a few users.

The instructions were good as far as they went, but a few key things seemed to be missing. I’m going to walk through the instruction sections here and comment on them.

“Before You Begin”

This section was pretty straightforward. I followed the instructions for Creating a Compute Instance, Setting up and Securing a Compute Instance, and Adding DNS Records.

“Install Docker and Docker Compose”

Followed the instructions for installing Docker, then for installing Docker Compose. When you clone the Mastodon git repository in the next step, it includes a docker-compose file. It took me a few minutes to realize that the docker-compose file also includes the Postgres database service, so there’s no separate database installation required on the Linode server.

“Download Mastodon”

OK, this bit was simple. It’s just a git clone command.

“Configure Docker Compose”

Editing the docker-compose file is a little bit of a pain. Pay attention to the details. For step #3, setting up the Postgres password, db, and user, you can set the password to whatever you want it to be. Write it down because you will need it in the next section.

Now, step 6: generating SECRET_KEY_BASE and OTP_SECRET. There are four separate commands in the instruction box: two echoes to create the values, and two sed lines that I mistakenly assumed would write the values into the config file. Don’t assume that! Follow the instructions directly and copy/paste the values for SECRET_KEY_BASE and OTP_SECRET into the .env.production file. Same story with VAPID_PRIVATE_KEY and VAPID_PUBLIC_KEY.
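If the guide’s echo commands give you trouble, Mastodon also ships rake tasks for generating these values. Task names are from memory of the version I installed, so double-check against the current docs – but the pattern is roughly:

```shell
# Generate secrets, then paste the output into .env.production BY HAND --
# nothing here writes the file for you.
docker-compose run --rm web bundle exec rake secret   # run twice: SECRET_KEY_BASE, then OTP_SECRET
docker-compose run --rm web bundle exec rake mastodon:webpush:generate_vapid_key
# The second command prints VAPID_PRIVATE_KEY= and VAPID_PUBLIC_KEY= lines
# ready to copy into .env.production.
```

Either way, the manual copy/paste step is the part you can’t skip.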

“Complete the Docker Compose Setup”

The involved command here is the rake command that does the initial database setup. It will ask you a bunch of questions, many of them duplicating info you just put into .env.production. Such is life.

“Initiate the Docker Compose Services”

Two commands. Easy peasy.

“Setup an HTTP/HTTPS Proxy”

Here we install nginx. There’s an nginx.conf file in the Mastodon distro that is a good start, but there are changes you will need to make to it.

There are two lines that say root /home/mastodon/live/public;. You have to change these. Having followed the instructions thus far, my public folder was actually at /home/chris/mastodon/public. Update the root lines to point to the actual location of your public folder.

Next edit, and I don’t remember where I found it, but it seems to work: every instance in that file that says try_files $uri =404; needs to be updated to say try_files $uri @proxy;

I think that’s all I had to do to the nginx config.
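Pulling those two edits together, the relevant pieces of the nginx config look roughly like this – the root path is my install location, so yours may differ:

```nginx
# Point nginx at the actual location of Mastodon's public folder --
# mine ended up at /home/chris/mastodon/public, not the stock path.
root /home/chris/mastodon/public;

location / {
    # Hand misses to the Rails backend instead of returning 404.
    try_files $uri @proxy;
}
```

The @proxy change matters because most Mastodon URLs don’t exist as files on disk; they have to fall through to Puma.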

“Get an SSL/TLS Certificate”

I followed the steps to install Certbot, but when I got to step #4, where you run certbot to generate a certificate, I got an error saying that there was no ssl in a listen section of my nginx configuration. It turns out there is a “snake oil” self-signed certificate you set up first to get things moving; then you can get and install the real certificate. For the snake oil certificate, use the steps described here. I then had to clean up a broken renewal with the steps listed here, but that may have been because I was messing around with certificate stuff before I found out about the snake oil cert. If your renewal dry run works OK, you can ignore that second link.

“Using Mastodon”

The instructions would lead you to believe at this point that the site is ready to use. However, when I went to, I just got an nginx configuration error. After a bunch of debugging, the key issue I found was this: I had never run any command that did a bundle install – the step where Ruby installs and builds all the gems listed in Mastodon’s Gemfile. Of course it’s not gonna work that way!

The command you will want to run is bundle exec rails assets:precompile.

When I ran that, it sent me through several rounds of searching to install other libraries on the Linux server. To save yourself some time, run this first:

sudo apt install ruby-bundler ruby-dev ruby-full build-essential libicu-dev zlib1g-dev openssl libgmp-dev libxml2-dev libssl-dev libpq-dev

That should give you all the libraries you need to complete the bundle install.

It works!

After all that, sure enough, it worked! My Linode CPU usage had been sitting solidly between 10% and 15% the whole time, apparently because it was in a loop continuously trying to restart my misconfigured Rails app. Once Rails started up cleanly, the CPU usage went down to about 2%.

Adding another admin user

I originally set up an admin account for during the setup steps. Then I realized I would want my own account ( to be an admin once I moved it over. How to do that, I wondered? Turns out Mastodon has a command-line tool called tootctl just for this purpose. But to run tootctl and get it to talk to the database running in Docker, you need to run a special version of the command. What you’re looking for is this:

docker-compose run --rm web bin/tootctl <tootctl command>
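For example, promoting an existing account to admin looked something like this. The username is a placeholder, and the exact role flag values have shifted between Mastodon versions, so check `tootctl accounts modify --help` on your install:

```shell
# Promote an account to admin via tootctl inside the web container.
# "alice" is a placeholder username; --role values vary by Mastodon version.
docker-compose run --rm web bin/tootctl accounts modify alice --role admin
```

The same `docker-compose run --rm web bin/tootctl` prefix works for all the other tootctl subcommands, too.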


Migrating my account

First off: it’s amazing how much snappier Mastodon is on this server than back on The account migration process went smoothly enough, too. I will advertise it to a few friends once I get the Amazon AWS SES account moved to production so it can send emails to people other than me. Then we’ll see if it gets any traction. I’ll follow up later.