Thomas Cannon

Jay Harris “I swear if you say ‘we have forgotten the old ways’ one more time [with regard to building sites that work without JS]”

Me: “this sounds more like a collective ‘we’ problem than a ‘me’ problem”

Y’all couldn’t put any sort of weighted algorithm on this? Like, I know almost all of these are rude in some capacity, but this just feels like a footgun waiting to happen

A photo of GitHub’s emoji autocompleter, where typing :finger returns a middle-finger emoji as the first result

Little CRM is underway

After over a year of planning, chipping away at the problem, and designing the foundation for reuse, I’m finally starting on Little CRM.

The thing I’ve learned is that every time I decide to do the right thing engineering-wise, even if it’s just the bare-minimum version, I’m better off once it’s done.

A git commit message with the following details:

Run of app template
302 files changed
10,974 additions and 39 deletions

I have found it; the saddest, unintentionally funniest podcast ad of all time

Blue Ridge Ruby 2024

I went to Blue Ridge Ruby, and it was an excellent time! I'm extremely grateful there's a regional Ruby conference (only a 90-minute drive away!) with great speakers and great company to boot.

My apprentice, Josh O'Halloran, got a sponsorship ticket and was immediately welcomed by the community. After 12+ years as a Rubyist, I had forgotten just how quickly & warmly we embrace newcomers. It's something we should continually practice.

Also, we need to make weird projects again! Talking about _why, Tenderlove, conference talks of yore, and other watershed moments in the early days of Ruby made me realize that we've been so focused on legitimizing, optimizing, and working with Ruby that we've stopped making dumb, silly things. If there's one critical thing I took away from my time in Asheville among folks that use the language I love every day, it's that we are better when we're collectively goofing off.

That, and root beer taught me the importance of B2B sales.

A photo of 3 different bottles of the same root beer

A photo of myself and Josh O'Halloran post-conference

A photo of a conference slide, with the text

A photo of a conference slide, with the text

A photo from a coffee shop and record store, with hanging plants, mid-century railings, and a green/grey color palette

I hope the people who invented zip ties are living a nice, comfortable life

The lowest latency combination between Hetzner ARM and Crunchy Bridge

tl;dr: The best combination of Hetzner’s ARM servers and Crunchy Bridge’s regions is:

  • Hetzner: nbg1-dc3
  • Crunchy Bridge: AWS eu-central-1
  • Benchmarks at the end of the post

I’ve been working on getting the first Practical Computer app out of beta (a dispatcher app for service companies). It, and the rest of our smaller apps, have the following constraints:

  • Extremely shoestring budget: $30/month to operate. Not much to work with!
  • Since it’s production data, there’s a responsibility to keep it redundant, safe, and secure.

Given those, I ended up choosing Hetzner’s ARM servers for their sheer power-to-cost ratio (even though they’re in the EU and this is a US-based project).

I’ve had great experiences with the Crunchy Bridge team, and 18+ years of experience have taught me that I really want someone else to make sure the database is secure for me. So even though it’s 10% of my operating budget here, it’s worth it (because the database is the most crucial part of the app!).

The problem is that the app server & database are on two separate networks; I don’t get the benefit of an internal network, so I had to benchmark what combination of regions would work best.

The previous iteration was completely misconfigured: Hetzner’s Helsinki region for the app, AWS eu-north-1 for Crunchy Bridge. This turned out to be awful for performance. After some benchmarking and experimenting, I found that the following combination gets pretty dang close to good performance given the constraints:

  • Hetzner: nbg1-dc3
  • Crunchy Bridge: AWS eu-central-1

(I’ve got benchmarks at the end of the blog post, don’t worry!)

Lessons learned

I wanted to write this not only to help other folks find the best combination easily, but also to talk through the mistakes I made and the lessons I learned.

  • Mistake: I should have benchmarked different options & done more research. My first Crunchy Bridge cluster was actually in eu-central-1, with the Hetzner server in Helsinki; if I’d benchmarked & explored my options, I would have avoided an unnecessary database migration.
  • Win: When spooling up the Hetzner server, I did my best to use Ansible to provision the server, and use Dokku for a PaaS-like experience. This made it so that spooling up a new server in nbg1-dc3 was significantly easier. This is why I advocate for “automate as you go”, especially for infrastructure.
  • Win: choosing the right vendors. Hetzner allows me to quickly & cheaply spool up a small project, and having Crunchy Bridge’s support & their tooling made spooling up the replicas & promoting them dead-simple.

Alternative solves

To be clear, I had a few other options available to me, some of which I could still choose, and have implemented:

  • Application-level caching: This is done, and basically a given for any production app. I ended up using Redis here.
  • Making a read replica on the Hetzner instance; use that for reads (a heroically misguided, but technically feasible idea!)
  • Move from Hetzner to AWS. This would only make sense for an app that’s actually generating the profit to justify the switch. It’s hard to beat Hetzner’s price & power combo here.
  • Move the primary DB back to Hetzner (not ideal. Again, 18+ years of experience have taught me that I don’t want to manage a bunch of databases)

Benchmarks, as promised

# Benchmark script
# x.report("crunchy-eu-central") { ApplicationRecord.connected_to(role: :"crunchy-eu-central") { Organization.first.onsites } }
# x.report("crunchy-eu-north-1") { ApplicationRecord.connected_to(role: :"crunchy-eu-north-1") { Organization.first.onsites } }

hetzner-helsinki:~$ dokku run dispatcher
rails@aaaaaa:/rails$ ruby script/benchmarks/database_performance.rb
…
ruby 3.3.1 (2024-04-23 revision c56cd86388) +YJIT [aarch64-linux]
Warming up --------------------------------------
  crunchy-eu-central     1.000 i/100ms
  crunchy-eu-north-1     1.000 i/100ms
Calculating -------------------------------------
  crunchy-eu-central      9.788 (±10.2%) i/s -     49.000 in   5.031111s
  crunchy-eu-north-1     14.124 (± 7.1%) i/s -     71.000 in   5.045164s

Comparison:
  crunchy-eu-north-1:     14.1 i/s
  crunchy-eu-central:      9.8 i/s - 1.44x slower

====================================================================================================================

hetzner-nuremberg:~$ dokku run dispatcher
rails@5e9d1df43591:/rails$ ruby script/benchmarks/database_performance.rb
…
ruby 3.3.1 (2024-04-23 revision c56cd86388) +YJIT [aarch64-linux]
Warming up --------------------------------------
  crunchy-eu-central     1.000 i/100ms
  crunchy-eu-north-1     7.000 i/100ms
Calculating -------------------------------------
  crunchy-eu-central     20.452 (± 0.0%) i/s -    103.000 in   5.037650s
  crunchy-eu-north-1     96.340 (± 2.1%) i/s -    483.000 in   5.016063s

Comparison:
  crunchy-eu-north-1:     96.3 i/s
  crunchy-eu-central:     20.5 i/s - 4.71x slower

====================================================================================================================

# Spot-check of query latency

hetzner-helsinki:~$ dokku run dispatcher
rails@aaaaaa:/rails$ bin/rails c
Loading production environment (Rails 7.1.3.2)
irb(main):001> 10.times { p Benchmark.ms { ApplicationRecord.connected_to(role: :"crunchy-eu-central") { Organization.first.onsites } } }
1637.3784099705517
103.8463581353426
106.55084392055869
105.82964192144573
102.70471591502428
102.40731597878039
107.70620591938496
112.95577604323626
111.48629407398403
105.25572090409696
=> 10
irb(main):002> 10.times { p Benchmark.ms { ApplicationRecord.connected_to(role: :"crunchy-eu-north-1") { Organization.first.onsites } } }
1040.9399520140141
72.33657804317772
73.68614105507731
77.82750786282122
75.32874401658773
73.2035799883306
77.43134815245867
72.01353716664016
71.68629695661366
74.11726214922965
=> 10

====================================================================================================================

hetzner-nuremberg:~$ dokku run dispatcher
rails@aaaaaa:/rails$ bin/rails c
Loading production environment (Rails 7.1.3.2)
irb(main):001> 10.times { p Benchmark.ms { ApplicationRecord.connected_to(role: :"crunchy-eu-central") { Organization.first.onsites } } }
846.7416610001237
54.84945299758692
56.269697000971064
55.53825500101084
54.511651000211714
54.32877099883626
57.52018099883571
59.374027001467766
63.36915899737505
55.06629299998167
=> 10
irb(main):002> 10.times { p Benchmark.ms { ApplicationRecord.connected_to(role: :"crunchy-eu-north-1") { Organization.first.onsites } } }
168.30048900010297
11.614236002060352
12.266477999219205
15.882210002018837
15.326369000831619
11.819797000498511
14.219884997146437
10.650393000105396
11.742956001398852
13.63104199845111
=> 10

Picked up the RadWagon over the weekend and have already checked off much of bike ownership:

✅ the gear I bought didn’t fit & have to reorder parts
✅ “it’ll just take a bit” maintenance turned into an hour+ and wheel disassembly to track down a mystery noise
✅ rained on the planned ride day
✅ immediately helped someone get their lost dog on my first test ride

Once again, Teams should ALWAYS be an MVP feature

One of the hardest parts of starting the codebase for the Practical Framework was the continual restraint I had to practice to do the stuff that is boring & annoying but needs full focus to get done right. Teams & Organizations were a prime example, because I knew academically that they should be an MVP feature.

Frankly, it took 1-2 months to get right, and it slowed everything down. All boilerplate, nothing fun, no business logic. But imagine the sheer vindication I felt when I got this email in the middle of one of the smallest betas I’ve ever worked on:

There is one thing I wanted to let you know, since I think [REDACTED] hasn’t spoken to you about this yet. […], is there a way that we can add a section to say which company has the job? So for example, I’ve been entering jobs for [X] but I know [REDACTED] wants to enter jobs into the dispatcher for [Y]

Teams/organizations are always, always gonna show up.

Ticket title: fix mobile layout

A screenshot of one of my web apps on an iPhone 13 mini, where the aspect ratio of an image is completely incorrect and the sidebar navigation is taking up half the page

Finally got the copy “done” (in as much as anything is done on the web) for the Practical Computer site; and I’m really happy with how it all turned out! practical.computer

Famicom Disk System reproduction for transit cards from the Nintendo store in Kyoto + blank NFC Card + custom “meetup” landing page with contact form = digital business card for conferences.

They finally arrived & look great!

A photo of my business card, front & back, which says the name of my business (Practical Computer), my contact information, and has a QR code

I got the design for my Practical Computer business cards today and I can’t emphasize enough how much I love them. Can’t wait to see the final product

Business card design for Practical Computer; with our logo, my contact info, and a patterned background made of various brand elements.

Use a PORO for Configuration

This post has been sitting in my drafts for months, so I'm shoving it out into the world!

Garrett Dimon and I have been chatting back & forth about Rails configurations. This is an extension/riff on his idea for Unified Configuration in Rails.

While Garrett's approach hooks nicely into Rails and has multiple metaprogramming niceties, my approach is generalized & direct (but verbose). Essentially: create a Configuration class that is a PORO; I named mine AppSettings. You can see a gist of it here!

This is born out of a convention in my iOS apps, which use a Swift struct called Configuration, that pulls the configuration from various sources into a strongly-typed, consistent location that is easy to remember.

My primary focus is to explicitly declare the configuration values and standardize how they’re accessed with a straightforward API, rather than try to standardize how they’re stored. This is because there are so many cases where either:

  • “It depends” on how they should be stored.
  • There’s an external dependency that makes you use a particular storage mechanism.
  • The configuration is so entrenched as part of the environment that it ends up in the environment’s configuration.

Using a PORO with clearly defined methods gives you:

  • Clarity in how the value is retrieved.
  • The flexibility for different value types. Some are single keys, some are nested objects, etc.
  • The same API for dynamically generated values; such as subdomained URIs
  • An easy object to mock out for tests as needed.
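To make the idea concrete, here's a minimal sketch of the pattern (the method and setting names below are illustrative, not the exact contents of my gist): each setting is an explicit method, so callers never care whether the value came from ENV, credentials, or code.

```ruby
require "uri"

# Minimal AppSettings-style PORO: every configuration value is a plainly
# named method with an explicit retrieval strategy.
class AppSettings
  class << self
    # Single key pulled from the environment, with a safe default.
    def support_email
      ENV.fetch("SUPPORT_EMAIL", "support@example.com")
    end

    def host
      ENV.fetch("APP_HOST", "example.com")
    end

    # Dynamically generated value behind the same API: a subdomained URI
    # built from another setting.
    def tenant_uri(subdomain)
      URI("https://#{subdomain}.#{host}")
    end
  end
end

AppSettings.support_email              # a plain string, wherever it came from
AppSettings.tenant_uri("acme").to_s    # => "https://acme.example.com"
```

Because it's just a class with methods, stubbing a value in a test is a one-line `stub` call, and adding a new source (Rails credentials, a YAML file) changes only the method body, never the call sites.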

I've been using this approach for an internal app I've been commissioned to make; it's worked out very well so far! I'd definitely recommend moving towards this approach in your projects.

"Do it right, or do it twice" Code Quality Edition

Inspired by Lucian’s post, I finally set up code quality for the first Practical Computer app.

This whole process was definitely borderline “do it right, or do it twice.” I wish I’d solved this a bit sooner. I knew it was necessary, but had kept pushing it back because the app isn’t even close to being live yet. But this line from Lucian’s post changed my opinion:

Side projects developed while having a full-time job have a unique characteristic worth noting. The time dedicated to working on the side project is not continuous. For instance, you may work on it for 1-2 hours on Saturday, and the next opportunity to work on it may only arise a week later.

It is then essential to make the code quality built-in and use as much automation as possible.

That’s a very strong argument, and one I hadn’t heard yet. Of course, since I delayed, the past 3 work sessions have been solely about fixing up the repo. But the upside is that I now have all the code ready for the next project. Speaking of which, here’s a gist of my customized Rubocop & CircleCI configuration. I hope it helps!

I made a few technical choices that differ from Lucian’s:

  • I trimmed down the set of RuboCop cops used. This gives me a balance of expressiveness & the benefits of a linter
  • I chose CircleCI because it’s what I’ve been using for years and it’s Fine™️. Plus it has the distinct advantages of SSH access to debug jobs, and job reporters
  • I’m using Bun, so I’m relying on Dependabot for my JS dependencies
  • I’m using Code Climate for maintainability monitoring
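For reference, a trimmed-down RuboCop setup has roughly this shape (a sketch of the idea, not the exact contents of the linked gist):

```yaml
# .rubocop.yml — lean on sensible defaults, turn off the noisiest departments
AllCops:
  TargetRubyVersion: 3.3
  NewCops: enable

# Metrics cops (line counts, ABC size, etc.) fight expressiveness more than
# they help on a small project, so they're off wholesale.
Metrics:
  Enabled: false

Style/Documentation:
  Enabled: false
```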

(Some) strategies for reducing test flakiness

This is a quick post, but wanted to get it out there!

Over the past week, I’ve been having to detangle some serious test flakiness. Below are a few findings that I wanted to put out there for other folks to hopefully find useful!

Sync up your randomizer seeds (especially for fake data generators) with the test randomizer seed

For example, I use Faker, which allows you to customize the seed it uses. Using the same seed that your test suite is randomized with makes it so that re-running a test with the same seed always generates the same data:

# test/minitest/faker_random_seed_plugin.rb
module Minitest
  def self.plugin_faker_random_seed_init(options)
    # Faker::Config.random expects a Random instance, not a bare integer seed
    Faker::Config.random = Random.new(options[:seed]) if options[:seed]
  end
end
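The reason this works is that a pseudo-random generator's output is fully determined by its seed. A plain-Ruby illustration of the principle (using Random directly, no Faker required):

```ruby
seed = 1234 # in the plugin above, this comes from Minitest's options[:seed]

first_run  = Random.new(seed)
second_run = Random.new(seed)

# Two generators built from the same seed emit identical sequences, so any
# fake data derived from them is identical across test runs.
first_names  = 3.times.map { "user_#{first_run.rand(10_000)}" }
second_names = 3.times.map { "user_#{second_run.rand(10_000)}" }

first_names == second_names # => true
```

That's exactly the property you want when re-running a failing test: same seed in, same generated records out.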

Always print the number of parallel workers the suite is run with

Sometimes tests fail under certain randomization & parallelization combinations, such as a subset of tests failing for:

  • Seed 1234
  • 4 parallel workers

To help with reproduction, it’s good to always print the number of parallel workers so you can replicate the failure:

# test/test_helper.rb
puts "MINITEST_WORKER_COUNT: #{Minitest.parallel_executor.instance_variable_get(:@worker_count)}"
puts "PARALLEL_WORKERS: #{ENV["PARALLEL_WORKERS"]}"

Use deterministic data sources whenever possible

I know I just mentioned Faker above, but some data sources are deterministic & should always be stable (like billing plans!). This is where Oaken shines, and why I use it. It gives you a hook to automatically create synced, deterministic data that is cross-environment, while still giving you the right balance of:

  • Dynamic field values for data like names & emails
  • A cohesive “story” for your data’s shape
  • Cleanly defined flows for per-case datasets, like pagination rules

Listen, I’m as much of a planning sicko as the next engineer, but I feel like this isn’t hitting the way you think it should.

The following quote, highlighted: "With OmniPlan for Apple Vision Pro, your Gantt charts are no longer limited by the size of a physical display screen. How cool is that?"

A quick devlog for LittleCRM, this time going over some business decisions; plus some fun (for me) framework components: buttondown.email/little-cr…

BetterImportmaps

After YEARS of nebulous planning & understanding the theory, I FINALLY got a working build chain that accounts for the realities of:

  • compiled JS
  • not relying on an external CDN
  • needing to use package.json for standard tooling

While also allowing for multiple importmap support! All without magic or bespoke tooling 👀

Calling it BetterImportmaps, because it is

Hoping to open-source eventually, but it demolished my discretionary time this month. 😬🫠

I published a quick update about Little CRM, mainly the behind-the-scenes work that’s been going on

buttondown.email/little-cr…

“Okay, but what about THIS failure scenario with passkeys?”


Important caveat: I’m not a security researcher; I’ve just read a lot about passkeys & thought about their implementation. I’ve been trying to collect findings from actual security researchers, so if you know of any discussions related to this, please send them my way!

When talking about passkeys, I’ve gotten the same set of questions, poking at the edge cases of them. Which is good! Skepticism is always good; especially with new authentication techniques. But I wanted to answer some of these FAQs in a centralized location to save having to repeat myself a bunch.

“What about if my computer/phone breaks?”

If you’re in the vast majority of users, you’ll likely have your passkeys stored in a distributed credential manager like iCloud Keychain, Bitwarden, 1Password, Google’s saved credentials, etc.

Apple has a really great breakdown of the security measures for iCloud keychain, including the security of recovering access: https://support.apple.com/en-us/102195

In short: as long as you’re still able to access your credential manager, and it syncs online, you’re good. 💪

“What if I don’t want to rely on an online service?”

A good question! I always recommend that folks use a hardware security key (such as a YubiKey) for their essential accounts, and keep it in a safe place. The analogy I use: “treat it like your passport, birth certificate, or other essential documents.”

This will allow you to make sure you can access your most essential services, even if there is a SNAFU and your credential manager is no longer accessible.

“Okay, but what if I really lose access to everything? My backup hardware key, my iCloud/Google/1Password account, everything”

This is also a good question, and it needs to be addressed. What we’re talking about here isn’t passkey-specific; it’s the general question of “how does account recovery work?” So the questions being asked are the same ones we ask about our current password-based authentication flows.

For the vast majority of services, a familiar email-based recovery process makes sense:

  1. You request an emergency passkey registration for your account
  2. You’re emailed a token that can only be used once, and expires
  3. You use that token to register a new passkey (likely on a new credential manager/hardware key)
  4. You’re logged in!
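The steps above hinge on a token that expires and can only be redeemed once. A back-of-the-napkin Ruby sketch of what that could look like server-side (the class, fields, and expiry window are all hypothetical, not from any particular implementation):

```ruby
require "securerandom"

# Hypothetical single-use, expiring recovery token. A real implementation
# would store a hash of the value and use a constant-time comparison.
class RecoveryToken
  EXPIRY_SECONDS = 15 * 60

  attr_reader :value

  def initialize
    @value = SecureRandom.urlsafe_base64(32) # the token emailed to the user
    @issued_at = Time.now
    @used = false
  end

  def expired?
    Time.now - @issued_at > EXPIRY_SECONDS
  end

  # Redeeming succeeds at most once, and only before expiry.
  def redeem(presented)
    return false if @used || expired? || presented != @value

    @used = true
  end
end

token = RecoveryToken.new
token.redeem(token.value) # => true: register the new passkey
token.redeem(token.value) # => false: the token is already burned
```

The single-use + expiry combination is what keeps the recovery email from becoming a standing backdoor into the account.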

And if you’re unable to get a new security key or credential management account, this flow works so long as you’re able to access your email. You can use a browser/OS that stores credentials locally. Because of this, even though it’s not recommended, you could recover your account on a public computer (just make sure to delete the passkey when you’re finished!)

This is why it’s important to make sure your email account is as secure as possible, with multiple avenues for recovery. Your email is your identity card online; for better or worse.


“What about accounts I share access for, like utility accounts?”

Good passkey implementations allow you to register multiple passkeys. This is for a number of reasons, including this one!

  • You can save passkeys for devices on different ecosystems, to reduce the headache of working across platforms. For example: if there’s a service I access on my Windows gaming PC, I can create a passkey specifically for that machine to avoid the hassle of using my phone + Bluetooth to log in every time.
  • Ecosystems can allow you to share a passkey, such as Apple allowing you to AirDrop a passkey to a nearby contact
  • This reason: so your partner/family member can access the joint account independently

“What if I need to remove someone with a passkey from the account?”

Good passkey implementations allow you to remove previously registered passkeys after verifying that you can access a different passkey (to avoid deleting the one you’re currently using!). This is no different from someone using a shared password to change the password on an account; but it’s less disruptive.

New LittleCRM devlog! This time about being trapped in The Sketch Vortex for 5 months, why it’s important, and commoditized UIs: buttondown.email/little-cr…

👀👀👀

A screenshot of a drafted newsletter email, titled: 'Devlog 3—Extremely Spongebob Voice: “3 Months Later…”'