Oct. 20, 2025, 1:42 a.m.
Mise
I spent a few hours playing around with Mise, which we're already using for managing some of our dependencies in the monorepo. It almost pains me to divulge, albeit slightly prematurely, that we're going to migrate off of Just and use Mise instead for our task runner. Just is still a great tool, but some of the things I want to be able to do, such as cross-task dependency caching and better logging, Mise handles out of the box, whereas Just, by design, does not. I don't think this is a case where Mise is flatly better, and I think their alternatives guide is a pretty reasonable and fair comparison of the two tools.
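For the curious, here's a minimal sketch of the kind of cross-task dependency caching I mean, using mise's TOML task syntax. The task names and globs below are hypothetical, not our actual monorepo config; the point is that mise skips a task when its declared sources haven't changed since it last produced its outputs, which Just deliberately doesn't do.

```toml
# Hypothetical mise.toml — illustrative tasks, not our real config.

[tasks.build]
run = "npm run build"
# mise checksums these globs; if nothing has changed since the last run
# that produced `outputs`, the task is skipped entirely.
sources = ["src/**/*.ts", "package.json"]
outputs = ["dist/**/*"]

[tasks.test]
# `depends` gives you a task graph: `mise run test` runs build first,
# but only when its sources have actually changed.
depends = ["build"]
run = "npm test"
```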
Incidents
I moved the incident postmortems that we had, thankfully only a handful, from BetterStack onto our blog as part of a slightly larger initiative to move and consolidate some of our more scattered bits of content into a place where it's easy to maintain and surface.
One of the questions I've been asked a handful of times is why our postmortems are as technical as they are, for which I have two answers:
The first is that, frankly, it's easier to write a single nuanced postmortem than to maintain two versions of it, one for internal use and one for external use.
The second is that people who don't care about the postmortem also don't care that it's technical, and people who really do care, the ones who want a legitimate explanation and not just "Whoops, we fixed it," appreciate transparency, even if it is sometimes a little inscrutable. Put differently: I've never regretted oversharing when it comes to incident stuff, even when it can feel a little scary. Here's a tiny little side project idea for anyone who's looking for a way to spend an hour: marketing site Anki.
Anki
I want a service that every morning sends me a random entry from my sitemap so that I can go through and review it for outdated copy or screenshots. We're at the point now where we just have so much content over so many years that a bit of bitrot is inevitable. Shifting to use iframes has helped a lot here, but the migration is very much a long-term one. Other than that, I think it's useful to have a reminder that as much as I have a tremendous amount of state and context around Buttondown, the average user who learns about us for the first time through a random pillar page or, more likely, through someone else's newsletter, has almost no context.
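For what it's worth, the whole thing fits in a few dozen lines. Here's a hedged sketch in Python, with placeholder addresses, a placeholder sitemap URL, and a local SMTP relay standing in for whatever delivery mechanism you'd actually use (a cron job, a scheduled Lambda, a GitHub Action):

```python
# Sketch of "sitemap Anki": fetch a sitemap, pick one random page, and
# email it to yourself each morning. All URLs/addresses are placeholders.
import random
import smtplib
import urllib.request
import xml.etree.ElementTree as ET
from email.message import EmailMessage

SITEMAP_NS = "http://www.sitemaps.org/schemas/sitemap/0.9"

def pick_random_page(sitemap_xml: str) -> str:
    """Return one random <loc> URL from a sitemap document."""
    root = ET.fromstring(sitemap_xml)
    urls = [loc.text for loc in root.iter(f"{{{SITEMAP_NS}}}loc")]
    return random.choice(urls)

def send_daily_review(sitemap_url: str, to_addr: str) -> None:
    """Fetch the sitemap and mail one page to review for bitrot."""
    with urllib.request.urlopen(sitemap_url) as resp:
        page = pick_random_page(resp.read().decode())
    msg = EmailMessage()
    msg["Subject"] = f"Review for bitrot: {page}"
    msg["From"] = "anki@example.com"  # placeholder sender
    msg["To"] = to_addr
    msg.set_content(f"Today's page to check for outdated copy/screenshots:\n{page}")
    with smtplib.SMTP("localhost") as smtp:  # assumes a local mail relay
        smtp.send_message(msg)
```

Run `send_daily_review` from whatever scheduler you have lying around and you're done.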
Dogfooding our API
I had two calls this week with folks who use Buttondown as part of the platform for their SaaS, i.e., they have users, and they use Buttondown to spin up entire newsletters, one per user. This is a fun use case that I think would be a larger part of our overall business if I knew how to actually talk about it. Instead, it seems like people often just fall into the pit of success, as evinced by the fact that both people I talked to wanted to make sure that this was "okay" and not a violation of our terms.
But that's a bit of a non-sequitur. What I actually want to talk about is a fun little quirk of Buttondown's architecture, one that has been around for so long that I forget its novelty: we dogfood our own API.
If you go to the dashboard and open the network inspector, you'll see that the vast majority of the API calls we make are to the actual external-facing API. This is a tactic I picked up while at Stripe, which has, or at least had, a very similar approach. The benefits are really obvious: you significantly lower your surface area, you dogfood every new endpoint, and you immediately discover performance hotspots, because you encounter them from the perspective of non-API users too. It has also forced us to invest a lot in our OpenAPI schema and type safety writ large, even though we haven't yet closed the loop and started generating clients for users à la Stainless.
The drawbacks are few but non-trivial:
For starters, we have to think pretty hard about data modeling just to build even experimental or lower-confidence dashboard features. (You could argue that this is a positive insofar as it forces me to make a lot of those decisions early, even though I'm the kind of person who would happily punt on them forever.)
There's also a fairly significant overhead when you're building something from the API out to the dashboard: you have to think about migrations and backwards compatibility and a whole slew of things that would be slightly easier with an API that didn't contractually oblige us quite so much.
But I think this is all kind of trivial compared to the biggest strength, which is leverage.
Our API is really, really good, even if institutionally we don't talk about it or market it as much as we should. It is the closest thing we have to a technical moat at the moment, and probably one of our biggest assets outside of our brand and goodwill. That would straight-up not be the case if we weren't dogfooding it constantly.