Failed experiments on the bleeding edge

2026-01-11


It is fun and easy to talk about shiny new toys; it is more useful to talk about the times when we have to throw them away. Here are a few before I forget about them:

  1. DuckDB. I talked about this a little in October, but we spent a very large chunk of last year trying to use DuckDB as our analytics database, and it just didn't work in prod. This was partially due to us and the bad assumptions we had made about our data models; partially due to DuckDB itself and the points where its scaling falls off a cliff; and partially due to our DBs, internal tooling, and a lack of production-level polish. And lastly, some of it was simply because we underestimated the difficulty of adopting and debugging a completely new technology with relatively little literature around it. I'm still sad it didn't work out, but somewhat relieved: it pushed us to migrate to PlanetScale sooner (which has incidentally solved most of our "database slow"-shaped problems).

  2. Granola. I run customer support office hours every weekday, meaning anyone on the team can hop in and pair with me on Tuple, whether it's about a ticket, some environment issue, or just to shoot the shit. The idea of recording and indexing these videos so we could search them later was really fun and certainly not a bad one. But after a month of doing this, I realized that I had literally never tried to search back through the history. I removed it not because it was tricky or difficult, but simply because it didn't provide any value.

  3. Vector-backed search. When we relaunched the docs in May of 2024, we were excited about using embeddings to power search and solve a lot of the fuzzy-searching pain points that purely naive substring matching had. It was an improvement, but it still had problems: it was slow, and it mangled strings. Meanwhile, Ben discovered Orama: ostensibly another AI search solution, but also just an extremely ergonomic and performant offline search engine. The entire search implementation is now ~30 lines of code and performs flawlessly.

  4. Vercel Analytics. Vercel Analytics seemed like a fairly obvious successor to Fathom, which we moved off of because we wanted to be able to record A/B tests. Vercel's integration with their own flags.sdk SKU seemed like an obvious and natural fit, and given that we were already using Next.js, the performance footprint seemed like a no-brainer. Unfortunately, the product's feature set (which was underwhelming then) has literally not changed in the past two years, and there does not appear to be any reason to expect change in the medium term.

Of these four, the only truly painful one was DuckDB: we committed a lot of time and energy to something that ended up in the rubbish bin. Still, it forced us to get much better at a lot of database things, which has served us well. Next time, though, I think we'll be more disciplined about building a full end-to-end tracer bullet to unearth the unglamorous parts early.

