Trading time for visibility
Today's question comes from Benedict:
Do you track events in Buttondown and how do those feed back into product development? How have you changed what you track over time, and how you interpret it?
— Benedict Fritz (@benedictfritz) February 18, 2023
This is a great question!
Let me start by outlining my mental model on data-driven decision making in a technical organization:
Collecting, maintaining, and analyzing data in a rigorous and epistemologically honest manner is a way of exchanging resources (time, sometimes money) for visibility.
And like any trade, sometimes the deal is worth it; sometimes it is not.
You may not be surprised to learn that, for instance, I don't find myself in need of much visibility when it comes to feature development or prioritizing between various product-level efforts. This is for a number of reasons:
- I gather a lot of ambient qualitative feedback, and that ambient feedback is non-trivially statistically significant. Even though I'm no longer the person spending the most time in HelpScout, I'm usually chatting with at least a dozen customers a day in some way, shape, or form — I feel like I have extremely high baseline visibility into how people use Buttondown and how they feel about it.
- There are very few unknown unknowns. Buttondown is a relatively new entrant into a very crowded vertical; not to sound defeatist, but there are few truly novel product choices to make. Most of my user personae are fairly well-defined (figuratively, that is, not literally: I do not have some sort of `User Personae.docx` floating around anywhere), and there's a lot of prior art to piggyback off of.
- The penalty of choosing the wrong thing (say, focusing on the seventh most important product evolution instead of something in the top three) is extremely low. I don't have a burn rate that I need to course-correct; I don't have a VP threatening to pull head count if I don't hit a certain KPI.
All of which is to say: there's no function or process in Buttondown that goes, roughly, `potential_projects + data => critical_path`...

...but I still use data! Mostly in what I think of as boring ways. Like:
- A feature adoption dashboard, divided thrice: % of total active users, % of 30-day actives, % of MRR. (Feature here is deliberately broad; I don't think it's useful, for instance, to track individual button clicks, but I think it's useful to see who's using the referral system.)
- Performance! Performance is something I have particularly poor sensitivity to: I interact with Buttondown in so many different contexts and situations that I develop tunnel vision. I track the runtime of every API request; I track how long it takes for every email to land in every inbox (and for the concomitant webhook events to be emitted).
- High-cardinality data that isn't immediately actionable but that I want to keep tabs on for marketing material, like "what percentage of newsletters are sent out in a RTL language?" or "how many of the API clients hitting Buttondown are coming from Next?"
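As a concrete sketch of what "divided thrice" means in practice, here is a minimal, entirely hypothetical version of that first dashboard's math. The record shape and the feature flag are invented (Buttondown's real schema isn't public), but the three percentages are the ones described above:

```python
from dataclasses import dataclass
from datetime import date

# Hypothetical user record; Buttondown's real schema is not public.
@dataclass
class User:
    last_seen: date
    mrr_cents: int        # 0 for free-tier users
    uses_feature: bool    # e.g. "has this user set up referrals?"

def adoption(users: list[User], today: date) -> dict[str, float]:
    """Slice adoption of one feature three ways, as in the dashboard."""
    def pct(n: int, d: int) -> float:
        return round(100 * n / d, 1) if d else 0.0

    adopters = [u for u in users if u.uses_feature]
    recent = [u for u in users if (today - u.last_seen).days <= 30]
    return {
        "pct_of_actives": pct(len(adopters), len(users)),
        "pct_of_30d_actives": pct(
            sum(u.uses_feature for u in recent), len(recent)
        ),
        "pct_of_mrr": pct(
            sum(u.mrr_cents for u in adopters),
            sum(u.mrr_cents for u in users),
        ),
    }

users = [
    User(date(2023, 2, 18), 500, True),    # paying, active, on the feature
    User(date(2023, 1, 9), 500, False),    # paying, dormant
    User(date(2023, 2, 8), 0, True),       # free tier, active, on the feature
    User(date(2023, 2, 18), 1000, False),  # paying, active
]
print(adoption(users, today=date(2023, 2, 18)))
# {'pct_of_actives': 50.0, 'pct_of_30d_actives': 66.7, 'pct_of_mrr': 25.0}
```

The three slices answer different questions: % of actives tells you reach, % of 30-day actives tells you whether engaged users care, and % of MRR tells you whether the people paying you care.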
All of this feeds into visibility, and implicitly into my roadmap, but it's hard to point to specific circumstances where I explicitly made a decision based on the data and nothing else. It's more of a reinforcement, or a dossier to consult when trying to reconcile qualitative feedback ("FooCorp wrote in that her p95 performance on `/v1/subscribers` has been slipping... is this an isolated issue or something systemic?") or when brainstorming long-tail work (see last week's post on SEO). But 95% of the time, it's vibes-based development. That's a conscious choice: it's easier to lean into the vibes if your roadmap deliberately minimizes surface area.
The point I want to end on: one way to improve the outcome of a trade is to give up less in the trade. If you're spending 20% of your resources maintaining your data posture, you'd better be getting a hell of a lot of value from it; conversely, if you're spending 2% of your resources on it, the threshold for usefulness is much lower.
Whenever possible, I just use operational data. Buttondown has no analytics database, just a read-only replica; Buttondown's operational database has no data used exclusively for analytics:
- The dashboard I use to track email engagement + delivery is sourced from the exact same data I vend to authors
- The dashboard I use to track various subscription lifecycle events ("how many folks are churning due to dunning?", "how many folks are upscribing due to a teaser email?") is sourced from the data pipeline I use to power webhooks
- The dashboard I use to track feature adoption ("who's using surveys?", "what percentage of MRR comes from users who use metadata?") is sourced from entitlements + billing data
- The dashboard I use to track subscriber growth against pageview growth all comes from the same aggregated data that I use to power author-facing analytics pages
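To make the "operational data only" idea concrete, here is a minimal sketch. An in-memory sqlite database stands in for the read-only replica, and the `events` table and its columns are invented for illustration; the point is that the analytics query reads the same rows that power the product (here, the webhook pipeline), not a separate analytics copy:

```python
import sqlite3

# sqlite stands in for a read-only replica of the operational database;
# the `events` table and its columns are hypothetical.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE events (
        subscriber_id INTEGER,
        event_type TEXT,   -- e.g. 'churned', 'upgraded'
        reason TEXT,       -- e.g. 'dunning', 'voluntary'
        occurred_at TEXT
    );
    INSERT INTO events VALUES
        (1, 'churned',  'dunning',   '2023-02-01'),
        (2, 'churned',  'voluntary', '2023-02-03'),
        (3, 'churned',  'dunning',   '2023-02-10'),
        (4, 'upgraded', NULL,        '2023-02-11');
""")

# "How many folks are churning due to dunning?" answered straight
# from the rows that would also be emitted as webhook events.
(dunning_churn,) = conn.execute("""
    SELECT COUNT(*) FROM events
    WHERE event_type = 'churned' AND reason = 'dunning'
""").fetchone()
print(dunning_churn)  # 2
```

Because the query runs against a replica, it can't slow down (or corrupt) the production write path, and because there's no separate analytics store, there's nothing to drift out of sync.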
I get to cheat a little bit here, from a philosophical perspective: not every app requires an "analytics" view, and therefore not every app gets to approach data engineering as an end-user experience question.
Thanks again, Benedict, for asking!