March 23, 2026 edition

claude-usage-tracker

See exactly how much you spend on Claude, across every tool

Your AI Coding Bill Is a Mystery. This Tool Does the Math.

The Macro: The $40 Billion Observability Problem Everyone’s Pretending Doesn’t Exist

Something funny happened when AI coding assistants became genuinely useful. Developers stopped thinking about cost. Not because cost disappeared, but because it got distributed across so many surfaces that it became invisible.

You’re in Cursor writing a feature. You flip to Claude Code CLI to debug something hairy. You’re running Cline in the background on a refactor. Windsurf for something else. Each of those is burning tokens against Anthropic’s API, each tracked separately (if at all), each sending you a bill you have to mentally sum yourself. Nobody’s doing that mental sum correctly.

This is a real problem that’s getting worse as multi-agent, multi-tool workflows become standard for anyone doing serious AI-assisted development. The tools are multiplying faster than the observability layer. That’s the gap.

Here’s what I think most people get wrong: they treat this as a “nice to have” problem. It’s not. The broader open source tooling market is enormous and growing. Multiple research firms peg the open source software market at well over $40 billion in 2025, with some projections hitting $190 billion by 2034. But that number is meaningless if half the developers building that software have no idea what their AI toolchain actually costs them. Cost visibility isn’t sexy. It’s also the thing every enterprise and indie developer eventually needs, and right now, nobody’s solving it at scale. The timing window for someone to own this layer is closing fast.

The competition here is weirdly scattered. There are a few macOS menu bar apps floating around GitHub, including one that got some traction in the r/ClaudeCode community and reportedly tracks usage limits in real time.

The Micro: One Dashboard, Nine Tools, Zero Cloud

The pitch is simple: you’re flying blind across your AI tools, and this fixes that.

Claude Usage Tracker auto-detects usage from nine or more tools including Cursor, Claude Code CLI, Windsurf, and Cline. It does this by scanning local session data, which is the right call. No cloud. No account creation. No telemetry. The data doesn’t leave your machine, which for developers who are privacy-conscious (read: most of them) is the only acceptable architecture.
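To make the local-scan idea concrete, here is a minimal sketch of the aggregation step: parsed session records in, estimated cost per model out. The field names and the per-million-token prices are illustrative assumptions, not the tool's actual schema or current Anthropic pricing.

```python
from collections import defaultdict

# Illustrative per-million-token prices (USD). Real prices vary by
# model and change over time; a real tool would keep these up to date.
PRICES = {
    "claude-sonnet": {"input": 3.00, "output": 15.00},
    "claude-opus": {"input": 15.00, "output": 75.00},
}

def cost_by_model(records):
    """Sum estimated USD cost per model from parsed session records.

    Each record is assumed to look like:
    {"model": str, "input_tokens": int, "output_tokens": int}
    """
    totals = defaultdict(float)
    for rec in records:
        price = PRICES.get(rec["model"])
        if price is None:
            continue  # unknown model: skip rather than guess a price
        totals[rec["model"]] += (
            rec["input_tokens"] / 1_000_000 * price["input"]
            + rec["output_tokens"] / 1_000_000 * price["output"]
        )
    return dict(totals)

# Tiny sample standing in for records scanned out of each tool's
# local session data (Cursor, Claude Code CLI, Windsurf, Cline, ...).
sample = [
    {"model": "claude-sonnet", "input_tokens": 200_000, "output_tokens": 50_000},
    {"model": "claude-opus", "input_tokens": 100_000, "output_tokens": 20_000},
]
print(cost_by_model(sample))
```

The hard part in practice isn't this arithmetic; it's parsing nine different tools' on-disk session formats, which is exactly the maintenance burden discussed below.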

The output is a proper dashboard. Daily cost breakdowns, model-level breakdowns (so you can see when you accidentally burned a session on Opus when Sonnet would have done fine), usage heatmaps, session logs, and monthly projections. That last one matters. Knowing what you spent yesterday is useful. Knowing what you’re trending toward by end of month is actually actionable.
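The simplest version of that projection is a linear extrapolation from month-to-date spend; the tool's actual method isn't documented here, so treat this as a sketch of the idea, not its implementation.

```python
import calendar
from datetime import date

def project_month_end(spent_so_far: float, today: date) -> float:
    """Extrapolate month-to-date spend to a full-month estimate,
    assuming spend continues at the same daily rate."""
    days_in_month = calendar.monthrange(today.year, today.month)[1]
    return spent_so_far / today.day * days_in_month

# $42.50 spent by March 23 projects to roughly $57 by month end.
print(project_month_end(42.50, date(2026, 3, 23)))  # ≈ 57.28
```

A real dashboard would likely weight recent days more heavily, since AI usage tends to be bursty, but even the naive version answers the actionable question: am I trending over budget this month?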

The delivery is a native macOS app, with a browser mode for everyone else. I'd want to know how the browser mode works in practice, since "browser mode" is doing a lot of work in that product description, but the core macOS experience sounds solid.

It launched free and fully open source under MIT. No freemium, no seat licensing, no “we’ll add that in Pro.” That’s a real choice with real implications. It got solid traction on launch day.

The interesting product decision here is the local-scan approach. Rather than requiring you to route API calls through their service (which would be cleaner data, but a massive trust ask), it reads what’s already sitting on your machine. Similar instinct to what I’ve seen in other privacy-first dev tools, like the thinking behind ByteRover’s local-first memory architecture. It’s a harder engineering problem but a much easier trust conversation.

Nine tools is a respectable starting list. The question is whether it stays current as the tool count keeps growing.

The Verdict: This Works, But Only If It Stays Boring and Open

I think this is genuinely useful, and I’m somewhat annoyed it doesn’t exist already in a more polished form from Anthropic itself. That should tell you something.

Here’s my strong take: this tool survives if and only if it becomes the unsexy infrastructure layer that developers forget they’re using. The no-cloud, open source stance isn’t just principled, it’s necessary. Developers won’t route financial data through a stranger’s server. Period. MIT license means someone forks it and maintains it if the original maintainer ghosts. That’s not a bug, that’s the whole point.

The real threat isn’t competition from Anthropic’s native dashboards (which will ship, and will be incomplete). The real threat is that the maintainer gets bored or burned out, and then nine tools become seven, and suddenly your biggest cost center isn’t tracked anymore. Open source without active maintenance dies quietly.

But here’s what makes me believe this survives: the problem is too real and too distributed for any single company to solve. Anthropic can’t build a dashboard that tracks Windsurf usage. Claude can’t see what’s happening in Cursor. The cross-tool aggregation is where the actual value lives, and that’s unfixable from inside any single product.

My prediction: in two years, this either becomes the unglamorous backbone of serious AI development workflows (which means it quietly gets forked a hundred times and becomes genuinely useful), or it stalls at 500 GitHub stars because the one person maintaining it moved on. There’s no middle ground here. Being useful isn’t enough. It has to be useful enough that someone else keeps it alive.

The HUGE Brief

Weekly startup features, shipped every Friday. No spam, no filler.