"Wait, which Claude Code session was the one where I figured out the CI flake?"
You've got dozens of repos on disk, hundreds of Claude Code sessions in ~/.claude/projects/, and no way to connect the two. repo-recall is a tiny local web app that joins three data sources you already have - git (commits, churn, working tree), gh (CI, PRs, issues, deploys), and Claude Code (sessions) - keyed to the same set of discovered repos. Two questions, two clicks:
- Which sessions touched this repo? → open the repo, see every session that had it as cwd.
- Which repos did this session touch? → open the session, see every repo it crossed.
Everything is local. The server binds 127.0.0.1 only, and the cache lives in $TMPDIR. Outbound calls are limited to gh run list for CI status and (if you opt into push notifications) signed Web Push deliveries to FCM.
Dashboard – every repo within N levels of where you launched it, ranked by a composite activity score. Per repo you get: session count, commits in the last 30d, LOC churn, unique authors, open PRs/issues, CI status, and any action-required signal (failing CI, dirty working tree, mid-rebase). Failures sort to the top regardless of score, so a broken repo you haven't touched in a month still surfaces.
Repo page – the 10 hottest files by churn (great for "where's all the thrash actually happening?"), then every Claude Code session that had this repo as its cwd.
Session page – metadata (duration, message count, token usage, cost estimate) and a full transcript with collapsible tool calls.
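The dashboard's ordering rule (action-required first, then composite score) can be sketched as a two-level sort key. This is an illustration with made-up field names, not repo-recall's actual scoring code:

```python
# Hypothetical sketch of the dashboard ordering: repos with an
# action-required signal sort first, then descending activity score.
# "score" and "action_required" are illustrative field names.
def dashboard_order(repos):
    # False sorts before True, so `not action_required` puts flagged repos first.
    return sorted(repos, key=lambda r: (not r["action_required"], -r["score"]))

repos = [
    {"name": "busy-and-healthy", "score": 90, "action_required": False},
    {"name": "quiet-but-broken", "score": 2, "action_required": True},
]
print([r["name"] for r in dashboard_order(repos)])
# → ['quiet-but-broken', 'busy-and-healthy']
```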
The same endpoints a browser hits are fine for an agent. Boot the server, hand a coding agent the local URL, and let it read the dashboard directly. repo-recall acts as a deterministic data aggregation layer: it does the repo walk, session parse, git log shell-outs, working-tree inspection, and CI fetch once, then serves a consistent structured view. The agent reasons over that snapshot instead of re-deriving the same joins with ad-hoc grep and git log calls every turn, and two agents asked the same question hit the same data.
The sweet spot is a broad prompt in auto mode: let the agent work through everything the dashboard flags without you babysitting each one. Copy-paste starters:
- Open http://127.0.0.1:7777 and work through every repo flagged as action-required. For each one, investigate the cause and resolve it.
- Open http://127.0.0.1:7777. Find every repo with a dirty working tree and either commit or discard the changes, whichever is appropriate per repo.
- Open http://127.0.0.1:7777. Find every repo with failing CI on the default branch, diagnose the failure, and push a fix.
- Open http://127.0.0.1:7777. Review my recent Claude Code sessions and surface any in-progress work I left unfinished across repos.
- Open http://127.0.0.1:7777/repos/<id>. Look at the hottest files by churn and tell me what's driving the thrash.
The same URLs a browser hits also serve JSON. Send Accept: application/json (or append ?format=json) to any of:
- `GET /` – the full dashboard projection: repos, banner counts, action-required items, recent sessions/commits, gh health, scan version.
- `GET /repos/{id}` – repo + sessions + commits + hotspots.
- `GET /sessions/{id}` – session metadata + linked repos + estimated cost.
- `GET /search?q=…` – partitioned hits (repos, sessions, commits).
Three endpoints exist purely for orchestrators that don't want HTML at all:
- `GET /api/action-required` – thin slice of just the action-required list. Each item carries `id = "<repo_id>:<signal>"` so you can tell "same broken thing, still broken" from "this one cleared and a different one appeared." Signals: `ci_failing`, `dirty_tree`, `in_progress_op`, `detached_head`, `review_requested`.
- `GET /api/scan-version` – single-integer poll target so you can ask "did anything change?" without paying the JSON projection cost.
- `POST /api/refresh` – sync refresh. Awaits the scan, returns the new `scan_version`. Sibling of `POST /refresh`, which returns 202 and asks you to watch the WebSocket.
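Because those ids are stable (`<repo_id>:<signal>`), a poller can diff successive responses with plain set operations. A minimal sketch (the id strings below are invented examples):

```python
# Hypothetical diff of two /api/action-required polls. Ids are
# "<repo_id>:<signal>", so set arithmetic separates persisting,
# cleared, and newly appeared signals.
def diff_action_required(prev_ids, curr_ids):
    prev, curr = set(prev_ids), set(curr_ids)
    return {
        "still_broken": sorted(prev & curr),
        "cleared": sorted(prev - curr),
        "new": sorted(curr - prev),
    }

before = ["api:ci_failing", "web:dirty_tree"]
after = ["api:ci_failing", "cli:detached_head"]
print(diff_action_required(before, after))
# → {'still_broken': ['api:ci_failing'], 'cleared': ['web:dirty_tree'], 'new': ['cli:detached_head']}
```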
Every JSON response carries ETag: "<scan_version>". Send If-None-Match on the next poll and you'll get 304 Not Modified between scans for free.
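A polling loop over that contract looks roughly like this. The `fake_server` stub stands in for a real HTTP client hitting http://127.0.0.1:7777; only the ETag/304 behaviour described above is assumed:

```python
# Hypothetical conditional-GET loop: remember the last ETag, send it as
# If-None-Match, and treat 304 as "same scan, nothing to re-read".
def poll(fetch, etag=None):
    status, new_etag, body = fetch(etag)
    if status == 304:
        return etag, None          # unchanged since the last scan
    return new_etag, body          # new scan_version, fresh payload

def fake_server(if_none_match):
    # Stands in for GET / with Accept: application/json.
    current = '"42"'               # ETag is the quoted scan_version
    if if_none_match == current:
        return 304, current, None
    return 200, current, {"scan_version": 42}

etag, body = poll(fake_server)        # first poll: 200 with a full payload
etag, body = poll(fake_server, etag)  # next poll: 304, body is None
```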
The HTML repo-list cards also carry data-repo-id, data-repo-name, data-action-required, and data-signals attributes, plus data-flag on each action pill. This lets you parse the dashboard without regexing Tailwind class soup if you'd rather not switch to JSON.
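A stdlib-only scrape of those attributes might look like this. The HTML snippet is invented for illustration, and treating data-signals as space-separated is an assumption, not documented behaviour:

```python
# Hypothetical scrape of the data-* attributes on repo-list cards.
from html.parser import HTMLParser

class RepoCards(HTMLParser):
    def __init__(self):
        super().__init__()
        self.cards = []

    def handle_starttag(self, tag, attrs):
        a = dict(attrs)
        if "data-repo-id" in a:
            self.cards.append({
                "id": a["data-repo-id"],
                "name": a.get("data-repo-name"),
                "action_required": a.get("data-action-required") == "true",
                # Assumption: signals are space-separated in the attribute.
                "signals": (a.get("data-signals") or "").split(),
            })

html = ('<div data-repo-id="abc" data-repo-name="api" '
        'data-action-required="true" data-signals="ci_failing dirty_tree"></div>')
parser = RepoCards()
parser.feed(html)
print(parser.cards[0]["signals"])
# → ['ci_failing', 'dirty_tree']
```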
repo-recall is also an MCP App server. Same data, same scan loop, but exposed to MCP hosts (Claude Desktop, ChatGPT, mcp-preview, ...) as tools and a renderable widget. Both surfaces always run in one process: the binary boots the axum dashboard and the MCP stdio server simultaneously, so a single brew-installed binary serves your browser AND your MCP host without needing two installs.
If the HTTP port is already in use (for example because another instance is already serving under brew services), the new instance falls back gracefully to MCP-only.
For Claude Desktop, drop this into ~/Library/Application Support/Claude/claude_desktop_config.json:
```json
{
  "mcpServers": {
    "repo-recall": {
      "command": "repo-recall",
      "env": {
        "REPO_RECALL_CWD": "/Users/you/projects",
        "REPO_RECALL_DEPTH": "4"
      }
    }
  }
}
```

Six tools are exposed: `recall_dashboard` (with widget), `recall_repo`, `recall_session`, `recall_search`, `recall_action_required`, `recall_refresh`. The dashboard widget renders inside the host's iframe.
```shell
# from the directory you want indexed:
cargo run
# then:
open http://127.0.0.1:7777
```

That's it. No config file, no setup wizard. The server walks its cwd + 4 levels deep for .git, parses ~/.claude/projects/**/*.jsonl, joins them by the session's recorded cwd, and ships HTML.
If you want to see what repo-recall looks like before pointing it at your own session history, the public demo image runs the same binary against synthetic fixtures (three fake repos, five fake sessions). Everything stays inside the container; mutating endpoints (push, pull, clone, push-notification subscribe) return 403, and the dashboard renders a "DEMO INSTANCE" banner.
```shell
docker run --rm -p 7777:7777 ghcr.io/coilysiren/repo-recall-demo:latest
# then:
open http://127.0.0.1:7777
```

The image is rebuilt on every push to main and gates on a smoke test that asserts the dashboard came up with non-empty repo, session, and join counts, that the banner rendered, and that a mutating endpoint 403s. If a parser refactor breaks fixture compatibility, the gate trips before the image promotes to latest.
make docker-demo-build and make docker-demo-smoke run the same flow locally if you want to iterate on the fixtures or the Dockerfile.
For long-lived background use, install via the coilysiren/tap and let brew services manage the daemon:
```shell
brew install coilysiren/tap/repo-recall
brew services start repo-recall
# then:
open http://localhost:7777
```

brew services follows the systemd-style start | stop | restart | info verbs. Logs go to `$(brew --prefix)/var/log/repo-recall.{log,err.log}`. brew upgrade keeps the binary current.
The Formula's default WorkingDirectory is $HOME so it'll work for any user out of the box. To point it at a specific tree (mine: ~/projects/coilysiren), edit your per-user service file once:
```shell
brew services edit repo-recall
# change WorkingDirectory and any REPO_RECALL_* env vars, save
brew services restart repo-recall
```

Edits persist across brew upgrade.
```shell
make install  # one-time: cargo-watch + pre-commit hooks
make watch    # rebuild on save; browser auto-reloads via /livereload
make test     # integration tests: boot the router on a random port, hit it
make ci       # fmt --check + clippy + check + test
make help     # all targets
```

Under cargo watch the binary's cwd is the Cargo project root, so point it at the tree you actually want scanned:

```shell
REPO_RECALL_CWD=/path/to/your/code cargo watch -w src -w Cargo.toml -w static -x run
```

A .env in the repo root is loaded automatically; drop your REPO_RECALL_* overrides there.
Drop an empty .repo-recall-ignore file at the root of any repo you've cloned for reading rather than working on (third-party sources, release-tag checkouts, vendored references). All action-required signals (detached HEAD, dirty tree, failing CI, etc.) are suppressed for that repo and it stops flowing into the action queue. Explicit, opt-in, no auto-detection.
| Var | Default | Purpose |
|---|---|---|
| `REPO_RECALL_PORT` | `7777` | HTTP port. Always bound to 127.0.0.1. |
| `REPO_RECALL_CWD` | process cwd | Directory to scan for repos. |
| `REPO_RECALL_DEPTH` | `4` | Directory levels below cwd to walk. |
| `REPO_RECALL_COMMITS_PER_REPO` | `500` | Max commits pulled per repo via `git log --all --no-merges`. |
| `REPO_RECALL_CACHE_DIR` | `$TMPDIR/repo-recall-<port>` | Cache directory holding `cache.redb`. Wiped and rebuilt on every startup. |
| `RUST_LOG` | `info,repo_recall=debug` | tracing-subscriber filter. |
Three input taps, all keyed to the same set of discovered repos:
- git – `git log --all --no-merges` for commits / LOC churn / unique-author counts in the last 30 days, plus working-tree inspection for untracked + modified file counts. Offline, cheap, the bulk of what the dashboard shows.
- gh – `gh run list` for CI status on the default branch, plus PRs awaiting your review, issues assigned to you, deploy workflow health, and open-PR counts. Network call, parallel post-pass, best-effort. Missing or unauthenticated `gh` degrades silently.
- Claude Code – `~/.claude/projects/**/*.jsonl` parsed for session metadata (id, timestamps, message count, cost, cwd) and joined to repos by `cwd`. Offline, cheap.
Internally these are bucketed into three refresh categories – Historical (past activity, offline), LocalState (working tree right now, offline), and RemoteState (network, parallel post-pass) – which drive how each attribute is refreshed and how it's rendered. See `activity::Category`.
Each source gets its own redb table. No unified "events" table: cross-source views are a query-time concern. The cache file is wiped and rebuilt on every process start, which trades a few seconds of scan time for zero migration code and zero stale-state bugs.
Sessions. Each *.jsonl under ~/.claude/projects/ is parsed for sessionId, first/last timestamps, the first user message (as a 200-char summary), message count, and cwd. Malformed lines are skipped with a debug log; the format drifts, and I'd rather keep going than bail on one bad record. Sessions join to repos when the session's cwd is inside a discovered repo. Other match types (touched file paths, branch names) are the natural extension point.
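The cwd join reduces to a path-prefix check: a session belongs to a repo when its recorded cwd is the repo root or any directory beneath it. A sketch (not the actual implementation), preferring the deepest root so nested repos win:

```python
# Hypothetical session-to-repo join by cwd containment. Deepest root
# wins so a nested repo beats its parent checkout.
from pathlib import PurePosixPath

def repo_for_cwd(cwd, repo_roots):
    cwd = PurePosixPath(cwd)
    for root in sorted(repo_roots, key=lambda r: -len(str(r))):
        root = PurePosixPath(root)
        if cwd == root or root in cwd.parents:
            return str(root)
    return None  # session ran outside every discovered repo

roots = ["/home/you/projects/api", "/home/you/projects"]
print(repo_for_cwd("/home/you/projects/api/src", roots))
# → /home/you/projects/api
```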
Commits. git log --all --no-merges as a subprocess per repo, NUL-separated, capped at REPO_RECALL_COMMITS_PER_REPO. Shelling out to system git beats libgit2's build pain. Per-repo errors are swallowed at debug!, so one weird repo doesn't abort the whole scan.
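Parsing that output is a straight split on NUL. The per-record field layout below (hash, author, subject joined by the ASCII unit separator, as a pretty-format like `%H%x1f%an%x1f%s` would emit) is an assumed format for illustration, not necessarily repo-recall's:

```python
# Hypothetical parse of NUL-separated git log records.
def parse_commits(raw: bytes):
    commits = []
    for record in raw.split(b"\x00"):
        if not record:
            continue  # a trailing NUL leaves one empty record
        sha, author, subject = record.decode("utf-8", "replace").split("\x1f")
        commits.append({"sha": sha, "author": author, "subject": subject})
    return commits

raw = b"abc123\x1fAlice\x1ffix CI flake\x00def456\x1fBob\x1fbump deps\x00"
print(len(parse_commits(raw)))
# → 2
```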
UI. Server-rendered HTML via maud (compile-time checked templates), styled with Tailwind v4 compiled via the standalone CLI (make css, output committed to static/tailwind.css), interactivity via htmx + htmx-ext-ws. Scan progress streams as out-of-band HTML fragments over a WebSocket; htmx pulls them out by id and swaps them in. No JSON progress protocol, no client JS to speak of.
- Stores metadata + a truncated 200-char summary only, not full transcripts on disk. The transcript page re-reads the JSONL at request time.
- Loopback only. Never listens on `0.0.0.0`, never on a shared-box socket.
- Tailwind ships as a same-origin compiled CSS file; htmx still loads from a CDN in the browser, not from the server process.
- Outbound calls: `gh run list` for CI status (reuses your existing `gh` auth, no tokens stored, no tokens read from env; no `gh` and the CI column stays blank), and Web Push deliveries to `fcm.googleapis.com` if you opt into PWA notifications. Push is off until you grant permission and subscribe; the VAPID keypair lives in `~/.local/share/repo-recall/state.redb`.
The 200-char summary can still contain pasted credentials or sensitive text. Treat the redb cache as sensitive (it defaults to $TMPDIR/repo-recall-<port>/cache.redb, which most OSes wipe on reboot).
Repo-recall sits at the intersection of GitHub-state dashboards, multi-repo git-status walkers, and Claude Code session tooling. Closest neighbours, ordered most-similar to least:
- dlvhdr/gh-dash - GitHub TUI dashboard for PRs and issues across repos. Closest analog for the `review_requested`/`issue_assigned` pillar; repo-recall adds local-clone state and Claude Code session joins on top.
- steipete/RepoBar - macOS menu bar with CI, PRs, issues, releases, branch + sync state. Same dashboard surface in a different shape; repo-recall lives in the browser and MCP rather than the menu bar, and tracks sessions.
- fboender/multi-git-status - depth-limited walker that prints uncommitted / untracked / unpushed / stashes per repo. Validates the discovery model; repo-recall layers remote state, action-required signals, and an HTTP + MCP surface on top.
- jhlee0409/claude-code-history-viewer - desktop app that renders chat-style transcripts for Claude Code, Codex, Cursor, Aider, OpenCode. Complementary; repo-recall surfaces session metadata and the repo each session worked on, not full transcripts.
- matt1398/claude-devtools - "missing DevTools" for Claude Code: visual inspector for tool calls, subagents, token usage, context window. Complementary; claude-devtools lives inside one session, repo-recall lives across sessions and repos.
- cordwainersmith/Claudoscope - native macOS menu bar with session analytics and cost estimation. Overlaps the per-session metrics; repo-recall ties those metrics to the repo each session actually worked on.
- safishamsi/graphify - tree-sitter code-knowledge graph exposed to AI assistants over MCP for within-repo "who calls what" questions. Different layer; graphify models code structure inside one repo, repo-recall models activity across many repos and sessions.
- adamtornhill/code-maat - CLI that mines git logs for churn, coupling, hotspots, contribution. Deeper offline analysis on a single repo; `loc_churn_30d` and `authors_30d` are primitive online forms of what Code Maat does exhaustively.
- kamranahmedse/git-standup - walks `git log` across nested repos to recap yesterday's work. Pair its commit scraping with repo-recall's session scraping for a richer answer to "what was I doing?"
See AGENTS.md for the conventions: what's a cache vs. a database, how to add new session→repo match types, why DB access uses spawn_blocking, why data sources stay as separate tables, and so on.


