Reproducible react-doctor scores for popular open-source React frontends.
The scores below are produced by GitHub Actions on a weekly cron (and on demand). Every entry is scanned with `npx react-doctor@latest --json --offline` against a fresh clone of the upstream repo, and the resulting JSON is committed alongside this README so the leaderboard is fully auditable.
| Rank | Project | Score | Errors | Warnings | Files | Commit |
|---|---|---|---|---|---|---|
| 1 | executor | 🟢 96/100 | 3 | 8 | 7 | ec3f61e |
| 2 | nodejs.org | 🟢 86/100 | 0 | 196 | 179 | 125b760 |
| 3 | tldraw | 🟡 70/100 | 7 | 145 | 76 | 2eb9f83 |
| 4 | t3code | 🟡 68/100 | 0 | 763 | 256 | 131234b |
| 5 | better-auth | 🟡 64/100 | 0 | 628 | 266 | cf59136 |
| 6 | excalidraw | 🟡 63/100 | 1 | 555 | 177 | b2b2815 |
| 7 | mastra | 🟡 63/100 | 23 | 468 | 207 | 4df7cc7 |
| 8 | payload | 🟡 60/100 | 4 | 767 | 424 | 80dabd3 |
| 9 | typebot | 🟡 57/100 | 4 | 382 | 206 | 85eb843 |
| 10 | plane | 🟡 56/100 | 9 | 1944 | 833 | 4c1bdd1 |
| 11 | medusajs/admin | 🟡 56/100 | 10 | 591 | 247 | 7747d05 |
| 12 | rocket.chat | 🟡 51/100 | 57 | 2179 | 970 | 2a927fa |
| 13 | twenty | 🔴 48/100 | 85 | 1833 | 1240 | 0f8ee57 |
| 14 | unkey | 🔴 48/100 | 27 | 966 | 380 | 5a6c1ad |
| 15 | shadcn/ui | 🔴 46/100 | 15 | 2349 | 1014 | fc1ca40 |
| 16 | trigger.dev | 🔴 42/100 | 37 | 1942 | 590 | 749dc46 |
| 17 | formbricks | 🔴 41/100 | 11 | 4056 | 811 | 69ead97 |
| 18 | langfuse | 🔴 36/100 | 27 | 3167 | 852 | cfb88be |
| 19 | tooljet | 🔴 33/100 | 190 | 5976 | 1459 | f2f18d1 |
| 20 | onlook | 🔴 32/100 | 65 | 2034 | 416 | a242be5 |
| 21 | cal.com | 🔴 32/100 | 43 | 1582 | 425 | a4a01a0 |
| 22 | posthog | 🔴 31/100 | 682 | 5304 | 1833 | 8d12f45 |
| 23 | appsmith | 🔴 31/100 | 145 | 2450 | 1314 | 9ab4a39 |
| 24 | supabase | 🔴 30/100 | 52 | 3304 | 1315 | 0278672 |
| 25 | sentry | 🔴 30/100 | 179 | 3196 | 1652 | 52ecb95 |
| 26 | lobehub/lobe-chat | 🔴 30/100 | 298 | 6474 | 1619 | c760171 |
| 27 | dub | 🔴 24/100 | 60 | 3355 | 1209 | a5fa025 |
Last updated 2026-05-08T13:25:23.597Z · react-doctor 0.1.0 · 27 scored, 0 skipped/failed · raw results in `results/latest.json`
- `repos.yaml` lists every benchmark target with its GitHub URL, the workspace project to scan, and any per-repo overrides (skip dead-code, skip install, etc.).
- The `benchmark` workflow fans out across the list using a matrix strategy. Each job clones one repo, attempts `pnpm`/`npm`/`yarn`/`bun install --ignore-scripts` (auto-detected from the lockfile), and runs `npx -y react-doctor@latest <scanDir> --json --offline --fail-on none`. If install fails, it falls back to `--no-dead-code` and scans the source anyway so a working score still lands.
- A final `publish` job downloads every per-repo artifact, writes `results/latest.json`, regenerates this README's leaderboard table, and commits the diff back to `main` using the default `GITHUB_TOKEN`. If nothing changed (idempotent rendering), the commit step is a no-op.
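The lockfile auto-detection step can be sketched roughly like this (a hypothetical `detectPackageManager` helper, not the harness's actual implementation):

```javascript
// Map each lockfile name to the package manager that owns it.
// Hypothetical sketch of the auto-detection the benchmark job performs.
const LOCKFILES = {
  "pnpm-lock.yaml": "pnpm",
  "package-lock.json": "npm",
  "yarn.lock": "yarn",
  "bun.lockb": "bun",
};

// Given the file names at the repo root, pick the first matching manager;
// fall back to npm when no lockfile is present.
function detectPackageManager(rootFiles) {
  for (const [lockfile, manager] of Object.entries(LOCKFILES)) {
    if (rootFiles.includes(lockfile)) return manager;
  }
  return "npm";
}
```

Whichever manager is picked then gets the equivalent of `install --ignore-scripts`, so repo-defined postinstall hooks never run inside CI.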
The harness pins nothing about the upstream repos by default: every entry tracks the HEAD of its default branch, and the SHA actually scanned is recorded in each result row, so any score is reproducible. To pin a specific commit, set the `ref` field on a `repos.yaml` entry to a branch, tag, or SHA.
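A pinned entry might look like this (the slug and fields mirror the tldraw row above; the `ref` value shown is the SHA from its latest scan):

```yaml
- slug: tldraw
  name: tldraw
  githubUrl: https://github.com/tldraw/tldraw
  ref: 2eb9f83 # branch, tag, or SHA; omit to track the default branch's HEAD
```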
Every CI run writes `results/leaderboard.json`: a slim, stable JSON blob that downstream repos can fetch and drop in. It mirrors react-doctor's `LeaderboardEntry` interface so a one-shot replacement is straightforward.
Stable URL (always main, always the latest run):
https://raw.githubusercontent.com/millionco/react-doctor-benchmarks/main/results/leaderboard.json
Schema:

```ts
interface ConsumerLeaderboard {
  schemaVersion: 1;
  generatedAt: string;          // ISO 8601 UTC
  doctorVersion: string | null; // e.g. "0.0.47"
  source: { repo: string; path: string; docs: string };
  entries: Array<{
    slug: string;        // "tldraw"
    name: string;        // "tldraw"
    githubUrl: string;   // "https://github.com/tldraw/tldraw"
    packageName: string; // workspace project name passed via --project
    score: number;       // 0–100
    errorCount: number;
    warningCount: number;
    fileCount: number;   // affectedFileCount in react-doctor's JsonReport
    commitSha: string | null; // SHA we actually scanned
    scannedAt: string;
  }>; // sorted desc by score
}
```

Example: regenerate react-doctor's `leaderboard-entries.ts` from CI:
```yaml
# .github/workflows/refresh-leaderboard.yml in millionco/react-doctor
name: refresh leaderboard
on:
  schedule:
    - cron: "0 7 * * 1" # one hour after react-doctor-benchmarks runs
  workflow_dispatch:
jobs:
  refresh:
    runs-on: ubuntu-latest
    permissions: { contents: write, pull-requests: write }
    steps:
      - uses: actions/checkout@v4
      - name: fetch latest leaderboard
        run: |
          curl -sSfL \
            https://raw.githubusercontent.com/millionco/react-doctor-benchmarks/main/results/leaderboard.json \
            -o /tmp/leaderboard.json
      - name: codegen leaderboard-entries.ts
        run: |
          node scripts/codegen-leaderboard.mjs /tmp/leaderboard.json \
            > packages/website/src/app/leaderboard/leaderboard-entries.ts
      - uses: peter-evans/create-pull-request@v6
        with:
          title: "chore: refresh leaderboard from react-doctor-benchmarks"
          commit-message: "chore: refresh leaderboard"
          branch: chore/refresh-leaderboard
```

Where `scripts/codegen-leaderboard.mjs` is whatever projection makes sense for your downstream; typically:
```js
// scripts/codegen-leaderboard.mjs
import { readFileSync } from "node:fs";

const data = JSON.parse(readFileSync(process.argv[2], "utf8"));
const rows = data.entries.map((e) => `  ${JSON.stringify({
  name: e.name, githubUrl: e.githubUrl, packageName: e.packageName,
  score: e.score, errorCount: e.errorCount, warningCount: e.warningCount,
  fileCount: e.fileCount,
})},`).join("\n");
process.stdout.write(`// Auto-generated from ${data.source.repo} on ${data.generatedAt}\n` +
  `export const RAW_ENTRIES = [\n${rows}\n];\n`);
```

The blob is rewritten on every CI run, so even when scores don't change the `generatedAt` timestamp does; you can safely diff or skip in your downstream codegen.
Open a PR that adds an entry to `repos.yaml`. The schema is defined and validated in `scripts/lib/config.ts`:
```yaml
- slug: my-project     # kebab-case, must be unique
  name: my-project     # display name in the leaderboard
  githubUrl: https://github.com/owner/repo
  scanDir: apps/web    # optional, default "."
  project: "@my/web"   # optional, passes --project to react-doctor
  packageManager: pnpm # optional, auto-detected from lockfile
  skipDeadCode: false  # optional, true → pass --no-dead-code
  skipInstall: false   # optional, true → don't install (implies skipDeadCode)
```

Once merged, the entry shows up the next time the workflow runs (weekly cron, or click Run workflow on the benchmark action).
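A validator along the lines of `scripts/lib/config.ts` might check the required fields and slug format like this (a sketch built from the fields above, not the actual code):

```javascript
// Minimal shape check for a repos.yaml entry; a sketch of the kind of
// validation scripts/lib/config.ts performs, not the real implementation.
const KEBAB_CASE = /^[a-z0-9]+(-[a-z0-9]+)*$/;

// Returns a list of problems; an empty list means the entry is valid.
// Pass a shared Set across entries to enforce slug uniqueness.
function validateEntry(entry, seenSlugs = new Set()) {
  const errors = [];
  if (!KEBAB_CASE.test(entry.slug ?? "")) errors.push("slug must be kebab-case");
  if (seenSlugs.has(entry.slug)) errors.push("slug must be unique");
  if (!entry.name) errors.push("name is required");
  if (!/^https:\/\/github\.com\//.test(entry.githubUrl ?? "")) {
    errors.push("githubUrl must be a GitHub URL");
  }
  // skipInstall implies skipDeadCode: without node_modules, dead-code
  // analysis cannot resolve imports.
  if (entry.skipInstall && entry.skipDeadCode === false) {
    errors.push("skipInstall: true implies skipDeadCode: true");
  }
  seenSlugs.add(entry.slug);
  return errors;
}
```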
```sh
pnpm install
pnpm tsx scripts/benchmark-repo.ts dub # scan one entry
pnpm tsx scripts/aggregate.ts          # collect results/per-repo/*.json → results/latest.json
pnpm tsx scripts/render-readme.ts      # splice into README between markers
```

Set `REACT_DOCTOR_VERSION=1.2.3` to pin a specific upstream version (defaults to latest).
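Resolving that pin could look like this inside the harness (a hypothetical helper; the real logic lives in `scripts/`):

```javascript
// Build the npx package spec the harness runs, honouring the
// REACT_DOCTOR_VERSION override (hypothetical helper, not the real code).
function reactDoctorSpec(env = process.env) {
  const version = env.REACT_DOCTOR_VERSION || "latest";
  return `react-doctor@${version}`;
}
```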
| Path | What |
|---|---|
| `repos.yaml` | Benchmark targets (canonical source of truth). |
| `results/latest.json` | Aggregated leaderboard data; auto-generated. |
| `results/per-repo/<slug>.json` | Per-repo result; auto-generated. |
| `scripts/` | The harness (matrix prep, single-repo benchmark, aggregator, README renderer). |
| `.github/workflows/benchmark.yml` | The automation. |
Thirteen of the entries (tldraw, excalidraw, twenty, plane, formbricks, posthog, supabase, onlook, payload, sentry, cal.com, dub, nodejs.org) were originally compiled by hand for the react.doctor/leaderboard page in millionco/react-doctor. This repo turns that snapshot into a self-updating, auditable benchmark and adds ten more popular OSS React apps.
MIT.