PR Size Distribution

Big PRs get rubber-stamped. Small PRs get read.

Paste +additions -deletions pairs straight from GitHub, one PR per line, or a plain total per line. The calculator sums the numbers on each line.
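That parsing rule can be sketched in a few lines of Python. This is an illustration of the rule as described, not the calculator's actual code; it assumes numbers may carry thousands separators and ignores signs:

```python
import re

def line_total(line: str):
    # Sum every number on the line, ignoring signs and commas:
    # "+374 -147" -> 521, "521" -> 521, a blank line -> None.
    nums = re.findall(r"\d+", line.replace(",", ""))
    return sum(int(n) for n in nums) if nums else None

pasted = """+374 -147
+12 -3
521"""
totals = [t for t in map(line_total, pasted.splitlines()) if t is not None]
```

Either format works because the sum of a single plain number is just that number.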
How to get the last 10 PRs from GitHub

With the GitHub CLI installed, from inside the repo:

gh pr list --state merged --limit 10 \
  --json additions,deletions \
  --jq '.[] | "+\(.additions) -\(.deletions)"'

How this is calculated

Percentiles use nearest-rank. Buckets are 0 to 50, 50 to 200, 200 to 400, 400 to 1000, and 1000+. Each line is read as the total churn in that PR. If you paste +374 -147, the calculator adds the two numbers (521). A plain number is used as-is. Consistency matters more than which format you pick.
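The nearest-rank rule and the bucket edges above can be sketched in Python. A minimal illustration, assuming each PR's churn has already been reduced to a single total; the function names and the choice to put an on-edge value (exactly 50, 200, 400, or 1,000) in the higher bucket are mine, not necessarily the calculator's:

```python
import math

def nearest_rank(totals, p):
    # Nearest-rank percentile: the smallest value such that at least
    # p% of the data is less than or equal to it.
    s = sorted(totals)
    rank = math.ceil(p / 100 * len(s))  # 1-based rank
    return s[rank - 1]

# Upper edges of the buckets from the text: 0-50, 50-200, 200-400, 400-1000, 1000+.
EDGES = [50, 200, 400, 1000]

def bucket_counts(totals):
    counts = [0] * (len(EDGES) + 1)
    for t in totals:
        # Count how many edges the total clears; that is its bucket index.
        counts[sum(t >= e for e in EDGES)] += 1
    return counts
```

With five PRs totalling 30, 80, 250, 521, and 1,200 lines, nearest-rank gives a median of 250 and a p90 of 1,200, one PR per bucket.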

Why this matters

PR size is the single best predictor of review quality. Reviewers skim PRs over a few hundred lines; they read smaller ones. A team that has normalized 50-line PRs has a review culture that actually catches things. A team where everything lands as a 1,200-line monolith has a review culture where nothing is caught until production finds it.

A p75 over 400 is the warning sign. A p90 over 1,000 means most of your risky changes are landing with effectively no review. The median hides that, which is why all three percentiles matter.

What to do next

  • If p90 is over 1,000, start breaking your next feature into a stack of PRs that each merge behind a flag. Get the pattern in muscle memory before trying to change the average.
  • If reviewers are drowning, hard-cap PR size by team agreement. A 400-line ceiling is common.
  • If most of your large PRs are generated code, that is fine. Exclude them from the number. Measure what you want people to read.