AI Code Pulse · Updated February 19, 2026

We reviewed 56,706 pull requests last month.

331 open-source repos. 56,706 pull requests. Every review graded against what actually shipped. We never train on customer data — all our research comes from publicly available repositories.

331 repos monitored · 56,706 PRs reviewed · updated monthly

Graded by Opus 4.6. Each PR reviewed twice — with and without docs.

AI Adoption

Every other repo now ships a CLAUDE.md.

We scanned the most active repos for AI configuration files.

Every single one had at least one.

The average repo ships 2.6 AI config files.

CLAUDE.md · 54%
AGENTS.md · 49%
.claude · 42%
.cursor · 40%
.cursorrules · 40%
copilot-instructions.md · 22%
.aiderignore · 2%
CODEX.md · 1%

The most instrumented repos — PostHog, Superset, Ghost, cal.com — ship 5+ AI config files each.
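The scan above boils down to checking each repo checkout for a fixed set of marker files. A minimal sketch of that detection step — the marker names come from this report, but the function and its logic are a hypothetical illustration, not the study's actual tooling:

```python
from pathlib import Path

# AI config markers from the table above. copilot-instructions.md
# conventionally lives under .github/, so both locations are checked.
AI_CONFIG_MARKERS = [
    ".claude", ".cursor", ".aiderignore", ".cursorrules",
    "AGENTS.md", "CLAUDE.md", "CODEX.md",
    "copilot-instructions.md", ".github/copilot-instructions.md",
]

def detect_ai_configs(repo_root):
    """Return the AI config markers present in a repo checkout."""
    root = Path(repo_root)
    return [m for m in AI_CONFIG_MARKERS if (root / m).exists()]
```

Counting hits per repo across all checkouts gives both the per-file percentages and the 2.6 average.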

Vendor Lock-in

Nobody’s picking one AI tool. The default is all of them.

46% of repos configure two or more AI vendors. Claude leads. Copilot trails.

Claude / Anthropic · 81%
Cursor · 40%
GitHub Copilot · 22%
CodeRabbit · 9%

46% · multi-vendor · configure 2+ AI tools

81% · Claude adoption · most popular by far

2.2 · avg vendor configs per repo · not picking one, shipping many

Instructions

Testing tops every repo’s AI instructions. Performance barely makes the list.

Testing · 94%
Code quality · 91%
Architecture · 85%
API design · 80%
Database · 77%
Documentation · 75%
Security · 69%
Performance · 49%

3,500 bytes · median instruction file · ~100 lines of guidance per repo

22% · stub files · config exists but says nothing

Teams invest most in telling AI how to test. They invest least in telling it how to optimize. Security sits at 69% — better than expected, worse than it should be.

The Paradox

Frameworks write 25x more docs but score lower on review accuracy.

Framework repos · 4,040 KB avg · 341 docs
Business repos · 150 KB avg · 15 docs

81%

Review accuracy — business repos

68.5%

Review accuracy — framework repos

Business repos explain why. Frameworks document what. The reviewer learns more from intent than from API surface.
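The ~25x headline is a round figure sitting between the two ratios implied by the averages above. A quick check of the arithmetic, using the reported numbers (sizes in KB, doc counts per repo):

```python
# Documentation ratios from the reported averages.
framework_kb, framework_docs = 4040, 341
business_kb, business_docs = 150, 15

size_ratio = framework_kb / business_kb       # ~26.9x more docs by volume
count_ratio = framework_docs / business_docs  # ~22.7x more doc files
```

Either way you slice it — bytes or file counts — frameworks carry over 20x the documentation, yet score 12.5 points lower on review accuracy.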

YC Effect

YC companies adopt AI tooling at 1.6x the rate.

43 YC-backed repos vs 240 non-YC. Same open-source ecosystem. Different bets.

Any AI config · YC 56% · Non-YC 34%
CLAUDE.md · YC 30% · Non-YC 18%
Cursor config · YC 26% · Non-YC 11%
Copilot instructions · YC 2% · Non-YC 10%

The Copilot inversion

YC companies pick best-of-breed tools. Everyone else uses what ships with the IDE. Copilot is 5x more common in non-YC repos.

With vs. Without

Documentation doesn’t find more bugs. It finds different bugs.

Without docs · 13.3% flag rate
With docs · 36.6% flag rate

979 · catches · context found issues blind review missed
232 · suppressions · context prevented false alarms
80.8% · precision · of context-driven flags, 4 out of 5 were valid

979 issues only visible with context. 232 false alarms that context killed.
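The 80.8% precision figure is consistent with treating catches and suppressions as the two sides of the context-driven ledger — valid flags over all context-driven flags. A worked check of that reading:

```python
# Precision of context-driven flags, from the counts above.
catches, suppressions = 979, 232
precision = catches / (catches + suppressions)  # ~0.808, i.e. 80.8%
```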

Quality

Overzealous reviews outnumber hallucinations 9 to 1.

Overzealous · 43.5%
Missed Issue · 31.6%
Partial · 14.5%
Accurate · 5.5%
Hallucinated · 4.9%

16.8% · overzealous rate
12.2% · missed issue rate

The top failure: flagging design choices as bugs. The second: missing real issues. Docs reduce both.

Impact of context on review grades

Accurate · with docs 2.9% · without docs 1.1%
Missed · with docs 7.8% · without docs 10.8%

Brought to you by

Prelint

Prelint checks every pull request against your product specs, compliance rules, and business constraints. If code drifts from what the team decided, Prelint flags it before it ships.

We publish new data every month.

Follow for updates.