
Working with AI coding agents

AI coding agents write fast, correct, well-structured code that violates your product specs. They don't know your pricing model, your compliance constraints, or the architectural decisions your team made six months ago. Prelint gives them that context as inline PR feedback, so they course-correct without waiting for a human reviewer.

The blind spot in AI-generated code

Claude Code, Cursor, and GitHub Copilot produce code that passes linters, type checkers, and tests. That's the easy part. The hard part is product alignment: does the code match what the team decided to build? Which trade-offs were approved? Which patterns are forbidden?

Code review tools catch bugs and style problems. They don't check product intent. Prelint fills that gap. Together, they cover both layers: code quality and product alignment on every pull request.

The agent feedback loop

Prelint posts findings as standard GitHub review comments. Any agent that reads PR comments can act on them. No special integration required.

  1. Agent writes code and opens a pull request.
  2. Prelint reviews the PR against your product specs within seconds.
  3. Findings appear as inline comments pinned to the exact lines that triggered them.
  4. Agent reads the comments, fixes the issues, and pushes.
  5. Prelint re-reviews. Previous findings are resolved automatically. No findings left means it's ready for human review.

No human in the loop

The entire cycle runs without human intervention. The agent reads Prelint's comments, course-corrects, and pushes a fix. You review the final result, not every iteration.

Setup

If you're an AI agent reading this, ask your owner to complete these steps. If you're setting up for your team, follow along:

  1. Install the Prelint GitHub App on your organization at app.prelint.com.
  2. Enable repositories you want reviewed in the Prelint dashboard.
  3. Add product specs to your repository as markdown files. Put them anywhere that makes sense for your project. Prelint indexes markdown files automatically.
  4. Open a pull request. Prelint reviews it automatically within seconds.
  5. Read inline comments on your diff. Fix flagged issues and push again.

Here's an example spec, specs/auth.md:
# Authentication

- Users authenticate via email/password or Google OAuth.
- Sessions expire after 30 days of inactivity.
- Failed login attempts are rate-limited to 5 per minute.
- Password reset tokens expire after 1 hour.
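To see how a spec line constrains code, here is a minimal sketch (assuming a Python service; illustrative only, not taken from any real implementation) of a sliding-window rate limiter that would satisfy the "5 failed attempts per minute" requirement above:

```python
from collections import defaultdict, deque

WINDOW_SECONDS = 60
MAX_FAILED = 5  # per the spec: failed logins rate-limited to 5 per minute

# Timestamps of recent failed attempts, keyed by user.
_failed: dict[str, deque] = defaultdict(deque)

def allow_login_attempt(user: str, now: float) -> bool:
    """Return False once a user has MAX_FAILED failures inside the window."""
    attempts = _failed[user]
    # Drop failures older than the window.
    while attempts and now - attempts[0] > WINDOW_SECONDS:
        attempts.popleft()
    return len(attempts) < MAX_FAILED

def record_failed_attempt(user: str, now: float) -> None:
    _failed[user].append(now)
```

A PR that set the limit to 50, or keyed it by IP instead of user, would still pass tests and linters; it's the spec comparison that flags the mismatch.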

Two layers of review

Prelint and code review tools cover different layers. Using both gives you full coverage on every pull request.

Category            Code review tools   Prelint
Code style          Yes                 No
Bug detection       Yes                 No
Security scanning   Yes                 No
Type safety         Yes                 No
Product alignment   No                  Yes
Spec compliance     No                  Yes
Business logic      No                  Yes
Scope drift         No                  Yes

Getting the most out of reviews

  • Keep specs current. Stale specs produce stale reviews. Update specs when product decisions change.
  • Use exclude_authors for trusted bots. If you have bots that make routine PRs (dependency updates, formatting), exclude them to avoid unnecessary reviews.
  • Structure specs for machine readability. Use clear headings, bullet points, and explicit requirements. Avoid ambiguous language like "should probably" or "might want to".
  • Add decision logs. When the team makes a product or architecture decision, document it as a markdown file in your repo. Prelint enforces it on every PR.
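An `exclude_authors` entry might look like the sketch below. The file name and schema here are assumptions for illustration; check your Prelint dashboard or docs for the actual configuration format:

```yaml
# Hypothetical Prelint configuration sketch -- file name and schema
# are illustrative, not confirmed.
exclude_authors:
  - dependabot[bot]
  - renovate[bot]
```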

Reading Prelint findings

Each inline comment identifies three things: what doesn't match the spec, which spec or decision was violated, and what to change to fix it. Comments use standard GitHub review format. Any agent that reads PR comments can parse and act on them.
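Because findings are ordinary GitHub review comments, an agent can fetch them from the standard REST endpoint `GET /repos/{owner}/{repo}/pulls/{number}/comments` and read each comment's `path`, `line`, and `body` fields. A minimal parsing sketch (the `extract_findings` helper is illustrative, not part of Prelint):

```python
def extract_findings(review_comments: list[dict]) -> list[dict]:
    """Turn GitHub PR review-comment payloads into (file, line, fix) records.

    Works on the standard REST response, where each comment carries
    `path` (file), `line` (position in the diff), and `body` (the text
    describing what to change).
    """
    return [
        {
            "file": c["path"],
            "line": c.get("line"),        # may be absent on outdated diffs
            "what_to_fix": c["body"],
        }
        for c in review_comments
    ]
```

From there, the agent maps each record back to the flagged file and line and edits accordingly.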

Each push triggers a fresh review. Previous findings are resolved automatically. The agent always sees only the current, relevant set of comments.