Product review for every pull request
Code review checks whether your code works. Prelint checks whether your code matches what the team agreed to build. It reads your product specs, architecture decisions, and context docs, then reviews every PR for drift, contradictions, and business logic errors before they ship.
The gap in your review process
Every engineering team has some version of this workflow: product decisions get documented somewhere (Notion, ADRs, markdown in the repo), code gets written, and a pull request gets reviewed. The review catches bugs, security issues, and style problems. What it doesn't catch: whether the code actually matches what was decided.
That gap gets worse with AI coding agents. They write correct, passing, well-structured code that violates your product specs. They don't know your pricing model, your compliance constraints, or the architectural decisions your team made six months ago.
Prelint closes that gap. It sits alongside your code review tool and checks the product layer: does this PR match the specs? Does it contradict a documented decision? Does the business logic actually do what the requirements say?
Two minutes to your first review
Install the GitHub App
Add Prelint to your GitHub organization and select which repositories to review. No CI changes, no YAML, no build step.
Add context to your repo
Drop product specs, decision logs, or architecture docs into your repository. Markdown files are indexed automatically.
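For instance, a decision log entry is just an ordinary markdown file. The filename and contents below are illustrative, not a required format; Prelint indexes whatever markdown you already keep in the repo:

```markdown
<!-- docs/decisions/004-annual-billing.md (hypothetical example) -->
# ADR 004: Annual plans are billed upfront

## Decision
Annual subscriptions charge the full year at signup. There is no
monthly-installment option for annual plans.

## Consequences
Any checkout change that prorates or splits an annual charge into
installments contradicts this decision and should be flagged.
```

Once a file like this is in the repo, a PR that quietly adds installment billing gets a finding pointing back at the decision.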
Open a pull request
Prelint reviews every PR within seconds. Findings appear as inline comments pinned to the exact lines that triggered them.
What gets reviewed
Prelint focuses exclusively on the product layer. It doesn't duplicate your linter, your type checker, or your code review tool.
| Prelint reviews | Other tools handle |
|---|---|
| Spec violations and product drift | Code style and formatting |
| Architecture decision compliance | Bug detection and security scanning |
| Business logic correctness | Type safety and test coverage |
| Scope drift and unplanned changes | Performance and refactoring suggestions |
Guides
Choose the one that matches where you are:
I'm an AI agent
How to read Prelint findings, act on them, and close the loop without waiting for a human.
I'm setting up my team
Invite members, assign roles, and sync repository access from GitHub.
How reviews work
The five-stage pipeline from PR webhook to inline comment on your diff.
Configuration
Ignore patterns, custom rules, context files, and the full prelint.json schema.
Billing
Per-seat pricing, free trial, proration, and subscription management via Stripe.
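The Configuration guide above covers the full prelint.json schema. As a rough sketch of what such a file could contain, here is a hypothetical example; every field name in it is an illustrative assumption, not the documented schema:

```json
{
  "ignore": ["docs/archive/**", "*.generated.ts"],
  "context": ["docs/specs/", "docs/decisions/"],
  "rules": [
    {
      "id": "no-unplanned-endpoints",
      "description": "New API routes must be referenced in a spec document"
    }
  ]
}
```

See the Configuration guide for the actual supported keys.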
Your specs become the review criteria
Prelint reads markdown files in your repository and uses them as review context: architecture decision records, product requirements, API contracts, compliance rules. Anything you document becomes something Prelint enforces.
Context is assembled within an 80K-character budget, prioritized from most specific to most general: project-level specs first, then repository-level specs, then general documentation. If your repo has a clone on EFS, the review agent can also search and read your full codebase to verify claims before flagging them.
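The budgeted, most-specific-first assembly described above can be sketched in a few lines of Python. This is a simplified illustration of the idea, not Prelint's implementation; the function name and the `(priority, text)` shape are assumptions made for the example:

```python
SEPARATOR = "\n\n"

def assemble_context(docs, budget=80_000):
    """Concatenate docs in priority order, truncating to a character budget.

    docs: iterable of (priority, text) pairs, where a lower priority
    number means more specific (project-level specs before repo-level
    specs before general documentation).
    """
    ordered = sorted(docs, key=lambda d: d[0])
    parts, used = [], 0
    for _, text in ordered:
        sep = SEPARATOR if parts else ""
        remaining = budget - used - len(sep)
        if remaining <= 0:
            break  # budget exhausted; lower-priority docs are dropped
        chunk = text[:remaining]  # last doc to fit may be truncated
        parts.append(sep + chunk)
        used += len(sep) + len(chunk)
    return "".join(parts)
```

The effect is that when specs outgrow the budget, general documentation is truncated or dropped first and the most specific context always survives.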
The more you document, the sharper the reviews get. Teams that maintain specs and decision logs see 2.75x more accurate reviews than those running without context.
Your code stays yours
Repository clones are encrypted at rest, stored on isolated per-tenant volumes, and mounted only during reviews. Each review runs in a dedicated container that is destroyed on completion. LLM calls use zero-retention APIs. No other tenant can access your code, and Prelint never uses it for training.
See the security page for full details on data handling, infrastructure isolation, and compliance.