Prelint Docs

How reviews work

A pull request opens. Seconds later, inline comments appear on your diff flagging spec violations, decision drift, and business logic errors. Here's what happens in between.

The review pipeline

1. Trigger

A pull request is opened, reopened, converted from draft, or updated (new commits pushed). GitHub sends a webhook event to Prelint.
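
As a sketch, the trigger step amounts to filtering GitHub's standard pull_request webhook actions. The function and payload handling below are illustrative, not Prelint's actual implementation:

```python
# Sketch: deciding whether a pull_request webhook event should start a review.
# The action names are GitHub's standard pull_request webhook actions;
# the function itself is illustrative.
TRIGGERING_ACTIONS = {"opened", "reopened", "ready_for_review", "synchronize"}

def should_trigger_review(event: dict) -> bool:
    """Return True if this pull_request event should start a review."""
    if event.get("action") not in TRIGGERING_ACTIONS:
        return False
    # Draft PRs are skipped until they are marked ready for review.
    if event.get("pull_request", {}).get("draft", False):
        return False
    return True
```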

2. Prepare

Prelint validates the webhook, checks the repository is connected, and confirms the author isn't excluded. The diff is fetched and review context is assembled: product specifications, architecture decisions, and custom rules, all prioritized within an 80K character budget.

3. Review

The AI review engine analyzes the diff against the assembled context. It evaluates product alignment, decision compliance, spec violations, and business logic.

4. Validate

A second AI pass checks each finding for relevance. Findings that overlap with linter territory, rely on speculation, contradict documented context, or lack evidence from the diff are filtered out.
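
The validation pass can be pictured as a filter over findings. The field names below are illustrative, not Prelint's actual data model:

```python
# Sketch: the second-pass relevance filter. A finding survives only if
# none of the documented rejection conditions apply.
def is_relevant(finding: dict) -> bool:
    rejection_reasons = (
        finding.get("overlaps_linter", False),      # linter territory (style, types)
        finding.get("speculative", False),          # relies on speculation
        finding.get("contradicts_context", False),  # conflicts with documented context
        not finding.get("diff_evidence"),           # no supporting lines in the diff
    )
    return not any(rejection_reasons)

def validate(findings: list[dict]) -> list[dict]:
    """Keep only findings that pass every relevance check."""
    return [f for f in findings if is_relevant(f)]
```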

5. Post

Findings are posted as inline comments on the pull request diff, pinned to the exact lines that triggered them. A check run summary is created with an overview. Previous findings from earlier reviews on the same PR are resolved automatically.

What gets checked

The review focuses on product alignment, not code quality. These are different problems handled by different tools.

  • Product alignment. Does the code match what the product spec describes? Are features implemented as specified?
  • Decision compliance. Do the changes follow documented architecture decisions (ADRs)? Are approved patterns being used?
  • Custom rules. Are project-specific rules from prelint.json being followed?
  • Business logic. Do calculations, workflows, and data transformations match documented requirements?
  • Scope drift. Does the PR introduce changes outside its stated purpose? Are unrelated modifications sneaking in?

Not a linter

Prelint does not check code style, formatting, type safety, or security vulnerabilities. Use dedicated tools (ESLint, Prettier, Snyk) for those. Prelint complements them by covering the product layer.

What gets ignored

Prelint automatically skips files, and entire pull requests, that rarely contain product-relevant changes:

  • Binary files (images, fonts, compiled assets)
  • Generated code (protobuf output, OpenAPI clients, migrations from generators)
  • Lock files (package-lock.json, yarn.lock, pnpm-lock.yaml, Gemfile.lock)
  • Vendored dependencies (vendor/, node_modules/)
  • AI tool plumbing files (hooks, skills, MCP configs). Instruction files like .cursorrules and CLAUDE.md are still reviewed normally.
  • Draft pull requests. Reviews start when the PR is marked as ready.
  • Bot-authored PRs (Dependabot, Renovate, GitHub Actions, Snyk) unless configured otherwise

You can customize ignore patterns in prelint.json using glob syntax.
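
For example, a prelint.json fragment might look like the following. The `ignore` key and the patterns shown are illustrative assumptions, not the documented schema, so check the configuration reference for the exact key names:

```json
{
  "ignore": [
    "docs/archive/**",
    "**/*.snap",
    "internal/testdata/**"
  ]
}
```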

Review context priority

The review engine assembles context within an 80K character budget. Sources are loaded in priority order, and higher-priority context is always included first:

  1. Organization-level specs. Product vision, company-wide standards, and cross-project rules set in the Prelint dashboard.
  2. Project-level specs. Product requirements and decisions scoped to a specific project in the dashboard.
  3. Repository-level context. Specifications and decisions stored in the repository itself. Prelint indexes markdown files automatically.
  4. Documentation. Additional markdown files, READMEs, and other docs discovered in your repository.

If the total context exceeds 80K characters, lower-priority sources are truncated.
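
The budgeting above can be sketched as a greedy packer over priority-sorted sources. This is a simplified illustration, not Prelint's actual implementation:

```python
# Sketch: priority-ordered context assembly under a character budget.
# Higher-priority sources are packed first; the first source that
# overflows the budget is truncated, and everything after it is dropped.
BUDGET = 80_000

def assemble_context(sources: list[tuple[str, str]], budget: int = BUDGET) -> str:
    """sources: (name, text) pairs, sorted from highest to lowest priority."""
    parts, remaining = [], budget
    for name, text in sources:
        if remaining <= 0:
            break  # lower-priority sources are dropped entirely
        if len(text) > remaining:
            text = text[:remaining]  # truncate to fit what's left
        parts.append(text)
        remaining -= len(text)
    return "".join(parts)
```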

Review output

Each review produces four outputs:

Summary comment

Every PR gets a single summary comment that updates in place. Active findings show severity and location. Resolved findings collapse automatically so the comment stays readable as the PR evolves. The summary comment uses GitHub's native alert formatting with warning and caution blocks for easy scanning.

Inline comments

Findings are posted as review comments on the exact diff lines that triggered them. Each comment includes the finding, why it matters, and severity (Critical or Warning).

Finding categories

Every finding is tagged with a category and subcategory. The four categories are:

  • Documentation: findings related to outdated or missing docs.
  • Product: spec violations, decision drift, and business logic errors.
  • Security: security-relevant issues caught from product context.
  • Tooling: configuration and tooling misalignment.

Categories make it easier to filter, triage, and track patterns across reviews.

Check run summary

A GitHub check run is created with a summary of all findings. The check run status reflects the review outcome:

  • Success: review completed. Findings (if any) are posted as inline comments.
  • Neutral: review was skipped (excluded author, draft PR, or no relevant file changes).
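
In terms of GitHub's check run API, the review outcome maps to a conclusion roughly like this; the skip-reason names are illustrative:

```python
# Sketch: mapping a review outcome to a GitHub check run conclusion.
# "success" and "neutral" are real check run conclusion values;
# the skip_reason field and its values are illustrative.
SKIP_REASONS = {"excluded_author", "draft_pr", "no_relevant_files"}

def check_run_conclusion(outcome: dict) -> str:
    if outcome.get("skip_reason") in SKIP_REASONS:
        return "neutral"  # review was skipped
    return "success"      # review completed; findings go to inline comments
```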

Automatic PR approval

When a review finishes with no active findings, Prelint approves the pull request automatically. One less manual step before you can merge.

If new findings appear on a subsequent push, the stale approval is dismissed automatically. This prevents merging a PR that passed review on an earlier version but now has issues. The summary comment is always the single source of truth for the current review state.

Stale documentation detection

When a PR changes behavior that's described in your existing docs, Prelint flags it. This catches outdated documentation before it misleads your team. If a PR updates both code and its documentation, Prelint reviews against the updated docs, not the stale cached version.

Incremental reviews

When new commits are pushed to a PR, Prelint runs a fresh review. Findings from previous reviews that no longer apply are automatically resolved. Only new or changed findings are posted, so the comment thread stays clean.
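
One way to picture the reconciliation step, using an illustrative (file, line, rule) key for each finding:

```python
# Sketch: reconciling a fresh review with earlier findings on the same PR.
# Keying findings by (file, line, rule) is purely for illustration.
def reconcile(previous: dict, current: dict) -> tuple[list, list]:
    """Return (findings to post, stale finding keys to resolve)."""
    to_post = [f for key, f in current.items() if key not in previous]
    to_resolve = [key for key in previous if key not in current]
    return to_post, to_resolve
```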

Rate limits

Reviews are limited to 200 per organization per hour and 30 per repository per hour. These limits are high enough for even the busiest repos and exist to prevent runaway loops from misconfigured automation.