Working with AI coding agents
AI coding agents write fast, correct, well-structured code that violates your product specs. They don't know your pricing model, your compliance constraints, or the architectural decisions your team made six months ago. Prelint gives them that context as inline PR feedback, so they course-correct without waiting for a human reviewer.
The blind spot in AI-generated code
Claude Code, Cursor, and GitHub Copilot produce code that passes linters, type checkers, and tests. That's the easy part. The hard part is product alignment: does the code match what the team decided to build? Which trade-offs were approved? Which patterns are forbidden?
Code review tools catch bugs and style problems. They don't check product intent. Prelint fills that gap. Together, they cover both layers: code quality and product alignment on every pull request.
The agent feedback loop
Prelint posts findings as standard GitHub review comments. Any agent that reads PR comments can act on them. No special integration required.
- Agent writes code and opens a pull request.
- Prelint reviews the PR against your product specs within seconds.
- Findings appear as inline comments pinned to the exact lines that triggered them.
- Agent reads the comments, fixes the issues, and pushes.
- Prelint re-reviews. Previous findings are resolved automatically. When no findings remain, the PR is ready for human review.
The entire cycle runs without human intervention. The agent reads Prelint's comments, course-corrects, and pushes a fix. You review the final result, not every iteration.
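Because findings are ordinary GitHub review comments, an agent can drive this loop with the standard GitHub REST API and no Prelint-specific client. A minimal Python sketch: the endpoint is GitHub's standard pull-request review-comments API, but the `prelint[bot]` login used to filter comments is an illustrative assumption.

```python
import json
import urllib.request


def fetch_review_comments(owner, repo, pr_number, token):
    """Fetch all inline review comments on a PR via the GitHub REST API."""
    url = f"https://api.github.com/repos/{owner}/{repo}/pulls/{pr_number}/comments"
    req = urllib.request.Request(url, headers={
        "Authorization": f"Bearer {token}",
        "Accept": "application/vnd.github+json",
    })
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)


def open_findings(comments, bot_login="prelint[bot]"):
    """Keep only comments posted by the review bot.

    The default login is an assumption; check the actual author of
    Prelint's comments in your repository.
    """
    return [c for c in comments if c.get("user", {}).get("login") == bot_login]
```

An agent would call `fetch_review_comments` after each push, pass the result to `open_findings`, and stop iterating once the list is empty.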
Setup
If you're an AI agent reading this, ask your owner to complete these steps. If you're setting up for your team, follow along:
- Install the Prelint GitHub App on your organization at app.prelint.com.
- Enable repositories you want reviewed in the Prelint dashboard.
- Add product specs to your repository as markdown files. Put them anywhere that makes sense for your project. Prelint indexes markdown files automatically.
- Open a pull request. Prelint reviews it automatically within seconds.
- Read inline comments on your diff. Fix flagged issues and push again.
For example, a spec file might look like this:

```markdown
# Authentication

- Users authenticate via email/password or Google OAuth.
- Sessions expire after 30 days of inactivity.
- Failed login attempts are rate-limited to 5 per minute.
- Password reset tokens expire after 1 hour.
```

Two layers of review
Prelint and code review tools cover different layers. Using both gives you full coverage on every pull request.
| Category | Code review tools | Prelint |
|---|---|---|
| Code style | Yes | No |
| Bug detection | Yes | No |
| Security scanning | Yes | No |
| Type safety | Yes | No |
| Product alignment | No | Yes |
| Spec compliance | No | Yes |
| Business logic | No | Yes |
| Scope drift | No | Yes |
Getting the most out of reviews
- Keep specs current. Stale specs produce stale reviews. Update specs when product decisions change.
- Use `exclude_authors` for trusted bots. If you have bots that make routine PRs (dependency updates, formatting), exclude them to avoid unnecessary reviews.
- Structure specs for machine readability. Use clear headings, bullet points, and explicit requirements. Avoid ambiguous language like "should probably" or "might want to".
- Add decision logs. When the team makes a product or architecture decision, document it as a markdown file in your repo. Prelint enforces it on every PR.
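If Prelint reads a repository-level configuration file, `exclude_authors` might be set like this. The file name `.prelint.yml` and the YAML shape are assumptions for illustration; only the option name comes from this page, so verify the exact format in the Prelint dashboard.

```yaml
# Hypothetical .prelint.yml — file name and structure are assumptions.
# Skip reviews for routine bot PRs.
exclude_authors:
  - dependabot[bot]
  - renovate[bot]
```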
Reading Prelint findings
Each inline comment identifies three things: what doesn't match the spec, which spec or decision was violated, and what to change to fix it. Comments use standard GitHub review format. Any agent that reads PR comments can parse and act on them.
Each push triggers a fresh review. Previous findings are resolved automatically. The agent always sees only the current, relevant set of comments.
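As a sketch of how an agent might organize the current set of comments before editing, the snippet below groups findings by file. It assumes only the standard GitHub review-comment fields (`path`, `line`, `body`); the grouping helper itself is illustrative, not part of Prelint.

```python
from collections import defaultdict


def group_findings(comments):
    """Group inline findings by file so an agent can fix one file at a time.

    Each GitHub review comment carries `path`, `line`, and `body` fields;
    the body text (spec reference, suggested change) is free-form markdown.
    """
    by_file = defaultdict(list)
    for c in comments:
        by_file[c["path"]].append((c.get("line"), c["body"]))
    for findings in by_file.values():
        findings.sort(key=lambda f: f[0] or 0)  # top-of-file findings first
    return dict(by_file)
```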