# Agent Configuration

Configure AI coding agents to work effectively with LeanSpec through AGENTS.md instructions and best practices.
## AGENTS.md Overview

AGENTS.md serves as permanent instructions for AI coding agents in your repository. When you run `lean-spec init`, this file is created with LeanSpec guidance.

**Purpose:**

- Provide context about your project
- Define when to use specs
- Specify workflow and commands
- Set quality standards
## Multi-Tool Support

Different AI tools expect different instruction file names. LeanSpec automatically creates symlinks to ensure all tools find instructions:
| AI Tool | Expected File | How LeanSpec Handles It |
|---|---|---|
| Claude Code / Claude Desktop | CLAUDE.md | Symlink → AGENTS.md |
| Gemini CLI | GEMINI.md | Symlink → AGENTS.md |
| GitHub Copilot | AGENTS.md | Primary file |
| Cursor | AGENTS.md | Primary file |
| Windsurf | AGENTS.md | Primary file |
| Cline | AGENTS.md | Primary file |
| Warp Terminal | AGENTS.md | Primary file |
### Init Options

```bash
# Default: creates CLAUDE.md symlink (most common use case)
lean-spec init -y

# Create symlinks for specific tools
lean-spec init -y --agent-tools claude,gemini

# Create symlinks for all supported tools
lean-spec init -y --agent-tools all

# Skip symlinks (legacy behavior)
lean-spec init -y --agent-tools none
```
**Note:** On Windows, file copies are created instead of symlinks, since creating symlinks requires administrator privileges.
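If your tool expects a different instruction file name than the ones listed above, you can usually point it at the same content yourself. A minimal sketch (`MYTOOL.md` is a hypothetical placeholder for whatever file your tool actually reads):

```bash
# Point a tool-specific instruction file at AGENTS.md (MYTOOL.md is hypothetical).
ln -s AGENTS.md MYTOOL.md

# On Windows, where symlinks require elevated privileges, copy the file instead:
# copy AGENTS.md MYTOOL.md
```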
## MCP-First AGENTS.md

The generated AGENTS.md emphasizes MCP tools as the primary method for spec management:
### Key Sections

#### 1. Critical Discovery Steps

```markdown
## 🚨 CRITICAL: Before ANY Task
**STOP and check these first:**
1. **Discover context** → Use `board` tool to see project state
2. **Search for related work** → Use `search` tool before creating new specs
3. **Never create files manually** → Always use `create` tool for new specs
```

#### 2. MCP Tools as Primary Method

```markdown
## 🔧 How to Manage Specs
### Primary Method: MCP Tools (Recommended)
If you have LeanSpec MCP tools available, **ALWAYS use them**:
| Action | MCP Tool | Description |
|--------|----------|-------------|
| See project status | `board` | Kanban view + project health metrics |
| List all specs | `list` | Filterable list with metadata |
| Search specs | `search` | Semantic search across all content |
| View a spec | `view` | Full content with formatting |
| Create new spec | `create` | Auto-sequences, proper structure |
| Update spec | `update` | Validates transitions, timestamps |
| Check dependencies | `deps` | Visual dependency graph |
```

#### 3. SDD Workflow Checkpoints

```markdown
## ⚠️ SDD Workflow Checkpoints
### Before Starting ANY Task
1. 📋 **Run `board`** - What's the current project state?
2. 🔍 **Run `search`** - Are there related specs already?
3. 📝 **Check existing specs** - Is there one for this work?
### During Implementation
4. 📊 **Update status to `in-progress`** BEFORE coding
5. 📝 **Document decisions** in the spec as you work
6. 🔗 **Link related specs** if you discover connections
### After Completing Work
7. ✅ **Update status to `complete`** when done
8. 📄 **Document what you learned** in the spec
9. 🤔 **Create follow-up specs** if needed
```

#### 4. Common Mistakes Table

```markdown
### 🚫 Common Mistakes to Avoid
| ❌ Don't | ✅ Do Instead |
|----------|---------------|
| Create spec files manually | Use `create` tool |
| Skip discovery before new work | Run `board` and `search` first |
| Leave status as "planned" after starting | Update to `in-progress` immediately |
| Finish work without updating spec | Document decisions, update status |
| Edit frontmatter manually | Use `update` tool |
| Forget about specs mid-conversation | Check spec status periodically |
```

## Complete AGENTS.md Template

````markdown
# AI Agent Instructions
## Project: [Your Project Name]
[Brief description of what the project does]
## 🚨 CRITICAL: Before ANY Task
**STOP and check these first:**
1. **Discover context** → Use `board` tool to see project state
2. **Search for related work** → Use `search` tool before creating new specs
3. **Never create files manually** → Always use `create` tool for new specs
> **Why?** Skipping discovery creates duplicate work. Manual file creation breaks LeanSpec tooling.
## 🔧 How to Manage Specs
### Primary Method: MCP Tools (Recommended)
If you have LeanSpec MCP tools available, **ALWAYS use them**:
| Action | MCP Tool | Description |
|--------|----------|-------------|
| See project status | `board` | Kanban view + project health metrics |
| List all specs | `list` | Filterable list with metadata |
| Search specs | `search` | Semantic search across all content |
| View a spec | `view` | Full content with formatting |
| Create new spec | `create` | Auto-sequences, proper structure |
| Update spec | `update` | Validates transitions, timestamps |
| Check dependencies | `deps` | Visual dependency graph |
**Why MCP over CLI?**
- ✅ Direct tool integration (no shell execution needed)
- ✅ Structured responses (better for AI reasoning)
- ✅ Real-time validation (immediate feedback)
- ✅ Context-aware (understands project state)
### Fallback: CLI Commands
If MCP tools are not available, use CLI commands:
```bash
lean-spec board                             # Project overview
lean-spec list                              # See all specs
lean-spec search "query"                    # Find relevant specs
lean-spec create <name>                     # Create new spec
lean-spec update <spec> --status <status>   # Update status
lean-spec deps <spec>                       # Show dependencies
```
**Tip:** Check if you have LeanSpec MCP tools available before using CLI.
## ⚠️ SDD Workflow Checkpoints
### Before Starting ANY Task
1. 📋 **Run `board`** - What's the current project state?
2. 🔍 **Run `search`** - Are there related specs already?
3. 📝 **Check existing specs** - Is there one for this work?
### During Implementation
4. 📊 **Update status to `in-progress`** BEFORE coding
5. 📝 **Document decisions** in the spec as you work
6. 🔗 **Link related specs** if you discover connections
### After Completing Work
7. ✅ **Update status to `complete`** when done
8. 📄 **Document what you learned** in the spec
9. 🤔 **Create follow-up specs** if needed
### 🚫 Common Mistakes to Avoid
| ❌ Don't | ✅ Do Instead |
|----------|---------------|
| Create spec files manually | Use `create` tool |
| Skip discovery before new work | Run `board` and `search` first |
| Leave status as "planned" after starting | Update to `in-progress` immediately |
| Finish work without updating spec | Document decisions, update status |
| Edit frontmatter manually | Use `update` tool |
| Forget about specs mid-conversation | Check spec status periodically |
## Core Rules
1. **Read README.md first** - Understand project context
2. **Check specs/** - Review existing specs before starting
3. **Use MCP tools** - Prefer MCP over CLI when available
4. **Follow LeanSpec principles** - Clarity over documentation
5. **Keep it minimal** - If it doesn't add clarity, cut it
6. **NEVER manually edit frontmatter** - Use `update`, `link`, `unlink` tools
7. **Track progress in specs** - Update status and document decisions
## When to Use Specs
**Write a spec for:**
- Features affecting multiple parts of the system
- Breaking changes or significant refactors
- Design decisions needing team alignment
**Skip specs for:**
- Bug fixes
- Trivial changes
- Self-explanatory refactors
## Quality Standards
- **Status tracking is mandatory:**
  - `planned` → after creation
  - `in-progress` → BEFORE starting implementation
  - `complete` → AFTER finishing implementation
- Specs stay in sync with implementation
- Never leave specs with stale status
## Spec Complexity Guidelines
| Tokens | Status |
|--------|--------|
| <2,000 | ✅ Optimal |
| 2,000-3,500 | ✅ Good |
| 3,500-5,000 | ⚠️ Consider splitting |
| >5,000 | 🔴 Should split |
Use `tokens` tool to check spec size.
````
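In practice, the status checkpoints described in the template map to a short command sequence. A sketch using the CLI fallback commands (the spec name and number are hypothetical):

```bash
# Before starting: check project state and look for related specs.
lean-spec board
lean-spec search "user authentication"

# Create the spec, then mark it in-progress before writing any code.
lean-spec create user-authentication
lean-spec update 001 --status in-progress

# After the implementation is finished, close the spec out.
lean-spec update 001 --status complete
```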

## Configuring Different AI Tools

### Claude Code

Claude Code reads `CLAUDE.md` by default. LeanSpec automatically creates this symlink:
```bash
# After lean-spec init
ls -la CLAUDE.md
# CLAUDE.md -> AGENTS.md
```
**With MCP (Recommended):** Configure the LeanSpec MCP server for full functionality. See MCP Integration.
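If you prefer the terminal, recent versions of Claude Code can also register MCP servers via the `claude` CLI. A sketch (verify the exact syntax against the current Claude Code docs):

```bash
# Register the LeanSpec MCP server with Claude Code (syntax may vary by version).
claude mcp add leanspec -- npx -y @leanspec/mcp
```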
### Gemini CLI

Gemini CLI reads `GEMINI.md`. Create with:

```bash
lean-spec init -y --agent-tools gemini
```
### GitHub Copilot

Copilot automatically reads `AGENTS.md` when files are opened in your editor. No additional setup needed.
### Cursor

Cursor reads `.cursorrules` by default. Options:

**Option 1: Use AGENTS.md (recommended).** Cursor also reads `AGENTS.md`, so no additional setup is needed.

**Option 2: Link `.cursorrules` to AGENTS.md:**

```bash
ln -s AGENTS.md .cursorrules
```
### Windsurf

Windsurf reads `AGENTS.md` by default. No additional setup needed.
### Claude Desktop / ChatGPT / Other Chat Interfaces

These tools work best with MCP integration. Configure the LeanSpec MCP server:
```json
{
  "mcpServers": {
    "leanspec": {
      "command": "npx",
      "args": ["-y", "@leanspec/mcp"]
    }
  }
}
```
See MCP Integration for full setup instructions.
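Before wiring the server into a chat client, you can sanity-check that it starts at all. A quick check, assuming Node.js is installed (the client normally launches the server for you, so run standalone it will simply wait until you interrupt it):

```bash
# Fetch and start the LeanSpec MCP server once to confirm it runs.
npx -y @leanspec/mcp
# Press Ctrl+C to exit.
```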
## Best Practices for AI-Readable Specs

### 1. Be Explicit and Concrete

**❌ Vague:**

```
Implement secure authentication
```
**✅ Specific:**

```
Implement JWT authentication with:
- bcrypt password hashing (min 10 rounds)
- 24-hour token expiry
- Rate limiting (5 attempts/min per IP)
```
### 2. Provide Examples

AI agents understand examples better than abstract descriptions.
**Good:**

````markdown
## API Response Example

```json
{
  "token": "eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9...",
  "expiresAt": "2025-11-03T06:00:00Z",
  "user": {
    "id": "user_123",
    "email": "user@example.com"
  }
}
```
````
### 3. Use Testable Acceptance Criteria
Make criteria specific and verifiable:
✅ **Good Criteria:**
- [ ] POST /api/auth/login returns 200 with JWT on success
- [ ] Invalid credentials return 401 with error message
- [ ] Passwords are hashed with bcrypt, min 10 rounds
- [ ] Rate limit: max 5 attempts per minute per IP
❌ **Bad Criteria:**
- [ ] Authentication works
- [ ] Good security
- [ ] Fast performance
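Criteria written like the good examples above can be checked mechanically. For instance, the first good criterion roughly corresponds to a smoke test like this (an illustrative sketch; the local URL and credentials are placeholders):

```bash
# Verify that a valid login returns HTTP 200 (endpoint and credentials are hypothetical).
curl -s -o /dev/null -w "%{http_code}\n" \
  -X POST http://localhost:3000/api/auth/login \
  -H "Content-Type: application/json" \
  -d '{"email": "user@example.com", "password": "correct-horse"}'
# Expected output: 200
```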
### 4. Define Boundaries Explicitly
Use "Out of Scope" or "Non-Goals" to prevent scope creep:
```markdown
## Out of Scope
- Social login (Google, GitHub) - separate epic
- Password reset - separate spec
- 2FA - not needed for MVP
- Remember me functionality - future enhancement
```

### 5. Show, Don't Just Tell

Include concrete examples of:
- API requests/responses
- CLI commands and output
- Database schemas
- Configuration files
- Test cases
### 6. Structure for Scanning

AI agents (and humans) scan before reading:
**✅ Good Structure:**

```markdown
## Problem
[2-3 sentences]

## Solution
[High-level approach]

### Technical Details
[Implementation specifics]

## Success Criteria
- [ ] [Testable outcome]
```
**❌ Bad Structure:**

```
So we need to build this feature and it should do X, Y, Z...
[wall of text with no structure]
```

## Repository Organization

### Make Specs Discoverable

```
your-project/
├── AGENTS.md                ← Primary AI instructions
├── CLAUDE.md → AGENTS.md    ← Symlink for Claude Code
├── README.md                ← Project overview
├── specs/                   ← All specs here
│   ├── 001-feature-a/
│   │   └── README.md
│   ├── 002-feature-b/
│   │   ├── README.md
│   │   ├── DESIGN.md
│   │   └── TESTING.md
│   └── archived/            ← Old specs
├── src/                     ← Source code
└── tests/                   ← Tests
```
### Keep Specs Close to Code

Specs in the repo (not an external wiki) are:
- ✅ Version controlled
- ✅ Branch-specific
- ✅ Easy for AI to find
- ✅ Searchable
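Because specs live in the repository, ordinary tooling can find them. A quick sketch with standard git commands (assumes the layout above and a default branch named `main`):

```bash
# List every spec that mentions authentication (case-insensitive).
git grep -il "authentication" -- specs/

# See which specs changed on the current branch relative to main.
git diff --name-only main -- specs/
```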

## Verification

### Test AI Understanding

Ask your AI agent these questions to verify LeanSpec integration:
**Test 1: Can it discover the project?**

> Show me the project board.

**Expected:** Agent uses the MCP `board` tool (or `lean-spec board` CLI).

**Test 2: Can it search specs?**

> Find specs related to authentication.

**Expected:** Agent uses the MCP `search` tool.

**Test 3: Can it create specs properly?**

> Create a spec for user authentication.

**Expected:** Agent uses the MCP `create` tool (NOT manual file creation).

**Test 4: Can it update status?**

> Mark spec 001 as in-progress.

**Expected:** Agent uses the MCP `update` tool.

**Test 5: Does it follow SDD workflow?**

> I want to implement a new caching feature.

**Expected:** Agent first runs `board` and `search`, then creates a spec before coding.

## Common Pitfalls

### 1. Overly Verbose Instructions

**❌ Bad:**

```
The AI agent should carefully read all available documentation
and thoroughly understand the codebase before making any changes.
It's important to...
[500 words of general advice]
```
**✅ Good:**

```
1. Read README.md first
2. Check existing specs with `lean-spec list`
3. Follow LeanSpec principles (see AGENTS.md)
```
### 2. Ambiguous Success Criteria

**❌ Bad:**

- [ ] Feature works well
- [ ] Good performance
- [ ] Users are happy

**✅ Good:**

- [ ] API response <100ms (p95)
- [ ] Zero crashes in 1 week
- [ ] NPS score >8
### 3. Missing Context

Always provide:

- **Why:** Problem and motivation
- **What:** Specific requirements
- **How:** Approach and constraints
- **When:** Success criteria

## Next Steps

- MCP Integration - Model Context Protocol setup
- AI-Assisted Spec Writing - Practical patterns
- Getting Started - Initial setup guide

**Related:** CLI Reference for complete command documentation.