
Spec Structure

Understand the anatomy of a LeanSpec spec: frontmatter, required sections, sub-spec files, and how to structure specs for both human and AI understanding.

Spec Anatomy

Every spec consists of:

  1. Frontmatter (YAML metadata)
  2. Title and Status Badge (auto-generated)
  3. Core Sections (Problem, Solution, Success Criteria)
  4. Optional Sub-Spec Files (for complex specs)

Frontmatter

Required Fields

```yaml
---
status: planned # Current status
created: '2025-11-08' # Creation date (YYYY-MM-DD)
created_at: '2025-11-08T10:30:00Z' # ISO timestamp (auto-managed)
updated_at: '2025-11-08T14:45:00Z' # ISO timestamp (auto-managed)
---
```

Important: These are system-managed. Use lean-spec update to modify, never edit manually.
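
Because these fields are plain YAML, they are easy to read programmatically. A minimal sketch in Python (a hypothetical helper, not part of the LeanSpec CLI; it handles only flat `key: value` fields with optional inline comments — a real reader would use a YAML parser):

```python
def read_frontmatter(text):
    """Extract flat key: value pairs from a spec's YAML frontmatter block."""
    lines = text.splitlines()
    if not lines or lines[0].strip() != "---":
        return {}
    fields = {}
    for line in lines[1:]:
        if line.strip() == "---":  # closing delimiter ends the block
            break
        key, sep, value = line.partition(":")
        if sep:
            # strip inline comments and surrounding quotes
            value = value.split("#", 1)[0].strip().strip("'\"")
            fields[key.strip()] = value
    return fields

spec = """---
status: planned
created: '2025-11-08'
---
# My Spec
"""
print(read_frontmatter(spec))  # {'status': 'planned', 'created': '2025-11-08'}
```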

Optional Fields

```yaml
---
status: in-progress
created: '2025-11-08'
tags: # Categorization
  - api
  - backend
  - security
priority: high # high, medium, low
assignee: alice # Who's working on it
depends_on: # Hard dependencies (blocking)
  - spec-041
  - spec-039
related: # Soft relationships (informational)
  - spec-043
  - spec-044
created_at: '2025-11-08T10:30:00Z'
updated_at: '2025-11-08T14:45:00Z'
completed_at: '2025-11-09T16:20:00Z' # When marked complete
transitions: # Status change history
  - status: in-progress
    at: '2025-11-08T11:00:00Z'
  - status: complete
    at: '2025-11-09T16:20:00Z'
---
```

Manual editing: depends_on and related are the only fields you edit by hand (there is no CLI command for them yet). Manage all other fields with lean-spec update.
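
The transitions history also enables simple cycle-time analysis. A sketch using the timestamps from the example above (the `Z` suffix is rewritten because `datetime.fromisoformat` only accepts it on Python 3.11+):

```python
from datetime import datetime

def parse_ts(ts):
    """Parse an ISO timestamp, tolerating the trailing 'Z' on older Pythons."""
    return datetime.fromisoformat(ts.replace("Z", "+00:00"))

transitions = [
    {"status": "in-progress", "at": "2025-11-08T11:00:00Z"},
    {"status": "complete", "at": "2025-11-09T16:20:00Z"},
]

# Time spent in each status = gap until the next transition
for cur, nxt in zip(transitions, transitions[1:]):
    delta = parse_ts(nxt["at"]) - parse_ts(cur["at"])
    print(f"{cur['status']}: {delta}")  # in-progress: 1 day, 5:20:00
```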

See also: Frontmatter Reference

Status Badge

Auto-generated after frontmatter:

> **Status**: ⏳ In progress · **Priority**: High · **Created**: 2025-11-08 · **Tags**: api, backend

This is rendered by the CLI and shouldn't be manually edited.
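
The badge line follows a fixed pattern, which a small renderer can reproduce. A sketch (the emoji mapping covers only the two statuses shown on this page, and the CLI's actual rendering may differ):

```python
STATUS_ICONS = {"planned": "📋", "in-progress": "⏳"}  # subset; assumed mapping

def render_badge(meta):
    """Render the auto-generated status badge line from frontmatter fields."""
    status = meta["status"]
    label = status.replace("-", " ").capitalize()  # in-progress -> In progress
    parts = [f"**Status**: {STATUS_ICONS.get(status, '')} {label}"]
    if "priority" in meta:
        parts.append(f"**Priority**: {meta['priority'].capitalize()}")
    if "created" in meta:
        parts.append(f"**Created**: {meta['created']}")
    if meta.get("tags"):
        parts.append(f"**Tags**: {', '.join(meta['tags'])}")
    return "> " + " · ".join(parts)

print(render_badge({
    "status": "in-progress", "priority": "high",
    "created": "2025-11-08", "tags": ["api", "backend"],
}))
```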

Core Sections

Minimal Structure

Every spec should have at minimum:

```markdown
## Problem
[What problem are we solving? Why does it matter?]

## Solution
[What are we building? How does it solve the problem?]

## Success Criteria
- [ ] [How do we know it's done?]
- [ ] [What does success look like?]
```
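
Whether a spec meets this minimum can be checked mechanically. A sketch (a hypothetical helper, not a LeanSpec CLI feature):

```python
import re

REQUIRED = ("Problem", "Solution", "Success Criteria")

def missing_sections(body):
    """Return the required H2 headings absent from a spec body."""
    found = set(re.findall(r"^## (.+)$", body, flags=re.MULTILINE))
    return [name for name in REQUIRED if name not in found]

spec = "## Problem\nUsers get logged out.\n\n## Solution\nExtend the timeout.\n"
print(missing_sections(spec))  # ['Success Criteria']
```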

For more complex specs:

```markdown
## Overview
[1-2 paragraph summary of the feature/change]

## Problem
[Detailed problem description]

### Why It Matters
[Impact, urgency, business context]

### Current State
[What exists today that's insufficient]

## Solution
[Proposed approach]

### Design Decisions
[Key choices made and why]

### Trade-offs
[What we're giving up, alternatives considered]

## Implementation
[High-level implementation approach]

### Phases (if multi-phase)
1. Phase 1: [Description]
2. Phase 2: [Description]

## Success Criteria
- [ ] [Specific, measurable outcomes]
- [ ] [Acceptance criteria]
- [ ] [Performance targets]

## Out of Scope
- [What we're explicitly NOT doing]
- [Future considerations]
```

Optional Sections

Add only if needed:

```markdown
## Research
[Findings from exploration/spikes]

## Dependencies
[External dependencies, blocked work]

## Testing Strategy
[How to validate the solution]

## Rollout Plan
[Deployment approach, migration steps]

## Risks
[Potential issues and mitigations]
```

Sub-Spec Files

When a spec exceeds 400 lines or covers multiple distinct concerns, split it into sub-specs:

Common Sub-Specs

```text
specs/042-my-feature/
├── README.md          # Main spec (required)
├── DESIGN.md          # Detailed design and architecture
├── IMPLEMENTATION.md  # Implementation plan with phases
├── TESTING.md         # Test strategy and test cases
├── CONFIGURATION.md   # Config examples and schemas
└── API.md             # API design (if applicable)
```

README.md (Main Spec)

Should contain:

  • Frontmatter (metadata)
  • High-level overview
  • Problem statement
  • Solution approach
  • Success criteria
  • Links to sub-specs for details

Target: <300 lines

DESIGN.md

Detailed design decisions:

  • Architecture diagrams
  • Data models
  • Component interactions
  • Technology choices and rationale

Target: <400 lines

IMPLEMENTATION.md

Execution plan:

  • Phase breakdown
  • Task lists
  • Order of operations
  • Integration points

Target: <400 lines

TESTING.md

Test strategy:

  • Test cases
  • Edge cases
  • Performance tests
  • Integration tests

Target: <400 lines
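
These per-file targets can be enforced with a quick check. A sketch that takes a mapping of file name to line count (a hypothetical helper whose limits mirror the targets above; checking real specs would read the files from disk):

```python
# Per-file line budgets, mirroring the targets above
BUDGETS = {"README.md": 300}
DEFAULT_BUDGET = 400

def over_budget(line_counts):
    """Return (file, count, limit) for each file exceeding its budget."""
    return [
        (name, count, BUDGETS.get(name, DEFAULT_BUDGET))
        for name, count in sorted(line_counts.items())
        if count > BUDGETS.get(name, DEFAULT_BUDGET)
    ]

print(over_budget({"README.md": 280, "DESIGN.md": 350, "IMPLEMENTATION.md": 450}))
# [('IMPLEMENTATION.md', 450, 400)]
```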

Examples

Simple Spec (Single File)

```markdown
---
status: planned
created: '2025-11-08'
tags:
  - bug-fix
priority: medium
---

# Fix Authentication Timeout

> **Status**: 📋 Planned · **Priority**: Medium

## Problem
Users are logged out after 15 minutes of inactivity, even if actively using the app. Session timeout is too aggressive.

## Solution
Extend session timeout to 8 hours for active users:
- Keep 15-minute timeout for truly inactive users
- Refresh session on any API call
- Add "Remember me" option for 30-day token

## Implementation
1. Update JWT token TTL configuration
2. Add middleware to refresh token on API calls
3. Add "Remember me" checkbox to login form
4. Update documentation

## Success Criteria
- [ ] Users stay logged in for 8 hours of active use
- [ ] Inactive users (no API calls) timeout after 15 minutes
- [ ] "Remember me" tokens last 30 days
- [ ] Zero customer complaints about premature logouts

## Out of Scope
- SSO integration (future work)
- Multi-device session management
```
Line count: ~45 lines ✅

Complex Spec (Multi-File)

README.md (overview):

```markdown
---
status: in-progress
created: '2025-11-08'
tags:
  - api
  - breaking-change
priority: high
depends_on:
  - spec-041
---

# API v2 Redesign

> **Status**: ⏳ In progress · **Priority**: High

## Overview
Complete redesign of the REST API around RESTful principles, with consistent error handling and improved performance.

## Problem
The current API (v1) has:
- Inconsistent endpoint naming
- Mixed HTTP methods (some resources use POST for reads)
- No standard error format
- Poor performance (N+1 queries are common)

See [DESIGN.md](./DESIGN.md) for detailed analysis.

## Solution
Build API v2 with:
- RESTful resource design
- Consistent error responses
- GraphQL for complex queries
- <200ms p95 response time

See:
- [DESIGN.md](./DESIGN.md) - Architecture and design decisions
- [IMPLEMENTATION.md](./IMPLEMENTATION.md) - Phase breakdown
- [TESTING.md](./TESTING.md) - Test strategy

## Success Criteria
- [ ] All v1 endpoints have v2 equivalents
- [ ] p95 response time <200ms
- [ ] 100% API test coverage
- [ ] Migration guide published
- [ ] Zero breaking changes for 3 months after v2 launch

## Out of Scope
- GraphQL subscriptions (v2.1)
- Batch operations (v2.2)
```

DESIGN.md (detailed design - 350 lines)
IMPLEMENTATION.md (execution plan - 280 lines)
TESTING.md (test strategy - 190 lines)

Total: ~820 lines across 4 files ✅
Largest file: 350 lines ✅

Structure Best Practices

DO:

  • ✅ Start with Problem, Solution, Success Criteria (minimum viable)
  • ✅ Add sections progressively as needed
  • ✅ Split into sub-specs when >400 lines
  • ✅ Link between sub-specs for navigation
  • ✅ Keep README.md as entry point/overview

DON'T:

  • ❌ Add every possible section upfront
  • ❌ Let any single file exceed 400 lines
  • ❌ Bury the problem statement
  • ❌ Skip success criteria
  • ❌ Mix multiple unrelated concerns in one spec

AI-Friendly Structure

To maximize AI agent understanding:

1. Use clear headings

```markdown
## Problem
## Solution
## Success Criteria
```

Not: "The Problem", "Our Solution", "How We Know It Works"

2. Include examples

```markdown
## Solution
Use Redis for caching:
```

```json
{
  "key": "user:123",
  "value": {"name": "Alice", "email": "alice@example.com"},
  "ttl": 300
}
```

3. Make boundaries explicit

```markdown
## Out of Scope
- Mobile app parity (future)
- OAuth providers beyond Google (not needed)
```

4. Define success quantitatively

```markdown
## Success Criteria
- [ ] API response time <100ms (p95)
- [ ] Cache hit rate >80%
- [ ] Zero false positives in 2 weeks
```

Not: "Fast API", "Good caching", "Works well"
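
Checkbox-style, quantitative criteria also make progress measurable. A sketch that counts completed items (assumes the standard `- [ ]` / `- [x]` task-list syntax):

```python
import re

def progress(body):
    """Count (done, total) checkbox items in a spec's Success Criteria."""
    boxes = re.findall(r"^\s*- \[([ xX])\]", body, flags=re.MULTILINE)
    done = sum(1 for mark in boxes if mark.lower() == "x")
    return done, len(boxes)

criteria = """## Success Criteria
- [x] API response time <100ms (p95)
- [ ] Cache hit rate >80%
- [ ] Zero false positives in 2 weeks
"""
print(progress(criteria))  # (1, 3)
```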

See also: Writing Specs AI Can Execute

Next Steps


Reference: CLI Documentation for command details