Add dotclaude configuration files

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Author: Poshan Pandey
Date: 2026-03-26 17:16:27 -07:00
Parent: c10636b330 | Commit: 491a45dd43
37 changed files with 2737 additions and 0 deletions
# Skills
Skills are slash commands you invoke with `/name`. They run in the main conversation context (they see all loaded rules and CLAUDE.md).
- `disable-model-invocation: true` — manual only, you type `/name` to trigger
- Without that flag — Claude can also trigger the skill automatically when relevant
## Available Skills
### /setupdotclaude
**Trigger**: Manual only
Scans the project codebase and customizes all `.claude/` config files to match the actual tech stack, conventions, and patterns. Run this after adding the `.claude/` folder to a new project. Confirms every change with the user. Runs a final review pass with `/refactor` in plan mode against the full codebase.
### /debug-fix [issue number, error, or description]
**Trigger**: Manual only
Find and fix a bug from any source — GitHub issue number, error message, stack trace, behavior description, or URL. Follows a structured flow: understand → reproduce → investigate → fix → verify → commit.
### /ship [optional message]
**Trigger**: Manual only
Full shipping workflow with confirmation at every step: scan changes → stage & commit → push → create PR. Proposes commit messages and PR descriptions. Blocks secrets, force-push, and push to main.
### /pr-review [PR number | staged | file path]
**Trigger**: Manual only
Reviews code changes by delegating to specialist agents (`@code-reviewer`, `@security-reviewer`, `@performance-reviewer`, `@doc-reviewer`). When given a PR number (or auto-detected from branch), also checks PR title, description quality, CI status, unresolved comments, and size — ending with a clear merge/needs-changes verdict. Also works on staged changes or specific files for pre-PR review.
### /tdd [feature description]
**Trigger**: Manual only
Strict Test-Driven Development loop. Red: write a failing test for the smallest next behavior. Green: write the minimum code to pass. Refactor: clean up without changing behavior. Repeat. Commits after each green+refactor cycle.
### /explain [file, function, or concept]
**Trigger**: Manual only
Explains code with a one-sentence summary, mental model analogy, ASCII diagram, key details, and modification guide.
### /refactor [target]
**Trigger**: Manual only
Safe refactoring with tests as a safety net. Writes tests first if none exist, makes changes in small testable steps, verifies no behavior change.
### /test-writer
**Trigger**: Automatic (when new features are added)
Writes comprehensive tests covering every code path: happy path, edge cases, nulls, type boundaries, error paths, concurrency, state transitions. Covers API endpoints, UI components, database operations, and async. Verifies tests actually catch bugs by breaking the code.
## Adding Your Own
Create a directory with a `SKILL.md` file:
```
your-skill/
└── SKILL.md
```
```yaml
---
name: your-skill
description: What it does and when to use it
disable-model-invocation: true
---
Your instructions here. Use $ARGUMENTS for user input.
```
See [Claude Code docs](https://code.claude.com/docs/en/skills) for all frontmatter options.
---
name: debug-fix
description: Find and fix a bug or issue — from any source (GitHub issue, error message, user report, or observed behavior)
argument-hint: "[issue number, error message, or description of the problem]"
disable-model-invocation: true
---
Find and fix the following issue:
**Problem**: $ARGUMENTS
## Step 1: Understand the Problem
Determine what kind of input this is:
- **Issue number** → fetch it: `gh issue view $ARGUMENTS` (GitHub), or check the project's issue tracker
- **Error message / stack trace** → parse it for file, line, error type, and the call chain leading to it
- **Description of behavior** → identify what's expected vs what's happening
- **URL / screenshot** → examine the referenced resource
If the problem is unclear, ask clarifying questions before proceeding.
## Step 2: Reproduce
- Find or write the simplest way to trigger the issue (a test, a curl command, a script)
- Confirm you can reproduce it reliably
- If you can't reproduce:
- **Environment-specific?** Check env vars, OS, Node/Python version, database state
- **Intermittent?** Likely a race condition — look for shared mutable state, timing dependencies, or async ordering assumptions
- **Already fixed?** Check `git log` for recent commits that mention the issue
## Step 3: Investigate
Follow this sequence — don't skip ahead to guessing:
1. **Locate the symptom**: which file and line produces the wrong output/error?
2. **Read the code path**: trace backwards from the symptom. What function called this? What data did it pass? Read each caller.
3. **Check git history**: `git log --oneline -20 -- <file>` to see what recently changed in the affected files. `git log --all --grep="<keyword>"` to find related commits.
4. **Narrow the scope**: use `git bisect` or targeted grep to identify when the behavior changed, or which input triggers it.
5. **Form a hypothesis**: "I think [X] is wrong because [evidence]."
6. **Verify the hypothesis**: add a targeted log/assertion/test that would confirm or deny it. Run it.
7. **If wrong, update**: don't keep guessing with the same hypothesis. Go back to step 2 and trace a different path.
## Step 4: Fix
- Make the minimal change that fixes the root cause
- Don't patch symptoms — if a value is wrong, trace back to where it becomes wrong and fix it there
- Don't refactor surrounding code while fixing the bug
- Don't add defensive checks that mask the problem — fix why the bad data exists
## Step 5: Verify
- Write a test that reproduces the original bug and now passes with the fix
- Run related tests to check for regressions
- Run lint and typecheck
- Temporarily revert your fix and confirm the new test fails — this proves the test actually catches the bug
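The revert-and-confirm check above can be scripted. A self-contained sketch using a stand-in test script (a real project would run its actual test runner instead):

```shell
#!/bin/sh
set -e
repo=$(mktemp -d); cd "$repo"
git init -q
git config user.email demo@example.com
git config user.name demo
echo 'echo 41' > answer.sh                       # the original bug
git add .; git commit -qm "buggy baseline"
echo 'echo 42' > answer.sh                       # the fix
printf '[ "$(sh answer.sh)" = 42 ]\n' > test.sh  # regression test for the bug
git add .; git commit -qm "fix: return 42 (with regression test)"

sh test.sh && echo "with fix: test passes"
git checkout -q HEAD~1 -- answer.sh              # temporarily revert the fix
if sh test.sh; then
  echo "test never fails: it does not catch the bug, rewrite it"
else
  echo "without fix: test fails, so it really guards against the bug"
fi
git checkout -q HEAD -- answer.sh                # restore the fix
```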
## Step 6: Wrap Up
- Create a branch if not already on one
- Stage only the relevant files (fix + test, nothing else)
- Commit with a message that references the issue if one exists: `fix: <what was wrong and why> (#number)`
---
name: explain
description: Explain code with visual diagrams and clear mental models
argument-hint: "[file, function, or concept]"
disable-model-invocation: true
---
Explain `$ARGUMENTS` clearly.
## Format
### 1. One-sentence summary
What does it do and why does it exist? One sentence.
### 2. Mental model
Give an analogy or metaphor that captures the core idea. Relate it to something the developer already knows.
### 3. Visual diagram
Draw an ASCII diagram showing the data/control flow. Keep it readable:
```
Input → [Step A] → [Step B] → Output
                       ↓
                 [Side Effect]
```
### 4. Key details
Walk through the important parts. Skip the obvious — focus on:
- Non-obvious decisions (why this approach?)
- Edge cases and gotchas
- Dependencies and side effects
### 5. How to modify it
What would someone need to know to safely change this code? Where are the landmines?
---
name: hotfix
description: Emergency production fix — create hotfix branch, minimal change, critical tests only, ship fast
argument-hint: "[issue number, error message, or description of production problem]"
disable-model-invocation: true
allowed-tools:
- Bash(git *)
- Bash(gh *)
- Bash(npm run test *)
- Bash(npm run build)
- Read
- Glob
- Grep
- Edit
- Write
---
Emergency production fix. Speed matters — make the smallest correct change, verify it works, and ship.
## Step 1: Create Hotfix Branch
- Determine the production branch (`main` or `master` — check with `git remote show origin` or `git symbolic-ref refs/remotes/origin/HEAD`)
- Stash any uncommitted work if needed
- Create and switch to `hotfix/<short-description>` branch from the production branch
- **ASK the user to confirm** the branch name before creating
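One way to script the branch detection and creation, shown against a local stand-in for `origin` (the `hotfix/login-500` name is purely illustrative):

```shell
#!/bin/sh
set -e
work=$(mktemp -d)
git init -q --bare "$work/origin.git"            # stand-in for the real remote
git clone -q "$work/origin.git" "$work/clone"
cd "$work/clone"
git config user.email demo@example.com
git config user.name demo
git commit -q --allow-empty -m "init"
git push -q origin HEAD
git remote set-head origin -a                    # record the remote's default branch

# Detect the production branch, then branch the hotfix from it:
default=$(git symbolic-ref --short refs/remotes/origin/HEAD | sed 's|^origin/||')
git checkout -qb "hotfix/login-500" "origin/$default"
git branch --show-current
```

`git remote set-head origin -a` is the step that makes `refs/remotes/origin/HEAD` resolvable when a clone predates the remote's current default branch.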
## Step 2: Understand the Problem
- If `$ARGUMENTS` is a GitHub issue number: fetch it with `gh issue view`
- If it's an error message or description: search the codebase for the relevant code
- Identify the root cause — trace from symptom to source
- **Briefly state** what you found and your proposed fix to the user
## Step 3: Fix — Minimal Change Only
- Make the smallest change that correctly fixes the issue
- **Do NOT**:
- Refactor surrounding code
- Add new features
- Clean up unrelated issues
- Change formatting or style
- Add comments beyond what's necessary to understand the fix
- If the fix requires more than ~50 lines changed, warn the user — this may not be a hotfix
## Step 4: Verify
- Run only the tests directly relevant to the changed code (not the full suite)
- Run the build to ensure it compiles
- If there's a way to reproduce the original error, verify it's fixed
- **ASK the user** if they want to run any additional verification
## Step 5: Ship
- Stage only the fix files (never stage secrets, locks, build output)
- Draft a commit message: `hotfix: <short description>` with a brief explanation
- **ASK the user to confirm** the commit message
- Push with `git push -u origin hotfix/<description>`
- Create a PR targeting the production branch:
- Title: `[HOTFIX] <description>`
- Body: what broke, what caused it, what this fixes
- Add label `hotfix` if the repo has it: `gh pr create ... --label hotfix` (fall back to no label if it fails)
- Show the PR URL
## Rules
- NEVER skip confirmation steps
- NEVER force-push
- NEVER commit secrets or unrelated changes
- NEVER refactor — this is a hotfix, not a cleanup
- If the user says "skip" at any step, skip it and move to the next
- If the fix turns out to be complex, tell the user and suggest a regular branch instead
---
name: pr-review
description: Review code changes or a pull request — delegates to specialist agents for code quality, security, performance, and documentation.
argument-hint: "[PR number | staged | file path — or omit to auto-detect]"
disable-model-invocation: true
---
Review code changes by delegating to specialist agents and synthesizing a unified report. Works with PRs, staged changes, or specific files.
## Step 1: Determine Scope
Parse `$ARGUMENTS` to determine what to review:
- **PR number** (e.g., `123` or `#123`): fetch with `gh pr view $ARGUMENTS`. This is the full PR review path (includes PR quality checks in Step 2).
- **No argument**: try `gh pr view` to detect a PR for the current branch. If a PR exists, use it. If not, fall back to `git diff --cached` (staged), then `git diff` (unstaged).
- **`staged`**: review `git diff --cached`. If nothing staged, fall back to `git diff`.
- **File path**: review that specific file's current state.
If there are no changes to review, say so and stop.
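The fallback order above can be sketched as plain shell; the `gh` probe simply fails, and falls through, when there is no PR (or no `gh` installed):

```shell
#!/bin/sh
set -e
repo=$(mktemp -d); cd "$repo"
git init -q
git config user.email demo@example.com
git config user.name demo
git commit -q --allow-empty -m "init"
echo "draft" > notes.txt
git add notes.txt                                # a staged change, no PR

if gh pr view >/dev/null 2>&1; then scope="pr"
elif [ -n "$(git diff --cached --name-only)" ]; then scope="staged"
elif [ -n "$(git diff --name-only)" ]; then scope="unstaged"
else scope="none"
fi
echo "$scope"
```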
## Step 2: PR Quality Check (PR path only)
Skip this step if reviewing staged changes or a file — jump to Step 3.
When reviewing a PR, fetch and check:
- PR title, description/body, author, base branch, head branch
- `gh pr diff $NUMBER` for the full diff
- `gh pr checks $NUMBER` for CI status
- `gh api repos/{owner}/{repo}/pulls/$NUMBER/comments` for review comments
Review the PR itself before the code:
- **Title**: descriptive and under 72 chars?
- **Description**: explains the *why*? Includes a test plan? Flag if empty or template-only.
- **Size**: count changed files and lines. Flag if >500 lines changed (suggest splitting).
- **Base branch**: targeting the right branch?
- **CI status**: passing, failing, or pending? If failing, note which checks — fix CI first.
- **Unresolved comments**: list open review threads with file:line and comment text.
## Step 3: Code Review (delegate to agents)
1. **Always**: delegate the diff to `@code-reviewer`
2. **If security-sensitive code changed** — auth, input handling, queries, tokens, session management, file path construction: delegate to `@security-reviewer`
3. **If performance-sensitive code changed** — endpoints, DB queries, loops over collections, caching, connection management: delegate to `@performance-reviewer`. Skip if changes are only docs, config, tests, or static assets.
4. **If documentation changed** — .md files, significant docstring/JSDoc changes, API docs: delegate to `@doc-reviewer`
Determine relevance by reading the diff content, not just file paths.
## Step 4: Synthesize Report
For PR reviews:
```
## PR Review: #[number] — [title]
**Author**: [author] | **Base**: [base] → **Head**: [head] | **Changed**: [N files, +X/-Y lines]
### PR Quality
- Title: [ok / needs improvement]
- Description: [ok / missing test plan / empty]
- Size: [ok / large — consider splitting]
- CI: [passing / failing — list failures]
- Unresolved comments: [none / list]
### Code Review
#### Critical / High
- [Agent] File:Line — issue
#### Medium
- [Agent] File:Line — issue
#### Low
- [Agent] File:Line — issue
### Verdict
[Ready to merge / Needs changes — summarize blockers]
```
For non-PR reviews (staged/file):
```
## Review Summary
**Scope**: [staged changes / file path]
**Agents run**: [list]
### Critical / High
- [Agent] File:Line — issue
### Medium / Low
- [Agent] File:Line — issue
### Passed
- [areas with no issues]
```
Deduplicate findings that overlap between agents. Attribute each finding to the agent that found it.
---
name: refactor
description: Safely refactor code with test coverage as a safety net
argument-hint: "[target to refactor — file, function, or pattern]"
disable-model-invocation: true
---
Refactor `$ARGUMENTS` safely.
## Process
### 1. Understand the current state
- Read the code and its tests
- Identify what the code does, its callers, and its dependencies
- If there are no tests, WRITE TESTS FIRST — you need a safety net before changing anything
### 2. Plan the refactoring
- State what you're changing and why (clearer naming, reduced duplication, better structure)
- List the specific transformations (extract function, inline variable, move module, etc.)
- Check: does this change any external behavior? If yes, this isn't a refactor — reconsider.
### 3. Make changes in small, testable steps
- One transformation at a time
- Run tests after EACH step — not at the end
- If a test breaks, undo the last step and make a smaller change
### 4. Verify
- All existing tests pass
- Lint and typecheck pass
- The public API hasn't changed (unless that was the explicit goal)
- The code is objectively simpler — fewer lines, fewer branches, clearer names
## Rules
- If you can't run the tests, don't refactor
- Never mix refactoring with behavior changes in the same commit
- If the refactoring is large (10+ files), break it into multiple commits
---
name: setupdotclaude
description: Scan the project codebase and customize all .claude/ configuration files to match. Run this after adding the .claude/ folder to a new project.
argument-hint: "[optional: focus area like 'frontend' or 'backend']"
disable-model-invocation: true
---
Scan this project's codebase and customize every `.claude/` configuration file to match the actual tech stack, conventions, and patterns in use. Confirm with the user before each change using AskUserQuestion.
CLAUDE.md must be at the project root (`./CLAUDE.md`), NOT inside `.claude/`. All other config files live inside `.claude/`.
If the project is empty or has no source code yet, tell the user the defaults will be kept as-is and stop.
## Phase 0: Clean Up Non-Config Files
Before anything else, delete files inside `.claude/` that exist for the dotclaude repo itself but waste tokens or cause issues at runtime:
- `.claude/README.md` (repo README accidentally copied in)
- `.claude/CONTRIBUTING.md` (repo contributing guide accidentally copied in)
- `.claude/.gitignore` (for the dotclaude repo, not the project — the project has its own .gitignore)
- `.claude/rules/README.md`
- `.claude/agents/README.md`
- `.claude/hooks/README.md`
- `.claude/skills/README.md`
Also delete `.claude/CLAUDE.md` if it exists — CLAUDE.md belongs at the project root, not inside `.claude/`.
## Phase 1: Detect Tech Stack
Scan for package manifests, config files, and folder structure to detect: language, framework, package manager, test framework, linter/formatter, architecture pattern, and source/test directories.
Check: `package.json`, `pyproject.toml`, `Cargo.toml`, `go.mod`, `Gemfile`, `composer.json`, `build.gradle`, `pom.xml`, `Makefile`, `Dockerfile`.
Check for monorepo indicators: `workspaces` key in package.json, `pnpm-workspace.yaml`, `lerna.json`, `nx.json`, `turbo.json`, or multiple `package.json` files at depth 2+. If a monorepo is detected, ask the user which packages/apps to focus on and customize rule path patterns to include package prefixes (e.g., `packages/api/src/**` instead of `src/**`).
Detect frameworks from dependencies and config files (frontend, backend, CSS, components, ORM/DB).
Detect test framework from config files (`jest.config.*`, `vitest.config.*`, `pytest.ini`, `conftest.py`, `playwright.config.*`, etc.).
Detect linter/formatter from config files (`.eslintrc.*`, `.prettierrc.*`, `biome.json`, `ruff.toml`, `tsconfig.json`, `.editorconfig`, etc.).
Detect folder structure pattern (feature-based, layered, monorepo, MVC) and locate source, test, API, and auth directories.
Check `git log --oneline -20` for commit message style.
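A rough flavor of the manifest probing, here for the monorepo check only (real detection reads the manifests in full; the marker files below are the ones listed above):

```shell
#!/bin/sh
set -e
proj=$(mktemp -d); cd "$proj"
printf '{ "workspaces": ["packages/*"] }\n' > package.json   # monorepo marker

layout="single-package"
if [ -f pnpm-workspace.yaml ] || [ -f lerna.json ] || [ -f turbo.json ]; then
  layout="monorepo"
elif [ -f package.json ] && grep -q '"workspaces"' package.json; then
  layout="monorepo"
fi
echo "$layout"
```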
## Phase 2: Present Findings
Present a summary to the user using AskUserQuestion:
```
I scanned your project. Here's what I found:
**Stack**: [language] + [framework] + [CSS] + [DB]
**Package manager**: [npm/pnpm/yarn/bun/pip/cargo/go]
**Test framework**: [jest/vitest/pytest/etc.]
**Linter/Formatter**: [eslint+prettier/ruff/clippy/etc.]
**Architecture**: [layered/feature-based/monorepo/etc.]
**Source dirs**: [list]
**Test dirs**: [list]
Should I customize the .claude/ files based on this? (yes/no/corrections)
```
If the user provides corrections, incorporate them.
## Phase 3: Customize Each File
For each file below, propose the specific changes and ask the user to confirm before applying.
### 3.1 — CLAUDE.md
Replace the template commands with actual commands from the detected manifest:
- **Build**: actual build command from package.json scripts, Makefile targets, etc.
- **Test**: actual test command + how to run a single test file
- **Lint/Format**: actual lint and format commands
- **Dev**: actual dev server command
- **Architecture**: replace placeholder directories with actual project structure (only non-obvious parts)
Remove sections that don't apply (e.g., Architecture section for a single-file utility).
### 3.2 — settings.json
Update permissions to match actual commands:
- Replace `npm run` with the actual package manager (`pnpm run`, `yarn`, `bun run`, `cargo`, `go`, `make`, `python -m pytest`, etc.)
- Add project-specific allow rules for detected scripts
- Keep deny rules for secrets as-is (these are universal)
### 3.3 — rules/code-quality.md
Update naming conventions ONLY if the project's existing code uses different patterns:
- Sample 5-10 source files to detect actual naming style (camelCase vs snake_case, etc.)
- If the project uses different file naming than the template, update
- If the project's import style differs, update the import order section
If everything matches the defaults, leave it unchanged.
### 3.4 — rules/testing.md
Update if the detected test framework has specific idioms. Otherwise leave as-is (it's only 3 lines).
### 3.5 — rules/security.md
Update the `paths:` frontmatter to match actual project directories:
- Replace `src/api/**` with actual API directory paths found
- Replace `src/auth/**` with actual auth directory paths
- Replace `src/middleware/**` with actual middleware paths
- If none found, keep the defaults as reasonable guesses
### 3.5b — rules/error-handling.md
Update the `paths:` frontmatter to match actual backend directories (same paths as security.md plus service/handler directories). If the project has no backend, delete this file.
### 3.6 — rules/frontend.md
- **If no frontend files exist** (no .tsx, .jsx, .vue, .svelte, .css): delete this file entirely
- **If frontend exists**: update the Component Framework table to highlight which options the project actually uses (detected from dependencies)
- Update path patterns in frontmatter if the project uses non-standard directories
### 3.7 — hooks/format-on-save.sh
Uncomment the section matching the detected formatter:
- Prettier found → uncomment Node.js section
- Black/isort found → uncomment Python section
- Ruff found → uncomment Ruff section
- Biome found → uncomment Biome section
- rustfmt found → uncomment Rust section
- gofmt found → uncomment Go section
- Multiple languages → uncomment all relevant sections
### 3.8 — hooks/block-dangerous-commands.sh
Check the default branch name (`git symbolic-ref refs/remotes/origin/HEAD 2>/dev/null` or `git remote show origin`). If it's not `main` or `master`, update the regex pattern.
### 3.9 — rules/database.md
- Check if the project has a database (look for: migration directories, ORM config files like `prisma/schema.prisma`, `drizzle.config.*`, `alembic.ini`, `knexfile.*`, `sequelize` in dependencies, `typeorm` in dependencies, `ActiveRecord` patterns, `flyway`, `liquibase`)
- **If database/migrations detected**: keep the rule, update `paths:` frontmatter to match the actual migration directory paths found
- **If no database detected**: delete `rules/database.md` entirely
### 3.10 — skills/
All skills are methodology-based and project-agnostic. Leave unchanged.
### 3.11 — agents/
- **frontend-designer.md**: delete if no frontend files exist
- **doc-reviewer.md**: delete if the project has no documentation directory (no `docs/`, `doc/`, or significant `.md` files beyond README)
- **security-reviewer.md**: keep (security applies everywhere)
- **code-reviewer.md**: keep (universal)
- **performance-reviewer.md**: keep (universal)
## Phase 4: Review & Simplify
After all changes are applied, run a thorough final review pass.
Strip any remaining `> REPLACE:` placeholder blocks from `CLAUDE.md` — these are template guidance that should have been replaced with real content or removed during Phase 3.1.
Review the entire codebase alongside the customized `.claude/` configuration:
- Do the rules match how the code is actually written?
- Do the settings permissions cover the commands the project actually uses?
- Do the security rule paths match where sensitive code actually lives?
- Do the hook protections cover the files that actually need protecting in this project?
- Are there project patterns, conventions, or architectural decisions not yet captured in the config?
- Remove any redundancy introduced during customization
- Ensure no file contradicts another
- Trim any verbose instructions back to essentials
- Verify all YAML frontmatter is valid
- Verify all hook scripts referenced in settings.json exist and are executable
Present the review findings to the user. If changes are needed, confirm before applying.
## Phase 5: Summary
After everything is finalized, present a summary:
```
Setup complete. Here's what was customized:
- CLAUDE.md: updated commands for [stack]
- settings.json: permissions updated for [package manager]
- rules/security.md: paths updated to [actual dirs]
- rules/frontend.md: [kept/removed]
- hooks/format-on-save.sh: enabled [formatter]
- [any other changes]
Files left as defaults (universal, no project-specific changes needed):
- [list]
Review pass: [any issues found and fixed, or "all clean"]
```
## Rules
- NEVER write changes without user confirmation first
- NEVER delete a file without confirming — propose "remove" and explain why
- If the project is empty (no source files, no manifests), say "Project appears empty — keeping all defaults" and stop
- If detection is uncertain, ASK the user rather than guessing
- Preserve any manual edits the user has already made to .claude/ files — only update sections that need project-specific customization
- Keep it minimal — don't add complexity. If the default works, leave it alone.
---
name: ship
description: Scan changes, commit, push, and create a PR — with confirmation at each step
argument-hint: "[optional commit message or PR title]"
disable-model-invocation: true
allowed-tools:
- Bash(git status)
- Bash(git diff *)
- Bash(git log *)
- Bash(git add *)
- Bash(git commit *)
- Bash(git push *)
- Bash(git checkout *)
- Bash(git branch *)
- Bash(gh pr create *)
- Bash(gh pr view *)
---
Ship the current changes through commit, push, and PR creation. Confirm with the user before each step using the AskUserQuestion tool.
## Step 1: Scan
- Run `git status` to see all changed, staged, and untracked files
- Run `git diff` to see what changed (staged + unstaged)
- Run `git log --oneline -5` to see recent commit style
- Present a clear summary to the user:
- Files modified
- Files added
- Files deleted
- Untracked files
- If there are no changes, tell the user and stop
## Step 2: Stage & Commit
- Propose which files to stage. **Never stage** these:
- Secrets: `.env*`, `*.pem`, `*.key`, `credentials.json`
- Lock files: `package-lock.json`, `yarn.lock`, `pnpm-lock.yaml` (unless intentionally updated)
- Generated: `*.gen.ts`, `*.generated.*`, `*.min.js`, `*.min.css`
- Build output: `dist/`, `build/`, `.next/`, `__pycache__/`
- Dependencies: `node_modules/`, `vendor/`, `.venv/`
- OS/editor: `.DS_Store`, `Thumbs.db`, `*.swp`, `.idea/`, `.vscode/settings.json`
- Draft a commit message based on the changes, matching the repo's existing commit style
- **ASK the user to confirm or edit**: show the exact files to stage and the proposed commit message
- Only after confirmation: stage the files and create the commit
- If the commit fails (e.g., pre-commit hook), fix the issue and try again with a NEW commit
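The never-stage screen can be approximated with a pattern check over the staged file list. A self-contained sketch (the patterns are a subset of the list above, and the `.env` file exists only for the demo):

```shell
#!/bin/sh
set -e
repo=$(mktemp -d); cd "$repo"
git init -q
git config user.email demo@example.com
git config user.name demo
git commit -q --allow-empty -m "init"
touch app.js .env
git add -f app.js .env                           # -f bypasses .gitignore, demo only

blocked='(^|/)\.env|\.pem$|\.key$|(^|/)credentials\.json$'
flagged=$(git diff --cached --name-only | grep -E "$blocked" || true)
echo "${flagged:-nothing blocked}"
```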
## Step 3: Push
- Check if the current branch has an upstream remote
- If not, propose creating one with `git push -u origin <branch>`
- **ASK the user to confirm** before pushing
- Only after confirmation: push to remote
## Step 4: Pull Request
- Check if a PR already exists for this branch (`gh pr view` — if it exists, show the URL and stop)
- Analyze ALL commits on this branch vs the base branch (not just the latest commit)
- Draft a PR title (under 72 chars) and body with:
- Summary: 2-4 bullet points
- Test plan: how to verify
- **ASK the user to confirm or edit** the title and body
- Only after confirmation: create the PR with `gh pr create`
- Show the PR URL when done
## Rules
- NEVER skip a confirmation step — each step requires explicit user approval
- NEVER force-push
- NEVER commit .env, secrets, or credential files
- If the user says "skip" at any step, skip that step and move to the next
- If $ARGUMENTS is provided, use it as the commit message / PR title
---
name: tdd
description: Test-Driven Development loop — write a failing test first, then the minimum code to pass it, then refactor. Repeat.
argument-hint: "[feature description or function signature]"
disable-model-invocation: true
---
Build the following using strict Test-Driven Development:
**Feature**: $ARGUMENTS
## The TDD Cycle
Repeat this cycle for each behavior. Never skip steps.
### Red: Write a Failing Test
1. Write ONE test for the smallest next behavior (not the whole feature)
2. The test must:
- Describe the behavior in its name: `should return 0 for empty cart`
- Use Arrange-Act-Assert structure
- Assert specific values, not vague truths
3. **Run the test. It MUST fail.** If it passes, either:
- The behavior already exists (skip to the next behavior)
- The test is wrong (it's not testing what you think — fix it)
4. Verify the failure message makes sense — it should tell you what's missing
### Green: Write the Minimum Code to Pass
1. Write the **simplest, most obvious code** that makes the failing test pass
2. Don't generalize. Don't make it elegant. Don't handle cases the test doesn't cover.
3. Hardcoding is fine if only one test exists for that path — the next test will force generalization
4. **Run the test. It MUST pass.** If it doesn't, fix the code (not the test — the test defined the behavior)
5. Run ALL tests. Nothing previously passing should break.
### Refactor: Clean Up Without Changing Behavior
1. Look for: duplication, unclear names, functions doing too much, magic values
2. Make ONE improvement at a time
3. **Run ALL tests after each change.** If anything breaks, undo immediately.
4. Stop refactoring when the code is clean enough — don't gold-plate
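The cycle above can be seen in miniature with plain shell standing in for a real test runner (`total.sh` and the "empty cart" behavior are illustrative):

```shell
#!/bin/sh
set -e
dir=$(mktemp -d); cd "$dir"

# Red: the test exists, the implementation does not, so it must fail.
cat > total_test.sh <<'EOF'
. ./total.sh
[ "$(total)" = 0 ] || { echo "should return 0 for empty cart"; exit 1; }
EOF
if sh total_test.sh 2>/dev/null; then echo "unexpected pass"; else echo "RED"; fi

# Green: the minimum code that passes. Hardcoding 0 is fine for now;
# the next test (a non-empty cart) would force generalization.
printf 'total() { echo 0; }\n' > total.sh
sh total_test.sh && echo "GREEN"
```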
## Choosing What to Test Next
Work from simple to complex:
1. **Degenerate cases** — null input, empty collection, zero
2. **Happy path** — the simplest valid input
3. **Variations** — different valid inputs that exercise different branches
4. **Edge cases** — boundary values, max sizes, special characters
5. **Error cases** — invalid input, failures, exceptions
6. **Integration** — how this connects to the rest of the system
Each test should require a small code change. If you need to write more than ~10 lines of production code to pass a test, the test is too big — split it.
## Rules
- **Never write production code without a failing test that demands it.**
- **Never write more than one failing test at a time.** One red → green → refactor cycle at a time.
- **The test drives the design.** If the code is hard to test, the design is wrong — change the design, not the test approach.
- **Don't mock what you own.** If you need to mock your own code to test it, the code needs restructuring.
- **Commit after each green+refactor cycle.** Small, passing, meaningful commits.
## Output
After each cycle, briefly state:
- **Test**: what behavior was added
- **Code**: what changed to make it pass
- **Refactor**: what was cleaned up (or "none needed")
When the feature is complete, provide a summary of all behaviors covered and any gaps that would need integration or manual testing.
---
name: test-writer
description: Write comprehensive tests for new or changed code. Use automatically when new features are added, functions are created, or behavior is modified.
# No disable-model-invocation — Claude can auto-trigger this when adding features.
# Add "disable-model-invocation: true" below if you prefer manual-only via /test-writer.
---
Write comprehensive tests for the code that was just added or changed.
## Step 1: Discover What Changed
- Check `git diff` and `git diff --cached` to identify new/modified functions, classes, and modules
- Read each changed file to understand the behavior being added
- Identify the project's existing test framework, patterns, and conventions by finding existing test files
- Place new test files next to the source files or in the project's established test directory — match whatever the project already does
## Step 2: Analyze Every Code Path
For each new or modified function/method/component, map out:
- **Happy path** — normal input, expected output
- **Edge cases** — empty input, single element, boundary values (0, 1, -1, MAX_INT)
- **Null/undefined/nil** — what happens with missing data
- **Type boundaries** — wrong types, type coercion traps
- **Error paths** — invalid input, network failures, timeouts, permission denied
- **Concurrency** — race conditions, parallel calls with shared state
- **State transitions** — initial state, intermediate states, final state
- **Integration points** — how this code interacts with its dependencies
## Step 3: Write the Tests
For EACH scenario identified above, write a test. No skipping.
### Structure
- **One assertion per test** — if a test name needs "and", split it into two tests
- **Descriptive names** — test names read as sentences describing the behavior:
- `should return empty array when input is empty`
- `should throw ValidationError when email format is invalid`
- `should retry 3 times before failing on network timeout`
- **Arrange-Act-Assert** — set up, execute, verify. Clear separation.
### What to Test
**Pure functions / business logic:**
- Every branch (if/else, switch, ternary)
- Every thrown error with exact error type and message
- Return value types and shapes
- Side effects (mutations, calls to external services)
**API endpoints / handlers:**
- Success response (status code, body shape, headers)
- Validation errors for each field (missing, wrong type, out of range)
- Authentication/authorization failures
- Rate limiting behavior if applicable
- Idempotency for non-GET methods
**UI components (if applicable):**
- Renders without crashing with required props
- Renders correct content for each state (loading, error, empty, populated)
- User interactions trigger correct callbacks (click, submit, type, select)
- Accessibility: focusable, keyboard navigable, correct ARIA attributes
- Conditional rendering — each branch shows/hides correct elements
**Database / data layer:**
- CRUD operations return correct data
- Unique constraints reject duplicates
- Cascade deletes work as expected
- Transactions roll back on failure
**Async operations:**
- Successful resolution
- Rejection / error handling
- Timeout behavior
- Cancellation if supported
- Concurrent calls don't interfere
### Mocking Rules
- Prefer real implementations over mocks
- Only mock at system boundaries: network, filesystem, clock, random
- Never mock the code under test
- If you mock, verify the mock was called with expected arguments
- Reset mocks between tests — no shared state leaking
## Step 4: Verify
- Run the new tests — confirm they all pass
- Temporarily break the code (change a return value or condition) — confirm at least one test fails
- If no test fails when code is broken, the tests are useless — rewrite them
- Check coverage: every new function should have at least one test, every branch should be exercised
## Output
- Complete, runnable test file(s) — not snippets
- Tests grouped by the function/component they cover
- A brief summary: how many tests, what scenarios covered, any gaps you couldn't cover and why