Custom Skills & Commands
You have repeated the same prompt 5 times this week: "Run the tests, check for TypeScript errors, then lint." That is a workflow. And workflows should be codified, not re-typed.
Skills turn recurring prompts into reusable, version-controlled commands you invoke with a slash.
Skills vs Commands
Claude Code supports two mechanisms for defining custom commands:
| Feature | Skills (preferred) | Commands (legacy) |
|---|---|---|
| Location | .claude/skills/<name>/SKILL.md | .claude/commands/<name>.md |
| Structure | Directory with SKILL.md + supporting files | Single markdown file |
| Supporting files | Yes - additional .md files in the directory | No |
| Status | Current, recommended | Still works, but prefer skills |
Skills are the preferred approach because they support multi-file skill definitions and keep related files organized in a directory.
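Because the two formats share the same markdown body, upgrading a legacy command to a skill is mostly a file move. A minimal sketch (the `deploy` name and file contents are illustrative; a throwaway directory keeps it safe to run anywhere):

```shell
# Migrate a hypothetical legacy "deploy" command into a skill directory.
repo=$(mktemp -d)
mkdir -p "$repo/.claude/commands"
printf -- '---\nname: deploy\n---\nDeploy the app.\n' > "$repo/.claude/commands/deploy.md"

# Same markdown, new home: .claude/skills/<name>/SKILL.md
mkdir -p "$repo/.claude/skills/deploy"
mv "$repo/.claude/commands/deploy.md" "$repo/.claude/skills/deploy/SKILL.md"
ls "$repo/.claude/skills/deploy"   # prints: SKILL.md
```

Supporting files (checklists, runbooks) can then be dropped into the same directory alongside SKILL.md.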
Anatomy of a Skill
A skill lives in .claude/skills/<name>/SKILL.md. The SKILL.md file has YAML frontmatter followed by the prompt body:
```markdown
---
name: deploy
description: Build, test, and deploy the application to production
user-invocable: true
argument-hint: "[environment]"
---
Deploy the application to the specified environment (default: staging).

Steps:
1. Run `npm run build` and verify no errors
2. Run `npm test` and verify all pass
3. Run `npm run deploy -- --env=$ARGUMENTS`
4. Verify the deployment health check passes
5. Report the deployment URL
```

Invoke it with `/deploy production` and Claude executes the full workflow.
Frontmatter Options
Every option available in skill frontmatter:
| Field | Purpose | Example |
|---|---|---|
| name | Display name for the skill | deploy |
| description | Short description shown in skill list | Deploy to production |
| user-invocable | Can users invoke with /name (default: true) | true |
| disable-model-invocation | Prevent Claude from auto-invoking this skill | true |
| allowed-tools | Restrict which tools this skill can use | Bash, Read |
| argument-hint | Hint shown for expected arguments | [environment] |
| context | Additional files to load as context | See below |
| agent | Run as a subagent (separate context window) | true |
The context field lets you load additional files when the skill runs:
```markdown
---
context:
  - .claude/skills/deploy/runbook.md
  - infrastructure/deploy-config.yaml
---
```

Arguments
Skills receive arguments through several variables:
```markdown
---
argument-hint: "<service> [--dry-run]"
---
Deploy $ARGUMENTS to production.
Service: $0
Flag: $1

# $ARGUMENTS = full string: "api --dry-run"
# $0 = first arg: "api"
# $1 = second arg: "--dry-run"
```

When a user types `/deploy api --dry-run`, these variables are substituted before Claude sees the prompt.
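The substitution itself can be modeled as plain string replacement. A toy sketch of what happens to the prompt above (illustrative only, not Claude Code's actual implementation):

```shell
# Toy model of argument substitution: replace the variables with the
# values from "/deploy api --dry-run" using plain sed.
template='Deploy $ARGUMENTS to production. Service: $0 Flag: $1'
printf '%s\n' "$template" \
  | sed -e 's/\$ARGUMENTS/api --dry-run/' \
        -e 's/\$0/api/' \
        -e 's/\$1/--dry-run/'
# prints: Deploy api --dry-run to production. Service: api Flag: --dry-run
```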
Dynamic Context
Skills can inject live data from shell commands using the `` !`command` `` syntax:
```markdown
---
name: status
description: Show project health
---
Report on current project status.

## Current Branch
!`git branch --show-current`

## Uncommitted Changes
!`git status --short`

## Recent Commits
!`git log --oneline -5`

## Test Results
!`npm test 2>&1 | tail -20`
```

When invoked, Claude sees the actual output of each command inlined into the prompt. This gives skills real-time awareness of project state without you having to copy-paste terminal output.
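Conceptually the mechanism is simple: run each command, capture its stdout, and splice the text into the prompt under its heading. A toy model in plain shell (an assumption about the behavior, not the real implementation):

```shell
# Toy model of !`...` injection: capture a command's output and inline it.
inject() { eval "$1" 2>&1; }

prompt="## Uppercase Demo
$(inject 'echo hello | tr a-z A-Z')"
printf '%s\n' "$prompt"
# prints:
# ## Uppercase Demo
# HELLO
```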
Skills are committed to git alongside your code. Every team member gets the same workflows. When you improve a skill, everyone benefits on the next pull. This is compound engineering - automation that gets better over time.
Supporting Files
Skills can include additional markdown files in their directory. These are loaded as extra context when the skill runs:
```
.claude/skills/review/
├── SKILL.md            # Main skill definition
├── checklist.md        # Review checklist
└── severity-levels.md  # P1/P2/P3 definitions
```
Reference them in SKILL.md with the context frontmatter or let Claude discover them in the skill directory.
Practical Examples
Test runner skill
```markdown
---
name: test
description: Run tests with coverage and report results
argument-hint: "[file-pattern]"
---
Run project tests. If a file pattern is provided, only run matching tests.

1. Run `npm test -- $ARGUMENTS` with coverage enabled
2. If any test fails, analyze the failure and suggest a fix
3. Report: total tests, passed, failed, coverage percentage
```

Code review skill
```markdown
---
name: review
description: Review staged changes against project standards
agent: true
---
Review all staged git changes against project standards.

## Changes to review
!`git diff --cached --stat`

## Detailed diff
!`git diff --cached`

## Checklist
- [ ] No `console.log` left in production code
- [ ] All new functions have JSDoc comments
- [ ] No hardcoded secrets or API keys
- [ ] Error handling covers edge cases
- [ ] Types are explicit, no `any`

Rate each file: PASS, WARN, or FAIL with specific line references.
```

Database migration skill
```markdown
---
name: migrate
description: Generate and validate a database migration
argument-hint: "<migration-name>"
---
Generate a new database migration named "$ARGUMENTS".

1. Run `npm run db:generate -- --name=$0`
2. Read the generated migration file
3. Verify it has both up and down operations
4. Check for destructive operations (DROP TABLE, DROP COLUMN)
   - If found, warn and ask for confirmation
5. Run `npm run db:migrate` against the dev database
6. Verify the migration applied cleanly
```

Set `agent: true` for skills that involve significant work (reviews, refactors, multi-step tasks). This runs the skill as a subagent with its own context window, keeping your main conversation clean. The subagent reports back when done.
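Step 4 of the migrate skill, the destructive-operation check, boils down to a pattern match over the generated SQL. A sketch with a hypothetical migration file:

```shell
# Check a migration file for destructive operations (contents are demo data).
file=$(mktemp)
printf 'ALTER TABLE users ADD COLUMN age int;\nALTER TABLE users DROP COLUMN legacy;\n' > "$file"

if grep -Eiq 'DROP[[:space:]]+(TABLE|COLUMN)' "$file"; then
  echo "WARNING: destructive operation found; confirm before migrating"
fi
# prints: WARNING: destructive operation found; confirm before migrating
```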
Type `/` in Claude Code to see all available skills. The `description` frontmatter field is what users see in that list, so keep it short and clear.
Exercises
Create a test runner skill
Build a skill at .claude/skills/test/SKILL.md that:
- Detects the test framework in use (Jest, Vitest, pytest, etc.)
- Runs the appropriate test command
- Accepts an optional file pattern argument
- Reports results in a structured format
Use `!` backticks to read `package.json` or `pyproject.toml` for framework detection.

```markdown
---
name: test
description: Auto-detect test framework and run tests
argument-hint: "[file-pattern]"
---
Run project tests with auto-detection.

## Project config
!`cat package.json 2>/dev/null | head -30 || cat pyproject.toml 2>/dev/null | head -30 || echo "No config found"`

## Steps
1. Detect the test framework from project config above
2. Run the appropriate test command:
   - Vitest: `npx vitest run $ARGUMENTS`
   - Jest: `npx jest $ARGUMENTS`
   - pytest: `python -m pytest $ARGUMENTS -v`
3. If tests fail, show the failing test names and error messages
4. Report: framework detected, total, passed, failed
```

Create a skill with dynamic context
Build a skill that generates a daily standup summary by pulling live data from git, your task tracker, and the current branch state.
Use `!` backtick blocks to pull git log (yesterday's commits), the current branch, and any TODO comments in recently changed files. `git log --since="yesterday"` is a good start.

```markdown
---
name: standup
description: Generate daily standup from git activity
---
Generate my standup update for today.

## What I did (recent commits)
!`git log --oneline --author="$(git config user.email)" --since="yesterday" 2>/dev/null || echo "No commits since yesterday"`

## Current work (branch + uncommitted)
!`echo "Branch: $(git branch --show-current)" && git status --short`

## Open TODOs in changed files
!`git diff --name-only HEAD~5 2>/dev/null | head -10 | xargs grep -n "TODO\|FIXME\|HACK" 2>/dev/null | head -10 || echo "None found"`

## Format
Summarize as:
- **Yesterday**: what was accomplished (from commits)
- **Today**: what is in progress (from branch/uncommitted changes)
- **Blockers**: any TODOs or FIXMEs that need attention
```

Skills committed to `.claude/skills/` travel with the repo: when a new team member clones the project, they instantly get every workflow the team has built. This is a major advantage over personal shell aliases or bookmarked prompts. Think of skills as team infrastructure, not personal shortcuts.