fix: improve test convention discovery and require skip reason documentation

Address PR #2212 review feedback:
- Expand "Discover conventions" to check configured test tooling (test
  scripts, dependency manifests, config files) before falling back to
  language-idiomatic defaults, so repos with test infrastructure but no
  test files yet still pick the correct framework.
- Require explicit skip reason in final summary output when skipping
  tests due to missing infrastructure, improving traceability.
- Apply consistently to both step-03-implement.md and step-oneshot.md.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
DJ committed 2026-04-04 20:23:37 -07:00
commit 33bd6b5d86 (parent 9ff99eceaf)
2 changed files with 12 additions and 6 deletions

step-03-implement.md

@@ -32,16 +32,19 @@ Hand `{spec_file}` to a sub-agent/task and let it implement. If no sub-agents ar
### Tests
-**This is mandatory, not optional.** After implementation, write tests for every new or modified behavior. Follow the project's testing conventions discovered from `{project_context}`, CLAUDE.md, or the existing test suite.
+**This is mandatory, not optional.** After implementation, write tests for every new or modified behavior. Follow the project's testing conventions discovered from `{project_context}`, CLAUDE.md, the existing test suite, and any configured test tooling.
-1. **Discover conventions.** Identify the project's test framework, file-naming patterns, and test directory structure from existing tests. If no existing tests exist, use the project's language-idiomatic defaults.
+1. **Discover conventions.** Identify the project's test framework, file-naming patterns, and test directory structure in this order:
+   - First, inspect existing tests.
+   - If existing tests are missing or insufficient, inspect the repo's configured test tooling: test scripts, dependencies/devDependencies, and test config files (for example `package.json`, `pytest.ini`, `pyproject.toml`, `jest.config.*`, `vitest.config.*`, `mocha` config, `rspec` config, `go test` conventions, etc.).
+   - Only if neither existing tests nor configured tooling establish conventions should you fall back to the project's language-idiomatic defaults.
2. **Write tests.** Cover:
- Happy-path behavior for each new or changed feature.
- Edge cases and error scenarios from the I/O & Edge-Case Matrix (if present in the spec).
- Regressions — any behavior that could break due to the change.
3. **Run tests.** Execute the test suite. All new and existing tests must pass. If any test fails, fix the implementation or the test before proceeding.
-If the project has no test infrastructure at all (no test framework, no test directory, no test scripts), note this in the spec under `## Verification` and skip — but this is the only acceptable reason to skip tests.
+If the project has no test infrastructure at all (no existing tests, no test framework, no relevant test config, no test directory, and no test scripts), note this in the spec under `## Verification` and skip — but this is the only acceptable reason to skip tests. Record this skip reason explicitly in the final summary output.
### Self-Check

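As a rough illustration of the discovery order the new instructions describe (existing tests first, then configured tooling, then language-idiomatic defaults), here is a minimal Python sketch. The globs, config-file names, framework mappings, and function name are illustrative assumptions, not part of the commit:

```python
# Hypothetical sketch of the three-step discovery order; the globs and
# framework mappings below are illustrative assumptions, not exhaustive.
import json
from pathlib import Path

def discover_test_framework(repo: Path) -> str | None:
    # Step 1: existing tests are the strongest signal of conventions.
    if any(repo.glob("tests/**/*.py")) or any(repo.glob("**/*.test.ts")):
        return "infer from existing tests"

    # Step 2: configured tooling: test scripts, (dev)dependencies, config files.
    pkg = repo / "package.json"
    if pkg.exists():
        data = json.loads(pkg.read_text())
        deps = {**data.get("dependencies", {}), **data.get("devDependencies", {})}
        script = data.get("scripts", {}).get("test", "")
        for fw in ("vitest", "jest", "mocha"):
            if fw in deps or fw in script:
                return fw
    for cfg, fw in (("pytest.ini", "pytest"), ("vitest.config.ts", "vitest")):
        if (repo / cfg).exists():
            return fw

    # Step 3: nothing configured; the caller falls back to language-idiomatic
    # defaults, or records an explicit skip reason if there is no test
    # infrastructure at all.
    return None
```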
step-oneshot.md

@@ -18,13 +18,16 @@ Implement the clarified intent directly.
### Tests
-**This is mandatory, not optional.** After implementation, write tests for every new or modified behavior. Follow the project's testing conventions discovered from `{project_context}`, CLAUDE.md, or the existing test suite.
+**This is mandatory, not optional.** After implementation, write tests for every new or modified behavior. Follow the project's testing conventions discovered from `{project_context}`, CLAUDE.md, the existing test suite, and any configured test tooling.
-1. **Discover conventions.** Identify the project's test framework, file-naming patterns, and test directory structure from existing tests. If no existing tests exist, use the project's language-idiomatic defaults.
+1. **Discover conventions.** Identify the project's test framework, file-naming patterns, and test directory structure in this order:
+   - First, inspect existing tests.
+   - If existing tests are missing or insufficient, inspect the repo's configured test tooling: test scripts, dependencies/devDependencies, and test config files (for example `package.json`, `pytest.ini`, `pyproject.toml`, `jest.config.*`, `vitest.config.*`, `mocha` config, `rspec` config, `go test` conventions, etc.).
+   - Only if neither existing tests nor configured tooling establish conventions should you fall back to the project's language-idiomatic defaults.
2. **Write tests.** Cover happy-path behavior, edge cases, and regressions.
3. **Run tests.** All new and existing tests must pass. Fix failures before proceeding.
-If the project has no test infrastructure at all (no test framework, no test directory, no test scripts), skip — but this is the only acceptable reason to skip tests.
+If the project has no test infrastructure at all (no existing tests, no test framework, no relevant test config, no test directory, and no test scripts), skip — but this is the only acceptable reason to skip tests. Record this skip reason explicitly in the final summary output.
### Review
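For the new skip-reason requirement, a final summary entry might look something like this (hypothetical wording; the commit requires the reason to be recorded, not any particular format):

```
Tests: skipped. Reason: no test infrastructure found (no existing tests, no test framework in dependencies or config, no test directory, no test scripts).
```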