Compare commits: 46 commits (`9542a45f97...dfcaa9d25a`)
2. **TEA-only (Standalone)**
   - Use TEA on a non-BMad project. Bring your own requirements, acceptance criteria, and environments.
   - Typical sequence: `*test-design` (system or epic) -> `*atdd` and/or `*automate` -> optional `*test-review` -> `*trace` for coverage and gate decisions.
   - Run `*framework` or `*ci` only if you want TEA to scaffold the harness or pipeline; they work best after you decide the stack/architecture.
3. **Integrated: Greenfield - BMad Method (Simple/Standard Work)**
   - Phase 3: system-level `*test-design`, then `*framework` and `*ci`.

If you are unsure, default to the integrated path for your track and adjust later.

## TEA Command Catalog

| Command | Primary Outputs | Notes | With Playwright MCP Enhancements |
| -------------- | --------------------------------------------------------------------------------------------- | ---------------------------------------------------- | ------------------------------------------------------------------------------------------------------------ |
| `*framework` | Playwright/Cypress scaffold, `.env.example`, `.nvmrc`, sample specs | Use when no production-ready harness exists | - |
| `*ci` | CI workflow, selective test scripts, secrets checklist | Platform-aware (GitHub Actions default) | - |
| `*test-design` | Combined risk assessment, mitigation plan, and coverage strategy | Risk scoring + optional exploratory mode | **+ Exploratory**: Interactive UI discovery with browser automation (uncover actual functionality) |
| `*atdd` | Failing acceptance tests + implementation checklist | TDD red phase + optional recording mode | **+ Recording**: AI generation verified with live browser (accurate selectors from real DOM) |
| `*automate` | Prioritized specs, fixtures, README/script updates, DoD summary | Optional healing/recording, avoid duplicate coverage | **+ Healing**: Pattern fixes enhanced with visual debugging + **+ Recording**: AI verified with live browser |
| `*test-review` | Test quality review report with 0-100 score, violations, fixes | Reviews tests against knowledge base patterns | - |
| `*nfr-assess` | NFR assessment report with actions | Focus on security/performance/reliability | - |
| `*trace` | Phase 1: Coverage matrix, recommendations. Phase 2: Gate decision (PASS/CONCERNS/FAIL/WAIVED) | Two-phase workflow: traceability + gate decision | - |

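To make the `*trace` gate decision concrete, it can be pictured as a small decision function. This is an illustrative sketch only — the names `coverageRatio`, `openRisks`, and `waiver`, and the thresholds, are assumptions for illustration, not TEA's actual output schema:

```typescript
// Illustrative sketch of a *trace Phase 2 gate decision.
// Field names and thresholds are assumptions, not TEA's real schema.
type Gate = "PASS" | "CONCERNS" | "FAIL" | "WAIVED";

interface TraceInput {
  coverageRatio: number; // requirements with passing tests / total requirements
  openRisks: number;     // unmitigated high-severity risks from test-design
  waiver?: string;       // reason, if the team explicitly waives the gate
}

function gateDecision({ coverageRatio, openRisks, waiver }: TraceInput): Gate {
  if (waiver) return "WAIVED";                                  // documented waiver wins
  if (coverageRatio < 0.5 || openRisks > 2) return "FAIL";      // clearly not ready
  if (coverageRatio < 0.9 || openRisks > 0) return "CONCERNS";  // ship with caveats
  return "PASS";
}

console.log(gateDecision({ coverageRatio: 0.95, openRisks: 0 })); // PASS
console.log(gateDecision({ coverageRatio: 0.8, openRisks: 1 }));  // CONCERNS
```

The point of the sketch is the shape of the workflow: Phase 1 produces the coverage inputs, Phase 2 turns them into a single auditable decision.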
## TEA Workflow Lifecycle

**Phase Numbering Note:** BMad uses a 4-phase methodology with an optional Phase 1 and a documentation prerequisite:

- **Documentation** (Optional for brownfield): Prerequisite using `*document-project`
- **Phase 1** (Optional): Discovery/Analysis (`*brainstorm`, `*research`, `*product-brief`)
- **Phase 2** (Required): Planning (`*prd` creates PRD with FRs/NFRs)
- **Phase 3** (Track-dependent): Solutioning (`*architecture` → `*test-design` (system-level) → `*create-epics-and-stories` → TEA: `*framework`, `*ci` → `*implementation-readiness`)
- **Phase 4** (Required): Implementation (`*sprint-planning` → per-epic: `*test-design` → per-story: dev workflows)

TEA integrates into the BMad development lifecycle during Solutioning (Phase 3) and Implementation (Phase 4):

```mermaid
graph TB
  style Waived fill:#9c27b0,stroke:#4a148c,stroke-width:3px,color:#000
```

**TEA workflows:** `*framework` and `*ci` run once in Phase 3 after architecture. `*test-design` is **dual-mode**:

- **System-level (Phase 3):** Run immediately after architecture/ADR drafting to produce `test-design-system.md` (testability review, ADR → test mapping, Architecturally Significant Requirements (ASRs), environment needs). Feeds the implementation-readiness gate.
- **Epic-level (Phase 4):** Run per-epic to produce `test-design-epic-N.md` (risk, priorities, coverage plan).

The Quick Flow track skips Phases 1 and 3.
BMad Method and Enterprise use all phases based on project needs.
When an ADR or architecture draft is produced, run `*test-design` in **system-level** mode before the implementation-readiness gate. This ensures the ADR has an attached testability review and ADR → test mapping. Keep the test-design updated if ADRs change.

## Why TEA Is Different from Other BMM Agents

TEA spans multiple phases (Phase 3, Phase 4, and the release gate). Most BMM agents operate in a single phase. That multi-phase role is paired with a dedicated testing knowledge base so standards stay consistent across projects.

### TEA's 8 Workflows Across Phases

**Standard agents**: 1-3 workflows per phase
**TEA**: 8 workflows across Phase 3, Phase 4, and Release Gate

| Phase | TEA Workflows | Frequency | Purpose |
| ----------- | --------------------------------------------------------- | ---------------- | -------------------------------------------------------- |
| **Phase 2** | (none) | - | Planning phase - PM defines requirements |
| **Phase 3** | \*test-design (system-level), \*framework, \*ci | Once per project | System testability review and test infrastructure setup |
| **Phase 4** | \*test-design, \*atdd, \*automate, \*test-review, \*trace | Per epic/story | Test planning per epic, then per-story testing |
| **Release** | \*nfr-assess, \*trace (Phase 2: gate) | Per epic/release | Go/no-go decision |

TEA uniquely requires:

- **Extensive domain knowledge**: Test patterns, CI/CD, fixtures, and quality practices
- **Cross-cutting concerns**: Standards that apply across all BMad projects (not just PRDs or stories)
- **Optional integrations**: Playwright-utils and MCP enhancements

This architecture lets TEA maintain consistent, production-ready testing patterns while operating across multiple phases.

## Track Cheat Sheets (Condensed)

These cheat sheets map TEA workflows to the **BMad Method and Enterprise tracks** across the **4-Phase Methodology** (Phase 1: Analysis, Phase 2: Planning, Phase 3: Solutioning, Phase 4: Implementation).

**Note:** The Quick Flow track typically doesn't require TEA (covered in Overview). These cheat sheets focus on BMad Method and Enterprise tracks where TEA adds value.

**Legend for Track Deltas:**

| **Phase 4**: Story Review | Execute `*test-review` (optional), re-run `*trace` | Address recommendations, update code/tests | Quality report, refreshed coverage matrix |
| **Phase 4**: Release Gate | (Optional) `*test-review` for final audit, Run `*trace` (Phase 2) | Confirm Definition of Done, share release notes | Quality audit, Gate YAML + release summary |

**Key notes:**

- Run `*framework` and `*ci` once in Phase 3 after architecture.
- Run `*test-design` per epic in Phase 4; use `*atdd` before dev when helpful.
- Use `*trace` for gate decisions; `*test-review` is an optional audit.

### Brownfield - BMad Method or Enterprise (Simple or Complex)

**Planning Tracks:** BMad Method or Enterprise Method
**Use Case:** Existing codebases: simple additions (BMad Method) or complex enterprise requirements (Enterprise Method)

**🔄 Brownfield Deltas from Greenfield:**

| **Phase 4**: Story Review | Apply `*test-review` (optional), re-run `*trace`, ➕ `*nfr-assess` if needed | Resolve gaps, update docs/tests | Quality report, refreshed coverage matrix, NFR report |
| **Phase 4**: Release Gate | (Optional) `*test-review` for final audit, Run `*trace` (Phase 2) | Capture sign-offs, share release notes | Quality audit, Gate YAML + release summary |

**Key notes:**

- Start with `*trace` in Phase 2 to baseline coverage.
- Focus `*test-design` on regression hotspots and integration risk.
- Run `*nfr-assess` before the gate if it wasn't done earlier.

### Greenfield - Enterprise Method (Enterprise/Compliance Work)

| **Phase 4**: Story Dev | (Optional) `*atdd`, `*automate`, `*test-review`, `*trace` per story | SM `*create-story`, DEV implements | Tests, fixtures, quality reports, coverage matrices |
| **Phase 4**: Release Gate | Final `*test-review` audit, Run `*trace` (Phase 2), 📦 archive artifacts | Capture sign-offs, 📦 compliance evidence | Quality audit, updated assessments, gate YAML, 📦 audit trail |

**Key notes:**

- Run `*nfr-assess` early in Phase 2.
- `*test-design` emphasizes compliance, security, and performance alignment.
- Archive artifacts at the release gate for audits.

**Related how-to guides:**

- [How to Run Test Design](/docs/how-to/workflows/run-test-design.md)
- [How to Set Up a Test Framework](/docs/how-to/workflows/setup-test-framework.md)

## Optional Integrations

### Playwright Utils (`@seontechnologies/playwright-utils`)

Production-ready fixtures and utilities that enhance TEA workflows.

- Install: `npm install -D @seontechnologies/playwright-utils`
- Impacts: `*framework`, `*atdd`, `*automate`, `*test-review`, `*ci`
- Utilities include: api-request, auth-session, network-recorder, intercept-network-call, recurse, log, file-utils, burn-in, network-error-monitor, fixtures-composition

> Note: Playwright Utils is enabled via the installer. Only set `tea_use_playwright_utils` in `_bmad/bmm/config.yaml` if you need to override the installer choice.
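To show the kind of pattern these utilities cover, here is a minimal standalone polling helper in the spirit of `recurse` (retry an async check until a predicate passes or a timeout elapses). This is a sketch of the idea only — `pollUntil` and its options are invented names, not the package's actual API; see the package docs for real signatures:

```typescript
// Minimal polling helper illustrating the `recurse` idea. This is a sketch,
// not the actual @seontechnologies/playwright-utils API.
async function pollUntil<T>(
  fetchValue: () => Promise<T>,
  predicate: (value: T) => boolean,
  { timeoutMs = 5000, intervalMs = 100 } = {}
): Promise<T> {
  const deadline = Date.now() + timeoutMs;
  for (;;) {
    const value = await fetchValue();
    if (predicate(value)) return value; // condition met: hand back the value
    if (Date.now() >= deadline) {
      throw new Error(`pollUntil: condition not met within ${timeoutMs}ms`);
    }
    await new Promise((resolve) => setTimeout(resolve, intervalMs)); // wait, then retry
  }
}

// Example: poll an async counter until it reaches 3.
let calls = 0;
pollUntil(async () => ++calls, (n) => n >= 3, { timeoutMs: 1000, intervalMs: 10 })
  .then((n) => console.log(`resolved after ${n} calls`)); // logs "resolved after 3 calls"
```

The value of shipping this as a shared fixture rather than ad-hoc loops is consistent timeouts and error messages across every spec that waits on eventual consistency.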
## TEA Command Catalog
|
### Playwright MCP Enhancements
|
||||||
|
|
||||||
| Command | Primary Outputs | Notes | With Playwright MCP Enhancements |
|
Live browser verification for test design and automation.
|
||||||
| -------------- | --------------------------------------------------------------------------------------------- | ---------------------------------------------------- | ------------------------------------------------------------------------------------------------------------ |
|
|
||||||
| `*framework` | Playwright/Cypress scaffold, `.env.example`, `.nvmrc`, sample specs | Use when no production-ready harness exists | - |
|
|
||||||
| `*ci` | CI workflow, selective test scripts, secrets checklist | Platform-aware (GitHub Actions default) | - |
|
|
||||||
| `*test-design` | Combined risk assessment, mitigation plan, and coverage strategy | Risk scoring + optional exploratory mode | **+ Exploratory**: Interactive UI discovery with browser automation (uncover actual functionality) |
|
|
||||||
| `*atdd` | Failing acceptance tests + implementation checklist | TDD red phase + optional recording mode | **+ Recording**: AI generation verified with live browser (accurate selectors from real DOM) |
|
|
||||||
| `*automate` | Prioritized specs, fixtures, README/script updates, DoD summary | Optional healing/recording, avoid duplicate coverage | **+ Healing**: Pattern fixes enhanced with visual debugging + **+ Recording**: AI verified with live browser |
|
|
||||||
| `*test-review` | Test quality review report with 0-100 score, violations, fixes | Reviews tests against knowledge base patterns | - |
|
|
||||||
| `*nfr-assess` | NFR assessment report with actions | Focus on security/performance/reliability | - |
|
|
||||||
| `*trace` | Phase 1: Coverage matrix, recommendations. Phase 2: Gate decision (PASS/CONCERNS/FAIL/WAIVED) | Two-phase workflow: traceability + gate decision | - |
|
|
||||||
|
|
||||||
## Playwright Utils Integration
|
|
||||||
|
|
||||||
TEA optionally integrates with `@seontechnologies/playwright-utils`, an open-source library providing fixture-based utilities for Playwright tests. This integration enhances TEA's test generation and review workflows with production-ready patterns.
|
|
||||||
|
|
||||||
<details>
|
|
||||||
<summary><strong>Installation & Configuration</strong></summary>
|
|
||||||
|
|
||||||
**Package**: `@seontechnologies/playwright-utils` ([npm](https://www.npmjs.com/package/@seontechnologies/playwright-utils) | [GitHub](https://github.com/seontechnologies/playwright-utils))
|
|
||||||
|
|
||||||
**Install**: `npm install -D @seontechnologies/playwright-utils`
|
|
||||||
|
|
||||||
**Enable during BMAD installation** by answering "Yes" when prompted, or manually set `tea_use_playwright_utils: true` in `_bmad/bmm/config.yaml`.
|
|
||||||
|
|
||||||
**To disable**: Set `tea_use_playwright_utils: false` in `_bmad/bmm/config.yaml`.
|
|
||||||
|
|
||||||
</details>
|
|
||||||
|
|
||||||
<details>
|
|
||||||
<summary><strong>How Playwright Utils Enhances TEA Workflows</strong></summary>
|
|
||||||
|
|
||||||
1. `*framework`:
|
|
||||||
- Default: Basic Playwright scaffold
|
|
||||||
- **+ playwright-utils**: Scaffold with api-request, network-recorder, auth-session, burn-in, network-error-monitor fixtures pre-configured
|
|
||||||
|
|
||||||
Benefit: Production-ready patterns from day one
|
|
||||||
|
|
||||||
2. `*automate`, `*atdd`:
|
|
||||||
- Default: Standard test patterns
|
|
||||||
- **+ playwright-utils**: Tests using api-request (schema validation), intercept-network-call (mocking), recurse (polling), log (structured logging), file-utils (CSV/PDF)
|
|
||||||
|
|
||||||
Benefit: Advanced patterns without boilerplate
|
|
||||||
|
|
||||||
3. `*test-review`:
|
|
||||||
- Default: Reviews against core knowledge base (22 fragments)
|
|
||||||
- **+ playwright-utils**: Reviews against expanded knowledge base (33 fragments: 22 core + 11 playwright-utils)
|
|
||||||
|
|
||||||
Benefit: Reviews include fixture composition, auth patterns, network recording best practices
|
|
||||||
|
|
||||||
4. `*ci`:
|
|
||||||
- Default: Standard CI workflow
|
|
||||||
- **+ playwright-utils**: CI workflow with burn-in script (smart test selection) and network-error-monitor integration
|
|
||||||
|
|
||||||
Benefit: Faster CI feedback, HTTP error detection
|
|
||||||
|
|
||||||
**Utilities available** (10 total): api-request, network-recorder, auth-session, intercept-network-call, recurse, log, file-utils, burn-in, network-error-monitor, fixtures-composition
|
|
||||||
|
|
||||||
</details>
|
|
||||||
|
|
||||||
## Playwright MCP Enhancements

TEA can leverage Playwright MCP servers to enhance test generation with live browser verification. MCP provides interactive capabilities on top of TEA's default AI-based approach.

<details>
<summary><strong>MCP Server Configuration</strong></summary>

**Two Playwright MCP servers** (actively maintained, continuously updated):
- `playwright` - Browser automation (`npx @playwright/mcp@latest`)
- `playwright-test` - Test runner with failure analysis (`npx playwright run-test-mcp-server`)

**Configuration example**:

```json
{
@ -447,29 +317,8 @@ TEA can leverage Playwright MCP servers to enhance test generation with live bro
}
```
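The body of the example above is elided in this hunk; a hypothetical minimal config following the common MCP client shape (key names are an assumption - only the two server commands come from the list above):

```json
{
  "mcpServers": {
    "playwright": {
      "command": "npx",
      "args": ["@playwright/mcp@latest"]
    },
    "playwright-test": {
      "command": "npx",
      "args": ["playwright", "run-test-mcp-server"]
    }
  }
}
```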
- Helps `*test-design` validate actual UI behavior.
- Helps `*atdd` and `*automate` verify selectors against the live DOM.
- Enhances healing with `browser_snapshot`, console, network, and locator tools.

**To disable**: set `tea_use_mcp_enhancements: false` in `_bmad/bmm/config.yaml`, or remove the MCPs from your IDE config.

</details>
<details>
<summary><strong>How MCP Enhances TEA Workflows</strong></summary>
1. `*test-design`:
   - Default: Analysis + documentation
   - **+ MCP**: Interactive UI discovery with `browser_navigate`, `browser_click`, `browser_snapshot`, behavior observation

   Benefit: Discover actual functionality, edge cases, undocumented features

2. `*atdd`, `*automate`:
   - Default: Infers selectors and interactions from requirements and knowledge fragments
   - **+ MCP**: Generates tests, **then** verifies with `generator_setup_page` and `browser_*` tools, validating against the live app

   Benefit: Accurate selectors from the real DOM, verified behavior, refined test code

3. `*automate` (healing mode):
   - Default: Pattern-based fixes from error messages + knowledge fragments
   - **+ MCP**: Pattern fixes **enhanced with** `browser_snapshot`, `browser_console_messages`, `browser_network_requests`, `browser_generate_locator`

   Benefit: Visual failure context, live DOM inspection, root-cause discovery

</details>
@ -1,7 +1,7 @@

<!-- if possible, run this in a separate subagent or process with read access to the project,
but no context except the content to review -->

<task id="_bmad/core/tasks/review-adversarial-general.xml" name="Adversarial Review">
<objective>Cynically review content and produce findings</objective>

<inputs>
@ -1,23 +0,0 @@
# Senior Developer Review - Validation Checklist

- [ ] Story file loaded from `{{story_path}}`
- [ ] Story Status verified as reviewable (review)
- [ ] Epic and Story IDs resolved ({{epic_num}}.{{story_num}})
- [ ] Story Context located or warning recorded
- [ ] Epic Tech Spec located or warning recorded
- [ ] Architecture/standards docs loaded (as available)
- [ ] Tech stack detected and documented
- [ ] MCP doc search performed (or web fallback) and references captured
- [ ] Acceptance Criteria cross-checked against implementation
- [ ] File List reviewed and validated for completeness
- [ ] Tests identified and mapped to ACs; gaps noted
- [ ] Code quality review performed on changed files
- [ ] Security review performed on changed files and dependencies
- [ ] Outcome decided (Approve/Changes Requested/Blocked)
- [ ] Review notes appended under "Senior Developer Review (AI)"
- [ ] Change Log updated with review entry
- [ ] Status updated according to settings (if enabled)
- [ ] Sprint status synced (if sprint tracking enabled)
- [ ] Story saved successfully

_Reviewer: {{user_name}} on {{date}}_
@ -1,227 +0,0 @@
<workflow>

<critical>The workflow execution engine is governed by: {project-root}/_bmad/core/tasks/workflow.xml</critical>
<critical>You MUST have already loaded and processed: {installed_path}/workflow.yaml</critical>
<critical>Communicate all responses in {communication_language} and language MUST be tailored to {user_skill_level}</critical>
<critical>Generate all documents in {document_output_language}</critical>

<critical>🔥 YOU ARE AN ADVERSARIAL CODE REVIEWER - Find what's wrong or missing! 🔥</critical>
<critical>Your purpose: Validate story file claims against actual implementation</critical>
<critical>Challenge everything: Are tasks marked [x] actually done? Are ACs really implemented?</critical>
<critical>Find a minimum of 3-10 specific issues in every review - no lazy "looks good" reviews - YOU are so much better than the dev agent that wrote this slop</critical>
<critical>Read EVERY file in the File List - verify implementation against story requirements</critical>
<critical>Tasks marked complete but not done = CRITICAL finding</critical>
<critical>Acceptance Criteria not implemented = HIGH severity finding</critical>
<critical>Do not review files that are not part of the application's source code. Always exclude the _bmad/ and _bmad-output/ folders from the review. Always exclude IDE and CLI configuration folders like .cursor/ and .windsurf/ and .claude/</critical>
<step n="1" goal="Load story and discover changes">
<action>Use provided {{story_path}} or ask user which story file to review</action>
<action>Read COMPLETE story file</action>
<action>Set {{story_key}} = extracted key from filename (e.g., "1-2-user-authentication.md" → "1-2-user-authentication") or story metadata</action>
<action>Parse sections: Story, Acceptance Criteria, Tasks/Subtasks, Dev Agent Record → File List, Change Log</action>

<!-- Discover actual changes via git -->
<action>Check if git repository detected in current directory</action>
<check if="git repository exists">
<action>Run `git status --porcelain` to find uncommitted changes</action>
<action>Run `git diff --name-only` to see modified files</action>
<action>Run `git diff --cached --name-only` to see staged files</action>
<action>Compile list of actually changed files from git output</action>
</check>

<!-- Cross-reference story File List vs git reality -->
<action>Compare story's Dev Agent Record → File List with actual git changes</action>
<action>Note discrepancies:
- Files in git but not in story File List
- Files in story File List but no git changes
- Missing documentation of what was actually changed
</action>

<invoke-protocol name="discover_inputs" />
<action>Load {project_context} for coding standards (if exists)</action>
</step>
<step n="2" goal="Build review attack plan">
<action>Extract ALL Acceptance Criteria from story</action>
<action>Extract ALL Tasks/Subtasks with completion status ([x] vs [ ])</action>
<action>From Dev Agent Record → File List, compile list of claimed changes</action>

<action>Create review plan:
1. **AC Validation**: Verify each AC is actually implemented
2. **Task Audit**: Verify each [x] task is really done
3. **Code Quality**: Security, performance, maintainability
4. **Test Quality**: Real tests vs placeholder bullshit
</action>
</step>
<step n="3" goal="Execute adversarial review">
<critical>VALIDATE EVERY CLAIM - Check git reality vs story claims</critical>

<!-- Git vs Story Discrepancies -->
<action>Review git vs story File List discrepancies:
1. **Files changed but not in story File List** → MEDIUM finding (incomplete documentation)
2. **Story lists files but no git changes** → HIGH finding (false claims)
3. **Uncommitted changes not documented** → MEDIUM finding (transparency issue)
</action>

<!-- Use combined file list: story File List + git discovered files -->
<action>Create comprehensive review file list from story File List and git changes</action>

<!-- AC Validation -->
<action>For EACH Acceptance Criterion:
1. Read the AC requirement
2. Search implementation files for evidence
3. Determine: IMPLEMENTED, PARTIAL, or MISSING
4. If MISSING/PARTIAL → HIGH SEVERITY finding
</action>

<!-- Task Completion Audit -->
<action>For EACH task marked [x]:
1. Read the task description
2. Search files for evidence it was actually done
3. **CRITICAL**: If marked [x] but NOT DONE → CRITICAL finding
4. Record specific proof (file:line)
</action>

<!-- Code Quality Deep Dive -->
<action>For EACH file in comprehensive review list:
1. **Security**: Look for injection risks, missing validation, auth issues
2. **Performance**: N+1 queries, inefficient loops, missing caching
3. **Error Handling**: Missing try/catch, poor error messages
4. **Code Quality**: Complex functions, magic numbers, poor naming
5. **Test Quality**: Are tests real assertions or placeholders?
</action>

<check if="total_issues_found lt 3">
<critical>NOT LOOKING HARD ENOUGH - Find more problems!</critical>
<action>Re-examine code for:
- Edge cases and null handling
- Architecture violations
- Documentation gaps
- Integration issues
- Dependency problems
- Git commit message quality (if applicable)
</action>
<action>Find at least 3 more specific, actionable issues</action>
</check>
</step>
<step n="4" goal="Present findings and fix them">
<action>Categorize findings: HIGH (must fix), MEDIUM (should fix), LOW (nice to fix)</action>
<action>Set {{fixed_count}} = 0</action>
<action>Set {{action_count}} = 0</action>

<output>**🔥 CODE REVIEW FINDINGS, {user_name}!**

**Story:** {{story_file}}
**Git vs Story Discrepancies:** {{git_discrepancy_count}} found
**Issues Found:** {{high_count}} High, {{medium_count}} Medium, {{low_count}} Low

## 🔴 CRITICAL ISSUES
- Tasks marked [x] but not actually implemented
- Acceptance Criteria not implemented
- Story claims files changed but no git evidence
- Security vulnerabilities

## 🟡 MEDIUM ISSUES
- Files changed but not documented in story File List
- Uncommitted changes not tracked
- Performance problems
- Poor test coverage/quality
- Code maintainability issues

## 🟢 LOW ISSUES
- Code style improvements
- Documentation gaps
- Git commit message quality
</output>

<ask>What should I do with these issues?

1. **Fix them automatically** - I'll update the code and tests
2. **Create action items** - Add to story Tasks/Subtasks for later
3. **Show me details** - Deep dive into specific issues

Choose [1], [2], or specify which issue to examine:</ask>

<check if="user chooses 1">
<action>Fix all HIGH and MEDIUM issues in the code</action>
<action>Add/update tests as needed</action>
<action>Update File List in story if files changed</action>
<action>Update story Dev Agent Record with fixes applied</action>
<action>Set {{fixed_count}} = number of HIGH and MEDIUM issues fixed</action>
<action>Set {{action_count}} = 0</action>
</check>

<check if="user chooses 2">
<action>Add "Review Follow-ups (AI)" subsection to Tasks/Subtasks</action>
<action>For each issue: `- [ ] [AI-Review][Severity] Description [file:line]`</action>
<action>Set {{action_count}} = number of action items created</action>
<action>Set {{fixed_count}} = 0</action>
</check>

<check if="user chooses 3">
<action>Show detailed explanation with code examples</action>
<action>Return to fix decision</action>
</check>
</step>
<step n="5" goal="Update story status and sync sprint tracking">
<!-- Determine new status based on review outcome -->
<check if="all HIGH and MEDIUM issues fixed AND all ACs implemented">
<action>Set {{new_status}} = "done"</action>
<action>Update story Status field to "done"</action>
</check>
<check if="HIGH or MEDIUM issues remain OR ACs not fully implemented">
<action>Set {{new_status}} = "in-progress"</action>
<action>Update story Status field to "in-progress"</action>
</check>
<action>Save story file</action>

<!-- Determine sprint tracking status -->
<check if="{sprint_status} file exists">
<action>Set {{current_sprint_status}} = "enabled"</action>
</check>
<check if="{sprint_status} file does NOT exist">
<action>Set {{current_sprint_status}} = "no-sprint-tracking"</action>
</check>

<!-- Sync sprint-status.yaml when story status changes (only if sprint tracking enabled) -->
<check if="{{current_sprint_status}} != 'no-sprint-tracking'">
<action>Load the FULL file: {sprint_status}</action>
<action>Find development_status key matching {{story_key}}</action>

<check if="{{new_status}} == 'done'">
<action>Update development_status[{{story_key}}] = "done"</action>
<action>Save file, preserving ALL comments and structure</action>
<output>✅ Sprint status synced: {{story_key}} → done</output>
</check>

<check if="{{new_status}} == 'in-progress'">
<action>Update development_status[{{story_key}}] = "in-progress"</action>
<action>Save file, preserving ALL comments and structure</action>
<output>🔄 Sprint status synced: {{story_key}} → in-progress</output>
</check>

<check if="story key not found in sprint status">
<output>⚠️ Story file updated, but sprint-status sync failed: {{story_key}} not found in sprint-status.yaml</output>
</check>
</check>

<check if="{{current_sprint_status}} == 'no-sprint-tracking'">
<output>ℹ️ Story status updated (no sprint tracking configured)</output>
</check>

<output>**✅ Review Complete!**

**Story Status:** {{new_status}}
**Issues Fixed:** {{fixed_count}}
**Action Items Created:** {{action_count}}

{{#if new_status == "done"}}Code review complete!{{else}}Address the action items and continue development.{{/if}}
</output>
</step>

</workflow>
@ -0,0 +1,122 @@
---
name: 'step-01-load-story'
description: "Compare story's file list against git changes"
---

# Step 1: Load Story and Discover Changes

---

## STATE VARIABLES (capture now, persist throughout)

These variables MUST be set in this step and available to all subsequent steps:

- `story_path` - Path to the story file being reviewed
- `story_key` - Story identifier (e.g., "1-2-user-authentication")
- `story_content` - Complete, unmodified file content from story_path (loaded in substep 2)
- `story_file_list` - Files claimed in story's Dev Agent Record → File List
- `git_changed_files` - Files actually changed according to git
- `git_discrepancies` - Mismatches between `story_file_list` and `git_changed_files`

---

## EXECUTION SEQUENCE
### 1. Identify Story

Ask user: "Which story would you like to review?"

**Try input as direct file path first:**
If input resolves to an existing file:
- Verify it's in {sprint_status} with status `review` or `done`
- If verified → set `story_path` to that file path
- If NOT verified → warn the user that the file is not in {sprint_status} (or has the wrong status). Ask: "Continue anyway?"
  - If yes → set `story_path`
  - If no → return to the user prompt (ask "Which story would you like to review?" again)

**Search {sprint_status}** (if input is not a direct file):
Search for stories with status `review` or `done`. Match by priority:
1. Story number resembles input closely enough (e.g., "1-2" matches "1 2", "1.2", "one dash two", "one two"; "1-32" matches "one thirty two"). Do NOT match if numbers differ (e.g., "1-33" does not match "1-32")
2. Exact story name/key (e.g., "1-2-user-auth-api")
3. Story name/title resembles input closely enough
4. Story description resembles input closely enough

**Resolution:**
- **Single match**: Confident. Set `story_path`, proceed to substep 2
- **Multiple matches**: Uncertain. Present all candidates to the user. Wait for selection. Set `story_path`, proceed to substep 2
- **No match**: Ask the user to clarify or provide the full story path. Return to the user prompt (ask "Which story would you like to review?" again)
### 2. Load Story File

**Load file content:**
Read the complete contents of {story_path} and assign to `story_content` WITHOUT filtering, truncating, or summarizing. If {story_path} cannot be read, is empty, or obviously does not contain the story: report the error to the user and HALT the workflow.

**Extract story identifier:**
Verify the filename ends with the `.md` extension. Remove `.md` to get `story_key` (e.g., "1-2-user-authentication.md" → "1-2-user-authentication"). If the filename doesn't end with `.md` or the result is empty: report the error to the user and HALT the workflow.
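A minimal sketch of the extraction rule in shell (the path is illustrative; the `.md` guard is kept explicit because `basename NAME .md` only strips the suffix when present):

```shell
story_path="stories/1-2-user-authentication.md"   # illustrative path

# Guard: filename must end in .md and leave a non-empty key
case "$(basename "$story_path")" in
  *.md) story_key=$(basename "$story_path" .md) ;;
  *)    echo "error: story file must end in .md" >&2; exit 1 ;;
esac

[ -n "$story_key" ] || { echo "error: empty story key" >&2; exit 1; }
echo "$story_key"   # -> 1-2-user-authentication
```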
### 3. Extract File List from Story

Extract `story_file_list` from the Dev Agent Record → File List section of {story_content}.

**If Dev Agent Record or File List section not found:** Report to user and set `story_file_list` = NO_FILE_LIST.
### 4. Discover Git Changes

Check if a git repository exists.

**If NOT a git repo:** Set `git_changed_files` = NO_GIT, `git_discrepancies` = NO_GIT. Skip to substep 5.

**If git repo detected:**

```bash
git status --porcelain
git diff -M --name-only
git diff -M --cached --name-only
```

If any git command fails: Report the error to the user and HALT the workflow.

Compile `git_changed_files` = union of modified, staged, new, deleted, and renamed files.
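The union can be assembled by merging the three listings; a sketch in shell (the `status_out`/`diff_out`/`cached_out` values are illustrative stand-ins for real git output - `--porcelain` prefixes each path with a two-character status code plus a space, so the first three columns are stripped before merging):

```shell
# Stand-ins for the three git listings; in the real step these
# come from the git commands above.
status_out='?? src/new.ts
 M src/app.ts'
diff_out='src/app.ts'
cached_out='src/api.ts'

# git_changed_files = deduplicated union of all three listings
git_changed_files=$(
  { printf '%s\n' "$status_out" | cut -c4-;
    printf '%s\n' "$diff_out";
    printf '%s\n' "$cached_out"; } | sort -u
)
echo "$git_changed_files"
```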
### 5. Cross-Reference Story vs Git

**If {git_changed_files} is empty:**

Ask user: "No git changes detected. Continue anyway?"

- If **no**: HALT the workflow
- If **yes**: Continue to comparison

**Compare {story_file_list} with {git_changed_files}:**

Exclude git-ignored files from the comparison (run `git check-ignore` if needed).

Set `git_discrepancies` with categories:

- **files_in_git_not_story**: Files changed in git but not in the story File List
- **files_in_story_not_git**: Files in the story File List but with no git changes (excluding git-ignored)
- **uncommitted_undocumented**: Uncommitted changes not tracked in the story
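The first two buckets fall out of a set difference; a sketch using `comm(1)` on sorted lists (file names are illustrative - `comm -13` keeps lines unique to the second file, `-23` keeps lines unique to the first):

```shell
# Illustrative stand-ins for story_file_list and git_changed_files
printf 'src/api.ts\nsrc/auth.ts\n' > story_files.txt
printf 'src/auth.ts\nsrc/db.ts\n'  > git_files.txt

comm -13 story_files.txt git_files.txt   # files_in_git_not_story -> src/db.ts
comm -23 story_files.txt git_files.txt   # files_in_story_not_git -> src/api.ts
```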
---

## COMPLETION CHECKLIST

Before proceeding to the next step, verify ALL of the following:

- `story_path` identified and loaded
- `story_key` extracted
- `story_content` captured completely and unmodified
- `story_file_list` compiled from Dev Agent Record (or NO_FILE_LIST if not found)
- `git_changed_files` discovered via git commands (or NO_GIT if not a git repo)
- `git_discrepancies` calculated

**If any criterion is not met:** Report to the user and HALT the workflow.

---

## NEXT STEP DIRECTIVE

**CRITICAL:** When this step completes, explicitly state:

"**NEXT:** Loading `step-02-adversarial-review.md`"
@ -0,0 +1,155 @@
---
name: 'step-02-adversarial-review'
description: 'Lean adversarial review - context-independent diff analysis, no story knowledge'
---

# Step 2: Adversarial Review (Information Asymmetric)

**Goal:** Perform a context-independent adversarial review of the code changes. The reviewer sees ONLY the diff - no story, no ACs, no context about WHY the changes were made.

<critical>Reviewer has FULL repo access but NO knowledge of WHY changes were made</critical>
<critical>DO NOT include story file in prompt - asymmetry is about intent, not visibility</critical>
<critical>This catches issues a fresh reviewer would find that story-biased review might miss</critical>

---

## AVAILABLE STATE

From previous steps:

- `{story_path}`, `{story_key}`
- `{file_list}` - Files listed in story's File List section
- `{git_changed_files}` - Files changed according to git
- `{baseline_commit}` - From story file Dev Agent Record

---

## STATE VARIABLES (capture now)

- `{diff_output}` - Complete diff of changes
- `{asymmetric_findings}` - Findings from adversarial review

---

## EXECUTION SEQUENCE
### 1. Construct Diff

Build the complete diff of all changes for this story.

**Step 1a: Read baseline from story file**

Extract `Baseline Commit` from the story file's Dev Agent Record section.

- If found and not "NO_GIT": use as `{baseline_commit}`
- If "NO_GIT" or missing: proceed to the fallback

**Step 1b: Construct diff (with baseline)**

If `{baseline_commit}` is a valid commit hash:

```bash
git diff {baseline_commit} -- ':!{implementation_artifacts}'
```

This captures all changes (committed + uncommitted) since dev-story started.

**Step 1c: Fallback (no baseline)**

If no baseline is available, review the current state of the files in `{file_list}`:

- Read each file listed in the story's File List section
- Review as full file content (not a diff)

**Include in `{diff_output}`:**

- All modified tracked files (except files in `{implementation_artifacts}` - asymmetry requires hiding intent)
- All new files created for this story
- Full content for new files

**Note:** Do NOT `git add` anything - this is read-only inspection.
### 2. Invoke Adversarial Review

With `{diff_output}` constructed, invoke the review task. If possible, use information asymmetry: run this step, and only this step, in a separate subagent or process with read access to the project but no context except the `{diff_output}`.

```xml
<invoke-task>Review {diff_output} using {project-root}/_bmad/core/tasks/review-adversarial-general.xml</invoke-task>
```

**Platform fallback:** If task invocation is not available, load the task file and execute its instructions inline, passing `{diff_output}` as the content.

The task should review `{diff_output}` and return a list of findings.
### 3. Process Adversarial Findings

Capture the findings from the adversarial review.

**If zero findings:** HALT - this is suspicious. Re-analyze or ask for guidance.

Evaluate severity (Critical, High, Medium, Low) and validity (Real, Noise, Undecided).

Add each finding to `{asymmetric_findings}` (no IDs yet - these are assigned after merge):

```
{
  source: "adversarial",
  severity: "...",
  validity: "...",
  description: "...",
  location: "file:line (if applicable)"
}
```
### 4. Phase 1 Summary

Present the adversarial findings:

```
**Phase 1: Adversarial Review Complete**

**Reviewer Context:** Pure diff review (no story knowledge)
**Findings:** {count}
- Critical: {count}
- High: {count}
- Medium: {count}
- Low: {count}

**Validity Assessment:**
- Real: {count}
- Noise: {count}
- Undecided: {count}

Proceeding to attack plan construction...
```

---
## NEXT STEP DIRECTIVE

**CRITICAL:** When this step completes, explicitly state:

"**NEXT:** Loading `step-03-build-attack-plan.md`"

---

## SUCCESS METRICS

- Diff constructed from the correct source (uncommitted or commits)
- Story file excluded from the diff
- Task invoked with the diff as input
- Adversarial review executed
- Findings captured with severity and validity
- `{asymmetric_findings}` populated
- Phase summary presented
- Explicit NEXT directive provided

## FAILURE MODES

- Including the story file in the diff (breaks asymmetry)
- Skipping the adversarial review entirely
- Accepting zero findings without a halt
- Invoking the task without providing the diff input
- Missing severity/validity classification
- Not storing findings for consolidation
- No explicit NEXT directive at step completion
@ -0,0 +1,147 @@
---
name: 'step-03-build-attack-plan'
description: 'Extract ACs and tasks, create comprehensive review plan for context-aware phase'
---

# Step 3: Build Review Attack Plan

**Goal:** Extract all reviewable items from the story and create the attack plan for the context-aware review phase.

---

## AVAILABLE STATE

From previous steps:

- `{story_path}` - Path to the story file
- `{story_key}` - Story identifier
- `{story_file_list}` - Files claimed in story
- `{git_changed_files}` - Files actually changed (git)
- `{git_discrepancies}` - Differences between claims and reality
- `{asymmetric_findings}` - Findings from Phase 1 (adversarial review)

---

## STATE VARIABLES (capture now)

- `{acceptance_criteria}` - All ACs extracted from story
- `{tasks_with_status}` - All tasks with their [x] or [ ] status
- `{comprehensive_file_list}` - Union of story files + git files
- `{review_attack_plan}` - Structured plan for context-aware phase

---

## EXECUTION SEQUENCE
### 1. Extract Acceptance Criteria

Parse all Acceptance Criteria from the story:

```
{acceptance_criteria} = [
  { id: "AC1", requirement: "...", testable: true/false },
  { id: "AC2", requirement: "...", testable: true/false },
  ...
]
```

Note any ACs that are vague or untestable.
### 2. Extract Tasks with Status

Parse all Tasks/Subtasks with completion markers:

```
{tasks_with_status} = [
  { id: "T1", description: "...", status: "complete" ([x]) or "incomplete" ([ ]) },
  { id: "T1.1", description: "...", status: "complete" or "incomplete" },
  ...
]
```

Flag any tasks marked complete [x] for verification.
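The completion markers can be pulled out mechanically; a shell sketch over a hypothetical Tasks/Subtasks fragment (task names are illustrative):

```shell
# Hypothetical Tasks/Subtasks fragment
printf -- '- [x] T1: add login endpoint\n- [ ] T2: add logout endpoint\n' > tasks.md

grep '^- \[x\]' tasks.md     # tasks claimed complete -> each needs verification
grep -c '^- \[ \]' tasks.md  # count of tasks still open
```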
### 3. Build Comprehensive File List

Merge `{story_file_list}` and `{git_changed_files}`:

```
{comprehensive_file_list} = union of:
- Files in story Dev Agent Record
- Files changed according to git
- Deduped and sorted
```

Exclude from review:

- `_bmad/`, `_bmad-output/`
- `.cursor/`, `.windsurf/`, `.claude/`
- IDE/editor config files
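The merge plus exclusions can be sketched in shell (the two lists are illustrative; the filter pattern mirrors the excluded folders above):

```shell
# Illustrative story + git file lists
printf 'src/app.ts\n_bmad/plan.md\n'          > story_list.txt
printf 'src/app.ts\nsrc/db.ts\n.cursor/cfg\n' > git_list.txt

# Union, deduped and sorted, minus excluded folders
sort -u story_list.txt git_list.txt |
  grep -Ev '^(_bmad|_bmad-output|\.cursor|\.windsurf|\.claude)/'
```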
### 4. Create Review Attack Plan

Structure the `{review_attack_plan}`:

```
PHASE 1: Adversarial Review (Step 2) [COMPLETE - {asymmetric_findings} findings]
├── Fresh code review without story context
│   └── {asymmetric_findings} items to consolidate

PHASE 2: Context-Aware Review (Step 4)
├── Git vs Story Discrepancies
│   └── {git_discrepancies} items
├── AC Validation
│   └── {acceptance_criteria} items to verify
├── Task Completion Audit
│   └── {tasks_with_status} marked [x] to verify
└── Code Quality Review
    └── {comprehensive_file_list} files to review
```
### 5. Preview Attack Plan
|
||||||
|
|
||||||
|
Present to user (brief summary):
|
||||||
|
|
||||||
|
```
|
||||||
|
**Review Attack Plan**
|
||||||
|
|
||||||
|
**Story:** {story_key}
|
||||||
|
|
||||||
|
**Phase 1 (Adversarial - Complete):** {asymmetric_findings count} findings from fresh review
|
||||||
|
**Phase 2 (Context-Aware - Starting):**
|
||||||
|
- ACs to verify: {count}
|
||||||
|
- Tasks marked complete: {count}
|
||||||
|
- Files to review: {count}
|
||||||
|
- Git discrepancies detected: {count}
|
||||||
|
|
||||||
|
Proceeding with context-aware review...
|
||||||
|
```
|
||||||
|
|
||||||
|
---
|
||||||
|
|
||||||
|
## NEXT STEP DIRECTIVE
|
||||||
|
|
||||||
|
**CRITICAL:** When this step completes, explicitly state:
|
||||||
|
|
||||||
|
"**NEXT:** Loading `step-04-context-aware-review.md`"
|
||||||
|
|
||||||
|
---
|
||||||
|
|
||||||
|
## SUCCESS METRICS
|
||||||
|
|
||||||
|
- All ACs extracted with testability assessment
|
||||||
|
- All tasks extracted with completion status
|
||||||
|
- Comprehensive file list built (story + git)
|
||||||
|
- Exclusions applied correctly
|
||||||
|
- Attack plan structured for context-aware phase
|
||||||
|
- Summary presented to user
|
||||||
|
- Explicit NEXT directive provided
|
||||||
|
|
||||||
|
## FAILURE MODES
|
||||||
|
|
||||||
|
- Missing AC extraction
|
||||||
|
- Not capturing task completion status
|
||||||
|
- Forgetting to merge story + git files
|
||||||
|
- Not excluding IDE/config directories
|
||||||
|
- Skipping attack plan structure
|
||||||
|
- No explicit NEXT directive at step completion
|
||||||
|
|
@ -0,0 +1,182 @@
---
name: 'step-04-context-aware-review'
description: 'Story-aware validation: verify ACs, audit task completion, check git discrepancies'
---

# Step 4: Context-Aware Review

**Goal:** Perform story-aware validation - verify AC implementation, audit task completion, review code quality with full story context.

<critical>VALIDATE EVERY CLAIM - Check git reality vs story claims</critical>
<critical>You KNOW the story requirements - use that knowledge to find gaps</critical>

---

## AVAILABLE STATE

From previous steps:

- `{story_path}`, `{story_key}`
- `{story_file_list}`, `{git_changed_files}`, `{git_discrepancies}`
- `{acceptance_criteria}`, `{tasks_with_status}`
- `{comprehensive_file_list}`, `{review_attack_plan}`
- `{asymmetric_findings}` - From Phase 1 (adversarial review)

---

## STATE VARIABLE (capture now)

- `{context_aware_findings}` - All findings from this phase

Initialize `{context_aware_findings}` as an empty list.

---

## EXECUTION SEQUENCE

### 0. Load Planning Context (JIT)

Load planning documents for AC validation against system design:

- **Architecture**: `{planning_artifacts}/*architecture*.md` (or sharded: `{planning_artifacts}/*architecture*/*.md`)
- **UX Design**: `{planning_artifacts}/*ux*.md` (if UI review is relevant)
- **Epic**: `{planning_artifacts}/*epic*/epic-{epic_num}.md` (the epic containing this story)

These provide the design context needed to validate AC implementation against system requirements.
### 1. Git vs Story Discrepancies

Review `{git_discrepancies}` and create findings:

| Discrepancy Type | Severity |
| --- | --- |
| Files changed but not in story File List | Medium |
| Story lists files but no git changes | High |
| Uncommitted changes not documented | Medium |

For each discrepancy, add to `{context_aware_findings}` (no IDs yet - assigned after merge):

```
{
  source: "git-discrepancy",
  severity: "...",
  description: "...",
  evidence: "file: X, git says: Y, story says: Z"
}
```
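A sketch of turning the severity table into finding records; the discrepancy-kind labels and the example paths are illustrative, not names the workflow defines:

```python
# Severity mapping mirrors the table above; the kind labels are illustrative
SEVERITY = {
    "changed-not-listed": "Medium",
    "listed-no-change": "High",
    "uncommitted-undocumented": "Medium",
}

def discrepancy_finding(kind: str, path: str, git_says: str, story_says: str) -> dict:
    """Build a finding record in the shape shown above (no ID yet)."""
    return {
        "source": "git-discrepancy",
        "severity": SEVERITY[kind],
        "description": f"{kind} for {path}",
        "evidence": f"file: {path}, git says: {git_says}, story says: {story_says}",
    }

f = discrepancy_finding("listed-no-change", "src/auth.ts", "no changes", "modified")
print(f["severity"])  # High
```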
### 2. Acceptance Criteria Validation

For EACH AC in `{acceptance_criteria}`:

1. Read the AC requirement
2. Search implementation files in `{comprehensive_file_list}` for evidence
3. Determine status: IMPLEMENTED, PARTIAL, or MISSING
4. If PARTIAL or MISSING → add a High severity finding

Add to `{context_aware_findings}`:

```
{
  source: "ac-validation",
  severity: "High",
  description: "AC {id} not fully implemented: {details}",
  evidence: "Expected: {ac}, Found: {what_was_found}"
}
```

### 3. Task Completion Audit

For EACH task marked [x] in `{tasks_with_status}`:

1. Read the task description
2. Search files for evidence it was actually done
3. **Critical**: If marked [x] but NOT DONE → Critical finding
4. Record specific proof (file:line) if done

Add to `{context_aware_findings}` if the completion claim is false:

```
{
  source: "task-audit",
  severity: "Critical",
  description: "Task marked complete but not implemented: {task}",
  evidence: "Searched: {files}, Found: no evidence of {expected}"
}
```
### 4. Code Quality Review (Context-Aware)

For EACH file in `{comprehensive_file_list}`:

Review with STORY CONTEXT (you know what was supposed to be built):

- **Security**: Is validation missing for AC-specified inputs?
- **Performance**: Are scale requirements mentioned in the story met?
- **Error Handling**: Are the edge cases from the ACs covered?
- **Test Quality**: Do the tests actually verify the ACs, or are they placeholders?
- **Architecture Compliance**: Does the code follow the patterns in the architecture doc?

Add findings to `{context_aware_findings}` with appropriate severity.

### 5. Minimum Finding Check

<critical>If total findings < 3, you are NOT LOOKING HARD ENOUGH</critical>

Re-examine for:

- Edge cases not covered by the implementation
- Documentation gaps
- Integration issues with other components
- Dependency problems
- Missing comments on complex logic
---

## PHASE 2 SUMMARY

Present context-aware findings:

```
**Phase 2: Context-Aware Review Complete**

**Findings:** {count}
- Critical: {count}
- High: {count}
- Medium: {count}
- Low: {count}

Proceeding to findings consolidation...
```

Store `{context_aware_findings}` for consolidation in step 5.

---

## NEXT STEP DIRECTIVE

**CRITICAL:** When this step completes, explicitly state:

"**NEXT:** Loading `step-05-consolidate-findings.md`"

---

## SUCCESS METRICS

- All git discrepancies reviewed and findings created
- Every AC checked for implementation evidence
- Every [x] task verified with proof
- Code quality reviewed with story context
- Minimum 3 findings (push harder if not)
- `{context_aware_findings}` populated
- Phase summary presented
- Explicit NEXT directive provided

## FAILURE MODES

- Accepting "looks good" with < 3 findings
- Not verifying [x] tasks with actual evidence
- Missing AC validation
- Ignoring git discrepancies
- Not storing findings for consolidation
- No explicit NEXT directive at step completion
@ -0,0 +1,158 @@
---
name: 'step-05-consolidate-findings'
description: 'Merge and deduplicate findings from both review phases'
---

# Step 5: Consolidate Findings

**Goal:** Merge findings from adversarial review (Phase 1) and context-aware review (Phase 2), deduplicate, and present a unified findings table.

---

## AVAILABLE STATE

From previous steps:

- `{story_path}`, `{story_key}`
- `{asymmetric_findings}` - Findings from Phase 1 (step 2 - adversarial review)
- `{context_aware_findings}` - Findings from Phase 2 (step 4 - context-aware review)

---

## STATE VARIABLE (capture now)

- `{consolidated_findings}` - Merged, deduplicated findings

---

## EXECUTION SEQUENCE

### 1. Merge All Findings

Combine both finding lists:

```
all_findings = {context_aware_findings} + {asymmetric_findings}
```
### 2. Deduplicate Findings

Identify duplicates (the same underlying issue found by both phases):

**Duplicate Detection Criteria:**

- Same file + same line range
- Same issue type (e.g., both about error handling in the same function)
- Overlapping descriptions

**Resolution Rule:**

Keep the MORE DETAILED version:

- If the context-aware finding has an AC reference → keep that
- If the adversarial finding has better technical detail → keep that
- When in doubt, keep the context-aware version (it has more context)

Note which findings were merged (for transparency in the summary).
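A minimal sketch of this resolution rule, keyed on (location, issue type) and using evidence length as a stand-in for "more detailed"; the field names `location`, `issue_type`, and `evidence` are assumptions about the finding shape:

```python
def dedupe(findings: list[dict]) -> tuple[list[dict], int]:
    """Merge findings sharing (location, issue type), keeping the more
    detailed one; context-aware findings win ties over adversarial ones."""
    def detail(f):
        # Longer evidence first; non-adversarial sources break ties
        return (len(f.get("evidence", "")), f["source"] != "adversarial")
    kept: dict[tuple, dict] = {}
    for f in findings:
        key = (f.get("location"), f.get("issue_type"))
        if key not in kept or detail(f) > detail(kept[key]):
            kept[key] = f
    merged = len(findings) - len(kept)
    return list(kept.values()), merged

findings = [
    {"source": "adversarial", "location": "src/api.ts:45",
     "issue_type": "error-handling", "evidence": "no try/catch"},
    {"source": "ac-validation", "location": "src/api.ts:45",
     "issue_type": "error-handling",
     "evidence": "AC3 requires retry on failure; none found"},
]
deduped, merged_count = dedupe(findings)
print(len(deduped), merged_count)  # 1 1
```

Here the AC-referencing version survives, matching the resolution rule above, and `merged_count` feeds the `{merged_count}` transparency line in the summary.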
### 3. Normalize Severity

Apply a consistent severity scale (Critical, High, Medium, Low).

### 4. Filter Noise

Review adversarial findings marked as Noise:

- If clearly a false positive (e.g., a style preference, not an actual issue) → exclude
- If questionable → keep with Undecided validity
- If context reveals it's actually valid → upgrade to Real

**Do NOT filter:**

- Any Critical or High severity finding
- Any context-aware findings (they have story context)

### 5. Sort and Number Findings

Sort by severity (Critical → High → Medium → Low), then assign IDs: F1, F2, F3, etc.

Build `{consolidated_findings}`:

```markdown
| ID | Severity | Source | Description | Location |
|----|----------|--------|-------------|----------|
| F1 | Critical | task-audit | Task 3 marked [x] but not implemented | src/auth.ts |
| F2 | High | ac-validation | AC2 partially implemented | src/api/*.ts |
| F3 | High | adversarial | Missing error handling in API calls | src/api/client.ts:45 |
| F4 | Medium | git-discrepancy | File changed but not in story | src/utils.ts |
| F5 | Low | adversarial | Magic number should be constant | src/config.ts:12 |
```
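The sort-then-number rule above can be sketched directly (example findings are illustrative):

```python
SEVERITY_ORDER = {"Critical": 0, "High": 1, "Medium": 2, "Low": 3}

def sort_and_number(findings: list[dict]) -> list[dict]:
    """Order by severity, then assign sequential IDs F1, F2, ..."""
    ordered = sorted(findings, key=lambda f: SEVERITY_ORDER[f["severity"]])
    for i, f in enumerate(ordered, 1):
        f["id"] = f"F{i}"
    return ordered

findings = [
    {"severity": "Low", "description": "Magic number should be constant"},
    {"severity": "Critical", "description": "Task marked [x] but not implemented"},
    {"severity": "High", "description": "AC2 partially implemented"},
]
print([(f["id"], f["severity"]) for f in sort_and_number(findings)])
```

Because `sorted` is stable, findings of equal severity keep their merge order, so adversarial and context-aware findings interleave predictably.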
### 6. Present Consolidated Findings

```markdown
**Consolidated Code Review Findings**

**Story:** {story_key}

**Summary:**
- Total findings: {count}
- Critical: {count}
- High: {count}
- Medium: {count}
- Low: {count}

**Deduplication:** {merged_count} duplicate findings merged

---

## Findings by Severity

### Critical (Must Fix)

{list critical findings with full details}

### High (Should Fix)

{list high findings with full details}

### Medium (Consider Fixing)

{list medium findings}

### Low (Nice to Fix)

{list low findings}

---

**Phase Sources:**
- Adversarial (Phase 1): {count} findings
- Context-Aware (Phase 2): {count} findings
```

---

## NEXT STEP DIRECTIVE

**CRITICAL:** When this step completes, explicitly state:

"**NEXT:** Loading `step-06-resolve-and-update.md`"

---

## SUCCESS METRICS

- All findings merged from both phases
- Duplicates identified and resolved (kept the more detailed version)
- Severity normalized consistently
- Noise filtered appropriately (but not excessively)
- Consolidated table created
- `{consolidated_findings}` populated
- Summary presented to user
- Explicit NEXT directive provided

## FAILURE MODES

- Missing findings from either phase
- Not detecting duplicates (double-counting issues)
- Inconsistent severity assignment
- Filtering real issues as noise
- Not storing consolidated findings
- No explicit NEXT directive at step completion
@ -0,0 +1,213 @@
---
name: 'step-06-resolve-and-update'
description: 'Present findings, fix or create action items, update story and sprint status'
---

# Step 6: Resolve Findings and Update Status

**Goal:** Present findings to the user, handle resolution (fixes or action items), and update the story file and sprint status.

---

## AVAILABLE STATE

From previous steps:

- `{story_path}`, `{story_key}`
- `{consolidated_findings}` - Merged findings from step 5
- `sprint_status` = `{implementation_artifacts}/sprint-status.yaml`

---

## STATE VARIABLES (capture now)

- `{fixed_count}` - Number of issues fixed
- `{action_count}` - Number of action items created
- `{new_status}` - Final story status

---

## EXECUTION SEQUENCE

### 1. Present Resolution Options

```markdown
**Code Review Findings for {user_name}**

**Story:** {story_key}
**Total Issues:** {consolidated_findings.count}

{consolidated_findings_table}

---

**What should I do with these issues?**

**[1] Fix them automatically** - I'll update the code and tests
**[2] Create action items** - Add to story Tasks/Subtasks for later
**[3] Walk through** - Discuss each finding individually
**[4] Show details** - Deep dive into specific issues

Choose [1], [2], [3], [4], or specify which issue (e.g., "F3"):
```
### 2. Handle User Choice

**Option [1]: Fix Automatically**

1. For each CRITICAL and HIGH finding:
   - Apply the fix in the code
   - Add/update tests if needed
   - Record what was fixed
2. Update the story Dev Agent Record → File List if files changed
3. Add a "Code Review Fixes Applied" entry to the Change Log
4. Set `{fixed_count}` = number of issues fixed
5. Set `{action_count}` = 0 (LOW findings can become action items)

**Option [2]: Create Action Items**

1. Add a "Review Follow-ups (AI)" subsection to Tasks/Subtasks
2. For each finding:

   ```
   - [ ] [AI-Review][{severity}] {description} [{location}]
   ```

3. Set `{action_count}` = number of action items created
4. Set `{fixed_count}` = 0

**Option [3]: Walk Through**

For each finding in order:

1. Present the finding with full context and a code snippet
2. Ask: **[f]ix now / [s]kip / [d]iscuss more**
3. If fix: Apply the fix immediately, increment `{fixed_count}`
4. If skip: Note it as acknowledged, optionally create an action item
5. If discuss: Provide more detail, then repeat the choice
6. Continue to the next finding

After all findings are processed, summarize what was fixed/skipped.

**Option [4]: Show Details**

1. Present expanded details for the specific finding(s)
2. Return to the resolution choice
### 3. Determine Final Status

Evaluate completion:

**If ALL conditions are met:**

- All CRITICAL issues fixed
- All HIGH issues fixed or have action items
- All ACs verified as implemented

Set `{new_status}` = "done"

**Otherwise:**

Set `{new_status}` = "in-progress"
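The status decision above reduces to three conjunctive checks; a sketch, assuming findings carry `fixed`/`action_item` flags and ACs carry the IMPLEMENTED/PARTIAL/MISSING status from step 4 (field names are illustrative):

```python
def final_status(findings: list[dict], acs: list[dict]) -> str:
    """'done' only when every Critical is fixed, every High is fixed or has
    an action item, and every AC is implemented."""
    crit_ok = all(f.get("fixed") for f in findings if f["severity"] == "Critical")
    high_ok = all(f.get("fixed") or f.get("action_item")
                  for f in findings if f["severity"] == "High")
    acs_ok = all(ac["status"] == "IMPLEMENTED" for ac in acs)
    return "done" if crit_ok and high_ok and acs_ok else "in-progress"

findings = [{"severity": "Critical", "fixed": True},
            {"severity": "High", "fixed": False, "action_item": True}]
acs = [{"status": "IMPLEMENTED"}, {"status": "IMPLEMENTED"}]
print(final_status(findings, acs))  # done
```

Any single PARTIAL AC or unfixed Critical flips the result to "in-progress", which triggers another review cycle.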
### 4. Update Story File

1. Update the story Status field to `{new_status}`
2. Add review notes to the Dev Agent Record:

   ```markdown
   ## Senior Developer Review (AI)

   **Date:** {date}
   **Reviewer:** AI Code Review

   **Findings Summary:**
   - CRITICAL: {count} ({fixed}/{action_items})
   - HIGH: {count} ({fixed}/{action_items})
   - MEDIUM: {count}
   - LOW: {count}

   **Resolution:** {approach_taken}

   **Files Modified:** {list if fixes applied}
   ```

3. Update the Change Log:

   ```markdown
   - [{date}] Code review completed - {outcome_summary}
   ```

4. Save the story file
### 5. Sync Sprint Status

Check if the `{sprint_status}` file exists:

**If it exists:**

1. Load `{sprint_status}`
2. Find `{story_key}` in development_status
3. Update its status to `{new_status}`
4. Save the file, preserving ALL comments and structure

```
Sprint status synced: {story_key} → {new_status}
```

**If it does not exist or the key is not found:**

```
Sprint status sync skipped (no sprint tracking or key not found)
```
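One way to honor "preserving ALL comments and structure" without a YAML round-trip is a targeted line rewrite; this sketch assumes entries look like `story-key: status` under `development_status` (a comment-preserving YAML library such as ruamel.yaml is the more robust alternative):

```python
import re

def update_sprint_status(yaml_text: str, story_key: str, new_status: str) -> str:
    """Rewrite only the `{story_key}: <status>` value, leaving comments
    and all other lines byte-for-byte intact."""
    pattern = re.compile(rf"^(\s*{re.escape(story_key)}:\s*)\S+", re.M)
    return pattern.sub(rf"\g<1>{new_status}", yaml_text, count=1)

doc = """development_status:
  story-1.1: done  # reviewed 2024-05-01
  story-1.2: in-review
"""
print(update_sprint_status(doc, "story-1.2", "done"))
```

Because only the status token is replaced, trailing comments on other entries (like the review date above) survive the sync.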
### 6. Completion Output

```markdown
**Code Review Complete!**

**Story:** {story_key}
**Final Status:** {new_status}
**Issues Fixed:** {fixed_count}
**Action Items Created:** {action_count}

{if new_status == "done"}
Code review passed! The story is ready for final verification.
{else}
Address the action items and run another review cycle.
{endif}

---

**Next Steps:**
- Commit changes (if fixes were applied)
- Run tests to verify fixes
- Address remaining action items (if any)
- Mark the story complete when all items are resolved
```

---

## WORKFLOW COMPLETE

This is the final step. The Code Review workflow is now complete.

---

## SUCCESS METRICS

- Resolution options presented clearly
- User choice handled correctly
- Fixes applied cleanly (if chosen)
- Action items created correctly (if chosen)
- Story status determined correctly
- Story file updated with review notes
- Sprint status synced (if applicable)
- Completion summary provided

## FAILURE MODES

- Not presenting resolution options
- Fixing without user consent
- Not updating the story file
- Wrong status determination (done when issues remain)
- Not syncing sprint status when it exists
- Missing completion summary
@ -0,0 +1,39 @@
---
name: code-review
description: 'Code review for dev-story output. Audits acceptance criteria against implementation, performs adversarial diff review, can auto-fix with approval. A different LLM than the implementer is recommended.'
web_bundle: false
---

# Code Review Workflow

## WORKFLOW ARCHITECTURE: STEP FILES

- This file (workflow.md) stays in context throughout
- Each step file is read just before processing (the current step stays at the end of context)
- State persists via variables: `{story_path}`, `{story_key}`, `{context_aware_findings}`, `{asymmetric_findings}`

---

## INITIALIZATION

### Configuration Loading

Load config from `{project-root}/_bmad/bmm/config.yaml` and resolve:

- `user_name`, `communication_language`, `user_skill_level`, `document_output_language`
- `planning_artifacts`, `implementation_artifacts`
- `date` as system-generated current datetime

- ✅ YOU MUST ALWAYS communicate in your Agent communication style, using the configured `{communication_language}`

### Paths

- `installed_path` = `{project-root}/_bmad/bmm/workflows/4-implementation/code-review`
- `project_context` = `**/project-context.md` (load if it exists)
- `sprint_status` = `{implementation_artifacts}/sprint-status.yaml`

---

## EXECUTION

Read and follow `steps/step-01-load-story.md` to begin the workflow.
@ -1,51 +0,0 @@
# Review Story Workflow
name: code-review
description: "Perform an ADVERSARIAL Senior Developer code review that finds 3-10 specific problems in every story. Challenges everything: code quality, test coverage, architecture compliance, security, performance. NEVER accepts `looks good` - must find minimum issues and can auto-fix with user approval."
author: "BMad"

# Critical variables from config
config_source: "{project-root}/_bmad/bmm/config.yaml"
user_name: "{config_source}:user_name"
communication_language: "{config_source}:communication_language"
user_skill_level: "{config_source}:user_skill_level"
document_output_language: "{config_source}:document_output_language"
date: system-generated
planning_artifacts: "{config_source}:planning_artifacts"
implementation_artifacts: "{config_source}:implementation_artifacts"
output_folder: "{implementation_artifacts}"
sprint_status: "{implementation_artifacts}/sprint-status.yaml"

# Workflow components
installed_path: "{project-root}/_bmad/bmm/workflows/4-implementation/code-review"
instructions: "{installed_path}/instructions.xml"
validation: "{installed_path}/checklist.md"
template: false

variables:
  # Project context
  project_context: "**/project-context.md"
  story_dir: "{implementation_artifacts}"

# Smart input file references - handles both whole docs and sharded docs
# Priority: Whole document first, then sharded version
# Strategy: SELECTIVE LOAD - only load the specific epic needed for this story review
input_file_patterns:
  architecture:
    description: "System architecture for review context"
    whole: "{planning_artifacts}/*architecture*.md"
    sharded: "{planning_artifacts}/*architecture*/*.md"
    load_strategy: "FULL_LOAD"
  ux_design:
    description: "UX design specification (if UI review)"
    whole: "{planning_artifacts}/*ux*.md"
    sharded: "{planning_artifacts}/*ux*/*.md"
    load_strategy: "FULL_LOAD"
  epics:
    description: "Epic containing story being reviewed"
    whole: "{planning_artifacts}/*epic*.md"
    sharded_index: "{planning_artifacts}/*epic*/index.md"
    sharded_single: "{planning_artifacts}/*epic*/epic-{{epic_num}}.md"
    load_strategy: "SELECTIVE_LOAD"

standalone: true
web_bundle: false
@ -219,6 +219,17 @@
<output>ℹ️ No sprint status file exists - story progress will be tracked in story file only</output>
<action>Set {{current_sprint_status}} = "no-sprint-tracking"</action>
</check>

<!-- Capture baseline commit for code review -->

<check if="git is available">
  <action>Capture current HEAD commit: `git rev-parse HEAD`</action>
  <action>Store as {{baseline_commit}}</action>
  <action>Write to story file Dev Agent Record: "**Baseline Commit:** {{baseline_commit}}"</action>
</check>

<check if="git is NOT available">
  <action>Set {{baseline_commit}} = "NO_GIT"</action>
  <action>Write to story file Dev Agent Record: "**Baseline Commit:** NO_GIT"</action>
</check>
</step>

<step n="5" goal="Implement task following red-green-refactor cycle">
@ -51,7 +51,11 @@ Use best-effort diff construction:
### Capture as {diff_output}

Merge all changes into `{diff_output}`.
**Include in `{diff_output}`:**

- All modified tracked files (except `{tech_spec_path}` if tech-spec mode - asymmetry requires hiding intent)
- All new files created during this workflow
- Full content for new files

**Note:** Do NOT `git add` anything - this is read-only inspection.
@ -75,7 +79,7 @@ The task should: review `{diff_output}` and return a list of findings.
Capture the findings from the task output.
**If zero findings:** HALT - this is suspicious. Re-analyze or request user guidance.
Evaluate severity (Critical, High, Medium, Low) and validity (real, noise, undecided).
Evaluate severity (Critical, High, Medium, Low) and validity (Real, Noise, Undecided).
DO NOT exclude findings based on severity or validity unless explicitly asked to do so.
Order findings by severity.
Number the ordered findings (F1, F2, F3, etc.).
@ -92,6 +96,7 @@ With findings in hand, load `step-06-resolve-findings.md` for user to choose res
## SUCCESS METRICS

- Diff constructed from baseline_commit
- Tech-spec excluded from diff when in tech-spec mode (information asymmetry)
- New files included in diff
- Task invoked with diff as input
- Findings received
@ -100,6 +105,7 @@ With findings in hand, load `step-06-resolve-findings.md` for user to choose res
## FAILURE MODES

- Missing baseline_commit (can't construct accurate diff)
- Including tech_spec_path in diff when in tech-spec mode (breaks asymmetry)
- Not including new untracked files in diff
- Invoking task without providing diff input
- Accepting zero findings without questioning