diff --git a/src/bmm-skills/4-implementation/bmad-code-review/steps/step-01-gather-context.md b/src/bmm-skills/4-implementation/bmad-code-review/steps/step-01-gather-context.md
index d00d4edb8..3678d069b 100644
--- a/src/bmm-skills/4-implementation/bmad-code-review/steps/step-01-gather-context.md
+++ b/src/bmm-skills/4-implementation/bmad-code-review/steps/step-01-gather-context.md
@@ -2,6 +2,7 @@
diff_output: '' # set at runtime
spec_file: '' # set at runtime (path or empty)
review_mode: '' # set at runtime: "full" or "no-spec"
+story_key: '' # set at runtime when discovered from sprint status
---
# Step 1: Gather Context
@@ -23,8 +24,8 @@ review_mode: '' # set at runtime: "full" or "no-spec"
- When multiple phrases match, prefer the most specific match (e.g., "branch diff" over bare "diff").
- **If a clear match is found:** Announce the detected mode (e.g., "Detected intent: review staged changes only") and proceed directly to constructing `{diff_output}` using the corresponding sub-case from instruction 3. Skip to instruction 4 (spec question).
- **If no match from invocation text, check sprint tracking.** Look for a sprint status file (`*sprint-status*`) in `{implementation_artifacts}` or `{planning_artifacts}`. If found, scan for any story with status `review`. Handle as follows:
- - **Exactly one `review` story:** Suggest it: "I found story {{story-id}} in `review` status. Would you like to review its changes? [Y] Yes / [N] No, let me choose". If confirmed, use the story context to determine the diff source (branch name derived from story slug, or uncommitted changes). If declined, fall through to instruction 2.
- - **Multiple `review` stories:** Present them as numbered options alongside a manual choice option. Wait for user selection. Then use the selected story's context to determine the diff source as in the single-story case above, and proceed to instruction 3.
+ - **Exactly one `review` story:** Set `{story_key}` to the story's key (e.g., `1-2-user-auth`). Suggest it: "I found story {{story-id}} in `review` status. Would you like to review its changes? [Y] Yes / [N] No, let me choose". If confirmed, use the story context to determine the diff source (branch name derived from story slug, or uncommitted changes). If declined, clear `{story_key}` and fall through to instruction 2.
+  - **Multiple `review` stories:** Present them as numbered options alongside a manual choice option. Wait for user selection. If the user selects a story, set `{story_key}` to its key, determine the diff source as in the single-story case above, and proceed to instruction 3. If the user selects the manual choice, clear `{story_key}` and fall through to instruction 2.
- **If no match and no sprint tracking:** Fall through to instruction 2.
2. HALT. Ask the user: **What do you want to review?** Present these options:
diff --git a/src/bmm-skills/4-implementation/bmad-code-review/steps/step-04-present.md b/src/bmm-skills/4-implementation/bmad-code-review/steps/step-04-present.md
index 799f05fe9..c495d4981 100644
--- a/src/bmm-skills/4-implementation/bmad-code-review/steps/step-04-present.md
+++ b/src/bmm-skills/4-implementation/bmad-code-review/steps/step-04-present.md
@@ -14,7 +14,7 @@ deferred_work_file: '{implementation_artifacts}/deferred-work.md'
### 1. Clean review shortcut
-If zero findings remain after triage (all dismissed or none raised): state that and end the workflow.
+If zero findings remain after triage (all dismissed or none raised): state that and proceed to section 6 (Update story status and sync sprint tracking).
### 2. Write findings to the story file
@@ -82,3 +82,48 @@ If `{spec_file}` is **not** set, present only options 1 and 3 (omit option 2 —
- Patches handled:
- Deferred:
- Dismissed:
+
+### 6. Update story status and sync sprint tracking
+
+Skip this section if `{spec_file}` is not set.
+
+#### Determine new status based on review outcome
+
+- If all `decision-needed` and `patch` findings were resolved (fixed or dismissed) AND no unresolved HIGH/MEDIUM issues remain: set `{new_status}` = `done`. Update the story file Status section to `done`.
+- If `patch` findings were left as action items, or unresolved issues remain: set `{new_status}` = `in-progress`. Update the story file Status section to `in-progress`.
+
+Save the story file.
+
+#### Sync sprint-status.yaml
+
+If `{story_key}` is not set, skip this subsection and note that sprint status was not synced because no story key was available.
+
+If `{sprint_status}` file exists:
+
+1. Load the FULL `{sprint_status}` file.
+2. Find the `development_status` entry matching `{story_key}`.
+3. If found: update `development_status[{story_key}]` to `{new_status}`. Update `last_updated` to current date. Save the file, preserving ALL comments and structure including STATUS DEFINITIONS.
+4. If `{story_key}` not found in sprint status: warn the user that the story file was updated but sprint-status sync failed.
+
+If `{sprint_status}` file does not exist, note that story status was updated in the story file only.
+
+#### Completion summary
+
+> **Review Complete!**
+>
+> **Story Status:** `{new_status}`
+> **Issues Fixed:**
+> **Action Items Created:**
+> **Deferred:**
+> **Dismissed:**
+
+### 7. Next steps
+
+Present the user with follow-up options:
+
+> **What would you like to do next?**
+> 1. **Start the next story** — run `dev-story` to pick up the next `ready-for-dev` story
+> 2. **Re-run code review** — address findings and review again
+> 3. **Done** — end the workflow
+
+**HALT** — I am waiting for your choice. Do not proceed until the user selects an option.
diff --git a/src/bmm-skills/4-implementation/bmad-code-review/workflow.md b/src/bmm-skills/4-implementation/bmad-code-review/workflow.md
index 6653e3c8a..2cad2d870 100644
--- a/src/bmm-skills/4-implementation/bmad-code-review/workflow.md
+++ b/src/bmm-skills/4-implementation/bmad-code-review/workflow.md
@@ -44,6 +44,7 @@ Load and read full config from `{main_config}` and resolve:
- `project_name`, `planning_artifacts`, `implementation_artifacts`, `user_name`
- `communication_language`, `document_output_language`, `user_skill_level`
- `date` as system-generated current datetime
+- `sprint_status` = `{implementation_artifacts}/sprint-status.yaml`
- `project_context` = `**/project-context.md` (load if exists)
- CLAUDE.md / memory files (load if exist)
diff --git a/src/core-skills/bmad-advanced-elicitation/SKILL.md b/src/core-skills/bmad-advanced-elicitation/SKILL.md
index d40bd5a8b..e7b60683e 100644
--- a/src/core-skills/bmad-advanced-elicitation/SKILL.md
+++ b/src/core-skills/bmad-advanced-elicitation/SKILL.md
@@ -1,6 +1,137 @@
---
name: bmad-advanced-elicitation
description: 'Push the LLM to reconsider, refine, and improve its recent output. Use when user asks for deeper critique or mentions a known deeper critique method, e.g. socratic, first principles, pre-mortem, red team.'
+agent_party: '{project-root}/_bmad/_config/agent-manifest.csv'
---
-Follow the instructions in ./workflow.md.
+# Advanced Elicitation
+
+**Goal:** Push the LLM to reconsider, refine, and improve its recent output.
+
+---
+
+## CRITICAL LLM INSTRUCTIONS
+
+- **MANDATORY:** Execute ALL steps in the flow section IN EXACT ORDER
+- DO NOT skip steps or change the sequence
+- HALT immediately when halt-conditions are met
+- Each action within a step is a REQUIRED action to complete that step
+- Sections outside flow (validation, output, critical-context) provide essential context - review and apply throughout execution
+- **YOU MUST ALWAYS SPEAK OUTPUT in your Agent communication style with the `communication_language`**
+
+---
+
+## INTEGRATION (When Invoked Indirectly)
+
+When invoked from another prompt or process:
+
+1. Receive or review the current section content that was just generated
+2. Apply elicitation methods iteratively to enhance that specific content
+3. Return the enhanced version when the user selects 'x' to proceed
+4. The enhanced content replaces the original section content in the output document
+
+---
+
+## FLOW
+
+### Step 1: Method Registry Loading
+
+**Action:** Load and read `./methods.csv` and `{agent_party}`
+
+#### CSV Structure
+
+- **category:** Method grouping (core, structural, risk, etc.)
+- **method_name:** Display name for the method
+- **description:** Rich explanation of what the method does, when to use it, and why it's valuable
+- **output_pattern:** Flexible flow guide using arrows (e.g., "analysis -> insights -> action")
+
+#### Context Analysis
+
+- Use conversation history
+- Analyze: content type, complexity, stakeholder needs, risk level, and creative potential
+
+#### Smart Selection
+
+1. Analyze context: Content type, complexity, stakeholder needs, risk level, creative potential
+2. Parse descriptions: Understand each method's purpose from the rich descriptions in CSV
+3. Select 5 methods: Choose methods that best match the context based on their descriptions
+4. Balance approach: Include mix of foundational and specialized techniques as appropriate
+
+---
+
+### Step 2: Present Options and Handle Responses
+
+#### Display Format
+
+```
+**Advanced Elicitation Options**
+_If party mode is active, agents will join in._
+Choose a number (1-5), [r] to Reshuffle, [a] List All, or [x] to Proceed:
+
+1. [Method Name]
+2. [Method Name]
+3. [Method Name]
+4. [Method Name]
+5. [Method Name]
+r. Reshuffle the list with 5 new options
+a. List all methods with descriptions
+x. Proceed / No Further Actions
+```
+
+#### Response Handling
+
+**Case 1-5 (User selects a numbered method):**
+
+- Execute the selected method using its description from the CSV
+- Adapt the method's complexity and output format based on the current context
+- Apply the method creatively to the current section content being enhanced
+- Display the enhanced version showing what the method revealed or improved
+- **CRITICAL:** Ask the user if they would like to apply the changes to the doc (y/n/other) and HALT to await response.
+- **CRITICAL:** ONLY if Yes, apply the changes. If No, discard your memory of the proposed changes. For any other reply, do your best to follow the instructions given by the user.
+- **CRITICAL:** Re-present the same 1-5,r,a,x prompt to allow additional elicitations
+
+**Case r (Reshuffle):**
+
+- Select 5 random methods from methods.csv, present new list with same prompt format
+- When selecting, aim for a diverse set of methods covering different categories and approaches, with positions 1 and 2 given to the methods most likely to be useful for the document or section under discussion
+
+**Case x (Proceed):**
+
+- Complete elicitation and proceed
+- Return the fully enhanced content back to the invoking skill
+- The enhanced content becomes the final version for that section
+- Signal completion back to the invoking skill to continue with next section
+
+**Case a (List All):**
+
+- List all methods with their descriptions from the CSV in a compact table
+- Allow user to select any method by name or number from the full list
+- After selection, execute the method as described in Case 1-5 above
+
+**Case: Direct Feedback:**
+
+- Apply changes to current section content and re-present choices
+
+**Case: Multiple Numbers:**
+
+- Execute methods in sequence on the content, then re-offer choices
+
+---
+
+### Step 3: Execution Guidelines
+
+- **Method execution:** Use the description from CSV to understand and apply each method
+- **Output pattern:** Use the pattern as a flexible guide (e.g., "paths -> evaluation -> selection")
+- **Dynamic adaptation:** Adjust complexity based on content needs (simple to sophisticated)
+- **Creative application:** Interpret methods flexibly based on context while maintaining pattern consistency
+- Focus on actionable insights
+- **Stay relevant:** Tie elicitation to specific content being analyzed (the current section from the document being created unless user indicates otherwise)
+- **Identify personas:** For single or multi-persona methods, clearly identify viewpoints, and use party members if available in memory already
+- **Critical loop behavior:** Always re-offer the 1-5,r,a,x choices after each method execution
+- Continue until the user selects 'x' to proceed with the enhanced content; then confirm with the user what should be accepted from the session
+- Each method application builds upon previous enhancements
+- **Content preservation:** Track all enhancements made during elicitation
+- **Iterative enhancement:** Each selected method (1-5) should:
+ 1. Apply to the current enhanced version of the content
+ 2. Show the improvements made
+ 3. Return to the prompt for additional elicitations or completion
diff --git a/src/core-skills/bmad-advanced-elicitation/workflow.md b/src/core-skills/bmad-advanced-elicitation/workflow.md
deleted file mode 100644
index ecb7f8391..000000000
--- a/src/core-skills/bmad-advanced-elicitation/workflow.md
+++ /dev/null
@@ -1,135 +0,0 @@
----
-agent_party: '{project-root}/_bmad/_config/agent-manifest.csv'
----
-
-# Advanced Elicitation Workflow
-
-**Goal:** Push the LLM to reconsider, refine, and improve its recent output.
-
----
-
-## CRITICAL LLM INSTRUCTIONS
-
-- **MANDATORY:** Execute ALL steps in the flow section IN EXACT ORDER
-- DO NOT skip steps or change the sequence
-- HALT immediately when halt-conditions are met
-- Each action within a step is a REQUIRED action to complete that step
-- Sections outside flow (validation, output, critical-context) provide essential context - review and apply throughout execution
-- **YOU MUST ALWAYS SPEAK OUTPUT in your Agent communication style with the `communication_language`**
-
----
-
-## INTEGRATION (When Invoked Indirectly)
-
-When invoked from another prompt or process:
-
-1. Receive or review the current section content that was just generated
-2. Apply elicitation methods iteratively to enhance that specific content
-3. Return the enhanced version back when user selects 'x' to proceed and return back
-4. The enhanced content replaces the original section content in the output document
-
----
-
-## FLOW
-
-### Step 1: Method Registry Loading
-
-**Action:** Load and read `./methods.csv` and `{agent_party}`
-
-#### CSV Structure
-
-- **category:** Method grouping (core, structural, risk, etc.)
-- **method_name:** Display name for the method
-- **description:** Rich explanation of what the method does, when to use it, and why it's valuable
-- **output_pattern:** Flexible flow guide using arrows (e.g., "analysis -> insights -> action")
-
-#### Context Analysis
-
-- Use conversation history
-- Analyze: content type, complexity, stakeholder needs, risk level, and creative potential
-
-#### Smart Selection
-
-1. Analyze context: Content type, complexity, stakeholder needs, risk level, creative potential
-2. Parse descriptions: Understand each method's purpose from the rich descriptions in CSV
-3. Select 5 methods: Choose methods that best match the context based on their descriptions
-4. Balance approach: Include mix of foundational and specialized techniques as appropriate
-
----
-
-### Step 2: Present Options and Handle Responses
-
-#### Display Format
-
-```
-**Advanced Elicitation Options**
-_If party mode is active, agents will join in._
-Choose a number (1-5), [r] to Reshuffle, [a] List All, or [x] to Proceed:
-
-1. [Method Name]
-2. [Method Name]
-3. [Method Name]
-4. [Method Name]
-5. [Method Name]
-r. Reshuffle the list with 5 new options
-a. List all methods with descriptions
-x. Proceed / No Further Actions
-```
-
-#### Response Handling
-
-**Case 1-5 (User selects a numbered method):**
-
-- Execute the selected method using its description from the CSV
-- Adapt the method's complexity and output format based on the current context
-- Apply the method creatively to the current section content being enhanced
-- Display the enhanced version showing what the method revealed or improved
-- **CRITICAL:** Ask the user if they would like to apply the changes to the doc (y/n/other) and HALT to await response.
-- **CRITICAL:** ONLY if Yes, apply the changes. IF No, discard your memory of the proposed changes. If any other reply, try best to follow the instructions given by the user.
-- **CRITICAL:** Re-present the same 1-5,r,x prompt to allow additional elicitations
-
-**Case r (Reshuffle):**
-
-- Select 5 random methods from methods.csv, present new list with same prompt format
-- When selecting, try to think and pick a diverse set of methods covering different categories and approaches, with 1 and 2 being potentially the most useful for the document or section being discovered
-
-**Case x (Proceed):**
-
-- Complete elicitation and proceed
-- Return the fully enhanced content back to the invoking skill
-- The enhanced content becomes the final version for that section
-- Signal completion back to the invoking skill to continue with next section
-
-**Case a (List All):**
-
-- List all methods with their descriptions from the CSV in a compact table
-- Allow user to select any method by name or number from the full list
-- After selection, execute the method as described in the Case 1-5 above
-
-**Case: Direct Feedback:**
-
-- Apply changes to current section content and re-present choices
-
-**Case: Multiple Numbers:**
-
-- Execute methods in sequence on the content, then re-offer choices
-
----
-
-### Step 3: Execution Guidelines
-
-- **Method execution:** Use the description from CSV to understand and apply each method
-- **Output pattern:** Use the pattern as a flexible guide (e.g., "paths -> evaluation -> selection")
-- **Dynamic adaptation:** Adjust complexity based on content needs (simple to sophisticated)
-- **Creative application:** Interpret methods flexibly based on context while maintaining pattern consistency
-- Focus on actionable insights
-- **Stay relevant:** Tie elicitation to specific content being analyzed (the current section from the document being created unless user indicates otherwise)
-- **Identify personas:** For single or multi-persona methods, clearly identify viewpoints, and use party members if available in memory already
-- **Critical loop behavior:** Always re-offer the 1-5,r,a,x choices after each method execution
-- Continue until user selects 'x' to proceed with enhanced content, confirm or ask the user what should be accepted from the session
-- Each method application builds upon previous enhancements
-- **Content preservation:** Track all enhancements made during elicitation
-- **Iterative enhancement:** Each selected method (1-5) should:
- 1. Apply to the current enhanced version of the content
- 2. Show the improvements made
- 3. Return to the prompt for additional elicitations or completion
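The sprint-status sync added in step-04 asks for an update that preserves ALL comments and structure, including the STATUS DEFINITIONS block. A minimal way to satisfy that constraint (shown here as a hypothetical stdlib-only sketch, not part of the patch) is a line-based edit rather than a YAML parse/dump round-trip, which would typically drop comments. It assumes story keys are unique in the file and appear as flat `key: status` entries under `development_status`:

```python
import re
from datetime import date

def sync_sprint_status(text: str, story_key: str, new_status: str) -> str:
    """Update one story's status and the last_updated stamp in sprint-status.yaml.

    Works line-by-line so comments (including STATUS DEFINITIONS) and key
    ordering survive untouched. Hypothetical helper; assumes story keys are
    unique and appear as "  <key>: <status>" entries.
    """
    lines = text.splitlines(keepends=True)
    found = False
    for i, line in enumerate(lines):
        key_match = re.match(rf"^(\s*{re.escape(story_key)}:\s*)\S", line)
        stamp_match = re.match(r"^(last_updated:\s*)", line)
        if key_match:
            # Replace only the value; indentation and key are kept verbatim.
            lines[i] = key_match.group(1) + new_status + "\n"
            found = True
        elif stamp_match:
            lines[i] = stamp_match.group(1) + date.today().isoformat() + "\n"
    if not found:
        # Mirrors the warn-the-user branch: story file updated, sync failed.
        raise KeyError(f"story key {story_key!r} not found in sprint status")
    return "".join(lines)
```

A round-trip-preserving YAML library (e.g. ruamel.yaml in round-trip mode) would be the sturdier choice if entries can span lines or carry inline anchors; the sketch above only covers the flat layout the workflow describes.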