```xml
The workflow execution engine is governed by: {project_root}/bmad/core/tasks/workflow.xml
You MUST have already loaded and processed: {installed_path}/workflow.yaml
Communicate all responses in {communication_language} and language MUST be tailored to {user_skill_level}
Generate all documents in {document_output_language}
This workflow assembles a Story Context XML for a single user story by extracting ACs, tasks, relevant docs/code, interfaces, constraints, and testing guidance to support implementation.
Default execution mode: #yolo (non-interactive). Only ask if {{non_interactive}} == false. If auto-discovery fails, HALT and request 'story_path' or 'story_dir'.
DOCUMENT OUTPUT: Technical XML context file. Concise, structured, project-relative paths only. User skill level ({user_skill_level}) affects conversation style ONLY, not context content.
If {{story_path}} provided and valid → use it; extract story_key from filename/metadata; GOTO initialize_context
MUST read COMPLETE sprint-status.yaml file from start to end to preserve order
Load the FULL file: {{output_folder}}/sprint-status.yaml
Read ALL lines from beginning to end - do not skip any content
Parse the development_status section completely
Find ALL stories (reading in order from top to bottom) where:
- Key matches pattern: number-number-name (e.g., "1-2-user-auth")
- NOT an epic key (epic-X) or retrospective (epic-X-retrospective)
- Status value equals "drafted"
Collect up to 10 drafted story keys in order (limit for display purposes)
Count total drafted stories found
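<!-- Illustrative example of a development_status section; real keys, statuses, and comments come from the actual file. Here "1-2-user-auth" and "1-3-password-reset" would qualify as drafted stories, while "epic-1" and "epic-1-retrospective" would be skipped:
development_status:
  epic-1: in-progress
  1-1-project-setup: done
  1-2-user-auth: drafted
  1-3-password-reset: drafted
  epic-1-retrospective: backlog
-->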
If no drafted stories are found ({{drafted_count}} == 0) → HALT
Display available drafted stories:
**Drafted Stories Available ({{drafted_count}} found):**
{{list_of_drafted_story_keys}}
Select the drafted story to generate context for (enter story key or number):
If {{non_interactive}} == true, auto-select the first story from the list
Resolve selected story_key from user input or auto-selection
Find matching story file in {{story_dir}} using story_key pattern
Resolve {{story_path}} and READ COMPLETE file
Extract {{epic_id}}, {{story_id}}, {{story_title}}, {{story_status}} from filename/content; parse sections: Story, Acceptance Criteria, Tasks/Subtasks, Dev Notes.
Extract user story fields (asA, iWant, soThat).
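<!-- Illustrative sketch of a story file, assuming a markdown layout with the sections named above; a filename such as 1-3-password-reset.md might yield epic_id 1, story_id 3, and story_key "1-3-password-reset":
Status: drafted

## Story
As a registered user, I want to reset my password, so that I can regain access to my account.

## Acceptance Criteria
1. ...

## Tasks / Subtasks
- [ ] ...

## Dev Notes
...
-->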
Store project root path for relative path conversion: extract from {project-root} variable.
Define path normalization function: convert any absolute path to project-relative by removing project root prefix.
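<!-- Illustrative path normalization (the project root shown is hypothetical):
/home/alex/projects/acme-app/docs/prd.md  →  docs/prd.md
/home/alex/projects/acme-app/src/services/api.js  →  src/services/api.js
-->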
Initialize output by writing the template to {default_output_file}, populating the user story fields:
- as_a
- i_want
- so_that
Scan docs and src module docs for items relevant to this story's domain: search keywords from story title, ACs, and tasks.
Prefer authoritative sources: PRD, Architecture, Front-end Spec, Testing standards, module-specific docs.
For each discovered document: convert absolute paths to project-relative format by removing {project-root} prefix. Store only relative paths (e.g., "docs/prd.md" not "/Users/.../docs/prd.md").
Add artifacts.docs entries with {path, title, section, snippet}:
- path: PROJECT-RELATIVE path only (strip {project-root} prefix)
- title: Document title
- section: Relevant section name
- snippet: Brief excerpt (2-3 sentences max, NO invention)
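<!-- Illustrative entry for the fields above; exact element names and nesting follow the context template rather than this sketch:
<doc>
  <path>docs/prd.md</path>
  <title>Product Requirements Document</title>
  <section>Password Reset</section>
  <snippet>Users can request a reset link by email. The link expires after a fixed period.</snippet>
</doc>
-->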
Search source tree for modules, files, and symbols matching story intent and AC keywords (controllers, services, components, tests).
Identify existing interfaces/APIs the story should reuse rather than recreate.
Extract development constraints from Dev Notes and architecture (patterns, layers, testing requirements).
For all discovered code artifacts: convert absolute paths to project-relative format (strip {project-root} prefix).
Add artifacts.code entries with {path, kind, symbol, lines, reason}:
- path: PROJECT-RELATIVE path only (e.g., "src/services/api.js" not full path)
- kind: file type (controller, service, component, test, etc.)
- symbol: function/class/interface name
- lines: line range if specific (e.g., "45-67")
- reason: brief explanation of relevance to this story
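<!-- Illustrative entry for the fields above (element naming is assumed, values are placeholders):
<code>
  <path>src/services/auth-service.js</path>
  <kind>service</kind>
  <symbol>requestPasswordReset</symbol>
  <lines>45-67</lines>
  <reason>Existing reset-request logic the story should extend rather than reimplement</reason>
</code>
-->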
Populate interfaces with API/interface signatures:
- name: Interface or API name
- kind: REST endpoint, GraphQL, function signature, class interface
- signature: Full signature or endpoint definition
- path: PROJECT-RELATIVE path to definition
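<!-- Illustrative interface entry (assumed element naming, placeholder values):
<interface>
  <name>requestPasswordReset</name>
  <kind>REST endpoint</kind>
  <signature>POST /api/auth/reset { email: string }</signature>
  <path>src/routes/auth.js</path>
</interface>
-->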
Populate constraints with development rules:
- Extract from Dev Notes and architecture
- Include: required patterns, layer restrictions, testing requirements, coding standards
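<!-- Illustrative constraint entries; real wording is extracted from Dev Notes and architecture docs, these are placeholders:
<constraint>Service-layer code must not import HTTP framework types; controllers own request/response handling.</constraint>
<constraint>Every new endpoint requires integration tests alongside the existing suite.</constraint>
-->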
Detect dependency manifests and frameworks in the repo:
- Node: package.json (dependencies/devDependencies)
- Python: pyproject.toml/requirements.txt
- Go: go.mod
- Unity: Packages/manifest.json, Assets/, ProjectSettings/
- Other: list notable frameworks/configs found
Populate artifacts.dependencies with a key per detected ecosystem, listing its packages with version ranges where present
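<!-- Illustrative dependencies shape for a Node project (element naming is assumed; packages and versions are read from package.json, these are placeholders):
<dependencies>
  <node>
    <package name="express" version="^4.18.0" />
    <package name="jest" version="^29.0.0" dev="true" />
  </node>
</dependencies>
-->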
From Dev Notes, architecture docs, testing docs, and existing tests, extract testing standards (frameworks, patterns, locations).
Populate tests.standards with a concise paragraph
Populate tests.locations with directories or glob patterns where tests live
Populate tests.ideas with initial test ideas mapped to acceptance criteria IDs
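<!-- Illustrative tests section (placeholder content; real values come from the docs and existing tests):
<tests>
  <standards>Unit tests use the project's existing framework with mocks for external services; integration tests run against a test database.</standards>
  <locations>tests/unit/**, tests/integration/**</locations>
  <ideas>
    <idea ac="AC1">Reset request for a known email sends exactly one message</idea>
    <idea ac="AC2">An expired token is rejected with a clear error</idea>
  </ideas>
</tests>
-->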
Validate output XML structure and content.
Validate against checklist at {installed_path}/checklist.md using bmad/core/tasks/validate-workflow.xml
Open {{story_path}}
Find the "Status:" line (usually at the top)
Update story file: Change Status to "ready-for-dev"
Under 'Dev Agent Record' → 'Context Reference' (create if missing), add or update a list item for {default_output_file}.
Save the story file.
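<!-- Illustrative result in the story file after this step (heading names follow whatever the story file already uses):
Status: ready-for-dev

## Dev Agent Record
### Context Reference
- {default_output_file}
-->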
Load the FULL file: {{output_folder}}/sprint-status.yaml
Find development_status key matching {{story_key}}
Verify current status is "drafted" (expected previous state)
Update development_status[{{story_key}}] = "ready-for-dev"
Save file, preserving ALL comments and structure including STATUS DEFINITIONS
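<!-- Illustrative change to the sprint-status entry; only this one value changes, comments and key order are preserved:
  1-3-password-reset: drafted  →  1-3-password-reset: ready-for-dev
-->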
Communicate to {user_name} that story context has been successfully generated
Summarize what was accomplished: story ID, story key, title, context file location
Explain that both the story file Status and the sprint-status entry have changed from "drafted" to "ready-for-dev"
Highlight the value of the generated context: provides docs, code references, interfaces, constraints, and test guidance
Based on {user_skill_level}, ask whether the user would like to understand:
- What information was gathered in the context file
- How the context file will help during implementation
- What the next steps are
- Anything else about the context generation process
Provide clear explanations tailored to {user_skill_level}
Reference specific sections of the generated context when helpful
Once explanations are complete (or user indicates no questions), suggest logical next steps
Common next steps to suggest (but allow user flexibility):
- Review the generated context file to understand implementation guidance
- Load DEV agent and run `dev-story` workflow to implement the story
- Check sprint-status.yaml to see which stories are ready for development
- Generate context for additional drafted stories if needed
Remain flexible - allow user to choose their own path or ask for other assistance
```