Compare commits

...

8 Commits

Author SHA1 Message Date
Mario Semper 00aba613dc
Merge 4d48b0dbe1 into ed0defbe08 2025-12-12 01:06:12 -05:00
Dicky Moore ed0defbe08
fix: normalize workflow manifest schema (#1071)
* fix: normalize workflow manifest schema

* fix: escape workflow manifest values safely

---------

Co-authored-by: Brian <bmadcode@gmail.com>
2025-12-12 07:20:43 +08:00
Kevin Heidt 3bc485d0ed
Enhance config collector to support static fields (#1086)
Refactor config collection to handle both interactive and static fields. Update logic to process new static fields and merge answers accordingly.

Co-authored-by: Brian <bmadcode@gmail.com>
2025-12-12 06:56:31 +08:00
Alex Verkhovsky 0f5a9cf0dd
fix: correct grammar in PRD workflow description (#1087) 2025-12-12 06:43:40 +08:00
Alex Verkhovsky e2d9d35ce9
fix(bmm): improve code review completion message (#1095)
Change "Story is ready for next work!" to "Code review complete!"

The original phrasing was misleading - when a code review finishes
with status "done", it means the review itself is complete and the
story is marked done in tracking. However, the user may choose to
do additional reviews or the story may genuinely be finished.
"Code review complete" more accurately describes what actually
happened without implying next steps.
2025-12-12 06:42:52 +08:00
Alex Verkhovsky 82e6433b69
refactor: standardize file naming to use dashes instead of underscores (#1094)
Rename output/template files and update all references to use kebab-case
(dashes) instead of snake_case (underscores) for consistency:

- project_context.md -> project-context.md (13 references)
- backlog_template.md -> backlog-template.md
- agent_commands.md -> agent-commands.md
- agent_persona.md -> agent-persona.md
- agent_purpose_and_type.md -> agent-purpose-and-type.md
2025-12-12 06:42:24 +08:00
Brian 4d48b0dbe1
Merge branch 'main' into feature/ring-of-fire-sessions 2025-11-26 09:08:27 -06:00
Mario Semper 10dc25f43d
feat: Ring of Fire (ROF) Sessions - Multi-agent parallel collaboration
Introduces Ring of Fire Sessions feature for BMad Method, enabling
multi-agent collaborative sessions that run in parallel to user workflow.

Key features:
- User-controlled scope (2 agents/5min to 10 agents/2hrs)
- Approval-gated tool access for safety
- Flexible reporting (brief/detailed/live)
- Parallel workflow support

Origin: tellingCube project (masemIT e.U.)
Real-world validated with successful multi-agent planning sessions.

Command: *rof "<topic>" --agents <list> [--report mode]
2025-11-23 02:22:21 +01:00
18 changed files with 376 additions and 63 deletions

View File

@@ -0,0 +1,256 @@
# BMad Method PR #1: Ring of Fire (ROF) Sessions
**Feature Type**: Core workflow enhancement
**Status**: Draft for community review
**Origin**: tellingCube project (masemIT e.U.)
**Author**: Mario Semper (@sempre)
**Date**: 2025-11-23
---
## Summary
**Ring of Fire (ROF) Sessions** enable multi-agent collaborative sessions that run in parallel to the user's main workflow, allowing users to delegate complex multi-perspective analysis while continuing other work.
---
## Problem Statement
The current BMad Method requires **sequential agent interaction**. When users need multiple agents to collaborate on a complex topic, they must:
- Manually orchestrate each agent conversation
- Stay in the loop for every exchange
- Wait for sequential responses before proceeding
- Context-switch constantly between tasks
This creates **bottlenecks** and prevents **parallel work streams**.
---
## Proposed Solution: Ring of Fire Sessions
A new command pattern enables **scoped multi-agent collaboration sessions** that run while the user continues other work.
### Command Syntax
```bash
*rof "<topic>" --agents <agent-list> [--report brief|detailed|live]
```
### Example Usage
```bash
*rof "API Refactoring Strategy" --agents dev,architect,qa --report brief
```
**What happens**:
1. Dev, Architect, and QA agents enter a collaborative session
2. They analyze the topic together (code review, design discussion, testing concerns)
3. When agents need tool access (read files, run commands), they request user approval
4. User continues working on other tasks in parallel
5. Session ends with consolidated report (brief: just recommendations, detailed: full transcript)
---
## Key Features
### 1. User-Controlled Scope
- **Small**: 2 agents, 5-minute quick discussion
- **Large**: 10 agents, 2-hour deep analysis
- User decides granularity based on complexity
### 2. Approval-Gated Tool Access
- Agents can **discuss** freely within the session
- When agents need **tools** (read files, execute commands, make changes), they:
- Pause the session
- Request user approval
- Resume after user decision
**Why**: Maintains user control, prevents runaway agent actions
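A minimal sketch of what this gate could look like (every name below is illustrative; none of this is existing BMad API):
```javascript
// Hypothetical approval gate: the session pauses, the user is asked,
// and the tool runs only on an explicit "y".
const readline = require('node:readline/promises');

async function requestToolAccess(session, agent, tool, target) {
  session.status = 'awaiting-approval'; // discussion pauses here
  const rl = readline.createInterface({ input: process.stdin, output: process.stdout });
  const answer = await rl.question(`${agent} requests ${tool} on ${target} - approve? (y/n) `);
  rl.close();
  session.status = 'running'; // session resumes after the decision
  return answer.trim().toLowerCase() === 'y';
}
```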
### 3. Flexible Reporting
| Mode | Description | Use Case |
|------|-------------|----------|
| `brief` | Final recommendations only | "Just tell me what to do" |
| `detailed` | Full transcript + recommendations | "Show me the reasoning" |
| `live` | Real-time updates as agents discuss | "I want to observe" |
**Default**: `brief` with Q&A available
### 4. Parallel Workflows
- User works on **Task A** while ROF session tackles **Task B**
- No context-switching overhead
- Efficient use of time
---
## Use Cases
### 1. Architecture Reviews
```bash
*rof "Evaluate microservices vs monolith for new feature" --agents architect,dev,qa
```
**Agents collaborate on**: Design trade-offs, implementation complexity, testing implications
### 2. Code Refactoring
```bash
*rof "Refactor authentication module" --agents dev,architect --report detailed
```
**Agents collaborate on**: Current code analysis, refactoring approach, migration strategy
### 3. Feature Planning
```bash
*rof "Plan user notifications feature" --agents pm,ux,dev --report brief
```
**Agents collaborate on**: Requirements, UX flow, technical feasibility, timeline
### 4. Quality Gates
```bash
*rof "Investigate test failures in CI/CD" --agents qa,dev --report live
```
**Agents collaborate on**: Root cause analysis, fix recommendations, regression prevention
### 5. Documentation Sprints
```bash
*rof "Document API endpoints" --agents dev,pm,ux
```
**Agents collaborate on**: Technical accuracy, user-friendly examples, completeness
---
## User Experience Flow
```mermaid
sequenceDiagram
User->>River: *rof "Topic" --agents dev,architect
River->>Dev: Join ROF session
River->>Architect: Join ROF session
River->>User: Session started, continue your work
Dev->>Architect: Discuss approach
Architect->>Dev: Suggest alternatives
Dev->>User: Need to read auth.ts - approve?
User->>Dev: Approved
Dev->>Architect: After reading file...
Architect->>Dev: Recommendation
Dev->>River: Session complete
River->>User: Brief report: [Recommendations]
```
---
## Implementation Considerations
### Technical Requirements
- **Session state management**: Track active ROF sessions and participating agents (a hypothetical state shape is sketched after this list)
- **Agent context sharing**: Agents share knowledge within session scope
- **User approval workflow**: Clear prompt for tool requests
- **Report generation**: Brief/detailed/live output formatting
- **Workflow integration**: Link ROF findings to existing workflow plans/todos
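To make the state-management discussion concrete, here is a hypothetical shape for one tracked session (field names are assumptions, not an agreed schema):
```javascript
// One tracked ROF session; every field below is illustrative.
const session = {
  id: 'rof-001',
  topic: 'API Refactoring Strategy',
  agents: ['dev', 'architect', 'qa'],
  reportMode: 'brief',               // brief | detailed | live
  status: 'running',                 // running | awaiting-approval | complete
  pendingApprovals: [],              // queued tool-access requests
  transcript: [],                    // retained for detailed reports
  startedAt: Date.now(),
  maxDurationMs: 2 * 60 * 60 * 1000, // user-controlled scope cap (here: 2 hours)
};
```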
### Open Questions for Community
1. **Integration**: Core BMad feature or plugin/extension?
2. **Concurrency**: How to handle file conflicts if multiple agents want to edit?
3. **Cost Model**: Guidance for LLM call budgeting with multiple agents?
4. **Session Limits**: Recommended max agents/duration?
5. **Agent Communication**: Free-form discussion or structured turn-taking?
---
## Real-World Validation
**Origin Project**: tellingCube (BI dashboard, masemIT e.U.)
**Validation Scenario**:
- **Topic**: "Next steps for tellingCube after validation test"
- **Agents**: River (orchestrator), Mary (analyst), Winston (architect)
- **Report Mode**: Brief
- **Outcome**: Successfully analyzed post-validation roadmap with 3 scenarios (GO/CHANGE/NO-GO), delivered consolidated recommendations in 5 minutes
**User Feedback (Mario Semper)**:
> "This is exactly what I needed - I wanted multiple perspectives without having to orchestrate every conversation. The brief report gave me actionable next steps immediately."
**Documentation**: `docs/_masemIT/readme.md` in tellingCube repository
---
## Proposed Documentation Structure
```
.bmad-core/
features/
ring-of-fire.md # Feature specification
docs/
guides/
using-rof-sessions.md # User guide with examples
architecture/
agent-collaboration.md # Technical design
rof-session-management.md # State handling approach
```
---
## Benefits
- **Unlocks parallel workflows** - User productivity gains
- **Reduces context-switching** - Cognitive load reduction
- **Enables complex analysis** - Multi-perspective insights
- **Maintains user control** - Approval gates for tools
- **Scales flexibly** - From quick checks to deep dives
---
## Comparison to Existing Patterns
| Feature | Standard Agent Use | ROF Session |
|---------|-------------------|-------------|
| Agent collaboration | Sequential (one at a time) | Parallel (multiple simultaneously) |
| User involvement | Required for every exchange | Only for approvals |
| Parallel work | No (user waits) | Yes (user continues tasks) |
| Output | Chat transcript | Consolidated report |
| Use case | Single-perspective tasks | Multi-perspective analysis |
---
## Next Steps
1. **Community feedback** on approach and open questions
2. **Technical design** refinement (state management, agent communication)
3. **Prototype implementation** in BMad core or as extension
4. **Beta testing** with real projects (beyond tellingCube)
5. **Documentation** completion with examples
---
## Alternatives Considered
### Alt 1: "Breakout Session"
- **Pros**: Clear meeting metaphor
- **Cons**: Less evocative, doesn't convey "continuous collaborative space"
### Alt 2: "Agent Huddle"
- **Pros**: Short, casual
- **Cons**: Implies quick/informal only
### Alt 3: "Lagerfeuer" (original German name)
- **Pros**: Warm, campfire metaphor
- **Cons**: Poor i18n, hard to pronounce/remember for non-German speakers
**Chosen**: **Ring of Fire** - it evokes a continuous collaboration circle, is internationally understood and memorable, and the shortcut "ROF" works well
---
## References
- **Source Project**: tellingCube (https://github.com/masemIT/telling-cube) [if public]
- **Documentation**: `docs/_masemIT/readme.md`
- **Discussion**: [Link to BMad community discussion if applicable]
---
**Contribution ready for review.** Feedback welcome! 🔥

View File

@@ -330,7 +330,7 @@ Review was saved to story file, but sprint-status.yaml may be out of sync.
<action>All action items are included in the standalone review report</action>
<ask if="action items exist">Would you like me to create tracking items for these action items? (backlog/tasks)</ask>
<action if="user confirms">
If {{backlog_file}} does not exist, copy {installed_path}/backlog_template.md to {{backlog_file}} location.
If {{backlog_file}} does not exist, copy {installed_path}/backlog-template.md to {{backlog_file}} location.
Append a row per action item with Date={{date}}, Story="Ad-Hoc Review", Epic="N/A", Type, Severity, Owner (or "TBD"), Status="Open", Notes with file refs and context.
</action>
</check>
@@ -342,7 +342,7 @@ Review was saved to story file, but sprint-status.yaml may be out of sync.
Append under the story's "Tasks / Subtasks" a new subsection titled "Review Follow-ups (AI)", adding each item as an unchecked checkbox in imperative form, prefixed with "[AI-Review]" and severity. Example: "- [ ] [AI-Review][High] Add input validation on server route /api/x (AC #2)".
</action>
<action>
If {{backlog_file}} does not exist, copy {installed_path}/backlog_template.md to {{backlog_file}} location.
If {{backlog_file}} does not exist, copy {installed_path}/backlog-template.md to {{backlog_file}} location.
Append a row per action item with Date={{date}}, Story={{epic_num}}.{{story_num}}, Epic={{epic_num}}, Type, Severity, Owner (or "TBD"), Status="Open", Notes with short context and file refs.
</action>
<action>

View File

@@ -24,7 +24,7 @@ agent:
critical_actions:
- "READ the entire story file BEFORE any implementation - tasks/subtasks sequence is your authoritative implementation guide"
- "Load project_context.md if available for coding standards only - never let it override story requirements"
- "Load project-context.md if available for coding standards only - never let it override story requirements"
- "Execute tasks/subtasks IN ORDER as written in story file - no skipping, no reordering, no doing what you want"
- "For each task/subtask: follow red-green-refactor cycle - write failing test first, then implementation"
- "Mark task/subtask [x] ONLY when both implementation AND tests are complete and passing"

View File

@@ -1,6 +1,6 @@
---
name: create-prd
description: Creates a comprehensive PRDs through collaborative step-by-step discovery between two product managers working as peers.
description: Creates a comprehensive PRD through collaborative step-by-step discovery between two product managers working as peers.
main_config: '{project-root}/.bmad/bmm/config.yaml'
web_bundle: true
---

View File

@@ -94,7 +94,7 @@ Discover and load context documents using smart discovery:
**Project Context Rules (Critical for AI Agents):**
1. Check for project context file: `**/project_context.md`
1. Check for project context file: `**/project-context.md`
2. If exists: Load COMPLETE file contents - this contains critical rules for AI agents
3. Add to frontmatter `hasProjectContext: true` and track file path
4. Report to user: "Found existing project context with {number_of_rules} agent rules"

View File

@@ -280,7 +280,7 @@ Your architecture will ensure consistent, high-quality implementation across all
**💡 Optional Enhancement: Project Context File**
Would you like to create a `project_context.md` file? This is a concise, optimized guide for AI agents that captures:
Would you like to create a `project-context.md` file? This is a concise, optimized guide for AI agents that captures:
- Critical language and framework rules they might miss
- Specific patterns and conventions for your project
@@ -310,7 +310,7 @@ This will help ensure consistent implementation by capturing:
- Testing and quality standards
- Anti-patterns to avoid
The workflow will collaborate with you to create an optimized `project_context.md` file that AI agents will read before implementing any code."
The workflow will collaborate with you to create an optimized `project-context.md` file that AI agents will read before implementing any code."
**Execute the Generate Project Context workflow:**

View File

@@ -217,7 +217,7 @@
**Issues Fixed:** {{fixed_count}}
**Action Items Created:** {{action_count}}
{{#if new_status == "done"}}Story is ready for next work!{{else}}Address the action items and continue development.{{/if}}
{{#if new_status == "done"}}Code review complete!{{else}}Address the action items and continue development.{{/if}}
</output>
</step>

View File

@@ -35,7 +35,7 @@ validation-rules:
- [ ] **Acceptance Criteria Satisfaction:** Implementation satisfies EVERY Acceptance Criterion in the story
- [ ] **No Ambiguous Implementation:** Clear, unambiguous implementation that meets story requirements
- [ ] **Edge Cases Handled:** Error conditions and edge cases appropriately addressed
- [ ] **Dependencies Within Scope:** Only uses dependencies specified in story or project_context.md
- [ ] **Dependencies Within Scope:** Only uses dependencies specified in story or project-context.md
## 🧪 Testing & Quality Assurance

View File

@@ -33,7 +33,7 @@ Discover the project's technology stack, existing patterns, and critical impleme
First, check if project context already exists:
- Look for file at `{output_folder}/project_context.md`
- Look for file at `{output_folder}/project-context.md`
- If exists: Read complete file to understand existing rules
- Present to user: "Found existing project context with {number_of_sections} sections. Would you like to update this or create a new one?"
@@ -122,7 +122,7 @@ Based on discovery, create or update the context document:
#### A. Fresh Document Setup (if no existing context)
Copy template from `{installed_path}/project-context-template.md` to `{output_folder}/project_context.md`
Copy template from `{installed_path}/project-context-template.md` to `{output_folder}/project-context.md`
Initialize frontmatter with:
```yaml

View File

@@ -288,7 +288,7 @@ After each category, show the generated rules and present choices:
## APPEND TO PROJECT CONTEXT:
When user selects 'C' for a category, append the content directly to `{output_folder}/project_context.md` using the structure from step 8.
When user selects 'C' for a category, append the content directly to `{output_folder}/project-context.md` using the structure from step 8.
## SUCCESS METRICS:

View File

@@ -134,7 +134,7 @@ Based on user skill level, present the completion:
**Expert Mode:**
"Project context complete. Optimized for LLM consumption with {{rule_count}} critical rules across {{section_count}} sections.
File saved to: `{output_folder}/project_context.md`
File saved to: `{output_folder}/project-context.md`
Ready for AI agent integration."
@@ -227,7 +227,7 @@ Present final completion to user:
"✅ **Project Context Generation Complete!**
Your optimized project context file is ready at:
`{output_folder}/project_context.md`
`{output_folder}/project-context.md`
**📊 Context Summary:**

View File

@@ -1,11 +1,11 @@
---
name: generate-project-context
description: Creates a concise project_context.md file with critical rules and patterns that AI agents must follow when implementing code. Optimized for LLM context efficiency.
description: Creates a concise project-context.md file with critical rules and patterns that AI agents must follow when implementing code. Optimized for LLM context efficiency.
---
# Generate Project Context Workflow
**Goal:** Create a concise, optimized `project_context.md` file containing critical rules, patterns, and guidelines that AI agents must follow when implementing code. This file focuses on unobvious details that LLMs need to be reminded of.
**Goal:** Create a concise, optimized `project-context.md` file containing critical rules, patterns, and guidelines that AI agents must follow when implementing code. This file focuses on unobvious details that LLMs need to be reminded of.
**Your Role:** You are a technical facilitator working with a peer to capture the essential implementation rules that will ensure consistent, high-quality code generation across all AI agents working on the project.
@@ -37,7 +37,7 @@ Load config from `{project-root}/.bmad/bmm/config.yaml` and resolve:
- `installed_path` = `{project-root}/.bmad/bmm/workflows/generate-project-context`
- `template_path` = `{installed_path}/project-context-template.md`
- `output_file` = `{output_folder}/project_context.md`
- `output_file` = `{output_folder}/project-context.md`
---

View File

@@ -248,14 +248,21 @@ class ConfigCollector {
const configKeys = Object.keys(moduleConfig).filter((key) => key !== 'prompt');
const existingKeys = this.existingConfig && this.existingConfig[moduleName] ? Object.keys(this.existingConfig[moduleName]) : [];
// Find new interactive fields (with prompt)
const newKeys = configKeys.filter((key) => {
const item = moduleConfig[key];
// Check if it's a config item and doesn't exist in existing config
return item && typeof item === 'object' && item.prompt && !existingKeys.includes(key);
});
// If in silent mode and no new keys, use existing config and skip prompts
if (silentMode && newKeys.length === 0) {
// Find new static fields (without prompt, just result)
const newStaticKeys = configKeys.filter((key) => {
const item = moduleConfig[key];
return item && typeof item === 'object' && !item.prompt && item.result && !existingKeys.includes(key);
});
// If in silent mode and no new keys (neither interactive nor static), use existing config and skip prompts
if (silentMode && newKeys.length === 0 && newStaticKeys.length === 0) {
if (this.existingConfig && this.existingConfig[moduleName]) {
if (!this.collectedConfig[moduleName]) {
this.collectedConfig[moduleName] = {};
@@ -294,9 +301,12 @@ class ConfigCollector {
return false; // No new fields
}
// If we have new fields, build questions first
if (newKeys.length > 0) {
// If we have new fields (interactive or static), process them
if (newKeys.length > 0 || newStaticKeys.length > 0) {
const questions = [];
const staticAnswers = {};
// Build questions for interactive fields
for (const key of newKeys) {
const item = moduleConfig[key];
const question = await this.buildQuestion(moduleName, key, item, moduleConfig);
@@ -305,39 +315,50 @@ class ConfigCollector {
}
}
// Prepare static answers (no prompt, just result)
for (const key of newStaticKeys) {
staticAnswers[`${moduleName}_${key}`] = undefined;
}
// Collect all answers (static + prompted)
let allAnswers = { ...staticAnswers };
if (questions.length > 0) {
// Only show header if we actually have questions
CLIUtils.displayModuleConfigHeader(moduleName, moduleConfig.header, moduleConfig.subheader);
console.log(); // Line break before questions
const answers = await inquirer.prompt(questions);
const promptedAnswers = await inquirer.prompt(questions);
// Store answers for cross-referencing
Object.assign(this.allAnswers, answers);
// Process answers and build result values
for (const key of Object.keys(answers)) {
const originalKey = key.replace(`${moduleName}_`, '');
const item = moduleConfig[originalKey];
const value = answers[key];
let result;
if (Array.isArray(value)) {
result = value;
} else if (item.result) {
result = this.processResultTemplate(item.result, value);
} else {
result = value;
}
if (!this.collectedConfig[moduleName]) {
this.collectedConfig[moduleName] = {};
}
this.collectedConfig[moduleName][originalKey] = result;
}
} else {
// New keys exist but no questions generated - show no config message
// Merge prompted answers with static answers
Object.assign(allAnswers, promptedAnswers);
} else if (newStaticKeys.length > 0) {
// Only static fields, no questions - show no config message
CLIUtils.displayModuleNoConfig(moduleName, moduleConfig.header, moduleConfig.subheader);
}
// Store all answers for cross-referencing
Object.assign(this.allAnswers, allAnswers);
// Process all answers (both static and prompted)
for (const key of Object.keys(allAnswers)) {
const originalKey = key.replace(`${moduleName}_`, '');
const item = moduleConfig[originalKey];
const value = allAnswers[key];
let result;
if (Array.isArray(value)) {
result = value;
} else if (item.result) {
result = this.processResultTemplate(item.result, value);
} else {
result = value;
}
if (!this.collectedConfig[moduleName]) {
this.collectedConfig[moduleName] = {};
}
this.collectedConfig[moduleName][originalKey] = result;
}
}
// Copy over existing values for fields that weren't prompted
@@ -353,7 +374,7 @@ class ConfigCollector {
}
}
return newKeys.length > 0; // Return true if we prompted for new fields
return newKeys.length > 0 || newStaticKeys.length > 0; // Return true if we had any new fields (interactive or static)
}
/**
@@ -501,30 +522,52 @@ class ConfigCollector {
// Process each config item
const questions = [];
const staticAnswers = {};
const configKeys = Object.keys(moduleConfig).filter((key) => key !== 'prompt');
for (const key of configKeys) {
const item = moduleConfig[key];
// Skip if not a config object
if (!item || typeof item !== 'object' || !item.prompt) {
if (!item || typeof item !== 'object') {
continue;
}
const question = await this.buildQuestion(moduleName, key, item, moduleConfig);
if (question) {
questions.push(question);
// Handle static values (no prompt, just result)
if (!item.prompt && item.result) {
// Add to static answers with a marker value
staticAnswers[`${moduleName}_${key}`] = undefined;
continue;
}
// Handle interactive values (with prompt)
if (item.prompt) {
const question = await this.buildQuestion(moduleName, key, item, moduleConfig);
if (question) {
questions.push(question);
}
}
}
// Collect all answers (static + prompted)
let allAnswers = { ...staticAnswers };
// Display appropriate header based on whether there are questions
if (questions.length > 0) {
CLIUtils.displayModuleConfigHeader(moduleName, moduleConfig.header, moduleConfig.subheader);
console.log(); // Line break before questions
const answers = await inquirer.prompt(questions);
const promptedAnswers = await inquirer.prompt(questions);
// Store answers for cross-referencing
Object.assign(this.allAnswers, answers);
// Merge prompted answers with static answers
Object.assign(allAnswers, promptedAnswers);
}
// Store all answers for cross-referencing
Object.assign(this.allAnswers, allAnswers);
// Process all answers (both static and prompted)
if (Object.keys(allAnswers).length > 0) {
const answers = allAnswers;
// Process answers and build result values
for (const key of Object.keys(answers)) {

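The refactor above hinges on one distinction, sketched here for orientation (the keys are invented; only the presence or absence of `prompt` mirrors the collector's logic):
```javascript
// Two field shapes the collector now distinguishes (illustrative keys).
const moduleConfig = {
  // Interactive field: has a prompt, so the user is asked and the
  // answer is substituted into the result template.
  user_name: {
    prompt: 'What name should agents address you by?',
    result: '{value}',
  },
  // Static field: no prompt, only a fixed result; the collector now
  // resolves it without asking and merges it with prompted answers.
  output_folder: {
    result: '{project-root}/docs',
  },
};
```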
View File

@@ -581,6 +581,11 @@ class ManifestGenerator {
*/
async writeWorkflowManifest(cfgDir) {
const csvPath = path.join(cfgDir, 'workflow-manifest.csv');
const escapeCsv = (value) => `"${String(value ?? '').replaceAll('"', '""')}"`;
const parseCsvLine = (line) => {
const columns = line.match(/(".*?"|[^",\s]+)(?=\s*,|\s*$)/g) || [];
return columns.map((c) => c.replaceAll(/^"|"$/g, ''));
};
// Read existing manifest to preserve entries
const existingEntries = new Map();
@@ -592,18 +597,21 @@
for (let i = 1; i < lines.length; i++) {
const line = lines[i];
if (line) {
// Parse CSV (simple parsing assuming no commas in quoted fields)
const parts = line.split('","');
const parts = parseCsvLine(line);
if (parts.length >= 4) {
const name = parts[0].replace(/^"/, '');
const module = parts[2];
existingEntries.set(`${module}:${name}`, line);
const [name, description, module, workflowPath] = parts;
existingEntries.set(`${module}:${name}`, {
name,
description,
module,
path: workflowPath,
});
}
}
}
}
// Create CSV header - removed standalone column as ALL workflows now generate commands
// Create CSV header - standalone column removed, everything is canonicalized to 4 columns
let csv = 'name,description,module,path\n';
// Combine existing and new workflows
@@ -617,12 +625,18 @@
// Add/update new workflows
for (const workflow of this.workflows) {
const key = `${workflow.module}:${workflow.name}`;
allWorkflows.set(key, `"${workflow.name}","${workflow.description}","${workflow.module}","${workflow.path}"`);
allWorkflows.set(key, {
name: workflow.name,
description: workflow.description,
module: workflow.module,
path: workflow.path,
});
}
// Write all workflows
for (const [, value] of allWorkflows) {
csv += value + '\n';
const row = [escapeCsv(value.name), escapeCsv(value.description), escapeCsv(value.module), escapeCsv(value.path)].join(',');
csv += row + '\n';
}
await fs.writeFile(csvPath, csv);
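For orientation, a small round-trip sketch using the two helpers introduced above (the helper bodies are copied from the diff; the sample row values are invented):
```javascript
const escapeCsv = (value) => `"${String(value ?? '').replaceAll('"', '""')}"`;
const parseCsvLine = (line) => {
  const columns = line.match(/(".*?"|[^",\s]+)(?=\s*,|\s*$)/g) || [];
  return columns.map((c) => c.replaceAll(/^"|"$/g, ''));
};

// A description containing a comma now survives the write/read cycle:
const row = ['create-prd', 'Creates a PRD, step by step', 'bmm', 'workflows/prd']
  .map(escapeCsv)
  .join(',');
console.log(parseCsvLine(row));
// -> [ 'create-prd', 'Creates a PRD, step by step', 'bmm', 'workflows/prd' ]
// Note: parseCsvLine strips only the outer quotes, so doubled quotes
// produced by escapeCsv remain doubled in the parsed values.
```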