Compare commits

20 Commits: 8bbd3736bc...a52e5869fd

| Author | SHA1 | Date |
|---|---|---|
| | a52e5869fd | |
| | 61bba10cb3 | |
| | 3e89b30b3c | |
| | b4d73b7daf | |
| | 6ff74ba662 | |
| | 1ad1f91e38 | |
| | 350688df67 | |
| | be85e5b4a0 | |
| | 04cfde1454 | |
| | 7baa30c567 | |
| | 9bcafdef51 | |
| | 92498ebb52 | |
| | fdfe23fc22 | |
| | f32d1d4e8d | |
| | f9e7d65cf9 | |
| | bfdeef0453 | |
| | 3ac8736756 | |
| | 417fc44a98 | |
| | 4e96a50515 | |
| | ad77c8e1c6 | |
@@ -7,6 +7,7 @@ on:
       - "src/**"
       - "tools/installer/**"
       - "package.json"
+      - "removals.txt"
   workflow_dispatch:
     inputs:
       channel:
@@ -135,6 +136,22 @@ jobs:
         env:
           GH_TOKEN: ${{ secrets.GITHUB_TOKEN }}

+      - name: Advance @next dist-tag to stable
+        if: github.event_name == 'workflow_dispatch' && inputs.channel == 'latest'
+        # Failure here leaves @next stale until the next push-driven prerelease
+        # republishes — annoying but not release-breaking. Don't fail the job
+        # after a successful stable publish + tag + GH release.
+        continue-on-error: true
+        run: |
+          # Without this, @latest can leapfrog @next (e.g. latest=6.5.0 while
+          # next=6.4.1-next.0) and `npx bmad-method@next install` silently
+          # downgrades users. Point @next at the just-published stable so
+          # @next >= @latest always holds; the next push-driven prerelease will
+          # bump from this base via the existing derive step above.
+          VERSION=$(node -p 'require("./package.json").version')
+          npm dist-tag add "bmad-method@${VERSION}" next
+          echo "Advanced @next dist-tag to ${VERSION}"
+
       - name: Notify Discord
         if: github.event_name == 'workflow_dispatch' && inputs.channel == 'latest'
         continue-on-error: true
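The leapfrog condition the comments describe can be checked with plain shell. A minimal sketch, assuming GNU `sort -V` version ordering; the version values are the illustrative ones from the workflow comment, not live registry data:

```shell
# Detect the "leapfrog" case: @latest has moved past the base version of @next.
latest="6.5.0"
next="6.4.1-next.0"

# Strip the prerelease suffix so sort -V compares base versions only.
next_base="${next%%-*}"

# Highest of the two base versions, per version sort.
highest=$(printf '%s\n%s\n' "$latest" "$next_base" | sort -V | tail -n1)

if [ "$highest" = "$latest" ] && [ "$latest" != "$next_base" ]; then
  echo "stale: @next ($next) is behind @latest ($latest)"
else
  echo "ok: @next ($next) is at or ahead of @latest ($latest)"
fi
```

With these sample values the check reports `@next` as stale, which is exactly the state the new workflow step repairs by re-pointing `@next` at the just-published stable version.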
@@ -68,6 +68,7 @@ Select **Yes**, then provide a source:
 | Input Type            | Example                                           |
 | --------------------- | ------------------------------------------------- |
 | HTTPS URL (any host)  | `https://github.com/org/repo`                     |
+| HTTP URL (any host)   | `http://host/org/repo`                            |
 | HTTPS URL with subdir | `https://github.com/org/repo/tree/main/my-module` |
 | SSH URL               | `git@github.com:org/repo.git`                     |
 | Local path            | `/Users/me/projects/my-module`                    |
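The input shapes in this table can be told apart with ordinary glob patterns. A hypothetical sketch of such a classifier; this is not the installer's actual detection code, and the function name is invented for illustration:

```shell
# Classify a module source string into the input types from the table above.
classify_source() {
  case "$1" in
    https://*/tree/*) echo "HTTPS URL with subdir" ;;  # check before plain HTTPS
    https://*)        echo "HTTPS URL" ;;
    http://*)         echo "HTTP URL" ;;
    git@*:*)          echo "SSH URL" ;;
    /* | ./* | ../*)  echo "Local path" ;;
    *)                echo "Unknown" ;;
  esac
}

classify_source "https://github.com/org/repo/tree/main/my-module"  # → HTTPS URL with subdir
classify_source "git@github.com:org/repo.git"                      # → SSH URL
classify_source "/Users/me/projects/my-module"                     # → Local path
```

Ordering matters: the subdir pattern must be tried before the general `https://*` pattern, mirroring how the table distinguishes the two HTTPS cases.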
@@ -68,6 +68,7 @@ Select **Yes**, then enter a source:
 | Input type                           | Example                                           |
 | --------------------- | ------------------------------------------------- |
 | HTTPS URL on any host                | `https://github.com/org/repo`                     |
+| HTTP URL on any host                 | `http://host/org/repo`                            |
 | HTTPS URL pointing to a subdirectory | `https://github.com/org/repo/tree/main/my-module` |
 | SSH URL                              | `git@github.com:org/repo.git`                     |
 | Local path                           | `/Users/me/projects/my-module`                    |
@@ -68,6 +68,7 @@ Would you like to install from a custom source (Git URL or local path)?
 | Input type                  | Example                                           |
 | -------- | ---- |
 | HTTPS URL (any host)        | `https://github.com/org/repo`                     |
+| HTTP URL (any host)         | `http://host/org/repo`                            |
 | HTTPS URL with subdirectory | `https://github.com/org/repo/tree/main/my-module` |
 | SSH URL                     | `git@github.com:org/repo.git`                     |
 | Local path                  | `/Users/me/projects/my-module`                    |
@@ -55,7 +55,8 @@ Load {planning_artifacts}/epics.md and review:
 2. **Requirements Grouping**: Group related FRs that deliver cohesive user outcomes
 3. **Incremental Delivery**: Each epic should deliver value independently
 4. **Logical Flow**: Natural progression from user's perspective
-5. **🔗 Dependency-Free Within Epic**: Stories within an epic must NOT depend on future stories
+5. **Dependency-Free Within Epic**: Stories within an epic must NOT depend on future stories
+6. **Implementation Efficiency**: Consider consolidating epics that all modify the same core files into fewer epics

 **⚠️ CRITICAL PRINCIPLE:**
 Organize by USER VALUE, not technical layers:
@@ -74,6 +75,18 @@ Organize by USER VALUE, not technical layers:
 - Epic 3: Frontend Components (creates reusable components) - **No user value**
 - Epic 4: Deployment Pipeline (CI/CD setup) - **No user value**

+**❌ WRONG Epic Examples (File Churn on Same Component):**
+
+- Epic 1: File Upload (modifies model, controller, web form, web API)
+- Epic 2: File Status (modifies model, controller, web form, web API)
+- Epic 3: File Access permissions (modifies model, controller, web form, web API)
+- All three epics touch the same files — consolidate into one epic with ordered stories
+
+**✅ CORRECT Alternative:**
+
+- Epic 1: File Management Enhancement (upload, status, permissions as stories within one epic)
+- Rationale: Single component, fully pre-designed, no feedback loop between epics
+
 **🔗 DEPENDENCY RULES:**

 - Each epic must deliver COMPLETE functionality for its domain
@@ -82,21 +95,38 @@ Organize by USER VALUE, not technical layers:

 ### 3. Design Epic Structure Collaboratively

-**Step A: Identify User Value Themes**
+**Step A: Assess Context and Identify Themes**
+
+First, assess how much of the solution design is already validated (Architecture, UX, Test Design).
+When the outcome is certain and direction changes between epics are unlikely, prefer fewer but larger epics.
+Split into multiple epics when there is a genuine risk boundary or when early feedback could change direction
+of following epics.
+
+Then, identify user value themes:

 - Look for natural groupings in the FRs
 - Identify user journeys or workflows
 - Consider user types and their goals

 **Step B: Propose Epic Structure**
-For each proposed epic:
+
+For each proposed epic (considering whether epics share the same core files):
+
 1. **Epic Title**: User-centric, value-focused
 2. **User Outcome**: What users can accomplish after this epic
 3. **FR Coverage**: Which FR numbers this epic addresses
 4. **Implementation Notes**: Any technical or UX considerations

-**Step C: Create the epics_list**
+**Step C: Review for File Overlap**
+
+Assess whether multiple proposed epics repeatedly target the same core files. If overlap is significant:
+
+- Distinguish meaningful overlap (same component end-to-end) from incidental sharing
+- Ask whether to consolidate into one epic with ordered stories
+- If confirmed, merge the epic FRs into a single epic, preserving dependency flow: each story must still fit within
+  a single dev agent's context
+
+**Step D: Create the epics_list**

 Format the epics_list as:
@@ -90,6 +90,12 @@ Review the complete epic and story breakdown to ensure EVERY FR is covered:
 - Dependencies flow naturally
 - Foundation stories only setup what's needed
 - No big upfront technical work
+- **File Churn Check:** Do multiple epics repeatedly modify the same core files?
+  - Assess whether the overlap pattern suggests unnecessary churn or is incidental
+  - If overlap is significant: Validate that splitting provides genuine value (risk mitigation, feedback loops, context size limits)
+  - If no justification for the split: Recommend consolidation into fewer epics
+  - ❌ WRONG: Multiple epics each modify the same core files with no feedback loop between them
+  - ✅ RIGHT: Epics target distinct files/components, OR consolidation was explicitly considered and rejected with rationale

 ### 5. Dependency Validation (CRITICAL)

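The File Churn Check can be mechanized once each epic's target files are listed out. A hypothetical sketch; the epic names, file paths, and the epic-to-file listing format are all invented for illustration and are not part of the workflow:

```shell
# List files claimed by more than one epic, as "epic file" pairs.
# Sample data is illustrative.
cat > /tmp/epic_files.txt <<'EOF'
epic-1 src/model.js
epic-1 src/controller.js
epic-2 src/model.js
epic-2 src/api.js
epic-3 src/model.js
EOF

# A file appearing under two or more epics signals possible churn
# and a candidate for consolidation into one epic with ordered stories.
awk '{ seen[$2]++ } END { for (f in seen) if (seen[f] > 1) print f, seen[f] }' \
  /tmp/epic_files.txt  # → src/model.js 3
```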
@@ -1,51 +1,70 @@
 num,category,method_name,description,output_pattern
-1,collaboration,Stakeholder Round Table,Convene multiple personas to contribute diverse perspectives - essential for requirements gathering and finding balanced solutions across competing interests,perspectives → synthesis → alignment
-2,collaboration,Expert Panel Review,Assemble domain experts for deep specialized analysis - ideal when technical depth and peer review quality are needed,expert views → consensus → recommendations
-3,collaboration,Debate Club Showdown,Two personas argue opposing positions while a moderator scores points - great for exploring controversial decisions and finding middle ground,thesis → antithesis → synthesis
-4,collaboration,User Persona Focus Group,Gather your product's user personas to react to proposals and share frustrations - essential for validating features and discovering unmet needs,reactions → concerns → priorities
-5,collaboration,Time Traveler Council,Past-you and future-you advise present-you on decisions - powerful for gaining perspective on long-term consequences vs short-term pressures,past wisdom → present choice → future impact
-6,collaboration,Cross-Functional War Room,Product manager + engineer + designer tackle a problem together - reveals trade-offs between feasibility desirability and viability,constraints → trade-offs → balanced solution
-7,collaboration,Mentor and Apprentice,Senior expert teaches junior while junior asks naive questions - surfaces hidden assumptions through teaching,explanation → questions → deeper understanding
-8,collaboration,Good Cop Bad Cop,Supportive persona and critical persona alternate - finds both strengths to build on and weaknesses to address,encouragement → criticism → balanced view
-9,collaboration,Improv Yes-And,Multiple personas build on each other's ideas without blocking - generates unexpected creative directions through collaborative building,idea → build → build → surprising result
-10,collaboration,Customer Support Theater,Angry customer and support rep roleplay to find pain points - reveals real user frustrations and service gaps,complaint → investigation → resolution → prevention
-11,advanced,Tree of Thoughts,Explore multiple reasoning paths simultaneously then evaluate and select the best - perfect for complex problems with multiple valid approaches,paths → evaluation → selection
-12,advanced,Graph of Thoughts,Model reasoning as an interconnected network of ideas to reveal hidden relationships - ideal for systems thinking and discovering emergent patterns,nodes → connections → patterns
-13,advanced,Thread of Thought,Maintain coherent reasoning across long contexts by weaving a continuous narrative thread - essential for RAG systems and maintaining consistency,context → thread → synthesis
-14,advanced,Self-Consistency Validation,Generate multiple independent approaches then compare for consistency - crucial for high-stakes decisions where verification matters,approaches → comparison → consensus
-15,advanced,Meta-Prompting Analysis,Step back to analyze the approach structure and methodology itself - valuable for optimizing prompts and improving problem-solving,current → analysis → optimization
-16,advanced,Reasoning via Planning,Build a reasoning tree guided by world models and goal states - excellent for strategic planning and sequential decision-making,model → planning → strategy
-17,competitive,Red Team vs Blue Team,Adversarial attack-defend analysis to find vulnerabilities - critical for security testing and building robust solutions,defense → attack → hardening
-18,competitive,Shark Tank Pitch,Entrepreneur pitches to skeptical investors who poke holes - stress-tests business viability and forces clarity on value proposition,pitch → challenges → refinement
-19,competitive,Code Review Gauntlet,Senior devs with different philosophies review the same code - surfaces style debates and finds consensus on best practices,reviews → debates → standards
-20,technical,Architecture Decision Records,Multiple architect personas propose and debate architectural choices with explicit trade-offs - ensures decisions are well-reasoned and documented,options → trade-offs → decision → rationale
-21,technical,Rubber Duck Debugging Evolved,Explain your code to progressively more technical ducks until you find the bug - forces clarity at multiple abstraction levels,simple → detailed → technical → aha
-22,technical,Algorithm Olympics,Multiple approaches compete on the same problem with benchmarks - finds optimal solution through direct comparison,implementations → benchmarks → winner
-23,technical,Security Audit Personas,Hacker + defender + auditor examine system from different threat models - comprehensive security review from multiple angles,vulnerabilities → defenses → compliance
-24,technical,Performance Profiler Panel,Database expert + frontend specialist + DevOps engineer diagnose slowness - finds bottlenecks across the full stack,symptoms → analysis → optimizations
-25,creative,SCAMPER Method,Apply seven creativity lenses (Substitute/Combine/Adapt/Modify/Put/Eliminate/Reverse) - systematic ideation for product innovation,S→C→A→M→P→E→R
-26,creative,Reverse Engineering,Work backwards from desired outcome to find implementation path - powerful for goal achievement and understanding endpoints,end state → steps backward → path forward
-27,creative,What If Scenarios,Explore alternative realities to understand possibilities and implications - valuable for contingency planning and exploration,scenarios → implications → insights
-28,creative,Random Input Stimulus,Inject unrelated concepts to spark unexpected connections - breaks creative blocks through forced lateral thinking,random word → associations → novel ideas
-29,creative,Exquisite Corpse Brainstorm,Each persona adds to the idea seeing only the previous contribution - generates surprising combinations through constrained collaboration,contribution → handoff → contribution → surprise
-30,creative,Genre Mashup,Combine two unrelated domains to find fresh approaches - innovation through unexpected cross-pollination,domain A + domain B → hybrid insights
-31,research,Literature Review Personas,Optimist researcher + skeptic researcher + synthesizer review sources - balanced assessment of evidence quality,sources → critiques → synthesis
-32,research,Thesis Defense Simulation,Student defends hypothesis against committee with different concerns - stress-tests research methodology and conclusions,thesis → challenges → defense → refinements
-33,research,Comparative Analysis Matrix,Multiple analysts evaluate options against weighted criteria - structured decision-making with explicit scoring,options → criteria → scores → recommendation
-34,risk,Pre-mortem Analysis,Imagine future failure then work backwards to prevent it - powerful technique for risk mitigation before major launches,failure scenario → causes → prevention
-35,risk,Failure Mode Analysis,Systematically explore how each component could fail - critical for reliability engineering and safety-critical systems,components → failures → prevention
-36,risk,Challenge from Critical Perspective,Play devil's advocate to stress-test ideas and find weaknesses - essential for overcoming groupthink,assumptions → challenges → strengthening
-37,risk,Identify Potential Risks,Brainstorm what could go wrong across all categories - fundamental for project planning and deployment preparation,categories → risks → mitigations
-38,risk,Chaos Monkey Scenarios,Deliberately break things to test resilience and recovery - ensures systems handle failures gracefully,break → observe → harden
-39,core,First Principles Analysis,Strip away assumptions to rebuild from fundamental truths - breakthrough technique for innovation and solving impossible problems,assumptions → truths → new approach
-40,core,5 Whys Deep Dive,Repeatedly ask why to drill down to root causes - simple but powerful for understanding failures,why chain → root cause → solution
-41,core,Socratic Questioning,Use targeted questions to reveal hidden assumptions and guide discovery - excellent for teaching and self-discovery,questions → revelations → understanding
-42,core,Critique and Refine,Systematic review to identify strengths and weaknesses then improve - standard quality check for drafts,strengths/weaknesses → improvements → refined
-43,core,Explain Reasoning,Walk through step-by-step thinking to show how conclusions were reached - crucial for transparency,steps → logic → conclusion
-44,core,Expand or Contract for Audience,Dynamically adjust detail level and technical depth for target audience - matches content to reader capabilities,audience → adjustments → refined content
-45,learning,Feynman Technique,Explain complex concepts simply as if teaching a child - the ultimate test of true understanding,complex → simple → gaps → mastery
-46,learning,Active Recall Testing,Test understanding without references to verify true knowledge - essential for identifying gaps,test → gaps → reinforcement
-47,philosophical,Occam's Razor Application,Find the simplest sufficient explanation by eliminating unnecessary complexity - essential for debugging,options → simplification → selection
-48,philosophical,Trolley Problem Variations,Explore ethical trade-offs through moral dilemmas - valuable for understanding values and difficult decisions,dilemma → analysis → decision
-49,retrospective,Hindsight Reflection,Imagine looking back from the future to gain perspective - powerful for project reviews,future view → insights → application
-50,retrospective,Lessons Learned Extraction,Systematically identify key takeaways and actionable improvements - essential for continuous improvement,experience → lessons → actions
+1,advanced,Tree of Thoughts,Explore multiple reasoning paths simultaneously then evaluate and select the best - perfect for complex problems with multiple valid approaches,paths → evaluation → selection
+2,advanced,Graph of Thoughts,Model reasoning as an interconnected network of ideas to reveal hidden relationships - ideal for systems thinking and discovering emergent patterns,nodes → connections → patterns
+3,advanced,Thread of Thought,Maintain coherent reasoning across long contexts by weaving a continuous narrative thread - essential for RAG systems and maintaining consistency,context → thread → synthesis
+4,advanced,Self-Consistency Validation,Generate multiple independent approaches then compare for consistency - crucial for high-stakes decisions where verification matters,approaches → comparison → consensus
+5,advanced,Meta-Prompting Analysis,Step back to analyze the approach structure and methodology itself - valuable for optimizing prompts and improving problem-solving,current → analysis → optimization
+6,advanced,Reasoning via Planning,Build a reasoning tree guided by world models and goal states - excellent for strategic planning and sequential decision-making,model → planning → strategy
+7,advanced,Chain-of-Thought Scaffolding,Force explicit intermediate reasoning steps before any conclusion — prevents intuitive leaps that skip flawed logic,premise → step → step → conclusion
+8,advanced,Few-Shot Exemplar Priming,Provide 2-3 worked examples of the desired reasoning pattern before the real task — aligns output format and depth through demonstration,examples → pattern recognition → application
+9,collaboration,Stakeholder Round Table,Convene multiple personas to contribute diverse perspectives - essential for requirements gathering and finding balanced solutions across competing interests,perspectives → synthesis → alignment
+10,collaboration,Expert Panel Review,Assemble domain experts for deep specialized analysis - ideal when technical depth and peer review quality are needed,expert views → consensus → recommendations
+11,collaboration,Debate Club Showdown,Two personas argue opposing positions while a moderator scores points - great for exploring controversial decisions and finding middle ground,thesis → antithesis → synthesis
+12,collaboration,User Persona Focus Group,Gather your product's user personas to react to proposals and share frustrations - essential for validating features and discovering unmet needs,reactions → concerns → priorities
+13,collaboration,Time Traveler Council,Past-you and future-you advise present-you on decisions - powerful for gaining perspective on long-term consequences vs short-term pressures,past wisdom → present choice → future impact
+14,collaboration,Cross-Functional War Room,Product manager + engineer + designer tackle a problem together - reveals trade-offs between feasibility desirability and viability,constraints → trade-offs → balanced solution
+15,collaboration,Mentor and Apprentice,Senior expert teaches junior while junior asks naive questions - surfaces hidden assumptions through teaching,explanation → questions → deeper understanding
+16,collaboration,Good Cop Bad Cop,Supportive persona and critical persona alternate - finds both strengths to build on and weaknesses to address,encouragement → criticism → balanced view
+17,collaboration,Improv Yes-And,Multiple personas build on each other's ideas without blocking - generates unexpected creative directions through collaborative building,idea → build → build → surprising result
+18,collaboration,Customer Support Theater,Angry customer and support rep roleplay to find pain points - reveals real user frustrations and service gaps,complaint → investigation → resolution → prevention
+19,collaboration,Six Thinking Hats,Rotate through six modes (facts - feelings - caution - optimism - creativity - process) to ensure a group covers every angle without crosstalk,white → red → black → yellow → green → blue
+20,collaboration,Delphi Method,Experts give independent estimates - see anonymized results - then revise — converges on calibrated group judgment while avoiding anchoring bias,independent estimates → reveal → revise → converge
+21,competitive,Red Team vs Blue Team,Adversarial attack-defend analysis to find vulnerabilities - critical for security testing and building robust solutions,defense → attack → hardening
+22,competitive,Shark Tank Pitch,Entrepreneur pitches to skeptical investors who poke holes - stress-tests business viability and forces clarity on value proposition,pitch → challenges → refinement
+23,competitive,Code Review Gauntlet,Senior devs with different philosophies review the same code - surfaces style debates and finds consensus on best practices,reviews → debates → standards
+24,core,First Principles Analysis,Strip away assumptions to rebuild from fundamental truths - breakthrough technique for innovation and solving impossible problems,assumptions → truths → new approach
+25,core,5 Whys Deep Dive,Repeatedly ask why to drill down to root causes - simple but powerful for understanding failures,why chain → root cause → solution
+26,core,Socratic Questioning,Use targeted questions to reveal hidden assumptions and guide discovery - excellent for teaching and self-discovery,questions → revelations → understanding
+27,core,Critique and Refine,Systematic review to identify strengths and weaknesses then improve - standard quality check for drafts,strengths/weaknesses → improvements → refined
+28,core,Explain Reasoning,Walk through step-by-step thinking to show how conclusions were reached - crucial for transparency,steps → logic → conclusion
+29,core,Expand or Contract for Audience,Dynamically adjust detail level and technical depth for target audience - matches content to reader capabilities,audience → adjustments → refined content
+30,core,Second-Order Thinking,Think beyond immediate consequences to anticipate cascading effects and long-term implications - essential for strategic decisions where first-order solutions create hidden downstream problems,action → consequences → second-order effects → informed choice
+31,core,Inversion Analysis,Flip the problem by asking what would guarantee failure instead of how to succeed - reveals hidden obstacles and blind spots by approaching challenges from the opposite direction,goal → invert → failure paths → avoidance → solution
+32,core,Problem Decomposition,Break a complex problem into independent sub-problems - solve each - then reassemble — essential when a task is too large or tangled to tackle whole,whole → parts → solutions → reassembly
+33,core,Analogy Mapping,Find a well-understood parallel domain and transfer its structure to the current problem — unlocks insight by borrowing proven mental models,source domain → mapping → target insight
+34,core,Steelmanning,Construct the strongest possible version of an opposing argument before responding — builds credibility and catches blind spots that strawmanning misses,opposing view → strongest form → honest rebuttal
+35,creative,SCAMPER Method,Apply seven creativity lenses (Substitute/Combine/Adapt/Modify/Put/Eliminate/Reverse) - systematic ideation for product innovation,S→C→A→M→P→E→R
+36,creative,Reverse Engineering,Work backwards from desired outcome to find implementation path - powerful for goal achievement and understanding endpoints,end state → steps backward → path forward
+37,creative,What If Scenarios,Explore alternative realities to understand possibilities and implications - valuable for contingency planning and exploration,scenarios → implications → insights
+38,creative,Random Input Stimulus,Inject unrelated concepts to spark unexpected connections - breaks creative blocks through forced lateral thinking,random word → associations → novel ideas
+39,creative,Exquisite Corpse Brainstorm,Each persona adds to the idea seeing only the previous contribution - generates surprising combinations through constrained collaboration,contribution → handoff → contribution → surprise
+40,creative,Genre Mashup,Combine two unrelated domains to find fresh approaches - innovation through unexpected cross-pollination,domain A + domain B → hybrid insights
+41,creative,Constraint Injection,Deliberately add an artificial limitation (budget - time - technology) to force novel solutions — creativity thrives under pressure,add constraint → forced creativity → remove constraint → evaluate
+42,creative,Morphological Analysis,List independent parameters of a problem - enumerate options for each - then systematically combine — ensures you don't miss non-obvious configurations,parameters → options grid → combinations → evaluation
+43,framing,Abstraction Laddering,"Move up (""why?"") for strategic clarity or down (""how?"") for tactical detail — ensures you're solving at the right altitude",concrete ↔ abstract → right level
+44,framing,Reframe the Question,Challenge whether the stated problem is the real problem — often the question itself is wrong and a better framing unlocks an easy answer,stated problem → reframe → true problem → solution
+45,framing,Stakeholder Lens Rotation,Serially adopt each stakeholder's world-view to see the same situation differently — reveals whose needs are being overlooked,perspective A → B → C → gaps found
+46,learning,Feynman Technique,Explain complex concepts simply as if teaching a child - the ultimate test of true understanding,complex → simple → gaps → mastery
+47,learning,Active Recall Testing,Test understanding without references to verify true knowledge - essential for identifying gaps,test → gaps → reinforcement
+48,learning,Deliberate Practice Loop,Identify a specific sub-skill - drill it with immediate feedback - adjust - repeat — targeted improvement beats general repetition,isolate → drill → feedback → adjust → repeat
+49,philosophical,Occam's Razor Application,Find the simplest sufficient explanation by eliminating unnecessary complexity - essential for debugging,options → simplification → selection
+50,philosophical,Trolley Problem Variations,Explore ethical trade-offs through moral dilemmas - valuable for understanding values and difficult decisions,dilemma → analysis → decision
+51,research,Literature Review Personas,Optimist researcher + skeptic researcher + synthesizer review sources - balanced assessment of evidence quality,sources → critiques → synthesis
+52,research,Thesis Defense Simulation,Student defends hypothesis against committee with different concerns - stress-tests research methodology and conclusions,thesis → challenges → defense → refinements
+53,research,Comparative Analysis Matrix,Multiple analysts evaluate options against weighted criteria - structured decision-making with explicit scoring,options → criteria → scores → recommendation
+54,research,Source Triangulation,Require at least three independent source types (quantitative - qualitative - expert) before accepting a claim — guards against single-source bias,claim → source A → source B → source C → confidence rating
+55,retrospective,Hindsight Reflection,Imagine looking back from the future to gain perspective - powerful for project reviews,future view → insights → application
+56,retrospective,Lessons Learned Extraction,Systematically identify key takeaways and actionable improvements - essential for continuous improvement,experience → lessons → actions
+57,risk,Pre-mortem Analysis,Imagine future failure then work backwards to prevent it - powerful technique for risk mitigation before major launches,failure scenario → causes → prevention
+58,risk,Failure Mode Analysis,Systematically explore how each component could fail - critical for reliability engineering and safety-critical systems,components → failures → prevention
+59,risk,Challenge from Critical Perspective,Play devil's advocate to stress-test ideas and find weaknesses - essential for overcoming groupthink,assumptions → challenges → strengthening
+60,risk,Identify Potential Risks,Brainstorm what could go wrong across all categories - fundamental for project planning and deployment preparation,categories → risks → mitigations
+61,risk,Chaos Monkey Scenarios,Deliberately break things to test resilience and recovery - ensures systems handle failures gracefully,break → observe → harden
+62,risk,Assumption Audit,Explicitly list every assumption underlying a plan - rate each by confidence and impact - then stress-test the weakest — prevents building on shaky foundations,list → rate → stress-test → shore up
+63,risk,Cascading Failure Simulation,Trace how one component's failure propagates through dependencies — reveals hidden coupling and single points of failure,trigger failure → trace propagation → find amplifiers → decouple
+64,technical,Architecture Decision Records,Multiple architect personas propose and debate architectural choices with explicit trade-offs - ensures decisions are well-reasoned and documented,options → trade-offs → decision → rationale
|
||||||
|
65,technical,Rubber Duck Debugging Evolved,Explain your code to progressively more technical ducks until you find the bug - forces clarity at multiple abstraction levels,simple → detailed → technical → aha
|
||||||
|
66,technical,Algorithm Olympics,Multiple approaches compete on the same problem with benchmarks - finds optimal solution through direct comparison,implementations → benchmarks → winner
|
||||||
|
67,technical,Security Audit Personas,Hacker + defender + auditor examine system from different threat models - comprehensive security review from multiple angles,vulnerabilities → defenses → compliance
|
||||||
|
68,technical,Performance Profiler Panel,Database expert + frontend specialist + DevOps engineer diagnose slowness - finds bottlenecks across the full stack,symptoms → analysis → optimizations
|
||||||
|
69,technical,Boundary & Edge Case Sweep,Systematically test extremes - zeros - nulls - maximums - and type mismatches — catches the failures that happy-path thinking always misses,inputs → boundaries → edge cases → failures found
|
||||||
|
|
|
||||||
|
|
|
@@ -23,13 +23,10 @@ checkForUpdate().catch(() => {
 async function checkForUpdate() {
   try {
-    // For beta versions, check the beta tag; otherwise check latest
-    const isBeta =
-      packageJson.version.includes('Beta') ||
-      packageJson.version.includes('beta') ||
-      packageJson.version.includes('alpha') ||
-      packageJson.version.includes('rc');
-    const tag = isBeta ? 'beta' : 'latest';
+    // Prereleases (e.g. 6.5.1-next.0) live on the `next` dist-tag; stable
+    // releases live on `latest`. semver.prerelease() returns null for stable,
+    // so this correctly routes pre-1.0-next/rc/etc. without string matching.
+    const tag = semver.prerelease(packageJson.version) ? 'next' : 'latest';

     const result = execSync(`npm view ${packageName}@${tag} version`, {
       encoding: 'utf8',
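The string-matching the hunk above removes can be contrasted with the new routing in a standalone sketch. This is a hypothetical stand-in, not the real `semver` package: it mimics only the one behavior the new code relies on (prerelease components for `x.y.z-…`, `null` for stable), so build metadata and loose version forms are out of scope.

```javascript
// Hypothetical stand-in for semver.prerelease(): returns the prerelease
// components of "x.y.z-…" versions and null for stable "x.y.z" versions.
function prereleaseComponents(version) {
  const match = version.match(/^\d+\.\d+\.\d+-(.+)$/);
  return match ? match[1].split('.') : null;
}

// Route prereleases to the `next` dist-tag and stable versions to `latest`,
// matching the replacement logic in the hunk above.
function distTagFor(version) {
  return prereleaseComponents(version) ? 'next' : 'latest';
}

console.log(distTagFor('6.5.1-next.0')); // next
console.log(distTagFor('6.5.0')); // latest
```

Unlike the old `includes('beta')` checks, this routes any prerelease identifier (`next`, `rc`, `alpha`, …) without enumerating them.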
@@ -435,6 +435,9 @@ class ManifestGenerator {
     // this means user-scoped keys (e.g. user_name) could mis-file into the
     // team config, so the operator should notice.
     const scopeByModuleKey = {};
+    // Maps installer moduleName (may be full display name) → module code field
+    // from module.yaml, so TOML sections use [modules.<code>] not [modules.<name>].
+    const codeByModuleName = {};
     for (const moduleName of this.updatedModules) {
       const moduleYamlPath = await resolveInstalledModuleYaml(moduleName);
       if (!moduleYamlPath) {
@@ -447,6 +450,7 @@ class ManifestGenerator {
       try {
         const parsed = yaml.parse(await fs.readFile(moduleYamlPath, 'utf8'));
         if (!parsed || typeof parsed !== 'object') continue;
+        if (parsed.code) codeByModuleName[moduleName] = parsed.code;
         scopeByModuleKey[moduleName] = {};
         for (const [key, value] of Object.entries(parsed)) {
           if (value && typeof value === 'object' && 'prompt' in value) {
@@ -545,6 +549,9 @@ class ManifestGenerator {
       if (moduleName === 'core') continue;
       const cfg = moduleConfigs[moduleName];
       if (!cfg || Object.keys(cfg).length === 0) continue;
+      // Use the module's code field from module.yaml as the TOML key so the
+      // section is [modules.mdo] not [modules.MDO: Maxio DevOps Operations].
+      const sectionKey = codeByModuleName[moduleName] || moduleName;
       // Only filter out spread-from-core pollution when we actually know
       // this module's prompt schema. For external/marketplace modules whose
       // module.yaml isn't in the src tree, fall through as all-team so we
@@ -552,14 +559,14 @@ class ManifestGenerator {
       const haveSchema = Object.keys(scopeByModuleKey[moduleName] || {}).length > 0;
       const { team: modTeam, user: modUser } = partition(moduleName, cfg, haveSchema);
       if (Object.keys(modTeam).length > 0) {
-        teamLines.push(`[modules.${moduleName}]`);
+        teamLines.push(`[modules.${sectionKey}]`);
         for (const [key, value] of Object.entries(modTeam)) {
           teamLines.push(`${key} = ${formatTomlValue(value)}`);
         }
         teamLines.push('');
       }
       if (Object.keys(modUser).length > 0) {
-        userLines.push(`[modules.${moduleName}]`);
+        userLines.push(`[modules.${sectionKey}]`);
         for (const [key, value] of Object.entries(modUser)) {
           userLines.push(`${key} = ${formatTomlValue(value)}`);
         }
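The section-key change in these hunks boils down to a lookup with a fallback. A minimal sketch, with a hypothetical mapping entry, assuming `codeByModuleName` was populated from each module.yaml's `code` field as the hunks do:

```javascript
// Hypothetical data: installer display name → code from module.yaml.
const codeByModuleName = { 'MDO: Maxio DevOps Operations': 'mdo' };

// Prefer the module.yaml code; fall back to the installer name when no
// code was discovered (e.g. modules whose module.yaml isn't in the src tree).
function tomlSectionHeader(moduleName) {
  const sectionKey = codeByModuleName[moduleName] || moduleName;
  return `[modules.${sectionKey}]`;
}

console.log(tomlSectionHeader('MDO: Maxio DevOps Operations')); // [modules.mdo]
console.log(tomlSectionHeader('bmm')); // [modules.bmm]
```

The fallback keeps external modules (no discovered `code`) writing the same section header they did before the change.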
@@ -29,6 +29,11 @@ class CommunityModuleManager {
   // Shared across all instances; the manifest writer often uses a fresh instance.
   static _resolutions = new Map();

+  // moduleCode → ResolvedModule (from PluginResolver) when the cloned repo ships
+  // a `.claude-plugin/marketplace.json`. Lets community installs reuse the same
+  // skill-level install pipeline as custom-source installs (installFromResolution).
+  static _pluginResolutions = new Map();
+
   constructor() {
     this._client = new RegistryClient();
     this._cachedIndex = null;
@@ -40,6 +45,11 @@ class CommunityModuleManager {
     return CommunityModuleManager._resolutions.get(moduleCode) || null;
   }

+  /** Get the marketplace.json-derived plugin resolution for a community module, if any. */
+  getPluginResolution(moduleCode) {
+    return CommunityModuleManager._pluginResolutions.get(moduleCode) || null;
+  }
+
   // ─── Data Loading ──────────────────────────────────────────────────────────

   /**
@@ -371,6 +381,18 @@ class CommunityModuleManager {
       planSource: planEntry.source,
     });

+    // If the repo ships a marketplace.json, route through PluginResolver so the
+    // skill-level install pipeline (installFromResolution) handles the copy.
+    // Repos without marketplace.json fall through to the legacy findModuleSource
+    // path unchanged.
+    await this._tryResolveMarketplacePlugin(moduleCacheDir, moduleInfo, {
+      channel: planEntry.channel,
+      version: recordedVersion,
+      sha: installedSha,
+      approvedTag,
+      approvedSha,
+    });
+
     // Install dependencies if needed
     const packageJsonPath = path.join(moduleCacheDir, 'package.json');
     if ((needsDependencyInstall || wasNewClone) && (await fs.pathExists(packageJsonPath))) {
@@ -392,6 +414,204 @@ class CommunityModuleManager {
     return moduleCacheDir;
   }

+  // ─── Marketplace.json Resolution ──────────────────────────────────────────
+
+  /**
+   * Detect `.claude-plugin/marketplace.json` in a cloned community repo and
+   * route through PluginResolver. When successful, caches the resolution so
+   * OfficialModulesManager.install() can route the copy through
+   * installFromResolution() — the same path used by custom-source installs.
+   *
+   * Silent no-op when marketplace.json is absent or the resolver returns no
+   * matches; the legacy findModuleSource path then handles the install.
+   *
+   * @param {string} repoPath - Absolute path to the cloned repo
+   * @param {Object} moduleInfo - Normalized community module info
+   * @param {Object} resolution - Resolution metadata from cloneModule
+   * @param {string} resolution.channel - Channel ('stable' | 'next' | 'pinned')
+   * @param {string} resolution.version - Recorded version string
+   * @param {string} resolution.sha - Resolved git SHA
+   * @param {string|null} resolution.approvedTag - Registry approved tag
+   * @param {string|null} resolution.approvedSha - Registry approved SHA
+   */
+  async _tryResolveMarketplacePlugin(repoPath, moduleInfo, resolution) {
+    const marketplacePath = path.join(repoPath, '.claude-plugin', 'marketplace.json');
+    if (!(await fs.pathExists(marketplacePath))) return;
+
+    let marketplaceData;
+    try {
+      marketplaceData = JSON.parse(await fs.readFile(marketplacePath, 'utf8'));
+    } catch {
+      // Malformed marketplace.json — fall through to legacy path.
+      return;
+    }
+
+    const plugins = Array.isArray(marketplaceData?.plugins) ? marketplaceData.plugins : [];
+    if (plugins.length === 0) return;
+
+    const selection = this._selectPluginForModule(plugins, moduleInfo);
+    if (!selection) {
+      await this._safeWarn(
+        `Community module '${moduleInfo.code}' ships marketplace.json but no plugin entry matches the registry code. ` +
+          `Falling back to legacy install path.`,
+      );
+      return;
+    }
+
+    if (selection.source === 'single-fallback') {
+      // Single-entry marketplace.json whose plugin name doesn't match the registry
+      // code or the module_definition hint. Most likely correct, but worth surfacing
+      // in case marketplace.json is misconfigured and we'd install the wrong plugin.
+      await this._safeWarn(
+        `Community module '${moduleInfo.code}' picked the only plugin in marketplace.json ('${selection.plugin?.name}') ` +
+          `because no name or module_definition match was found. Verify marketplace.json if the install looks wrong.`,
+      );
+    }
+
+    const { PluginResolver } = require('./plugin-resolver');
+    const resolver = new PluginResolver();
+    let resolved;
+    try {
+      resolved = await resolver.resolve(repoPath, selection.plugin);
+    } catch (error) {
+      // PluginResolver threw (malformed plugin entry, missing files, etc.).
+      // Honor the silent-fallthrough contract — warn and let the legacy
+      // findModuleSource path handle the install.
+      await this._safeWarn(
+        `PluginResolver failed for community module '${moduleInfo.code}': ${error.message}. ` + `Falling back to legacy install path.`,
+      );
+      return;
+    }
+    if (!resolved || resolved.length === 0) return;
+
+    // The registry registers a single code per module. If the resolver returns
+    // multiple modules (Strategy 4: multiple standalone skills), accept only
+    // the entry whose code matches the registry. Other entries are ignored —
+    // they belong to plugins not registered in the community catalog.
+    const matched = resolved.find((mod) => mod.code === moduleInfo.code) || (resolved.length === 1 ? resolved[0] : null);
+    if (!matched) return;
+
+    // Shallow-clone before stamping provenance — the resolver may cache or reuse
+    // its return objects, and we don't want install-specific fields leaking back.
+    const stamped = {
+      ...matched,
+      code: moduleInfo.code,
+      repoUrl: moduleInfo.url,
+      cloneRef: resolution.channel === 'pinned' ? resolution.version : resolution.approvedTag || null,
+      cloneSha: resolution.sha,
+      communitySource: true,
+      communityChannel: resolution.channel,
+      communityVersion: resolution.version,
+      registryApprovedTag: resolution.approvedTag,
+      registryApprovedSha: resolution.approvedSha,
+    };
+
+    CommunityModuleManager._pluginResolutions.set(moduleInfo.code, stamped);
+  }
+
+  /**
+   * Lazy fallback: resolve marketplace.json straight from the on-disk cache
+   * when `_pluginResolutions` is empty (e.g. callers that reach `install()`
+   * without `cloneModule` having populated the cache earlier in this process).
+   *
+   * Reuses an existing channel resolution if present; otherwise synthesizes a
+   * minimal stable-channel stub from the registry entry + the cached repo's
+   * current HEAD. Returns the cached plugin resolution if one is produced,
+   * otherwise null (caller falls back to the legacy path).
+   *
+   * @param {string} moduleCode
+   * @returns {Promise<Object|null>}
+   */
+  async resolveFromCache(moduleCode) {
+    const existing = this.getPluginResolution(moduleCode);
+    if (existing) return existing;
+
+    const cacheRepoDir = path.join(this.getCacheDir(), moduleCode);
+    const marketplacePath = path.join(cacheRepoDir, '.claude-plugin', 'marketplace.json');
+    if (!(await fs.pathExists(marketplacePath))) return null;
+
+    let moduleInfo;
+    try {
+      moduleInfo = await this.getModuleByCode(moduleCode);
+    } catch {
+      return null;
+    }
+    if (!moduleInfo) return null;
+
+    let channelResolution = this.getResolution(moduleCode);
+    if (!channelResolution) {
+      let sha = '';
+      try {
+        sha = execSync('git rev-parse HEAD', { cwd: cacheRepoDir, stdio: 'pipe' }).toString().trim();
+      } catch {
+        // Not a git repo or unreadable — give up and let the legacy path run.
+        return null;
+      }
+      channelResolution = {
+        channel: 'stable',
+        version: moduleInfo.approvedTag || sha.slice(0, 7),
+        sha,
+        registryApprovedTag: moduleInfo.approvedTag || null,
+        registryApprovedSha: moduleInfo.approvedSha || null,
+      };
+    }
+
+    await this._tryResolveMarketplacePlugin(cacheRepoDir, moduleInfo, {
+      channel: channelResolution.channel,
+      version: channelResolution.version,
+      sha: channelResolution.sha,
+      approvedTag: channelResolution.registryApprovedTag,
+      approvedSha: channelResolution.registryApprovedSha,
+    });
+
+    return this.getPluginResolution(moduleCode);
+  }
+
+  /**
+   * Best-effort warning emitter. `prompts.log.warn` may be undefined in some
+   * harnesses and may return a rejected promise — swallow both cases so a
+   * fallthrough warning can never crash the install.
+   */
+  async _safeWarn(message) {
+    try {
+      const result = prompts.log?.warn?.(message);
+      if (result && typeof result.then === 'function') await result;
+    } catch {
+      /* ignore */
+    }
+  }
+
+  /**
+   * Pick which plugin entry from marketplace.json represents this community module.
+   * Precedence:
+   *   1. Exact match on `plugin.name === moduleInfo.code`
+   *   2. Trailing directory of `module_definition` matches `plugin.name`
+   *   3. Single plugin in marketplace.json — accepted with a warning so a
+   *      mismatched-but-uniquely-named plugin doesn't install silently.
+   * Otherwise null (caller falls back to legacy path).
+   *
+   * @returns {{plugin: Object, source: 'name'|'hint'|'single-fallback'}|null}
+   */
+  _selectPluginForModule(plugins, moduleInfo) {
+    const byCode = plugins.find((p) => p && p.name === moduleInfo.code);
+    if (byCode) return { plugin: byCode, source: 'name' };
+
+    if (moduleInfo.moduleDefinition) {
+      // module_definition like "src/skills/suno-setup/assets/module.yaml" →
+      // hint segment "suno-setup". Match that against plugin names.
+      const segments = moduleInfo.moduleDefinition.split('/').filter(Boolean);
+      const setupIdx = segments.findIndex((s) => s.endsWith('-setup'));
+      if (setupIdx !== -1) {
+        const hint = segments[setupIdx];
+        const byHint = plugins.find((p) => p && p.name === hint);
+        if (byHint) return { plugin: byHint, source: 'hint' };
+      }
+    }
+
+    if (plugins.length === 1) return { plugin: plugins[0], source: 'single-fallback' };
+    return null;
+  }
+
   // ─── Source Finding ───────────────────────────────────────────────────────

   /**
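The three-tier precedence that `_selectPluginForModule` implements can be exercised in isolation. A simplified sketch (it collapses the findIndex walk over path segments into a single `find`; the plugin and module names are illustrative):

```javascript
// Pick the marketplace.json plugin entry for a registry module:
// 1. exact name match on the registry code,
// 2. the *-setup segment of module_definition as a hint,
// 3. single-entry fallback (flagged so the caller can warn).
function selectPlugin(plugins, moduleInfo) {
  const byCode = plugins.find((p) => p && p.name === moduleInfo.code);
  if (byCode) return { plugin: byCode, source: 'name' };

  if (moduleInfo.moduleDefinition) {
    // "src/skills/suno-setup/assets/module.yaml" → hint "suno-setup"
    const segments = moduleInfo.moduleDefinition.split('/').filter(Boolean);
    const hint = segments.find((s) => s.endsWith('-setup'));
    const byHint = hint && plugins.find((p) => p && p.name === hint);
    if (byHint) return { plugin: byHint, source: 'hint' };
  }

  if (plugins.length === 1) return { plugin: plugins[0], source: 'single-fallback' };
  return null;
}
```

Returning the `source` alongside the plugin is what lets the caller warn only on the weakest match (`single-fallback`) while staying silent on confident ones.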
@@ -24,8 +24,9 @@ class CustomModuleManager {

   /**
    * Parse a user-provided source input into a structured descriptor.
-   * Accepts local file paths, HTTPS Git URLs, and SSH Git URLs.
-   * For HTTPS URLs with deep paths (e.g., /tree/main/subdir), extracts the subdir.
+   * Accepts local file paths, HTTPS Git URLs, HTTP Git URLs, and SSH Git URLs.
+   * For HTTPS/HTTP URLs with deep paths (e.g., /tree/main/subdir), extracts the subdir.
+   * The original protocol (http or https) is preserved in the returned cloneUrl.
    *
    * @param {string} input - URL or local file path
    * @returns {Object} Parsed source descriptor:
@@ -127,11 +128,11 @@ class CustomModuleManager {
     };
   }

-    // HTTPS URL: https://host/owner/repo[/tree/branch/subdir][.git]
-    const httpsMatch = trimmed.match(/^https?:\/\/([^/]+)\/([^/]+)\/([^/.]+?)(?:\.git)?(\/.*)?$/);
+    // HTTPS/HTTP URL: https://host/owner/repo[/tree/branch/subdir][.git]
+    const httpsMatch = trimmed.match(/^(https?):\/\/([^/]+)\/([^/]+)\/([^/.]+?)(?:\.git)?(\/.*)?$/);
     if (httpsMatch) {
-      const [, host, owner, repo, remainder] = httpsMatch;
-      const cloneUrl = `https://${host}/${owner}/${repo}`;
+      const [, protocol, host, owner, repo, remainder] = httpsMatch;
+      const cloneUrl = `${protocol}://${host}/${owner}/${repo}`;
       let subdir = null;
       let urlRef = null; // branch/tag extracted from /tree/<ref>/subdir
@@ -311,7 +312,7 @@ class CustomModuleManager {
   /**
    * Clone a custom module repository to cache.
    * Supports any Git host (GitHub, GitLab, Bitbucket, self-hosted, etc.).
-   * @param {string} sourceInput - Git URL (HTTPS or SSH)
+   * @param {string} sourceInput - Git URL (HTTPS, HTTP, or SSH)
    * @param {Object} [options] - Clone options
    * @param {boolean} [options.silent] - Suppress spinner output
    * @param {boolean} [options.skipInstall] - Skip npm install (for browsing before user confirms)
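The new capture group is the whole fix: `(https?)` captures the scheme so an `http://` self-hosted Git URL no longer gets rewritten to `https://`. A hypothetical standalone sketch of just that parse (the `/tree/<ref>/<subdir>` remainder handling is elided; the URLs below are illustrative):

```javascript
// Parse an HTTP(S) Git URL, preserving the original protocol in cloneUrl.
function parseGitUrl(input) {
  const m = input.trim().match(/^(https?):\/\/([^/]+)\/([^/]+)\/([^/.]+?)(?:\.git)?(\/.*)?$/);
  if (!m) return null; // SSH URLs and local paths are handled elsewhere
  const [, protocol, host, owner, repo, remainder] = m;
  return {
    cloneUrl: `${protocol}://${host}/${owner}/${repo}`,
    remainder: remainder || null, // e.g. "/tree/main/subdir"
  };
}
```

Before the change, the equivalent code hardcoded `https://` when rebuilding `cloneUrl`, which broke cloning from plain-HTTP internal hosts even though the regex already accepted `http`.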
@@ -269,6 +269,21 @@ class OfficialModules {
       return this.installFromResolution(resolved, bmadDir, fileTrackingCallback, options);
     }

+    // Community modules whose cloned repo ships marketplace.json get the same
+    // skill-level install treatment as custom-source installs. If the in-process
+    // cache wasn't populated (e.g. caller skipped the pre-clone phase), fall
+    // back to resolving directly from `~/.bmad/cache/community-modules/<name>/`
+    // so we don't silently regress to the legacy half-install path.
+    const { CommunityModuleManager } = require('./community-manager');
+    const communityMgr = new CommunityModuleManager();
+    let communityResolved = communityMgr.getPluginResolution(moduleName);
+    if (!communityResolved) {
+      communityResolved = await communityMgr.resolveFromCache(moduleName);
+    }
+    if (communityResolved) {
+      return this.installFromResolution(communityResolved, bmadDir, fileTrackingCallback, options);
+    }
+
     const sourcePath = await this.findModuleSource(moduleName, {
       silent: options.silent,
       channelOptions: options.channelOptions,
@@ -360,21 +375,27 @@ class OfficialModules {
       await this.createModuleDirectories(resolved.code, bmadDir, options);
     }

-    // Update manifest. For custom modules, derive channel from the git ref:
-    // cloneRef present → pinned at that ref
-    // cloneRef absent → next (main HEAD)
-    // local path → no channel concept
+    // Update manifest. For community installs we honor the channel resolved by
+    // CommunityModuleManager (stable/next/pinned) and propagate the registry's
+    // approved tag/sha. For custom-source installs we derive channel from the
+    // cloneRef (present → pinned, absent → next; local paths have no channel).
     const { Manifest } = require('../core/manifest');
     const manifestObj = new Manifest();

     const hasGitClone = !!resolved.repoUrl;
+    const isCommunity = resolved.communitySource === true;
     const manifestEntry = {
-      version: resolved.cloneRef || (hasGitClone ? 'main' : resolved.version || null),
-      source: 'custom',
+      version: resolved.communityVersion || resolved.cloneRef || (hasGitClone ? 'main' : resolved.version || null),
+      source: isCommunity ? 'community' : 'custom',
       npmPackage: null,
       repoUrl: resolved.repoUrl || null,
     };
-    if (hasGitClone) {
+    if (isCommunity) {
+      if (resolved.communityChannel) manifestEntry.channel = resolved.communityChannel;
+      if (resolved.cloneSha) manifestEntry.sha = resolved.cloneSha;
+      if (resolved.registryApprovedTag) manifestEntry.registryApprovedTag = resolved.registryApprovedTag;
+      if (resolved.registryApprovedSha) manifestEntry.registryApprovedSha = resolved.registryApprovedSha;
+    } else if (hasGitClone) {
       manifestEntry.channel = resolved.cloneRef ? 'pinned' : 'next';
       if (resolved.cloneSha) manifestEntry.sha = resolved.cloneSha;
       if (resolved.rawInput) manifestEntry.rawSource = resolved.rawInput;
@@ -386,10 +407,13 @@ class OfficialModules {
       success: true,
       module: resolved.code,
       path: targetPath,
-      // Match the manifestEntry.version expression above so downstream summary
-      // lines show the cloned ref (tag or 'main') instead of the on-disk
-      // package.json version for git-backed custom installs.
-      versionInfo: { version: resolved.cloneRef || (hasGitClone ? 'main' : resolved.version || '') },
+      // Mirror the manifestEntry.version precedence above so downstream summary
+      // lines show the same string we just wrote to disk (community installs
+      // use the registry-approved tag via `communityVersion`; custom git-backed
+      // installs show the cloned ref or 'main').
+      versionInfo: {
+        version: resolved.communityVersion || resolved.cloneRef || (hasGitClone ? 'main' : resolved.version || ''),
+      },
     };
   }
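The version precedence written to the manifest (and mirrored in `versionInfo`) is a chained fallback. A minimal sketch with illustrative inputs, assuming the field names from the hunk above:

```javascript
// Community installs record the registry-approved version; git-backed custom
// installs record the cloned ref (or 'main'); local installs record the
// module's own version, else null.
function manifestVersion(resolved) {
  const hasGitClone = !!resolved.repoUrl;
  return resolved.communityVersion || resolved.cloneRef || (hasGitClone ? 'main' : resolved.version || null);
}
```

Because `communityVersion` sits first in the chain, a community install pinned to a registry tag wins over the raw `cloneRef`, while the pre-existing custom-install behavior is untouched when that field is absent.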
@ -1,5 +1,6 @@
|
||||||
const path = require('node:path');
|
const path = require('node:path');
|
||||||
const os = require('node:os');
|
const os = require('node:os');
|
||||||
|
const yaml = require('yaml');
|
||||||
const fs = require('./fs-native');
|
const fs = require('./fs-native');
|
||||||
|
|
||||||
/**
|
/**
|
||||||
|
|
@ -86,6 +87,11 @@ function getExternalModuleCachePath(moduleName, ...segments) {
|
||||||
* Built-in modules (core, bmm) live under <src>. External official modules are
|
* Built-in modules (core, bmm) live under <src>. External official modules are
|
||||||
* cloned into ~/.bmad/cache/external-modules/<name>/ with varying internal
|
* cloned into ~/.bmad/cache/external-modules/<name>/ with varying internal
|
||||||
* layouts (some at src/module.yaml, some at skills/module.yaml, some nested).
|
* layouts (some at src/module.yaml, some at skills/module.yaml, some nested).
|
||||||
|
* Url-source custom modules are cloned into ~/.bmad/cache/custom-modules/<host>/<owner>/<repo>/
|
||||||
|
* and are resolved by walking the cache and matching `code` or `name` from the
|
||||||
|
* discovered module.yaml. Local custom-source modules are not cached; their
|
||||||
|
* path is read from the CustomModuleManager resolution cache set during the
|
||||||
|
* same install run.
|
||||||
* This mirrors the candidate-path search in
|
* This mirrors the candidate-path search in
|
||||||
* ExternalModuleManager.findExternalModuleSource but performs no git/network
|
* ExternalModuleManager.findExternalModuleSource but performs no git/network
|
||||||
* work, which keeps it safe to call during manifest writing.
|
* work, which keeps it safe to call during manifest writing.
|
||||||
|
|
@@ -97,26 +103,113 @@ async function resolveInstalledModuleYaml(moduleName) {
   const builtIn = path.join(getModulePath(moduleName), 'module.yaml');
   if (await fs.pathExists(builtIn)) return builtIn;
 
-  const cacheRoot = getExternalModuleCachePath(moduleName);
-  if (!(await fs.pathExists(cacheRoot))) return null;
-
-  for (const dir of ['skills', 'src']) {
-    const direct = path.join(cacheRoot, dir, 'module.yaml');
-    if (await fs.pathExists(direct)) return direct;
-
-    const dirPath = path.join(cacheRoot, dir);
-    if (await fs.pathExists(dirPath)) {
-      const entries = await fs.readdir(dirPath, { withFileTypes: true });
-      for (const entry of entries) {
-        if (!entry.isDirectory()) continue;
-        const nested = path.join(dirPath, entry.name, 'module.yaml');
-        if (await fs.pathExists(nested)) return nested;
-      }
-    }
-  }
-
-  const atRoot = path.join(cacheRoot, 'module.yaml');
-  if (await fs.pathExists(atRoot)) return atRoot;
+  // Collect every module.yaml under a root using the standard candidate paths.
+  // Url-source repos can host multiple plugins (discovery mode), so we need all
+  // matches, not just the first. Returned in priority order.
+  async function searchRootAll(root) {
+    const results = [];
+    for (const dir of ['skills', 'src']) {
+      const direct = path.join(root, dir, 'module.yaml');
+      if (await fs.pathExists(direct)) results.push(direct);
+
+      const dirPath = path.join(root, dir);
+      if (await fs.pathExists(dirPath)) {
+        const entries = await fs.readdir(dirPath, { withFileTypes: true });
+        for (const entry of entries) {
+          if (!entry.isDirectory()) continue;
+          const nested = path.join(dirPath, entry.name, 'module.yaml');
+          if (await fs.pathExists(nested)) results.push(nested);
+        }
+      }
+    }
+
+    // BMB standard: {setup-skill}/assets/module.yaml (setup skill is any *-setup directory).
+    // Check at the repo root, and also under src/skills/ and skills/ since
+    // marketplace plugins commonly nest skills under src/skills/<name>/.
+    const setupSearchRoots = [root, path.join(root, 'src', 'skills'), path.join(root, 'skills')];
+    for (const setupRoot of setupSearchRoots) {
+      if (!(await fs.pathExists(setupRoot))) continue;
+      const entries = await fs.readdir(setupRoot, { withFileTypes: true });
+      for (const entry of entries) {
+        if (!entry.isDirectory() || !entry.name.endsWith('-setup')) continue;
+        const setupAssets = path.join(setupRoot, entry.name, 'assets', 'module.yaml');
+        if (await fs.pathExists(setupAssets)) results.push(setupAssets);
+      }
+    }
+
+    const atRoot = path.join(root, 'module.yaml');
+    if (await fs.pathExists(atRoot)) results.push(atRoot);
+    return results;
+  }
+
+  // Backwards-compatible single-result variant for the existing external-cache
+  // and resolution-cache fallbacks (one module per root by construction).
+  async function searchRoot(root) {
+    const all = await searchRootAll(root);
+    return all.length > 0 ? all[0] : null;
+  }
+
+  const cacheRoot = getExternalModuleCachePath(moduleName);
+  if (await fs.pathExists(cacheRoot)) {
+    const found = await searchRoot(cacheRoot);
+    if (found) return found;
+  }
+
+  // Community modules are cloned to ~/.bmad/cache/community-modules/<name>/
+  // (parallel to the external-modules cache used above). Search there too so
+  // collectAgentsFromModuleYaml and writeCentralConfig can locate community
+  // module.yaml files regardless of how nested the layout is.
+  const communityCacheRoot = path.join(os.homedir(), '.bmad', 'cache', 'community-modules', moduleName);
+  if (await fs.pathExists(communityCacheRoot)) {
+    const found = await searchRoot(communityCacheRoot);
+    if (found) return found;
+  }
+
+  // Fallback: local custom-source modules store their source path in the
+  // CustomModuleManager resolution cache populated during the same install run.
+  // Match by code OR name since callers may use either form.
+  try {
+    const { CustomModuleManager } = require('./modules/custom-module-manager');
+    for (const [, mod] of CustomModuleManager._resolutionCache) {
+      if ((mod.code === moduleName || mod.name === moduleName) && mod.localPath) {
+        const found = await searchRoot(mod.localPath);
+        if (found) return found;
+      }
+    }
+  } catch {
+    // Resolution cache unavailable — continue
+  }
+
+  // Fallback: url-source custom modules cloned to ~/.bmad/cache/custom-modules/.
+  // Walk every cached repo, enumerate ALL module.yaml files via searchRootAll
+  // (a single repo can host multiple plugins in discovery mode), and match by
+  // the yaml's `code` or `name` field. This works on re-install runs where
+  // _resolutionCache is empty and covers both discovery-mode (with marketplace.json)
+  // and direct-mode modules, since we identify repo roots by .bmad-source.json
+  // (written by cloneRepo) or .claude-plugin/ rather than by marketplace.json.
+  try {
+    const customCacheDir = path.join(os.homedir(), '.bmad', 'cache', 'custom-modules');
+    if (await fs.pathExists(customCacheDir)) {
+      const { CustomModuleManager } = require('./modules/custom-module-manager');
+      const customMgr = new CustomModuleManager();
+      const repoRoots = await customMgr._findCacheRepoRoots(customCacheDir);
+      for (const { repoPath } of repoRoots) {
+        const candidates = await searchRootAll(repoPath);
+        for (const candidate of candidates) {
+          try {
+            const parsed = yaml.parse(await fs.readFile(candidate, 'utf8'));
+            if (parsed && (parsed.code === moduleName || parsed.name === moduleName)) {
+              return candidate;
+            }
+          } catch {
+            // Malformed yaml — skip
+          }
+        }
+      }
+    }
+  } catch {
+    // Custom-modules cache walk failed — continue
+  }
+
   return null;
 }
@@ -2,6 +2,7 @@ const path = require('node:path');
 const os = require('node:os');
 const semver = require('semver');
 const fs = require('./fs-native');
+const installerPackageJson = require('../../package.json');
 const { CLIUtils } = require('./cli-utils');
 const { ExternalModuleManager } = require('./modules/external-manager');
 const { resolveModuleVersion } = require('./modules/version-resolver');
@@ -128,6 +129,24 @@ class UI {
       await prompts.log.warn(warning);
     }
 
+    // When the user launched the installer from a prerelease (npx bmad-method@next),
+    // mirror that intent for external modules: seed the global channel to 'next' so
+    // the module picker's version labels resolve from main HEAD (matching what
+    // actually gets installed) and the interactive channel gate skips — the user
+    // already declared "next" intent by typing @next. Explicit channel flags
+    // override this seed.
+    if (
+      semver.prerelease(installerPackageJson.version) !== null &&
+      !channelOptions.global &&
+      channelOptions.nextSet.size === 0 &&
+      channelOptions.pins.size === 0
+    ) {
+      channelOptions.global = 'next';
+      await prompts.log.info(
+        'Launched from a prerelease — installing all external modules from main HEAD (next channel). Pass --all-stable or --pin to override.',
+      );
+    }
+
     // Get directory from options or prompt
     let confirmedDirectory;
     if (options.directory) {
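The seed condition above hinges on `semver.prerelease()`, which returns the prerelease identifiers of a version, or `null` for a stable release. A minimal dependency-free sketch of that check (the real code uses the `semver` package, which also handles build metadata, numeric identifier coercion, and loose parsing):

```javascript
// Minimal sketch of the prerelease test. Assumes a well-formed semver string;
// semver.prerelease() is the authoritative implementation.
function prereleaseComponents(version) {
  const match = /^\d+\.\d+\.\d+-([0-9A-Za-z.-]+)/.exec(version);
  return match ? match[1].split('.') : null;
}

console.log(prereleaseComponents('6.4.1-next.0')); // ['next', '0'] → seed channel to 'next'
console.log(prereleaseComponents('6.5.0'));        // null → stable launch, no seeding
```

So a user who typed `npx bmad-method@next install` (and therefore runs a `-next.N` installer) gets the next channel by default, while a stable installer leaves the channel untouched.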
@@ -181,12 +200,15 @@ class UI {
       actionType = options.action;
       await prompts.log.info(`Using action from command-line: ${actionType}`);
     } else if (options.yes) {
-      // Default to quick-update if available, otherwise first available choice
+      // Default to quick-update if available, unless flags that require the
+      // full update path are present (e.g. --custom-source which re-clones
+      // modules at a new version — quick-update skips that entirely).
       if (choices.length === 0) {
         throw new Error('No valid actions available for this installation');
       }
       const hasQuickUpdate = choices.some((c) => c.value === 'quick-update');
-      actionType = hasQuickUpdate ? 'quick-update' : choices[0].value;
+      const needsFullUpdate = !!options.customSource;
+      actionType = hasQuickUpdate && !needsFullUpdate ? 'quick-update' : (choices.find((c) => c.value === 'update') || choices[0]).value;
       await prompts.log.info(`Non-interactive mode (--yes): defaulting to ${actionType}`);
     } else {
       actionType = await prompts.select({
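The new `--yes` default-action rule can be isolated as a pure function to make the precedence explicit. The helper name and sample choices are hypothetical; the real logic is inline in the hunk above:

```javascript
// Hypothetical extraction of the --yes default-action rule: prefer
// quick-update unless a flag (--custom-source) requires the full update path;
// then prefer 'update', falling back to the first available choice.
function defaultAction(choices, options) {
  if (choices.length === 0) {
    throw new Error('No valid actions available for this installation');
  }
  const hasQuickUpdate = choices.some((c) => c.value === 'quick-update');
  const needsFullUpdate = !!options.customSource;
  if (hasQuickUpdate && !needsFullUpdate) return 'quick-update';
  return (choices.find((c) => c.value === 'update') || choices[0]).value;
}

const choices = [{ value: 'quick-update' }, { value: 'update' }, { value: 'reinstall' }];
console.log(defaultAction(choices, {}));                        // 'quick-update'
console.log(defaultAction(choices, { customSource: 'repo' }));  // 'update'
```

Before this change, `--yes --custom-source` could pick quick-update and silently skip the re-clone the custom source was asked for.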
@@ -222,8 +244,11 @@ class UI {
         .map((m) => m.trim())
         .filter(Boolean);
       await prompts.log.info(`Using modules from command-line: ${selectedModules.join(', ')}`);
-    } else if (options.customSource) {
-      // Custom source without --modules: start with empty list (core added below)
+    } else if (options.customSource && !options.yes) {
+      // Custom source without --modules or --yes: start with empty list
+      // (only custom source modules + core will be installed).
+      // When --yes is also set, fall through to the --yes branch so all
+      // installed modules are included alongside the custom source modules.
       selectedModules = [];
     } else if (options.yes) {
       selectedModules = await this.getDefaultModules(installedModuleIds);
@@ -332,8 +357,10 @@ class UI {
 
     // Interactive channel gate: "Ready to install (all stable)? [Y/n]"
     // Only shown for fresh installs with no channel flags and an external module
-    // selected. Non-interactive installs skip this and fall through to the
-    // registry default (stable) or whatever flags were supplied.
+    // selected. Skipped for prerelease launches because channelOptions.global
+    // was already seeded to 'next' upstream. Non-interactive installs skip this
+    // and fall through to the registry default (stable) or whatever flags were
+    // supplied.
     await this._interactiveChannelGate({ options, channelOptions, selectedModules });
 
     let toolSelection = await this.promptToolSelection(confirmedDirectory, options);
@@ -1783,7 +1810,9 @@ class UI {
    *
    * Skipped when:
    * - running non-interactively (--yes)
-   * - the user already passed channel flags (--channel / --pin / --next)
+   * - the user already passed channel flags (--channel / --pin / --next), OR
+   *   the installer was launched from a prerelease (which seeds
+   *   channelOptions.global = 'next' upstream in promptInstall)
    * - no externals/community modules are selected
    *
    * Mutates channelOptions.pins and channelOptions.nextSet to reflect picker choices.