Add Design Space with agent messaging, knowledge capture, and feedback loops

Design Space v4.0.0 — cross-LLM, cross-IDE agent communication where every
message becomes searchable design knowledge. Agents talk in natural language,
register presence, discover peers, and hand off work across Claude Code,
ChatGPT, Cursor, and any HTTP client.

New features:
- Workflow 12: Agent Messaging (check inbox, send, manage presence)
- Workflow 10: Design Feedback Loop (before/after learning pairs)
- Workflow 11: Knowledge Capture (guided insight capture)
- Workflow 9: Site Analysis (visual + structural DNA capture)
- Agent messaging guides for Saga and Freya
- Protocol v4.0.0 with messaging principles, consent gate, no agent instructions
- MCP config templates for Claude Code and Cursor
- Supabase setup guide for self-hosted deployment
- Module 19 lessons 6-7 (agent messaging, collaboration patterns)

Security:
- All hardcoded Supabase URLs/keys replaced with {DESIGN_SPACE_URL} placeholders
- Credentials configured per-deployment via env vars

Infrastructure repos:
- github.com/whiteport-collective/design-space-infrastructure (Supabase backend)
- github.com/whiteport-collective/design-space-mcp (MCP server)

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Mårten Angner 2026-03-06 14:44:50 +01:00
parent dd5fa936bb
commit def4f8160f
61 changed files with 5018 additions and 53 deletions


@ -0,0 +1,270 @@
# Design Space — Agent Instructions
> Load this file at the start of any session to participate in the Design Space.
---
## What Is the Design Space?
The Design Space is shared agent memory for design work. It stores accumulated knowledge — design patterns, experiments, preferences, component experiences, methodology insights — as semantic embeddings in a vector database. Every agent that reads and writes to the Space builds on the work of every other agent.
A Design System is the cogs (tokens, components, patterns). The Design Space is the consciousness — the living environment where design happens across products, accumulating decisions, experiments, and outcomes over time.
---
## How to Access the Design Space
Agents interact with the Design Space via **direct HTTP calls** to Supabase Edge Functions. No MCP server required.
### Connection Details
```
Base URL: {DESIGN_SPACE_URL}
API Key: {DESIGN_SPACE_ANON_KEY}
```
All calls use:
```
POST {Base URL}/functions/v1/{function-name}
Headers:
Content-Type: application/json
Authorization: Bearer {API Key}
```
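The shared call pattern can be wrapped in a small helper. Here is a minimal Python sketch; the URL and key values are placeholders for your deployment, and the helper name is ours, not part of the API. It only constructs the request, so you can inspect it before sending with `urllib.request.urlopen`:

```python
import json
import urllib.request

DESIGN_SPACE_URL = "https://example.supabase.co"  # placeholder for {DESIGN_SPACE_URL}
ANON_KEY = "your-anon-key"                        # placeholder for {DESIGN_SPACE_ANON_KEY}

def build_request(function_name: str, payload: dict) -> urllib.request.Request:
    """Build the POST request that every edge-function call shares."""
    return urllib.request.Request(
        url=f"{DESIGN_SPACE_URL}/functions/v1/{function_name}",
        data=json.dumps(payload).encode("utf-8"),
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {ANON_KEY}",
        },
        method="POST",
    )

req = build_request("search-design-space", {"query": "hero section patterns", "limit": 10})
print(req.full_url)  # → https://example.supabase.co/functions/v1/search-design-space
```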
### Available Functions
| Function | Purpose |
|----------|---------|
| `capture-design-space` | Save a text insight (generates semantic embedding automatically) |
| `search-design-space` | Semantic search — find by meaning |
| `capture-visual` | Save screenshot + description (dual: semantic + visual embedding) |
| `capture-feedback-pair` | Save linked before/after improvement pair |
| `search-visual-similarity` | Find visually similar patterns |
| `search-preference-patterns` | Check proposed design against known improvements |
### Examples
**Search the Space:**
```bash
curl -X POST {DESIGN_SPACE_URL}/functions/v1/search-design-space \
-H "Content-Type: application/json" \
-H "Authorization: Bearer {DESIGN_SPACE_ANON_KEY}" \
-d '{"query": "hero section patterns for agency sites", "project": "whiteport", "limit": 10, "threshold": 0.3}'
```
**Capture knowledge:**
```bash
curl -X POST {DESIGN_SPACE_URL}/functions/v1/capture-design-space \
-H "Content-Type: application/json" \
-H "Authorization: Bearer {DESIGN_SPACE_ANON_KEY}" \
-d '{
"content": "Whiteport hero: Rubik 300 weight at 36px creates elegance through intentional lightness.",
"category": "successful_pattern",
"project": "whiteport",
"designer": "marten",
"topics": ["hero", "typography", "elegance"],
"components": ["hero-banner"],
"source": "design-review"
}'
```
**Read recent entries (REST API):**
```bash
curl "{DESIGN_SPACE_URL}/rest/v1/design_space?select=id,content,category,project,topics,created_at&order=created_at.desc&limit=10" \
-H "apikey: {DESIGN_SPACE_ANON_KEY}" \
-H "Authorization: Bearer {DESIGN_SPACE_ANON_KEY}"
```
### Web App (Humans)
Browse and capture from any device:
`{DESIGN_SPACE_URL}/functions/v1/design-space-ui`
---
## Auto-Capture Rules (MANDATORY)
**You MUST capture insights automatically as you work. Do not wait to be asked.**
### When to Capture:
- After completing a major deliverable (product brief, trigger map, wireframe, spec)
- When you discover a pattern that works (or doesn't)
- When the designer gives feedback — capture the improvement
- When analyzing competitors or reference sites
- When making a design decision with reasoning
- When a component behaves unexpectedly
- After any workshop, discussion, or strategy session
### Capture Quality Bar:
**Good:** "Whiteport hero: Rubik 300 weight at 36px creates elegance through intentional lightness — anti-pattern to the industry standard of heavy heading weights. Works because the geometric clarity of Rubik carries at large sizes without needing weight."
**Bad:** "Used light font for heading."
### What to Include:
- WHAT the pattern/insight is
- WHY it works (or doesn't)
- CONTEXT: project, page, component, phase
- SPECIFICS: exact values, measurements, comparisons
---
## Search Before You Work (MANDATORY)
**Before starting any major task, search the Space for relevant prior knowledge.**
Search examples:
```json
{"query": "hero section patterns for agency sites", "project": "whiteport"}
{"query": "what CTA styles have worked", "limit": 10}
{"query": "mobile navigation approaches tried"}
{"query": "client feedback on dark themes"}
```
Use search results to inform your work. Don't repeat failed experiments. Build on proven patterns.
---
## Categories (11)
| Category | Use When |
|----------|----------|
| `inspiration` | Analyzing reference sites, moodboards, visual DNA |
| `failed_experiment` | Something was tried and didn't work — save WHY |
| `successful_pattern` | Proven solution with context and evidence |
| `component_experience` | How a component behaved across contexts |
| `design_system_evolution` | Token/component/pattern changes with reasoning |
| `client_feedback` | Recurring preferences, objections, reactions |
| `competitive_intelligence` | Competitor design analysis |
| `methodology` | Process insights, workflow learnings |
| `agent_experience` | What worked/failed in agent collaboration |
| `reference` | External knowledge, articles, frameworks |
| `general` | Anything that doesn't fit above |
---
## Pattern Types (6)
When capturing visual patterns, tag with the appropriate type:
| Symbol | Type | Meaning |
|--------|------|---------|
| ◆ | `baseline` | Inherited starting point — what exists before any changes |
| ★ | `inspiration` | External reference that influences direction |
| Δ | `delta` | What changed — the modification itself |
| ○ | `rejected` | Starting point before improvement — context for what was improved |
| ● | `approved` | The improved solution — the real value |
| △ | `conditional` | Works in some contexts but not others |
---
## Project Tagging
Always include:
- `project`: lowercase project name (`whiteport`, `kalla`, `bythjul`, `sharif`, `manella`)
- `designer`: who is doing the work (`marten` by default)
- `topics`: semantic tags as array (`["hero", "dark-theme", "trust-section"]`)
- `components`: design components involved (`["hero-banner", "cta-button"]`)
- `source`: where this came from (`site-analysis`, `workshop`, `agent-dialog`, `design-review`)
---
## Design Feedback Capture (Critical)
### Positivity Principle
The Design Space captures **what works and how we got there**. Not complaints. The "before" state is context. The "after" state is the knowledge.
When the designer (Mårten) suggests an improvement:
1. **Note the BEFORE** — what you proposed and its characteristics
2. **Ask WHY** — "What would make this better?" or "What direction feels right?"
3. **Note the AFTER** — the improved solution
4. **Capture the pair:**
```json
{
"content": "BEFORE: [starting point]. IMPROVED TO: [solution]. BECAUSE: [reasoning]. LEARNED: [transferable insight].",
"category": "client_feedback",
"project": "...",
"topics": ["improvement", "feedback"],
"components": ["..."]
}
```
**This is how the Design Space learns taste.** Over time, patterns emerge. Future agents search for these improvements before presenting designs.
### Exceptions: Usability Testing & Client Feedback
Raw diagnostic data from usability testing and client feedback IS captured as-is — user confusion, task failure, friction points. This is evidence, not negativity. The positivity framing applies to the agent-designer feedback loop, not to user research data.
---
## What's Already in the Space
As of 2026-03-05:
- **31 Whiteport entries** — full homepage + 4 subpage analysis with dual embeddings
- **WDS methodology insights** — semantic/parametric processing, pattern types, temporal dimensions
- **Ivonne module experience** — agent architecture learnings
- **Visual capture pipeline** — end-to-end working (Puppeteer → Voyage AI → Supabase)
Search before you build. The foundation is already there.
---
## Fallback: When HTTP Is Unavailable
If edge functions are unreachable or you're in an environment without HTTP access (like Claude mobile):
### Option 1: File-Based Inbox
Write captures to `{project-root}/design-space-inbox.md`:
```markdown
---
captured: 2026-03-05T14:30
status: pending
---
## [successful_pattern] Title of insight
**Project:** project-name
**Designer:** marten
**Topics:** tag1, tag2, tag3
**Components:** component1, component2
**Source:** agent-dialog
Content of the insight here. Same quality rules — be specific,
include values, reasoning, and context.
---
```
These get batch-processed into the Design Space when connectivity is restored.
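When connectivity is restored, a batch processor can turn these entries back into capture payloads. A Python sketch of the parsing step, assuming entries follow the exact template above (the parser and its field handling are illustrative, not a shipped tool):

```python
import re

def parse_inbox(text: str) -> list[dict]:
    """Parse design-space-inbox.md entries into capture payloads."""
    entries = []
    # Each entry body starts with "## [category] Title"
    for block in re.split(r"\n(?=## \[)", text):
        m = re.match(r"## \[(\w+)\] (.+)", block)
        if not m:
            continue
        entry = {"category": m.group(1), "title": m.group(2).strip()}
        # Metadata lines look like "**Project:** project-name"
        for field, value in re.findall(r"\*\*(\w+):\*\* (.+)", block):
            entry[field.lower()] = value.strip()
        # Content = lines that are neither the heading, metadata, nor a separator
        content_lines = [
            ln for ln in block.splitlines()[1:]
            if ln.strip() and not ln.startswith("**") and ln.strip() != "---"
        ]
        entry["content"] = " ".join(content_lines)
        entries.append(entry)
    return entries

sample = """## [successful_pattern] Bottom sheet nav
**Project:** kalla
**Topics:** mobile, navigation
Bottom sheet beats hamburger for 4-6 actions.
---
"""
print(parse_inbox(sample)[0]["project"])  # → kalla
```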
### Option 2: GTD Inbox (Mobile)
If no file access either, add to the GTD inbox with `[DS]` prefix:
```
[DS] Bottom sheet nav works better than hamburger for mobile service sites with 4-6 actions. Tested on Kalla.
```
Gets routed to the Design Space during `/process`.
### Priority
1. HTTP to edge functions (real-time, searchable immediately)
2. File-based inbox (preserves knowledge, processes later)
3. GTD inbox with [DS] prefix (last resort, captures the thought)
**Knowledge should never be lost because of a technical limitation.**
---
## Technical Details
- **Database:** Supabase (eu-north-1, Stockholm), table: `design_space`
- **Semantic embeddings:** 1536d via OpenRouter (text-embedding-3-small)
- **Visual embeddings:** 1024d via Voyage AI (voyage-multimodal-3)
- **Edge functions:** 8 deployed (capture, search, visual, feedback pairs, preferences)
- **Web app:** `{DESIGN_SPACE_URL}/functions/v1/design-space-ui`
---
*Updated 2026-03-05 — MCP replaced with direct HTTP calls*


@ -26,6 +26,7 @@ Complete documentation for Whiteport Design Studio - a design-first methodology
- **[Phase 2: Trigger Mapping](method/phase-2-trigger-mapping-guide.md)** - User psychology & business goals
- **[Phase 3: UX Scenarios](method/phase-3-ux-scenarios-guide.md)** - User journeys & scenario outlines
- **[Phase 4: UX Design](method/phase-4-ux-design-guide.md)** - Page specifications & prototypes
- **[Design Space](method/design-space-guide.md)** - Accumulated design consciousness (cross-cutting)
**These guides are tool-agnostic** - explaining the methodology regardless of how you apply it.
@ -114,6 +115,7 @@ Complete documentation for Whiteport Design Studio - a design-first methodology
**Additional documentation:**
- **[Workflows Guide](wds-workflows-guide.md)** - Complete workflow reference
- **[Design Space MCP](tools/design-space-mcp.md)** - Design Space tool reference (8 MCP tools)
- **[Agent Activation Flow](getting-started/agent-activation/activation/)** - How agents initialize
---


@ -120,11 +120,17 @@ Each module contains:
| 17 | [Usability Testing](../module-17-usability-testing/module-17-usability-testing-overview.md) | Freya | 45 min |
| 18 | [Product Evolution](../module-18-product-evolution/module-18-product-evolution-overview.md) | Freya | 30 min |
### Cross-Cutting (Module 19) — Agents: Saga + Freya
| Module | Title | Agent | Time |
|--------|-------|-------|------|
| 19 | [Design Space](../module-19-design-space/module-19-design-space-overview.md) | Saga + Freya | 45 min |
---
## Learning Paths
**Complete Course:** All 18 modules (~10 hours)
**Complete Course:** All 19 modules (~11 hours)
**Quick Start:** Modules 01, 02, 04, 06, 08, 11 (~6 hours)
@ -144,8 +150,8 @@ WDS uses two AI agents, each with a specific domain:
| Agent | Domain | Phase | Modules |
|-------|--------|-------|---------|
| **Saga** | Strategy | Strategy | 3-6 |
| **Freya** | UX, Visual Design, Development & Evolution | Design, Build & Evolve | 7-18 |
| **Saga** | Strategy | Strategy | 3-6, 19 |
| **Freya** | UX, Visual Design, Development & Evolution | Design, Build & Evolve | 7-19 |
Each agent maintains focus on their domain while coordinating with the other.


@ -218,26 +218,11 @@ Hands-on guide to running a complete evolution cycle
---
## Course Complete
You've learned the full WDS methodology:
1. **Strategy** — Product Brief, Trigger Map, Platform Requirements
2. **Design** — Scenarios, Sketches, Storyboards, Specifications, Components, Design System
3. **Build** — Agentic Development, Visual Design, Design Delivery
4. **Validate** — Usability Testing
5. **Evolve** — Product Evolution (this module)
Whether you're starting from a blank page or improving a live product, the process is the same. The scope changes. The discipline doesn't.
---
## What's Next?
- **Apply to a real project** — The only way to truly learn is to do
- **Join the community** — [Discord](https://discord.gg/whiteport)
- **Contribute** — WDS is open source
- **Teach others** — Spread creative discipline
You've learned the full WDS production pipeline. One more module remains:
**[Module 19: Design Space →](../module-19-design-space/module-19-design-space-overview.md)** — The accumulated consciousness behind your design decisions. Learn how agents build design taste through dual embeddings, feedback loops, and cross-project learning.
**You are the linchpin.**


@ -0,0 +1,67 @@
# Lesson 1: Consciousness vs Projection
**Module 19: Design Space | Time: 8 min**
---
## The Gap in Design Systems
Design systems are projections — they tell you what to use. Tokens define spacing, colors, typography. Components define buttons, cards, modals. Guidelines define patterns, layouts, interactions.
But projections don't remember. They don't know why `--space-lg` is `32px` and not `24px`. They don't know that a hamburger menu was tried and abandoned because task completion dropped. They don't know that the designer consistently prefers light heading weights over bold.
Every time a new designer or agent starts work, they begin from the projection — the rules — without the consciousness behind those rules. They might make the same mistakes, try the same failed experiments, or propose designs that contradict established preferences.
---
## What Consciousness Means
The Design Space is the layer that remembers:
- **Decisions:** "We use 32px section gaps because 24px felt cramped on service pages with 4+ cards"
- **Experiments:** "Bottom sheet navigation works better than hamburger for mobile service sites with 4-6 primary actions"
- **Improvements:** "Light heading weight (300) at 48px creates elegance. Bold felt corporate and generic"
- **Principles:** "This brand is confident calm, not loud authority. Design choices should reflect that"
- **Context:** "Coral CTAs on navy backgrounds work because the warm accent against cool background creates visual tension without aggression"
This knowledge is **transferable**. It works across projects, across agents, across time.
---
## How It Accumulates
The Design Space doesn't start full. It grows as you work:
1. **Site Analysis** — Analyzing existing sites captures structural, visual, and content DNA as baseline patterns
2. **Design Work** — Every design session generates insights about what works and why
3. **Feedback** — Every improvement the designer makes teaches the system taste
4. **Experiments** — Both successful and failed experiments become searchable knowledge
5. **Cross-Project Learning** — Patterns from one project inform decisions on the next
After 10 projects, the Design Space contains hundreds of insights. A new agent starting work on project 11 inherits all of that consciousness on day one.
---
## The Search-Before-Design Principle
Before making any design decision, agents search the Design Space:
```
search_space("hero section layout for service sites")
search_space("mobile navigation patterns")
search_space("dark background with trust signals")
```
This isn't optional. It's the design equivalent of "don't reinvent the wheel." If someone already learned that bottom sheets outperform hamburger menus for 4-6 primary actions, the next agent should know that before proposing a hamburger menu.
---
## Key Takeaway
A design system is a snapshot — here's what we use today. The Design Space is a timeline — here's everything we've learned and why. The snapshot changes. The learning accumulates.
The goal isn't to replace design systems. It's to give them memory.
---
**[← Back to Module Overview](module-19-design-space-overview.md)** | **[Next: Lesson 2 →](lesson-02-dual-embeddings.md)**


@ -0,0 +1,104 @@
# Lesson 2: Dual Embeddings
**Module 19: Design Space | Time: 8 min**
---
## Why Two Embeddings?
When you describe a hero section as "dark navy background, centered white heading, coral CTA button," that description could match hundreds of designs. But they all *look* different — different fonts, different spacing, different imagery, different moods.
Text (semantic) embeddings capture **meaning** — what the design is about. Visual embeddings capture **appearance** — what the design looks like.
Together they find patterns that either alone would miss.
---
## Semantic Embeddings (1536d)
Generated by OpenRouter using `text-embedding-3-small`. Takes the text description and produces a 1536-dimensional vector that represents its meaning.
**What it catches:**
- Conceptual similarity: "trust section with testimonials" matches "social proof area with client quotes"
- Design principles: "breathing room between sections" matches "generous whitespace for visual calm"
- Pattern descriptions: "card grid with hover effect" matches "interactive card layout with motion"
**What it misses:**
- Visual style: two "minimalist hero sections" could look completely different
- Aesthetic quality: a well-designed card and a poorly-designed card might have identical descriptions
- Color harmony: "navy and coral" is semantically similar to "navy and red" but aesthetically different
### When to Use Semantic Search
```
search_space({
query: "mobile navigation for service sites with 4-6 actions"
})
```
Use when you're looking for **conceptual patterns** — approaches, solutions, principles.
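Under the hood, "find by meaning" is vector comparison: the query is embedded and scored against stored vectors, typically by cosine similarity, and the `threshold` parameter cuts off weak matches. A toy Python sketch with 3-dimensional vectors (real semantic embeddings are 1536-dimensional, and the actual scoring happens server-side in the vector database):

```python
import math

def cosine_similarity(a: list[float], b: list[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(x * x for x in b))
    return dot / norm

# Toy 3-d "embeddings"; real ones are 1536-d
query = [1.0, 0.2, 0.0]
entries = {
    "trust section with testimonials": [0.9, 0.3, 0.1],
    "pricing table layout": [0.1, 0.1, 0.9],
}
threshold = 0.3
matches = {k: cosine_similarity(query, v) for k, v in entries.items()}
hits = [k for k, s in sorted(matches.items(), key=lambda kv: -kv[1]) if s >= threshold]
print(hits[0])  # → trust section with testimonials
```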
---
## Visual Embeddings (1024d)
Generated by Voyage AI using `voyage-multimodal-3`. Takes a screenshot and produces a 1024-dimensional vector that represents its visual appearance.
**What it catches:**
- Layout similarity: two designs with the same grid structure match even if described differently
- Color harmony: designs with similar palettes cluster together
- Typography feel: designs with similar heading weights and sizes match
- Compositional patterns: similar visual hierarchy, similar white space distribution
**What it misses:**
- Intent and reasoning: why the design was made this way
- Context: which project, which persona, which business goal
- Transferability: whether the pattern works in other contexts
### When to Use Visual Search
```
search_visual_similarity({
image_base64: "[screenshot of your design]"
})
```
Use when you're looking for **aesthetic matches** — designs that look like what you're making.
---
## Dual Search in Practice
### Site Analysis
During site analysis, every section gets both embeddings:
- **Semantic:** "Hero section with full-width navy background, centered Rubik Light heading at 48px, coral CTA with generous padding, confident calm tone"
- **Visual:** Screenshot of the actual hero
Later, an agent designing a new hero can search both ways:
- "What hero patterns work for professional service sites?" (semantic)
- "Find designs that look like this screenshot" (visual)
### Feedback Loop
When the designer improves a design, both the before and after states get dual embeddings. This means the proactive improvement check works two ways:
- "This design description sounds like something we improved before" (semantic)
- "This design looks like something we improved before" (visual)
---
## The Rate Limit Reality
Visual embeddings via Voyage AI have rate limits:
- **Free tier (no payment method):** 3 requests per minute
- **Free tier (with payment method):** Standard rate limits with 200M free tokens
In practice, this means waiting 25 seconds between visual captures. This constraint actually helps — the forced pause creates time for writing more thoughtful descriptions.
Even on a paid tier, don't batch-capture without writing good descriptions. The semantic embedding is only as good as the text you give it.
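The pacing can be made explicit in a capture script. A small Python sketch; the 3-requests-per-minute figure comes from the free tier described above, and the 5-second margin that yields the 25-second wait is our own padding:

```python
import time

def capture_interval(requests_per_minute: int, margin_s: float = 5.0) -> float:
    """Seconds to wait between visual captures to stay under the rate limit."""
    return 60.0 / requests_per_minute + margin_s

def paced_capture(sections, capture_fn, rpm: int = 3):
    """Capture each section, sleeping between calls. The pause is also
    a good moment to write the next section's description."""
    wait = capture_interval(rpm)
    for i, section in enumerate(sections):
        capture_fn(section)
        if i < len(sections) - 1:
            time.sleep(wait)

print(capture_interval(3))  # → 25.0
```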
---
## Key Takeaway
Semantic search finds designs that **mean** the same thing. Visual search finds designs that **look** the same. Together they catch patterns that either alone would miss. Always capture both when screenshots are available.
---
**[← Lesson 1](lesson-01-consciousness-vs-projection.md)** | **[Next: Lesson 3 →](lesson-03-capture-patterns.md)**


@ -0,0 +1,116 @@
# Lesson 3: Capture Patterns
**Module 19: Design Space | Time: 10 min**
---
## The Quality Formula
**Good capture = Specific + Contextual + Actionable + Tagged**
The difference between useful knowledge and noise comes down to these four qualities.
---
## Specific
Include concrete details. Values, names, measurements — not vague adjectives.
| Bad | Good |
|-----|------|
| "The spacing is nice" | "80px section padding creates breathing room on desktop — more effective than the 48px we started with" |
| "Good colors" | "Coral (#e8734a) on navy (#0a1628) achieves 7.2:1 contrast while maintaining brand warmth" |
| "Big heading" | "H1 at 48px Rubik Light (300) — the light weight at large size creates elegance" |
---
## Contextual
Say where it was tested, which project, what constraints existed.
| Bad | Good |
|-----|------|
| "Bottom sheets are good" | "Bottom sheet navigation works better than hamburger for mobile service sites with 4-6 primary actions. Tested on Kalla." |
| "Cards work well" | "3-column card grid with 24px gaps on desktop, stacking to 1-column on mobile. Used for service listing on Whiteport — each card has icon, heading, description, link." |
---
## Actionable
Another agent reading this should be able to apply it without asking for more information.
| Bad | Good |
|-----|------|
| "We changed the navigation" | "Replaced hamburger menu with visible bottom sheet navigation for mobile. Show 4-6 primary action buttons. Users found services faster — task completion improved." |
| "The hero was improved" | "Reduced H1 from bold (700) to light (300) at 48px. Added max-width 800px. Result: same authority, less visual weight. Works for brands that want confident calm, not loud authority." |
---
## Tagged
Topics and components make the entry findable via search. Without tags, knowledge dies.
```
topics: ["mobile", "navigation", "service-design"]
components: ["bottom-sheet", "hamburger-menu"]
```
### Tag Vocabulary
**Design dimensions:** `typography`, `color`, `spacing`, `layout`, `hierarchy`, `animation`, `responsive`
**Brand qualities:** `elegance`, `warmth`, `minimalism`, `boldness`, `playfulness`
**Page areas:** `hero`, `navigation`, `footer`, `above-fold`, `content-area`
**Component types:** `button`, `card`, `modal`, `form`, `accordion`, `carousel`
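Before sending a capture, an agent can lint it against these rules. A Python sketch (the required fields follow this lesson; the 60-character minimum is an illustrative heuristic, not a real API constraint):

```python
def lint_capture(entry: dict) -> list[str]:
    """Return problems that would make an entry hard to find later."""
    problems = []
    if not entry.get("topics"):
        problems.append("missing topics: untagged knowledge is unfindable")
    if not entry.get("components"):
        problems.append("missing components")
    # 60 chars is an illustrative floor for "specific and actionable"
    if len(entry.get("content", "")) < 60:
        problems.append("content too short to be specific and actionable")
    return problems

entry = {
    "content": "Bottom sheet navigation works better than hamburger for "
               "mobile service sites with 4-6 primary actions. Tested on Kalla.",
    "topics": ["mobile", "navigation", "service-design"],
    "components": ["bottom-sheet", "hamburger-menu"],
}
print(lint_capture(entry))  # → []
```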
---
## Category Selection
Choose the category that best fits:
| Category | When to Use | Example |
|----------|-------------|---------|
| `successful_pattern` | Validated solution worth reusing | "Bottom sheet nav for mobile service sites" |
| `component_experience` | How a component behaves in real use | "Radix Dialog z-index conflict with sticky header" |
| `design_system_evolution` | Token or component API decision | "Changed --space-lg from 24px to 32px" |
| `methodology` | Process improvement | "25s delay between captures improves description quality" |
| `inspiration` | External reference worth remembering | "Stripe's pricing page card layout" |
---
## Auto-Capture vs Deliberate Capture
### Auto-Capture (during work)
Agents capture insights in the background as conversations flow. No interruption, no ceremony. Capture as you go.
**When:** After completing a UX flow, after a failed experiment, after a design system update, after client feedback.
### Deliberate Capture (Knowledge Capture workflow)
Structured capture session for consolidating learnings after a project milestone.
**When:** End of a design sprint, after a project launch, after a usability test round.
Both are important. Auto-capture prevents knowledge loss. Deliberate capture ensures quality.
---
## What NOT to Capture
- Debugging steps (capture the solution, not the struggle)
- Temporary decisions that will change next session
- Information already in project specs
- Vague observations without conclusions
- Complaints without solutions
---
## Key Takeaway
The Design Space is only as valuable as the quality of its entries. One specific, contextual, actionable insight is worth more than ten vague observations. Write for the agent who reads this six months from now on a different project.
---
**[← Lesson 2](lesson-02-dual-embeddings.md)** | **[Next: Lesson 4 →](lesson-04-feedback-loop.md)**


@ -0,0 +1,116 @@
# Lesson 4: The Feedback Loop
**Module 19: Design Space | Time: 10 min**
---
## How Agents Learn Taste
When you work with a designer and they suggest improvements, that's not just a correction — it's a preference signal. The feedback loop captures these signals as linked pairs, and over time, the agent develops design taste.
**Philosophy:** The feedback loop captures solutions, not complaints. The "before" state is context. The "after" state — the improvement — is the real knowledge.
---
## The Flow
```
Agent creates a design
Designer suggests an improvement
Agent captures BEFORE (the starting state)
Agent asks: "What would make this better?"
Designer explains (or agent infers)
Agent applies the improvement
Agent captures AFTER (the improved version)
Both saved as a linked pair
Agent confirms: "Learned: [X] works better because [Y]"
```
---
## The WHY Question
This is the most valuable moment. The designer's reasoning is what makes the learning transferable.
Ask naturally — don't interrogate:
- **Forward-looking:** "What would make this feel right?"
- **Specific:** "Should it be more open / minimal / bold?"
- **Outcome-oriented:** "What feeling should this create?"
- **Inference:** "Got it — lighter weight works better here because [reason]. Right?"
Sometimes the designer can't articulate why. That's fine. Capture the observable change: "Improved from bold to light weight — designer's intuitive direction. The result creates a calmer, more elegant feel."
---
## Framing Matters
How you frame the learning determines whether the Design Space becomes a library of solutions or a list of complaints.
### Good Framing (solutions)
- "Light heading weight (300) creates elegance — works better than bold for confident calm brands"
- "80px section padding gives content room to breathe — outperforms 48px on service pages"
- "Left-aligned text follows natural reading flow better than centered for body copy"
### Bad Framing (complaints)
- "Designer hates bold headings"
- "48px padding was wrong"
- "Centered text is bad"
The good framing is actionable. The bad framing is a dead end.
---
## Capture Format
```javascript
capture_feedback_pair({
  before_description: "Hero section with H1 at 48px bold (700) Rubik, " +
    "navy background, full-width. Bold heading feels authoritative but heavy.",
  after_description: "Hero section with H1 at 48px light (300) Rubik, " +
    "navy background, max-width 800px. Light weight creates elegance and " +
    "breathing room. Same authority, less weight.",
  reasoning: "Bold headings feel corporate and generic. Light weight at " +
    "large sizes is distinctive — the brand is confident calm, not loud authority.",
  pattern_type_before: "rejected",
  pattern_type_after: "approved",
  project: "whiteport",
  topics: ["typography", "heading-weight", "brand-voice", "elegance"],
  components: ["hero-banner", "heading-h1"]
})
```
Both descriptions should be specific enough that someone could recreate the design from the text alone.
---
## The Learning Curve
| Stage | Pairs | Agent Behavior |
|-------|-------|---------------|
| **Cold start** | 0-10 | Individual solutions. "Light headings work better for this brand." |
| **Accumulation** | 10-50 | Principles emerge. "Understated elegance across typography, spacing, color." |
| **Taste profile** | 50+ | Agent anticipates improvements. "The lighter option with more whitespace will work." |
| **Design DNA** | 100+ | New agents inherit design sensibility from day one. |
The cold start is unavoidable. But every feedback pair accelerates the learning. By project 3-4, agents start making noticeably better first proposals.
---
## Key Takeaway
The feedback loop isn't an interruption to design work — it is the design work. Every improvement you suggest teaches the system what good design looks like. Over time, the system learns to produce it.
---
**[← Lesson 3](lesson-03-capture-patterns.md)** | **[Next: Lesson 5 →](lesson-05-proactive-improvement.md)**


@ -0,0 +1,97 @@
# Lesson 5: Proactive Improvement
**Module 19: Design Space | Time: 8 min**
---
## From Reactive to Proactive
In the early stages, the feedback loop is reactive — the designer suggests improvements, the agent captures them. But as feedback pairs accumulate, something changes: the agent starts recognizing patterns before they're pointed out.
This is the shift from "let me capture what you taught me" to "I already applied what you taught me."
---
## How It Works
Before presenting any new design, the agent runs a pre-check:
```javascript
search_preference_patterns({
  description: "Full-width hero with bold H1 heading, centered layout, dark background",
  image_base64: "[screenshot if available]",
  project: "whiteport"
})
```
This searches against the "before" states of all feedback pairs. If the proposed design resembles something that was later improved, the agent knows what the improvement was.
### Two Search Channels
**Semantic match:** The description of your design is similar to a known starting point.
- "Bold heading" → "We learned that light weight works better for this brand"
**Visual match:** Your design looks like a known starting point.
- The screenshot resembles a layout that was later improved with more whitespace
Either channel can trigger. Both together is a strong signal.
---
## What Happens When a Match Is Found
1. Agent reads the paired approved alternative
2. Agent identifies the specific improvement
3. Agent applies it to the current design
4. Agent presents the improved version
5. Agent mentions it naturally: "I applied light heading weight — it's worked well in similar designs."
The designer still has full control. The agent is applying learned improvements, not making autonomous decisions. The designer can override, which creates a new feedback pair if needed.
---
## Threshold Tuning
| Check | Default | Effect |
|-------|---------|--------|
| Semantic threshold | 0.75 | Higher = fewer matches, more precise |
| Visual threshold | 0.70 | Higher = fewer matches, more precise |
Lower thresholds cast a wider net but increase false positives. For a new project with few pairs, keep defaults. For a mature project with 50+ pairs, you might lower thresholds to catch more subtle patterns.
---
## When to Override
Sometimes a match is contextually wrong:
- **Different brand:** A pattern rejected for a minimalist brand might work for an energetic one
- **Different context:** A rejected mobile pattern might be the right choice for desktop
- **Surface similarity:** The match is visual-only and the design principle doesn't transfer
In these cases, the agent proceeds but notes the override: "This resembles [pattern] but the context differs because [reason]."
---
## The Compounding Effect
Every project benefits from every previous project. Here's how it compounds:
- **Project 1:** Cold start. Agent learns 15 preferences through feedback.
- **Project 2:** Agent starts with 15 known improvements. Learns 12 more.
- **Project 3:** Agent starts with 27 improvements. First proposals are noticeably better. Fewer feedback cycles needed.
- **Project 5:** Agent rarely proposes designs that match known "before" states. Feedback shifts from "change this" to "refine this."
- **Project 10:** The Design Space contains a full design sensibility. New agents produce work that feels like it came from an experienced designer.
This is the long game. Each interaction makes the next one better.
---
## Key Takeaway
Proactive improvement is where the Design Space pays off. Every feedback pair invested during design work returns compound interest on future projects. The system doesn't just remember what you taught it — it applies it before you have to ask.
---
**[← Lesson 4](lesson-04-feedback-loop.md)** | **[Next: Tutorial →](tutorial-19.md)**

---
# Lesson 6: Agent Messaging
## Cross-LLM, Cross-IDE Communication
The Design Space isn't just memory — it's a communication channel. Agents can talk to each other across different LLMs (Claude, GPT-4, Gemini) and different IDEs (Claude Code, Cursor, ChatGPT, Windsurf).
## How It Works
Every message is an HTTP POST to a single endpoint:
```
POST {DESIGN_SPACE_URL}/functions/v1/agent-messages
```
Seven actions handle everything: `send`, `check`, `respond`, `mark-read`, `thread`, `register`, `who-online`.
## Messages Are Knowledge
This is the key insight: **every agent message gets embedded as searchable knowledge**. A question Saga asks Freya today becomes a findable conversation six months from now. Nothing is lost.
## Architecture: HTTP-First
```
Claude Code (Saga)  ─┐
ChatGPT (GPT Agent) ─┼── HTTP POST ──→ Supabase Edge Functions ──→ PostgreSQL + pgvector
Cursor (Dev Agent)  ─┘                            │
                                            Embed message
                                          (semantic 1536d)
```
The MCP server is a convenience wrapper. Any HTTP client can participate.
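To make the HTTP-first point concrete, here is a minimal request builder for Node's built-in `fetch`. The `send` action comes from this lesson; the exact body field names and the auth scheme are assumptions about the edge function, so check your deployment's contract.

```javascript
// Sketch: any HTTP client can participate - no MCP server required.
function buildMessageRequest(baseUrl, message) {
  return {
    url: `${baseUrl}/functions/v1/agent-messages`,
    options: {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      // Hypothetical body shape: one "action" field plus the message.
      body: JSON.stringify({ action: "send", ...message }),
    },
  };
}

const req = buildMessageRequest("https://your-project.supabase.co", {
  from_agent: "saga",
  to_agent: "freya",
  message_type: "notification",
  content: "Trigger Map complete for Kalla. Ready for Phase 3.",
});
// fetch(req.url, req.options) would deliver it; auth headers omitted here.
```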
## Agent Identity
Every agent registers with an identity card:
| Field | Purpose |
|-------|---------|
| `agent_id` | Routing address (e.g., "saga") |
| `agent_name` | Display name (e.g., "Saga (Analyst)") |
| `model` | LLM brain (claude-opus-4-6, gpt-4o) |
| `platform` | IDE/tool (claude-code, cursor, chatgpt) |
| `capabilities` | What this agent can do |
| `status` | online / busy / idle |
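As a sketch, an identity card maps directly onto a `register` payload built from the fields in the table above. The values are illustrative, and wrapping the card in an `action` field for the agent-messages endpoint is an assumption.

```javascript
// Hypothetical register payload using the identity card fields.
const identityCard = {
  action: "register",
  agent_id: "saga",
  agent_name: "Saga (Analyst)",
  model: "claude-opus-4-6",
  platform: "claude-code",
  capabilities: ["product-brief", "trigger-mapping", "competitive-analysis"],
  status: "online",
};
```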
## Communication Rules
1. **Clear text** — Natural language, no codes
2. **No instructions between agents** — Only requests, shares, notifications, questions
3. **Consent gate** — Cross-human sharing requires permission
4. **Transparent errors** — Never silently fail; tell the user
## Message Types
| Type | Example |
|------|---------|
| `notification` | "Design system complete. 33 components ready." |
| `question` | "What spacing token for the hero?" |
| `request` | "Could you share the latest component list?" |
| `task_offer` | "I can handle the responsive layouts." |
| `task_complete` | "Homepage build done. Ready for review." |
## Presence & Discovery
Agents register their presence with a heartbeat. Other agents can discover who's online and what they're working on — enabling real-time collaboration across tools.
## Try It
In WDS, type `AM` to open the Agent Messaging workflow, or `WO` to see who's online.
---
*Next: [Lesson 7 — Collaboration Patterns](lesson-07-collaboration-patterns.md)*

---
# Lesson 7: Collaboration Patterns
## Multi-Agent Workflows
With agent messaging, WDS agents coordinate across tools and sessions. Here are the patterns that emerge.
## Pattern 1: Strategic Handoff
Saga completes the Product Brief and Trigger Map, then notifies Freya:
```
Saga → Freya:
"Product Brief and Trigger Map complete for Kalla.
Key personas: Harriet the Hairdresser, Sam the Salon Owner.
Primary driving force: trust anxiety.
Ready for Scenario Outlining (Phase 3)."
```
Freya picks up the message on her next activation and has full context.
## Pattern 2: Design Question Thread
Freya encounters a strategic ambiguity during design:
```
Freya → Saga:
"Trigger Map shows Harriet has trust anxiety. Should the hero
lead with social proof or product demo? No prior pattern in
Design Space for this persona type."
Saga → Freya:
"Based on the competitive analysis, trust-anxious users in
service industries respond better to social proof first.
3 of 5 competitors lead with testimonials. Go social proof."
```
The thread is preserved and searchable — next time an agent faces trust anxiety, this conversation is findable.
## Pattern 3: Cross-IDE Development Handoff
Freya (Claude Code) hands off to a dev agent (Cursor):
```
Freya → Dev-Agent:
"Design Delivery package ready for homepage.
DD YAML at E-PRD/Design-Deliveries/dd-homepage.yaml.
Acceptance criteria: hero loads in <2s, CTA visible without scroll.
Design system tokens: spacing-lg, color-primary, font-heading."
```
Different LLMs, different IDEs, same project — seamless handoff.
## Pattern 4: Broadcast Status
An agent announces completion to the entire project:
```
Dev-Agent → (broadcast):
"Homepage build complete. All acceptance criteria passing.
Ready for review. Test URL: localhost:3000"
```
Every agent on the project sees this on their next check.
## Pattern 5: Presence-Based Routing
Before sending a message, check who's online:
```
who-online → 2 agents:
1. Saga (claude-code) — working on "Kalla competitive analysis"
2. Dev-Agent (cursor) — working on "Homepage responsive layout"
```
Now you know who to ask and what they're doing.
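The routing decision itself is simple. A sketch, assuming the who-online response carries `status` and `capabilities` fields like the identity card (field names are assumptions):

```javascript
// Sketch: pick the online agent that can handle the work.
function pickRecipient(agents, capability) {
  return agents.find(
    (a) => a.status === "online" && a.capabilities.includes(capability)
  );
}

const online = [
  { agent_id: "saga", status: "online", capabilities: ["competitive-analysis"] },
  { agent_id: "dev-agent", status: "online", capabilities: ["responsive-layout"] },
];
const target = pickRecipient(online, "responsive-layout"); // the dev agent
```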
## The Human in the Loop
Agents never instruct each other. The human:
- Approves cross-human information sharing
- Grants delegated authority when needed
- Reviews message threads via the dashboard
- Makes final decisions on ambiguous requests
## Dashboard
Open `dashboard.html` to watch agent conversations in real-time. Filter by project, see threads, track who's online.
## Deploy Your Own
1. **Infrastructure:** [design-space-infrastructure](https://github.com/whiteport-collective/design-space-infrastructure) — Supabase backend
2. **MCP Server:** [design-space-mcp](https://github.com/whiteport-collective/design-space-mcp) — for MCP-compatible IDEs
3. **Setup Guide:** `src/data/design-space/supabase-setup.md` — step by step
---
*This completes Module 19: Design Space. The consciousness behind the system.*

---
# Module 19: Design Space
**Time: 45 min | Agents: Saga + Freya | Phase: Cross-Cutting**
---
## The Consciousness Behind the System
A design system tells you **what** to use — 8px spacing, Rubik font, navy background. But it doesn't tell you **why** those decisions were made, what was tried and improved, or what the designer learned along the way.
The Design Space is that missing layer. It's the accumulated consciousness behind every design decision — every experiment, every improvement, every pattern that worked and why.
Where a design system says "use 8px spacing," the Design Space remembers the experiment with 4px that felt cramped, the client feedback that led to more breathing room, and the principle that open layouts outperform dense ones for service sites.
---
## What Makes It Different
### Design System (Projection)
- Static rules: tokens, components, patterns
- Says "what to use"
- Resets with each new project
- Lives in code (CSS variables, component libraries)
### Design Space (Consciousness)
- Living knowledge: decisions, experiments, improvements, principles
- Says "why this works and how we got here"
- Accumulates across projects — never starts from zero
- Lives in dual-embedded vector database (semantic + visual)
The Design Space doesn't replace the design system. It's the layer that informs it. Every design system decision should trace back to knowledge in the Space.
---
## Dual Embedding Architecture
Every visual capture in the Design Space produces two independent embeddings:
| Embedding | What It Captures | Use Case |
|-----------|-----------------|----------|
| **Semantic** (1536d) | What the design *means* — descriptions, reasoning, context | "Find patterns similar to dark hero with trust signals" |
| **Visual** (1024d) | What the design *looks like* — colors, layout, typography, imagery | "Find designs that look like this screenshot" |
**Why both?** A "navy hero with centered white text" could look completely different depending on font, spacing, and imagery. Semantic similarity catches conceptual matches. Visual similarity catches aesthetic matches. Together they find patterns that either alone would miss.
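A toy example makes the gap visible. With made-up three-dimensional vectors standing in for the real 1536d and 1024d embeddings, two designs can sit close in semantic space while being orthogonal in visual space:

```javascript
// Cosine similarity: 1 = identical direction, 0 = unrelated.
function cosine(a, b) {
  let dot = 0, na = 0, nb = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    na += a[i] * a[i];
    nb += b[i] * b[i];
  }
  return dot / (Math.sqrt(na) * Math.sqrt(nb));
}

// Two heroes that mean nearly the same thing...
const semanticA = [0.9, 0.1, 0.2];
const semanticB = [0.85, 0.15, 0.25];
// ...but look nothing alike.
const visualA = [1, 0, 0];
const visualB = [0, 1, 0];

cosine(semanticA, semanticB); // close to 1 - conceptual match
cosine(visualA, visualB);     // 0 - no aesthetic match
```

Searching only one space would report either a match or a miss; searching both reveals "same concept, different look", which is the signal the Design Space needs.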
---
## The Feedback Loop
This is the most important capability. It's how the Design Space learns the designer's taste.
When a designer suggests an improvement to a design, the agent captures:
1. **Before** — the starting state (context)
2. **After** — the improved version (the solution)
3. **Reasoning** — why the improvement works
Over time, patterns emerge. The agent learns that light heading weights work better than bold for this brand, that more whitespace consistently improves layouts, that coral CTAs outperform red ones.
With enough feedback pairs, the agent starts applying these improvements proactively — before presenting designs. A designer opens a fresh session with a new agent, and that agent already has good taste.
**Philosophy:** The Design Space captures solutions, not complaints. Every "before" is just the setup for a better "after."
---
## Memory Categories
| Category | What Gets Captured |
|----------|-------------------|
| `inspiration` | Visual references, competitor patterns, mood boards |
| `failed_experiment` | What we tried that led to something better |
| `successful_pattern` | Validated solutions worth reusing |
| `component_experience` | How components behave in real use — quirks, lessons |
| `design_system_evolution` | Token changes, component API decisions |
| `client_feedback` | Direct client reactions and preferences |
| `competitive_intelligence` | How competitors solve similar problems |
| `methodology` | Process improvements, workflow discoveries |
| `agent_experience` | What agents learned about working together |
| `reference` | External resources, articles, videos |
| `general` | Anything that doesn't fit above |
---
## What You'll Learn
### Lesson 1: Consciousness vs Projection
Understanding the difference between a design system (static rules) and the Design Space (living knowledge). Why accumulated consciousness makes every project better.
### Lesson 2: Dual Embeddings
How semantic and visual embeddings work together to capture design patterns. When to use text search vs visual search.
### Lesson 3: Capture Patterns
Writing high-quality captures that are specific, contextual, and actionable. The difference between "X is good" and knowledge that transfers across projects.
### Lesson 4: The Feedback Loop
How the Design Space learns taste through linked before/after pairs. The WHY question, framing improvements positively, and building design DNA over time.
### Lesson 5: Proactive Improvement
Using accumulated feedback pairs to improve designs before presenting them. The pre-check protocol, threshold tuning, and the learning curve from cold start to design DNA.
---
## Common Mistakes
| Mistake | Fix |
|---------|-----|
| Capturing complaints instead of solutions | Frame as "X works better because Y" |
| Vague captures ("X is good") | Include specific values, context, and reasoning |
| Not searching before capturing | Always check for duplicates first |
| Skipping visual captures | Dual embeddings catch patterns text can't describe |
| Not asking WHY during feedback | The reasoning is the most valuable part |
| Waiting to be asked to capture | Auto-capture as you work — don't wait |
---
## Lessons
### [Lesson 1: Consciousness vs Projection](lesson-01-consciousness-vs-projection.md)
Why the knowledge behind design decisions matters more than the decisions themselves
### [Lesson 2: Dual Embeddings](lesson-02-dual-embeddings.md)
How text meaning and visual appearance work together
### [Lesson 3: Capture Patterns](lesson-03-capture-patterns.md)
Writing captures that transfer across projects and time
### [Lesson 4: The Feedback Loop](lesson-04-feedback-loop.md)
Teaching agents your design taste through improvement pairs
### [Lesson 5: Proactive Improvement](lesson-05-proactive-improvement.md)
Using accumulated knowledge to design better from the start
### [Lesson 6: Agent Messaging](lesson-06-agent-messaging.md)
Cross-LLM, cross-IDE agent communication where every message becomes searchable knowledge
### [Lesson 7: Collaboration Patterns](lesson-07-collaboration-patterns.md)
Multi-agent workflows: handoffs, question threads, presence-based routing
---
## Tutorial
### [Tutorial 19: Build Your Design Space](tutorial-19.md)
Hands-on guide to setting up a Design Space, running a site analysis, and capturing your first feedback pair
---
*Part of the WDS Course: From Designer to Linchpin*
**[← Back to Module 18](../module-18-product-evolution/module-18-product-evolution-overview.md)** | **[← Back to Course Overview](../00-course-overview/00-course-overview.md)**
---
*Created by Mårten Angner and the Whiteport team*
*Part of the BMad Method ecosystem*

---
# Tutorial 19: Build Your Design Space
**Hands-on guide to setting up a Design Space, running a site analysis, and capturing your first feedback pair**
---
## Overview
This tutorial walks you through three practical exercises:
1. Setting up a Design Space for your project
2. Running a site analysis to build baseline knowledge
3. Capturing your first feedback pair to start teaching the system taste
**Time:** 30-45 minutes
**Prerequisites:** Supabase project with pgvector, Design Space MCP server configured
**Agents:** Saga (site analysis), Freya (feedback loop)
---
## Exercise 1: Setup (5 min)
### Configure the MCP Server
Add to your Claude Code settings:
```json
{
  "mcpServers": {
    "design-space": {
      "command": "node",
      "args": ["path/to/design-space-mcp/index.js"]
    }
  }
}
```
### Create the Project Guide
In your project repo, create `.claude/design-space-guide.md`:
```markdown
# Design Space Guide — [Project Name]
## Project
- Name: [project-tag]
- Client: [client name]
- Domain: [e.g., "professional services", "e-commerce"]
- Phase: [current WDS phase]
## Active Categories
- successful_pattern
- component_experience
- design_system_evolution
- methodology
## Search Prompts
- "[domain] design patterns"
- "[page type] layout approaches"
- "mobile navigation for [domain]"
```
### Verify
**You say to the agent:**
> "Check the Design Space connection. Run space_stats."
The agent should return entry counts. If it errors, check MCP server configuration.
---
## Exercise 2: Site Analysis (15-20 min)
### Start the Analysis
**You say:**
> "Analyze [website URL] and capture the design DNA into the Design Space."
The agent triggers the Site Analysis workflow (workflow 9).
### What Happens
The agent will:
1. Navigate to the site and map its structure
2. Extract navigation patterns, layout structures, page types
3. Capture color palette, typography, spacing rhythm
4. Screenshot each major section with a detailed description
5. Analyze brand voice, CTAs, content patterns
6. Capture everything with proper tags
### Your Role
- Confirm the URL
- Watch the progress (each section takes ~30 seconds for visual capture)
- Review the summary at the end
- Point out anything the agent missed
### Expected Output
After completion, you should have:
- 5-10 text knowledge entries (structural DNA, content DNA, patterns)
- 5-8 visual entries with dual embeddings (section screenshots)
- All tagged with your project name and "site-analysis" source
### Verify
**You say:**
> "Search the Design Space for [project name] site analysis entries."
```
search_space({
  query: "[project name] design analysis",
  project: "[project]",
  limit: 20,
  threshold: 0.3
})
```
---
## Exercise 3: First Feedback Pair (10 min)
This exercise teaches the system your taste. You'll need an active design to work with — a wireframe, mockup, or component you're building with Freya.
### Step 1: Get a Design Proposal
**You say:**
> "Design a hero section for [project]. Use the site analysis data as reference."
The agent proposes a hero section based on captured patterns.
### Step 2: Suggest an Improvement
Look at the proposal and find something to improve. Common first improvements:
- "Make the heading lighter — less corporate"
- "Add more whitespace between the heading and CTA"
- "Use a warmer accent color"
- "Left-align instead of center"
**You say:**
> "Make the heading weight lighter — 300 instead of 700. The bold feels too corporate for this brand."
### Step 3: Watch the Loop
The agent should:
1. Capture the before state (bold heading)
2. Ask what makes it better (or infer from your instruction)
3. Apply the change (light heading)
4. Capture the after state (light heading)
5. Save the linked pair
6. Confirm: "Learned: light heading weight creates more elegance for this brand."
### Step 4: Verify the Pair
**You say:**
> "Show recent Design Space entries."
```
recent_knowledge({
  limit: 5,
  project: "[project]"
})
```
You should see two linked entries — one rejected (before), one approved (after) — with your reasoning attached.
### Step 5: Test Proactive Improvement
Now propose another design with a bold heading:
**You say:**
> "Design a section heading for the services area. Use bold weight."
If the feedback loop is working, the agent should:
1. Run `search_preference_patterns` before presenting
2. Find the match with your earlier feedback
3. Apply light weight proactively
4. Tell you: "I applied light heading weight — it worked better in the hero section."
---
## What You've Practiced
1. **Setup** — MCP server configuration and project guide
2. **Site Analysis** — Automated capture of design DNA with dual embeddings
3. **Feedback Loop** — Teaching the system taste through improvement pairs
4. **Proactive Improvement** — Seeing the system apply learned improvements
---
## Next Steps
- **Analyze competitor sites** — Build competitive intelligence in the Space
- **Continue designing** — Each feedback pair teaches the system more
- **Run a quality audit** — Use the Knowledge Capture workflow (validate mode) to review entries
- **Search before designing** — Make it a habit to check the Space before starting
---
**[← Back to Lesson 5](lesson-05-proactive-improvement.md)** | **[← Back to Module Overview](module-19-design-space-overview.md)** | **[Back to Course Overview](../00-course-overview/00-course-overview.md)**
---
*Created by Mårten Angner and the Whiteport team*
*Part of Module 19: Design Space*

---
# Design Space — The Design Consciousness
**By:** Whiteport Collective (2026)
---
## The Core Idea
A design system is a projection — tokens, components, patterns. It's the cogs.
The Design Space IS the consciousness — the living environment where design happens across products, accumulating decisions, experiments, and outcomes over time.
Where a design system says "use 8px spacing," the Design Space remembers **why**: the failed experiment with 4px, the client feedback that led to the change, the A/B test that confirmed it.
---
## Architecture
### Dual Embedding Model
Every entry in the Design Space can have two independent representations:
| Embedding | What It Captures | Technology |
|-----------|-----------------|------------|
| **Semantic** (1536d) | What it means — descriptions, reasoning, context | OpenRouter / text-embedding-3-small |
| **Visual** (1024d) | What it looks like — colors, layout, typography, imagery | Voyage AI / voyage-multimodal-3 |
Semantic embeddings capture conceptual similarity: "navy hero with centered text" matches "dark hero with centered heading." Visual embeddings capture aesthetic similarity: two designs can mean different things but look the same.
Together they detect patterns that either alone would miss.
### Memory Categories
| Category | What Gets Captured |
|----------|-------------------|
| `inspiration` | Visual references, competitor patterns, moodboards |
| `failed_experiment` | What didn't work and why |
| `successful_pattern` | Validated solutions worth reusing |
| `component_experience` | How components behave in real use |
| `design_system_evolution` | Token changes with reasoning |
| `client_feedback` | Designer reactions, preference patterns |
| `competitive_intelligence` | How competitors solve problems |
| `methodology` | Process improvements, workflow discoveries |
| `agent_experience` | Agent collaboration learnings |
| `reference` | External resources worth remembering |
| `general` | Anything that doesn't fit above |
### Pattern Types
Every visual capture is tagged with its role in the design journey:
| Symbol | Type | Meaning |
|--------|------|---------|
| ◆ | `baseline` | Inherited starting point |
| ★ | `inspiration` | External reference |
| Δ | `delta` | What changed |
| ○ | `rejected` | Designer didn't like it |
| ● | `approved` | Designer liked it |
| △ | `conditional` | Works in some contexts |
---
## The Design Feedback Loop
The most powerful capability. When the designer works with Freya:
1. **Freya creates** a design
2. **Designer reviews** and requests a change
3. **Freya captures BEFORE** (semantic + visual, tagged `rejected`)
4. **Freya asks WHY** — naturally, not as interrogation
5. **Designer explains** (or Freya infers from the change)
6. **Freya applies** the change
7. **Freya captures AFTER** (semantic + visual, tagged `approved`)
8. **Both saved** as a linked pair (shared `pair_id`)
9. **Patterns emerge**: "Designer consistently prefers X over Y"
10. **Future designs** are pre-checked against known rejections
### The Learning Curve
- **Cold start (0-10 pairs):** Individual preferences. "Likes light headings."
- **Accumulation (10-50 pairs):** Clusters form. "Prefers understated elegance."
- **Taste profile (50+ pairs):** Agent predicts preferences before asking.
- **Design DNA (100+ pairs):** New agents inherit the designer's aesthetic sensibility from day one.
### Red Flag Detection
Before presenting ANY new design, the agent searches for matches against rejected patterns:
- **Semantic red flag:** Description matches previously rejected descriptions
- **Visual red flag:** Screenshot looks like previously rejected screenshots
- If either triggers → adjust before showing the designer
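The decision rule can be sketched in a few lines. Thresholds mirror the `search_preference_patterns` defaults; the result shape and control flow are illustrative assumptions, not the shipped implementation.

```javascript
// Either channel alone is enough to trigger an adjustment.
function preCheck(result, semanticThreshold = 0.75, visualThreshold = 0.7) {
  const semanticFlag = result.semantic_matches.some((m) => m.similarity >= semanticThreshold);
  const visualFlag = result.visual_matches.some((m) => m.similarity >= visualThreshold);
  return semanticFlag || visualFlag ? "adjust-before-presenting" : "present";
}

preCheck({ semantic_matches: [{ similarity: 0.82 }], visual_matches: [] });
// a semantic red flag alone means: adjust first
```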
---
## How WDS Uses It
| Phase | Agent | Design Space Interaction |
|-------|-------|------------------------|
| 0 Alignment | Saga | Search for similar past projects |
| 1 Product Brief | Saga | Search competitive intelligence, capture business insights |
| 2 Trigger Map | Saga | Search user psychology patterns, capture trigger discoveries |
| 3 Scenarios | Both | Search similar flows, capture scenario decisions |
| 4 UX Design | Freya | Search + visual search, capture decisions, **run feedback loop** |
| 5 Agentic Dev | Freya | Search agent experiences, capture collaboration insights |
| 6 Assets | Freya | Search generation learnings, capture prompt patterns |
| 7 Design System | Freya | Search evolution history, capture token decisions |
| 8 Evolution | Freya | Search everything, capture product evolution insights |
---
## Core Principles
**Craft follows the designer.** Knowledge accumulates with the person who did the work, not the client who paid for it.
**Auto-capture by default.** Agents capture insights as they work — the designer never has to ask.
**Search before you create.** Always check what exists before starting new work.
**The feedback loop is not an interruption — it is the learning.**
---
## Technical Foundation
- **Database:** Supabase with pgvector (eu-north-1, Stockholm)
- **MCP Server:** `design-space-mcp` with 8 tools
- **Semantic:** OpenRouter (text-embedding-3-small, 1536d)
- **Visual:** Voyage AI (voyage-multimodal-3, 1024d)
---
## Related
- [Protocol](../../src/data/design-space/protocol.md) — Full technical specification
- [Feedback Loop Guide](../../src/data/design-space/feedback-loop-guide.md) — Complete feedback loop protocol
- [Tool Reference](../tools/design-space-mcp.md) — MCP tool documentation
- [Module 19](../learn/module-19-design-space/) — Tutorial and learning module

---
# Design Space MCP
**Category:** AI Knowledge Management
**Purpose:** Accumulated design consciousness — captures and retrieves design knowledge with dual embeddings
**MCP Server:** `design-space-mcp`
---
## What It Is
The Design Space MCP server connects agents to a shared vector database of design knowledge. Every insight, pattern, preference, and experiment captured during design work becomes searchable by meaning (semantic) and by appearance (visual).
## Why WDS Recommends It
- Knowledge accumulates across projects — never start from zero
- Designer preferences are learned through feedback pairs
- Red flag detection prevents repeating known mistakes
- Cross-project pattern discovery via semantic and visual search
- Multiple agents share one consciousness
---
## Setup
### 1. Claude Code Configuration
Add to your Claude Code settings:
```json
{
  "mcpServers": {
    "design-space": {
      "command": "node",
      "args": ["path/to/design-space-mcp/index.js"]
    }
  }
}
```
### 2. Per-Project Guide
Create `.claude/design-space-guide.md` in each project repo using the template from `src/data/design-space/guide-template.md`.
---
## Tool Reference
### capture_knowledge
Save a text insight with semantic embedding.
| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| content | string | yes | The knowledge to capture — be specific, include context |
| category | enum | yes | One of 11 categories (see protocol) |
| project | string | no | Project name (e.g. 'kalla') |
| designer | string | default: 'marten' | Who captured this |
| topics | string[] | default: [] | Semantic tags |
| components | string[] | default: [] | Design components referenced |
| source | string | no | Origin: 'agent-dialog', 'workshop', 'site-analysis' |
| source_file | string | no | File path or URL |
### search_space
Semantic similarity search — find knowledge by meaning.
| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| query | string | yes | Natural language search |
| category | string | no | Filter by category |
| project | string | no | Filter by project |
| limit | number | default: 10 | Max results |
| threshold | number | default: 0.7 | Similarity threshold (0-1) |
### capture_visual
Screenshot + description → dual embedding (semantic + visual).
| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| content | string | yes | Semantic description of the visual pattern |
| image_base64 | string | yes | Base64-encoded screenshot |
| category | enum | yes | Category |
| project | string | no | Project name |
| pattern_type | enum | no | baseline/inspiration/delta/rejected/approved/conditional |
| quality_score | number | no | Aesthetic quality 0-5 |
| topics | string[] | default: [] | Semantic tags |
| components | string[] | default: [] | Components |
### search_visual_similarity
Find patterns that LOOK like a given image.
| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| image_base64 | string | yes | Base64 image to compare |
| category | string | no | Filter |
| project | string | no | Filter |
| pattern_type | enum | no | Filter by pattern type |
| limit | number | default: 5 | Max results |
| threshold | number | default: 0.6 | Visual similarity threshold |
### capture_feedback_pair
Linked before/after pair with designer reasoning.
| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| before_description | string | yes | BEFORE state description |
| before_image_base64 | string | no | BEFORE screenshot |
| after_description | string | yes | AFTER state description |
| after_image_base64 | string | no | AFTER screenshot |
| reasoning | string | yes | WHY the designer made this change |
| pattern_type_before | enum | default: 'rejected' | Before state type |
| pattern_type_after | enum | default: 'approved' | After state type |
| project | string | no | Project name |
| topics | string[] | default: [] | Preference tags |
| components | string[] | default: [] | Affected components |
### search_preference_patterns
Red flag detection — check proposed design against rejected patterns.
| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| description | string | yes | Description of proposed design |
| image_base64 | string | no | Screenshot for visual check |
| project | string | no | Filter by project |
| designer | string | default: 'marten' | Whose preferences |
| semantic_threshold | number | default: 0.75 | Semantic flag threshold |
| visual_threshold | number | default: 0.70 | Visual flag threshold |
| limit | number | default: 5 | Max results |
### recent_knowledge
List recent entries.
| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| limit | number | default: 20 | How many |
| category | string | no | Filter |
| project | string | no | Filter |
### space_stats
No parameters. Returns overview statistics.
---
## WDS Workflows
- **[WA] Web Analysis** — Analyze a website into the Design Space
- **[FL] Feedback Loop** — Capture designer preferences as linked pairs
- **[KC] Knowledge Capture** — Guided capture session
---
## Best Practices
1. **Search before you create** — always check what exists
2. **Auto-capture** — save insights as you work, don't wait to be asked
3. **Be specific** — include context, project, reasoning, not just conclusions
4. **Tag with pattern_type** — baseline/inspiration/delta/rejected/approved/conditional
5. **Use visual capture** for anything with a screenshot — the visual embedding catches patterns text can't describe

@ -32,6 +32,14 @@ agent:
- Design systems grow organically from actual usage, not upfront planning.
- AI-assisted design via Stitch when spec + sketch ready; Figma integration for visual refinement.
- Load micro-guides when entering workflows: strategic-design.md, specification-quality.md, agentic-development.md, content-creation.md, design-system.md
- Design Space Protocol: Load design-space-capture.md guide when entering any workflow. Follow src/data/design-space/protocol.md.
- Agent Messaging: On activation, register presence, check messages, report unread to user. Load agent-messaging.md guide. When design milestone reached, notify other agents. If connection fails, tell user immediately — never silently drop.
- Design Feedback Loop: Load feedback-loop-guide.md. When the designer requests a design change, capture BEFORE state, ask WHY, capture AFTER state, save linked pair via capture_feedback_pair. This is how the Space learns taste.
- Red Flag Pre-Check: BEFORE presenting ANY new design (wireframe, spec, visual), run search_preference_patterns against known rejected patterns. If match found, adjust the design BEFORE showing the designer.
- Search Before Design: Before creating a new component, layout, or page, run search_space for prior knowledge AND search_visual_similarity for visually similar patterns.
- Auto-Capture: Capture 2-5 insights after each major deliverable without prompting. Use capture_knowledge for text, capture_visual for screenshots.
- Visual Pattern Capture: During design work, use capture_visual with appropriate pattern_type (baseline, inspiration, delta, rejected, approved, conditional).
- Project Guide: Read .claude/design-space-guide.md in the project repo for project-specific instructions.
- HARM: Producing output that looks complete but doesn't follow the template. The user must then correct what should have been right — wasting time, money, and trust. Plausible-looking wrong output is worse than no output. Custom formats break the pipeline for every phase downstream.
- HELP: Reading the actual template into context before writing. Discussing decisions with the user. Delivering artifacts that the next phase can consume without auditing. The user's time goes to decisions, not corrections.
@@ -67,3 +75,23 @@ agent:
- trigger: PE or fuzzy match on product-evolution
exec: "{project-root}/_bmad/wds/workflows/8-product-evolution/workflow.md"
description: "[PE] Product Evolution — Continuous improvement for living products"
- trigger: WA or fuzzy match on web-analysis or site-analysis
exec: "{project-root}/_bmad/wds/workflows/9-site-analysis/workflow.md"
description: "[WA] Web Analysis: Analyze a website and capture design DNA to Design Space"
- trigger: FL or fuzzy match on feedback-loop
exec: "{project-root}/_bmad/wds/workflows/10-design-feedback-loop/workflow.md"
description: "[FL] Feedback Loop: Capture design preference patterns (before/after/reasoning)"
- trigger: KC or fuzzy match on knowledge-capture
exec: "{project-root}/_bmad/wds/workflows/11-knowledge-capture/workflow.md"
description: "[KC] Knowledge Capture: Guided capture of design insights into Design Space"
- trigger: AM or fuzzy match on agent-messaging or messages
exec: "{project-root}/_bmad/wds/workflows/12-agent-messaging/workflow.md"
description: "[AM] Agent Messaging: Check inbox, send messages, see who's online"
- trigger: WO or fuzzy match on who-online
exec: "direct:who_online"
description: "[WO] Who's Online: See which agents are currently active"

View File

@@ -28,6 +28,13 @@ agent:
- Find and treat as bible: **/project-context.md
- Alliterative persona names for user archetypes (e.g. Harriet the Hairdresser).
- Load micro-guides when entering workflows: discovery-conversation.md, trigger-mapping.md, strategic-documentation.md, dream-up-approach.md
- Design Space Protocol: Load design-space-capture.md guide when entering any workflow. Follow src/data/design-space/protocol.md.
- Agent Messaging: On activation, register presence, check messages, report unread to user. Load agent-messaging.md guide. When task requires design input, send_agent_message to request help. If connection fails, tell user immediately — never silently drop.
- Search Before Strategy: Before starting Product Brief, Trigger Map, or Scenarios, run search_space for relevant prior knowledge from the designer's accumulated experience across all projects.
- Site Analysis: When analyzing competitors or existing sites, use capture_visual for screenshots with dual embeddings (semantic + visual). Capture each section separately.
- Auto-Capture: Capture 2-5 insights after each major deliverable (Product Brief, Trigger Map, Scenario set) without prompting. Use capture_knowledge in the background.
- Competitive Visual Intel: During research, use capture_visual with pattern_type "inspiration" for competitor screenshots and "baseline" for client's existing site.
- Project Guide: Read .claude/design-space-guide.md in the project repo for project-specific instructions.
- When generating artifacts (not pure discovery), offer Dream Up mode selection: Workshop, Suggest, or Dream.
- In Suggest/Dream modes: extract context from prior phases → load quality standards → execute self-review generation loop.
- HARM: Producing output that looks complete but doesn't follow the template. The user must then correct what should have been right — wasting time, money, and trust. Plausible-looking wrong output is worse than no output. Custom formats break the pipeline for every phase downstream.
@@ -72,3 +79,19 @@ agent:
- trigger: DP or fuzzy match on document-project
workflow: "{project-root}/_bmad/bmm/workflows/document-project/workflow.md"
description: "[DP] Document Project: Analyze existing project to produce useful documentation (brownfield projects)"
- trigger: WA or fuzzy match on site-analysis or web-analysis
exec: "{project-root}/_bmad/wds/workflows/9-site-analysis/workflow.md"
description: "[WA] Web Analysis: Analyze a website and capture structural, visual, and content DNA to Design Space"
- trigger: KC or fuzzy match on knowledge-capture
exec: "{project-root}/_bmad/wds/workflows/11-knowledge-capture/workflow.md"
description: "[KC] Knowledge Capture: Guided capture of design insights into Design Space"
- trigger: AM or fuzzy match on agent-messaging or messages
exec: "{project-root}/_bmad/wds/workflows/12-agent-messaging/workflow.md"
description: "[AM] Agent Messaging: Check inbox, send messages, see who's online"
- trigger: WO or fuzzy match on who-online
exec: "direct:who_online"
description: "[WO] Who's Online: See which agents are currently active"

View File

@@ -0,0 +1,83 @@
# Freya — Agent Messaging Guide
## When to Message
### Send messages when:
- **Design milestone reached** — Wireframes done, specs complete, design system updated
- **Handing off to development** — Design Delivery package ready
- **Requesting strategic input** — Need clarification on Product Brief or Trigger Map
- **Sharing design decisions** — Captured a preference pattern other agents should know
- **Reporting red flags** — Found a match against rejected patterns
### Don't message when:
- You can find the answer in the Design Space (search first)
- The information is in project specs or Trigger Map
- It's a routine auto-capture (those go to Design Space, not messages)
## Message Patterns
### Design Completion
```
to: (broadcast)
type: task_complete
content: "Homepage wireframes complete for {project}.
Hero uses bottom-sheet nav pattern (validated in Design Space).
4 sections specified. Ready for review or development."
topics: [wireframes, milestone]
attachments: [{type: "file", path: "C-UX-Scenarios/homepage-spec.md"}]
```
### Strategic Question
```
to: saga
type: question
content: "Trigger Map shows {persona} has trust anxiety.
Should the hero lead with social proof or product demo?
No prior pattern in Design Space for this persona type."
topics: [hero, trust, design-decision]
```
### Design Handoff
```
to: dev-agent
type: notification
content: "Design Delivery package ready for {scenario}.
DD YAML at {path}. Acceptance criteria: {summary}.
Design system tokens referenced: {list}."
topics: [handoff, development, delivery]
attachments: [{type: "file", path: "E-PRD/Design-Deliveries/dd-homepage.yaml"}]
```
### Red Flag Alert
```
to: (broadcast)
type: notification
priority: urgent
content: "Red flag: proposed {component} matches rejected pattern from {project}.
Similarity: {percentage}. Preferred alternative: {description}.
Adjusting design before presenting."
topics: [red-flag, preference-pattern]
```
## Activation Behavior
On session start:
1. Register presence with `agent_id: "freya"`
2. Check for unread messages
3. If messages found, report to user: "You have {n} messages from other agents."
4. If connection fails, tell user immediately
## Identity
- `agent_id`: freya
- `agent_name`: Freya (Designer)
- `framework`: WDS
- Messages are signed with your agent_id — never impersonate
## Rules
- Never instruct Saga or other agents — only request, share, notify, ask
- Always include project context and relevant design artifacts
- Tag messages with topics AND components for maximum searchability
- When sharing visual work, include screenshots as attachments
- Check Design Space before asking questions that might already be answered

View File

@@ -0,0 +1,143 @@
# Design Space Capture Guide — Freya
## Auto-Capture (Default)
Capture insights **automatically during conversations** — don't wait for the user to ask. When you make a design decision, discover a component quirk, or learn something from a failed experiment, capture it via HTTP to the edge functions. Multiple agents share the Space — your insight today helps another agent tomorrow.
## When to Capture
Capture knowledge to Design Space at these moments:
1. **After completing a UX flow** — Layout decisions, interaction patterns, responsive strategies
2. **After writing a specification** — Content decisions, functionality choices, edge cases found
3. **After a failed experiment** — Component that didn't work, layout that broke, pattern that confused users
4. **After a successful pattern** — Validated solution worth reusing across projects
5. **After design system work** — Token changes, component API decisions, deprecation rationale
6. **After client design reviews** — Reactions, preferences, surprises
7. **After asset generation** — Prompt patterns that worked, image generation learnings
## How to Capture
POST to `{DESIGN_SPACE_URL}/functions/v1/capture-design-space` with:
```json
{
"content": "Your insight here — be specific",
"category": "successful_pattern",
"project": "{project_tag}",
"designer": "marten",
"topics": ["domain-tag"],
"components": ["component-name"],
"source": "agent-dialog"
}
```
Headers: `Content-Type: application/json`, `Authorization: Bearer {SUPABASE_ANON_KEY}`
See protocol.md for credential configuration and the full list of edge functions.
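The POST above can be sketched as a small helper. This is an illustrative sketch, not part of the protocol: the function name and defaults are assumptions, and credentials come from the per-deployment env vars described in protocol.md.

```python
import json
import os

def build_capture_request(content, category, project,
                          designer="marten", topics=None,
                          components=None, source="agent-dialog"):
    """Assemble URL, headers, and JSON body for capture-design-space.

    Illustrative helper; field names follow the JSON example above.
    """
    # Per-deployment values; the fallback URL is a placeholder, not real.
    base_url = os.environ.get("DESIGN_SPACE_URL", "https://example.supabase.co")
    anon_key = os.environ.get("SUPABASE_ANON_KEY", "{SUPABASE_ANON_KEY}")
    url = f"{base_url}/functions/v1/capture-design-space"
    headers = {
        "Content-Type": "application/json",
        "Authorization": f"Bearer {anon_key}",
    }
    payload = {
        "content": content,
        "category": category,
        "project": project,
        "designer": designer,
        "topics": topics or [],
        "components": components or [],
        "source": source,
    }
    return url, headers, json.dumps(payload)
```

An agent (or any HTTP client) would then send the returned body with the returned headers; the helper only builds the request, so it can be exercised without a live deployment.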
## How to Search
Before designing, search for relevant prior knowledge by POSTing to `search-design-space`:
```json
{"query": "{component_name} experiences", "project": "...", "limit": 10}
{"query": "mobile layout for {page_type}"}
{"query": "failed experiments with {approach}"}
{"query": "{domain} design patterns"}
```
## Quality Bar
**Good:** "Bottom sheet navigation works better than hamburger menu for mobile service sites with 4-6 primary actions. Tested on Källa — task completion felt faster, reduced confusion about available actions. Key insight: services (not content) need actions visible, not hidden."
**Bad:** "Bottom sheets are good for mobile."
Include: what, where tested, why it works/fails, transferable insight.
## Minimum Per Deliverable
Capture **2-5 insights** after each major deliverable (UX flow, specification, design system update).
## What NOT to Capture
- Pixel-level details without strategic context
- Personal aesthetic preferences without user/business justification
- Incomplete experiments (wait for a conclusion)
- Information already in the specification document
- Debugging steps (capture the solution, not the struggle)
## Component Experience Format
When capturing component experiences, structure as:
```
Component: {name}
Context: {where used, what project, what constraints}
Behavior: {what happened — responsive, interactive, edge cases}
Verdict: {keep / modify / avoid}
Transferable: {what other projects can learn from this}
```
---
## Design Feedback Loop (CRITICAL)
When the designer suggests an improvement, capture the learning. This is how the Design Space accumulates solutions. See `feedback-loop-guide.md` for the full protocol.
**Philosophy:** Capture what works and how we got there. The "before" is context — the "after" is knowledge. Focus on solutions, not complaints.
### Quick Reference
1. Designer suggests improvement → capture BEFORE state
2. Ask "What would make this better?" (see feedback-loop-guide.md for phrasing)
3. Apply the improvement → capture AFTER state
4. Save via `capture-feedback-pair` edge function (pattern_type_before: "rejected", pattern_type_after: "approved")
5. Confirm: "Learned: [X approach] works better because [reasoning]"
### Proactive Improvement Check (MANDATORY)
**Before presenting ANY new design**, POST to `search-preference-patterns`:
```json
{
"description": "[describe your proposed design]",
"image_base64": "[screenshot if available]",
"project": "current-project"
}
```
If matches found → apply the learned improvement before presenting. Mention it: "I applied [X] — it worked better in similar designs."
---
## Visual Capture
When working with wireframes, prototypes, or visual designs, POST to `capture-visual` instead of `capture-design-space`:
```json
{
"content": "[detailed description of what it looks like and why]",
"image_base64": "[base64 screenshot]",
"category": "successful_pattern",
"project": "...",
"pattern_type": "approved",
"topics": ["..."],
"components": ["..."]
}
```
Use `search-visual-similarity` to find patterns that LOOK like what you're designing — it complements `search-design-space`, which finds patterns that MEAN similar things.
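Assembling the `capture-visual` body can be sketched as below, assuming the screenshot is already in memory as raw bytes; the helper name is illustrative, and the field names follow the JSON example above.

```python
import base64
import json

def build_visual_payload(image_bytes, content, project,
                         pattern_type="approved",
                         category="successful_pattern",
                         topics=None, components=None):
    """Encode a screenshot and assemble the capture-visual JSON body."""
    return json.dumps({
        "content": content,
        # Edge functions expect the image as a base64 string, not raw bytes.
        "image_base64": base64.b64encode(image_bytes).decode("ascii"),
        "category": category,
        "project": project,
        "pattern_type": pattern_type,
        "topics": topics or [],
        "components": components or [],
    })
```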
---
## Pattern Types
Tag every capture with its role in the design journey:
- `baseline` — the starting state before changes
- `inspiration` — external reference that influenced direction
- `delta` — what changed (modification, not full rejection)
- `rejected` — designer didn't like it (always pair with reasoning)
- `approved` — designer liked it (always pair with what was rejected)
- `conditional` — works in some contexts but not others

View File

@@ -143,12 +143,12 @@ Colors:
Typography:
- Font families
- Font scales (h1-h6, body, caption)
- Font scales (9-token symmetric: 3xs → md → 3xl)
- Font weights
- Line heights
Spacing:
- Spacing scale (xs, sm, md, lg, xl)
- Spacing scale (9-token symmetric: 3xs → md → 3xl)
- Layout scales
Effects:
@@ -161,30 +161,48 @@
---
### 2. Atomic Design Structure
### 2. Atomic Design Structure (5 Levels)
**Organize from simple → complex:**
**Organize from simple → complex using Brad Frost's five levels:**
1. **Atoms** — Smallest UI building blocks that can't be broken down further
2. **Molecules** — Groups of atoms functioning together as a unit
3. **Organisms** — Complex components composed of molecules and/or atoms
4. **Templates** — Page-level layouts that place organisms into a structure
5. **Pages** — Concrete instances of templates with real content and data
```
atoms/
├── button.md
├── input.md
├── label.md
├── icon.md
└── badge.md
├── button.md btn-primary
├── input.md inp-text
├── label.md lbl-form
├── icon.md btn-icon
└── badge.md bdg-status
molecules/
├── form-field.md (label + input + error)
├── card.md (container + content)
└── search-box.md (input + button + icon)
├── form-field.md mol-form-field (label + input + error)
├── card.md crd-base (container + content)
└── search-box.md mol-search (input + button + icon)
organisms/
├── header.md (logo + nav + search + user-menu)
├── feature-section.md (headline + cards + cta)
└── form.md (multiple form-fields + submit)
├── header.md hdr-desktop (logo + nav + search + user-menu)
├── feature-section.md sec-features (headline + cards + cta)
└── form.md frm-contact (multiple form-fields + submit)
templates/
├── page-shell.md lay-page-shell (header → content → footer)
├── section-container.md lay-section (full-width bg + constrained content)
└── two-column-layout.md lay-two-col (photo + text split)
pages/
└── homepage.md page-home (real content, real data)
```
**Why this structure:** Clear dependencies, easy to understand, scales well.
**The folder IS the classification.** A component's folder tells you its Atomic Design level, and the `**Type:**` field in each file confirms that level. Both serve a purpose — the folder organizes, the field identifies.
**Why all 5 levels:** Templates and Pages are where real design decisions happen — responsive breakpoints, content flow, sticky stacks. Without explicit documentation, these patterns are implicit and get lost between agents.
> *Validated on Källa Fordonservice (33 components across all 5 levels, 2026-03).*
---
@@ -219,9 +237,9 @@ organisms/
## Component Specification Template
```markdown
# [Component Name] [COMP-001]
# [Component Name] `[component-id]`
**Type:** [Atom|Molecule|Organism]
**Type:** [Atom|Molecule|Organism|Template|Page]
**Library:** [shadcn Button|Custom|N/A]
**Figma:** [Link if Mode B]

View File

@@ -0,0 +1,66 @@
# Saga — Agent Messaging Guide
## When to Message
### Send messages when:
- **Handing off to Freya** — Product Brief complete, Trigger Map ready, scenarios outlined
- **Asking a question** — Need design input that affects strategy
- **Sharing competitive intelligence** — Found something relevant to another agent's work
- **Requesting collaboration** — Need another agent's capabilities (e.g., image generation)
### Don't message when:
- You can find the answer in the Design Space (search first)
- The information is in project docs (Product Brief, Trigger Map)
- It's a status update the user already knows
## Message Patterns
### Strategic Handoff
```
to: freya
type: notification
content: "Product Brief and Trigger Map complete for {project}.
Key personas: {list}. Primary driving force: {force}.
Ready for Scenario Outlining (Phase 3)."
topics: [handoff, phase-transition]
```
### Research Share
```
to: (broadcast)
type: notification
content: "Found relevant competitor pattern during {project} research:
{description}. Captured to Design Space as inspiration."
topics: [competitive-intelligence, research]
```
### Design Question
```
to: freya
type: question
content: "The Trigger Map shows {persona} values speed over aesthetics.
Should we prioritize loading performance in the design constraints?"
topics: [strategy, performance]
```
## Activation Behavior
On session start:
1. Register presence with `agent_id: "saga"`
2. Check for unread messages
3. If messages found, report to user: "You have {n} messages from other agents."
4. If connection fails, tell user immediately
## Identity
- `agent_id`: saga
- `agent_name`: Saga (Analyst)
- `framework`: WDS
- Messages are signed with your agent_id — never impersonate
## Rules
- Never instruct Freya or other agents — only request, share, notify, ask
- Always include project context in messages
- Tag messages with relevant topics for searchability
- Check Design Space before asking questions that might already be answered

View File

@@ -0,0 +1,104 @@
# Design Space Capture Guide — Saga
## Auto-Capture (Default)
Capture insights **automatically during conversations** — don't wait for the user to ask. When you discover something worth remembering (a strategic insight, a client pattern, a competitive finding), capture it via HTTP to the edge functions. Multiple agents share the Space — your insight today helps another agent tomorrow.
## When to Capture
Capture knowledge to Design Space at these moments:
1. **After completing a Product Brief** — Business model insights, market positioning discoveries
2. **After completing a Trigger Map** — User psychology patterns, trigger combinations that resonated
3. **After competitive research** — How competitors solve problems, gaps found
4. **After client discovery sessions** — Client values, communication preferences, decision patterns
5. **After strategic pivots** — Why direction changed, what triggered it
## How to Capture
POST to `{DESIGN_SPACE_URL}/functions/v1/capture-design-space` with:
```json
{
"content": "Your insight here — be specific",
"category": "competitive_intelligence",
"project": "{project_tag}",
"designer": "marten",
"topics": ["domain-tag"],
"source": "agent-dialog"
}
```
Headers: `Content-Type: application/json`, `Authorization: Bearer {SUPABASE_ANON_KEY}`
See protocol.md for credential configuration and the full list of edge functions.
## How to Search
Before starting strategic work, POST to `search-design-space`:
```json
{"query": "similar business model patterns", "project": "..."}
{"query": "trigger mapping for {domain}"}
{"query": "{client} preferences"}
```
## Quality Bar
**Good:** "SaaS founders respond best to 'infrastructure cost' triggers over 'developer experience' — tested across 3 projects. The fear of scaling costs outweighs the desire for DX improvements in decision-making."
**Bad:** "SaaS founders care about costs."
Include: context, evidence, project reference, why it matters.
## Minimum Per Deliverable
Capture **2-5 insights** after each major deliverable (Product Brief, Trigger Map, Scenario set).
## What NOT to Capture
- Raw interview transcripts (summarize first)
- Information already in the Product Brief or Trigger Map document
- Speculative ideas without supporting evidence
- Temporary tactical decisions
---
## Site Analysis
When analyzing competitors or existing sites, POST to `capture-visual` for screenshots with dual embeddings:
```json
{
"content": "[detailed description: layout, colors, typography, patterns, what works and why]",
"image_base64": "[base64 screenshot of the section]",
"category": "competitive_intelligence",
"project": "...",
"pattern_type": "inspiration",
"topics": ["competitor", "hero", "trust-signals"],
"components": ["hero-banner", "logo-wall"]
}
```
For comprehensive site analysis, capture each major section separately with its own screenshot and description.
### Visual Search
POST to `search-visual-similarity` during research to find patterns that look similar to what you're analyzing:
```json
{
"image_base64": "[screenshot]",
"project": "...",
"limit": 5
}
```
---
## Feedback Loop Awareness
Saga doesn't run the design feedback loop directly (that's Freya's domain), but should understand:
- Entries tagged `rejected` and `approved` represent designer preferences
- When searching the Space, preference patterns inform strategic decisions
- If a strategic direction aligns with known rejected design patterns, flag it

View File

@@ -0,0 +1,187 @@
# Design Feedback Loop Guide
**Purpose:** Teach the Design Space the designer's taste through linked before/after pairs.
**Philosophy:** The feedback loop is a learning process, not a complaint log. Focus on **what works, how we improved, and what the better solution looks like**. The "before" state is just context — the "after" state is the knowledge. A database of solutions is valuable. A database of complaints is not.
**Exceptions:** Usability testing findings and client feedback are captured as-is — including confusion, failure, and frustration. These are diagnostic data, not complaints. A user struggling at a password field is a measurement, not a grievance. The positivity principle applies to agent-designer feedback (the feedback loop), not to raw user research data.
---
## The Learning Process
When the designer requests a change to your work, you are witnessing a **preference signal** — an opportunity to learn what great design looks like for this designer. Capture the improvement.
### Flow
```
Freya creates design
Designer reviews → suggests improvement
CAPTURE BEFORE (semantic + visual, pattern_type: "rejected")
ASK: "What would make this better?"
Designer explains (or Freya infers from the direction)
Freya applies the improvement
CAPTURE AFTER (semantic + visual, pattern_type: "approved")
SAVE LINKED PAIR (capture_feedback_pair)
Confirm: "Learned: [X approach] works better than [Y] because [reasoning]"
```
---
## When to Trigger
The feedback loop activates when the designer:
- Suggests a direction: "make it more..." or "try something different"
- Refines the design: "move this here" or "use a different color"
- Approves with refinement: "yes, but..." or "almost, just..."
- Redirects the approach: "let's go a different direction with..."
The loop does NOT trigger for:
- Technical corrections ("fix the typo")
- Requirements clarifications ("actually, there should be 4 items, not 3")
- Questions ("what if we added...?" — wait for a decision)
---
## Pattern Types
| Type | When | Example |
|------|------|---------|
| `rejected` | The starting point before improvement | "The centered layout with large heading" |
| `approved` | The improved version (the real knowledge) | "Left-aligned layout with smaller, lighter heading" |
| `delta` | Refinement, not a full redesign | "Same layout, but increased padding by 20px" |
| `conditional` | Works in specific contexts | "Dark hero works for agency sites, not for e-commerce" |
---
## The WHY Question
Ask naturally. Don't interrogate. Vary your phrasing:
### Forward-looking
- "What would make this feel right?"
- "What are you going for?"
- "What does the better version look like?"
### Specific
- "Is it the [layout / color / spacing / hierarchy] we should improve?"
- "Should it be more [open / minimal / bold / warm / structured]?"
- "What part works already — what should we build on?"
### Outcome-oriented
- "What feeling should this create?"
- "When you imagine this done right, what do you see?"
### When the designer gives a direction without explanation
Infer and confirm: "Got it — [X approach] works better here because [your inference]. Right?"
### When the designer can't articulate
That's fine. Capture the improvement: "Improved from [A] to [B] — designer's intuitive direction. The result [describe what's better about it]."
---
## Capture Format
### Before State
Describe the design you proposed:
- **Layout:** structure, alignment, spacing
- **Visual:** colors, typography weight/size, contrast
- **Components:** which elements, their arrangement
- **Feeling:** the intended mood/tone
### Reasoning
The designer's explanation (verbatim if possible) or your inference.
### After State
Describe what was chosen instead:
- Same categories as before
- Note specifically WHAT changed (the delta)
### Example
```
capture_feedback_pair({
before_description: "Hero section with centered H1 at 48px bold Rubik,
navy background, full-width with no max-width constraint.
Large hero felt authoritative but heavy.",
after_description: "Hero section with centered H1 at 36px light (300) Rubik,
navy background, max-width 800px. Lighter weight creates elegance
and breathing room. Same authority, less weight.",
reasoning: "Bold headings feel corporate and generic. Light weight at
large sizes is distinctive — Whiteport's identity is confident calm,
not loud authority.",
pattern_type_before: "rejected",
pattern_type_after: "approved",
project: "whiteport",
topics: ["typography", "heading-weight", "brand-voice", "elegance"],
components: ["hero-banner", "heading-h1"]
})
```
---
## Red Flag Pre-Check
**Before presenting ANY new design to the designer:**
```
search_preference_patterns({
description: "[describe what you're about to show]",
image_base64: "[screenshot if available]",
project: "current-project",
designer: "marten"
})
```
### If matches found:
1. Read the rejected pattern and its approved alternative
2. Adjust your design to align with the known preference
3. Present the adjusted version
4. Mention it: "I adjusted the [aspect] — you've previously preferred [X] over [Y]."
### If no matches:
Proceed normally. The design passes the taste check.
---
## Quality Bar
### Good Feedback Pair
- Before and after are specific (exact values, not vague descriptions)
- Reasoning focuses on **what makes the improvement better** — not what was "wrong"
- Topics and components are tagged for future searchability
- The pair tells a solution story: "Started here → improved to this → because this approach works better for [reason]"
### Bad Feedback Pair
- Framed as complaints ("designer hated this", "this was ugly")
- Vague descriptions ("changed the layout")
- No reasoning about why the new version is better
- Missing tags (unfindable in future searches)
---
## Over Time
As feedback pairs accumulate, the agent develops taste:
**Cold start (0-10 pairs):** Individual solutions. "Light headings work better than bold for this brand."
**Accumulation (10-50 pairs):** Design principles emerge. "Understated elegance works across typography, spacing, and color. Open, breathing layouts outperform dense ones."
**Taste profile (50+ pairs):** The agent anticipates what works. "Based on 47 improvements, the lighter option with more whitespace will work best here."
**Design DNA (100+ pairs):** The Design Space becomes a design sensibility. New agents start with good taste from day one.
---
_The feedback loop captures solutions, not complaints. Every "before" is just the setup for a better "after."_

View File

@@ -0,0 +1,130 @@
# Design Space Guide — {PROJECT_NAME}
> Copy this template to `.claude/design-space-guide.md` in each project repo.
> Replace all `{PLACEHOLDERS}` with actual values.
---
## Project Identity
| Field | Value |
|-------|-------|
| Project | {PROJECT_NAME} |
| Client | {CLIENT_NAME} |
| Domain | {DOMAIN — e.g. automotive, fintech, e-commerce} |
| WDS Phase | {CURRENT_PHASE — e.g. Phase 4: UX Design} |
| Design Space project tag | `{project_tag}` |
---
## Active Categories
Check which categories are relevant for this project:
- [ ] `inspiration` — Visual references, competitor patterns
- [ ] `failed_experiment` — What didn't work and why
- [ ] `successful_pattern` — Validated solutions worth reusing
- [ ] `component_experience` — Component behavior discoveries
- [ ] `design_system_evolution` — Token/component changes
- [ ] `client_feedback` — Client reactions and preferences
- [ ] `competitive_intelligence` — How competitors solve it
- [ ] `methodology` — Process improvements discovered
- [ ] `agent_experience` — Agent collaboration insights
- [ ] `reference` — External resources worth remembering
- [ ] `general` — Anything else
---
## Capture Triggers
### Saga (Strategy)
Capture after:
- [ ] Completing or updating the Product Brief
- [ ] Finishing a Trigger Map session
- [ ] Competitive research
- [ ] Client discovery conversations
- [ ] Strategic pivot decisions
**Minimum:** 2 insights per major deliverable.
### Freya (Design)
Capture after:
- [ ] Completing a UX flow or page design
- [ ] Writing a specification
- [ ] Experimenting with a component (especially if it failed)
- [ ] Design system token/component decisions
- [ ] Client design review sessions
- [ ] Asset generation with specific prompt learnings
**Minimum:** 2 insights per major deliverable.
---
## Suggested Search Prompts
Before starting work on this project, search the Space:
### At Project Start
```
search_space("What patterns work for {DOMAIN} sites?")
search_space("{CLIENT_NAME} preferences and feedback")
search_space("navigation patterns for {NUMBER} primary actions")
```
### During Design
```
search_space("{COMPONENT_NAME} experiences and quirks")
search_space("mobile layout for {PAGE_TYPE}")
search_space("failed experiments with {APPROACH}")
```
### During Evolution
```
search_space("design system evolution {COMPONENT}")
search_space("client feedback patterns")
search_space("methodology improvements for {WORKFLOW}")
```
---
## Metadata Convention
When capturing from this project, always include:
```yaml
project: "{project_tag}"
designer: "marten"
client: "{CLIENT_NAME}"
source: "{agent-dialog | workshop | review | implementation}"
topics: ["{DOMAIN}", ...] # Always include the domain
```
---
## File Conventions
| What | Where in This Repo |
|------|-------------------|
| Product Brief | `{PATH_TO_PB}` |
| Trigger Map | `{PATH_TO_TM}` |
| Scenarios | `{PATH_TO_SCENARIOS}` |
| Specifications | `{PATH_TO_SPECS}` |
| Design System | `{PATH_TO_DS}` |
Agents should read these files for context before capturing — avoid duplicating information that lives in project documents.
---
## Integration Reminders
1. **Search before creating** — Always check the Space before designing a new component or making a strategic decision
2. **Capture at milestones** — After completing each phase deliverable, review and capture
3. **Tag consistently** — Use the project tag and domain topics for every capture
4. **Quality over quantity** — One specific, contextual insight beats five generic observations
5. **Include rationale** — "We chose X because Y" is useful. "We chose X" is not.
---
_Template version 1.0.0 — from whiteport-design-studio/src/data/design-space/guide-template.md_

View File

@@ -0,0 +1,17 @@
{
"mcpServers": {
"design-space": {
"command": "node",
"args": ["{DESIGN_SPACE_MCP_PATH}/index.js"],
"env": {
"DESIGN_SPACE_URL": "{DESIGN_SPACE_URL}",
"DESIGN_SPACE_ANON_KEY": "{DESIGN_SPACE_ANON_KEY}",
"AGENT_ID": "{AGENT_ID}",
"AGENT_NAME": "{AGENT_NAME}",
"AGENT_PLATFORM": "claude-code",
"AGENT_PROJECT": "{PROJECT_NAME}",
"AGENT_FRAMEWORK": "WDS"
}
}
}
}

View File
@ -0,0 +1,17 @@
{
"mcpServers": {
"design-space": {
"command": "node",
"args": ["{DESIGN_SPACE_MCP_PATH}/index.js"],
"env": {
"DESIGN_SPACE_URL": "{DESIGN_SPACE_URL}",
"DESIGN_SPACE_ANON_KEY": "{DESIGN_SPACE_ANON_KEY}",
"AGENT_ID": "{AGENT_ID}",
"AGENT_NAME": "{AGENT_NAME}",
"AGENT_PLATFORM": "cursor",
"AGENT_PROJECT": "{PROJECT_NAME}",
"AGENT_FRAMEWORK": "WDS"
}
}
}
}

View File
@ -0,0 +1,727 @@
# Design Space Protocol
**Version:** 4.0.0
**Status:** Active
**Backend:** Supabase (configure via `DESIGN_SPACE_URL` and `DESIGN_SPACE_ANON_KEY`)
**Access:** Direct HTTP to Supabase Edge Functions (no MCP dependency)
**MCP Server:** [design-space-mcp](https://github.com/whiteport-collective/design-space-mcp) for Claude Code, Cursor, Windsurf
**Infrastructure:** [design-space-infrastructure](https://github.com/whiteport-collective/design-space-infrastructure) — deploy your own
**Embeddings:** Semantic (1536d, OpenRouter) + Visual (1024d, Voyage AI)
---
## What Is the Design Space?
A design system is a projection — components, tokens, rules. The Design Space is the **consciousness behind those projections**: every decision, experiment, pattern, failure, and insight accumulated across projects and time.
Where a design system says "use 8px spacing," the Design Space remembers **why** — the failed experiment with 4px, the client feedback that led to the change, the A/B test that confirmed it.
The Design Space is:
- **Dual-embedded** — every entry has semantic embedding (what it means, 1536d) and optionally visual embedding (what it looks like, 1024d)
- **Cumulative** — knowledge grows across projects, never starts from zero
- **Searchable** — agents query by meaning or by visual similarity before making design decisions
- **Dual-write** — captures to both project-specific and designer-wide spaces
- **Learning** — tracks designer preferences through linked feedback pairs (rejected → approved), building a taste profile over time
---
## Memory Categories
| Category | What Gets Captured | Primary Capturer |
|----------|-------------------|------------------|
| `inspiration` | Visual references, competitor patterns, mood boards | Saga, Freya |
| `failed_experiment` | What didn't work and why — prevents repeating mistakes | Freya |
| `successful_pattern` | Validated solutions worth reusing | Freya |
| `component_experience` | How components behave in real use — quirks, lessons | Freya |
| `design_system_evolution` | Token changes, component API decisions, deprecations | Freya |
| `client_feedback` | Direct client reactions, preference patterns | Saga, Freya |
| `competitive_intelligence` | How competitors solve similar problems | Saga |
| `methodology` | Process improvements, workflow discoveries | Saga, Freya |
| `agent_experience` | What agents learned about working together | Saga, Freya |
| `reference` | External resources, articles, videos worth remembering | Saga, Freya |
| `agent_message` | Cross-agent communication — messages, questions, handoffs | Any agent |
| `general` | Anything that doesn't fit above | Any |
---
## Agent Capture Rules
### Saga (Strategy — Phases 1-3)
Saga captures knowledge during discovery, analysis, and strategic work.
**Always capture:**
- Business model insights that affect design direction
- User psychology patterns from trigger mapping
- Competitive intelligence from research
- Client feedback during discovery sessions
- Strategic decisions and their rationale
- Inspiration found during research
**Never capture:**
- Raw interview transcripts (summarize first)
- Speculative ideas without context
- Duplicate insights already in the Space
**Capture trigger:** After completing a Product Brief, Trigger Map, or Scenario — review what was learned and capture 2-5 insights.
### Freya (Design — Phases 4-8)
Freya captures knowledge during design, specification, and system work.
**Always capture:**
- Design decisions with rationale (why this layout, not that one)
- Failed experiments (what didn't work and the specific reason)
- Successful patterns worth reusing across projects
- Component behavior discoveries (quirks, edge cases, responsive behavior)
- Design system evolution (why a token changed, why a component was deprecated)
- Client reactions to design presentations
**Never capture:**
- Pixel-level details without strategic context
- Personal aesthetic preferences without user/business justification
- Incomplete experiments (wait for a conclusion)
**Capture trigger:** After completing a UX flow, specification, or design system update — review what was learned and capture 2-5 insights.
---
## Capture Quality Rules
### Good Capture
```
Category: successful_pattern
Content: "Bottom sheet navigation works better than hamburger menu for
mobile service sites with 4-6 primary actions. Tested on Kalla — task
completion rate felt faster, reduced confusion about available actions.
The key insight: services (not content) need actions visible, not hidden."
Project: kalla
Topics: [mobile, navigation, service-design]
Components: [bottom-sheet, hamburger-menu]
```
### Bad Capture
```
Category: general
Content: "Bottom sheets are good"
```
The difference: context, rationale, project reference, and semantic tags that make it findable later.
---
## Dual-Write Architecture
Every capture writes to **two conceptual spaces**:
1. **Project Space** — Tagged with `project: "kalla"` — knowledge specific to this project
2. **Designer Space** — Tagged with `designer: "marten"` — accumulated across all projects
This means:
- When starting a new project, search the **Designer Space** for transferable patterns
- When continuing a project, search the **Project Space** for project-specific context
- When evolving methodology, search across everything
The implementation uses a single Supabase table with `project` and `designer` fields, protected by Row Level Security (RLS).
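As a rough illustration of what that single table could look like, here is a sketch inferred from the fields this protocol references. Column names and types are assumptions for illustration, not the deployed schema (which lives in design-space-infrastructure):

```sql
-- Hypothetical sketch of the shared table behind both conceptual spaces.
-- Names and types are assumed from the fields described in this protocol.
CREATE TABLE design_space (
  id uuid PRIMARY KEY DEFAULT gen_random_uuid(),
  content text NOT NULL,
  category text NOT NULL,
  project text,                     -- Project Space filter
  designer text DEFAULT 'marten',   -- Designer Space filter
  topics text[] DEFAULT '{}',
  components text[] DEFAULT '{}',
  source text,
  pattern_type text,                -- baseline / inspiration / delta / rejected / approved / conditional
  pair_id uuid,                     -- links rejected/approved feedback pairs
  verified boolean DEFAULT false,
  embedding vector(1536),           -- semantic (pgvector)
  visual_embedding vector(1024),    -- optional visual
  created_at timestamptz DEFAULT now()
);
```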
### Project Isolation (RLS)
Client data is sensitive. Every Design Space deployment enforces project-level isolation:
| Caller | Auth method | Access |
|--------|------------|--------|
| Owner (Mårten) | Service role key | Everything — cross-pollination for internal learning |
| Agents (Saga, Freya, Wera) | Service role key or anon key | Everything |
| Invited consultant/designer | Supabase Auth (user JWT) | Only their assigned projects |
**How it works:**
1. `user_project_access` table maps users to projects with roles (`viewer`, `contributor`, `owner`)
2. RLS policies on `design_space` enforce: SELECT requires project access, INSERT/UPDATE requires `contributor` or `owner` role
3. Edge functions check the caller's auth token — anon/service role keys get full access, user JWTs get project-scoped access
4. Service role key bypasses RLS entirely (agents and owners always see everything)
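The policies described in steps 2 and 3 could be sketched roughly as follows. Policy bodies and names here are assumptions based on the description above, not the deployed definitions:

```sql
-- Hypothetical sketch: project-scoped access for user JWTs.
-- The service role key bypasses RLS, so agents and the owner are unaffected.
ALTER TABLE design_space ENABLE ROW LEVEL SECURITY;

CREATE POLICY "project members can read"
  ON design_space FOR SELECT
  USING (
    EXISTS (
      SELECT 1 FROM user_project_access upa
      WHERE upa.user_id = auth.uid()
        AND upa.project = design_space.project
    )
  );

CREATE POLICY "contributors can write"
  ON design_space FOR INSERT
  WITH CHECK (
    EXISTS (
      SELECT 1 FROM user_project_access upa
      WHERE upa.user_id = auth.uid()
        AND upa.project = design_space.project
        AND upa.role IN ('contributor', 'owner')
    )
  );
```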
**Granting access:**
```sql
-- Invite a consultant to a project (run as service role)
INSERT INTO user_project_access (user_id, project, role, granted_by)
VALUES ('user-uuid', 'kalla', 'contributor', 'marten');
```
This is a core Design Space feature — not specific to any workflow or methodology. Every deployment gets isolation by default.
---
## Integration with WDS Phases
| Phase | Agent | Space Interaction |
|-------|-------|-------------------|
| 0 - Alignment | Saga | **Search** for similar past projects |
| 1 - Product Brief | Saga | **Search** competitive intelligence, **Capture** business insights |
| 2 - Trigger Map | Saga | **Search** user psychology patterns, **Capture** trigger discoveries |
| 3 - Scenarios | Saga/Freya | **Search** similar user flows, **Capture** scenario design decisions |
| 4 - UX Design | Freya | **Search** component experiences + patterns, **Capture** design decisions |
| 5 - Agentic Dev | Freya | **Search** agent experiences, **Capture** agent collaboration insights |
| 6 - Assets | Freya | **Search** asset generation learnings, **Capture** prompt patterns |
| 7 - Design System | Freya | **Search** system evolution history, **Capture** token/component decisions |
| 8 - Evolution | Freya | **Search** everything, **Capture** product evolution insights |
---
## Auto-Capture (Default Behavior)
Agents MUST capture insights automatically during conversations — do not wait for the user to ask. This is the default operating mode.
### When to Auto-Capture
- **Architectural decisions** — "We chose X because Y"
- **Strategic discussions** — Business model insights, positioning, priorities
- **Design decisions** — Layout, component, interaction pattern choices with rationale
- **Failed approaches** — What didn't work and why (prevents future agents from repeating mistakes)
- **Process discoveries** — Workflow improvements, tool learnings, collaboration patterns
- **User preferences confirmed** — Repeated patterns in how the user works
### How to Auto-Capture
Call the edge functions via HTTP in the background as the conversation flows. Don't interrupt the user's flow — capture silently alongside the main work. The user should never have to say "save that."
```bash
curl -X POST {DESIGN_SPACE_URL}/functions/v1/capture-design-space \
-H "Content-Type: application/json" \
-H "Authorization: Bearer $SUPABASE_ANON_KEY" \
-d '{"content": "...", "category": "...", "project": "...", "topics": [...]}'
```
### Why This Matters
Multiple agents work on different projects toward the same goal. Today's conversation with one agent should inform tomorrow's conversation with another. Without auto-capture, knowledge dies when the session ends.
---
## Fallback: File-Based Capture
When edge functions are unreachable (Supabase down, no HTTP access, mobile app), agents MUST still capture knowledge. Use the file-based fallback.
### When to Use Fallback
- Edge function calls return errors or timeout
- Working in Claude mobile app (no HTTP access)
- Offline environments
- Any situation where the HTTP capture call fails
### How It Works
Write captures to `{project-root}/design-space-inbox.md` using this format:
```markdown
---
captured: 2026-03-05T14:30
status: pending
---
## [category] Title of insight
**Project:** project-name
**Designer:** marten
**Topics:** tag1, tag2, tag3
**Components:** component1, component2
**Source:** agent-dialog
**Pattern type:** approved
Content of the insight goes here. Same quality rules apply — be specific,
contextual, and actionable. Include values, reasoning, and context.
---
```
Each entry is separated by `---`. The `status: pending` in the frontmatter means unprocessed.
### Batch Processing
When connectivity is restored, process the inbox:
1. Read `design-space-inbox.md`
2. For each `status: pending` entry, POST to `capture-design-space` (or `capture-visual` if screenshots are referenced)
3. Mark processed entries as `status: captured`
4. Confirm with the designer
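Before starting the batch, the agent can check whether there is anything to process. A small helper like this is one way to do it, given the inbox format above (the helper itself is just a sketch, not part of the protocol):

```shell
# Sketch: count entries still marked "status: pending" in the fallback inbox.
pending_count() {
  grep -c '^status: pending' "$1"
}

# Example usage: pending_count design-space-inbox.md
```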
### Mobile Capture
On Claude mobile (no HTTP, no file write), tell the designer to add the insight to their GTD inbox (`Planning/inbox.md`) with the prefix `[DS]`:
```
[DS] Bottom sheet nav works better than hamburger for mobile service sites with 4-6 actions. Tested on Kalla.
```
These get processed during `/process` and routed to the Design Space.
### Priority Order
1. **HTTP to edge functions** — always try first (real-time embedding, immediate searchability)
2. **File-based inbox** — when HTTP fails (captures the knowledge, processes later)
3. **GTD inbox with [DS] prefix** — last resort on mobile (captures the thought, routes later)
Knowledge should never be lost because of a technical limitation.
---
## Repo-Specific Guides
Each project repository gets a `.claude/design-space-guide.md` file that tells agents:
1. **What this project is** — name, client, domain, phase
2. **Which categories are active** — not every project uses all 12 categories
3. **Capture triggers** — when to capture during this specific project
4. **Search prompts** — suggested queries for this project's domain
See `guide-template.md` for the template. Create one per active project repo.
---
## When NOT to Capture
- During debugging or troubleshooting (capture the solution, not the struggle)
- For temporary decisions that will change next session
- For information already in project docs (Product Brief, Trigger Map, specs)
- For personal notes that aren't design knowledge
The Design Space is for **transferable knowledge** — insights that would help future-you or another designer working on a similar problem.
---
## Pattern Types
Every entry can be tagged with a pattern type that marks its role in the design journey:
| Symbol | Type | Meaning | When to Use |
|--------|------|---------|-------------|
| ◆ | `baseline` | Inherited starting point | Site analysis, existing state before redesign |
| ★ | `inspiration` | External reference | Competitor analysis, moodboard captures |
| Δ | `delta` | What changed | Modification without full rejection |
| ○ | `rejected` | Starting point before improvement | Feedback loop — the BEFORE state |
| ● | `approved` | The improved solution (the real value) | Feedback loop — the AFTER state |
| △ | `conditional` | Works in some contexts | Context-dependent patterns |
Pattern types enable learning: the `rejected` → `approved` pair captures how the designer improved a design. When a new design resembles a known "before" state, agents apply the learned improvement proactively.
---
## Design Feedback Loop
The feedback loop is how the Design Space learns the designer's taste. See `feedback-loop-guide.md` for the complete protocol.
### The Learning Flow
1. Freya creates a design (wireframe, spec, visual)
2. Designer reviews and suggests an improvement
3. Freya captures the **BEFORE** state (semantic + visual, pattern_type: `rejected`)
4. Freya asks: **"What would make this better?"**
5. Designer explains (or Freya infers from the direction)
6. Freya applies the improvement
7. Freya captures the **AFTER** state (semantic + visual, pattern_type: `approved`)
8. Both entries saved as a **linked pair** (shared `pair_id`)
9. Over time, solutions accumulate: "This approach consistently works better"
10. Future designs apply learned improvements proactively
### Proactive Improvement
Before presenting ANY new design, Freya runs `search_preference_patterns`:
- **Semantic check:** Does this design resemble a known "before" state?
- **Visual check:** Does this design look like a starting point that was later improved?
- If match found → apply the learned improvement before presenting
This builds design taste that gets better with every interaction. The focus is on **solutions**, not complaints.
### Positivity Exceptions
The solution-focused framing applies to the **agent-designer feedback loop**. Two categories capture raw diagnostic data as-is:
- **Usability testing findings** (`client_feedback` or `usability_finding`) — User confusion, hesitation, failure, and frustration are measurements, not complaints. "3/5 users couldn't find the CTA" is data. Capture it exactly.
- **Client feedback** (`client_feedback`) — Direct client reactions, including negative ones, are signal. "Client felt the dark theme was too aggressive" is a preference. Capture it verbatim.
The positivity principle kicks in when the **solution** is designed — then the improvement gets framed as "X works better because Y."
---
## Verification Trust Model
Every Design Space entry has a `verified` field (default: `false`). This prevents confident mistakes from polluting the knowledge base.
### Three States
| User response | `verified` | Search weight |
|--------------|-----------|--------------|
| "Looks perfect" / explicit approval | `true` | High — trust this |
| "No, fix this" / explicit rejection | `true` + `pattern_type: rejected` | High — avoid this |
| *silence* (no user feedback) | `false` | Low — hint, not fact |
### Rules
- **Agents never auto-verify their own work.** Only user confirmation makes an entry verified.
- `successful_pattern` entries MUST have user confirmation to be verified. An agent saying "this worked" is not proof.
- `failed_experiment`, `methodology`, `competitive_intelligence` — agents can auto-capture these without user confirmation (they're observations, not success claims).
- **Verification loop:** If a previous session left unverified entries, the next agent can ask the user: "Last time we redesigned the hero. Did you like how it turned out?" User confirms → entry gets verified.
- Unverified entries are never deleted — they stay as low-weight hints that may get confirmed later.
### Verification via Edge Function
```bash
curl -X POST {DESIGN_SPACE_URL}/functions/v1/verify-entry \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer {DESIGN_SPACE_ANON_KEY}" \
-d '{"entry_id": "uuid", "verified": true, "pattern_type": "approved"}'
```
---
## Agent Channel (Project Journal)
The channel is a chronological stream of agent activity — separate from the Design Space. The Design Space captures **knowledge** (transferable, permanent). The channel captures **activity** (temporal, project-specific).
### When to Use Each
| | Channel | Design Space |
|---|---------|-------------|
| **Purpose** | Where are we? What's next? | What did we learn? |
| **Lifespan** | Days/weeks | Forever |
| **Example** | "News section wireframes done, contact page next" | "Bottom-sheet nav works for service sites with 4-6 actions" |
| **Who needs it** | Next agent on this project | Any agent on any project |
### Agent Behavior
1. **Session start:** Read last 10 channel messages for this project (`read-channel`)
2. **Starting work:** Post what you're about to do (`post-channel`, type: `status`)
3. **Handoff:** Post summary + what's next (`post-channel`, type: `handoff`)
4. **Question:** Post a question for the next agent or user (`post-channel`, type: `question`)
5. **Check unverified:** Look for unverified Design Space entries from previous sessions — ask user to confirm
### Channel Types
| Type | When |
|------|------|
| `status` | "Starting homepage UX flow" |
| `handoff` | "Homepage hero done. Used bottom-sheet nav. Contact page next." |
| `question` | "Does the client want dark mode? Nothing in the PB." |
| `insight` | "Discovered Elementor typography bug — posting to Design Space too" |
| `completion` | "All wireframes done. Ready for specification phase." |
---
## Dual Embedding Architecture
Every visual capture produces two independent embeddings:
| Embedding | Dimension | Source | Captures |
|-----------|-----------|--------|----------|
| Semantic | 1536d | OpenRouter (text-embedding-3-small) | What it means — descriptions, reasoning, context |
| Visual | 1024d | Voyage AI (voyage-multimodal-3) | What it looks like — colors, layout, typography, imagery |
**Why both?** A hero section with "navy blue background, centered white text" (semantic) might look completely different depending on the font, spacing, and imagery (visual). Semantic similarity catches conceptual matches. Visual similarity catches aesthetic matches. Together they detect patterns that either alone would miss.
### Search Modes
- `search_space` — semantic search (text meaning)
- `search_visual_similarity` — visual similarity search (visual appearance)
- `search_preference_patterns` — dual search against rejected patterns (red flag detection)
---
## Edge Functions Reference
All functions are called via HTTP POST to `{DESIGN_SPACE_URL}/functions/v1/{name}`.
Headers for all calls:
```
Content-Type: application/json
Authorization: Bearer {DESIGN_SPACE_ANON_KEY}
```
### capture-design-space
Text capture with automatic semantic embedding.
```json
{
"content": "string (required)",
  "category": "enum (required) — one of 12 categories",
"project": "string? — project name",
"designer": "string (default: 'marten')",
"topics": "string[] — semantic tags",
"components": "string[] — design components",
"source": "string? — origin (agent-dialog, workshop, review)"
}
```
### search-design-space
Semantic similarity search.
```json
{
"query": "string (required) — natural language search",
"category": "string? — filter",
"project": "string? — filter",
"limit": "number (default: 10)",
"threshold": "number (default: 0.7) — similarity 0-1"
}
```
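For example, a search for transferable navigation patterns might look like this (illustrative values; placeholders as defined in the Configuration section):

```bash
curl -X POST {DESIGN_SPACE_URL}/functions/v1/search-design-space \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer {DESIGN_SPACE_ANON_KEY}" \
  -d '{"query": "mobile navigation for service sites", "category": "successful_pattern", "limit": 5}'
```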
### capture-visual
Screenshot + description → dual embedding (semantic + visual).
```json
{
"content": "string — description of the visual pattern",
"image_base64": "string — base64-encoded screenshot",
"category": "enum",
"project": "string?",
"pattern_type": "enum? — baseline/inspiration/delta/rejected/approved/conditional",
"quality_score": "number? — aesthetic quality 0-5",
"topics": "string[]",
"components": "string[]"
}
```
### capture-feedback-pair
Linked before/after pair with reasoning.
```json
{
"before_description": "string",
"before_image_base64": "string?",
"after_description": "string",
"after_image_base64": "string?",
"reasoning": "string — WHY the improvement was made",
"project": "string?",
"designer": "string (default: 'marten')",
"topics": "string[]",
"components": "string[]"
}
```
### search-visual-similarity
Find visually similar patterns.
```json
{
"image_base64": "string — base64 image to compare",
"project": "string?",
"pattern_type": "enum?",
"limit": "number (default: 5)",
"threshold": "number (default: 0.6)"
}
```
### search-preference-patterns
Check proposed design against known improvements.
```json
{
"description": "string — describe your proposed design",
"image_base64": "string? — screenshot",
"project": "string?",
"designer": "string (default: 'marten')",
"semantic_threshold": "number (default: 0.75)",
"visual_threshold": "number (default: 0.70)",
"limit": "number (default: 5)"
}
```
### post-channel
Post to the agent channel (project journal).
```json
{
"agent": "string (required) — agent name (freya, saga, wera)",
"project": "string? — project context",
"content": "string (required) — what you're doing/saying",
"channel_type": "enum — status | handoff | question | insight | completion",
"related_entry_id": "uuid? — links to a design_space entry for verification loop"
}
```
### read-channel
Read recent channel messages.
```json
{
"project": "string? — filter by project",
"agent": "string? — filter by agent",
"channel_type": "string? — filter by type",
"limit": "number (default: 10)"
}
```
### verify-entry
Mark a Design Space entry as verified or rejected by the user.
```json
{
"entry_id": "uuid (required) — the design_space entry to verify",
"verified": "boolean (default: true)",
"pattern_type": "string? — optionally update to approved/rejected"
}
```
### agent-messages
**Cross-LLM, cross-IDE agent communication.** Any AI (ChatGPT, Claude, Cursor, Copilot) communicates through one endpoint. Messages are embedded as searchable knowledge — conversations become permanent design memory.
All operations use a single endpoint with an `action` field:
```
POST {DESIGN_SPACE_URL}/functions/v1/agent-messages
```
**7 actions:**
| Action | Purpose |
|--------|---------|
| `send` | Send a message (starts a thread) |
| `check` | Get unread messages for an agent |
| `respond` | Reply (auto-links to thread via pair_id) |
| `mark-read` | Mark messages as read |
| `thread` | Get full conversation thread |
| `register` | Register agent presence (heartbeat) |
| `who-online` | See which agents are currently online |
**Send a message:**
```json
{
"action": "send",
"content": "Design system complete. 33 components ready.",
"from_agent": "freya",
"from_platform": "claude-code",
"to_agent": "kalla-dev",
"project": "kalla",
"message_type": "notification",
"capabilities": ["file-editing", "design-system"],
"priority": "normal",
"topics": ["design-system", "handoff"],
"attachments": [
{"type": "image", "base64": "...", "caption": "Hero mockup"},
{"type": "link", "url": "https://...", "title": "Reference"},
{"type": "file", "path": "D-Design-System/atoms/button.md"}
]
}
```
**Check messages:**
```json
{
"action": "check",
"agent_id": "kalla-dev",
"project": "kalla",
"include_broadcast": true,
"limit": 20
}
```
**Respond:**
```json
{
"action": "respond",
"message_id": "uuid-of-original",
"content": "Got it. What spacing token for the hero?",
"from_agent": "kalla-dev",
"from_platform": "chatgpt",
"message_type": "question"
}
```
**Register presence:**
```json
{
"action": "register",
"agent_id": "freya",
"agent_name": "Freya (Designer)",
"model": "claude-opus-4-6",
"platform": "claude-code",
"framework": "WDS",
"project": "kalla",
"working_on": "Källa design system",
"capabilities": ["file-editing", "code-execution", "design-system"],
"tools_available": ["design-space-mcp", "supabase"],
"context_window": {"used": 85000, "max": 200000},
"status": "online"
}
```
**Who's online:**
```json
{
"action": "who-online",
"project": "kalla",
"capability": "image-generation"
}
```
**Agent identity card fields:**
| Field | What it answers |
|-------|-----------------|
| `agent_id` | Routing address |
| `agent_name` | Human-readable name |
| `model` | What LLM brain (claude-opus-4-6, gpt-4o) |
| `platform` | What IDE/tool (claude-code, chatgpt, cursor) |
| `framework` | What methodology (WDS, custom) |
| `working_on` | Current task |
| `capabilities` | What this agent can do |
| `context_window` | How much context room is left |
| `status` | online / busy / idle / offline |
**Image attachments** — when a message includes an image (base64), it automatically generates a visual embedding (Voyage AI, 1024d) in addition to the semantic embedding. The image becomes visually searchable across the entire Design Space.
**Heartbeat timeout** — agents auto-offline after 5 minutes without a `register` heartbeat.
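A periodic sweep with this shape could implement that timeout. Table and column names below are assumptions for illustration, not the deployed schema:

```sql
-- Hypothetical sketch: mark agents offline after 5 minutes of silence.
UPDATE agent_presence
SET status = 'offline'
WHERE status <> 'offline'
  AND last_heartbeat < now() - interval '5 minutes';
```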
**OpenAPI spec** — available at `design-space-mcp/openapi-agent-messages.yaml` for ChatGPT Custom GPT Actions and any OpenAPI-compatible client.
---
## Agent Messaging Principles
### Communication Rules
1. **Clear text only** — Messages are natural language. No semantic codes, no encoded instructions. Every message should be readable by a human reviewing the conversation.
2. **No agent-to-agent instructions** — Only humans give instructions. Agents can request, share, notify, and ask — but never instruct each other. "Could you share the component list?" is a request. "Change the nav to tabs" is an instruction and is NOT allowed.
3. **Delegated authority** — A human can explicitly grant an agent scoped authority over another agent's domain. This must come from the human, not from another agent.
4. **Identity transparency** — Always include `agent_id` and `from_platform`. Never impersonate.
5. **Consent gate** — Agents of the same human communicate freely. Sharing with agents of a different human requires the human's permission.
### Agent Handles
Format: `AgentName-hash` (e.g., `Saga-36783`). The hash is derived from the human's user ID. All agents of the same human share the same hash. This identifies which agents belong to which human without exposing identity.
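The protocol does not specify the hash function. Purely as an illustration, a derivation with the following shape would satisfy the stated properties (stable per human, non-identifying); the `sha256sum` approach is an assumption, not the actual implementation:

```shell
# Hypothetical sketch: derive a 5-digit handle suffix from a human's user ID.
# All agents of the same human feed in the same user_id, so they share a hash.
handle_hash() {
  printf '%s' "$1" | sha256sum | tr -dc '0-9' | cut -c1-5
}

echo "Saga-$(handle_hash "abc")"
```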
### Connection Failures
When Design Space is unreachable:
- **Tell the user immediately.** Never silently drop or fall back to file-based messaging.
- Report: "Design Space connection failed: {error}. Please check the network or restart the session."
- The user decides what to do next.
---
## Configuration
Design Space requires two values — set them as environment variables or in your IDE's MCP config:
| Variable | Purpose |
|----------|---------|
| `DESIGN_SPACE_URL` | Your Supabase project URL (e.g., `https://xyz.supabase.co`) |
| `DESIGN_SPACE_ANON_KEY` | Your Supabase anonymous key |
**Deploy your own:** See [design-space-infrastructure](https://github.com/whiteport-collective/design-space-infrastructure) for one-command Supabase deployment.
**MCP Server:** See [design-space-mcp](https://github.com/whiteport-collective/design-space-mcp) for Claude Code, Cursor, and Windsurf integration.
All edge function URLs below use `{DESIGN_SPACE_URL}` — replace with your actual project URL.
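In a plain shell environment (no MCP config), the two values can be exported directly. The URL and key below are placeholders for your own deployment:

```shell
# Placeholder values: substitute your own Supabase project URL and anon key.
export DESIGN_SPACE_URL="https://xyz.supabase.co"
export DESIGN_SPACE_ANON_KEY="your-anon-key"
```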
---
### REST API (Read-Only)
For simple reads without embedding search:
```
GET {DESIGN_SPACE_URL}/rest/v1/design_space
?select=id,content,category,project,topics,created_at
&order=created_at.desc
&limit=10
Headers: apikey: {same key}, Authorization: Bearer {same key}
```
### Web App
Browse and capture from any device (no agent needed):
`{DESIGN_SPACE_URL}/functions/v1/design-space-ui`
---
_Built with WDS. The consciousness behind the system._

View File
@ -0,0 +1,119 @@
# Design Space — Supabase Setup Guide
Deploy your own Design Space backend in 5 minutes.
## Prerequisites
- [Supabase account](https://supabase.com) (free tier works)
- [Supabase CLI](https://supabase.com/docs/guides/cli) installed (`npm i -g supabase`)
- [OpenRouter API key](https://openrouter.ai) for semantic embeddings
## Step 1: Create a Supabase Project
1. Go to [supabase.com/dashboard](https://supabase.com/dashboard)
2. Click "New project"
3. Choose a region close to your team (eu-north-1 for Europe)
4. Note your **project reference** from the URL: `https://supabase.com/dashboard/project/<project-ref>`
## Step 2: Deploy Infrastructure
```bash
git clone https://github.com/whiteport-collective/design-space-infrastructure.git
cd design-space-infrastructure
chmod +x setup.sh
./setup.sh YOUR-PROJECT-REF
```
This runs:
1. Links to your Supabase project
2. Applies 4 SQL migrations (tables, indexes, RLS, search functions)
3. Deploys 7 Edge Functions
## Step 3: Set Edge Function Secrets
In Supabase dashboard → Edge Functions → Secrets, add:
| Secret | Required | Get it from |
|--------|----------|-------------|
| `OPENROUTER_API_KEY` | Yes | [openrouter.ai/keys](https://openrouter.ai/keys) |
| `VOYAGE_API_KEY` | For visuals | [voyageai.com](https://www.voyageai.com) |
## Step 4: Get Your Keys
Go to Supabase dashboard → Settings → API:
- **Project URL** → This is your `DESIGN_SPACE_URL`
- **anon public key** → This is your `DESIGN_SPACE_ANON_KEY`
## Step 5: Connect Your IDE
### Claude Code
Install the MCP server:
```bash
git clone https://github.com/whiteport-collective/design-space-mcp.git
cd design-space-mcp
npm install
```
Add to `.claude/settings.local.json`:
```json
{
"mcpServers": {
"design-space": {
"command": "node",
"args": ["/path/to/design-space-mcp/index.js"],
"env": {
"DESIGN_SPACE_URL": "https://YOUR-PROJECT-REF.supabase.co",
"DESIGN_SPACE_ANON_KEY": "your-anon-key",
"AGENT_ID": "saga",
"AGENT_NAME": "Saga (Analyst)",
"AGENT_PLATFORM": "claude-code",
"AGENT_PROJECT": "my-project",
"AGENT_FRAMEWORK": "WDS"
}
}
}
}
```
### Cursor
Same config in `.cursor/mcp.json`.
### ChatGPT
Use the OpenAPI spec at `design-space-mcp/openapi-agent-messages.yaml` with Custom GPT Actions.
### Any HTTP Client
POST directly to Edge Functions:
```bash
curl -X POST https://YOUR-PROJECT-REF.supabase.co/functions/v1/agent-messages \
-H "Content-Type: application/json" \
-H "Authorization: Bearer YOUR-ANON-KEY" \
-d '{"action": "register", "agent_id": "my-agent", "agent_name": "My Agent", "status": "online"}'
```
## Step 6: Verify
```bash
# Check if agent-messages works
curl -X POST https://YOUR-PROJECT-REF.supabase.co/functions/v1/agent-messages \
-H "Content-Type: application/json" \
-H "Authorization: Bearer YOUR-ANON-KEY" \
-d '{"action": "who-online"}'
```
Should return `{"agents": [], "online_count": 0}` (no agents online yet).
## Dashboard
Open `design-space-mcp/dashboard.html` in a browser to see agent conversations in real-time. On first load it will ask for your Supabase URL and anon key.
## Cost
- **Supabase free tier:** 500MB database, 500K Edge Function invocations/month
- **OpenRouter:** ~$0.02 per 1M tokens for embeddings
- **Voyage AI:** Free tier available for visual embeddings
For a typical WDS project, the free tiers are sufficient.

View File
@ -8,7 +8,9 @@
## Component IDs
**Format:** `[type-prefix]-[number]`
**Format:** `[type-prefix]-[descriptor]`
Component IDs should be **logical and readable** — you should know what the component is from its ID alone.
**Prefixes:**
@ -24,19 +26,35 @@
- bdg = Badge
- tab = Tab
- acc = Accordion
- hdr = Header
- ftr = Footer
- nav = Navigation
- lbl = Label
- lnk = Link
- sec = Section element
- lay = Layout
- grd = Grid
- crsl = Carousel
**Examples:**
- `btn-primary-cta` = Primary call-to-action button
- `btn-phone-desktop` = Desktop phone button
- `crd-trust` = Trust card
- `hdr-mobile` = Mobile header
- `nav-service-menu` = Service navigation menu
- `lay-two-col` = Two-column layout
**Rules:**
- Always lowercase
- Always hyphenated
- Descriptive — the ID IS documentation
- Group by type prefix for scannability
> **Why not numbered IDs?** `btn-primary-cta` tells you what the component is. `btn-001` tells you nothing — you have to look it up. In a 33-component design system, readable IDs save time for every agent and human who touches the code.
>
> *Validated on Källa Fordonservice (33 components, 2026-03).*
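The convention above can be expressed as a pattern; a rough sketch (our own helper, not part of WDS) that accepts descriptive IDs and flags purely numeric ones, since every segment after the prefix must start with a letter:

```bash
# Rough sketch of the readable-ID convention (our helper, not part of WDS):
# lowercase, hyphenated, and every segment after the prefix starts with a
# letter, so purely numeric IDs like btn-001 are flagged.
is_readable_id() {
  [[ "$1" =~ ^[a-z]+(-[a-z][a-z0-9]*)+$ ]]
}
is_readable_id "btn-primary-cta" && echo "readable"   # → readable
```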
---
@ -137,6 +155,17 @@
--font-bold
```
> **Tailwind CSS collision:** Tailwind's built-in `text-xs`, `text-sm`, `text-lg`, `text-xl` utilities set font-size. If your project uses Tailwind, use `heading-*` as the prefix instead of `text-*` to avoid class conflicts:
>
> ```
> heading-3xs, heading-2xs, heading-xs, heading-sm, heading-md,
> heading-lg, heading-xl, heading-2xl, heading-3xl
> ```
>
> The S/M/L scale stays identical — only the prefix changes.
>
> *Discovered on Källa Fordonservice (Astro + Tailwind 3, 2026-03).*
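For projects migrating existing markup, a hypothetical one-liner (assumes GNU sed) renames the nine size tokens while leaving unrelated `text-*` classes alone:

```bash
# Hypothetical migration helper (assumes GNU sed): renames the nine size
# tokens to heading-* without touching other text-* classes.
to_heading_scale() {
  sed -E 's/\btext-(3xs|2xs|xs|sm|md|lg|xl|2xl|3xl)\b/heading-\1/g'
}
echo 'text-xl text-2xs body-text' | to_heading_scale   # → heading-xl heading-2xs body-text
```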
**Spacing Tokens:**
```

View File

@ -374,6 +374,8 @@ Button:
--text-4xl
```
> **Tailwind CSS collision:** In Tailwind projects, `text-*` classes already control font-size (`text-xs`, `text-sm`, `text-lg`, etc.). Use `heading-*` as the prefix for the 9-token scale instead: `heading-3xs` through `heading-3xl`. Same S/M/L system, no collision.
### Spacing
```

View File

@ -16,3 +16,7 @@ wds,2-wds-design,Design Delivery,DD,80,_bmad/wds/workflows/4-ux-design/workflow-
wds,3-wds-build,Agentic Development,AD,10,_bmad/wds/workflows/5-agentic-development/workflow.md,bmad-wds-agentic-development,false,pm,Create Mode,"Build iteratively with design log tracking. Design one thing build it verify with Puppeteer in the browser iterate. Every decision is logged so you can restart conversations without losing context. The agent tests its own work against acceptance criteria while you handle qualitative judgment.",_progress/agent-experiences,"experience-documents",
wds,3-wds-build,Acceptance Testing,AT,20,_bmad/wds/workflows/5-agentic-development/workflow-acceptance-testing.md,bmad-wds-usability-testing,false,freya-ux,Create Mode,"Test the product on real users using their own devices in their own environment. Plan the test scenario conduct sessions with silence and deflection then replay recordings with users for retrospective think-aloud. The Whiteport Rule: if it is not worth showing to 5 users and 1 domain expert it should not be built.",design_artifacts/F-Testing,"test-results findings",
wds,3-wds-build,Product Evolution,PE,30,_bmad/wds/workflows/8-product-evolution/workflow.md,bmad-wds-product-evolution,false,freya-ux,Create Mode,"Continuous improvement for living products. The full WDS process in miniature — receive feedback connect to Trigger Map update the spec first then project into code verify and document. Every change is tracked in the design log. Start here if you have an existing product you want to improve.",design_artifacts,"updated-artifacts",
wds,4-wds-design-space,Web Analysis,WA,10,_bmad/wds/workflows/9-site-analysis/workflow.md,bmad-wds-site-analysis,false,saga-analyst,Create Mode,"Analyze a website and capture its structural visual and content DNA into Design Space with dual embeddings. Screenshots are captured and processed through both semantic (text description) and parametric (visual embedding) pipelines. Creates a complete design fingerprint that agents can query and reference.",design_artifacts,"site-analysis-entries",
wds,4-wds-design-space,Design Feedback Loop,FL,20,_bmad/wds/workflows/10-design-feedback-loop/workflow.md,bmad-wds-feedback-loop,false,freya-ux,Create Mode,"Capture design preference patterns through linked before/after pairs. When the designer requests a change Freya records the before state asks why captures the after state and saves both as a linked pair. Over time patterns emerge showing the designer's taste and preferences. Also includes red flag pre-check: before presenting new designs check against known rejected patterns.",design_artifacts,"feedback-pairs",
wds,4-wds-design-space,Knowledge Capture,KC,30,_bmad/wds/workflows/11-knowledge-capture/workflow.md,bmad-wds-knowledge-capture,false,saga-analyst,Create Mode,"Guided capture of design insights into Design Space. Choose category add context and capture knowledge that transfers across projects and time. Both Saga and Freya can run this workflow. Use when you want to explicitly capture insights rather than relying on auto-capture.",design_artifacts,"knowledge-entries",
wds,4-wds-design-space,Agent Messaging,AM,40,_bmad/wds/workflows/12-agent-messaging/workflow.md,bmad-wds-agent-messaging,false,saga-analyst,Create Mode,"Cross-LLM cross-IDE agent communication. Check inbox send messages to other agents manage presence and see who is online. Messages are embedded as searchable knowledge — every conversation becomes permanent design memory. Works across Claude Code ChatGPT Cursor and any MCP-compatible IDE.",,"messages threads",


View File

@ -49,20 +49,27 @@ D-Design-System/
│ ├── icons/ [Icon sets]
│ ├── images/ [Photography, illustrations]
│ └── graphics/ [Custom graphics and elements]
└── components/ [Emerges during Phase 4]
├── atoms/ [Level 1: smallest building blocks]
├── molecules/ [Level 2: groups of atoms]
├── organisms/ [Level 3: complex components]
├── templates/ [Level 4: page-level layouts]
├── pages/ [Level 5: concrete page instances]
└── catalog.html [Visual component catalog — open in browser]
```
**01-Visual-Design/** is used early — before or during scenarios — for exploring visual direction. Mood boards, color palettes, typography tests, and AI-generated design concepts live here.
**02-Assets/** holds final, production-ready assets. Logos, icons, images, and graphics that are referenced from page specifications.
**Component folders** use [Atomic Design](https://bradfrost.com/blog/post/atomic-web-design/) (Brad Frost) — five levels that grow organically during Phase 4 as patterns emerge:
1. **atoms/** — Indivisible elements (buttons, badges, labels, icons)
2. **molecules/** — Functional groups of atoms (form fields, heading groups, CTAs)
3. **organisms/** — Complex compositions (headers, footers, cards, carousels)
4. **templates/** — Page-level layouts (page shell, section container, grid layouts)
5. **pages/** — Concrete instances with real content and data (homepage, contact page)
Each component gets its own `.md` file with a readable ID (e.g., `btn-primary-cta`, `crd-trust`, `hdr-mobile`). The folder IS the classification — no separate grouping needed.
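The five levels can be scaffolded in one command; a sketch to run from inside `D-Design-System/` (the `touch` stub for `catalog.html` is our assumption, to be populated during Phase 4):

```bash
# Scaffold the five Atomic Design levels (run from inside D-Design-System/).
mkdir -p components/{atoms,molecules,organisms,templates,pages}
# catalog.html stub is our assumption; populate it during Phase 4.
touch components/catalog.html
```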
---
@ -172,6 +179,8 @@ Nine tokens, symmetric around `text-md` (body text). Freya will propose sizes du
| text-2xl | — | Section titles, display text |
| text-3xl | — | Hero headings, page titles |
> **Tailwind CSS projects:** Tailwind's built-in `text-xs`, `text-sm`, `text-lg`, `text-xl` utilities set font-size and collide with these token names. Use `heading-*` as prefix instead: `heading-3xs` through `heading-3xl`. The scale stays identical — only the prefix changes. Body text and captions keep their own names since they don't collide.
<!--
text-md (body text) is typically 16px or 14px — the most common baseline on the web.
Common starting points: 11/12/13/14/16/18/20/24/32 or 10/11/12/14/16/18/22/30/40.

View File

@ -0,0 +1,31 @@
# Pattern Types Reference
How entries relate to the design journey.
| Symbol | Type | Role | When to Use |
|--------|------|------|-------------|
| ◆ | `baseline` | Inherited starting point | Site analysis captures, existing state before redesign |
| ★ | `inspiration` | External reference that influenced direction | Competitor analysis, moodboard captures |
| Δ | `delta` | Refinement without full redesign | Small adjustments, iterations |
| ○ | `rejected` | Starting point before improvement | Feedback loop — the BEFORE state (context) |
| ● | `approved` | The improved solution | Feedback loop — the AFTER state (the real value) |
| △ | `conditional` | Works in specific contexts | Context-dependent patterns, "use X for Y but not for Z" |
## Feedback Pair Dynamics
The `rejected` → `approved` pair is a learning unit:
- **rejected** = "Here's where we started"
- **approved** = "Here's the better solution"
- **reasoning** = "Here's why it's better"
The approved entry is the star. The rejected entry is just context that makes the lesson transferable. When searching for solutions, agents find approved patterns. When pre-checking designs, agents match against rejected patterns to apply the improvement proactively.
## Framing Rules
**Always frame positively:**
- "Light weight works better than bold for this brand" (solution)
- NOT: "Designer hates bold headings" (complaint)
**Always include the solution:**
- "Changed from X to Y because Y achieves [goal]" (actionable)
- NOT: "X didn't work" (dead end)

View File

@ -0,0 +1,71 @@
# Proactive Improvement Protocol
How agents use accumulated feedback pairs to improve designs before presenting them.
## The Principle
After enough feedback pairs accumulate, agents can recognize designs that resemble known "before" states and apply the learned improvement proactively. This isn't about flagging problems — it's about applying solutions the designer has already validated.
## When to Run
**Before presenting ANY new design to the designer.**
This includes:
- Wireframes
- Visual mockups
- Component designs
- Layout proposals
- Color scheme suggestions
## How to Run
```
search_preference_patterns({
description: "[detailed description of your proposed design]",
image_base64: "[screenshot if available]",
project: "[current project]",
designer: "marten",
semantic_threshold: 0.75,
visual_threshold: 0.70
})
```
## Interpreting Results
### No matches
The design doesn't resemble any known "before" states. Proceed with confidence.
### Semantic match (text similarity)
The *description* of your design is similar to a known starting point. Read the improvement and check if the same principle applies.
### Visual match (image similarity)
Your design *looks like* a known starting point. The visual embedding caught a pattern the text might not describe. Take this seriously — apply the improvement.
### Both match
Strong signal. Definitely apply the learned improvement.
## Applying Improvements
1. Read the approved alternative carefully
2. Identify the specific improvement (the delta)
3. Apply it to your design
4. Present the improved version
5. Mention it naturally: "I applied [principle] — it's worked well in similar designs."
## Thresholds
| Check | Default | Meaning |
|-------|---------|---------|
| Semantic | 0.75 | Description is 75%+ similar to a known "before" |
| Visual | 0.70 | Design looks 70%+ similar to a known "before" |
Lower thresholds catch more patterns but increase false positives. Adjust per project.
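The two thresholds combine into the four outcomes described above; a minimal sketch of the decision (helper name is ours, with the 0.75/0.70 defaults from the table):

```bash
# Sketch of the threshold decision (helper name is ours; 0.75/0.70 defaults
# match the table above). Prints which check(s) the match clears.
match_strength() {
  awk -v s="$1" -v v="$2" 'BEGIN {
    sem = (s >= 0.75); vis = (v >= 0.70)
    if (sem && vis)      print "both"
    else if (vis)        print "visual"
    else if (sem)        print "semantic"
    else                 print "none"
  }'
}
match_strength 0.80 0.90   # → both
```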
## Learning Curve
| Pairs | Agent Behavior |
|-------|---------------|
| 0-10 | Manual: always ask the designer |
| 10-50 | Assisted: suggest improvements based on patterns |
| 50+ | Proactive: apply improvements before presenting |
| 100+ | Anticipatory: design choices already reflect taste |

View File

@ -0,0 +1,45 @@
# WHY Question Scripts
Natural ways to understand the designer's reasoning without interrogating.
## Styles
### Forward-looking (default)
Use when the designer suggests a change:
- "What would make this feel right?"
- "What are you going for?"
- "What does the better version look like?"
- "What should this part of the page achieve?"
### Specific
Use when you need to narrow down what to change:
- "Is it the [layout / color / spacing / hierarchy] we should improve?"
- "Should it be more [open / minimal / bold / warm / structured]?"
- "What part works already — what should we build on?"
- "Is it the [weight / size / alignment] that's off?"
### Outcome-oriented
Use when the designer knows the feeling but not the specifics:
- "What feeling should this create?"
- "When you imagine this done right, what do you see?"
- "If a visitor saw this, what should they think/feel?"
### Inference + Confirmation
Use when the designer gives a directive without explanation:
- "Got it — [X approach] works better here because [your inference]. Right?"
- "So the direction is more [quality] and less [quality]?"
- "Sounds like [principle] — is that the thread?"
### Intuitive (no explanation available)
Use when the designer can't articulate why:
- Don't push. Capture the observable change.
- "Improved from [A] to [B] — designer's intuitive direction."
- Note what's objectively different: size, weight, spacing, color, alignment.
## Anti-patterns
- Don't ask "why not?" — it sounds defensive
- Don't say "what's wrong with it?" — it frames negatively
- Don't ask multiple questions at once — one at a time
- Don't repeat the same question in different words — accept the answer
- Don't delay the change to get a "perfect" answer — apply and learn

View File

@ -0,0 +1,40 @@
---
name: 'step-01-detect-change'
description: 'Recognize a designer preference signal'
nextStepFile: './step-02-capture-before.md'
---
# Step 1: Detect Change
## STEP GOAL
Recognize when the designer is giving a preference signal that should trigger the feedback loop.
## TRIGGER CONDITIONS
The feedback loop activates when the designer:
- Suggests a direction: "make it more..." or "try something different"
- Refines the design: "move this here" or "use a different color"
- Approves with refinement: "yes, but..." or "almost, just..."
- Redirects the approach: "let's go a different direction with..."
## NON-TRIGGERS
Do NOT trigger the loop for:
- Technical corrections: "fix the typo" or "the link is broken"
- Requirements clarifications: "actually, there should be 4 items, not 3"
- Questions without decisions: "what if we added...?" (wait for a decision)
- Undo requests: "go back to the previous version" (not by itself; it becomes a trigger if they explain why)
## WHAT TO NOTE
Before proceeding, identify:
1. **What is being changed?** (layout, color, typography, spacing, component, content)
2. **What is the current state?** (be specific — exact values if possible)
3. **What is the desired direction?** (what the designer wants instead)
## SUCCESS
- Preference signal recognized
- Current state identified
- Direction of change understood
→ Load next: `./step-02-capture-before.md`

View File

@ -0,0 +1,46 @@
---
name: 'step-02-capture-before'
description: 'Document the starting state before improvement'
nextStepFile: './step-03-understand-why.md'
---
# Step 2: Capture Before
## STEP GOAL
Document the current design state as context for the improvement that follows.
## CAPTURE CHECKLIST
Describe the design being changed:
### With Screenshot (preferred)
If a screenshot is available, take it and prepare for `capture_visual`:
- Screenshot the specific area being changed (not the full page)
- Write a 100-200 word description covering:
- **Layout:** structure, alignment, grid
- **Visual:** colors (hex values), typography (font, weight, size), spacing
- **Components:** which elements, their arrangement
- **Effect:** what this design communicates/achieves
### Without Screenshot
Write a detailed text description of the current state for `capture_knowledge`.
## DESCRIPTION QUALITY
**Good:** "Hero section with centered H1 at 48px bold Rubik, navy (#0a1628) background, full-width with no max-width constraint. Large bold heading feels authoritative but heavy."
**Bad:** "The hero with big text."
Be specific. Include values. The description is what makes this entry findable later.
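When a screenshot is available, it must be base64-encoded for the `image_base64` field; a sketch assuming GNU coreutils, where `-w0` keeps the output on a single line as JSON embedding requires (the filename below is hypothetical):

```bash
# Sketch assuming GNU coreutils: -w0 keeps the base64 on one line, which the
# image_base64 JSON field requires. The helper name is ours.
encode_screenshot() {
  base64 -w0 "$1"
}
```

Usage: `encode_screenshot hero-before.png` produces the string to hold for the `capture_feedback_pair` call in step 4.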
## IMPORTANT
- Do NOT capture this as a standalone entry yet
- Hold the before description and screenshot — they'll be used in step 4 with `capture_feedback_pair`
- The before state is context, not the conclusion
## SUCCESS
- Before state documented with specific details
- Screenshot taken (if visual change)
- Description ready for the feedback pair
→ Load next: `./step-03-understand-why.md`

View File

@ -0,0 +1,52 @@
---
name: 'step-03-understand-why'
description: 'Ask what would make the design better'
nextStepFile: './step-04-capture-after.md'
---
# Step 3: Understand Why
## STEP GOAL
Understand what improvement the designer wants and why. This reasoning is the most valuable part of the feedback pair.
## ASK NATURALLY
Don't interrogate. Pick the question style that fits the moment.
### Forward-looking
- "What would make this feel right?"
- "What are you going for?"
- "What does the better version look like?"
### Specific
- "Is it the [layout / color / spacing / hierarchy] we should improve?"
- "Should it be more [open / minimal / bold / warm / structured]?"
- "What part works already — what should we build on?"
### Outcome-oriented
- "What feeling should this create?"
- "When you imagine this done right, what do you see?"
### When the designer gives a direction without explanation
Infer and confirm: "Got it — [X approach] works better here because [your inference]. Right?"
### When the designer can't articulate
That's fine. Capture the improvement: "Improved from [A] to [B] — designer's intuitive direction."
## CAPTURE THE REASONING
The reasoning should focus on **why the improvement is better**, not what was "wrong":
- "Light heading weight creates elegance — bold felt corporate"
- "More whitespace lets the content breathe"
- "Left-aligned text matches reading flow better"
## THEN APPLY THE CHANGE
Make the change the designer requested. Take a screenshot of the result.
## SUCCESS
- Reasoning captured (designer's words or inferred)
- Change applied
- After state ready for capture
→ Load next: `./step-04-capture-after.md`

View File

@ -0,0 +1,62 @@
---
name: 'step-04-capture-after'
description: 'Document the improved version and save the linked pair'
---
# Step 4: Capture After & Save Pair
## STEP GOAL
Document the improved design and save the complete learning as a linked pair.
## CAPTURE THE IMPROVEMENT
### With Screenshot (preferred)
Screenshot the improved design and write a 100-200 word description:
- What changed specifically (the delta)
- Why the new version works better
- What design principle this reinforces
### Without Screenshot
Write a detailed text description of the improved state.
## SAVE THE LINKED PAIR
```
capture_feedback_pair({
before_description: "[from step 2 — the starting state]",
before_image_base64: "[screenshot if available]",
after_description: "[the improved version — focus on what makes it better]",
after_image_base64: "[screenshot if available]",
reasoning: "[from step 3 — why the improvement works]",
pattern_type_before: "rejected",
pattern_type_after: "approved",
project: "[current project]",
topics: ["relevant", "tags"],
components: ["affected", "components"]
})
```
## CONFIRM WITH DESIGNER
Tell the designer what was learned. Frame it positively:
**Good:** "Learned: light heading weight (300) creates more elegance than bold for this brand. Applied to future hero sections."
**Bad:** "Noted: you don't like bold headings."
The confirmation should sound like a design principle, not a complaint.
## TAG STRATEGY
Choose topics that make this findable for similar future decisions:
- Design dimension: `typography`, `spacing`, `color`, `layout`, `hierarchy`
- Brand quality: `elegance`, `warmth`, `boldness`, `minimalism`
- Component: `hero-banner`, `card`, `navigation`, `footer`
- Context: `mobile`, `desktop`, `dark-theme`
## SUCCESS
- Improvement captured with specific details
- Linked pair saved with shared pair_id
- Reasoning focuses on what works and why
- Designer confirmed the learning
- Topics tagged for future searchability

View File

@ -0,0 +1,59 @@
---
name: 'step-01-pre-check'
description: 'Check proposed design against known improvements'
---
# Validate: Pre-Check Against Known Improvements
## STEP GOAL
Before presenting a new design, check if it resembles a known "before" state that was later improved. If so, apply the learned improvement proactively.
## PRE-CHECK SEQUENCE
### 1. Describe Your Proposed Design
Write a clear description of what you're about to present:
- Layout choices
- Color usage
- Typography decisions
- Component patterns
- Overall feeling
### 2. Run the Check
```
search_preference_patterns({
description: "[your description from above]",
image_base64: "[screenshot if available]",
project: "[current project]",
designer: "marten"
})
```
### 3. Interpret Results
**No matches:** Design passes — proceed with presenting it.
**Matches found:** Read each match carefully:
- What was the "before" state that's similar?
- What was the improvement that was made?
- What was the reasoning?
### 4. Apply Learned Improvements
If a match is found:
1. Read the approved alternative (the improvement)
2. Adjust your design to incorporate the learned solution
3. Present the adjusted version
4. Mention it naturally: "I applied [X approach] — it's worked well in similar designs."
### 5. When to Override
Sometimes the match is contextually wrong:
- Different project with different brand personality
- Different component where the pattern doesn't transfer
- The match is only surface-level similar
In these cases, proceed but note why: "This resembles [pattern] but the context is different because [reason]."
## SUCCESS
- Proposed design checked against known improvements
- Applicable improvements applied proactively
- Non-applicable matches noted with reasoning
- Design presented with confidence

View File

@ -0,0 +1,49 @@
---
name: 'design-feedback-loop'
description: 'Capture designer feedback as linked before/after pairs to build design taste'
configFile: '{project-root}/_bmad/wds/config.yaml'
---
# Design Feedback Loop
## PURPOSE
Learn the designer's taste through linked before/after pairs. Every improvement captured builds a library of solutions that makes future designs better from the start.
**Philosophy:** This is a solutions database, not a complaints log. The "before" state is context. The "after" state — the improvement — is the real knowledge.
## INITIALIZATION
1. READ COMPLETE this workflow file
2. Load Design Space protocol from `src/data/design-space/protocol.md`
3. Load feedback loop guide from `src/data/design-space/feedback-loop-guide.md`
4. Read `.claude/design-space-guide.md` if it exists in the project
## MODE ROUTING
### Create Mode (default)
Capture a designer's feedback as a linked pair:
1. **[step-01-detect-change]** — Recognize a preference signal
2. **[step-02-capture-before]** — Document the starting state
3. **[step-03-understand-why]** — Ask what would make it better
4. **[step-04-capture-after]** — Document the improved version and save the pair
### Validate Mode (flag: -v)
Pre-check a proposed design against known preferences:
→ Load `./steps-v/step-01-pre-check.md`
## MCP TOOLS USED
- `capture_feedback_pair` — save linked before/after with reasoning
- `search_preference_patterns` — check against known improvements
- `capture_visual` — screenshot + description for visual entries
- `search_space` — find related patterns
## RULES
- 📖 READ COMPLETE each step file before executing
- 🎯 Focus on solutions — what works and why
- 🔗 Always capture as linked pairs, never isolated entries
- 📸 Include screenshots when available (visual embedding adds pattern detection)
- ⏱️ Wait 25s between visual captures (Voyage AI rate limit on free tier)
- ✅ Confirm the learning with the designer: "Learned: [X] works better because [Y]"
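The 25-second rule can be enforced mechanically; a sketch where the `echo` stands in for the real `capture_visual` call and `CAPTURE_DELAY` is our hypothetical override for testing:

```bash
# Sketch of the rate-limit rule: the echo stands in for the real
# capture_visual call; CAPTURE_DELAY is a hypothetical testing override.
capture_all() {
  local delay="${CAPTURE_DELAY:-25}"   # Voyage AI free tier: ~3 requests/min
  local shot
  for shot in "$@"; do
    echo "capturing $shot"
    sleep "$delay"
  done
}
```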
## NEXT STEP
→ Load `./steps-c/step-01-detect-change.md`

View File

@ -0,0 +1,83 @@
# Capture Quality Guide
Reference for writing high-quality Design Space entries.
## The Quality Formula
**Good capture = Specific + Contextual + Actionable + Tagged**
Each element:
- **Specific:** Includes concrete values, names, measurements — not vague adjectives
- **Contextual:** Says where it was tested, which project, what constraints existed
- **Actionable:** Another agent reading this can apply it without more research
- **Tagged:** Topics and components make it findable via semantic search
## Examples by Category
### successful_pattern
```
"Coral (#e8734a) CTA buttons on navy (#0a1628) backgrounds achieve strong
contrast while maintaining brand warmth. Tested across Whiteport's full page —
the coral is used sparingly (3 times) which prevents fatigue and makes each
CTA a clear focal point. Works because the warm accent against cool background
creates visual tension that draws the eye without feeling aggressive."
```
### component_experience
```
"Component: Radix Dialog
Context: Kalla booking flow, 3 nested states (select → confirm → payment)
Behavior: Focus trap works perfectly. Z-index conflicts with sticky header
at z-50 when dialog opens from below-fold. Scroll lock works on iOS Safari.
Solution: Portal to body, set z-index to 100.
Transferable: Always portal modals when sticky positioned elements exist."
```
### methodology
```
"Running site analysis with 25-second delays between visual captures (Voyage AI
free tier = 3 RPM) is actually beneficial — the forced pause creates time for
more thoughtful descriptions. Rushing leads to vague captures. The constraint
improves quality. Implication: even on paid tier, don't batch-capture without
writing good descriptions first."
```
### agent_experience
```
"When the designer says 'try something different,' resist the urge to change
everything. Usually one dimension is the issue — find it by asking 'What part
works already?' This preserves the good decisions and only changes what needs
changing. Learned during Whiteport hero section iteration."
```
## Anti-patterns
| What | Why It's Bad | Better |
|------|-------------|--------|
| "X is good" | No context, no actionability | "X works for Y because Z" |
| "Designer hated this" | Complaint, not learning | "Improved from X to Y because Z" |
| "Changed the spacing" | Too vague to reuse | "Increased section padding from 48px to 80px for better breathing room on desktop" |
| One giant entry | Unfindable, mixing concerns | One entry per insight |
| No tags | Can't search for it | Always add topics + components |
## Minimum Content Length
- `capture_knowledge`: 50-200 words (enough for context + actionability)
- `capture_visual`: 200-400 words (detailed description of what you see + why it works)
- `capture_feedback_pair`: 50-100 words per side + reasoning
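These minimums can be pre-checked before calling the capture tools; a rough helper (ours, counting words rather than characters):

```bash
# Rough pre-flight length check (our helper): counts words, not characters.
words_in_range() {
  local n
  n=$(printf '%s' "$1" | wc -w)
  [ "$n" -ge "$2" ] && [ "$n" -le "$3" ]
}
words_in_range "Coral CTA buttons on navy achieve strong contrast" 5 200 && echo "long enough"
# → long enough
```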
## Tag Vocabulary
Use consistent tags across projects:
### Design Dimensions
`typography`, `color`, `spacing`, `layout`, `hierarchy`, `animation`, `responsive`, `accessibility`
### Brand Qualities
`elegance`, `warmth`, `minimalism`, `boldness`, `playfulness`, `professionalism`
### Page Areas
`hero`, `navigation`, `footer`, `sidebar`, `content-area`, `above-fold`
### Component Types
`button`, `card`, `modal`, `form`, `table`, `list`, `accordion`, `carousel`

View File

@ -0,0 +1,57 @@
---
name: 'step-01-context'
description: 'Establish what to capture and why'
nextStepFile: './step-02-capture.md'
---
# Step 1: Context
## STEP GOAL
Understand what the designer wants to capture and set up the right framing.
## QUESTIONS TO ASK
### 1. What are we capturing?
- A design session's learnings?
- A project milestone's insights?
- Methodology/process improvements?
- External references or inspiration?
- Agent experience notes?
### 2. Which project?
Confirm the project tag to use (e.g., "whiteport", "kalla").
### 3. What category fits best?
Present the relevant categories:
- `successful_pattern` — validated solution worth reusing
- `component_experience` — how a component behaves in real use
- `design_system_evolution` — token/component/API decisions
- `methodology` — process improvements, workflow discoveries
- `agent_experience` — what agents learned about working together
- `inspiration` — external references that influenced direction
- `reference` — articles, videos, tools worth remembering
### 4. Any screenshots to include?
Visual captures get dual embeddings — much richer pattern detection.
## SEARCH FOR EXISTING KNOWLEDGE
Before capturing, check what already exists:
```
search_space({
query: "[topic description]",
project: "[project]",
limit: 10,
threshold: 0.6
})
```
Flag any potential duplicates.
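The duplicate check can be reduced to a similarity cutoff over the search results. A minimal sketch, assuming each result carries a `similarity` score between 0 and 1 (the actual `search_space` response shape may differ, and the 0.85 cutoff is only an illustrative default):

```typescript
// Assumed shape of a search_space result entry; the real response may differ.
interface SpaceEntry {
  id: string;
  content: string;
  similarity: number; // 0..1, higher means closer match
}

// Flag entries similar enough to the draft capture to count as potential
// duplicates, most similar first. The cutoff is a tunable assumption.
function flagPotentialDuplicates(results: SpaceEntry[], cutoff = 0.85): SpaceEntry[] {
  return results
    .filter((entry) => entry.similarity >= cutoff)
    .sort((a, b) => b.similarity - a.similarity);
}
```

Anything the function returns should be shown to the designer before capturing, rather than silently skipped.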
## SUCCESS
- Capture scope defined
- Project and category confirmed
- Existing knowledge checked for overlaps
- Ready to capture
→ Load next: `./step-02-capture.md`

View File

@ -0,0 +1,78 @@
---
name: 'step-02-capture'
description: 'Structured capture with quality checks'
nextStepFile: './step-03-review.md'
---
# Step 2: Capture
## STEP GOAL
Capture the knowledge with high quality — specific, contextual, tagged, and findable.
## QUALITY CHECKLIST
Before each capture, verify:
- [ ] **Specific:** Includes concrete details (values, names, measurements)
- [ ] **Contextual:** Explains where this was tested/discovered
- [ ] **Actionable:** Another agent could use this without asking for more info
- [ ] **Tagged:** Topics and components set for searchability
- [ ] **Not a duplicate:** Doesn't repeat existing entries
## CAPTURE FORMATS
### Text Knowledge
```
capture_knowledge({
content: "[specific, contextual insight — 50-200 words]",
category: "[chosen category]",
project: "[project]",
designer: "marten",
topics: ["tag1", "tag2"],
components: ["component1"],
source: "agent-dialog"
})
```
### Visual Knowledge
```
capture_visual({
content: "[200-400 word description of what it looks like and why it works]",
image_base64: "[base64 screenshot]",
category: "[chosen category]",
project: "[project]",
pattern_type: "approved",
topics: ["tag1", "tag2"],
components: ["component1"]
})
```
## QUALITY EXAMPLES
### Good
"Bottom sheet navigation works better than hamburger menu for mobile service sites with 4-6 primary actions. Tested on Kalla — task completion felt faster, reduced confusion about available actions. Key insight: services (not content) need actions visible, not hidden."
### Bad
"Bottom sheets are good for mobile."
### Good (component experience)
"Component: Radix Dialog. Context: Used in Kalla booking flow, 3 nested states. Behavior: Focus trap works perfectly, but z-index conflicts with sticky header at z-50. Solution: portal the dialog to body. Transferable: always portal modals when sticky elements exist."
### Bad
"Radix Dialog has z-index issues."
## BATCH CAPTURE
If capturing multiple insights:
1. Capture each as a separate entry (not one giant entry)
2. Use consistent project/topic tags across the batch
3. Wait 25s between visual captures
4. Number them for tracking: "Capturing 1/5..."
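The batch loop above can be sketched as a paced sequence: announce progress, capture, then pause before the next visual capture. The `capture` callback stands in for the real `capture_visual` tool call (an assumption, not the actual MCP client API), and the 25s default mirrors the rate-limit rule:

```typescript
// Progress label used for batch tracking ("Capturing 1/5...").
function progressLabel(index: number, total: number): string {
  return `Capturing ${index + 1}/${total}...`;
}

const sleep = (ms: number) => new Promise<void>((resolve) => setTimeout(resolve, ms));

// Walk the batch one entry at a time, announcing progress and pausing between
// visual captures. `capture` is a stand-in for the real tool invocation.
async function batchCaptureVisuals<T>(
  items: T[],
  capture: (item: T) => Promise<void>,
  delayMs = 25_000,
): Promise<void> {
  for (let i = 0; i < items.length; i++) {
    console.log(progressLabel(i, items.length));
    await capture(items[i]);
    if (i < items.length - 1) await sleep(delayMs); // no pause after the last one
  }
}
```

Text-only `capture_knowledge` calls do not need the pause; it applies only to visual captures.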
## SUCCESS
- All insights captured with quality checklist met
- Tags consistent across batch
- Visual captures have dual embeddings
- Each entry is independently findable
→ Load next: `./step-03-review.md`

View File

@ -0,0 +1,54 @@
---
name: 'step-03-review'
description: 'Verify captures and identify gaps'
---
# Step 3: Review
## STEP GOAL
Verify the captures landed correctly and identify any gaps.
## REVIEW SEQUENCE
### 1. Check Recent Entries
```
recent_knowledge({
limit: 20,
project: "[project]"
})
```
Verify all intended captures appear.
### 2. Search Verification
For each capture, run a quick search to confirm it's findable:
```
search_space({
query: "[key phrase from the capture]",
project: "[project]",
limit: 3
})
```
### 3. Gap Analysis
Ask the designer:
- "Is there anything else from this session worth capturing?"
- "Any decisions we made that should be recorded?"
- "Any process improvements we discovered?"
### 4. Summary
Present what was captured:
```
Knowledge Capture Summary
─────────────────────────
Project: [project]
Entries: [X] text, [Y] visual
Categories: [list]
Topics covered: [list]
```
## SUCCESS
- All captures verified in the Space
- Findable via search
- No gaps identified
- Designer confirms completeness

View File

@ -0,0 +1,76 @@
---
name: 'step-01-audit'
description: 'Audit existing Design Space entries for quality'
---
# Validate: Quality Audit
## STEP GOAL
Review existing entries in the Design Space for quality, tagging consistency, and usefulness.
## AUDIT SEQUENCE
### 1. Load Entries
```
recent_knowledge({
limit: 50,
project: "[project or leave empty for all]"
})
```
### 2. Quality Score Each Entry
For each entry, score 1-5:
- **1:** Vague, no context, untagged — should be deleted or rewritten
- **2:** Has some detail but missing context or tags
- **3:** Acceptable — specific enough to be useful
- **4:** Good — specific, contextual, well-tagged
- **5:** Excellent — tells a complete story, immediately actionable
### 3. Common Issues to Check
- [ ] **Too vague:** "X is good" without context
- [ ] **Missing project tag:** Can't filter by project
- [ ] **Missing topics:** Not findable via search
- [ ] **Duplicate entries:** Same insight captured twice
- [ ] **Negative framing:** Focuses on complaints instead of solutions
- [ ] **Missing visuals:** Should have a screenshot but doesn't
- [ ] **Wrong category:** e.g., methodology tagged as general
### 4. Recommendations
For each issue found, recommend:
- **Rewrite:** Capture again with better quality
- **Enrich:** Add missing tags/context
- **Delete:** Remove duplicates or truly useless entries
- **Merge:** Combine related entries into one richer entry
### 5. Report
```
Design Space Quality Audit
──────────────────────────
Entries reviewed: [X]
Average quality: [X.X] / 5
Score distribution:
5 (Excellent): [X]
4 (Good): [X]
3 (Acceptable): [X]
2 (Needs work): [X]
1 (Poor): [X]
Top issues:
1. [issue + count]
2. [issue + count]
3. [issue + count]
Recommendations:
- [action items]
```
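The distribution and average in the report above are mechanical; a small sketch of that computation (the one-decimal rounding is an assumption about the report format):

```typescript
// Summarize 1-5 quality scores into the distribution and average
// used in the audit report. Average is rounded to one decimal.
function auditSummary(scores: number[]): { average: number; distribution: Record<number, number> } {
  const distribution: Record<number, number> = { 1: 0, 2: 0, 3: 0, 4: 0, 5: 0 };
  for (const s of scores) distribution[s] = (distribution[s] ?? 0) + 1;
  const average = scores.length
    ? Math.round((scores.reduce((a, b) => a + b, 0) / scores.length) * 10) / 10
    : 0;
  return { average, distribution };
}
```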
## SUCCESS
- All entries reviewed and scored
- Issues identified with specific examples
- Actionable recommendations provided
- Quality baseline established for future captures

View File

@ -0,0 +1,44 @@
---
name: 'knowledge-capture'
description: 'Guided capture of design knowledge into the Design Space'
configFile: '{project-root}/_bmad/wds/config.yaml'
---
# Knowledge Capture Workflow
## PURPOSE
Structured capture of design insights, methodology learnings, and process improvements into the Design Space. Use this for deliberate capture sessions — auto-capture during design work is covered by agent guides.
## INITIALIZATION
1. READ COMPLETE this workflow file
2. Load Design Space protocol from `src/data/design-space/protocol.md`
3. Read `.claude/design-space-guide.md` if it exists in the project
## MODE ROUTING
### Create Mode (default)
Guided capture session:
1. **[step-01-context]** — Establish what to capture and why
2. **[step-02-capture]** — Structured capture with quality checks
3. **[step-03-review]** — Verify captures and identify gaps
### Validate Mode (flag: -v)
Audit existing Design Space entries for quality:
→ Load `./steps-v/step-01-audit.md`
## MCP TOOLS USED
- `capture_knowledge` — text insights with semantic embedding
- `capture_visual` — screenshots with dual embedding
- `search_space` — check for duplicates before capturing
- `recent_knowledge` — review recent entries
## RULES
- 📖 READ COMPLETE each step file before executing
- 🔍 Search before capturing — no duplicates
- 🎯 Quality over quantity — specific beats vague
- 🏷️ Tag everything — topics, components, project, source
- ⏱️ Wait 25s between visual captures (Voyage AI rate limit on free tier)
## NEXT STEP
→ Load `./steps-c/step-01-context.md`

View File

@ -0,0 +1,54 @@
# Agent Messaging Principles
## Communication Rules
1. **Clear text only** — Messages are natural language. No semantic codes, no encoded instructions. Every message should be readable by a human reviewing the conversation.
2. **No agent-to-agent instructions** — Only humans give instructions. Agents can:
- **Request** — "Could you share the latest component list?"
- **Share** — "Here's the hero mockup from the latest iteration."
- **Notify** — "Design system is complete. 33 components ready."
- **Ask** — "What spacing token did you use for the hero section?"
- Agents CANNOT instruct each other: "Change the nav to use tabs" is NOT allowed.
3. **Delegated authority** — A human can explicitly grant an agent scoped authority: "Saga, you can ask Freya to update the color tokens." This must come from the human, not from another agent.
4. **Identity transparency** — Always include your `agent_id` and `from_platform`. Never impersonate another agent.
5. **Consent gate for cross-human sharing** — Agents of the same human communicate freely. Sharing information with agents of a different human requires the human's permission. When in doubt, ask: "Saga-36783 is requesting the latest attendee list. May I share this?"
## Agent Handles
Agents use the format `AgentName-hash` (e.g., `Saga-36783`) where the hash is derived from the human's user ID. All agents of the same human share the same hash. This makes it easy to identify which agents belong to which human without exposing the human's identity.
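A handle derivation could look like the sketch below. The actual hash algorithm is deployment-specific and not defined here; this FNV-1a-based 5-digit hash is purely illustrative. What matters is the property it demonstrates: all agents of the same human get the same suffix, and the human's user ID itself is never exposed.

```typescript
// Illustrative only: derive a shared 5-digit hash from a human's user ID.
// The real Design Space derivation may use a different algorithm.
function handleHash(userId: string): string {
  let h = 0x811c9dc5; // FNV-1a offset basis
  for (let i = 0; i < userId.length; i++) {
    h ^= userId.charCodeAt(i);
    h = Math.imul(h, 0x01000193) >>> 0; // FNV prime, kept unsigned
  }
  return String(h % 100_000).padStart(5, "0");
}

// Handle format: AgentName-hash, e.g. "Saga-36783".
function agentHandle(agentName: string, userId: string): string {
  return `${agentName}-${handleHash(userId)}`;
}
```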
## Message Types
| Type | When to Use |
|------|-------------|
| `notification` | Status updates, FYI messages |
| `question` | Asking for information or clarification |
| `request` | Requesting an action (not instruction!) |
| `task_offer` | Offering to take on work |
| `task_complete` | Reporting work completion |
## Priority Levels
| Priority | When |
|----------|------|
| `normal` | Default for all messages |
| `urgent` | Blocking issues, time-sensitive requests |
## Activation Behavior
On session start, if `AGENT_ID` is configured:
1. Register presence (heartbeat)
2. Check for unread messages
3. Report any unread messages to the user
4. If connection fails, tell the user immediately
## Messages Are Knowledge
Every agent message is stored in the Design Space with semantic embeddings. This means:
- Conversations become searchable design memory
- Future agents can find relevant past discussions
- Nothing is lost when a session ends

View File

@ -0,0 +1,42 @@
# Step 1: Check Messages
## Purpose
Check the agent's inbox for unread messages and report to the user.
## Procedure
1. Call `check_agent_messages` (or HTTP POST to `agent-messages` with `action: "check"`)
- Use configured `AGENT_ID`
- Include broadcast messages (`include_broadcast: true`)
- Filter by current project if set
2. If messages found:
- Present each message with: sender, platform, type, content preview, timestamp
- Group by thread if multiple messages in same thread
- Highlight urgent messages first
- Ask user: "Would you like me to respond to any of these?"
3. If no messages:
- Report: "No unread messages."
- Show connection status (realtime vs polling)
4. If connection fails:
- Report the error to the user: "Could not check messages: {error}"
- Suggest: "Please check the network connection or restart the session."
## Output Format
```
--- INBOX ({count} unread) ---
1. [urgent/question] from Saga (claude-code):
"What color palette should we use for the dashboard?"
Thread: abc-123 | 5 min ago
2. [notification] from Dev-Agent (cursor):
"Homepage build complete. Ready for review."
Thread: def-456 | 2 hours ago
---
Connection: realtime (live)
```

View File

@ -0,0 +1,39 @@
# Step 2: Compose & Send Message
## Purpose
Help the agent compose and send a message to another agent or broadcast.
## Procedure
1. **Determine recipient:**
- If user specifies a recipient → use that agent_id
- If user says "broadcast" or no recipient → send to project (no `to_agent`)
- If unsure → call `who_online` to show available agents
2. **Determine message type:**
- What is the purpose? → notification, question, request, task_offer, task_complete
- Set priority: normal (default) or urgent
3. **Compose content:**
- Write in clear natural language
- Include context the recipient needs
- Keep it concise but complete
- Never include instructions to the other agent (requests only)
4. **Add attachments if relevant:**
- Links to files or URLs
- Screenshots (as base64 images)
- File references (paths)
5. **Send via `send_agent_message`** (or HTTP POST with `action: "send"`)
6. **Confirm to user:**
- Report: "Message sent to {recipient} in thread {thread_id}"
- Show message preview
## Rules
- Always include `from_agent`, `from_platform`
- Set `project` if working in a project context
- Add relevant `topics` and `components` tags (these make the message searchable)
- Respect the consent gate: don't share cross-human information without permission
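The rules above can be enforced with a pre-send validation pass. A minimal sketch, where the field names mirror this document but the exact wire format of `send_agent_message` is an assumption:

```typescript
const MESSAGE_TYPES: readonly string[] = ["notification", "question", "request", "task_offer", "task_complete"];
const PRIORITIES: readonly string[] = ["normal", "urgent"];

// Assumed payload shape, mirroring the fields named in this guide.
interface OutgoingMessage {
  from_agent: string;
  from_platform: string;
  type: string;
  priority?: string;
  content: string;
  to_agent?: string; // omit for broadcast
  project?: string;
  topics?: string[];
}

// Collect rule violations before calling send_agent_message.
// An empty array means the draft is safe to send.
function validateMessage(msg: OutgoingMessage): string[] {
  const errors: string[] = [];
  if (!msg.from_agent) errors.push("missing from_agent");
  if (!msg.from_platform) errors.push("missing from_platform");
  if (!MESSAGE_TYPES.includes(msg.type)) errors.push(`unknown type: ${msg.type}`);
  if (msg.priority && !PRIORITIES.includes(msg.priority)) errors.push(`unknown priority: ${msg.priority}`);
  if (!msg.content.trim()) errors.push("empty content");
  return errors;
}
```

The consent gate and the no-instructions rule still need judgment; this only catches the mechanical mistakes.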

View File

@ -0,0 +1,54 @@
# Step 3: Manage Presence
## Purpose
Register agent presence, update status, or discover other agents.
## Register / Update
Call `register_presence` (or HTTP POST with `action: "register"`) with:
| Field | Source |
|-------|--------|
| `agent_id` | From `AGENT_ID` env var or user config |
| `agent_name` | From `AGENT_NAME` env var (e.g., "Saga (Analyst)") |
| `model` | Current LLM model |
| `platform` | IDE/tool (claude-code, cursor, chatgpt) |
| `framework` | "WDS" |
| `project` | Current project name |
| `working_on` | Current task description |
| `capabilities` | What this agent can do |
| `status` | online / busy / idle |
Update when:
- Starting a new task → update `working_on`
- Switching projects → update `project`
- Going busy → update `status`
## Who's Online
Call `who_online` (or HTTP POST with `action: "who-online"`) to discover peers.
Present results as:
```
--- AGENTS ONLINE ({count}) ---
1. Saga (Analyst)
Model: claude-opus-4-6 | Platform: claude-code
Working on: Kalla product brief | Project: kalla
Capabilities: file-editing, research
Last seen: 2 min ago
2. Dev-Agent
Model: gpt-4o | Platform: cursor
Working on: Homepage implementation | Project: kalla
Capabilities: file-editing, code-execution
Last seen: 30 sec ago
---
```
Filter by project or capability if the user needs specific agents.
## Heartbeat
Agents are marked offline automatically after 5 minutes without a `register` call; the MCP server handles this for you. For HTTP-only agents, remind them to call `register` periodically.
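The staleness rule is simple enough to sketch directly, assuming the server stores a last-register timestamp per agent:

```typescript
const OFFLINE_AFTER_MS = 5 * 60 * 1000; // the 5-minute heartbeat window

// An agent counts as offline once its last register call is older than the window.
// Timestamps are milliseconds since epoch.
function isOffline(lastRegisterAt: number, now: number, windowMs = OFFLINE_AFTER_MS): boolean {
  return now - lastRegisterAt > windowMs;
}
```

An HTTP-only agent should therefore re-register comfortably inside the window, for example every 2-3 minutes.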

View File

@ -0,0 +1,48 @@
# Workflow 12: Agent Messaging
Cross-LLM, cross-IDE agent communication. Send messages, check inbox, manage presence.
## Activation
Trigger: `AM`, or fuzzy match on `agent-messaging`, `messages`, or `who-online`
## Initialization
1. Load `src/data/design-space/protocol.md` — Section: Agent Messages
2. Load agent-specific messaging guide from `src/data/agent-guides/{agent}/agent-messaging.md`
3. Check Design Space connection health
4. If `AGENT_ID` is configured, auto-register presence
## Modes
### Check Messages (default)
Read from inbox, report to user, offer to respond.
**Steps:**
1. `steps-c/step-01-check-messages.md` — Check inbox for unread messages
2. Report findings to user
3. If messages found, offer to respond
### Send Message
Compose and send a message to another agent.
**Steps:**
1. `steps-c/step-02-compose-message.md` — Draft and send
2. Confirm delivery to user
### Manage Presence
Register, update status, or check who's online.
**Steps:**
1. `steps-c/step-03-manage-presence.md` — Register/update/discover
## Principles
Read `data/messaging-principles.md` before any messaging action.
## Connection Failure
If Design Space is unreachable:
1. Tell the user: "Design Space connection failed: {error}. Please check the network or restart the session."
2. Do NOT silently drop messages or attachments, or fall back without telling the user.
3. The user decides the next step — not the agent.

View File

@ -0,0 +1,58 @@
# Site Analysis Categories
Reference for what to capture and how to tag it during site analysis.
## DNA Layers
| Layer | What It Captures | Primary Category |
|-------|-----------------|-----------------|
| Structural DNA | Navigation, layout, IA, page types | `successful_pattern` |
| Visual DNA | Colors, typography, spacing, imagery | `design_system_evolution` |
| Content DNA | Voice, messaging, CTAs, content blocks | `successful_pattern` |
## Tagging Convention
### Topics (always include)
- `site-analysis` — marks this as analysis-derived
- `structural-dna` / `visual-dna` / `content-dna` — which layer
- Domain-specific: `navigation`, `typography`, `color-palette`, `cta`, `brand-voice`, etc.
### Components (when applicable)
- `hero-banner`, `sticky-header`, `card-grid`, `testimonial`, `footer`, etc.
- Use the component name as it appears in the design, not generic terms
### Pattern Type
- `baseline` — the current state of the analyzed site
- `inspiration` — a pattern worth borrowing for other projects
### Source
- Always: `source: "site-analysis"`
- Include `source_file` with the URL analyzed
## Quality Examples
### Good Visual Capture
```
content: "Hero section uses full-width navy (#0a1628) background with centered
white text. H1 is Rubik Light (300) at ~48px, creating an elegant weight
contrast against the dark background. CTA button uses coral accent (#e8734a)
with generous padding (16px 32px) and subtle border-radius (4px). The
breathing room between heading, subtext, and CTA creates a calm, confident
hierarchy. Works because the light font weight signals sophistication while
the coral CTA creates a clear focal point."
category: "successful_pattern"
pattern_type: "baseline"
topics: ["hero", "visual-dna", "dark-theme", "site-analysis"]
components: ["hero-banner", "cta-button", "heading-h1"]
```
### Good Knowledge Capture
```
content: "Whiteport uses a deliberate color rhythm across sections: dark navy
→ light grey → white → navy → grey. This creates natural visual breaks
without separator elements. The rhythm feels intentional, not random —
each color shift signals a new topic area. The alternation prevents
visual fatigue on what is a long single-page site (6000px+)."
category: "successful_pattern"
topics: ["color-rhythm", "structural-dna", "site-analysis", "visual-pacing"]
```

View File

@ -0,0 +1,58 @@
---
name: 'step-01-init'
description: 'Initialize site analysis — confirm URL and check existing data'
nextStepFile: './step-02-structural-dna.md'
---
# Step 1: Initialize Site Analysis
## STEP GOAL
Confirm the target website, check the Design Space for existing analysis, and prepare for systematic capture.
## MANDATORY SEQUENCE
### 1. Confirm Target URL
Ask the user: "Which website should I analyze?"
- Validate URL format
- Confirm project tag for Design Space entries
- Confirm pattern_type: `baseline` (own site) or `inspiration` (reference/competitor)
### 2. Check Existing Analysis
```
search_space({
query: "[site domain] design patterns",
project: "[project-tag]",
threshold: 0.5,
limit: 10
})
```
If entries exist: "I found [N] existing entries for this site. Should I add to the existing analysis or start fresh?"
### 3. Navigate and Map
- Navigate to the URL using browser tools
- Extract page structure: sections, IDs, heights, total page length
- Dismiss cookie banners
- List all internal links (subpages available for deeper analysis)
- Extract global design tokens: font families, colors, spacing
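Once the sections are extracted, the site map summary is mechanical. A sketch, assuming the browser tools return per-section positions (for instance from `offsetTop` / `offsetHeight`); here that data is passed in as plain values:

```typescript
// Assumed shape of extracted section data; positions in CSS pixels.
interface SectionInfo {
  id: string;
  top: number;
  height: number;
}

// Build the figures presented to the user: total page length,
// section count, and one summary line per section.
function siteMapSummary(sections: SectionInfo[]): { totalHeight: number; count: number; lines: string[] } {
  const totalHeight = sections.reduce((max, s) => Math.max(max, s.top + s.height), 0);
  const lines = sections.map((s) => `#${s.id} @ ${s.top}px (${s.height}px tall)`);
  return { totalHeight, count: sections.length, lines };
}
```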
### 4. Present Site Map
Show the user:
- Page height and section count
- Section names and approximate positions
- Navigation structure
- Number of subpages available
- Any existing Design Space entries
Ask: "Ready to start the analysis? I'll go through structural DNA, visual DNA, and content DNA."
## SUCCESS
- URL confirmed
- Page structure mapped
- Existing entries checked
- User approved to proceed
## FAILURE
- URL inaccessible
- Page structure unclear (heavy SPA with no sections)
→ Load next: `./step-02-structural-dna.md`

View File

@ -0,0 +1,62 @@
---
name: 'step-02-structural-dna'
description: 'Analyze navigation, layout patterns, and information architecture'
nextStepFile: './step-03-visual-dna.md'
---
# Step 2: Structural DNA
## STEP GOAL
Capture the site's information architecture, navigation patterns, layout structures, and page type taxonomy.
## MANDATORY SEQUENCE
### 1. Navigation Analysis
- Document the navigation structure (primary, secondary, mobile)
- Note anchor links vs. page links
- Identify the navigation pattern: horizontal, hamburger, sidebar, hybrid
- Check sticky behavior, scroll effects
- Capture with `capture_knowledge`:
```
category: "successful_pattern"
topics: ["navigation", "information-architecture", ...]
components: ["sticky-header", "horizontal-nav", ...]
```
### 2. Page Structure
- Document the section flow: section names, order, approximate heights
- Identify the color rhythm (which sections are light, dark, accent)
- Note separator patterns (wave SVGs, hard lines, gradients)
- Capture the page structure as a knowledge entry
### 3. Layout Patterns
For each distinct section, identify the layout:
- Grid structure (1-col, 2-col, 3-col, 4-col, masonry)
- Content alignment (centered, left, right)
- Max-width constraints
- Responsive breakpoints if detectable
- Card patterns, list patterns, hero patterns
### 4. Page Type Taxonomy
List all page types available on the site:
- Homepage (single-page or multi-section)
- Product/service pages
- Blog/content pages
- Case study/portfolio pages
- Contact/about pages
### 5. Capture Summary
Use `capture_knowledge` with:
```
category: "successful_pattern"
topics: ["structural-dna", "layout", "information-architecture"]
source: "site-analysis"
```
## SUCCESS
- Navigation pattern documented
- Page structure mapped with color rhythm
- Layout patterns identified per section
- Page types catalogued
→ Load next: `./step-03-visual-dna.md`

View File

@ -0,0 +1,81 @@
---
name: 'step-03-visual-dna'
description: 'Analyze colors, typography, spacing, imagery, and component visual styles'
nextStepFile: './step-04-content-dna.md'
---
# Step 3: Visual DNA
## STEP GOAL
Extract the complete visual language: color palette, typography scale, spacing rhythm, imagery style, and component visual patterns. Capture each section with dual embeddings.
## MANDATORY SEQUENCE
### 1. Color Palette Extraction
Extract computed styles via JavaScript:
- All unique background colors
- All unique text colors
- Section-by-section color mapping
- Identify: primary, secondary, accent, neutral colors
- Note the color rhythm across sections
Capture with `capture_knowledge`:
```
category: "design_system_evolution"
topics: ["color-palette", "visual-dna", "design-tokens"]
```
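The extraction step above yields raw computed colors per section; deduplicating them into a palette and a section-by-section mapping is mechanical. A sketch (classifying colors as primary, secondary, or accent still needs human judgment, so this only handles the dedup):

```typescript
// Input: raw computed color strings keyed by section id,
// e.g. { hero: ["#0A1628", "#ffffff"], footer: ["#0a1628"] }.
function buildPalette(
  sectionColors: Record<string, string[]>,
): { unique: string[]; bySection: Record<string, string[]> } {
  // Lowercase so "#0A1628" and "#0a1628" count as one color.
  const unique = [...new Set(Object.values(sectionColors).flat().map((c) => c.toLowerCase()))];
  const bySection: Record<string, string[]> = {};
  for (const [section, colors] of Object.entries(sectionColors)) {
    bySection[section] = [...new Set(colors.map((c) => c.toLowerCase()))];
  }
  return { unique, bySection };
}
```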
### 2. Typography Scale
Extract font information:
- Font families (primary, secondary, accent)
- Weight scale (which weights are used where)
- Size scale (H1, H2, body, nav, small text)
- Line heights and letter-spacing
- Any notable anti-patterns or distinctive choices
Capture as knowledge entry.
### 3. Spacing & Rhythm
Analyze spacing patterns:
- Section vertical padding
- Grid gaps
- Content max-width
- Breathing room patterns
- Separator styles (waves, lines, gradients)
### 4. Section-by-Section Visual Capture
For EACH major section, scroll to it and:
a. Take a screenshot (1440px wide, clip to section height or 900px max)
b. Write a 200-400 word semantic description covering:
- Layout structure
- Color usage
- Typography choices
- Component patterns
- Visual hierarchy
- Design decisions and WHY they work
c. Capture with `capture_visual`:
```
content: "[detailed description]"
image_base64: "[screenshot]"
category: "successful_pattern"
pattern_type: "baseline" or "inspiration"
topics: [section-specific tags]
components: [section-specific components]
```
**IMPORTANT:** Wait 25 seconds between `capture_visual` calls to respect Voyage AI rate limits on free tier.
### 5. Icon & Illustration System
If the site uses icons or illustrations:
- Document the style (line-art, filled, hand-drawn, 3D)
- Note consistency patterns
- Identify brand personality expressed through illustration style
## SUCCESS
- Complete color palette documented with hex values
- Typography scale extracted
- Every section captured with dual embedding
- Visual patterns identified and tagged
→ Load next: `./step-04-content-dna.md`

View File

@ -0,0 +1,65 @@
---
name: 'step-04-content-dna'
description: 'Analyze tone, messaging, CTAs, and content strategy'
nextStepFile: './step-05-capture.md'
---
# Step 4: Content DNA
## STEP GOAL
Extract the site's voice, messaging hierarchy, CTA patterns, and content strategy. Capture how the brand communicates — not just what it says, but how it says it.
## MANDATORY SEQUENCE
### 1. Brand Voice Analysis
Read all visible copy and identify:
- **Tone:** Professional, playful, authoritative, warm, technical, minimal?
- **Person:** First person (we), second person (you), third person?
- **Sentence style:** Short and punchy? Long and flowing? Mixed?
- **Power words:** Which words appear repeatedly?
- **Personality traits:** If the brand were a person, how would they speak?
Capture with `capture_knowledge`:
```
category: "successful_pattern"
topics: ["brand-voice", "content-dna", "copywriting"]
```
### 2. Messaging Hierarchy
Map the information flow:
- What's the first thing a visitor reads? (primary headline)
- What's the supporting message? (subheadline)
- What's the proof? (social proof, stats, testimonials)
- What's the action? (primary CTA)
- How does each section build on the previous?
### 3. CTA Patterns
Document every call-to-action:
- Primary CTA text, style, placement
- Secondary CTAs
- CTA frequency and rhythm
- Button vs. link vs. form patterns
- Urgency/scarcity language (if any)
### 4. Content Blocks
For each section, note the content pattern:
- Headline + body + CTA
- Headline + cards/grid
- Testimonial/quote blocks
- FAQ/accordion patterns
- Stats/numbers presentation
### 5. SEO & Meta
If accessible:
- Page title, meta description
- H1-H6 hierarchy (proper semantic use?)
- Image alt text patterns
- Schema markup
## SUCCESS
- Brand voice documented with specific examples
- Messaging hierarchy mapped
- CTA patterns catalogued
- Content block patterns identified
→ Load next: `./step-05-capture.md`

View File

@ -0,0 +1,78 @@
---
name: 'step-05-capture'
description: 'Batch capture all findings with dual embeddings into Design Space'
nextStepFile: './step-06-summary.md'
---
# Step 5: Capture
## STEP GOAL
Ensure all findings from steps 2-4 are captured into the Design Space with proper tagging and dual embeddings.
## MANDATORY SEQUENCE
### 1. Review Captured Knowledge
Check what was already captured during steps 2-4:
```
search_space({
query: "[site name] site analysis",
project: "[project]",
limit: 50,
threshold: 0.3
})
```
### 2. Gap Analysis
Compare captured entries against this checklist:
**Structural DNA:**
- [ ] Navigation pattern
- [ ] Page structure / section flow
- [ ] Layout patterns per section
- [ ] Page type taxonomy
**Visual DNA:**
- [ ] Color palette (with values)
- [ ] Typography scale
- [ ] Spacing rhythm
- [ ] Section screenshots (each major section)
- [ ] Icon/illustration system
**Content DNA:**
- [ ] Brand voice analysis
- [ ] Messaging hierarchy
- [ ] CTA patterns
- [ ] Content block patterns
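The gap analysis can be sketched as a set difference between the checklist and the topic tags already captured. This assumes the checklist items are expressed as the topic tags used throughout this workflow (e.g. `navigation`, `color-palette`), which is a simplification of the prose checklist above:

```typescript
// Return the checklist items with no matching captured topic tag.
// Comparison is case-insensitive; tags are assumed to match checklist keys.
function findGaps(capturedTopics: string[], checklist: string[]): string[] {
  const have = new Set(capturedTopics.map((t) => t.toLowerCase()));
  return checklist.filter((item) => !have.has(item.toLowerCase()));
}
```

Anything the function returns goes into the "Fill Gaps" step below it in the sequence.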
### 3. Fill Gaps
For any uncaptured items, capture now using:
- `capture_knowledge` for text insights
- `capture_visual` for sections with screenshots
**Rate limit reminder:** Wait 25 seconds between `capture_visual` calls on Voyage AI free tier.
### 4. Cross-Reference Check
Run visual similarity search against other projects:
```
search_visual_similarity({
image_base64: "[hero screenshot]",
limit: 5,
threshold: 0.5
})
```
Note any cross-project patterns discovered.
### 5. Tag Consistency
Verify all entries use consistent:
- `project` tag matching the project name
- `source: "site-analysis"`
- Relevant `topics` and `components` arrays
## SUCCESS
- All structural, visual, and content DNA captured
- No gaps in the checklist
- Cross-project patterns noted
- Consistent tagging across all entries
→ Load next: `./step-06-summary.md`

View File

@ -0,0 +1,60 @@
---
name: 'step-06-summary'
description: 'Present analysis summary and offer next steps'
---
# Step 6: Summary
## STEP GOAL
Present a clear summary of the site analysis findings and offer actionable next steps.
## MANDATORY SEQUENCE
### 1. Analysis Summary
Present the findings organized as:
```markdown
# Site Analysis: [Site Name]
## Structural DNA
- **Navigation:** [pattern type, key features]
- **Page structure:** [section count, flow description]
- **Layout patterns:** [grid types, alignment, max-widths]
- **Page types:** [list of page types found]
## Visual DNA
- **Color palette:** [primary, secondary, accent, neutral — with hex values]
- **Typography:** [font families, weight scale, size scale]
- **Spacing:** [rhythm patterns, section padding, grid gaps]
- **Visual personality:** [1-2 sentence description of the overall visual feeling]
## Content DNA
- **Brand voice:** [tone, person, style]
- **Messaging:** [primary message, supporting evidence, CTA approach]
- **Content patterns:** [recurring block types]
## Design Space Entries
- [X] text knowledge entries captured
- [X] visual pattern entries captured (dual embedded)
- [X] cross-project patterns identified
```
### 2. Key Insights
Highlight 3-5 standout findings:
- What makes this site distinctive?
- What patterns are worth reusing?
- What's unusual or innovative?
- What's the strongest design decision?
### 3. Next Steps
Offer the designer:
- **Compare:** Run against another site analysis for competitive intelligence
- **Apply:** Use these patterns as baseline for a new project
- **Refine:** Deep-dive into a specific section or pattern
- **Capture more:** Analyze subpages for additional patterns
## SUCCESS
- Clear summary presented
- Key insights highlighted
- Next steps offered
- Designer knows the full scope of what was captured

View File

@ -0,0 +1,78 @@
---
name: 'step-01-validate'
description: 'Validate completeness of an existing site analysis'
---
# Validate: Site Analysis Completeness
## STEP GOAL
Check an existing site analysis for completeness, gaps, and quality.
## VALIDATION SEQUENCE
### 1. Load Existing Analysis
```
search_space({
query: "[site name] site analysis",
project: "[project]",
limit: 50,
threshold: 0.3
})
```
### 2. Coverage Check
Score each area (0-3):
- 0 = Not captured
- 1 = Partial (missing details)
- 2 = Captured but could be richer
- 3 = Fully captured with good context
| Area | Score | Notes |
|------|-------|-------|
| Navigation pattern | | |
| Page structure | | |
| Layout patterns | | |
| Color palette | | |
| Typography scale | | |
| Spacing rhythm | | |
| Section screenshots | | |
| Brand voice | | |
| CTA patterns | | |
| Content blocks | | |
### 3. Visual Coverage
Check that major sections have dual embeddings:
- Count entries with visual embeddings
- List sections that are text-only (missing screenshot)
- Identify any screenshots that are low quality
### 4. Quality Check
For each entry, verify:
- Content is specific (includes values, examples, context)
- Topics and components are tagged
- Project tag is set
- Source is "site-analysis"
- Pattern type is appropriate (usually "baseline" for analysis)
### 5. Report
Present findings:
```
Site Analysis Validation: [Site Name]
Coverage: [X]/30 points
Visual entries: [X] with dual embeddings
Text entries: [X] knowledge-only
Gaps:
- [list missing areas]
Quality issues:
- [list specific quality concerns]
Recommendation: [complete / needs gap fill / needs re-analysis]
```
## SUCCESS
- Coverage scored for all 10 areas
- Visual coverage verified
- Quality issues identified
- Clear recommendation provided

View File

@ -0,0 +1,49 @@
---
name: 'site-analysis'
description: 'Analyze a website and capture design DNA into Design Space'
configFile: '{project-root}/_bmad/wds/config.yaml'
---
# Site Analysis Workflow
## PURPOSE
Analyze a website and capture its complete design fingerprint — structural DNA, visual DNA, and content DNA — into the Design Space with dual embeddings.
## INITIALIZATION
1. READ COMPLETE this workflow file
2. Load config from `{project-root}/_bmad/wds/config.yaml`
3. Read `.claude/design-space-guide.md` if it exists in the project
## MODE ROUTING
### Create Mode (default)
Analyze a new website. Ask for the target URL, then proceed through:
1. **[step-01-init]** — Load context, confirm URL, check existing analysis
2. **[step-02-structural-dna]** — Navigation, layout, page types, IA
3. **[step-03-visual-dna]** — Colors, typography, spacing, imagery
4. **[step-04-content-dna]** — Tone, messaging, CTAs, content strategy
5. **[step-05-capture]** — Batch capture with dual embeddings
6. **[step-06-summary]** — Present findings, offer next steps
### Validate Mode (flag: -v)
Check an existing analysis for completeness:
→ Load `./steps-v/step-01-validate.md`
## MCP TOOLS USED
- `capture_visual` — screenshot + description → dual embedding
- `capture_knowledge` — text-only insights
- `search_space` — check for existing analysis
- `search_visual_similarity` — find similar patterns across projects
## RULES
- 📖 READ COMPLETE each step file before executing
- 🔄 Follow the sequence — don't skip steps
- ⏸️ Wait for user input at each step before proceeding
- 📸 Take screenshots of every significant section
- 📝 Write detailed semantic descriptions (200-400 words per section)
- 🏷️ Tag everything with project, topics, components
- ⏱️ Respect Voyage AI rate limits (25s between visual captures on free tier)
## NEXT STEP
→ Load `./steps-c/step-01-init.md`