# Module 07: The Design Phase

## Lesson 2: Why Specifications Matter

**The New Code: Specifications as the Foundation of Professional AI-Assisted Development**

---

## Why Vibe Coding Fails: The Amateur Question

*"Why are we wasting time with scenarios and specifications? Can't we just tell the AI agent to code the app and be done with it?"*

This question reveals a fundamental misunderstanding about what creates value in software development—especially in the age of AI.

**The honest answer:** You can try "prompt to code." It'll be fast, broken, and unmaintainable, and you'll burn 10x the credits fixing it.

But more importantly, you'll have missed the entire point of professional software development.

---
## Foundation: The New Paradigm

This lesson applies the philosophy from Sean Grove's talk "The New Code" to the WDS methodology.

**→ Read the core framework:** [Model: Specifications as the New Code](../../models/specifications-as-the-new-code.md)

**Watch the full talk:** [The New Code — Sean Grove, OpenAI](https://www.youtube.com/watch?v=8rABwKRsec4)

**Key insight:**

> "Code is sort of 10 to 20% of the value that you bring. The other 80 to 90% is in structured communication."
>
> — Sean Grove

This lesson shows **how WDS implements this philosophy** through its specification-driven workflow.

---
## What Actually Creates Value

> "Code is sort of 10 to 20% of the value that you bring. The other 80 to 90% is in structured communication."
>
> — Sean Grove, "The New Code"

This statement captures the fundamental truth about software development:

**Code is the OUTPUT. Specifications are the WORK.**

**Code is the PROJECTION. Specifications are the SOURCE.**

**Code is what MACHINES execute. Specifications are how HUMANS align.**

### The Shifted Paradigm

**Traditional thinking:**

- Code is the primary artifact
- Documentation is secondary (often skipped)
- "Just ship it and we'll figure it out"

**Professional AI-assisted development:**

- Specifications are the primary artifact (80-90% of value)
- Code is secondary, a lossy projection (10-20% of value)
- "Get the specs right, code follows correctly"

---
## Your App Exists in Specifications

This is the critical insight that makes WDS unique:

**Your software exists in the specifications.**

The code is just one possible projection of that specification—like compiled output for one target architecture.

### Think Like a Compiler

When you compile C++ code:

- **Source:** Your C++ program (human-readable intent)
- **Target 1:** x86 machine code
- **Target 2:** ARM machine code
- **Target 3:** WebAssembly

**Same source. Multiple targets.** All generated from the specification.

### Your Specifications Work the Same Way

**Source:** Your scenario specifications (human-aligned intent)

- **Target 1:** TypeScript + React
- **Target 2:** Rust + WASM
- **Target 3:** Python + Django
- **Target 4:** User documentation
- **Target 5:** Tutorial content
- **Target 6:** Test suites

**Same specifications. Multiple outputs.** All generated by AI agents from your specs.

**This is what makes specifications the primary artifact.**

---
## Strategic Thinking Just Got Cheaper Too

Everyone knows AI made code cheap.

**What most people missed:** AI made strategic thinking cheap too.

**→ Read the complete strategic framework:** [Specifications as the New Code](../../models/specifications-as-the-new-code.md)

**Key insights:**

### The Three Eras

| Era | Code Cost | Specs Cost | Communication Cost | Result |
|-----|-----------|------------|--------------------|--------|
| **Waterfall** | High | High | High | Upfront specs, code once |
| **Agile** | High | Skipped | Low | Skip specs, meet constantly |
| **AI (amateur)** | Low | Still skipped | High | Churn prototypes endlessly |
| **AI (WDS)** | Low | **Low (AI-assisted)** | High | **Forge specs, generate correctly** |

**The insight everyone missed:**

Agile killed the PRD because communication was cheap and coding was expensive.

AI revived the PRD because **both strategic thinking AND coding are now cheap.**

**WDS exists because you can now:**

- Use AI to forge strategic thinking (Saga, Freya as thinking partners)
- Create polished specifications cheaply (AI-assisted)
- Generate code from complete specs (AI agents)
- Present ideas, not prototypes (efficient communication)

**The amateur approach:**

> "Code is cheap. Skip specs. Prompt AI. Churn prototypes."

**The professional approach:**

> "Strategic thinking is cheap too. Forge specs with AI. Generate code correctly."

**Read the full framework to understand why WDS is the professional standard for AI-assisted development.**

---
## Code Is a Lossy Projection

> "Code is actually a lossy projection from the specification."
>
> — Sean Grove

Think about what gets LOST when you jump from "make a login screen" to code:

**Lost from the specification:**

- WHY this login exists (business goal)
- WHO uses it (persona psychology)
- WHAT drives them (trigger map drivers)
- HOW they feel (emotional state)
- WHEN it should adapt (context changes)
- WHERE it fits in the journey (scenario flow)

**What the code captures:**

- A form with two inputs and a button
- Some validation logic
- An API call

**The code is 10% of the story.** The specification is 100%.

### Example: The Login Scenario

**The code alone knows:**

```typescript
<form onSubmit={handleLogin}>
  <input type="email" />
  <input type="password" />
  <button>Login</button>
</form>
```

**The specification knows:**

1. **Why:** This addresses "fear of unauthorized access" (Driver #3, Remote Team Leads)
2. **Who:** Remote managers protecting team data
3. **What:** Need quick, secure access without friction
4. **How:** Proactive mode ("Everything is secure") not reactive ("Nothing can go wrong")
5. **When:** Often logging in from different devices/locations
6. **Where:** First step in "Daily Team Check" scenario
7. **Default state:** Empty fields, disabled submit, "Remember me" pre-checked
8. **Validation:** Real-time email format, password strength indicator
9. **Error states:** Network failure, invalid credentials, account locked, too many attempts
10. **Success flow:** Loading state (0.5s), redirect to dashboard, persist session 30 days
11. **Accessibility:** Tab order, ARIA labels, screen reader announcements, focus management
12. **Responsive:** Mobile-first, touch targets 44px minimum, no horizontal scroll
13. **Edge cases:** Rate limiting, password reset flow, SSO option, back button behavior

**The code is a projection of this.**

When you need to rewrite in a different framework, the specification remains the source of truth.

---
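To make the lossy-projection point concrete, here is a minimal TypeScript sketch of just one of those spec items, the error states, expressed as data. The type and constant names are illustrative assumptions, not WDS conventions; the messages themselves come from the login specification later in this lesson.

```typescript
// Illustrative sketch (assumed names): the spec's error states made explicit.
// A bare <form> carries none of this knowledge; the specification carries all of it.
type LoginError =
  | "network"
  | "invalidCredentials"
  | "accountLocked"
  | "tooManyAttempts";

const ERROR_MESSAGES: Record<LoginError, string> = {
  network: "Connection lost. Check your network.",
  invalidCredentials: "Email or password incorrect. Try again or reset password.",
  accountLocked: "Too many failed attempts. Account locked for 30 minutes.",
  tooManyAttempts: "Please wait 5 minutes before trying again.",
};

// With the spec encoded, every error path is testable against a known "correct".
function messageFor(error: LoginError): string {
  return ERROR_MESSAGES[error];
}
```

A prompt-to-code agent has to invent these four messages; a spec-driven agent copies them verbatim.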
## Specifications Align Humans

> "A written specification is what enables you to align humans on the shared set of goals."
>
> — Sean Grove

This is the 80-90% of value that specifications provide.

### Before Specification

- **Stakeholder A thinks:** "Login should be quick and simple"
- **Stakeholder B thinks:** "Login should be highly secure"
- **Developer thinks:** "Login should use OAuth"
- **Designer thinks:** "Login should feel welcoming"
- **User thinks:** "I just want to get to my stuff"

**Five people. Five different login screens in their heads.**

### After Specification

**Everyone reads the same specification:**

- Security through 2FA, not friction
- Quick for returning users (remembered device)
- OAuth + password options (user choice)
- Welcoming through microcopy, not animation
- "Get to my stuff" is measured (< 3 seconds from landing)

**Five people. One shared login screen.**

**This alignment is the majority of the work.** The code is just typing after you've aligned.

---
## The Professional Approach: Prompt → Spec → Code

**This is what pros do.** Not because they like extra work, but because they understand how AI agents actually work at scale.

| Approach | Reliability | Testing | Maintenance | Dimensions | Credits |
|----------|-------------|---------|-------------|------------|---------|
| **Prompt → Spec → Code** | ✅ Reliable, repeatable | ✅ Test against spec | ✅ Maintainable, documented | ✅ Accessibility, i18n, SEO | ✅ Building, not debugging |
| **Prompt → Code** | ❌ Inconsistent each time | ❌ No testing reference | ❌ "What does this do?" | ❌ Forgot accessibility | ❌ Endless regeneration |

### Why "Prompt to Code" Fails at Scale

**The amateur sees:**

- "I prompted, I got code, I shipped. Done!"
- *Shows you the first 10 seconds*

**The professional sees:**

- Attempt 1: "Build login" → Wrong validation
- Attempt 2: "Fix validation" → Breaks mobile
- Attempt 3: "Fix mobile" → Lost accessibility
- Attempt 4: "Fix accessibility" → Errors unclear
- Attempt 5: "Better errors" → Now navigation broke
- Attempts 6-20: Still not right
- *Shows you the next 10 hours*

**The AI showoffs and scammers don't show you the debugging marathon.**

---
## Understanding Context Windows

Here's what most people miss about AI agents:

**You can't fit your entire app in one prompt.**

Even with 200k tokens, you're trying to describe:

- Every user flow
- Every edge case
- Every interaction
- Every validation rule
- Every error state
- Accessibility requirements
- Responsive behaviors
- Loading states
- Empty states
- Animation timings
- Microcopy variations
- Error messages
- Success states
- Navigation patterns
- Data persistence
- Session management
- Permission systems
- ...and 1000 more micro-decisions

**Try cramming that into "Build me a todo app."**

### What Happens When You Try

The agent makes **thousands of micro-decisions** without your input.

**Most will be wrong for:**

- YOUR app
- YOUR users
- YOUR psychology drivers
- YOUR business goals
- YOUR brand voice
- YOUR accessibility standards
- YOUR performance requirements

**Because the agent had to guess.** You didn't specify.

---
## Specifications Are Meta-Prompts

**This is the key insight:**

Creating specifications IS creating the ultimate meta-prompt—a "super prompt" that agents reference throughout development.

**You're building a digital twin**—a complete blueprint of your software that exists BEFORE the code.

### Specification = Super Prompt That:

- ✅ Breaks down complexity into manageable chunks
- ✅ Provides exact context for each piece
- ✅ Ensures consistency across the entire app
- ✅ Enables autonomous agent work WITHOUT guessing
- ✅ Creates a testing reference for every feature
- ✅ Documents accessibility, i18n, SEO requirements
- ✅ Captures edge cases and error states
- ✅ Defines success criteria for validation

**Each specification is a prompt.** Together they form the master prompt for your entire application.

### Example: Login Scenario Specification

**Prompt → Code approach:**

> "Build a login screen with email and password"

**Agent must guess:**

- What happens on submit?
- What validation rules?
- What error messages?
- Where does it redirect?
- What if network fails?
- What about password reset?
- What about "remember me"?
- Mobile responsive how?
- Accessibility labels?
- Screen reader support?
- Keyboard navigation?
- Focus management?

**Result:** 12+ ambiguous decisions made without your input.

---
**Prompt → Spec → Code approach:**

The specification documents exactly:

1. **Default State**
   - Empty email/password fields
   - Submit button disabled until both valid
   - "Remember me" checkbox (pre-checked)
   - "Forgot password?" link below password

2. **Validation Rules**
   - Email: Standard format, real-time feedback
   - Password: Minimum 8 characters, show/hide toggle
   - Submit: Enabled only when both fields valid

3. **Error States**
   - Network failure: "Connection lost. Check your network." + Retry button
   - Invalid credentials: "Email or password incorrect. Try again or reset password."
   - Account locked: "Too many failed attempts. Account locked for 30 minutes."
   - Too many attempts: "Please wait 5 minutes before trying again."

4. **Success Flow**
   - Loading state: Spinner on button, "Logging you in..."
   - Redirect: Dashboard (new users) or Last Visited Page (returning)
   - Session: 30 days if "Remember me", 24 hours if not

5. **Accessibility**
   - Tab order: Email → Password → Remember me → Forgot password → Submit
   - ARIA: `aria-label="Email address"` on email input
   - Screen reader: Announces errors immediately upon validation
   - Focus management: On error, focus first invalid field
   - Keyboard: Enter submits from any field

6. **Responsive**
   - Mobile: Full-width inputs, 16px minimum font (prevents zoom)
   - Tablet: Centered card, max-width 480px
   - Desktop: Centered card, max-width 480px
   - Touch targets: 44px minimum height for all interactive elements

7. **Edge Cases**
   - Rate limiting: 5 attempts per 5 minutes per IP
   - Password reset: Email link, expires in 1 hour
   - SSO option: "Or sign in with Google" button below form
   - Back button: Returns to landing page, doesn't re-submit
   - Browser autofill: Compatible, doesn't break validation

**Now the agent has ZERO ambiguity.** It builds exactly what you specified.

**Result:** First attempt works. All edge cases handled. Accessibility built in. Tests pass.

---
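Rules this unambiguous translate mechanically into code. As a minimal sketch, the Validation Rules and Success Flow items above could look like this in TypeScript (the function names and the email regex are illustrative assumptions, not WDS conventions):

```typescript
// "Email: Standard format" — one simple interpretation of that rule.
const EMAIL_RE = /^[^\s@]+@[^\s@]+\.[^\s@]+$/;

function isEmailValid(email: string): boolean {
  return EMAIL_RE.test(email);
}

// "Password: Minimum 8 characters"
function isPasswordValid(password: string): boolean {
  return password.length >= 8;
}

// "Submit: Enabled only when both fields valid"
function isSubmitEnabled(email: string, password: string): boolean {
  return isEmailValid(email) && isPasswordValid(password);
}

// "Session: 30 days if 'Remember me', 24 hours if not"
function sessionDurationHours(rememberMe: boolean): number {
  return rememberMe ? 30 * 24 : 24;
}
```

Because each function mirrors a single spec line, a failing test points straight at the spec item it violates.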
## The Multi-Dimensional Benefits

Specifications enable things that "prompt to code" completely misses:

| Dimension | With Specs | Without Specs |
|-----------|------------|---------------|
| **Testing** | Test against spec, know what "correct" means | No reference, can't validate, just hope |
| **Accessibility** | ARIA labels, keyboard nav, screen reader documented | Forgotten or inconsistent, fails WCAG |
| **Internationalization** | String IDs, translation-ready, locale-aware | Hardcoded English everywhere, expensive fix |
| **SEO** | Semantic HTML, meta tags, structured data specified | Generic divs, no SEO thought, poor rankings |
| **Error handling** | Every error state documented and testable | Agent guesses, misses critical cases |
| **Consistency** | Same patterns throughout, design system enforced | Different approach each screen, chaos |
| **Maintenance** | Documentation exists, anyone can understand | "What does this even do?", knowledge silos |
| **Handoff** | Any dev/designer can understand the intent | Only original prompter knows, bus factor 1 |
| **Evolution** | Clear what changes when requirements change | Ripple effects unknown, fear of changes |
| **Onboarding** | New team members read specs and understand | Archaeological dig through code |

### Example: Accessibility

**Prompt → Code:**

> "Make it accessible"

**What the agent does:**

- Adds some ARIA labels randomly
- Misses keyboard navigation
- Forgets focus management
- No screen reader testing
- Fails WCAG audit

**Result:** Inaccessible to users with disabilities, potential legal liability.

---
**Prompt → Spec → Code:**

**The specification states:**

- Tab order: Email → Password → Submit → Forgot password
- ARIA: `aria-label="Email address"` on email field, `aria-label="Password"` on password field
- Screen reader: Announces errors immediately when validation fails
- Focus management: On error, focus moves to error message, then invalid field
- Keyboard: Enter key submits from any field, Escape clears form
- Error announcements: `role="alert"` on error messages for immediate announcement
- Loading state: `aria-busy="true"` during submission

**Agent implements EXACTLY this.** Nothing missed. Nothing guessed.

**Result:** WCAG 2.1 AA compliant, works for all users.

---
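A spec this concrete can even travel into the codebase as data, so the implementation and its tests read the same source of truth. A hypothetical TypeScript sketch (the object shape and names are assumptions for illustration, not a WDS convention):

```typescript
// Hypothetical: the accessibility spec above expressed as checkable data.
const a11ySpec = {
  // "Tab order: Email → Password → Submit → Forgot password"
  tabOrder: ["email", "password", "submit", "forgotPassword"],
  // aria-label values per field, straight from the spec
  ariaLabels: { email: "Email address", password: "Password" },
  // "role='alert' on error messages for immediate announcement"
  errorRole: "alert",
  // "aria-busy='true' during submission"
  loadingAttribute: { "aria-busy": "true" },
};

// The implementation derives attributes from the spec instead of guessing them.
function ariaAttrsFor(field: keyof typeof a11ySpec.ariaLabels) {
  return { "aria-label": a11ySpec.ariaLabels[field] };
}
```

An automated audit can then diff the rendered DOM against `a11ySpec`, which is exactly the "test against spec" benefit from the table above.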
## Why Scenarios Before Code

You're creating a **blueprint**. A **digital twin** of your software before it exists.

**Scenario = Complete journey specification**

- What views does the user see?
- In what order?
- What data flows between them?
- What can go wrong at each step?
- How do we handle it?
- What's the user's emotional state?
- How do psychology drivers affect the flow?

### Without Scenarios

**What happens:**

- Agent builds isolated screens with no journey context
- Navigation breaks between screens
- Data flow undefined or inconsistent
- Edge cases missed completely
- User gets stuck, can't complete tasks
- No consideration of emotional arc
- Psychology drivers ignored

**Example:** User completes login, but where do they land? Dashboard? Last page? Onboarding? The agent guesses.

### With Scenarios

**What happens:**

- Agent understands the complete flow end-to-end
- Builds navigation correctly with proper routing
- Handles data persistence properly across views
- Covers edge cases from the spec
- User journey works end-to-end
- Emotional arc designed intentionally
- Psychology drivers shape each step

**Example:** Spec says "Returning users land on Last Visited Page. New users see Onboarding Step 1. Users from password reset see Success message then Dashboard." The agent implements exactly this.

---
## The OpenAI Model Spec Example

Sean Grove references OpenAI's "Model Spec" as a perfect example of this approach:

> "OpenAI put out their model spec, which is meant to describe how the model ought to behave. It's like 30 pages long, and it's almost incomprehensibly specific about the tiniest details of how it should behave."

**Why did OpenAI write 30 pages of specification?**

Because **code is a lossy projection from the specification.**

The Model Spec is the **source of truth**. The model behavior is the **compiled output**.

When behavior is wrong, they fix the **spec** and regenerate. They don't patch the output.

**WDS applies this same principle to your software:**

- Your specifications are the source of truth
- Your code is the compiled output
- When behavior is wrong, fix the spec and regenerate
- Don't patch code without updating specs

---
## The Credit Economics

### Prompt → Code Burns Credits on Chaos

```
Attempt 1: "Build login" → Wrong validation
Attempt 2: "Fix validation" → Breaks mobile
Attempt 3: "Fix mobile" → Lost accessibility
Attempt 4: "Fix accessibility" → Errors unclear
Attempt 5: "Better errors" → Now navigation broke
Attempt 6: "Fix navigation" → Forgot edge cases
Attempt 7: "Add edge cases" → Performance issues
Attempt 8: "Optimize" → Broke original functionality
Attempt 9: "Fix original" → Mobile broke again
Attempt 10: "Fix mobile again" → Lost the optimizations
...
Attempt 20: Still not right, out of credits, frustrated
```

**Credits spent:** Massive regeneration loop, endless debugging

**Time spent:** Days or weeks of back-and-forth

**Quality:** Inconsistent, fragile, missing features

---
### Prompt → Spec → Code Invests Credits Wisely

```
Spec phase: Define exactly what you want (upfront investment)
  Workshop 1: Business goals (15 min with Saga)
  Workshop 2: Target groups (20 min with Saga)
  Workshop 3: Driving forces (20 min with Saga)
  Workshop 4: Prioritization (15 min with Saga)
  Workshop 5: Feature scoring (15 min with Saga)
  Scenario outlines: Define journeys (with Freya)
  Specifications: Document every detail (with Freya)

Code phase: Agent builds it correctly (first time)
  Freya reads specs
  Generates code matching specifications exactly
  Includes tests from specs

Test phase: Verify against spec (passes)
  Run tests
  Validate against specifications
  Fix any discrepancies in specs, regenerate code
```

**Credits spent:** One-time build, minimal fixes

**Time spent:** Organized effort, predictable timeline

**Quality:** Consistent, robust, complete features

---
## Professional vs Amateur

**The AI showoffs and scammers say:**

> "Just prompt to code! Look how fast!"

*They show you:*

- 10 seconds of prompting
- Code appearing
- "Look, it works!"

*They don't show you:*

- 10 hours of debugging
- Missing accessibility
- No error handling
- Broken on mobile
- Inconsistent patterns
- Untestable code
- No documentation
- Impossible to maintain

---

**The professionals say:**

> "Spec first. Code second. Ship once."

*They show you:*

- Working software that passes tests
- Handles edge cases correctly
- Works for all users (accessibility)
- Performs well on all devices
- Can be maintained and evolved
- Documentation exists
- Team aligned on goals
- Strategic decisions traceable

---
## This Is Why WDS Works

**WDS forces you to think before you build:**

```
1. Strategy first
   ↓
   Trigger Map ensures you're solving the right problem
   Business goals → Target groups → Driving forces → Priorities → Features

2. Scenarios second
   ↓
   Outline the journeys before the screens
   User flows → Data flows → Edge cases → Emotional arcs

3. Specifications third
   ↓
   Document every detail before code
   Default states → Interactions → Validations → Errors → Success → Accessibility

4. Code fourth
   ↓
   Agents build exactly what's specified
   TypeScript → React → Tests (or any other target)

5. Test fifth
   ↓
   Verify against specs (they pass!)
   Automated tests reference specifications
```

**Each step is a meta-prompt for the next step.**

- The trigger map prompts scenarios
- The scenarios prompt specifications
- The specifications prompt code
- The code validates against specifications

**It's a coherent system, not random prompting.**

---
## The Spirit of BMad v6 Through Design

This philosophy—**specifications as the primary artifact**—represents:

> "The spirit of BMad v6 through the lens of a designer with 25 years experience."

### What BMad v6 Taught Us

**BMad v6's innovation was conversational strategy:**

- Guided dialog instead of forms
- Questions that make you think deeply
- Documentation that emerges from conversation
- Strategic alignment before tactical decisions

**BMad v6 showed:** "Talk it through properly, the right artifacts emerge."

### WDS Extends This to Design

**WDS applies the same principle to specifications:**

- Guided creation instead of blank-page syndrome
- Prompts that surface the right details
- Specifications that emerge from strategic foundation
- Complete blueprints before code

**WDS shows:** "Spec it properly, the right code emerges."

---
## Engineering as Precise Exploration

> "Engineering is the precise exploration by humans of software solutions to human problems."
>
> — Sean Grove

**This statement captures what WDS is designed for:**

- **Engineering:** Not just coding, but disciplined problem-solving
- **Precise:** Specifications create precision, not vague "just build it"
- **Exploration:** WDS supports iteration and discovery
- **By humans:** AI assists, but humans drive strategy and alignment
- **Software solutions:** Code is the output, not the primary work
- **To human problems:** Trigger Map ensures you solve real problems

**WDS provides the structure for this precise exploration:**

1. **Discover the human problem** (Trigger Map)
2. **Explore software solutions** (Scenarios)
3. **Specify precisely** (Specifications)
4. **Generate implementations** (Code)

---
## What Makes WDS Unique

Many methodologies exist. What makes WDS different?

### 1. Specifications as Primary Artifact

**Other methodologies:**

- Code-first, docs later (or never)
- "Move fast and break things"
- Documentation as afterthought

**WDS:**

- Specs-first, code generated from specs
- "Think precisely and build right"
- Specifications as source code

### 2. AI-Native from the Ground Up

**Other methodologies:**

- Designed for human developers
- AI retrofitted awkwardly
- "How do we use AI to help with our process?"

**WDS:**

- Designed for human-AI collaboration
- AI agents as first-class participants
- "How do we structure work so humans and AI each do what they're best at?"

### 3. Psychology-Driven Strategy

**Other methodologies:**

- "Build features users ask for"
- Feature lists without strategic grounding
- Stakeholder opinions drive decisions

**WDS:**

- "Build features that address psychological drivers"
- Every feature traced to the trigger map
- Data-driven strategic decisions

### 4. Complete Traceability

**Other methodologies:**

- "Why did we build this feature?" → "Someone asked for it"
- Orphaned features with no clear purpose
- "That's how we've always done it"

**WDS:**

- "Why did we build this feature?" → Shows the complete chain:
  - Feature → Driver → Persona → Business Goal
- Every decision documented with reasoning
- Strategic changes cascade correctly

---
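The traceability chain is simple enough to encode directly. A hypothetical TypeScript sketch (the interface and field names are illustrative, not part of WDS; the example values come from the login scenario earlier in this lesson):

```typescript
// Hypothetical sketch: every feature carries its full strategic chain.
interface TraceableFeature {
  feature: string;
  driver: string; // psychological driver from the trigger map
  persona: string; // target group the driver belongs to
  businessGoal: string; // business goal the feature ultimately serves
}

const login: TraceableFeature = {
  feature: "Login screen",
  driver: "Fear of unauthorized access (Driver #3)",
  persona: "Remote Team Leads",
  businessGoal: "Remote managers can protect team data",
};

// "Why did we build this feature?" becomes a lookup, not an argument.
function whyBuilt(f: TraceableFeature): string {
  return `${f.feature} → ${f.driver} → ${f.persona} → ${f.businessGoal}`;
}
```

When a business goal changes, filtering features by `businessGoal` shows exactly which specs need to cascade.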
## Remember: Your App Exists in Specifications

**Let this sink in:**

Your software doesn't exist in code.

Your software exists in specifications.

The code is just one possible projection—one compiled output—of your specifications.

**When you need to:**

- Rewrite in a different framework → Keep specs, generate new code
- Add accessibility → Update specs, regenerate implementation
- Support a new language → Update specs, generate translations
- Create documentation → Specs ARE the documentation
- Onboard a new developer → They read specs, not code archaeology

**The specifications are the source of truth.**

The code is the artifact you ship, but the specifications are the asset you maintain.

---
## Key Takeaways

✅ **Code is 10-20% of value** — Structured communication (specs) is 80-90%

✅ **Specifications are the primary artifact** — Code is a lossy projection from specs

✅ **Specs align humans on goals** — This alignment is the majority of the work

✅ **Specs are meta-prompts** — A digital twin / blueprint for autonomous AI agents

✅ **Specs enable dimensions code misses** — Testing, accessibility, i18n, SEO, maintenance

✅ **Specs save credits and time** — Build once right vs. an endless regeneration loop

✅ **Specs can target multiple outputs** — TypeScript, Rust, docs, tests, tutorials (like compiler targets)

✅ **Professional approach:** Prompt → Spec → Code — Amateur approach: Prompt → Code → Debug hell

✅ **WDS is AI-native** — Designed for human-AI collaboration from the ground up

✅ **Spirit of BMad v6** — Guided conversation creates strategic artifacts, seen through a designer's lens

✅ **Engineering is precise exploration** — Specs provide the precision, WDS provides the structure

---
## What's Next

Now that you understand WHY specifications matter, the next lessons show you HOW to create them effectively with Freya.

You'll learn:

- How to outline scenarios (user journeys)
- How to create conceptual sketches (visualize default states)
- How to build storyboards (show transformations)
- How to write detailed specifications (document every decision)

**Armed with this philosophy, you're ready to design professionally.**

---
## Further Reading

**Core framework (pure philosophy, no WDS-specific content):**

→ [Model: Specifications as the New Code](../../models/specifications-as-the-new-code.md)

**Original talk:**

→ [The New Code — Sean Grove, OpenAI (YouTube)](https://www.youtube.com/watch?v=8rABwKRsec4)

---

**[Continue to Lesson 3: Meet Freya →](lesson-03-meet-freya.md)**

---

[← Back to Lesson 1](lesson-01-entering-design.md) | [Module Overview](module-07-design-phase-overview.md)

*Part of Module 07: Design Phase*