feat(dev-agent): Add version-based critical research assessment

This commit enhances the dependency analysis workflow to intelligently determine when internet research is critical versus optional, preventing the "model doesn't know what it doesn't know" problem.

Key improvements:
- Version Criticality Analysis: Compares package.json versions against model training cutoff to identify potentially unknown libraries
- Smart Fallback Logic: Distinguishes between critical research (newer versions, latest specifiers) and optional research (general best practices)
- Risk-Aware Halting: Stops execution for critical version mismatches with clear explanation of risks
- User Choice Preservation: Allows informed decisions about proceeding without current knowledge

Critical triggers include:
- Package versions newer than model training data
- "latest"/"^" version specifiers for major dependencies
- Completely new frameworks not in training data
- Major version bumps requiring migration knowledge

This prevents implementation failures due to outdated patterns while maintaining workflow efficiency for routine development tasks.
kevlingo 2025-06-22 17:58:28 -04:00
parent 07d34127a2
commit aa296aff4d
1 changed file with 30 additions and 7 deletions


@@ -16,15 +16,24 @@ To execute a user story with a proactive analysis and review cycle, ensuring ali
- **Action**: Use the built-in IDE semantic search capabilities.
- **Query**: Search the entire codebase for implementations, functions, components, or patterns that are semantically related to the user story's title, description, and Acceptance Criteria.
- **Goal**: Identify code for potential reuse, and understand existing patterns relevant to the new work. Log these findings internally for use during implementation.
4. **Dependency & Standards Analysis with Internet Fallback**:
4. **Dependency & Standards Analysis with Version-Based Criticality Assessment**:
- Read the `package.json` (or equivalent) to identify currently installed libraries relevant to the story.
- [[LLM: **Attempt Internet Search:** Perform a targeted internet search for "best practices for [library/feature] in [current year]" for any new or significantly updated libraries.]]
- **Version Criticality Analysis**:
- Compare dependency versions against model training cutoff (typically 2024 for current models)
- Flag packages with versions newer than training data or using "latest"/"^" specifiers
- Identify completely new packages not in model's training data
- [[LLM: **Attempt Internet Search:** If critical versions detected OR for general best practices research, perform a targeted internet search for "best practices for [library/feature] in [current year]".]]
- [[LLM: **On Success:** Silently incorporate the findings and proceed to the next step.]]
- [[LLM: **On Failure (If unable to access the internet):**
1. **Announce the situation clearly:** "I am currently unable to access the internet to research the latest standards for this task."
2. **Offer a choice to the user:** "Is this expected, or should I attempt to enable access? Alternatively, I can proceed using the best practices from my existing training data."
3. **Request explicit instruction to continue:** "Please let me know how you would like to proceed."
   4. **HALT and await user response.** Once the user confirms how to continue (with or without enabling internet access), proceed to the next step using the available knowledge.]]
- [[LLM: **On Failure with Critical Versions Detected:**
1. **Announce the critical situation:** "I detected [library] version [X.Y.Z] which is newer than my training data (cutoff: [date]). Without MCP search tools, I cannot safely implement features using this version."
2. **Explain the risk:** "Proceeding without current knowledge could result in deprecated patterns, security vulnerabilities, or broken implementations."
3. **Request explicit decision:** "Please enable MCP search tools or confirm you want me to proceed with potentially outdated patterns. Type 'proceed-anyway' to continue at your own risk."
4. **HALT and await user response.** Only continue after explicit user confirmation.]]
- [[LLM: **On Failure with Non-Critical Research:**
1. **Announce the situation:** "I am currently unable to access the internet to research the latest standards for this task."
2. **Offer a choice:** "This appears to be non-critical research. I can proceed using best practices from my existing training data."
3. **Request confirmation:** "Please let me know if you'd like me to continue or if you prefer to enable MCP search tools first."
4. **HALT and await user response.** Continue based on user preference.]]
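The three fallback branches above can be sketched as a small decision helper. This is an illustrative sketch only; the type and function names are hypothetical and do not appear in the workflow file.

```typescript
// Hypothetical sketch of the fallback branching described above.
type FallbackAction = "proceed-silently" | "halt-critical" | "halt-non-critical";

function resolveFallback(
  searchSucceeded: boolean,
  criticalVersionsDetected: boolean
): FallbackAction {
  if (searchSucceeded) return "proceed-silently"; // On Success: incorporate findings, continue
  if (criticalVersionsDetected) return "halt-critical"; // Requires explicit 'proceed-anyway'
  return "halt-non-critical"; // Optional research: ask for user preference
}
```

Note that both failure branches halt; they differ only in how sternly the risk is framed to the user.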
5. **Initial Complexity Assessment & Mode Declaration**:
   - Calculate a "Story Complexity" score using the Fibonacci scale (1, 2, 3, 5, 8, 13).
- If Review Mode was forced by the user OR if `Story Complexity` > `agentThresholdStory`, declare: "**Entering high-scrutiny 'Review Mode' for this story. Each task will be individually assessed.**"
@@ -39,6 +48,20 @@ To execute a user story with a proactive analysis and review cycle, ensuring ali
- **8**: Major architectural changes (new services, database schema)
- **13**: High-risk/high-uncertainty work (new frameworks, complex integrations)
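The mode declaration from step 5 reduces to a single comparison against the configured threshold. A minimal sketch, assuming `agentThresholdStory` is supplied by configuration and "Standard Mode" is the name of the non-review default (the workflow names only "Review Mode" explicitly):

```typescript
// Hypothetical sketch: Review Mode is entered when forced by the user
// or when the story's Fibonacci complexity exceeds the configured threshold.
function declareMode(
  storyComplexity: number,
  forcedReviewMode: boolean,
  agentThresholdStory: number
): "Review Mode" | "Standard Mode" {
  return forcedReviewMode || storyComplexity > agentThresholdStory
    ? "Review Mode"
    : "Standard Mode";
}
```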
### Version Analysis Guidelines
**Critical Version Indicators** (Require MCP search):
- Package versions released after model training cutoff
- "latest", "^", or "~" version specifiers for major dependencies
- Completely new packages/frameworks not in training data
- Major version bumps (e.g., React 17→19, Node 16→20)
**Non-Critical Research** (Optional MCP search):
- Established libraries within training data timeframe
- Minor version updates with backward compatibility
- General coding patterns and best practices
- Documentation and styling techniques
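The specifier- and date-based indicators above can be sketched as two small predicates. The cutoff date below is an illustrative assumption, not any real model's training cutoff, and the function names are hypothetical:

```typescript
// Hypothetical sketch of the critical-version indicators listed above.
const TRAINING_CUTOFF = new Date("2024-06-01"); // assumed cutoff, for illustration only

// "latest", "^", and "~" ranges may resolve to versions newer than training data.
function isCriticalSpecifier(specifier: string): boolean {
  return (
    specifier === "latest" ||
    specifier.startsWith("^") ||
    specifier.startsWith("~")
  );
}

// A package release published after the cutoff cannot be in the model's training data.
function isCriticalRelease(releaseDate: Date): boolean {
  return releaseDate > TRAINING_CUTOFF;
}
```

A real implementation would also need the package's publish dates (e.g. from the registry metadata) to evaluate pinned versions, which a specifier string alone cannot reveal.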
## 2. Unified Task Execution Phase
[[LLM: Proceed with the `Tasks / Subtasks` list from the story file one by one. The following logic applies to EVERY task.]]