refactor(adversarial-review): simplify severity/validity classification

Alex Verkhovsky 2026-01-04 04:13:46 -08:00
parent b628eec9fd
commit b8eeb78cff
2 changed files with 3 additions and 18 deletions

@@ -89,24 +89,9 @@ The task should: review `{diff_output}` and return a list of findings.
 Capture findings from adversarial review.
-**If zero findings returned:**
+**If zero findings:** HALT - this is suspicious. Re-analyze or ask for guidance.
-<critical>HALT - Zero findings is suspicious. Re-analyze or ask for guidance.</critical>
-**For each finding:**
-Assign severity:
-- CRITICAL: Security vulnerabilities, data loss risks
-- HIGH: Logic errors, missing error handling
-- MEDIUM: Performance issues, code smells
-- LOW: Style, documentation
-Assign validity:
-- REAL: Genuine issue to address
-- NOISE: False positive (explain why)
-- UNDECIDED: Needs human judgment
+Evaluate severity (Critical, High, Medium, Low) and validity (Real, Noise, Undecided).
 Create `{asymmetric_findings}` list:

@@ -79,7 +79,7 @@ The task should: review `{diff_output}` and return a list of findings.
 Capture the findings from the task output.
 **If zero findings:** HALT - this is suspicious. Re-analyze or request user guidance.
-Evaluate severity (Critical, High, Medium, Low) and validity (real, noise, undecided).
+Evaluate severity (Critical, High, Medium, Low) and validity (Real, Noise, Undecided).
 DO NOT exclude findings based on severity or validity unless explicitly asked to do so.
 Order findings by severity.
 Number the ordered findings (F1, F2, F3, etc.).
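
The triage steps in the hunk above (halt on zero findings, keep every finding regardless of validity, order by severity, number F1/F2/...) can be sketched as follows. This is a minimal illustration only; the `Finding` structure and function names are hypothetical, not part of the actual workflow files.

```python
from dataclasses import dataclass

# Severity tiers from the workflow, highest first.
SEVERITY_ORDER = ["Critical", "High", "Medium", "Low"]


@dataclass
class Finding:
    title: str
    severity: str  # Critical | High | Medium | Low
    validity: str  # Real | Noise | Undecided


def order_and_number(findings):
    """Sort findings by severity (Critical first) and label them F1, F2, ..."""
    if not findings:
        # Zero findings is suspicious: halt rather than report a clean bill.
        raise RuntimeError("Zero findings - re-analyze or ask for guidance.")
    ordered = sorted(findings, key=lambda f: SEVERITY_ORDER.index(f.severity))
    # Keep every finding regardless of validity; Noise/Undecided are not filtered.
    return [(f"F{i}", f) for i, f in enumerate(ordered, start=1)]


findings = [
    Finding("Missing error handling", "High", "Real"),
    Finding("Inconsistent naming", "Low", "Noise"),
    Finding("Unsanitized input in query builder", "Critical", "Undecided"),
]
for label, f in order_and_number(findings):
    print(label, f.severity, f.title)
```

Note that validity only annotates a finding here; per the `DO NOT exclude` line, it never drops one from the numbered list.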