# Cross-Agent Validation Protocol - Quality Assurance System

## System Overview

The Cross-Agent Validation Protocol ensures consistency, quality, and feasibility across all agent outputs through systematic peer review, conflict resolution, and consensus-building mechanisms.

## Validation Architecture

### Validation Matrix

```yaml
validation_relationships:
  analyst:
    validates: []  # First agent, nothing upstream to validate
    validated_by:
      - agent: pm
        aspects: ["market_assumptions", "user_needs_accuracy", "problem_definition_clarity"]
        weight: 0.8
      - agent: architect
        aspects: ["technical_feasibility", "scope_realism", "constraint_accuracy"]
        weight: 0.7

  pm:
    validates:
      - agent: analyst
        aspects: ["business_viability", "user_story_alignment", "market_validation"]
        criteria:
          business_viability: "revenue_potential > 0 AND cost_feasible == true"
          user_story_alignment: "all_features_traceable_to_user_needs"
          market_validation: "competitive_analysis_depth >= 3"
    validated_by:
      - agent: analyst
        aspects: ["requirement_completeness", "user_story_quality", "acceptance_criteria_clarity"]
        weight: 0.6
      - agent: architect
        aspects: ["technical_implementability", "non_functional_requirements", "scalability_considerations"]
        weight: 0.9
      - agent: ux_expert
        aspects: ["user_experience_alignment", "usability_requirements", "accessibility_considerations"]
        weight: 0.7

  architect:
    validates:
      - agent: analyst
        aspects: ["technical_constraints_realism", "scalability_requirements", "integration_complexity"]
      - agent: pm
        aspects: ["technical_feasibility_all_requirements", "performance_expectations", "security_implications"]
        criteria:
          technical_feasibility: "all_user_stories_implementable == true"
          performance_expectations: "requirements_achievable_with_proposed_timeline"
          security_implications: "all_security_risks_identified_and_addressable"
    validated_by:
      - agent: developer
        aspects: ["implementation_realism", "technology_choices", "architectural_complexity"]
        weight: 0.9
      - agent: qa
        aspects: ["testability_architecture", "monitoring_capabilities", "deployment_complexity"]
        weight: 0.6

  ux_expert:
    validates:
      - agent: pm
        aspects: ["user_experience_requirements", "usability_goals", "accessibility_standards"]
      - agent: analyst
        aspects: ["user_personas_accuracy", "user_journey_realism", "pain_point_prioritization"]
    validated_by:
      - agent: developer
        aspects: ["ui_implementability", "interaction_complexity", "responsive_design_feasibility"]
        weight: 0.8
      - agent: qa
        aspects: ["user_testing_scenarios", "accessibility_testability", "cross_platform_considerations"]
        weight: 0.5

  developer:
    validates:
      - agent: architect
        aspects: ["code_implementability", "technology_stack_appropriateness", "development_timeline_realism"]
      - agent: ux_expert
        aspects: ["ui_component_feasibility", "interaction_implementation", "performance_implications"]
        criteria:
          ui_component_feasibility: "all_components_buildable_within_constraints"
          interaction_implementation: "animations_and_interactions_performant"
          performance_implications: "ui_requirements_meet_performance_targets"
    validated_by:
      - agent: qa
        aspects: ["code_quality", "test_coverage", "deployment_readiness"]
        weight: 1.0

  qa:
    validates:
      - agent: developer
        aspects: ["implementation_quality", "test_coverage_adequacy", "security_implementation"]
      - agent: architect
        aspects: ["system_testability", "monitoring_implementation", "failure_handling"]
      - agent: pm
        aspects: ["requirement_testability", "acceptance_criteria_measurability", "success_metrics_trackability"]
        criteria:
          requirement_testability: "all_requirements_have_testable_criteria"
          acceptance_criteria_measurability: "all_criteria_objectively_measurable"
          success_metrics_trackability: "metrics_implementable_and_monitorable"
    validated_by: []  # Final validator, no peer validation
```
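To make the validator weights concrete, here is a minimal TypeScript sketch of how an orchestrator might fold weighted peer scores into a single pass/escalate decision. `ValidatorVerdict` and `aggregateScore` are illustrative names, not part of the protocol's runtime; the sketch assumes scores on the 0-10 scale defined under the Quality Assessment Framework below, normalized so the result can be compared against the 0.75 consensus threshold used throughout this document.

```typescript
// Sketch of weighted score aggregation over the validation matrix.
// Names and shapes are assumptions for illustration.

interface ValidatorVerdict {
  agent: string;   // validating agent, e.g. "architect"
  weight: number;  // trust weight from the matrix, e.g. 0.9
  score: number;   // 0-10 quality score for the validated output
}

// Weighted mean of peer scores, normalized to 0-1 so it is directly
// comparable to the 0.75 consensus_threshold.
function aggregateScore(verdicts: ValidatorVerdict[]): number {
  const totalWeight = verdicts.reduce((sum, v) => sum + v.weight, 0);
  if (totalWeight === 0) return 0;
  const weighted = verdicts.reduce((sum, v) => sum + v.weight * (v.score / 10), 0);
  return weighted / totalWeight;
}

// Example: the pm agent's output validated per the matrix above.
const pmVerdicts: ValidatorVerdict[] = [
  { agent: "analyst", weight: 0.6, score: 8 },
  { agent: "architect", weight: 0.9, score: 7 },
  { agent: "ux_expert", weight: 0.7, score: 9 },
];
console.log(aggregateScore(pmVerdicts) >= 0.75 ? "pass" : "escalate");
```

Normalizing to the 0-1 range keeps a single threshold meaningful regardless of how many validators an agent has.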
### Validation Process Flow

```yaml
validation_workflow:
  trigger_points:
    - agent_completion: "immediate_validation"
    - quality_gate: "comprehensive_validation"
    - workflow_milestone: "full_cross_validation"
    - conflict_detected: "consensus_building_validation"

  validation_stages:
    stage_1_immediate:
      timing: "within_60_seconds_of_agent_completion"
      scope: "output_structure_and_completeness"
      validators: "automated_quality_checks"

    stage_2_peer_review:
      timing: "before_next_agent_activation"
      scope: "content_quality_and_consistency"
      validators: "designated_peer_agents"

    stage_3_cross_validation:
      timing: "major_workflow_milestones"
      scope: "overall_coherence_and_alignment"
      validators: "all_completed_agents"

  validation_execution:
    parallel_validation: true
    timeout_per_validator: 90  # seconds
    consensus_threshold: 0.75
    conflict_escalation: "automatic"
```

## Validation Criteria & Standards

### Quality Assessment Framework

```yaml
quality_dimensions:
  completeness:
    description: "All required elements present and sufficiently detailed"
    scoring:
      10: "Exceeds requirements, comprehensive coverage"
      8: "Meets all requirements, good detail level"
      6: "Meets basic requirements, adequate detail"
      4: "Missing some requirements or insufficient detail"
      2: "Missing major requirements or very sparse"
      0: "Fundamentally incomplete or missing"

  consistency:
    description: "Aligns with previous agent outputs and project constraints"
    scoring:
      10: "Perfect alignment, reinforces previous work"
      8: "Good alignment, minor inconsistencies"
      6: "Generally consistent, some conflicts"
      4: "Notable inconsistencies requiring attention"
      2: "Major conflicts with previous work"
      0: "Contradicts fundamental project elements"

  feasibility:
    description: "Realistic and implementable within project constraints"
    scoring:
      10: "Highly feasible, well within constraints"
      8: "Feasible, reasonable approach"
      6: "Achievable with some challenges"
      4: "Challenging but possible with effort"
      2: "Questionable feasibility, major concerns"
      0: "Not feasible within stated constraints"

  clarity:
    description: "Clear, unambiguous, and easy to understand"
    scoring:
      10: "Crystal clear, excellent communication"
      8: "Clear and well-articulated"
      6: "Generally clear, minor ambiguities"
      4: "Some unclear areas requiring clarification"
      2: "Difficult to understand, many ambiguities"
      0: "Unclear or incomprehensible"

  actionability:
    description: "Provides clear direction for implementation"
    scoring:
      10: "Highly actionable, clear next steps"
      8: "Actionable with good guidance"
      6: "Generally actionable, some guidance"
      4: "Somewhat actionable, needs more detail"
      2: "Minimally actionable, significant gaps"
      0: "Not actionable, lacks implementation guidance"
```

### Automated Validation Checks

```yaml
automated_checks:
  structure_validation:
    - required_sections_present
    - section_length_appropriate
    - format_consistency
    - json_schema_compliance  # artifacts must pass .claude/schemas/*.schema.json

  content_validation:
    - no_placeholder_text
    - specific_rather_than_vague_language
    - quantified_statements_present
    - actionable_language_used

  cross_reference_validation:
    - previous_agent_outputs_referenced
    - consistency_with_project_constraints
    - alignment_with_complexity_score
    - adherence_to_quality_standards

  technical_validation:
    - feasibility_within_constraints
    - technology_compatibility
    - performance_requirements_realistic
    - security_considerations_included
```
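The automated checks above are named but not specified. The following TypeScript sketch shows one plausible shape for two of them, `no_placeholder_text` and `required_sections_present`; the placeholder patterns and the level-2-heading convention are assumptions for illustration, not part of the protocol.

```typescript
// Hypothetical markers of unfinished output; an unrendered {{...}} template
// variable in a final artifact is treated as a placeholder too.
const PLACEHOLDER_PATTERNS: RegExp[] = [/\bTBD\b/i, /\bTODO\b/i, /lorem ipsum/i, /\{\{[^}]*\}\}/];

// content_validation: no_placeholder_text
function noPlaceholderText(artifact: string): boolean {
  return !PLACEHOLDER_PATTERNS.some((pattern) => pattern.test(artifact));
}

// structure_validation: required_sections_present, assuming markdown artifacts
// whose required sections appear as level-2 headings.
function requiredSectionsPresent(artifact: string, required: string[]): boolean {
  const headings = new Set(
    artifact
      .split("\n")
      .filter((line) => line.startsWith("## "))
      .map((line) => line.slice(3).trim().toLowerCase()),
  );
  return required.every((section) => headings.has(section.toLowerCase()));
}

// Example: a hypothetical PRD artifact with two mandatory sections.
const prd = "## Problem Statement\nUsers lack X.\n## User Stories\nAs a user...";
console.log(noPlaceholderText(prd));                                            // true
console.log(requiredSectionsPresent(prd, ["Problem Statement", "User Stories"])); // true
```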
## Implementation Hook (Gate Runner)

Use the gate tool to enforce the validate → auto-fix → escalate sequence at every step, substituting the schema, artifact, and gate-log paths for the current step (angle-bracket values are placeholders):

```
node .claude/tools/gates/gate.mjs \
  --schema <schema-file> \
  --input <artifact-file> \
  --gate .claude/context/history/gates/<step>/<agent>-<run>.json \
  --autofix 1
```

If the command exits non-zero, escalate per the `on_fail` policy in the workflow YAML.
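In orchestration code, that hook can be a thin wrapper around the CLI, as in the TypeScript sketch below. Only the flags come from the command above; `runGate` and `escalatePerOnFail` are hypothetical names, and the real escalation policy lives in the workflow YAML.

```typescript
import { execFileSync } from "node:child_process";

// Hypothetical escalation hook; stands in for the workflow's on_fail policy.
function escalatePerOnFail(artifact: string): never {
  throw new Error(`Gate failed for ${artifact}; escalating per on_fail policy`);
}

function runGate(schema: string, artifact: string, gateLog: string): void {
  try {
    // --autofix 1 lets the gate runner attempt repairs before failing.
    execFileSync(
      "node",
      [
        ".claude/tools/gates/gate.mjs",
        "--schema", schema,
        "--input", artifact,
        "--gate", gateLog,
        "--autofix", "1",
      ],
      { stdio: "inherit" },
    );
  } catch {
    // execFileSync throws on a non-zero exit code.
    escalatePerOnFail(artifact);
  }
}
```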
## Conflict Resolution System

### Conflict Detection

```yaml
conflict_detection:
  automatic_triggers:
    - contradictory_requirements: "pm_says_X_but_analyst_said_Y"
    - technical_impossibility: "ux_requires_X_but_architect_says_impossible"
    - timeline_mismatch: "developer_estimates_exceed_pm_timeline"
    - scope_creep: "new_requirements_exceed_original_complexity_score"

  conflict_categories:
    technical_feasibility:
      description: "Disagreement on what's technically possible"
      authority_chain: ["architect", "developer", "pm"]
      escalation: "technical_spike_investigation"

    user_requirements:
      description: "Disagreement on what users actually need"
      authority_chain: ["analyst", "ux_expert", "pm"]
      escalation: "user_research_validation"

    implementation_approach:
      description: "Disagreement on how to build the solution"
      authority_chain: ["developer", "architect", "qa"]
      escalation: "proof_of_concept_development"

    quality_standards:
      description: "Disagreement on quality level or testing approach"
      authority_chain: ["qa", "architect", "pm"]
      escalation: "quality_standards_committee"
```

### Consensus Building Protocol

```yaml
consensus_process:
  initiation:
    trigger: "validation_score_below_threshold OR explicit_conflict_flagged"
    participants: "all_agents_with_relevant_expertise"
    facilitator: "agent_with_highest_domain_authority"

  phases:
    phase_1_clarification:
      objective: "understand_each_position_clearly"
      activities:
        - each_agent_states_position_with_rationale
        - identify_specific_points_of_disagreement
        - clarify_underlying_assumptions
      duration: "max_3_iterations"

    phase_2_evidence_gathering:
      objective: "collect_supporting_evidence"
      activities:
        - reference_industry_best_practices
        - analyze_project_constraints_impact
        - consider_user_impact_of_each_approach
      duration: "max_5_iterations"

    phase_3_compromise_seeking:
      objective: "find_mutually_acceptable_solution"
      activities:
        - identify_areas_of_agreement
        - explore_hybrid_approaches
        - assess_trade_offs_of_each_option
      success_criteria: "consensus_score >= 0.75"

    phase_4_decision_making:
      objective: "reach_final_decision"
      methods:
        - consensus_achieved: "adopt_agreed_solution"
        - partial_consensus: "escalate_remaining_conflicts"
        - no_consensus: "authority_chain_decision"
      documentation: "record_decision_and_rationale"

  fallback_mechanisms:
    expert_consultation:
      trigger: "technical_complexity_beyond_agent_expertise"
      action: "request_human_expert_input"

    simplification_approach:
      trigger: "conflict_resolution_taking_too_long"
      action: "adopt_simplest_feasible_solution"

    staged_implementation:
      trigger: "approaches_have_different_timelines"
      action: "implement_in_phases_to_test_approaches"
```

## Implementation Integration

### Enhanced Agent Validation Instructions

````markdown
# Added to each agent prompt

## As part of the Cross-Agent Validation Protocol, you have specific responsibilities:

### When Validating Other Agents:

{{#each validation_targets}}
**Validating {{agent}} outputs:**

- **Focus Areas**: {{#each aspects}}{{this}}{{#unless @last}}, {{/unless}}{{/each}}
- **Validation Criteria**: {{validation_criteria}}
- **Quality Threshold**: {{quality_threshold}}/10
- **Conflict Resolution Role**: {{conflict_resolution_role}}

Your validation should assess:

1. **Accuracy**: Is the information correct and well-researched?
2. **Completeness**: Are all required elements present and sufficiently detailed?
3. **Consistency**: Does it align with your own analysis and outputs?
4. **Feasibility**: Is it realistic within project constraints?
5. **Quality**: Does it meet professional standards?

Provide structured feedback:

```yaml
validation_result:
  agent: "{{agent}}"
  output: "{{output_name}}"
  overall_score: X/10
  dimension_scores:
    completeness: X/10
    consistency: X/10
    feasibility: X/10
    clarity: X/10
    actionability: X/10
  issues_identified:
    - category: "technical_feasibility"
      severity: "high|medium|low"
      description: "specific issue description"
      recommendation: "suggested improvement"
  approval_status: "approved|approved_with_conditions|requires_revision"
  conditions: ["list of required changes if approved with conditions"]
```
{{/each}}

### When Being Validated:

Your outputs will be validated by: {{#each validators}}{{agent}} ({{aspects}}){{#unless @last}}, {{/unless}}{{/each}}

**Response to Validation Feedback:**

1. **Review Thoroughly**: Consider all feedback objectively
2. **Address Issues**: Revise your work to address valid concerns
3. **Clarify Misunderstandings**: Explain your rationale where validators may have misunderstood
4. **Escalate Conflicts**: If you disagree with feedback, initiate the consensus-building process
5. **Document Changes**: Clearly indicate what you revised and why

### When Conflicts Arise in Validation:

1. **State Your Position Clearly**: Explain your reasoning with evidence
2. **Listen to Other Perspectives**: Understand other agents' concerns
3. **Seek Common Ground**: Look for areas of agreement to build from
4. **Propose Compromises**: Suggest hybrid approaches when possible
5. **Focus on User Value**: Remember the end goal is user and business success
6. **Document Agreements**: Record final decisions and rationale

If consensus cannot be reached, follow the authority chain for your domain area.
````

This Cross-Agent Validation Protocol creates a robust quality assurance system that ensures all agents work together cohesively while retaining their individual areas of expertise.