# QA Automation for Games

## Overview

Automated testing in games requires different approaches than traditional software. Games have complex state, real-time interactions, and subjective quality measures that challenge automation.

## Testing Pyramid for Games

```
            /\
           /  \          Manual Playtesting
          /----\         (Experience, Feel, Fun)
         /      \
        /--------\       Integration Tests
       /          \      (Systems, Workflows)
      /------------\
     /              \    Unit Tests
    /________________\   (Pure Logic, Math, Data)
```

### Unit Tests (Foundation)

Test pure logic that doesn't depend on engine runtime:

- Math utilities (vectors, transforms, curves)
- Data validation (save files, configs)
- State machines (isolated logic)
- Algorithm correctness

### Integration Tests (Middle Layer)

Test system interactions:

- Combat system + inventory
- Save/load round-trips
- Scene transitions
- Network message handling

### Manual Testing (Top)

What can't be automated:

- "Does this feel good?"
- "Is this fun?"
- "Is the difficulty right?"

## Automation Strategies by Engine

### Unity

```csharp
// Unity Test Framework
[Test]
public void DamageCalculation_CriticalHit_DoublesDamage()
{
    var baseDamage = 100;
    var result = DamageCalculator.Calculate(baseDamage, isCritical: true);
    Assert.AreEqual(200, result);
}

// Play Mode Tests (runtime)
[UnityTest]
public IEnumerator PlayerJump_WhenGrounded_BecomesAirborne()
{
    var player = CreateTestPlayer();
    player.Jump();
    yield return new WaitForFixedUpdate();
    Assert.IsFalse(player.IsGrounded);
}
```

### Unreal Engine

```cpp
// Automation Framework
IMPLEMENT_SIMPLE_AUTOMATION_TEST(FDamageTest, "Game.Combat.Damage",
    EAutomationTestFlags::ApplicationContextMask | EAutomationTestFlags::ProductFilter)

bool FDamageTest::RunTest(const FString& Parameters)
{
    float BaseDamage = 100.f;
    float Result = UDamageCalculator::Calculate(BaseDamage, true);
    TestEqual("Critical hit doubles damage", Result, 200.f);
    return true;
}
```

### Godot

```gdscript
# GUT Testing Framework
func test_damage_critical_hit():
    var base_damage = 100
    var result = DamageCalculator.calculate(base_damage, true)
    assert_eq(result, 200, "Critical hit should double damage")
```

## What to Automate

### High Value Targets

- **Save/Load** - Data integrity is critical
- **Economy** - Currency, items, progression math
- **Combat Math** - Damage, stats, modifiers
- **Localization** - String loading, formatting
- **Network Serialization** - Message encoding/decoding

### Medium Value Targets

- **State Machines** - Character states, game states
- **Pathfinding** - Known scenarios
- **Spawning** - Wave generation, loot tables
- **UI Data Binding** - Correct values displayed

### Low Value / Avoid

- **Visual Quality** - Screenshots drift, hard to maintain
- **Input Feel** - Timing-sensitive, needs human judgment
- **Audio** - Subjective, context-dependent
- **Fun** - Cannot be automated

## Continuous Integration for Games

### Build Pipeline

1. **Compile** - Build game executable
2. **Unit Tests** - Fast, isolated tests
3. **Integration Tests** - Longer, system tests
4. **Smoke Test** - Can the game launch and reach the main menu? (sketched below)
5. **Nightly** - Extended test suites, performance benchmarks
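The smoke test in step 4 is usually the cheapest end-to-end check to add first. Below is a minimal sketch using the Unity Test Framework; the `SmokeTests` class and the `MainMenu` scene name are assumptions for illustration, not part of any particular project.

```csharp
using System.Collections;
using NUnit.Framework;
using UnityEngine;
using UnityEngine.SceneManagement;
using UnityEngine.TestTools;

public class SmokeTests
{
    // Smoke test: can the build boot into its first interactive scene?
    // "MainMenu" is an assumed scene name; it must be present in the build settings.
    [UnityTest]
    public IEnumerator Game_Boots_ToMainMenu()
    {
        AsyncOperation load = SceneManager.LoadSceneAsync("MainMenu", LoadSceneMode.Single);

        // Yield one frame at a time until the scene load completes.
        while (!load.isDone)
        {
            yield return null;
        }

        Assert.AreEqual("MainMenu", SceneManager.GetActiveScene().name);
    }
}
```

On CI this can run through the engine's headless test runner (for Unity, the editor's `-batchmode -runTests -testPlatform PlayMode` command line), keeping in mind the GPU and build-time gotchas below.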
### CI Gotchas for Games

- **Long build times** - Games take far longer to build than typical web apps
- **GPU requirements** - Some tests need graphics hardware
- **Asset dependencies** - Large files, binary formats
- **Platform builds** - Multiple targets to maintain

## Regression Testing

### Automated Regression

- Run the full test suite on every commit
- Flag performance regressions (frame time, memory)
- Track test stability (flaky tests)

### Save File Regression

- Maintain a library of save files from previous versions
- Test that new builds can load old saves
- Alert on schema changes

## Test Data Management

### Test Fixtures

```
tests/
├── fixtures/
│   ├── save_files/
│   │   ├── new_game.sav
│   │   ├── mid_game.sav
│   │   └── endgame.sav
│   ├── configs/
│   │   └── test_balance.json
│   └── scenarios/
│       └── boss_fight_setup.scene
```

### Deterministic Testing

- Seed random number generators
- Control time/delta time
- Mock external services

A minimal sketch of this pattern closes out this page.

## Metrics and Reporting

### Track Over Time

- Test count (growing is good)
- Pass rate (should be ~100%)
- Execution time (catch slow tests)
- Code coverage (where applicable)
- Flaky test rate (should be ~0%)

### Alerts

- Immediate: Any test failure on main branch
- Daily: Coverage drops, new flaky tests
- Weekly: Trend analysis, slow test growth
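To close, here is a minimal sketch of the deterministic-testing pattern described under Test Data Management, using plain NUnit: the system under test takes its random seed and timestep as inputs, so two runs can be compared exactly. The spawn-lane logic is a made-up stand-in, not code from any engine or project.

```csharp
using System;
using System.Collections.Generic;
using NUnit.Framework;

public class DeterministicSimulationTests
{
    // Toy stand-in for a gameplay system: enemies spawn on a timer and a lane is
    // picked from an injected RNG. Real systems should likewise take the RNG and
    // delta time as parameters instead of reading globals or wall-clock time.
    private static int[] SimulateSpawnLanes(int seed, int steps, float fixedDelta)
    {
        var rng = new Random(seed);        // seeded explicitly, never time-based
        var lanes = new List<int>();
        float timer = 0f;

        for (int i = 0; i < steps; i++)
        {
            timer += fixedDelta;           // fixed timestep, not a measured frame time
            if (timer >= 1.5f)             // spawn every 1.5 simulated seconds
            {
                timer -= 1.5f;
                lanes.Add(rng.Next(0, 3)); // one of three lanes
            }
        }
        return lanes.ToArray();
    }

    [Test]
    public void SpawnPattern_SameSeedAndTimestep_IsIdentical()
    {
        var runA = SimulateSpawnLanes(seed: 42, steps: 600, fixedDelta: 1f / 60f);
        var runB = SimulateSpawnLanes(seed: 42, steps: 600, fixedDelta: 1f / 60f);

        // With the seed and timestep pinned, the runs must match element for element.
        CollectionAssert.AreEqual(runA, runB);
    }
}
```

Mocking external services follows the same shape: pass the dependency in, substitute a fake in tests, and assert on what it recorded.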