Untestable Detection
GitAuto uses the AI model to evaluate whether uncovered code is genuinely untestable or whether it should be removed. Genuinely untestable code includes async error handlers buried inside event handlers, race condition paths, and logically dead branches. Testable code - anything that can be mocked, spied on, or exercised through inputs - is sent back for more test iterations.
Wasted Iterations on Unreachable Code
Some code is structurally impossible to unit test. A catch block inside a setTimeout callback that only fires on a network error during a WebSocket reconnection cannot be reliably triggered in a unit test. Without untestable detection, the agent would waste all its remaining iterations trying to cover unreachable code - writing increasingly convoluted mocks that still fail to hit the target lines.
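A Python analogue of the pattern described above makes the problem concrete: an error handler that only runs inside a timer thread, after the test that scheduled it has already returned, with the network call hard-coded so there is no seam for a mock. (The class and names below are illustrative, not GitAuto's code; in practice such a path can sometimes be reached with heavy monkeypatching, which is exactly the "increasingly convoluted mocks" described above.)

```python
import socket
import threading

class Client:
    """Illustrative analogue of the untestable pattern described above."""

    def __init__(self, host):
        self._host = host
        self.last_error = None

    def schedule_reconnect(self, delay):
        def attempt():
            try:
                # Hard-coded network call: no injection point for a test double.
                socket.create_connection((self._host, 443), timeout=1)
            except OSError as exc:
                # Fires only when the reconnect fails inside the timer thread,
                # after the test that scheduled it has returned.
                self.last_error = exc

        threading.Timer(delay, attempt).start()
```

A test can construct the object and schedule the reconnect, but it cannot deterministically force the `except` branch to run without rewriting the code under test.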
By identifying genuinely untestable code early, GitAuto can either skip those lines from the coverage target or suggest that the developer remove/refactor them. This focuses the agent's iterations on code that can actually be tested.
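One way to act on that decision is to subtract untestable lines from the coverage denominator so the remaining target reflects only reachable code. A minimal sketch (the function and its signature are assumptions for illustration, not GitAuto's API):

```python
def adjusted_coverage(covered, uncovered, untestable):
    """Recompute coverage after excluding lines classified as untestable."""
    remaining = uncovered - untestable           # lines the agent should still target
    denominator = len(covered) + len(remaining)
    if denominator == 0:
        return 1.0                               # nothing left to cover
    return len(covered) / denominator

covered = {1, 2, 3, 4, 5, 6}
uncovered = {7, 8, 9, 10}
untestable = {9, 10}                             # e.g. a race-condition catch block
print(adjusted_coverage(covered, uncovered, untestable))  # 6/8 = 0.75
```

Without the exclusion, the same file would report 6/10 = 0.6 and the agent would keep iterating against lines it can never reach.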
Why Models Can't Judge Testability
Models attempt to test any code they're given, regardless of whether it's practically testable. Dead code, platform-specific branches, and tightly coupled I/O operations resist unit testing, but the model doesn't discover that until it has already spent several iterations trying and failing. Evaluating testability requires reasoning about the code's runtime behavior, which is why it needs a separate, focused analysis step rather than being left to the test-writing loop. No benchmark gives the model untestable code and evaluates whether it correctly identifies it as untestable. Models are trained to always produce output, not to say "this cannot be done."
How It Works
When coverage enforcement identifies uncovered lines after multiple iterations, GitAuto sends those uncovered code sections to the model with a specific evaluation prompt. The model analyzes each uncovered section and classifies it as either "untestable" (with a reason) or "testable" (with a suggested approach).
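The exchange can be sketched as follows. The prompt wording, JSON response shape, and function names here are assumptions for illustration; GitAuto's actual prompt and parsing are not published in this page.

```python
import json

EVALUATION_PROMPT = """\
For each uncovered code section below, decide whether it is genuinely
untestable in a unit test. Respond with a JSON list of objects, each with
"section", "classification" ("untestable" or "testable"), and either a
"reason" (if untestable) or a "suggested_approach" (if testable).
"""

def classify_sections(model_call, sections):
    """Send uncovered sections to the model and split its classifications."""
    prompt = EVALUATION_PROMPT + "\n\n".join(sections)
    raw = model_call(prompt)                     # any text-in, text-out LLM client
    results = json.loads(raw)
    untestable = [r for r in results if r["classification"] == "untestable"]
    testable = [r for r in results if r["classification"] == "testable"]
    return untestable, testable

# Stubbed model call, standing in for the real API, for demonstration only:
def fake_model(prompt):
    return json.dumps([
        {"section": "reconnect error handler", "classification": "untestable",
         "reason": "only reachable via a race inside a timer callback"},
        {"section": "parse_config", "classification": "testable",
         "suggested_approach": "pass malformed input directly"},
    ])

untestable, testable = classify_sections(fake_model, ["reconnect error handler", "parse_config"])
print(len(untestable), len(testable))  # 1 1
```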
Untestable classifications include: async error handlers in event listeners, race condition handling, defensive code for impossible states, and platform-specific code paths. Testable classifications include: code that can be reached through dependency injection, code that responds to mockable function calls, and branches reachable through specific input values.
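The "testable" side of that classification often comes down to dependency injection. The sketch below (names are illustrative) shows a reconnect handler refactored so that both the scheduler and the connect function are injected, making the error path reachable from a plain unit test:

```python
class Client:
    """Reconnect logic with the timer and connect function injected,
    so the error path is reachable from a unit test."""

    def __init__(self, connect, schedule):
        self._connect = connect
        self._schedule = schedule        # e.g. threading.Timer in production
        self.last_error = None

    def schedule_reconnect(self, delay):
        self._schedule(delay, self._attempt)

    def _attempt(self):
        try:
            self._connect()
        except ConnectionError as exc:
            self.last_error = exc

# In a test, inject a synchronous scheduler and a failing connect:
def run_now(delay, fn):
    fn()                                 # executes immediately, no timer thread

def failing_connect():
    raise ConnectionError("network dropped")

client = Client(failing_connect, run_now)
client.schedule_reconnect(0)
print(repr(client.last_error))  # ConnectionError('network dropped')
```

The same error-handling logic that was unreachable behind a hard-coded timer and socket becomes a one-line trigger once its collaborators are injectable.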
Related Features
- Coverage Enforcement - provides the uncovered line data that triggers untestable analysis
- Dead Code Removal - removes code identified as unreachable rather than merely untestable
- Should-Skip Detection - skips entire files before test generation, while untestable detection operates at the line level
Need Help?
Have questions or suggestions? We're here to help you get the most out of GitAuto.
Contact us with your questions or feedback!