Software testability, the tendency of software to reveal its faults during testing, is an important issue for verification and quality assurance. Testability can also be exploited as a debugging technique.
One measure of testability is a technique termed ``Sensitivity Analysis.'' This technique analyzes how likely a test scheme is to (1) propagate data state errors to the output space, (2) cause internal states to become corrupted when faults are exercised, and (3) exercise the code. By knowing where faults are likely to hide for a particular test scheme, we gain insight into where assertions are warranted and particularly beneficial. After applying sensitivity analysis, injecting assertions based on its results, and obtaining a rough failure probability estimate for a program during test or operation, we propose a testability-based debugging paradigm that can be used to identify possible sites of faults. This model works well when the hiding faults are small and thus cause infrequent failures.
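To illustrate the three quantities above, the following sketch estimates them by Monte Carlo fault injection at a single location in a toy program. This is only a hypothetical illustration of the propagation/infection/execution idea, not the paper's implementation; all names and the perturbation scheme are assumptions. Note how the final `% 4` operation can mask an injected data-state error, which is exactly the fault-hiding effect that low sensitivity indicates.

```python
import random

def sensitivity(trials=10000, seed=0):
    """Estimate, for one location L in a toy program, the probabilities of:
    (3) execution  -- L is reached by a random input,
    (2) infection  -- an injected data-state error at L corrupts the state,
    (1) propagation -- the corrupted state remains visible in the output.
    The toy program is: y = x * 2; if x > 5: y = y + 1; return y % 4."""
    rng = random.Random(seed)
    executed = infected = propagated = 0
    for _ in range(trials):
        x = rng.randint(0, 10)
        if x > 5:                          # location L is exercised
            executed += 1
            y = x * 2 + 1                  # correct data state after L
            y_bad = y + rng.randint(1, 7)  # inject a data-state error
            if y_bad != y:                 # infection: state corrupted?
                infected += 1
            if y_bad % 4 != y % 4:         # propagation: error reaches output?
                propagated += 1
    # Conditional estimates: infection/propagation given L was executed
    return executed / trials, infected / executed, propagated / executed
```

A location where the propagation estimate is much lower than the infection estimate is one where faults can hide behind information-losing operations (here, the modulus), and is therefore a good candidate for an internal assertion.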