1. False Positives
    1. What
      1. The test executes and, even though everything worked as expected, it reports a "bug"
      2. The tester / test incorrectly concludes that the program failed or that the intent is not being met.
    2. Where
      1. Commonly observed in:
        1. Errors in automation test scripts
        2. Instability in the test environment
        3. Failures caused by third-party libraries or cooperating processes
    3. Example
      1. Automation Test Script Failure due to changing locators
      2. Test Pipeline Failure due to Build Issue(s)
  2. Common Causes
    1. Flaky Test Scripts
      1. Poor Design
      2. Non-Modular Structure
      3. Unclean Code, etc.
      4. Poor Testability
    2. Changing Locators (Automation Hooks) (see sketch below)
      1. XPaths
      2. JSON Paths
      3. XML Paths, etc.
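
A minimal sketch of the contrast (Python + Selenium; the URL, XPath, and data-testid attribute are hypothetical): the absolute XPath breaks whenever the page layout shifts, while a dedicated automation hook keeps working.

```python
from selenium import webdriver
from selenium.webdriver.common.by import By

driver = webdriver.Chrome()
driver.get("https://example.com/login")  # hypothetical page under test

# Brittle: any new wrapper <div> invalidates this path, so the test
# reports a "bug" even though login still works: a false positive.
username = driver.find_element(By.XPATH, "/html/body/div[2]/div/form/input[1]")

# Robust: a dedicated automation hook survives layout changes.
username = driver.find_element(By.CSS_SELECTOR, "[data-testid='username']")

driver.quit()
```
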
    3. Unstable Test Environment
      1. State of System
      2. Dependent Processes
      3. Version Changes
      4. Breaking Contracts, etc.
    4. Impractical Test Sequence (later steps depend on earlier ones; see sketch below)
      1. Write Software --> Read Software
      2. Delete Software --> Read Software
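
A minimal pytest sketch of why such sequences flake (the dictionary stands in for the real system under test): the first pair of tests is order-coupled, so running them alone, reordered, or in parallel reports a failure that isn't a product bug.

```python
import pytest

# Order-coupled (flaky): test_read passes only if test_write ran first.
shared = {}

def test_write():
    shared["record"] = "value"

def test_read():
    assert shared.get("record") == "value"

# Independent (stable): each test arranges the state it needs.
@pytest.fixture
def store():
    return {"record": "value"}

def test_read_independent(store):
    assert store["record"] == "value"
```
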
    5. Configuration Issues (see the version-check sketch below)
      1. Misconfigured Dependencies
      2. Does it account for the specific version of each dependency?
      3. Does it account for environment states?
        1. Setup
        2. Teardown
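
One way to catch such issues early, sketched with Python's standard importlib.metadata (the package name and version are illustrative): fail fast when the environment doesn't match the versions the suite was written against, instead of letting a mismatch surface as a misleading functional failure.

```python
from importlib.metadata import version

EXPECTED = {"requests": "2.31.0"}  # versions the suite was validated against

def check_environment():
    for package, expected in EXPECTED.items():
        actual = version(package)
        assert actual == expected, (
            f"{package} is {actual}, expected {expected}; "
            "fix the environment before trusting test results"
        )
```
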
    6. Script Development & Execution on One Specific Machine (see sketch below)
      1. Scripts that run only on a local machine are usually FLAKY
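
A small sketch of one common culprit (the environment variable names are hypothetical): hard-coded values exist only on the author's machine, while environment-driven configuration lets the same script run anywhere.

```python
import os
from pathlib import Path

# Hard-coded values like these exist only on the author's machine:
#   BASE_URL = "http://localhost:3000"
#   DOWNLOADS = Path("C:/Users/someuser/Downloads")

# Machine-agnostic: resolve them from the environment, with sane defaults.
BASE_URL = os.environ.get("APP_BASE_URL", "http://localhost:3000")
DOWNLOADS = Path(os.environ.get("TEST_DOWNLOAD_DIR", Path.cwd() / "downloads"))
```
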
    7. Programs don't notice what they haven't been told to pay attention to!
  3. Impact
    1. Loss of time
      1. In going through logs
      2. In investigating the problem
      3. In reproducing the test steps
      4. In fixing the test scripts, environment, etc.
    2. Noise in Testing Cycles
    3. Loss of Credibility & Integrity
    4. Defocus from the actual information-gathering process (testing)
    5. "The more you go in one direction, the farther you go from the opposite direction." - Jerry Weinberg
  4. Avoiding False Positives
    1. Auto-Healing Locators
    2. Robust Locators (Automation Hooks)
    3. Automatic Retry / Rerun(s)
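
A minimal retry helper in plain Python (the `action` callable is hypothetical; plugins such as pytest-rerunfailures offer the same idea off the shelf): re-run a transient step a bounded number of times before declaring failure. Use retries sparingly, since they can also mask genuinely intermittent product bugs.

```python
import time

def with_retries(action, attempts=3, delay=1.0):
    """Re-run a transient step a bounded number of times before failing."""
    last_error = None
    for _ in range(attempts):
        try:
            return action()
        except AssertionError as error:
            last_error = error
            time.sleep(delay)
    raise last_error
```
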
    4. Scheduling test runs periodically to ascertain stability.
    5. Develop & Execute on Separate Machines (see sketch below)
      1. Prevention > Investigation > Detection
      2. Headless Executions
      3. Docker Containers / Cloud Systems
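
A minimal headless setup (Python + Selenium; the flag shown is the headless mode in recent Chrome versions), suited to CI machines and containers with no display attached:

```python
from selenium import webdriver
from selenium.webdriver.chrome.options import Options

options = Options()
options.add_argument("--headless=new")  # run without a display, e.g. in CI
driver = webdriver.Chrome(options=options)
driver.get("https://example.com")
print(driver.title)
driver.quit()
```
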
    6. Plan Testing across Layers
      1. Flakiness Index: UI > API > Unit
    7. Refreshing Test Environment (see sketch below)
      1. Resetting State
      2. Using Hooks
        1. Setup
        2. Teardown
      3. Using Docker Images, etc.
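
A pytest sketch of such hooks (the state dictionary stands in for real provisioning): the fixture gives every test a fresh state and guarantees teardown even when the test fails, so leftovers can't trigger false failures later.

```python
import pytest

@pytest.fixture
def fresh_state():
    state = {"records": []}   # setup: provision a clean state
    yield state               # hand control to the test
    state["records"].clear()  # teardown: runs even if the test failed

def test_starts_clean(fresh_state):
    fresh_state["records"].append("item")
    assert len(fresh_state["records"]) == 1
```
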
    8. Verbose Logging & Observability
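
A minimal Python logging setup (logger name and message are illustrative): timestamped breadcrumbs make it far quicker to tell a script failure from a product failure.

```python
import logging

logging.basicConfig(
    level=logging.DEBUG,
    format="%(asctime)s %(levelname)s %(name)s: %(message)s",
)
log = logging.getLogger("tests")
log.debug("navigating to the login page")  # breadcrumb for triage
```
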
    9. Using Contract Verification Scripts (see sketch below)
      1. Monitor Contracts
      2. Monitor Dependencies
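
A minimal contract check (Python + requests; the endpoint and field names are hypothetical): verify the provider still honours the response shape the tests depend on, so contract drift shows up as a contract failure rather than a confusing functional one.

```python
import requests

EXPECTED_FIELDS = {"id", "name", "status"}  # fields our tests rely on

def test_user_contract():
    response = requests.get("https://api.example.com/users/1")
    assert response.status_code == 200
    missing = EXPECTED_FIELDS - response.json().keys()
    assert not missing, f"contract drift, missing fields: {missing}"
```
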
    10. Good Testability (see sketch below)
      1. Access to GOLD
      2. GOLD = Go One Layer Down
      3. Automation IDs, etc.
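
A small "Go One Layer Down" sketch (the API endpoint is hypothetical): instead of scraping the UI to confirm a record exists, ask the layer below; fewer moving parts means fewer false positives.

```python
import requests

def record_exists(record_id: int) -> bool:
    """Check one layer down (the API) instead of scraping the UI."""
    response = requests.get(f"https://api.example.com/records/{record_id}")
    return response.status_code == 200
```
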
  5. By: Rahul Parwal