  1. Responsibilities of the test analyst
    1. Monitoring and controlling a project
      1. Product (Quality) risks
        1. Identifying risk
          1. Business risk is the focus of the test analyst
        2. Assessing risk
          1. Frequency of use
          2. Business loss
          3. Potential financial, ecological, social losses or liability
          4. Civil or criminal legal sanctions
          5. Safety concerns
          6. Fines, loss of license
          7. Lack of reasonable workarounds
          8. Visibility of feature
          9. Visibility of failure leading to negative publicity and potential image damage
          10. Loss of customers
        3. Mitigating risk
          1. Depth-first approach
            1. Testing in priority order
          2. Breadth-first approach
            1. Testing across all areas
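A minimal sketch of the ideas above: product risks are scored (here as likelihood x impact) and then scheduled depth-first or breadth-first. The risk items, scores and areas are hypothetical examples, not from the syllabus.

```python
# Hypothetical product risk items with likelihood and impact ratings.
risks = [
    {"feature": "payment",  "area": "checkout", "likelihood": 4, "impact": 5},
    {"feature": "search",   "area": "catalog",  "likelihood": 3, "impact": 2},
    {"feature": "invoice",  "area": "checkout", "likelihood": 2, "impact": 4},
    {"feature": "wishlist", "area": "catalog",  "likelihood": 2, "impact": 1},
]
for risk in risks:
    risk["priority"] = risk["likelihood"] * risk["impact"]

# Depth-first: work strictly in priority order, highest-risk items first.
depth_first = sorted(risks, key=lambda r: r["priority"], reverse=True)

# Breadth-first: take one item from every area per round, so every area gets
# some testing before any single area is tested in depth.
by_area = {}
for risk in depth_first:
    by_area.setdefault(risk["area"], []).append(risk)
breadth_first = []
while any(by_area.values()):
    for queue in by_area.values():
        if queue:
            breadth_first.append(queue.pop(0))

print("depth-first:  ", [r["feature"] for r in depth_first])
print("breadth-first:", [r["feature"] for r in breadth_first])
```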
      2. Defects
      3. Test cases
      4. Traceability
      5. Confidence
    2. Talking with other testers
      1. Insourced
      2. Outsourced
      3. Distributed
      4. Centralized
  2. Advanced tester
    1. Chosen a career path in testing by passing ISTQB Foundation Level
    2. Demonstrated theoretical and practical skills
    3. Experienced in testing projects
    4. Types of systems
      1. Systems of systems
        1. High levels of complexity
        2. Increased time and effort needed to localize defects
        3. More integration testing may be required
        4. Higher management overhead
        5. Lack of overall control
      2. Safety critical systems
        1. Performing explicit safety analysis as part of the risk management
        2. Performing testing according to a predefined SDLC model, such as the V-model
        3. Conducting failover and recovery tests to ensure that software architectures are correctly designed and implemented
        4. Performing reliability testing to demonstrate low failure rates and high levels of availability
        5. Taking measures to ensure that safety and security requirements are fully implemented
        6. Showing that faults are correctly handled
        7. Demonstrating that specific levels of test coverage have been achieved
        8. Creating full test documentation with complete traceability between requirements and test cases
        9. Retaining test data, results, or test environments
        10. Food and drug industry
        11. Space industry
        12. Aircraft industry
      3. Real-time and Embedded systems
        1. Specific testing techniques
        2. Specify and perform dynamic analysis with tools
        3. Testing infrastructure must be provided that allows embedded software to be executed and results obtained
        4. Simulators and emulators may need to be developed and tested to be used during testing
    5. Can fulfill the role of test analyst in a project
    6. Never stop learning and improving
    7. Better chances of becoming and staying employed
    8. Technical test analyst role
      1. Understand the technical issues and concepts in applying test automation
      2. Recognize and classify the typical risks associated with performance, security, reliability, portability and maintainability
      3. Create test plans that detail the planning, design and execution of tests for mitigating performance, security, reliability, portability and maintainability risks
      4. Select and apply appropriate structural design techniques to ensure that tests provide an adequate level of confidence, based on code coverage and design coverage
      5. Effectively participate in technical reviews with developers and software architects, applying knowledge of typical mistakes made in code and architecture
      6. Recognize risks in code and software architecture and create test plan elements to mitigate those risks through dynamic analysis
      7. Propose improvements to the security, maintainability and testability of code by applying static analysis
      8. Outline the cost and benefit to be expected from introducing particular types of test automation
      9. Select appropriate tools to automate technical testing tasks
      10. Functionality
        1. Security
      11. Reliability
        1. Maturity (robustness)
        2. Fault tolerance
        3. Recoverability
        4. Compliance
      12. Efficiency
        1. Performance
        2. Resource utilization
        3. Compliance
      13. Maintainability
        1. Analyzability
        2. Changeability
        3. Stability
        4. Testability
        5. Compliance
      14. Portability
        1. Adaptability
        2. Installability
        3. Co-existence
        4. Replaceability
        5. Compliance
  3. Test analyst role
    1. Apply appropriate techniques to achieve the defined testing goals
    2. Prepare and execute all necessary testing activities
    3. Judge when testing criteria have been fulfilled
    4. Report on progress in a concise and thorough manner
    5. Support evaluations and reviews with evidence from testing
    6. Implement the tools appropriate to performing the testing tasks
    7. Structure the testing tasks required to implement the test strategy
    8. Provide the appropriate level of documentation relevant to the testing activities
    9. Determine the appropriate types of functional testing to be performed
    10. Assume responsibility for the usability testing for a given project
    11. Select and apply appropriate testing techniques to ensure that tests provide an adequate level of confidence, based on defined coverage criteria
    12. Determine the proper prioritization of the testing activities based on the information provided by the risk analysis
    13. Perform the appropriate testing activities based on the SDLC being used
    14. Effectively participate in formal and informal reviews with stakeholders, applying knowledge of typical mistakes made in work products
    15. Design and implement a defect classification scheme
    16. Apply tools to support an efficient testing process
    17. Support the test manager in creating appropriate testing strategies
    18. Perform analysis on a system in sufficient detail to permit appropriate test conditions to be identified
  4. Test process
    1. Planning, monitoring and control
      1. The test analyst provides information to the test manager
      2. Project and product risk
      3. Test manager responsible for risk management
      4. Monitor - manage a project
      5. Control - initiate change as needed
    2. Analysis
      1. Review test basis
      2. Risk analysis
    3. Design
      1. Concrete or logical test cases
      2. Define the objective
      3. Determine the level of detail
      4. What the test cases should do
      5. Pick your target test level
      6. Review your work products
    4. Implementation
      1. Organizing the tests
      2. Deciding the level of detail
      3. Automating the automatable
      4. Setting up the environment
      5. Implementing the approach
    5. Execution
      1. Order of execution
      2. Logging
    6. Evaluating exit criteria and reporting
    7. Test closure activities
  5. Test process activities
    1. Fundamental test process activities
      1. Test planning
      2. Test control
      3. Test analysis and design
      4. Test environment implementation
      5. System test execution
      6. Evaluating exit criteria and reporting
      7. Closure activities
    2. V-Model testing activities
      1. Planning - concurrent with project planning
      2. Control - throughout the project from start to completion
      3. Analysis and design - concurrent with requirements specification, high-level design and low-level design
      4. Implementation - started during system design, done during coding and component testing, concluded prior to system testing
      5. Execution - starts when entry criteria are met (component and integration testing completed), continues until the exit criteria are met
      6. Evaluation and reporting - throughout the testing, more frequent as the end of the project approaches
      7. Closure - when system test is concluded and exit criteria are met, may be postponed until all testing is completed
    3. Iterative model testing activities
      1. Planning - at the beginning of each iteration
      2. Control - done per iteration, trends tracked for the entire project
      3. Analysis and design - per the items designed for the particular iteration
      4. Implementation - limited to what is needed for the iteration
      5. Execution - starts after component testing is completed, combined with integration testing, entry criteria may not be used
      6. Evaluation and reporting - throughout the testing, at the end of each iteration and of the project
      7. Closure - at the end of the project when all iterations are completed, may be postponed until all testing is completed
  6. Involvement in SDLC
    1. Requirements engineering and management
      1. Reviewing the requirements
      2. Participating in review meetings
      3. Clarifying requirements
      4. Verifying testability
    2. Project management
      1. Providing input to the schedule and specific task milestones
    3. Configuration and change management
      1. Reviewing release notes
      2. Conducting build verification testing
      3. Noting versions for defect and test case execution reporting
    4. Software development
      1. Planning and designing test cases
      2. Coordinating tasks and deliverables
    5. Software maintenance
      1. Managing defects
      2. Tracking defect turnaround time
      3. Creating new test cases
    6. Technical support
      1. Providing accurate documentation regarding known defects and workarounds
    7. Technical documentation
      1. Providing input to the writers
      2. Providing review services
  7. Usability and accessibility testing
    1. Usability testing
      1. Effectiveness
        1. Capability of the software product to enable users to achieve specified goals with accuracy and completeness in a specified context of use
      2. Efficiency
        1. Capability of the product to enable users to expend appropriate amounts of resources in relation to the effectiveness achieved in a specified context of use
      3. Satisfaction
        1. Ability to satisfy the user in a particular context of use
      4. Formative
        1. Helping develop the interface during design
          1. Detection and removal of defects
      5. Summative
        1. Identify usability problems after implementation
          1. Testing of requirements
    2. Accessibility testing
    3. Test process
      1. Planning issues
      2. Test design
        1. Designing for the user
        2. Considerations for usability tests
          1. Verification
            1. Did we build the product right
          2. Validation
            1. Did we build the right product
          3. Syntax
            1. The structure or grammar of the interface
          4. Semantics
            1. Reasonable and meaningful messages and output
        3. Information transfer
      3. Specifying usability tests
        1. Inspecting, evaluating and reviewing
        2. Interacting with prototypes
        3. Verifying and validating the implementation
        4. Conducting surveys and questionnaires
          1. SUMI (Software Usability Measurement Inventory)
            1. Brief questionnaire filled in by the user
            2. Software questionnaire from the user's perspective
          2. WAMMI (Website Analysis and Measurement Inventory)
            1. Standardized, publicly available usability survey
            2. Web questionnaire from the user's perspective
    4. ISO 9126
      1. Attractiveness
      2. Learnability
      3. Operability
      4. Understandability
      5. Compliance
  8. Functional testing
    1. Accuracy
      1. What the software should do
    2. Suitability
      1. Verify that the set of functions is appropriate for the set of intended, specified tasks
    3. Interoperability
      1. Verify whether the SUT will function correctly in all the intended target environments
    4. Compliance
  9. Test tools
    1. Test design tools
      1. Help us to create test cases
    2. Data tools
      1. Analyze requirements and generate data to test
      2. Anonymizing the data
      3. Creating data from set of input parameters
      4. Database tools
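A minimal sketch of two of the data-tool ideas above: anonymizing production data and generating rows from a set of input parameters. The customer record, field names and salt are hypothetical.

```python
import hashlib
from itertools import product

# Anonymization: replace identifying fields with stable, non-reversible
# surrogates so relationships between records are preserved.
def anonymize(value: str, salt: str = "test-env") -> str:
    return hashlib.sha256((salt + value).encode()).hexdigest()[:12]

production_row = {"customer": "Jane Doe", "email": "jane@example.com", "balance": 250.0}
test_row = {**production_row,
            "customer": anonymize(production_row["customer"]),
            "email": anonymize(production_row["email"]) + "@example.invalid"}
print(test_row)

# Generation: build test rows from a set of input parameters.
countries = ["PL", "DE", "US"]
account_types = ["basic", "premium"]
generated = [{"country": c, "account_type": a} for c, a in product(countries, account_types)]
print(len(generated), "generated rows")
```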
    3. Test execution tools
      1. Reduce the cost of repeated executions of the same tests
      2. Better coverage of the software than would be possible with only manual testing
      3. Execution of the same tests in many environments or configurations with no additional development effort
      4. The ability to test facets of software that would be impossible to test with only manual testing
    4. Data-driven automation
      1. Data
      2. Scripts
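A minimal sketch of data-driven automation: one generic script reads inputs and expected results from an external data source. The CSV content and the login() stand-in for the system under test are hypothetical.

```python
import csv
import io

# Test data kept separate from the script (inline here for a self-contained example).
TEST_DATA = """username,password,expected
alice,correct-pass,success
alice,wrong-pass,failure
,correct-pass,failure
"""

def login(username: str, password: str) -> str:
    # Stand-in for the system under test.
    return "success" if username == "alice" and password == "correct-pass" else "failure"

def run_data_driven_tests():
    # One generic script executes every data row as a separate test.
    for row in csv.DictReader(io.StringIO(TEST_DATA)):
        actual = login(row["username"], row["password"])
        status = "PASS" if actual == row["expected"] else "FAIL"
        print(f"{status}: {row}")

run_data_driven_tests()
```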
    5. Keyword-driven automation
      1. Keywords (Action words)
      2. Data
      3. Scripts
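A minimal sketch of keyword-driven automation: test steps are written as keyword (action word) plus data, and a small interpreter maps each keyword to an implementing function. The keywords and the fake SUT state are hypothetical.

```python
# Fake system-under-test state for the example.
sut = {"logged_in": False, "cart": []}

def open_session(user):
    sut["logged_in"] = True

def add_to_cart(item):
    sut["cart"].append(item)

def check_cart_size(expected):
    assert len(sut["cart"]) == int(expected)

# The keyword layer: action words mapped to implementing scripts.
KEYWORDS = {"OpenSession": open_session, "AddToCart": add_to_cart, "CheckCartSize": check_cart_size}

# A test case expressed as keyword + argument rows, readable by non-programmers.
test_case = [("OpenSession", "alice"), ("AddToCart", "book"),
             ("AddToCart", "pen"), ("CheckCartSize", "2")]

for keyword, argument in test_case:
    KEYWORDS[keyword](argument)
print("keyword-driven test case passed")
```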
    6. Principal points of automation
      1. Automation won't solve all testing problems
      2. A test automation project is like any other development project
      3. No point in buying an expensive automation tool if we won't use its capabilities
      4. Automation fails for many reasons (bad organization, politics, unrealistic expectations, no management backing)
      5. Good automation requires strong technical skills and domain knowledge
      6. First design the tests, then find tools to support them
      7. Avoid automating tests that are human-centric
    7. Risks
      1. Automating bad tests
      2. When the software changes, the automation must also change
      3. Automation can't catch all defects
    8. Checklist what to automate
      1. How often we need to execute the test case
      2. Are there procedural aspects that can easily be automated
      3. Is partial automation better approach
      4. Do we have the required details to enable test automation
      5. Do we have an automation concept, should we automate smoke test
      6. Should we automate regression testing
      7. How much change are we expecting
      8. What are objectives of test automation (lower cost)
    9. Benefits
      1. Test execution time should be more predictable
      2. Regression testing will be faster and more reliable
      3. The status of the team should grow
      4. Test automation can help when repetition of regression testing is needed
      5. Some testing is only possible with automation
      6. Test automation is more cost effective than doing testing manually
  10. Testing techniques
    1. Specification-based techniques
      1. Equivalence partitioning
        1. Grouping the test conditions into partitions that will be handled the same way
        2. Used for data handling
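A minimal sketch of equivalence partitioning, assuming a hypothetical age-based ticket pricing rule (the function, partitions and values are illustrative): one representative value is tested per partition, including the invalid partition.

```python
def ticket_category(age: int) -> str:
    # Hypothetical rule: 0-12 child, 13-64 adult, 65+ senior; negative age invalid.
    if age < 0:
        raise ValueError("age must be non-negative")
    if age <= 12:
        return "child"
    if age <= 64:
        return "adult"
    return "senior"

# One representative value per valid partition.
representatives = {"child": 7, "adult": 30, "senior": 80}

def test_valid_partitions():
    for expected, value in representatives.items():
        assert ticket_category(value) == expected

def test_invalid_partition():
    try:
        ticket_category(-5)
        assert False, "expected ValueError for the invalid partition"
    except ValueError:
        pass
```

Run with pytest or call the functions directly; one value per partition suffices because every value in a partition is expected to be handled the same way.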
      2. Boundary value analysis
        1. Defining and testing for the boundaries of the partitions
        2. Displacement or omission of boundaries and occasional extra boundary
        3. Two-value boundary testing
        4. Three-value boundary testing
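Continuing the same hypothetical "child" partition (ages 0-12), this sketch lists the values selected by two-value and three-value boundary testing.

```python
# Boundaries of the hypothetical "child" partition (ages 0-12).
lower, upper = 0, 12

# Two-value boundary testing: each boundary value plus its nearest neighbour
# on the other side of the boundary.
two_value = [lower - 1, lower, upper, upper + 1]          # [-1, 0, 12, 13]

# Three-value boundary testing: each boundary value plus both neighbours.
three_value = [lower - 1, lower, lower + 1,
               upper - 1, upper, upper + 1]               # [-1, 0, 1, 11, 12, 13]

print("2-value boundary tests:", two_value)
print("3-value boundary tests:", three_value)
```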
      3. Decision tables
        1. Defining and testing for combinations of conditions using a tabular model
        2. Incorrect processing resulting from combination of interacting conditions
        3. Most frequently used condition goes first
        4. Number of columns in the full table is 2 to the power of the number of conditions
        5. Collapsing a decision table - look for combinations of conditions that result in the same action
        6. Collapsing a decision table - replace conditions that do not affect the outcome with a dash and remove the duplicate columns
        7. Used when conditions that exist at a given moment in time for a single transaction are sufficient by themselves to determine the actions
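A minimal sketch of a decision table and of collapsing it, assuming hypothetical discount rules (loyalty member? order over 100?). Each tuple corresponds to one column (rule) of the table; "-" marks a don't-care condition in the collapsed table.

```python
# Full decision table: every combination of the two conditions.
full_table = [
    # member, big_order -> discount (%)
    (True,  True,  15),
    (True,  False, 10),
    (False, True,   0),
    (False, False,  0),
]

# Collapsed table: non-members get 0 regardless of order size, so that
# condition is replaced with "-" and the duplicate column is removed.
collapsed_table = [
    (True,  True,  15),
    (True,  False, 10),
    (False, "-",    0),
]

def discount(member: bool, big_order: bool) -> int:
    # Hypothetical implementation under test.
    if not member:
        return 0
    return 15 if big_order else 10

def test_full_decision_table():
    for member, big_order, expected in full_table:
        assert discount(member, big_order) == expected
```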
      4. Cause-effect graphing
        1. Defining and testing for combinations of conditions using a graphical model
        2. Incorrect processing resulting from combination of interacting conditions
        3. Combinations of conditions that cause an effect (causality)
        4. Combinations of conditions that exclude a particular result (not)
        5. Combinations of conditions that have to be true to cause a particular result (and)
        6. Alternative combinations that can be true to cause a particular result (or)
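A minimal sketch of cause-effect graphing with hypothetical causes and one effect: the AND/NOT relationships are expressed as a boolean function, and the cause combinations are enumerated to derive a decision table.

```python
from itertools import product

# Hypothetical causes: C1 "valid card", C2 "correct PIN", C3 "account locked".
# Effect E "withdrawal allowed" = C1 AND C2 AND NOT C3.
def effect(c1: bool, c2: bool, c3: bool) -> bool:
    return c1 and c2 and not c3

# Enumerate cause combinations to derive the decision table columns.
for c1, c2, c3 in product([True, False], repeat=3):
    print(f"C1={c1!s:5} C2={c2!s:5} C3={c3!s:5} -> E={effect(c1, c2, c3)}")
```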
      5. State transition testing
        1. Identifying all the valid states and transitions that must be tested
        2. Incorrect processing in the current state based on previous processing/incorrect or unsupported transitions/states without exits/missing states and transitions
        3. 0-switch coverage
        4. 1-switch coverage
        5. N-1 switch coverage (Chow's coverage measure)
        6. State transition table (Start state, Event, Effect, End state, Transition)
        7. State transition table with 1-switch coverage (Test case, Start state, Switch state, End state)
        8. Current state, event/condition, action, new state
        9. Used when we must refer to what conditions have existed in the past
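A minimal sketch of state transition testing for a hypothetical document workflow: the transition table maps (state, event) to the next state, and the 0-switch and 1-switch test sequences are derived from it.

```python
# Transition table for a hypothetical document workflow:
# (current state, event) -> new state.
transitions = {
    ("draft", "submit"): "review",
    ("review", "approve"): "published",
    ("review", "reject"): "draft",
    ("published", "archive"): "archived",
}

# 0-switch coverage: every single valid transition is exercised once.
zero_switch = [(state, event, target) for (state, event), target in transitions.items()]

# 1-switch coverage: every valid sequence of two consecutive transitions.
one_switch = [
    ((s1, e1, s2), (s2, e2, s3))
    for (s1, e1), s2 in transitions.items()
    for (mid, e2), s3 in transitions.items()
    if mid == s2
]

print("0-switch test cases:", zero_switch)
print("1-switch test cases:", one_switch)
```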
      6. Combinatorial testing
        1. Determining the combinations of configurations to be tested
        2. Incorrect handling of combinations and discovery of combinations that interact when they should not
        3. Pairwise
          1. Orthogonal Arrays
          2. Every pair of columns contains all combinations of the values of those two parameters
          3. All-pairs
          4. All-pairs possible
          5. All-pairs for m and n
          6. All triples possible
          7. Each option represented in at least one configuration
        4. Classification trees
          1. Singleton coverage
          2. Two-wise (pairs) coverage
          3. Every pairing of each option
          4. Three-wise (triples) coverage
          5. Advantage is visualisation
        5. Input parameter model
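As an illustration of pairwise coverage, here is a simple greedy selection in Python; dedicated pairwise tools or orthogonal arrays would normally be used, and the configuration parameters (browser, OS, locale) are hypothetical.

```python
from itertools import combinations, product

def uncovered_pairs(config, covered, param_pairs):
    """Value pairs exercised by this configuration that are not yet covered."""
    return {(i, config[i], j, config[j]) for i, j in param_pairs} - covered

def greedy_pairwise(parameters):
    """Pick configurations until every value pair of every two parameters
    appears in at least one chosen configuration."""
    param_pairs = list(combinations(range(len(parameters)), 2))
    all_pairs = {(i, vi, j, vj)
                 for i, j in param_pairs
                 for vi in parameters[i]
                 for vj in parameters[j]}
    covered, chosen = set(), []
    candidates = list(product(*parameters))
    while covered != all_pairs:
        best = max(candidates, key=lambda c: len(uncovered_pairs(c, covered, param_pairs)))
        covered |= uncovered_pairs(best, covered, param_pairs)
        chosen.append(best)
    return chosen

# Hypothetical configuration parameters.
browsers = ["Chrome", "Firefox", "Edge"]
systems = ["Windows", "macOS"]
locales = ["en", "de", "pl"]
for config in greedy_pairwise([browsers, systems, locales]):
    print(config)
```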
      7. Use case testing
        1. Determining usage scenarios and testing accordingly
        2. Mishandling of usage scenarios / missed alternate path handling / poor error reporting
        3. Tests for each path should be kept separate so that errors are not masked
        4. Use case workflows are independent of each other
      8. User story testing
        1. Determining small pieces of functionality for implementation and testing when using an Agile approach
        2. Failure to provide defined functionality
        3. Acceptance criteria
        4. A user story is a concisely expressed use case
      9. Domain analysis
        1. A combination of equivalence partitioning, boundary value analysis, and decision tables used to define tests for simple or complex set of values from multiple variables
        2. Boundary issues/variable interactions/ error handling
        3. Domain analysis matrix (out, off, on, in)
        4. IN - value that is in the partition
        5. OUT - value that is outside the partition
        6. ON - value that is on the boundary of the partition
        7. OFF - value that is just off the boundary of the partition
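A minimal sketch of the four domain analysis points for a single boundary, assuming a hypothetical rule "applicant age must be at least 18"; the chosen values are illustrative.

```python
def is_eligible(age: int) -> bool:
    # Hypothetical rule under test: minimum age 18.
    return age >= 18

# Domain analysis matrix points for the boundary age >= 18.
domain_points = {
    "on":  (18, True),   # value on the boundary
    "off": (17, False),  # value just off the boundary
    "in":  (35, True),   # typical value inside the partition
    "out": (5,  False),  # value outside the partition
}

for point, (value, expected) in domain_points.items():
    assert is_eligible(value) == expected, point
print("domain analysis points behave as expected")
```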
      10. Minimum coverage is at least one test case defined for each condition
    2. Defect-based testing techniques
      1. Beizer's defect taxonomies
        1. Requirements
          1. Requirements incorrect
          2. Requirements logic faulty
          3. Requirements incomplete
          4. Requirements not verifiable
          5. Presentation and documentation
          6. Requirements changes
        2. Features and Functionality
          1. Feature or function incorrect
          2. Feature incomplete
          3. Functional case incomplete
          4. Domain defects
          5. User messages and diagnostics
          6. Exception conditions mishandled
        3. Structural defects
          1. Defects in control flow and structure
          2. Processing
        4. Data
          1. Data definition and structure
          2. Data access and handling
        5. Implementation and coding
          1. Coding and typing faults
          2. Violation of style guidelines and standards
          3. Poor documentation
        6. Integration
          1. Internal interfaces
          2. External interfaces, timing, throughput
        7. System and software architecture
          1. Operating system calls and use
          2. Software architecture
          3. Recovery and accountability
          4. Performance
          5. Incorrect diagnostics and exceptions
          6. Partitions and overlays
          7. Environment
        8. Test definition and execution
          1. Test design defects
          2. Test execution defects
          3. Poor test documentation
          4. Incomplete test cases
      2. IEEE Std 1044-1993
        1. Correct input not accepted
        2. Wrong input accepted
        3. Description incorrect or missing
        4. Parameters incomplete or missing
      3. A taxonomy may serve as a checklist to be used during testing without the subsequent creation of detailed test cases
    3. Experience-based testing techniques
      1. Error guessing
        1. Guessing errors based on experience and knowledge and testing for those errors
        2. Defects that might have been missed with specification-based testing and that appear in a defect taxonomy or are guessed by the tester
        3. A fault attack is a structured approach to error guessing - enumerate a list of possible defects and design tests that attack those defects
      2. Checklist-based testing
        1. Defining a high-level reminder checklist of the features and characteristics to be covered in testing and then testing to that list
        2. Defects that have been missed by more formal techniques and can be found by varying the test pre-conditions, the test data used or the general approach
      3. Exploratory testing
        1. Simultaneously learning about the software; planning, designing and executing the tests; and documenting the results
        2. Serious defects that are apparent while testing scenarios rather than targeting specific functional capabilities
  11. Reviews
    1. Planning
      1. Understanding the review process, training the reviewers, getting management support
    2. Kick-off
      1. Having the initial meetings so that everyone understands what they are supposed to do
    3. Individual preparation
      1. Read the work product and prepare comments, or just provide reactive comments
    4. Review meeting
      1. Conducting the meeting; possible outcomes: no changes or only minor changes required; changes required but no further review necessary; major changes required and a further review necessary
    5. Rework
      1. The author makes the required changes to the work product after the review
    6. Follow-up
      1. Re-review of the changes may be required; also look at the efficiency of the review and gather suggestions for improvement
    7. Checklist for reviews
      1. Checklist for requirements reviews
        1. Is each requirement testable
        2. Are there specific acceptance criteria associated with each requirement
        3. Is there a calling structure specified for the use cases
        4. Is there a unique identification for each stated requirement
        5. Does each requirement have a version assigned to it
        6. Is there a traceability from each requirement to its source (higher-level requirement or business requirement)
        7. Is there traceability between the stated requirements and the use cases
        8. Is each requirement clear
        9. Is each requirement unambiguous
        10. Does each requirement contain only a single item of testable functionality
      2. Checklist for use case reviews
        1. Is the main path clearly defined
        2. Are all alternative paths (scenarios) identified, complete with error handling
        3. Are the user interface messages defined
        4. Is there only one main path or does the use case definition combine multiple cases into one
        5. Is each path testable
        6. Does this use case call other use cases
        7. Is this use case called by other use cases
        8. What is the expected frequency of use for this use case
        9. What are the types of users who will use this use case
      3. Checklist for usability reviews
        1. Is each field and its function clearly defined
        2. Are all error messages defined
        3. Are all user prompts defined and consistent
        4. Is the tab order of the fields defined
        5. Are there keyboard alternatives to mouse actions
        6. Are there shortcut keys defined for the user
        7. Are there dependencies between the fields
        8. Is there a screen layout
        9. Does the screen layout match the specified requirements
        10. Is there an indicator for the user that appears when the system is processing
        11. Does the screen meet the minimum mouse click requirement
        12. Does the navigation flow logically for the user based on use case information
        13. Does the screen meet any requirements for learnability
        14. Is there any help text available for the user
        15. Is there any hover message available to the user
        16. Will the user consider the user interface to be attractive
        17. Is the use of colors consistent with other applications and with organization standards
        18. Are there sound effects used appropriately and are they configurable
        19. Does the screen meet localization requirements
        20. Can the user determine what to do
        21. Will the user be able to remember what to do
        22. Are there usability standards that must be met
        23. Are there accessibility requirements that must be met
      4. Checklist for user story reviews
        1. Is the story appropriate for the target iteration/sprint
        2. Are the acceptance criteria defined and testable
        3. Is the functionality clearly defined
        4. Are there any dependencies between this story and others
        5. Is the story prioritized
        6. Does the story contain a single item of functionality
        7. Is a framework or harness required for this story
        8. Who will provide the harness
      5. Checklist for success
        1. Follow the defined review process
        2. Keep good metrics regarding time spent, defects found, costs saved and efficiency gained
        3. Review documents as soon as it's efficient to do so
        4. Use checklist when conducting the review and record metrics while the review is in progress
        5. Use different types of reviews on the same work item if needed
        6. Focus on the most important problems
        7. Ensure that adequate time is allocated for preparation, conducting and rework for the review
        8. Time and budget shouldn't be allocated based on number of defects found
        9. Make sure the right people are reviewing the right work and everyone is reviewing and under review
        10. The reviews should be conducted in a positive, blame-free and constructive environment
        11. Keep a focus on continuous improvement
    8. How to make reviews effective
      1. Right work product
      2. Conducting review at the right time in the project
      3. Effective review based on the type selected
      4. People with knowledge and experience
      5. Trained team and receptive to the review process
      6. Defect found in review tracked and resolved
      7. Test manager is responsible for coordinating the training and sustaining effective review program, planning and follow-up activities
      8. Conduct reviews as soon as we have documents that describe the project requirements
      9. Decision makers, project stakeholders, customers are involved, managers aren't
    9. Single most effective way to improve software quality
  12. Defect
    1. Failure
    2. Metric and reporting
      1. Defect density analysis
        1. More testing effort on defect clusters
      2. Found vs. Fixed metric
        1. Do we have an efficient bug life cycle
      3. Convergence metrics
        1. Open vs closed issues should converge
      4. Phase containment diagram
        1. Where problems are being introduced and where they are found
      5. Is our defect information objective
      6. Root cause of defects
        1. Unclear requirements
        2. Missing requirements
        3. Wrong requirements
        4. Incorrect design implementation
        5. Incorrect interface implementation
        6. Code logic error
        7. Calculation error
        8. Hardware error
        9. Interface error
        10. Invalid data
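A minimal sketch of the defect density and found-vs-fixed (convergence) metrics listed above; the module sizes and defect counts are hypothetical.

```python
# Defect density: defects per KLOC per module; clusters get more testing effort.
module_kloc = {"checkout": 12.0, "catalog": 30.0, "admin": 8.0}
defects_found = {"checkout": 18, "catalog": 9, "admin": 2}
for module, found in defects_found.items():
    print(f"{module}: {found / module_kloc[module]:.2f} defects/KLOC")

# Convergence: cumulative found vs. fixed counts should converge before release.
found_per_week = [5, 9, 7, 4, 2]
fixed_per_week = [2, 6, 8, 6, 4]
cum_found = cum_fixed = 0
for week, (f, x) in enumerate(zip(found_per_week, fixed_per_week), start=1):
    cum_found += f
    cum_fixed += x
    print(f"week {week}: found={cum_found} fixed={cum_fixed} open={cum_found - cum_fixed}")
```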
    3. Error
    4. Incident
      1. Defect which doesn't require a fix
      2. Invalid configuration
      3. Defect which does require a fix
    5. New
      1. Invalid
      2. Deferred
      3. Opened
        1. Submitted
          1. Build
          2. QA
          3. Verified
          4. Closed
          5. Archived
    6. Defect report
      1. Accurate
      2. Complete
      3. Objective
      4. Concise
    7. Classification information
      1. The activity that was occurring when the defect was found
      2. The phase in which the defect was introduced
      3. The phase in which defect was detected
      4. The ability of the tester to reproduce the defect
      5. The root cause of the problem
      6. The work product in which the mistake was made that caused the defect
      7. The type of the defect
      8. The symptom of the defect
      9. The likely cause of the defect
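A minimal sketch of a defect record carrying the classification fields listed above; the field names and the example values are hypothetical, chosen to match the root causes listed earlier.

```python
from dataclasses import dataclass

@dataclass
class DefectReport:
    summary: str                 # should be accurate, complete, objective, concise
    activity_when_found: str     # e.g. "regression testing"
    phase_introduced: str
    phase_detected: str
    reproducible: bool
    root_cause: str              # e.g. "wrong requirements"
    work_product: str            # where the mistake was made
    defect_type: str
    symptom: str
    likely_cause: str

report = DefectReport(
    summary="Discount not applied for loyalty members on orders over 100",
    activity_when_found="regression testing",
    phase_introduced="design",
    phase_detected="system test",
    reproducible=True,
    root_cause="wrong requirements",
    work_product="detailed design",
    defect_type="logic",
    symptom="incorrect result",
    likely_cause="incorrect design implementation",
)
print(report)
```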