1. 1. Fundamentals
    1. Test Principles
      1. 1. Testing is context dependent - Testing is done differently in different contexts. For example, safety-critical software is tested differently from an e-commerce site.
      2. 2. Exhaustive testing is impossible - Testing everything (all combinations of inputs and preconditions) is not feasible except for trivial cases. Instead of exhaustive testing, we use risks and priorities to focus testing efforts.
      3. 3. Early testing - Testing activities should start as early as possible in the software or system development life cycle and should be focused on defined objectives.
      4. 4. Defect clustering - A small number of modules contain most of the defects discovered during pre-release testing or show the most operational failures.
      5. 5. Pesticide paradox - If the same tests are repeated over and over again, eventually the same set of test cases will no longer find any new bugs. To overcome this 'pesticide paradox', the test cases need to be regularly reviewed and revised, and new and different tests need to be written to exercise different parts of the software or system to potentially find more defects.
      6. 6. Testing shows presence of defects - Testing can show that defects are present, but cannot prove that there are no defects. Testing reduces the probability of undiscovered defects remaining in the software but, even if no defects are found, it is not a proof of correctness.
      7. 7. Absence of errors fallacy - Finding and fixing defects does not help if the system built is unusable and does not fulfill the users' needs and expectations
    2. Problems
      1. Mistakes (see error)
        1. Error: A human action that produces an incorrect result.
          1. Defect: A flaw in a component or system that can cause the component or system to fail to perform its required function, e.g. an incorrect statement or data definition. A defect, if encountered during execution, may cause a failure of the component or system.
          2. Failure: Deviation of the component or system from its expected delivery, service or result.
      2. Causes:
        1. errors in the specification, design and implementation of the software and system
        2. errors in use of the system
        3. environmental conditions
        4. intentional damage
        5. potential consequences of earlier errors, intentional damage, defects and failures
    3. Test Process
      1. planning and control
        1. Project and test plans should include time to be spent on planning the tests, designing test cases, preparing for execution and evaluating status
        2. test policies
          1. gives rules for testing, e.g. 'we always review the design documents'
        3. test strategy
          1. test strategy is the overall high-level approach, e.g. 'system testing is carried out by an independent team reporting to the program quality manager. It will be risk-based and proceeds from a product (quality) risk analysis'
        4. Test planning tasks
          1. Determine the scope and risks and identify the objectives of testing
          2. Determine the test approach (techniques, test items, coverage, identifying and interfacing with the teams involved in testing, testware)
          3. Implement the test policy and/or the test strategy
          4. Determine the required test resources (e.g. people, test environment, PCs)
          5. Schedule test analysis and design tasks, test implementation, execution and evaluation
          6. Determine the exit criteria: we need to set criteria such as coverage criteria (for example, the percentage of statements in the software that must be executed during testing)
        5. Test control tasks (ongoing)
          1. Measure and analyze the results of reviews and testing
          2. Monitor and document progress, test coverage and exit criteria
          3. Provide information on testing
          4. Initiate corrective actions
          5. Make decisions
          6. to continue testing
          7. to stop testing
          8. to release the software
          9. to retain it for further work
          10. etc.
      2. analysis and design
        1. Review the test basis (such as the product risk analysis, requirements, architecture, design specifications, and interfaces), examining the specifications
        2. Identify test conditions based on analysis of test items, their specifications, and what we know about their behavior and structure
        3. Design the tests using techniques to help select representative tests based on the test conditions
        4. Evaluate testability of the requirements and system
        5. Design the test environment set-up and identify any required infrastructure and tools.
      3. implementation and execution
        1. Implementation tasks
          1. Develop and prioritize our test cases, using the techniques, and create test data for those tests
          2. Create test suites from the test cases for efficient test execution
          3. Implement and verify the environment
        2. Execution tasks
          1. Execute the test suites and individual test cases, following test procedures
          2. Log the outcome of test execution and record the identities and versions of the software under test, test tools and testware: report defects, test logs, etc.
          3. Compare actual results with expected results and report discrepancies as incidents
          4. Repeat test activities as a result of action taken for each discrepancy, e.g.:
            1. confirmation testing or re-testing
            2. regression testing
      4. evaluating exit criteria and reporting (should be set and evaluated for each test level)
        1. Check test logs against the exit criteria specified in test planning
        2. Assess if more tests are needed or if the exit criteria specified should be changed
        3. Write a test summary report for stakeholders
      5. test closure activities
        1. Check which planned deliverables we actually delivered and ensure all incident reports have been resolved through defect repair or deferral (left open); document the acceptance or rejection of the software system
        2. Finalize and archive testware, such as scripts, the test environment, and any other test infrastructure, for later reuse
        3. Hand over testware to the maintenance organization
        4. Evaluate how the testing went and analyze lessons learned for future releases and projects
    4. Test Process glossary
      1. Confirmation testing
        1. re-testing: Testing that runs test cases that failed the last time they were run, in order to verify the success of corrective actions.
      2. Exit criteria
        1. The set of generic and specific conditions, agreed upon with the stakeholders, for permitting a process to be officially completed. The purpose of exit criteria is to prevent a task from being considered completed when there are still outstanding parts of the task which have not been finished. Exit criteria are used to report against and to plan when to stop testing
      3. Incident
        1. Any event occurring that requires investigation
      4. Regression testing
        1. Testing of a previously tested program following modification to ensure that defects have not been introduced or uncovered in unchanged areas of the software, as a result of the changes made. It is performed when the software or its environment is changed
      5. Test basis
        1. All documents from which the requirements of a component or system can be inferred. The documentation on which the test cases are based. If a document can be amended only by way of formal amendment procedure, then the test basis is called a frozen test basis
      6. Test condition
        1. An item or event of a component or system that could be verified by one or more test cases, e.g. a function, transaction, feature, quality attribute, or structural element
      7. Test coverage
        1. The degree, expressed as a percentage, to which a specified coverage item has been exercised by a test suite
      8. Test data
        1. Data that exists (for example, in a database) before a test is executed, and that affects or is affected by the component or system under test
      9. Test execution
        1. The process of running a test on the component or system under test, producing actual result(s)
      10. Test log
        1. A chronological record of relevant details about the execution of tests
      11. Test plan
        1. A document describing the scope, approach, resources and schedule of intended test activities. It identifies amongst others test items, the features to be tested, the testing tasks, who will do each task, degree of tester independence, the test environment, the test design techniques and entry and exit criteria to be used, and the rationale for their choice, and any risks requiring contingency planning. It is a record of the test planning process
      12. Test strategy
        1. A high-level description of the test levels to be performed and the testing within those levels for an organization or programme (one or more projects)
      13. Test summary report
        1. A document summarizing testing activities and results. It also contains an evaluation of the corresponding test items against exit criteria
      14. Testware
        1. Artifacts produced during the test process required to plan, design, and execute tests, such as documentation, scripts, inputs, expected results, set-up and clear-up procedures, files, databases, environment, and any additional software or utilities used in testing
    5. The psychology of testing
      1. Glossary
        1. Independence
          1. Separation of responsibilities, which encourages the accomplishment of objective testing
      2. Levels of
        1. tests by the person who wrote the item under test
        2. tests by another person within the same team, such as another programmer
        3. tests by a person from a different organizational group, such as an independent test team
        4. tests designed by a person from a different organization or company, such as outsourced testing or certification by an external body
  2. 2. Testing throughout the software life cycle
    1. Glossary
      1. Section 2.1
        1. (Commercial) off-the-shelf software (COTS)
          1. A software product that is developed for the general market, i.e. for a large number of customers, and that is delivered to many customers in identical format
        2. incremental development model
          1. A development lifecycle where a project is broken into a series of increments, each of which delivers a portion of the functionality in the overall project requirements. The requirements are prioritized and delivered in priority order in the appropriate increment. In some (but not all) versions of this lifecycle model, each subproject follows a ‘mini V-model’ with its own design, coding and testing phases
        3. test level
          1. A group of test activities that are organized and managed together. A test level is linked to the responsibilities in a project. Examples of test levels are component test, integration test, system test and acceptance test
        4. validation
          1. Confirmation by examination and through provision of objective evidence that the requirements for a specific intended use or application have been fulfilled
        5. verification
          1. Confirmation by examination and through provision of objective evidence that specified requirements have been fulfilled
        6. V-model
          1. A framework to describe the software development lifecycle activities from requirements specification to maintenance. The V-model illustrates how testing activities can be integrated into each phase of the software development lifecycle
      2. Section 2.2
        1. alpha testing
          1. Simulated or actual operational testing by potential users/customers or an independent test team at the developers’ site, but outside the development organization. Alpha testing is often employed for off-the-shelf software as a form of internal acceptance testing
        2. beta testing
          1. Operational testing by potential and/or existing users/customers at an external site not otherwise involved with the developers, to determine whether or not a component or system satisfies the user/customer needs and fits within the business processes. Beta testing is often employed as a form of external acceptance testing for off-the-shelf software in order to acquire feedback from the market
        3. component testing
          1. The testing of individual software components
        4. driver
          1. A software component or test tool that replaces a component that takes care of the control and/or the calling of a component or system
        5. functional requirements
          1. A condition or capability needed by a user to solve a problem or achieve an objective that must be met or possessed by a system or system component to satisfy a contract, standard, specification, or other formally imposed document
        6. integration
          1. The process of combining components or systems into larger assemblies
        7. integration testing
          1. Testing performed to expose defects in the interfaces and in the interactions between integrated components or systems
        8. non-functional testing
          1. Testing the attributes of a component or system that do not relate to functionality, e.g. reliability, efficiency, usability, maintainability and portability
        9. operational testing
          1. Testing conducted to evaluate a component or system in its operational environment
        10. regulation acceptance testing (compliance testing)
          1. The process of testing to determine the compliance of the component or system
        11. robustness testing
          1. Testing to determine the robustness of the software product
        12. stub
          1. A skeletal or special-purpose implementation of a software component, used to develop or test a component that calls or is otherwise dependent on it. It replaces a called component
        13. system testing
          1. The process of testing an integrated system to verify that it meets specified requirements
        14. test-driven development
          1. A way of developing software where the test cases are developed, and often automated, before the software is developed to run those test cases
        15. test environment
          1. An environment containing hardware, instrumentation, simulators, software tools, and other support elements needed to conduct a test
        16. user acceptance testing
          1. Formal testing with respect to user needs, requirements, and business processes conducted to determine whether or not a system satisfies the acceptance criteria and to enable the user, customers or other authorized entity to determine whether or not to accept the system
      3. Section 2.3
        1. black-box testing
          1. Testing, either functional or non-functional, without reference to the internal structure of the component or system
        2. code coverage
          1. An analysis method that determines which parts of the software have been executed (covered) by the test suite and which parts have not been executed, e.g. statement coverage, decision coverage or condition coverage
        3. confirmation testing (re-testing)
        4. functional testing
          1. Testing based on an analysis of the specification of the functionality of a component or system
        5. interoperability testing
          1. The process of testing to determine the interoperability of a software product
        6. load testing
          1. A type of performance testing conducted to evaluate the behavior of a component or system with increasing load, e.g. numbers of parallel users and/or numbers of transactions, to determine what load can be handled by the component or system
        7. maintainability testing
          1. The process of testing to determine the maintainability of a software product
        8. performance testing
          1. The process of testing to determine the performance of a software product
        9. portability testing
          1. The process of testing to determine the portability of a software product
        10. regression testing
          1. Testing of a previously tested program following modification to ensure that defects have not been introduced or uncovered in unchanged areas of the software, as a result of the changes made. It is performed when the software or its environment is changed
        11. reliability testing
          1. The process of testing to determine the reliability of a software product
        12. security testing
          1. Testing to determine the security of the software product
        13. specification-based testing
          1. black box testing: Testing, either functional or non-functional, without reference to the internal structure of the component or system
        14. stress testing
          1. A type of performance testing conducted to evaluate a system or component at or beyond the limits of its anticipated or specified workloads, or with reduced availability of resources such as access to memory or servers
        15. structural testing
          1. white-box testing: Testing based on an analysis of the internal structure of the component or system
        16. test suite
          1. A set of several test cases for a component or system under test, where the post condition of one test is often used as the precondition for the next one.
        17. usability testing
          1. Testing to determine the extent to which the software product is understood, easy to learn, easy to operate and attractive to the users under specified conditions
        18. white-box testing
      4. Section 2.4
        1. impact analysis
        2. maintenance testing
          1. Testing the changes to an operational system or the impact of a changed environment to an operational system
    2. SOFTWARE DEVELOPMENT MODELS
      1. Glossary
        1. Verification is concerned with evaluating a work product, component or system to determine whether it meets the requirements set. In fact, verification focuses on the question 'Is the deliverable built according to the specification?'
        2. Validation is concerned with evaluating a work product, component or system to determine whether it meets the user needs and requirements. Validation focuses on the question 'Is the deliverable fit for purpose, e.g. does it provide a solution to the problem?'
      2. V-model
        1. Test levels
          1. component testing
          2. integration testing
          3. system testing
          4. acceptance testing
      3. Iterative life cycles
        1. incremental development models
          1. prototyping
          2. Rapid Application Development (RAD)
          3. formally a parallel development of functions and subsequent integration
          4. Validation with the RAD development process is thus an early and major activity
          5. Dynamic System Development Methodology [DSDM]
          6. is a refined RAD process that allows controls to be put in place in order to stop the process from getting out of control
          7. Rational Unified Process (RUP)
          8. agile development
          9. Extreme Programming (XP)
          10. generation of business stories to define the functionality
          11. demands an on-site customer for continual feedback and to define and carry out functional acceptance testing
          12. promotes pair programming and shared code ownership amongst the developers
          13. states that component test scripts shall be written before the code is written and that those tests should be automated
          14. states that integration and testing of the code shall happen several times a day
          15. states that we always implement the simplest solution to meet today's problems
        2. Example iteration
          1. Define
          2. Develop
          3. Build
          4. Test (test effort increases with each iteration)
          5. Implement
        3. Testing within a life cycle model
          1. for every development activity there is a corresponding testing activity
          2. each test level has test objectives specific to that level
          3. the analysis and design of tests for a given test level should begin during the corresponding development activity
          4. testers should be involved in reviewing documents as soon as drafts are available in the development cycle
    3. Test levels
      1. Component testing (unit) - module and program testing (e.g. modules, programs, objects, classes, etc.) that are separately testable
        1. Stub - is called from the software component to be tested
        2. driver - calls a component to be tested
        3. resource-behavior (e.g. memory leaks)
        4. performance
        5. robustness
        6. structural testing (e.g. decision coverage)
        7. In XP: test-first approach or test-driven development: prepare and automate test cases before coding
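        8. A minimal Python sketch (the function names are hypothetical, not from the syllabus) of the stub and driver roles above: the component under test calls a stub in place of a real dependency, while a driver calls the component and checks the result
           # component under test: computes a gross price using a tax-rate provider it calls
           def gross_price(net_price, get_tax_rate):
               return round(net_price * (1 + get_tax_rate()), 2)

           # stub: stands in for the real tax service called *from* the component under test
           def tax_rate_stub():
               return 0.20  # fixed, predictable rate for the test

           # driver: calls the component under test and checks the actual result
           def run_component_test():
               assert gross_price(10.00, tax_rate_stub) == 12.00
               print("component test passed")

           if __name__ == "__main__":
               run_component_test()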
      2. Integration testing - tests interfaces between components, interactions with different parts of a system or interfaces between systems
        1. component integration testing
        2. system integration testing
        3. Big-bang testing
          1. advantage: everything is finished before integration testing starts
          2. disadvantage: in general it is time-consuming and difficult to trace the cause of failures
        4. Top-down: testing takes place from top to bottom, following the control flow or architectural structure (e.g. starting from the GUI or main menu). Components or systems are substituted by stubs
        5. Bottom-up: testing takes place from the bottom of the control flow upwards. Components or systems are substituted by drivers.
        6. Functional incremental: integration and testing takes place on the basis of the functions or functionality, as documented in the functional specification
      3. System testing - concerned with the behavior of the whole system/product as defined by the scope of a development project or product. Requires a controlled test environment
        1. functional
        2. non-functional
          1. performance
          2. reliability
        3. Specification-based (black-box) techniques
          1. Decision table may be created for combinations of effects described in business rules
        4. Structure-based (white-box) techniques
      4. Acceptance testing - requires an 'as-if production' environment
        1. Questions
          1. 'Can the system be released?'
          2. 'What, if any, are the outstanding (business) risks?'
          3. 'Has development met their obligations?'
        2. Goals
          1. to establish confidence in the system, part of the system or specific non-functional characteristics e.g. usability, of the system.
          2. to determine whether the system is fit for purpose.
          3. to assess the system's readiness for deployment and use, e.g. a large-scale system integration test may come after the acceptance of a system
        3. In other levels
          1. A Commercial Off The Shelf (COTS) software product may be acceptance tested when it is installed or integrated
          2. Acceptance testing of the usability of a component may be done during component testing
          3. Acceptance testing of a new functional enhancement may come before system testing
        4. Types of acceptance testing
          1. Operational acceptance test (or production acceptance test)
            1. testing of backup/restore
            2. disaster recovery
            3. maintenance tasks
            4. periodic check of security vulnerabilities
          2. Contract acceptance testing
            1. performed against a contract's acceptance criteria for producing custom-developed software
          3. Compliance acceptance testing (or regulation acceptance testing)
            1. performed against the regulations which must be adhered to, such as governmental, legal or safety regulations
          4. Two stages of acceptance testing of COTS
            1. Alpha testing at the developer's site
              1. a cross-section of potential users and members of the developer's organization are invited to use the system
            2. Beta testing or field testing
              1. sends the system to a cross-section of users who install it and use it under real-world working conditions
    4. Test types
      1. Functional testing (often called 'black-box' testing) - 'what it does'
        1. Based upon ISO 9126, can be done focusing on:
          1. suitability
          2. interoperability
          3. security
          4. accuracy
          5. compliance
        2. Perspectives:
          1. requirements-based
          2. business-process-based
          3. experienced-based
      2. Non-functional testing 'how well' the system works
        1. Characteristics (ISO/IEC 9126, 2001)
          1. functionality
            1. suitability
            2. accuracy
            3. security
            4. interoperability
            5. compliance
          2. reliability
            1. maturity (robustness)
            2. fault-tolerance
            3. recoverability
            4. compliance
          3. usability
            1. understandability
            2. learnability
            3. operability
            4. attractiveness
            5. compliance
          4. efficiency
            1. time behavior (performance)
            2. resource utilization
            3. compliance
          5. maintainability
            1. analyzability
            2. changeability
            3. stability
            4. testability
            5. compliance
          6. portability
            1. adaptability
            2. installability
            3. co-existence
            4. replaceability
            5. compliance
        2. Types
          1. portability testing
          2. reliability testing
          3. maintainability testing
          4. usability testing
          5. stress testing
          6. load testing
          7. performance testing
      3. Structural testing ('white-box')
      4. Testing related to changes
        1. Confirmation testing (re-testing)
        2. Regression testing
    5. Maintenance testing (testing an operational system) - different from maintainability testing, which measures how easy it is to maintain the system
      1. Levels:
        1. component test
        2. integration test
        3. system test
        4. acceptance test
      2. Parts
        1. testing the changes
        2. regression tests, based on:
          1. Impact analysis
          2. Risk analysis
      3. Triggers
        1. modifications
          1. Planned modifications - about 90% of the work (enhancement changes, e.g. release-based)
            1. perfective modifications (adapting software to the user's wishes, for instance by supplying new functions or enhancing performance)
            2. adaptive modifications (adapting software to environmental changes such as new hardware, new systems software or new legislation)
            3. corrective planned modifications (deferrable correction of defects)
          2. Ad-hoc corrective and emergency changes ('patching up')
          3. Planned changes of environment
            1. planned OS or DB upgrades
            2. patches to newly exposed or discovered vulnerabilities of the OS
        2. migration (from one platform to another)
          1. operational testing of the new environment
          2. testing of the changed software
        3. retirement of the system
          1. the testing of data migration or archiving
  3. 3. Static techniques
    1. 3.1 REVIEWS AND THE TEST PROCESS
      1. Glossary
        1. static testing
          1. Testing of a component or system at specification or implementation level without execution of that software, e.g. reviews or static analysis
        2. dynamic testing
          1. Testing that involves the execution of the software of a component or system
        3. reviews
          1. An evaluation of a product or project status to ascertain discrepancies from planned results and to recommend improvements. Examples include management review, informal review, technical review, inspection, and walkthrough
      2. Types of defects
        1. deviations from standards
        2. missing requirements
        3. design defects
        4. non-maintainable code
        5. inconsistent interface specifications
      3. Advantages of reviews:
        1. early feedback on quality issues
        2. cheap fix and improvement
        3. development productivity increases
        4. exchange of information between the participants
        5. increased awareness of quality issues
    2. 3.2 REVIEW PROCESS
      1. Glossary
        1. entry criteria
          1. The set of generic and specific conditions for permitting a process to go forward with a defined task, e.g. test phase. The purpose of entry criteria is to prevent a task from starting which would entail more (wasted) effort compared to the effort needed to remove the failed entry criteria
        2. exit criteria
          1. The set of generic and specific conditions, agreed upon with the stakeholders, for permitting a process to be officially completed. The purpose of exit criteria is to prevent a task from being considered completed when there are still outstanding parts of the task which have not been finished. Exit criteria are used to report against and to plan when to stop testing
        3. formal review
          1. A review characterized by documented procedures and requirements, e.g. inspection
        4. informal review
          1. A review not based on a formal (documented) procedure
        5. inspection
          1. A type of peer review that relies on visual examination of documents to detect defects, e.g. violations of development standards and non-conformance to higher level documentation. The most formal review technique and therefore always based on a documented procedure
        6. moderator
          1. The leader and main person responsible for an inspection or other review process
        7. reviewer
          1. The person involved in the review that identifies and describes anomalies in the product or project under review. Reviewers can be chosen to represent different viewpoints and roles in the review process
        8. scribe
          1. The person who records each defect mentioned and any suggestions for process improvement during a review meeting, on a logging form. The scribe should ensure that the logging form is readable and understandable
        9. technical review
          1. A peer group discussion activity that focuses on achieving consensus on the technical approach to be taken
        10. walkthrough
          1. A step-by-step presentation by the author of a document in order to gather information and to establish a common understanding of its content
      2. 3.2.1 Phases of a formal review
        1. informal
          1. not documented
        2. formal
          1. Phases
            1. Planning
              1. entry criteria:
                1. A short check of a product sample by the moderator (or expert) does not reveal a large number of major defects. For example, after 30 minutes of checking, no more than 3 major defects are found on a single page or fewer than 10 major defects in total in a set of 5 pages.
                2. The document to be reviewed is available with line numbers
                3. The document has been cleaned up by running any automated checks that apply
                4. References needed for the inspection are stable and available
                5. The document author is prepared to join the review team and feels confident with the quality of the document
              2. Focuses:
                1. focus on higher-level documents
                2. focus on standards
                3. focus on related documents at the same level
                4. focus on usage
            2. Kick-off
            3. Preparation
              1. checking rate (pages per hour)
            4. Review meeting
              1. Phases
                1. logging phase
                2. discussion phase
                3. decision phase
              2. Severity classes
                1. Critical: defects will cause downstream damage; the scope and impact of the defect is beyond the document under inspection
                2. Major: defects could cause a downstream effect (e.g. a fault in a design can result in an error in the implementation)
                3. Minor: defects are not likely to cause downstream damage (e.g. non-compliance with the standards and templates)
              3. exit criteria
                1. the average number of critical and/or major defects found per page (e.g. no more than three critical/major defects per page)
            5. Rework
            6. Follow-up
      3. 3.2.2 Roles and responsibilities
        1. The moderator
          1. (or review leader) leads the review process
          2. determines, in co-operation with the author, the type of review
          3. approach
          4. the composition of the review team
          5. performs the entry check
          6. follow-up on the rework
          7. schedules the meeting
          8. disseminates documents before the meeting
          9. coaches other team members
          10. paces the meeting
          11. leads possible discussions
          12. stores the data that is collected
        2. The author
          1. learn as much as possible with regard to improving the quality of the document
          2. to improve his or her ability to write future documents
          3. to illuminate unclear areas
          4. to understand the defects found
        3. The scribe (or recorder)
          1. often the author
          2. to record each defect mentioned and any suggestions for process improvement
        4. The reviewers (also called checkers or inspectors)
          1. to check any material for defects
        5. The manager
          1. decides on the execution of reviews
          2. allocates time in project schedules
          3. determines whether review process objectives have been met
          4. take care of any review training requested by the participants
          5. can also be involved in the review itself depending on his or her background, playing the role of a reviewer
      4. 3.2.3 Types of review
        1. Walkthrough
          1. the author guides the participants through the document
          2. useful for higher-level documents, such as requirement specifications and architectural documents
          3. goals
            1. to present the document to stakeholders both within and outside the software discipline
            2. to explain (knowledge transfer) and evaluate the contents of the document
            3. to establish a common understanding of the document
            4. to examine and discuss the validity of proposed solutions and the viability of alternatives, establishing consensus
          4. Key characteristics
            1. The meeting is led by the author; often a separate scribe is present
            2. Scenarios and dry runs may be used to validate the content
            3. Separate pre-meeting preparation for reviewers is optional
        2. Technical review
          1. discussion meeting that focuses on achieving consensus about the technical content of a document
          2. Compared to inspections, technical reviews are less formal
          3. little or no focus on defect identification on the basis of referenced documents
          4. goals
            1. assess the value of technical concepts and alternatives in the product and project environment
            2. establish consistency in the use and representation of technical concepts
            3. ensure, at an early stage, that technical concepts are used correctly
            4. inform participants of the technical content of the document
          5. Key characteristics
            1. It is a documented defect-detection process that involves peers and technical experts
            2. It is often performed as a peer review without management participation
            3. Ideally it is led by a trained moderator, but possibly also by a technical expert
            4. A separate preparation is carried out during which the product is examined and the defects are found
            5. More formal characteristics such as the use of checklists and a logging list or issue log are optional
        3. Inspection
          1. the most formal review type
          2. Weinberg's concept of egoless engineering
          3. the emphasis depends on the goals
            1. if time to market is extremely important, the emphasis in inspections will be on efficiency
            2. in a safety-critical market, the focus will be on effectiveness
          4. goals
            1. help the author to improve the quality of the document under inspection
            2. remove defects efficiently, as early as possible
            3. improve product quality, by producing documents with a higher level of quality
            4. create a common understanding by exchanging information among the inspection participants
            5. train new employees in the organization's development process
            6. learn from defects found and improve processes in order to prevent recurrence of similar defects
            7. sample a few pages or sections from a larger document in order to measure the typical quality of the document, leading to improved work by individuals in the future, and to process improvements
          5. Key characteristics
            1. It is usually led by a trained moderator (certainly not by the author)
            2. It uses defined roles during the process
            3. It involves peers to examine the product
            4. Rules and checklists are used during the preparation phase
            5. A separate preparation is carried out during which the product is examined and the defects are found
            6. The defects found are documented in a logging list or issue log
            7. A formal follow-up is carried out by the moderator applying exit criteria
            8. Optionally, a causal analysis step is introduced to address process improvement issues and learn from the defects found
            9. Metrics are gathered and analyzed to optimize the process
      5. 3.2.4 Success factors for reviews
        1. Find a 'champion'
        2. Pick things that really count
        3. Explicitly plan and track review activities
        4. Train participants
        5. Manage people issues
        6. Follow the rules but keep it simple
        7. Continuously improve process and tools
        8. Report results
          1. quantify the benefits as well as the costs
        9. Just do it!
    3. 3.3 STATIC ANALYSIS BY TOOLS
      1. Glossary
        1. compiler
          1. A software tool that translates programs expressed in a high order language into their machine language equivalents
        2. cyclomatic complexity
          1. The number of independent paths through a program. Cyclomatic complexity is defined as L - N + 2P, where L = the number of edges/links in a graph, N = the number of nodes in a graph, and P = the number of disconnected parts of the graph (e.g. a called graph or subroutine)
        3. control flow
          1. A sequence of events (paths) in the execution through a component or system
        4. data flow
          1. An abstract representation of the sequence and possible changes of the state of data objects, where the state of an object is any of: creation, usage, or destruction
        5. static analysis
          1. Analysis of software artifacts, e.g. requirements or code, carried out without execution of these software development artifacts. Static analysis is usually carried out by means of a supporting tool
      2. Differences from dynamic testing
        1. Static analysis is performed on requirements, design or code without actually executing the software artifact being examined
        2. Static analysis is ideally performed before the formal reviews described earlier
        3. Static analysis is unrelated to dynamic properties of the requirements, design and code, such as test coverage
        4. The goal of static analysis is to find defects, whether or not they may cause failures. As with reviews, static analysis finds defects rather than failures
      3. 3.3.1 Coding standards
      4. 3.3.2 Code metrics
        1. Complexity metrics identify high risk, complex areas
        2. The cyclomatic complexity metric is based on the number of decisions in a program
          1. It is important to testers because it provides an indication of the amount of testing (including reviews) necessary to practically avoid defects
          2. While there are many ways to calculate cyclomatic complexity, the easiest way is to sum the number of binary decision statements (e.g. if, while, for, etc.) and add 1 to it
        3. The control flow
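        4. An illustrative Python sketch (the function is hypothetical) of the 'binary decisions + 1' rule above
           # hypothetical function with three binary decision points: if, for, if
           def classify(values):
               result = []
               if not values:                     # decision 1
                   return result
               for value in values:               # decision 2 (loop condition)
                   if value < 0:                  # decision 3
                       result.append("negative")
                   else:
                       result.append("non-negative")
               return result

           # cyclomatic complexity = binary decisions + 1 = 3 + 1 = 4,
           # suggesting at least four test cases to cover the independent paths
           print(classify([]), classify([-1, 2]))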
      5. 3.3.3 Code structure
        1. aspects
          1. control flow structure
            1. the sequence in which the instructions are executed
            2. reflects the iterations and loops in a program's design
            3. can also be used to identify unreachable (dead) code
            4. a number of code metrics, e.g.:
              1. number of nested levels
              2. cyclomatic complexity
          2. data flow structure
            1. follows the trail of a data item as it is accessed and modified by the code
            2. how the data act as they are transformed by the program
            3. defects that can be found:
              1. referencing a variable with an undefined value
              2. variables that are never used
          3. data structure
            1. the organization of the data itself, independent of the program
            2. provides a lot of information about the difficulty in writing programs to handle the data and in designing test cases to show program correctness
      6. the value of static analysis
        1. early detection of defects prior to test execution
        2. early warning about suspicious aspects of the code, design or requirements
        3. identification of defects not easily found in dynamic testing
        4. improved maintainability of code and design since engineers work according to documented standards and rules
        5. prevention of defects, provided that engineers are willing to learn from their errors and continuous improvement is practised
  4. 4. Test design techniques
    1. 4.1 IDENTIFYING TEST CONDITIONS AND DESIGNING TEST CASES Test Documentation Standard [IEEE829]
      1. Glossary
        1. test case
          1. A set of input values, execution preconditions, expected results and execution postconditions, developed for a particular objective or test condition, such as to exercise a particular program path or to verify compliance with a specific requirement
        2. test case specification
          1. A document specifying a set of test cases (objective, inputs, test actions, expected results, and execution preconditions) for a test item
        3. test condition
          1. An item or event of a component or system that could be verified by one or more test cases, e.g. a function, transaction, feature, quality attribute, or structural element
        4. test data
          1. Data that exists (for example, in a database) before a test is executed, and that affects or is affected by the component or system under test
        5. test procedure specification
          1. A document specifying a sequence of actions for the execution of a test. Also known as test script or manual test script
        6. test script
          1. Commonly used to refer to a test procedure specification, especially an automated one
        7. traceability
          1. The ability to identify related items in documentation and software, such as requirements with associated tests
      2. 4.1.1 Introduction
        1. test conditions
          1. documented in a Test Design Specification
        2. test cases
          1. documented in a Test Case Specification
        3. test procedures (or scripts)
          1. documented in a Test Procedure Specification (also known as a test script or a manual test script)
      3. 4.1.2 Formality of test documentation
      4. 4.1.3 Test analysis: identifying test conditions
        1. A test condition is simply something that we could test
        2. the basic ideas
          1. [Marick, 1994]: 'test requirements' as things that should be tested
          2. [Hutcheson, 2003]: 'test inventory' as a list of things that could be tested
          3. [Craig, 2002]: 'test objectives' as broad categories of things to test and 'test inventories' as the actual list of things that need to be tested
          4. ISTQB: test condition
        3. traceability
          1. Test conditions should be able to be linked back to their sources in the test basis
          2. horizontal
          3. through all the test documentation for a given test level (e.g. system testing, from test conditions through test cases to test scripts)
          4. vertical
          5. through the layers of development documentation (e.g. from requirements to components)
        4. Test conditions can be identified for test data as well as for test inputs and test outcomes
        5. IEEE 829 STANDARD: TEST DESIGN SPECIFICATION
      5. 4.1.4 Test design: specifying test cases
        1. IEEE 829 Standard for Test Documentation
        2. Oracle - a source of information about the correct behavior of the system
        3. IEEE 829 STANDARD: TEST CASE SPECIFICATION
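        4. An illustrative Python sketch (not the IEEE 829 template itself; the names are hypothetical) of how the test case fields defined above - objective, preconditions, inputs and expected result - might be captured as structured data
           from dataclasses import dataclass, field

           @dataclass
           class TestCase:
               identifier: str
               objective: str                          # the test condition being covered
               preconditions: list = field(default_factory=list)
               inputs: dict = field(default_factory=dict)
               expected_result: str = ""

           tc = TestCase(
               identifier="TC-LOGIN-001",
               objective="Verify login is rejected for an unknown user",
               preconditions=["system is running", "user 'ghost' does not exist"],
               inputs={"username": "ghost", "password": "irrelevant"},
               expected_result="error message 'unknown user' is shown",
           )
           print(tc.identifier, "-", tc.objective)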
      6. 4.1.5 Test implementation: specifying test procedures or scripts
        1. test procedure, in IEEE 829 also referred to as a test script
          1. The document that describes the steps to be taken in running a set of tests (and specifies the executable order of the tests)
    2. 4.2 CATEGORIES OF TEST DESIGN TECHNIQUES
      1. Glossary
        1. white-box test design techniques
          1. Procedure to derive and/or select test cases based on an analysis of the internal structure of a component or system
        2. experience-based test design techniques
          1. Procedure to derive and/or select test cases based on the tester’s experience, knowledge and intuition
        3. specification-based test design techniques
          1. Black box test design technique: Procedure to derive and/or select test cases based on an analysis of the specification, either functional or non-functional, of a component or system without reference to its internal structure.
        4. structure-based test design techniques
          1. See white box test design technique
        5. white-box test design techniques
          1. Procedure to derive and/or select test cases based on an analysis of the internal structure of a component or system
      2. 3 types or categories of test design technique, distinguished by their primary source:
        1. a specification
        2. the structure of the system or component
        3. a person's experience
      3. Techniques
        1. 4.2.2 Static testing techniques (see Chapter 3)
        2. Dynamic techniques
          1. Specification-based (black-box) testing techniques, also known as behavioral techniques or input/output-driven testing techniques - they look at what the software does, not how it does it
            1. functional techniques
            2. non-functional techniques (i.e. quality characteristics)
            3. appropriate at all levels of testing (component testing through to acceptance testing) where a specification exists
          2. 4.2.4 Structure-based (white-box) testing techniques (or structural techniques)
            1. can also be used at all levels of testing
          3. 4.2.5 Experience-based testing techniques
            1. used to complement other techniques
            2. used when there is no specification, or if the specification is inadequate or out of date
            3. may be the only type of technique used for low-risk systems
    3. 4.3 SPECIFICATION-BASED OR BLACK-BOX TECHNIQUES
      1. Glossary
        1. boundary value analysis
          1. A black box test design technique in which test cases are designed based on boundary values
        2. decision table testing
          1. A black box test design technique in which test cases are designed to execute the combinations of inputs and/or stimuli (causes) shown in a decision table
        3. equivalence partitioning
          1. A black box test design technique in which test cases are designed to execute representatives from equivalence partitions. In principle test cases are designed to cover each partition at least once
        4. state transition testing
          1. A black box test design technique in which test cases are designed to execute valid and invalid state transitions
        5. use case testing
          1. A black box test design technique in which test cases are designed to execute scenarios of use cases
      2. 4.3.1 Equivalence partitioning and boundary value analysis
        1. Equivalence partitions are also known as equivalence classes
        2. Boundary value analysis (BVA) is based on testing at the boundaries between partitions
        3. Designing test cases
          1. both equivalence partitioning and boundary value analysis
          2. Invalid inputs are separate test cases
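        4. A small Python sketch (hypothetical requirement: a field accepts whole numbers from 1 to 99) showing one test per equivalence partition plus tests at each side of every boundary
           # hypothetical requirement: the field accepts whole numbers from 1 to 99 inclusive
           def is_accepted(value):
               return 1 <= value <= 99

           # equivalence partitioning: one representative value per partition
           partition_tests = {-7: False,    # invalid partition: below the range
                              50: True,     # valid partition: inside the range
                              120: False}   # invalid partition: above the range

           # boundary value analysis: values on each side of the two boundaries
           boundary_tests = {0: False, 1: True, 99: True, 100: False}

           for value, expected in {**partition_tests, **boundary_tests}.items():
               assert is_accepted(value) == expected
           print("all EP/BVA tests passed")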
      3. 4.3.2 Decision table testing ('cause-effect' table)
        1. combinations of things (e.g. inputs, conditions, etc.)
        2. other techniques with combination:
          1. pairwise testing
          2. orthogonal arrays
        3. Creating a table listing all the combinations of True and False for each of the aspects
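        4. A Python sketch of decision table testing for a hypothetical business rule ('a discount is given only to members who spend at least 100'): each column of True/False combinations becomes one test case
           # hypothetical business rule under test
           def gets_discount(is_member, amount):
               return is_member and amount >= 100

           # decision table: conditions (is_member, amount >= 100) -> expected action (discount?)
           decision_table = [
               (True,  True,  True),
               (True,  False, False),
               (False, True,  False),
               (False, False, False),
           ]

           for is_member, spends_enough, expected in decision_table:
               amount = 150 if spends_enough else 50   # pick an input matching the condition
               assert gets_discount(is_member, amount) == expected
           print("every decision table column tested")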
      4. 4.3.3 State transition testing ('finite state machine')
        1. four basic parts:
          1. the states that the software may occupy (open/closed or funded/insufficient funds)
          2. the transitions from one state to another (not all transitions are allowed)
          3. the events that cause a transition (closing a file or withdrawing money)
          4. the actions that result from a transition (an error message or being given your cash)
        2. model can be as detailed or as abstract as you need it to be
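        3. A Python sketch of a hypothetical two-state model (a document that is either 'closed' or 'open') used to exercise the valid transitions and to check that an invalid transition is rejected
           # (state, event) -> new state; anything not listed is an invalid transition
           TRANSITIONS = {
               ("closed", "open_file"):  "open",
               ("open",   "close_file"): "closed",
           }

           class Document:
               def __init__(self):
                   self.state = "closed"              # initial state

               def handle(self, event):
                   key = (self.state, event)
                   if key not in TRANSITIONS:         # event not allowed in this state
                       raise ValueError(f"{event!r} not allowed in state {self.state!r}")
                   self.state = TRANSITIONS[key]      # action: move to the new state

           doc = Document()
           doc.handle("open_file")                    # valid: closed -> open
           assert doc.state == "open"
           doc.handle("close_file")                   # valid: open -> closed
           assert doc.state == "closed"
           try:
               doc.handle("close_file")               # invalid: already closed
           except ValueError:
               print("invalid transition correctly rejected")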
      5. 4.3.4 Use case testing
        1. is a technique that helps us identify test cases that exercise the whole system on a transaction by transaction basis from start to finish
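        2. A Python sketch of a use-case-style test for a hypothetical 'withdraw cash' use case, exercising the main success scenario and one alternative (insufficient funds) from start to finish
           # hypothetical system under test: a very small account model
           class Account:
               def __init__(self, balance):
                   self.balance = balance

               def withdraw(self, amount):
                   if amount > self.balance:
                       return "insufficient funds"    # alternative path
                   self.balance -= amount
                   return "cash dispensed"            # main success path

           account = Account(balance=100)
           assert account.withdraw(40) == "cash dispensed"       # main scenario, end to end
           assert account.balance == 60
           assert account.withdraw(500) == "insufficient funds"  # alternative scenario
           assert account.balance == 60                          # balance unchanged
           print("use case scenarios passed")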
    4. 4.4 STRUCTURE-BASED OR WHITE-BOX TECHNIQUES
      1. Glossary
        1. code coverage
          1. An analysis method that determines which parts of the software have been executed (covered) by the test suite and which parts have not been executed, e.g. statement coverage, decision coverage or condition coverage
        2. decision coverage
          1. The percentage of decision outcomes that have been exercised by a test suite. 100% decision coverage implies both 100% branch coverage and 100% statement coverage
        3. statement coverage
          1. The percentage of executable statements that have been exercised by a test suite
        4. structural testing
          1. See white box testing
        5. structure-based testing
          1. See white-box testing
        6. white-box testing
          1. Testing based on an analysis of the internal structure of the component or system
      2. 4.4.1 Using structure-based techniques to measure coverage and design tests
        1. Two purposes:
          1. test coverage measurement (but note: 100% coverage does not mean 100% tested!)
          2. structural test case design
        2. Types of coverage
          1. For testing levels
            1. at integration level: coverage of interfaces, specific interactions
            2. at system or acceptance level the coverage items may be:
              1. requirements
              2. menu options
              3. screens
              4. typical business transactions
              5. database structural elements (records, fields and sub-fields) and files
          2. For specification-based techniques
            1. EP: percentage of equivalence partitions exercised
            2. BVA: percentage of boundaries exercised
            3. Decision tables: percentage of business rules or decision table columns tested
            4. State transition testing:
              1. percentage of states visited
              2. percentage of (valid) transitions exercised (this is known as Chow's 0-switch coverage)
              3. percentage of pairs of valid transitions exercised ('transition pairs' or Chow's 1-switch coverage); similarly transition triples, transition quadruples, etc.
              4. percentage of invalid transitions exercised (from the state table)
        3. Instrumentation to measure coverage
          1. 1 Decide on the structural element to be used, i.e. the coverage items to be counted
          2. 2 Count the structural elements or items.
          3. 3 Instrument the code.
          4. 4 Run the tests for which coverage measurement is required.
          5. 5 Using the output from the instrumentation, determine the percentage of elements or items exercised.
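          6. A minimal Python sketch of steps 3-5: a hypothetical function instrumented to record which of its numbered statements run, with coverage reported as a percentage
             executed = set()   # the instrumentation records which statement numbers were run

             def grade(score):
                 executed.add(1); result = "fail"        # statement 1
                 if score >= 50:
                     executed.add(2); result = "pass"    # statement 2
                 executed.add(3); return result          # statement 3

             TOTAL_STATEMENTS = 3
             grade(30)                                   # exercises statements 1 and 3 only
             coverage = 100 * len(executed) / TOTAL_STATEMENTS
             print(f"statement coverage: {coverage:.0f}%")   # 67% - statement 2 is still untested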
      3. 4.4.2 Statement coverage and statement testing
        1. Example
          1. Pseudo-code:
             1 READ A
             2 READ B
             3 C = A + 2*B
             4 IF C > 50 THEN
             5    PRINT "large C"
             6 ENDIF
          2. 100% statement coverage is achieved with a single test - Test 1_4: A = 20, B = 25 (C = 70, so the IF is True and all six statements are executed)
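        2. A runnable Python sketch of the same example; the single test gives C = 70, so the IF branch is taken and every statement is executed
           def print_large_c(a, b):
               c = a + 2 * b           # C = A + 2*B
               if c > 50:
                   print("large C")    # only runs when c > 50
               return c

           assert print_large_c(20, 25) == 70   # Test 1_4: all statements executed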
      4. 4.4.3 Decision coverage and decision testing
        1. 'subsumes' statement coverage - this means that 100% decision coverage always guarantees 100% statement coverage, but not the other way around!
        2. But to achieve 100% decision coverage, at least 2 test cases are necessary to cover both True and False
        3. Example
          1. Pseudo-code:
             1 READ A
             2 READ B
             3 C = A - 2*B
             4 IF C < 0 THEN
             5    PRINT "C negative"
             6 ENDIF
          2. 100% decision coverage is achieved with two tests - Test 2_1: A = 20, B = 15 (C = -10, decision True) and Test 2_2: A = 10, B = 2 (C = 6, decision False)
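        4. A runnable Python sketch of the same example; Test 2_1 makes the decision True and Test 2_2 makes it False, so together they give 100% decision coverage
           def report_negative_c(a, b):
               c = a - 2 * b
               if c < 0:                  # the decision under test
                   print("C negative")    # True outcome
               return c                   # False outcome falls through

           assert report_negative_c(20, 15) == -10   # Test 2_1: decision True
           assert report_negative_c(10, 2) == 6      # Test 2_2: decision False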
      5. 4.4.4 Other structure-based techniques
        1. branch coverage
          1. Branch coverage measures the coverage of both conditional and unconditional branches, whilst decision coverage measures the coverage of conditional branches only
        2. linear code sequence and jump (LCSAJ) coverage
        3. condition coverage
        4. multiple condition coverage (condition combination coverage)
        5. condition determination coverage (multiple condition decision coverage or modified condition decision coverage, MCDC)
        6. path coverage or 'independent path segment coverage'
    5. 4.5 EXPERIENCE-BASED TECHNIQUES
      1. Glossary
        1. error guessing
          1. A test design technique where the experience of the tester is used to anticipate what defects might be present in the component or system under test as a result of errors made, and to design tests specifically to expose them
        2. exploratory testing
          1. An informal test design technique where the tester actively controls the design of the tests as those tests are performed and uses information gained while testing to design new and better tests
      2. 4.5.1 Error guessing
        1. A structured approach to the error-guessing technique is to list possible defects or failures and to design tests that attempt to produce them
        2. can be built based on
          1. the tester's own experience
          2. experience of other people
          3. available defect and failure data
          4. from common knowledge about why software fails
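        3. A Python sketch of the structured approach above for a hypothetical parse_quantity function: list inputs that experience says often expose defects, then turn each into a test
           def parse_quantity(text):
               """Hypothetical function under test: parse a positive integer quantity."""
               value = int(text)
               if value <= 0:
                   raise ValueError("quantity must be positive")
               return value

           # error-guessing list: empty, zero, negative, non-numeric, padded, extremely long
           suspect_inputs = ["", "0", "-1", "abc", " 7 ", "9" * 30]

           for text in suspect_inputs:
               try:
                   print(f"{text!r} -> {parse_quantity(text)}")
               except ValueError as exc:
                   print(f"{text!r} -> rejected ({exc})")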
      3. 4.5.2 Exploratory testing
        1. is a hands-on approach in which testers are involved in minimum planning and maximum test execution
        2. A key aspect of exploratory testing is learning
        3. Books
          1. Kaner, 2002
          2. Copeland, 2003
          3. Whittaker, 2002 ('attacks')
    6. 4.6 CHOOSING A TEST TECHNIQUE
      1. The best testing technique is no single testing technique
      2. Internal factors
        1. Models used
          1. The models available (i.e. developed and used during the specification) will govern which testing techniques can be used
        2. Tester knowledge and experience
          1. How much testers know about the system and about testing techniques
        3. Likely defects
          1. Knowledge of the likely defects (since each technique is good at finding a particular type of defect)
        4. Test objective
          1. simply to gain confidence? - Use cases
          2. thorough testing? - more rigorous and detailed techniques
        5. Documentation
          1. Whether or not documentation exists and whether it is up to date
          2. The content and style of the documentation will also influence the choice of techniques
        6. Life cycle model
          1. A sequential life cycle model? - more formal techniques
          2. An iterative life cycle model? - an exploratory testing approach
      3. External factors
        1. Risk
          1. The greater the risk (e.g. safety-critical systems)? - more thorough and more formal testing
          2. Commercial risk, which can be influenced by:
          3. quality issues? - more thorough testing
          4. time-to-market issues? - exploratory testing
        2. Customer and contractual requirements
          1. Contracts may specify particular testing techniques (most commonly statement or branch coverage)
        3. Type of system
          1. Financial application? - boundary value analysis
        4. Regulatory requirements
          1. regulatory standards or guidelines
          2. the aircraft industry
          3. equivalence partitioning
          4. boundary value analysis
          5. state transition testing
          6. statement, decision or modified condition decision coverage
        5. Time and budget
  5. 5. Test management
    1. 5.1 TEST ORGANIZATION
      1. Glossary
        1. tester
          1. A skilled professional who is involved in the testing of a component or system
        2. test leader
          1. See test manager
        3. test manager
          1. The person responsible for project management of testing activities and resources, and evaluation of a test object. The individual who directs, controls, administers, plans and regulates the evaluation of a test object
      2. 5.1.1 Independent and integrated testing
        1. Independent Test Team
          1. benefits
          2. see more defects
          3. brings a different set of assumptions to testing and to reviews
          4. brings a skeptical attitude of professional pessimism
          5. reporting to a senior or executive manager
          6. a separate budget
          7. risks
          8. interpersonal isolation
          9. stakeholders might see the test team as a bottleneck and a source of delay
          10. programmers can abdicate their responsibility for quality
      3. 5.1.2 Working as a test leader
        1. planning, monitoring, and control of the testing activities
        2. devise the test objectives, organizational test policies (if not already in place), test strategies and test plans
        3. estimate the testing to be done and negotiate with management to acquire the necessary resources
        4. recognize when test automation is appropriate and, if so, plan the effort, select the tools and ensure training of the team
        5. consult with other groups to help them with their testing
        6. lead, guide and monitor the analysis, design, implementation and execution of the test cases, test procedures and test suites
        7. ensure proper configuration management of the testware produced and traceability of the tests to the test basis
        8. make sure the test environment is put into place before test execution and managed during test execution
        9. schedule the tests for execution and monitor, measure and control the test execution
        10. report on the test progress, the product quality status and the test results, adapting the test plan and compensating as needed to adjust to evolving conditions
        11. During test execution and as the project winds down, they write summary reports on test status
      4. 5.1.3 Working as a tester
        1. In the planning and preparation phases
          1. review and contribute to test plans
          2. reviewing and assessing requirements and design specifications
        2. identifying test conditions and creating
          1. test designs
          2. test cases
          3. test procedure specifications
          4. test data
          5. automate or help to automate the tests
          6. set up the test environments
          7. assist system administration and network management staff
        3. test execution
          1. execute and log the tests
          2. evaluate the results
          3. document problems found
          4. monitor the testing and the test environment
          5. gather performance metrics
          6. review each other's work, incl. test specifications, defect reports and test results
      5. 5.1.4 Defining the skills test staff need
        1. Application or business domain:
          1. the intended behavior, the problem the system will solve, the process it will automate
        2. Technology:
          1. issues, limitations and capabilities of the chosen implementation technology
        3. Testing:
          1. know the testing topics
    2. 5.2 TEST PLANS, ESTIMATES AND STRATEGIES
      1. Glossary
        1. entry criteria
          1. The set of generic and specific conditions for permitting a process to go forward with a defined task, e.g. test phase. The purpose of entry criteria is to prevent a task from starting which would entail more (wasted) effort compared to the effort needed to remove the failed entry criteria
        2. exit criteria
          1. The set of generic and specific conditions, agreed upon with the stakeholders, for permitting a process to be officially completed. The purpose of exit criteria is to prevent a task from being considered completed when there are still outstanding parts of the task which have not been finished. Exit criteria are used to report against and to plan when to stop testing
        3. exploratory testing
          1. An informal test design technique where the tester actively controls the design of the tests as those tests are performed and uses information gained while testing to design new and better tests
        4. test approach
          1. The implementation of the test strategy for a specific project. It typically includes the decisions made that follow based on the (test) project’s goal and the risk assessment carried out, starting points regarding the test process, the test design techniques to be applied, exit criteria and test types to be performed
        5. test level
          1. A group of test activities that are organized and managed together. A test level is linked to the responsibilities in a project. Examples of test levels are component test, integration test, system test and acceptance test
        6. test plan
          1. A document describing the scope, approach, resources and schedule of intended test activities. It identifies amongst others test items, the features to be tested, the testing tasks, who will do each task, degree of tester independence, the test environment, the test design techniques and entry and exit criteria to be used, and the rationale for their choice, and any risks requiring contingency planning. It is a record of the test planning process
        7. test procedure
          1. test procedure specification: A document specifying a sequence of actions for the execution of a test. Also known as test script or manual test script
        8. test strategy
          1. A high-level description of the test levels to be performed and the testing within those levels for an organization or programme (one or more projects)
      2. 5.2.1 The purpose and substance of test plans
        1. Reasons
          1. guides our thinking
          2. forces us to confront the challenges that await us and focus our thinking on important topics
          3. communicating with other members of the project team, testers, peers, managers and other stakeholders
          4. manage change
        2. Master test plan
          1. Test levels
          2. integration test plan
          3. system test plan
          4. hardware test plan
          5. software test plan
        3. planning tasks
          1. purposes
          2. What is in scope and what is out of scope for this testing effort?
          3. What are the test objectives?
          4. What are the important project and product risks?
          5. What constraints affect testing (e.g., budget limitations, hard deadlines, etc.)?
          6. What is most critical for this product and project?
          7. Which aspects of the product are more (or less) testable?
          8. What should be the overall test execution schedule and how should we decide the order in which to run specific tests?
          9. select strategies
          10. split the testing work into various levels
          11. fit your testing work into each level
          12. inter-level coordination
          13. integrate and coordinate all the testing work with the rest of the project
        4. entry criteria factors
          1. Acquisition and supply:
          2. the availability of staff, tools, systems and other materials required
          3. Test items:
          4. the state that the items to be tested must be in to start and to finish testing
          5. Defects:
          6. the number known to be present, the arrival rate, the number predicted to remain, and the number resolved
          7. Tests:
          8. the number run, passed, failed, blocked, skipped, and so forth
          9. Coverage:
          10. the portions of the test basis, the software code or both that have been tested and which have not
          11. Quality:
          12. the status of the important quality characteristics for the system
          13. Money:
          14. the cost of finding the next defect in the current level of testing compared to the cost of finding it in the next level of testing (or in production)
          15. Risk:
          16. the undesirable outcomes that could result from shipping too early (such as latent defects or untested areas) - or too late (such as loss of market share)
      3. 5.2.3 Estimating what testing will involve and what it will cost
        1. phases
          1. planning and control
          2. analysis and design
          3. implementation and execution
          4. evaluating exit criteria and reporting
          5. test closure
        2. risk analysis
          1. identify risks and activities required to reduce them
        3. performance-testing planning
      4. 5.2.4 Estimation techniques
        1. consulting the people - 'bottom-up' estimation
          1. who will do the work
          2. other people with expertise on the tasks to be done
        2. analyzing metrics - 'top-down' estimation
          1. from past projects
          2. from industry data
          3. Approaches
          4. simplest approach
          5. 'How many testers do we typically have per developer on a project?'
          6. classifying the project
          7. in terms of size (small, medium or large)
          8. complexity (simple, moderate or complex)
          9. how long such projects have taken in the past
          10. simple and reliable approach
          11. the average effort per test case in similar past projects
          12. use the estimated number of test cases to estimate the total effort (see the sketch after this list)
          13. Sophisticated approaches
          14. building mathematical models that look at historical or industry averages for certain key parameters
          15. number of tests run by tester per day
          16. number of defects found by tester per day
          17. etc.
          18. tester-to-developer ratio
        3. estimation must be negotiated with management
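        A minimal sketch of the two metrics-based approaches mentioned above (average effort per test case, and the tester-to-developer ratio); every figure is invented for illustration and would in practice be negotiated with management.

        ```python
        # Sketch of two metrics-based estimates; all figures are invented.

        # 'Simple and reliable' approach: average effort per test case from past projects.
        avg_effort_per_test_case_hours = 1.5   # historical average (assumed)
        estimated_test_cases = 400             # from test analysis of the new project
        total_effort_hours = estimated_test_cases * avg_effort_per_test_case_hours
        print(f"estimated test effort: {total_effort_hours:.0f} person-hours")

        # 'Simplest' approach: apply a historical tester-to-developer ratio.
        developers = 10
        testers_per_developer = 0.5            # e.g. 1 tester per 2 developers (assumed)
        print(f"estimated team size: {developers * testers_per_developer:.0f} testers")
        ```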
      5. 5.2.5 Factors affecting test effort
        1. project documentation
        2. Complexity
          1. The difficulty of comprehending and correctly handling the problem the system is being built to solve
          2. The use of innovative technologies, especially those long on hyperbole and short on proven track records
          3. The need for intricate and perhaps multiple test configurations, especially when these rely on the timely arrival of scarce software, hardware and other supplies
          4. The prevalence of stringent security rules, strictly regimented processes or other regulations
          5. The geographical distribution of the team, especially if the team crosses time-zones (as many outsourcing efforts do)
        3. increasing the size of the product leads to increases in the size of the project and the project team
        4. availability of test tools
        5. life cycle of development model
        6. Process maturity, including test process maturity
        7. Time pressure
        8. people factors
          1. skills of the individuals and the team as a whole
          2. the alignment of those skills with the project's needs
          3. solid relationships
          4. reliable execution of agreed-upon commitments and responsibilities
          5. a determination to work together towards a common goal
          6. the stability of the project team
        9. The test results
          1. The delivery of good-quality software at the start of test execution
          2. quick, solid defect fixes during test execution
          3. prevents delays in the test execution process
      6. 5.2.6 Test approaches or strategies
        1. The major types
          1. Analytical:
          2. the risk-based strategy
          3. project documents and stakeholder input
          4. planning
          5. estimating
          6. designing
          7. prioritizing based on risk
          8. the requirements-based strategy
          9. planning
          10. estimating
          11. designing tests
          12. have in common the use of some formal or informal analytical technique, usually during the requirements and design stages of the project
          13. Model-based:
          14. mathematical models
          15. have in common the creation or selection of some formal or informal model for critical system behaviors usually during the requirements and design stages of the project
          16. Methodical:
          17. checklist suggests the major areas of testing to run
          18. an industry standard for software quality, e.g. ISO 9126, used to outline the major test areas
          19. have in common the adherence to a pre-planned, systematized approach
          20. developed in-house
          21. assembled from various concepts developed in-house and gathered from outside
          22. or adapted significantly from outside ideas
          23. may have an early or late point of involvement for testing
          24. design, implement, execute
          25. Process- or standard-compliant:
          26. IEEE 829
          27. one of the agile methodologies e.g. Extreme Programming (XP)
          28. have in common reliance upon an externally developed approach to testing
          29. may have an early or late point of involvement for testing
          30. Dynamic:
          31. lightweight set of testing guidelines
          32. exploratory testing
          33. have in common concentrating on finding as many defects as possible during test execution and adapting to the realities of the system under test
          34. typically emphasize the later stages of testing
          35. the attack-based approach [Whittaker, 2002] and [Whittaker, 2003]
          36. exploratory approach [Kaner et al., 2002]
          37. Consultative or directed:
          38. ask the users or developers of the system to tell you what to test or even rely on them to do the testing
          39. have in common the reliance on a group of non-testers to guide or perform the testing effort
          40. typically emphasize the later stages of testing simply due to the lack of recognition of the value of early testing
          41. Regression-averse:
          42. automate all the tests of system functionality
          43. have in common a set of procedures (usually automated)
          44. may involve automating functional tests prior to release of the function
          45. sometimes the testing is almost entirely focused on testing functions that already have been released
          46. There is no one best way
          47. adopt whatever test approaches
          48. feel free to borrow and blend
        2. Factors to consider
          1. Risks:
          2. For a well-established application that is evolving slowly, regression-averse strategies make sense
          3. For a new application, a risk analysis may reveal different risks if you pick a risk-based analytical strategy
          4. Skills:
          5. which skills your testers possess and lack for strategy execution
          6. A standard-compliant strategy is a smart choice when you lack the time and skills in your team to create your own approach
          7. Objectives:
          8. Testing must satisfy the needs of stakeholders
          9. find as many defects as possible with a minimal amount of up-front time and effort invested - dynamic strategy makes sense
          10. Regulations:
          11. satisfy regulators
          12. methodical test strategy
          13. Product:
          14. weapons systems and contract-development software - synergy with a requirements-based analytical strategy
          15. Business:
          16. can use a legacy system as a model for a new system - can use a model-based strategy
    3. 5.3 TEST PROGRESS MONITORING AND CONTROL
      1. Glossary
        1. defect density
          1. The number of defects identified in a component or system divided by the size of the component or system (expressed in standard measurement terms, e.g. lines-of-code, number of classes or function points)
        2. failure rate
          1. The ratio of the number of failures of a given category to a given unit of measure, e.g. failures per unit of time, failures per number of transactions, failures per number of computer runs
        3. test control
          1. A test management task that deals with developing and applying a set of corrective actions to get a test project on track when monitoring shows a deviation from what was planned
        4. test coverage
          1. The degree, expressed as a percentage, to which a specified coverage item has been exercised by a test suite
        5. test monitoring
          1. A test management task that deals with the activities related to periodically checking the status of a test project. Reports are prepared that compare the actuals to that which was planned
        6. test report
          1. test summary report: A document summarizing testing activities and results. It also contains an evaluation of the corresponding test items against exit criteria
          2. test progress report: A document summarizing testing activities and results, produced at regular intervals, to report progress of testing activities against a baseline (such as the original test plan) and to communicate risks and alternatives requiring a decision to management
      2. 5.3.1 Monitoring the progress of test activities
        1. Test monitoring's purposes
          1. Give the test team and the test manager feedback to guide and improve the testing and the project
          2. Provide the project team with visibility about the test results
          3. Measure the status of the testing, test coverage and test items against the exit criteria to determine whether the test work is done
          4. Gather data for use in estimating future test efforts
        2. small projects
          1. gather test progress monitoring information manually using
          2. documents
          3. spreadsheets
          4. simple databases
        3. large teams, distributed projects and long-term test efforts
          1. data collection is aided by the use of automated tools
        4. Metrics
          1. ultra-reliable software
          2. thousands of source lines of code (KSLOC)
          3. function points (FP)
          4. other metric of code size
          5. common metrics
          6. The extent of completion of test environment preparation
          7. The extent of test coverage achieved, measured against requirements, risks, code, configurations or other areas of interest
          8. The status of the testing (including analysis, design and implementation) compared to various test milestones
          9. The economics of testing, such as the costs and benefits of continuing test execution in terms of finding the next defect or running the next test
        5. use the IEEE 829 test log template
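        On a small project this kind of progress data can be gathered manually; a minimal sketch of the sort of summary a spreadsheet or short script might produce, with invented figures.

        ```python
        # Sketch of manual test-progress tracking on a small project (invented figures).
        planned, run, passed, failed, blocked = 120, 80, 64, 12, 4

        print(f"execution progress: {100 * run / planned:.0f}% of planned tests run")
        print(f"pass rate so far:   {100 * passed / run:.0f}% of executed tests passed")
        print(f"failed: {failed}, blocked: {blocked}")
        ```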
      3. 5.3.2 Reporting test status
        1. variations driven by
          1. the preferences of the testers and stakeholders
          2. the needs and goals of the project
          3. regulatory requirements
          4. time and money constraints
          5. limitations of the tools available
        2. Enables conclusions, recommendations, and decisions about how to guide the project forward
        3. data gathering for test report (should be identified at test planning and preparation periods)
          1. How will you assess the adequacy of the test objectives for a given test level and whether those objectives were achieved?
          2. How will you assess the adequacy of the test approaches taken and whether they support the achievement of the project's testing goals?
          3. How will you assess the effectiveness of the testing with respect to these objectives and approaches?
        4. test summary report (at a key milestone or at the end of a test level)
          1. The IEEE 829 Standard Test Summary Report Template
      4. 5.3.3 Test control
        1. guiding and corrective actions to try to achieve the best possible outcome for the project
    4. 5.4 CONFIGURATION MANAGEMENT
      1. Glossary
        1. configuration management
          1. A discipline applying technical and administrative direction and surveillance to: identify and document the functional and physical characteristics of a configuration item, control changes to those characteristics, record and report change processing and implementation status, and verify compliance with specified requirements
        2. version control
          1. configuration control: An element of configuration management, consisting of the evaluation, co-ordination, approval or disapproval, and implementation of changes to configuration items after formal establishment of their configuration identification
      2. Goals
        1. Determine clearly what the items are that make up the software or system
          1. source code
          2. test scripts
          3. third-party software
          4. hardware
          5. data
          6. both development and test documentation
        2. making sure that these items are managed carefully, thoroughly and attentively throughout the entire project and product life cycle
        3. support the build process
        4. to map what is being tested to the underlying files and components that make it up
          1. report defects against something which is version controlled
        5. transmittal report or release notes
      3. Should be planned during the project planning stage
        1. As the project proceeds
          1. the configuration process and mechanisms must be implemented
          2. the key interfaces to the rest of the development process should be documented
      4. IEEE 829 STANDARD: TEST ITEM TRANSMITTAL REPORT TEMPLATE
    5. 5.5 RISK AND TESTING
      1. Glossary
        1. product risk
          1. A risk directly related to the test object
        2. project risk
          1. A risk related to management and control of the (test) project, e.g. lack of staffing, strict deadlines, changing requirements, etc
        3. risk
          1. A factor that could result in future negative consequences; usually expressed as impact and likelihood
        4. risk-based testing
          1. An approach to testing to reduce the level of product risks and inform stakeholders of their status, starting in the initial stages of a project. It involves the identification of product risks and the use of risk levels to guide the test process
      2. 5.5.1 Risks and levels of risk
        1. Risk is the possibility of a negative or undesirable outcome
      3. 5.5.2 Product risks 'quality risks'
        1. Possibility that the system or software might fail to satisfy some reasonable customer, user, or stakeholder expectation
        2. Risk-based testing
          1. starts early in the project, identifying risks to system quality
          2. guide testing planning, specification, preparation and execution
          3. involves both mitigation
          4. testing to provide opportunities to reduce the likelihood of defects, especially high-impact defects
          5. testing to identify work-arounds to make the defects that do get past us less painful
          6. involves measuring how well we are doing at finding and removing defects in critical areas
          7. involves using risk analysis to identify proactive opportunities to remove or prevent defects through non-testing activities and to help us select which test activities to perform
          8. product risk analysis techniques
          9. a close reading of the requirements specification, design specifications, user documentation and other items
          10. brainstorming with many of the project stakeholders
          11. a sequence of one-on-one or small-group sessions with the business and technology experts in the company
          12. team-based approach that involves the key stakeholders and experts is preferable to a purely document-based approach
          13. risks in the areas
          14. functionality
          15. localization
          16. usability
          17. reliability
          18. performance
          19. supportability
          20. use the quality characteristics and sub-characteristics from ISO 9126
          21. a checklist of typical or past risks that should be considered
          22. review the tests that failed and the bugs that you found in a previous release or a similar product
          23. A five-point scale for rating likelihood and impact tends to work well
          24. Tips
          25. to consider both likelihood and impact
          26. calculate a risk priority number
          27. e.g. a high likelihood (2 on the five-point scale) times a medium impact (3) gives a risk priority number of 6
          28. risk analyses are educated guesses
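          A minimal sketch of a risk priority number calculation on the five-point scale described above (assuming 1 means most likely / most severe, which is consistent with 'high times medium = 2 x 3'); the risk items and ratings are invented.

          ```python
          # Sketch of a quality-risk register with risk priority numbers (invented items).
          # On this scale 1 means most likely / most severe, so a lower
          # risk priority number means a higher-priority risk.
          risks = [
              # (risk, likelihood 1-5, impact 1-5)
              ("payment rejected for a valid card", 2, 1),
              ("slow response under peak load",     2, 3),   # high x medium = 6
              ("typo in a help-text page",          4, 5),
          ]

          for name, likelihood, impact in sorted(risks, key=lambda r: r[1] * r[2]):
              print(f"RPN {likelihood * impact:2d}  {name}")
          ```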
      4. 5.5.3 Project risks
        1. Examples of possible risks
          1. the late delivery of the test items to the test team
          2. availability issues with the test environment
          3. excessive delays in repairing defects found in testing
          4. problems with getting professional system administration support for the test environment.
        2. Four typical options for handling risks:
          1. Mitigate: Take steps in advance to reduce the likelihood (and possibly the impact) of the risk
          2. Contingency: Have a plan in place to reduce the impact should the risk become an outcome
          3. Transfer: Convince some other member of the team or project stakeholder to reduce the likelihood or accept the impact of the risk
          4. Ignore: Do nothing about the risk, which is usually a smart option only when there's little that can be done or when the likelihood and impact are low
        3. typical risks
          1. Logistics or product quality problems that block tests:
          2. These can be mitigated through careful planning, good defect triage and management, and robust test design
          3. Test items that won't install in the test environment:
          4. These can be mitigated through smoke (or acceptance) testing prior to starting test phases or as part of a nightly build or continuous integration. Having a defined uninstall process is a good contingency plan
          5. Excessive change to the product that invalidates test results or requires updates to test cases, expected results and environments:
          6. These can be mitigated through good change-control processes, robust test design and lightweight test documentation. When severe incidents occur, transference of the risk by escalation to management is often in order
          7. Insufficient or unrealistic test environments that yield misleading results:
          8. One option is to transfer the risks to management by explaining the limits on test results obtained in limited environments. Mitigation - sometimes complete alleviation - can be achieved by outsourcing tests such as performance tests that are particularly sensitive to proper test environments
        4. additional risks
          1. Organizational issues such as shortages of people, skills or training, problems with communicating and responding to test results, bad expectations of what testing can achieve and complexity of the project team or organization
          2. Supplier issues such as problems with underlying platforms or hardware, failure to consider testing issues in the contract or failure to properly respond to the issues when they arise
          3. Technical problems related to ambiguous, conflicting or unprioritized requirements, an excessively large number of requirements given other project constraints, high system complexity and quality problems with the design, the code or the tests
        5. test items can also have risks, e.g.:
          1. the test plan will omit tests for a functional area
          2. that the test cases do not exercise the critical areas of the system
      5. 5.5.4 Tying it all together for risk management
        1. assess or analyze risks early in the project
        2. educated guesses
        3. Do not confuse impact with likelihood or vice versa
    6. 5.6 INCIDENT MANAGEMENT
      1. Glossary
        1. incident logging
          1. Recording the details of any incident that occurred, e.g. during testing
      2. 5.6.1 What are incident reports for and how do I write good ones?
        1. causes
          1. the system exhibits questionable behavior
          2. a defect only when the root cause is some problem in the item being tested
          3. misconfiguration or failure of the test environment
          4. corrupted test data
          5. bad tests
          6. invalid expected results
          7. tester mistakes
          8. can also log, report, track, and manage incidents found during development and reviews
        2. defect detection percentage (DDP) metric
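        A sketch of how DDP is commonly calculated (defects found by a level of testing versus defects that escape to later levels or to customers); the figures are invented.

        ```python
        # Sketch of the defect detection percentage (DDP) calculation (invented figures).
        # DDP = defects found by this level of testing /
        #       (defects found by this level + defects found later, e.g. by customers)
        found_in_system_test = 180
        found_after_release = 20

        ddp = 100 * found_in_system_test / (found_in_system_test + found_after_release)
        print(f"DDP for system test: {ddp:.0f}%")   # -> 90%
        ```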
      3. 5.6.2 What goes in an incident report?
      4. 5.6.3 What happens to incident reports after you file them?
  6. 6. Tool support for testing
    1. Glossary
      1. debugging tool
        1. A tool used by programmers to reproduce failures, investigate the state of programs and find the corresponding defect. Debuggers enable programmers to execute programs step by step, to halt a program at any program statement and to set and examine program variables
      2. driver
        1. A software component or test tool that replaces a component that takes care of the control and/or the calling of a component or system
      3. stub
        1. A skeletal or special-purpose implementation of a software component, used to develop or test a component that calls or is otherwise dependent on it. It replaces a called component
      4. probe effect
        1. The effect on the component or system by the measurement instrument when the component or system is being measured, e.g. by a performance testing tool or monitor. For example performance may be slightly worse when performance testing tools are being used
      5. data-driven testing
        1. A scripting technique that stores test input and expected results in a table or spreadsheet, so that a single control script can execute all of the tests in the table. Data driven testing is often used to support the application of test execution tools such as capture/playback tools
      6. keyword-driven testing
        1. A scripting technique that uses data files to contain not only test data and expected results, but also keywords related to the application being tested. The keywords are interpreted by special supporting scripts that are called by the control script for the test
      7. scripting language
        1. A programming language in which executable test scripts are written, used by a test execution tool (e.g. a capture/playback tool)
    2. Types of tools
      1. test execution tools
      2. performance testing tools
      3. static analysis tools
      4. test management tools
    3. 6.1 TYPES OF TEST TOOL
      1. 'probe effect'
        1. 'instrumenting the code'
          1. different coverage tools get a slightly different coverage measure on the same program
        2. 'Heisenbugs'
          1. If the code is run with the debugger, then the bug disappears
      2. 6.1.2 Tool support for management of testing and tests
        1. Also known as:
          1. 'the management of tests'
          2. 'managing the testing process'
        2. Test management tools
          1. Features or characteristics
          2. management of tests
          3. keeping track of the associated data for a given set of tests
          4. knowing which tests need to run in a common environment
          5. number of tests planned, written, run, passed or failed
          6. scheduling of tests to be executed
          7. (manually or by a test execution tool)
          8. management of testing activities
          9. time spent in test design
          10. test execution
          11. whether we are on schedule or on budget
          12. interfaces to other tools, such as:
          13. test execution tools (test running tools)
          14. incident management tools
          15. requirement management tools
          16. configuration management tools
          17. traceability of tests, test results and defects to requirements or other sources
          18. logging test results
          19. summarize results from test execution tools that the test management tool interfaces with
          20. preparing progress reports based on metrics (quantitative analysis), such as:
          21. tests run and tests passed
          22. incidents raised, defects fixed and outstanding
        3. Requirements management tools
          1. Features or characteristics
          2. storing requirement statements
          3. storing information about requirement attributes
          4. checking consistency of requirements
          5. identifying undefined, missing or 'to be defined later' requirements
          6. prioritizing requirements for testing purposes
          7. traceability of requirements to tests and tests to requirements, functions or features
          8. traceability through levels of requirements
          9. interfacing to test management tools
          10. coverage of requirements by a set of tests (sometimes)
        4. Incident management tools
          1. also known as
          2. a defect-tracking tool
          3. a defect-management tool
          4. a bug-tracking tool
          5. a bug-management tool
          6. Features or characteristics
          7. storing information about the attributes of incidents (e.g. severity)
          8. storing attachments (e.g. a screen shot)
          9. prioritizing incidents
          10. assigning actions to people (fix, confirmation test, etc.)
          11. status, e.g.:
          12. open
          13. rejected
          14. duplicate
          15. deferred
          16. ready for confirmation test
          17. closed
          18. reporting of statistics/metrics about incidents, e.g.:
          19. average time open
          20. number of incidents with each status
          21. total number raised
          22. open or closed
        5. Configuration management tools
          1. Features or characteristics
          2. storing information about versions and builds of the software and testware
          3. traceability between software and testware and different versions or variants
          4. keeping track of which versions belong with which configurations, e.g.:
          5. operating systems
          6. libraries
          7. browsers
          8. build and release management
          9. baselining (e.g. all the configuration items that make up a specific release)
          10. access control (checking in and out)
      3. 6.1.3 Tool support for static testing
        1. Review process support tools
          1. a common reference for the review process or processes to use in different situations
          2. storing and sorting review comments
          3. communicating comments to relevant people
          4. coordinating online reviews
          5. keeping track of comments, including defects found, and providing statistical information about them
          6. providing traceability between comments, documents reviewed and related documents
          7. a repository for rules, procedures and checklists to be used in reviews, as well as entry and exit criteria
          8. monitoring the review status (passed, passed with corrections, requires re-review)
          9. collecting metrics and reporting on key factors
        2. Static analysis tools (D) *D - likely to be used by developers
          1. calculate metrics such as cyclomatic complexity or nesting levels (which can help to identify where more testing may be needed due to increased risk)
          2. enforce coding standards
          3. analyze structures and dependencies
          4. aid in code understanding
          5. identify anomalies or defects in the code
        3. Modeling tools (D)
          1. identifying inconsistencies and defects within the model
          2. helping to identify and prioritize areas of the model for testing
          3. predicting system response and behavior under various situations, such as level of load
          4. helping to understand system functions and identify test conditions using a modeling language such as UML
        4. can be used before dynamic tests can be run: earlier defect detection and fixing, and fewer defects left to propagate into later stages
      4. 6.1.4 Tool support for test specification
        1. Test design tools
          1. Types of tools
          2. construct test cases
          3. Computer Aided Software Engineering (CASE)
          4. possible to identify the input fields, including the range of valid values
          5. select combinations of possible factors to be used in testing, to ensure that all pairs of combinations of operating system and browser are tested
          6. 'screen scraper'
          7. coverage tool
          8. which branches have been covered by a set of existing tests
          9. identify the path that needs to be taken in order to cover the untested branches
          10. Features or characteristics
          11. generating test input values from:
          12. requirements
          13. design models (state, data or object)
          14. code
          15. graphical user interfaces
          16. test conditions
          17. generating expected results, if an oracle is available to the tool
          18. This helps the testing to be more thorough (if that is an objective of the test!); if the tool generates an unmanageable number of tests, risk analysis can help decide which ones to run
        2. Test data preparation tools
          1. enable data to be selected from an existing database or created, generated, manipulated and edited for use in tests
          2. The most sophisticated tools can deal with a range of files and database formats
          3. Features or characteristics
          4. extract selected data records from files or databases
          5. 'massage' data records to make them anonymous or not able to be identified with real people (for data protection)
          6. enable records to be sorted or arranged in a different order
          7. generate new records populated with pseudo-random data, or data set up according to some guidelines, e.g. an operational profile
          8. construct a large number of similar records from a template, to give a large set of records for volume tests, for example
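          A minimal sketch of two of the features listed above - anonymizing records and constructing many similar records from a template; the record layout, field names and values are invented.

          ```python
          # Sketch of two test data preparation features; the record layout is invented.
          import random

          def anonymize(record):
              """'Massage' a record so it cannot be linked to a real person."""
              record = dict(record)
              record["name"] = f"Customer {random.randint(1000, 9999)}"
              record["email"] = f"user{random.randint(1000, 9999)}@example.com"
              return record

          def records_from_template(template, count):
              """Construct many similar records from a template, e.g. for a volume test."""
              for i in range(count):
                  record = dict(template)
                  record["account_id"] = 100000 + i
                  record["balance"] = round(random.uniform(0, 5000), 2)
                  yield record

          template = {"account_id": 0, "name": "Template User",
                      "email": "template@example.com", "balance": 0.0}
          volume_test_data = [anonymize(r) for r in records_from_template(template, 1000)]
          print(len(volume_test_data), "records generated;", volume_test_data[0])
          ```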
      5. 6.1.5 Tool support for test execution and logging
        1. Test execution tools
          1. 'test running tool' or 'regression testing tools'
          2. 'capture/playback' tools or 'capture/replay' tools or 'record/playback' tools
          3. difficult to maintain because:
          4. It is closely tied to the flow and interface presented by the GUI
          5. It may rely on the circumstances, state and context of the system at the time the script was recorded
          6. The test input information is 'hard-coded'
          7. using programming skills
          8. - tests can repeat actions (in loops) for different data values
          9. - take different routes depending on the outcome of a test
          10. - can be called from other scripts giving some structure to the set of tests
          11. Features or characteristics
          12. capturing (recording) test inputs while tests are executed manually
          13. storing an expected result in the form of a screen or object to compare to, the next time the test is run
          14. executing tests from stored scripts and optionally data files accessed by the script (if data-driven or keyword-driven scripting is used)
          15. dynamic comparison (while the test is running) of screens, elements, links, controls, objects and values
          16. ability to initiate post-execution comparison
          17. logging results of tests run (pass/fail, differences between expected and actual results)
          18. masking or filtering of subsets of actual and expected results
          19. measuring timings for tests
          20. synchronizing inputs with the application under test
          21. e.g. wait until the application is ready to accept the next input, or insert a fixed delay to represent human interaction speed
          22. sending summary results to a test management tool
        2. Test harness/unit test framework tools (D)
          1. test harness
          2. stubs
          3. drivers
          4. unit test framework tools
          5. Features or characteristics
          6. supplying inputs to the software being tested
          7. receiving outputs generated by the software being tested
          8. executing a set of tests within the framework or using the test harness
          9. framework tools
          10. recording the pass/fail results of each test
          11. storing tests
          12. support for debugging
          13. coverage measurement at code level
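          A minimal sketch of a unit test framework acting as a test harness, with a stub standing in for a called component and the framework itself playing the driver role; the component and its behavior are invented.

          ```python
          # Sketch: unit test framework as harness, with a stub for a called component.
          import unittest

          def exchange_rate_stub(currency):
              """Stub: a special-purpose replacement for the real rate service."""
              return {"EUR": 1.10, "GBP": 1.27}[currency]

          def convert_to_usd(amount, currency, get_rate=exchange_rate_stub):
              """The component under test, which depends on a rate-providing component."""
              return round(amount * get_rate(currency), 2)

          class ConvertToUsdTest(unittest.TestCase):
              """The framework acts as the driver: it supplies inputs, receives
              outputs and records the pass/fail result of each test."""

              def test_eur_conversion(self):
                  self.assertEqual(convert_to_usd(100, "EUR"), 110.00)

              def test_gbp_conversion(self):
                  self.assertEqual(convert_to_usd(10, "GBP"), 12.70)

          if __name__ == "__main__":
              unittest.main()
          ```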
        3. Test comparators
          1. Dynamic comparison
          2. Integrated with test execution tool
          3. Post-execution comparison
          4. 'stand-alone' tool
          5. comparing a large volume of data
          6. comparing a large set of records from a database with the expected content of those records
          7. Features or characteristics
          8. dynamic comparison of transient events that occur during test execution
          9. post-execution comparison of stored data, e.g. in files or databases
          10. masking or filtering of subsets of actual and expected results
        4. Coverage measurement tools (D)
          1. component testing level coverage items
          2. lines of code
          3. code statements
          4. decision outcomes
          5. component integration testing level coverage items
          6. a call to a function or module
          7. system testing level
          8. acceptance testing levels
          9. Features or characteristics
          10. identifying coverage items (instrumenting the code)
          11. calculating the percentage of coverage items that were exercised by a suite of tests
          12. reporting coverage items that have not been exercised as yet
          13. identifying test inputs to exercise as yet uncovered items (test design tool functionality)
          14. generating stubs and drivers (if part of a unit test framework)
        5. Security tools
          1. may focus on:
          2. the network
          3. the support software
          4. the application code
          5. the underlying database
          6. Features or characteristics
          7. identifying viruses
          8. detecting intrusions such as denial of service attacks
          9. simulating various types of external attacks
          10. probing for open ports or other externally visible points of attack
          11. identifying weaknesses in password files and passwords
          12. security checks during operation
      6. 6.1.6 Tool support for performance and monitoring
        1. Dynamic analysis tools (D)
          1. detecting memory leaks
          2. identifying pointer arithmetic errors such as null pointers
          3. identifying time dependencies
          4. finding broken links on a website ('web spider' tools)
        2. Performance-testing, load-testing and stress-testing tools
          1. Performance-testing tools
          2. testing at system level to see whether or not the system will stand up to a high volume of usage
          3. 'load' test
          4. checks that the system can cope with its expected number of transactions
          5. 'volume' test
          6. checks that the system can cope with a large amount of data
          7. 'stress' test
          8. beyond the normal expected usage of the system
          9. Features or characteristics
          10. generating a load on the system to be tested
          11. measuring the timing of specific transactions as the load on the system varies
          12. measuring average response times
          13. producing graphs or charts of responses over time
        3. Monitoring tools
          1. For:
          2. servers
          3. networks
          4. databases
          5. security
          6. performance
          7. website and internet usage
          8. applications
          9. Features or characteristics
          10. identifying problems and sending an alert message to the administrator
          11. logging real-time and historical information
          12. finding optimal settings
          13. monitoring the number of users on a network
          14. monitoring network traffic
      7. 6.1.7 Tool support for specific application areas (K1)
        1. web-based performance-testing tools
        2. performance-testing tools for back-office systems
        3. static analysis tools for specific development platforms and programming languages
        4. dynamic analysis tools that focus on security issues
        5. dynamic analysis tools for embedded systems
      8. 6.1.8 Tool support using other tools
        1. word processor
        2. spreadsheet
        3. SQL
        4. debugging tools
    4. 6.2 EFFECTIVE USE OF TOOLS: POTENTIAL BENEFITS AND RISKS
      1. 6.2.1 Potential benefits of using tools
        1. Benefits
          1. reduction of repetitive work
          2. greater consistency and repeatability
          3. objective assessment
          4. ease of access to information about tests or testing
        2. Examples of repetitive work
          1. running regression tests
          2. entering the same test data
          3. checking against coding standards
          4. creating a specific test database
        3. Examples of beneficial usage of tools
          1. to confirm the correctness of a fix to a defect (a debugging tool or test execution tool)
          2. entering test inputs (a test execution tool)
          3. generating tests from requirements (a test design tool or possibly a requirements management tool)
          4. assessing the cyclomatic complexity or nesting levels of a component (a static analysis tool)
          5. coverage (coverage measurement tool)
          6. system behavior (monitoring tools)
          7. incident statistics (test management tool)
          8. statistics and graphs about test progress (test execution or test management tool)
          9. incident rates (incident management or test management tool)
          10. performance (performance testing tool)
      2. 6.2.2 Risks of using tools
        1. Risks include:
          1. unrealistic expectations for the tool
          2. underestimating the time, cost and effort for the initial introduction of a tool
          3. underestimating the time and effort needed to achieve significant and continuing benefits from the tool
          4. underestimating the effort required to maintain the test assets generated by the tool
          5. over-reliance on the tool
        2. Two other important factors are:
          1. the skill needed to create good tests
          2. the skill needed to use the tools well, depending on the type of tool
      3. 6.2.3 Special considerations for some types of tools
        1. Test execution tools
          1. levels of scripting
          2. linear scripts (which could be created manually or captured by recording a manual test)
          3. structured scripts (using selection and iteration programming structures)
          4. shared scripts (where a script can be called by other scripts so can be re-used; these also require a formal script library under configuration management)
          5. data-driven scripts (where test data is in a file or spreadsheet to be read by a control script)
          6. keyword-driven scripts (where all of the information about the test is stored in a file or spreadsheet, with a number of control scripts that implement the tests described in the file) - see the data-driven sketch after this list
          7. Reasons why a captured test (a linear script) is not a good solution:
          8. The script doesn't know what the expected result is until you program it in - it only stores inputs that have been recorded, not test cases
          9. A small change to the software may invalidate dozens or hundreds of scripts
          10. The recorded script can only cope with exactly the same conditions as when it was recorded. Unexpected events (e.g. a file that already exists) will not be interpreted correctly by the tool
          11. when capturing test inputs is useful
          12. exploratory testing
          13. running unscripted tests with experienced business users
          14. short term, where the context remains valid
          15. to log everything that is done, as an audit trail
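          A minimal sketch of a data-driven control script as described above: inputs and expected results live in a table and one script executes every row; the function under test and the data are invented. A keyword-driven variant would add a keyword/action column interpreted by supporting scripts.

          ```python
          # Sketch of a data-driven control script; function under test and data invented.
          import csv
          import io

          def discounted_price(total):
              """Hypothetical function under test."""
              return round(total * 0.9, 2) if total > 100 else total

          # In practice the table would be a spreadsheet or CSV file maintained by testers.
          test_table = io.StringIO(
              "test_id,total,expected\n"
              "T1,150,135.0\n"
              "T2,80,80\n"
              "T3,100,100\n"
          )

          for row in csv.DictReader(test_table):
              actual = discounted_price(float(row["total"]))
              verdict = "PASS" if actual == float(row["expected"]) else "FAIL"
              print(f"{row['test_id']}: expected {row['expected']}, got {actual} -> {verdict}")
          ```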
        2. Performance testing tools
          1. Examples
          2. the transaction throughput
          3. the degree of accuracy of a given computation
          4. the computer resources being used for a given level of transactions
          5. the time taken for certain transactions
          6. the number of users that can use the system at once
          7. Issues
          8. the design of the load to be generated by the tool (e.g. random input or according to user profiles)
          9. timing aspects (e.g. inserting delays to make simulated user input more realistic)
          10. the length of the test and what to do if a test stops prematurely
          11. narrowing down the location of a bottleneck
          12. exactly what aspects to measure (e.g. user interaction level or server level)
          13. how to present the information gathered
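          A minimal sketch of the two core facilities of a performance-testing tool - load generation and transaction timing measurement; the 'transaction' is a stand-in function and all figures are invented.

          ```python
          # Sketch: generate load with concurrent virtual users and measure timings.
          import random
          import statistics
          import time
          from concurrent.futures import ThreadPoolExecutor

          def transaction():
              """Hypothetical transaction against the system under test."""
              start = time.perf_counter()
              time.sleep(random.uniform(0.01, 0.05))   # stands in for real work
              return time.perf_counter() - start

          def run_load(virtual_users, transactions_per_user):
              with ThreadPoolExecutor(max_workers=virtual_users) as pool:
                  futures = [pool.submit(transaction)
                             for _ in range(virtual_users * transactions_per_user)]
                  return [f.result() for f in futures]

          timings = run_load(virtual_users=10, transactions_per_user=20)
          print(f"transactions: {len(timings)}")
          print(f"average response time: {statistics.mean(timings) * 1000:.1f} ms")
          print(f"95th percentile:       {sorted(timings)[int(0.95 * len(timings))] * 1000:.1f} ms")
          ```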
        3. Static analysis tools
        4. Test management tools
    5. 6.3 INTRODUCING A TOOL INTO AN ORGANIZATION
      1. 6.3.1 Main principles
        1. Factors in selecting a tool:
          1. assessment of the organization's maturity (e.g. readiness for change)
          2. identification of the areas within the organization where tool support will help to improve testing processes
          3. evaluation of tools against clear requirements and objective criteria
          4. proof-of-concept to see whether the product works as desired and meets the requirements and objectives defined for it
          5. evaluation of the vendor (training, support and other commercial aspects) or open-source network of support
          6. identifying and planning internal implementation (including coaching and mentoring for those new to the use of the tool)
      2. 6.3.2 Pilot project
        1. should experiment with different ways of using the tool
          1. different settings for a static analysis tool
          2. different reports from a test management tool
          3. different scripting and comparison techniques for a test execution tool
          4. different load profiles for a performance-testing tool
        2. The objectives for a pilot project for a new tool are:
          1. to learn more about the tool (more detail, more depth)
          2. to see how the tool would fit with existing processes or documentation, how those would need to change to work well with the tool and how to use the tool to streamline existing processes
          3. to decide on standard ways of using the tool that will work for all potential users, e.g.:
          4. naming conventions
          5. creation of libraries
          6. defining modularity, where different elements will be stored
          7. how modularity and the tool itself will be maintained
          8. to evaluate the pilot project against its objectives (have the benefits been achieved at reasonable cost?)
      3. 6.3.3 Success factors
        1. incremental roll-out (after the pilot) to the rest of the organization
        2. adapting and improving processes, testware and tool artefacts to get the best fit and balance between them and the use of the tool
        3. providing adequate training, coaching and mentoring of new users
        4. defining and communicating guidelines for the use of the tool, based on what was learned in the pilot
        5. implementing a continuous improvement mechanism as tool use spreads through more of the organization
        6. monitoring the use of the tool and the benefits achieved and adapting the use of the tool to take account of what is learned
    6. CHAPTER REVIEW
      1. Section 6.1
        1. Tools that support the management of testing and tests:
          1. test management tool
          2. A tool that provides support to the test management and control part of a test process. It often has several capabilities, such as testware management, scheduling of tests, the logging of results, progress tracking, incident management and test reporting
          3. requirements management tool
          4. A tool that supports the recording of requirements, requirements attributes (e.g. priority, knowledge responsible) and annotation, and facilitates traceability through layers of requirements and requirements change management. Some requirements management tools also provide facilities for static analysis, such as consistency checking and violations to pre-defined requirements rules
          5. incident management tool
          6. A tool that facilitates the recording and status tracking of incidents. They often have workflow-oriented facilities to track and control the allocation, correction and re-testing of incidents and provide reporting facilities
          7. configuration management tool
          8. A tool that provides support for the identification and control of configuration items, their status over changes and versions, and the release of baselines consisting of configuration items
        2. Tools that support static testing:
          1. review process support tool
          2. static analysis tool (D)
          3. static analyzer: A tool that carries out static analysis
          4. modeling tool (D)
          5. A tool that supports the creation, amendment and verification of models of the software or system
        3. Tools that support test specification:
          1. test design tool
          2. A tool that supports the test design activity by generating test inputs from a specification that may be held in a CASE tool repository, e.g. requirements management tool, from specified test conditions held in the tool itself, or from code
          3. test data preparation tool
          4. A type of test tool that enables data to be selected from existing databases or created, generated, manipulated and edited for use in testing
        4. Tools that support test execution and logging:
          1. test execution tool
          2. A type of test tool that is able to execute other software using an automated test script, e.g. capture/playback
          3. test harness and unit test framework tool (D)
          4. test harness: A test environment comprised of stubs and drivers needed to execute a test.
          5. unit test framework: A tool that provides an environment for unit or component testing in which a component can be tested in isolation or with suitable stubs and drivers. It also provides other support for the developer, such as debugging capabilities
          6. test comparator
          7. A test tool to perform automated test comparison of actual results with expected results
          8. coverage measurement tool (D)
          9. coverage tool: A tool that provides objective measures of what structural elements, e.g. statements, branches have been exercised by a test suite.
          10. security tool
          11. A tool that supports operational security
        5. Tools that support performance and monitoring:
          1. dynamic analysis tool
          2. A tool that provides run-time information on the state of the software code. These tools are most commonly used to identify unassigned pointers, check pointer arithmetic and to monitor the allocation, use and de-allocation of memory and to flag memory leaks
          3. performance-testing, load-testing and stress-testing tool
          4. performance testing tool: A tool to support performance testing that usually has two main facilities: load generation and test transaction measurement. Load generation can simulate either multiple users or high volumes of input data. During execution, response time measurements are taken from selected transactions and these are logged. Performance testing tools normally provide reports based on test logs and graphs of load against response times
          5. load testing tool: See performance testing tool.
          6. stress testing tool: A tool that supports stress testing.
          7. monitoring tool
          8. monitor: A software tool or hardware device that runs concurrently with the component or system under test and supervises, records and/or analyses the behavior of the component or system
      2. Section 6.3
        1. main principles of introducing a tool into an organization, e.g.:
          1. assessing organizational maturity
          2. clear requirements and objective criteria
          3. proof-of-concept
          4. vendor evaluation
          5. coaching and mentoring
        2. goals of a proof-of-concept or piloting phase for tool evaluation. e.g.:
          1. learn about the tool
          2. assess fit with current practices
          3. decide on standards
          4. assess benefits
        3. Factors important for success, e.g.:
          1. incremental roll-out
          2. adapting processes
          3. training and coaching
          4. defining usage guidelines
          5. learning lessons
          6. monitoring benefits
  7. Books
    1. Myers
    2. Copeland
    3. Kaner
    4. Rex Black