SE206 - Unit 6: Testing in Agile

Glossary of Terms

Software Testing: The process of evaluating a system or its component(s) to determine whether it satisfies the specified requirements. It involves executing a system to identify gaps, errors, or missing requirements. (Per ANSI/IEEE 1059: a process of analyzing a software item to detect differences between existing and required conditions and to evaluate its features.)

Defect: A difference between existing and required conditions in a software item, identified during testing.

Unit Testing: Testing conducted by developers on individual units or components of source code to isolate each part of the program and show that each part is correct in terms of requirements and functionality.

Verification: The process of evaluating software to determine whether the products of a given development phase satisfy the conditions imposed at the start of that phase. It addresses the question "Are you building it right?" It involves static activities such as reviews and inspections, and is typically done by developers.

Validation: The process of evaluating software during or at the end of the development process to determine whether it satisfies specified requirements. It addresses the question "Are you building the right thing?" It involves dynamic activities, such as executing the software, and is typically done by testers.

Static Testing: Testing of a software development artifact (e.g., requirements, design, or code) without executing it. (Implied by verification activities such as reviews, walkthroughs, and inspections.)

Dynamic Testing: Testing that involves executing the software of a component or system. (Implied by validation activities.)

Range Check: A validation method, suitable for date and numeric data types, that checks whether data falls within given values. Example: a student's score should be >0 but <100.

Length Check: A validation method, suitable for text and date data types, that checks whether the input contains the required number of characters. Example: a telephone number should be 11 digits.

Type Check: A validation method that checks that the input data does not contain characters invalid for the expected data type (e.g., Integer, String, Boolean). Example: a username should not contain numbers.

Format Check: A validation method, suitable for date and text data types, that checks whether data follows a specific pattern. Example: a student ID starts with 3 letters followed by 4 digits (SEM2021).

Limit Check: A validation method, suitable for numeric data types, similar to a range check except that only one boundary is checked. Example: a student's score must be >0.

Presence Check: A validation method, often applied to primary keys, that checks that data is actually present and has not been missed out. Example: a student's ID must be present.

Lookup Check: A validation method, suitable for a list of values, in which an entered data item is compared against a list of valid entries (often stored in a database table). Example: a Country field must be one of the predefined countries.

Check Digit: A validation method, often used for barcodes or ISBNs, in which an extra digit, calculated from the other digits of a number and appended to its end, is used to verify accuracy.

Double Entry: A verification method in which data is entered twice, sometimes by different operators, and the computer system compares both entries for discrepancies.

Screen/Visual Check: A verification method involving a manual check by the user entering the data, who confirms its correctness on the screen, often against a paper document or from their own knowledge.

Black-Box Testing: A software testing method in which the internal structure/design/implementation of the item being tested is NOT known to the tester. Testing is based on inputs and outputs.

White-Box Testing: A software testing method in which the internal structure/design/implementation of the item being tested IS known to the tester. Also called glass-box or open-box testing.

Grey-Box Testing: A software testing method that combines black-box and white-box testing. The tester has limited knowledge of the internal workings, possibly with access to design documents or databases.

Functional Testing: A type of black-box testing based on the specifications of the software. The application is tested by providing input and examining the output to ensure it conforms to the intended functionality and specified requirements.

Integration Testing: The phase of software testing in which individual software modules are combined and tested as a group. It can be done bottom-up (starting with unit-tested modules) or top-down (starting with high-level modules).

System Testing: Testing conducted on a complete, integrated system to evaluate its compliance with specified requirements and quality standards. Tests the system as a whole.

Regression Testing: Selective re-testing of a system or component to verify that modifications have not caused unintended effects and that the system or component still complies with its specified requirements. Ensures bug fixes do not introduce new faults.

Alpha Testing: The first stage of testing, typically performed in-house by developer and QA teams. It often combines unit, integration, and system testing. Aspects tested include spelling, broken links, and performance on low-specification machines.

Beta Testing: Testing performed after alpha testing, in which a sample of the intended audience tests the application in a real-world environment. Also known as pre-release testing.

Non-Functional Testing: Testing of requirements that are non-functional in nature but important, such as performance, security, usability, and portability.

Key Concepts from Unit 6

What is Software Testing?

Testing is a critical process in software development. Its primary goals are:

  • To evaluate a system or its components.
  • To determine if the system satisfies specified requirements.
  • To identify any gaps, errors, defects, or missing requirements compared to actual requirements.
  • (ANSI/IEEE 1059) To analyze a software item to detect differences between existing and required conditions and to evaluate its features.

Who Performs Testing and When?

  • Roles: Testing involves various stakeholders. Dedicated testing teams (Software Testers, QA Engineers) evaluate developed software. Developers also conduct Unit Testing on their code. Project Managers and End Users can also be involved.
  • When to Start: An early start to testing (Shift-Left) is crucial. It reduces rework cost and time. In SDLC, testing can begin from the Requirements Gathering phase and continue through deployment.
  • Agile vs. Waterfall: In Waterfall, formal testing is a distinct phase. In Agile/incremental models, testing is performed at the end of every increment/iteration, with the whole application tested at the end. Testing is continuous.
  • When to Stop: Exhaustive (100%) testing is rarely achievable. Stopping criteria include: deadlines, test case completion, functional/code coverage targets, bug rate stabilization (no outstanding high-priority bugs), and management decisions.

Verification vs. Validation

These two terms are often confused but represent distinct quality assurance activities:

  • Verification ("Are you building it right?"):
    • Ensures the software system meets all specified functionality and conforms to design specifications.
    • Typically occurs first and involves checking documentation, code, etc.
    • Primarily done by developers.
    • Involves static activities (reviews, walkthroughs, inspections) without executing the code.
    • An objective process.
  • Validation ("Are you building the right thing?"):
    • Ensures that the developed functionalities meet the intended behavior and user needs.
    • Occurs after verification and involves checking the overall product.
    • Primarily done by testers.
    • Involves dynamic activities (executing the software against requirements).
    • A subjective process that involves judgments about how well the software works.

Validation Methods (Data Input Checks)

Techniques to ensure data entered into the system is sensible and acceptable (a code sketch follows the list):

  • Range Check: Data within specified upper/lower limits (e.g., age 18-65). Considers Normal, Abnormal, Extreme data.
  • Length Check: Data has a specific number of characters (e.g., password 8-16 characters).
  • Type Check: Data conforms to the expected data type (e.g., numeric field shouldn't accept letters).
  • Format Check: Data adheres to a predefined pattern (e.g., email format, date dd/mm/yyyy).
  • Limit Check: Data is above or below a single boundary (e.g., quantity > 0).
  • Presence Check: Ensures required data is not missing (e.g., mandatory fields).
  • Lookup Check: Data matches an entry in a predefined list of valid values (e.g., selecting a state from a dropdown).
  • Check Digit: An extra digit used to verify the accuracy of a numerical identifier (e.g., ISBN, credit card numbers).
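
The sketch below illustrates how these checks might be implemented in Python. It reuses the examples from the glossary (score, telephone number, student ID); the function names and the `VALID_COUNTRIES` table are hypothetical, and the check-digit routine uses the standard ISBN-10 weighted-sum rule as one concrete instance.

```python
import re

def range_check(score) -> bool:
    """Range check: the score must lie between both boundaries."""
    return 0 < score < 100

def length_check(phone: str) -> bool:
    """Length check: a telephone number must be exactly 11 digits."""
    return len(phone) == 11 and phone.isdigit()

def type_check(username: str) -> bool:
    """Type check: a username must not contain numbers."""
    return not any(ch.isdigit() for ch in username)

def format_check(student_id: str) -> bool:
    """Format check: 3 letters followed by 4 digits, e.g. SEM2021."""
    return re.fullmatch(r"[A-Za-z]{3}\d{4}", student_id) is not None

def limit_check(score) -> bool:
    """Limit check: only the lower boundary is checked."""
    return score > 0

def presence_check(value) -> bool:
    """Presence check: the field must not be missing or blank."""
    return value is not None and str(value).strip() != ""

VALID_COUNTRIES = {"Egypt", "Germany", "Japan"}  # hypothetical lookup table

def lookup_check(country: str) -> bool:
    """Lookup check: the value must appear in a predefined list."""
    return country in VALID_COUNTRIES

def isbn10_check_digit_valid(isbn: str) -> bool:
    """Check digit: validate an ISBN-10 via its weighted-sum rule."""
    digits = isbn.replace("-", "")
    if len(digits) != 10:
        return False
    total = 0
    for i, ch in enumerate(digits):
        if ch in "Xx" and i == 9:
            value = 10              # 'X' stands for 10 in the last position
        elif ch.isdigit():
            value = int(ch)
        else:
            return False
        total += (10 - i) * value
    return total % 11 == 0          # valid ISBN-10s sum to a multiple of 11
```

For example, `format_check("SEM2021")` passes while `format_check("SE2021")` fails the pattern, and `isbn10_check_digit_valid("0-306-40615-2")` returns True.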

Verification Methods

Techniques to ensure data is accurately copied or transferred (a short sketch follows):

  • Double Entry: Data is entered twice (often by different operators) and the system compares the two entries for consistency.
  • Screen/Visual Check: A manual check where the user visually confirms the correctness of entered data as displayed on the screen, possibly against a source document.
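
A minimal illustration of double entry, assuming a simple console flow (the field name is arbitrary):

```python
def double_entry_check(first_entry: str, second_entry: str) -> bool:
    """Double entry: the system compares two independently typed values."""
    return first_entry == second_entry

# The same value is requested twice, sometimes from different operators.
email_first = input("Enter e-mail address: ")
email_second = input("Re-enter e-mail address: ")
if not double_entry_check(email_first, email_second):
    print("Entries do not match; please re-enter the data.")
```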

Software Testing Methods (Approaches)

Different approaches, based on the tester's knowledge of the system's internal workings (a combined example follows the list):

  • Black-Box Testing: Tester has no knowledge of the internal code structure or logic. Testing is based on inputs and observing outputs against requirements. Focuses on "what" the system does.
  • White-Box Testing (Glass-Box/Open-Box): Tester has detailed knowledge of the internal logic, code structure, and design. Tests are designed to cover specific code paths, branches, and conditions. Focuses on "how" the system works internally.
  • Grey-Box Testing: Tester has partial or limited knowledge of the internal workings, perhaps access to design documents or databases. Combines aspects of both black-box and white-box testing to design more effective test cases.
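
To make the contrast concrete, the sketch below tests the same hypothetical `shipping_cost` function in both styles: the black-box cases are derived purely from a stated specification, while the white-box cases are chosen to cover each branch of the code. All names are illustrative.

```python
import unittest

def shipping_cost(weight_kg: float) -> float:
    """Hypothetical function under test: flat rate below 5 kg, per-kg above."""
    if weight_kg <= 0:
        raise ValueError("weight must be positive")
    if weight_kg < 5:
        return 10.0
    return 10.0 + (weight_kg - 5) * 2.0

class BlackBoxTests(unittest.TestCase):
    """Black-box: derived only from the spec (inputs vs expected outputs)."""
    def test_small_parcel_flat_rate(self):
        self.assertEqual(shipping_cost(2), 10.0)

    def test_invalid_weight_rejected(self):
        with self.assertRaises(ValueError):
            shipping_cost(0)

class WhiteBoxTests(unittest.TestCase):
    """White-box: written with the code in view, covering each branch."""
    def test_boundary_branch_at_five_kg(self):
        self.assertEqual(shipping_cost(5), 10.0)   # exercises the >= 5 branch

    def test_per_kg_branch(self):
        self.assertEqual(shipping_cost(7), 14.0)   # 10 + 2 kg * 2.0

if __name__ == "__main__":
    unittest.main()
```

A grey-box tester might, for instance, know from a design document that 5 kg is the pricing boundary and target that value without ever seeing the code.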

Testing Levels

Testing is performed at different stages and scopes:

1. Functional Testing

A type of black-box testing focusing on the functional requirements and specifications. It verifies that the system performs its intended functions correctly. An illustrative test sketch follows the list of levels below.

  • Unit Testing: Testing individual components or modules by developers.
  • Integration Testing: Testing the interfaces and interactions between integrated components.
    • Bottom-up: Starts with lower-level modules and integrates upwards.
    • Top-down: Starts with higher-level modules and integrates downwards, often using stubs.
  • System Testing: Testing the complete and integrated system as a whole against specified requirements and quality standards. Performed by a specialized team.
  • Regression Testing: Re-testing after modifications or bug fixes to ensure no new defects have been introduced and existing functionality remains intact.
  • Alpha Testing: In-house testing by developer/QA teams before release. Combines unit, integration, and system testing. Focuses on basic usability, functionality, and stability.
  • Beta Testing (Pre-release Testing): External testing by a sample of real users in their environment before the official release. Gathers feedback on usability, performance, and overall satisfaction.
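
As an illustration of the first two levels, the sketch below unit-tests a hypothetical high-level `GradeService` while a stub (here built with `unittest.mock.Mock`) stands in for a lower-level repository that has not yet been integrated, as in top-down integration. All class and method names are assumptions for the example.

```python
import unittest
from unittest.mock import Mock

class GradeService:
    """Hypothetical high-level module that depends on a lower-level repository."""
    def __init__(self, repository):
        self.repository = repository

    def passed(self, student_id: str) -> bool:
        # Pass mark of 50 is an assumption made for this example.
        return self.repository.get_score(student_id) >= 50

class TopDownIntegrationTest(unittest.TestCase):
    def test_high_level_module_with_stubbed_repository(self):
        # The real repository is not integrated yet; a stub stands in for it.
        stub_repository = Mock()
        stub_repository.get_score.return_value = 72
        service = GradeService(stub_repository)
        self.assertTrue(service.passed("SEM2021"))

if __name__ == "__main__":
    unittest.main()
```

Re-running this same suite after every modification or bug fix is, in effect, regression testing.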

2. Non-Functional Testing

Tests aspects of the software related not to specific functions but to quality attributes such as the following (a minimal performance sketch follows the list):

  • Performance Testing: Evaluates responsiveness, stability, and scalability under a particular workload.
  • Load Testing: Assesses system behavior under normal and peak load conditions.
  • Stress Testing: Evaluates system behavior beyond normal operating conditions to find its breaking point.
  • Usability Testing: Assesses how easy and intuitive the software is to use for end-users.
  • Security Testing: Identifies vulnerabilities and ensures data protection and system integrity.
  • Portability Testing: Checks if the software can be easily transferred from one environment to another.
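
Dedicated tools (e.g., JMeter or Locust) are normally used for performance and load testing; the fragment below is only a minimal sketch of the idea, with a placeholder `process_order` function and an assumed 200 ms response-time budget.

```python
import time

def process_order(order: dict) -> bool:
    """Placeholder for the system under test; real logic would live elsewhere."""
    return order.get("items", 0) > 0

def test_average_response_time():
    """Crude performance probe: the average call must stay under the budget."""
    start = time.perf_counter()
    for _ in range(1_000):  # simulate 1,000 back-to-back requests (load)
        process_order({"items": 3})
    average = (time.perf_counter() - start) / 1_000
    assert average < 0.2, f"average response time {average:.4f}s exceeds 200 ms"

if __name__ == "__main__":
    test_average_response_time()
    print("performance budget met")
```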

Essay Questions

Fill in the Blank Questions

True/False Questions

Multiple Choice Questions