Key Concepts from Unit 5
Evaluating XP Implementation
Evaluating the effectiveness of XP involves assessing its core practices:
- Refactoring: Improves code efficiency and understandability but demands discipline; automated tests make it easy to check whether a change has broken existing functionality.
- Test-First Development: Clarifies requirements, ensures testability, but requires effort to write and maintain tests.
- Pair Programming: Improves code quality, knowledge sharing, and serves as informal review, though requires compatible pairs.
- The XP Release Cycle: Involves selecting stories, breaking them into tasks, development/integration/testing, evaluation, and release.
The GQM (Goal-Question-Metric) method can be applied to measure the implementation success of specific XP practices by defining goals (e.g., Improve code quality via Refactoring), questions (e.g., How often is code refactored?), and metrics (e.g., % of code changed per iteration).
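As a minimal illustration of how such a metric might be computed (this sketch is not from the slides; the per-iteration figures and the idea of collecting them from version control are assumptions):

```python
# A GQM-style measurement sketch (illustrative, not from the unit material).
# Goal:     improve code quality via refactoring.
# Question: how much of the codebase is refactored each iteration?
# Metric:   percentage of lines changed per iteration.

def percent_changed(lines_changed: int, total_lines: int) -> float:
    """Metric: share of the codebase touched in one iteration."""
    if total_lines == 0:
        return 0.0
    return 100.0 * lines_changed / total_lines

# Hypothetical per-iteration figures (e.g., collected from version control).
iterations = [(1, 1200, 15000), (2, 900, 15400), (3, 400, 15600)]
for number, changed, total in iterations:
    print(f"Iteration {number}: {percent_changed(changed, total):.1f}% of code changed")
```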
XP Rules Summary
XP practices translate into operational rules:
- Planning Game: Collaborative planning using story cards.
- Small Releases: Frequent delivery of testable features (e.g., 3-4 weeks).
- Metaphor: A shared story or analogy explaining the system's overall design/operation.
- Simple Design: Meet current needs with the simplest possible design.
- Testing: Programmers write unit tests, customers write functional/acceptance tests (see the test-first sketch after this list).
- Refactoring: Continuously improve code structure without changing behavior.
- Pair Programming: Two developers, one computer.
- Collective Code Ownership: Anyone can improve any code.
- Continuous Integration: Integrate and test code many times a day.
- 40-Hour Week: Maintain a sustainable pace.
- On-Site Customer: Customer available full-time to answer questions and provide feedback.
- Coding Standards: Team agrees on common coding rules.
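To make the Testing and Refactoring rules concrete, here is a minimal test-first sketch (illustrative only; the `apply_discount` function and its discount policy are invented). The unit tests are written before the implementation, and later refactoring is safe because the tests must keep passing:

```python
import unittest

# Step 1 (test-first): the tests are written before the implementation
# and pin down the expected behavior.
class TestDiscount(unittest.TestCase):
    def test_ten_percent_discount_over_100(self):
        self.assertEqual(apply_discount(200.0), 180.0)

    def test_no_discount_at_or_below_100(self):
        self.assertEqual(apply_discount(80.0), 80.0)

# Step 2: the simplest implementation that makes the tests pass.
# Step 3 (refactoring): the body can later be restructured freely,
# because the tests confirm the behavior has not changed.
def apply_discount(total: float) -> float:
    return total * 0.9 if total > 100.0 else total

if __name__ == "__main__":
    unittest.main()
```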
Agile in E-Government
E-Government involves using information and communication technology (ICT) to deliver government services.
- Challenges: Digital Divide (unequal access), financial barriers, integrating legacy systems.
- Applying Agile: Agile methods can help E-Gov projects adapt to changing policies, user needs, and technology.
- 4-DAT Framework: The 4-Dimensional Analytical Tool, used to analyze project dimensions (Scope, Agile Values, Process, Agility) and help select suitable Agile methods (like XP or Scrum) for specific E-Gov contexts.
- Agile SPM: Agile software project management enables flexible requirement definition and management, suited to evolving E-Gov services.
Requirement Engineering (RE) in Agile
RE is the process of defining, documenting, and maintaining requirements.
Traditional RE Process:
- Feasibility Study: Assess practicality (Technical, Operational, Economic).
- Elicitation and Analysis: Gather requirements from stakeholders.
- Specification: Document requirements (e.g., in an SRS).
- Validation: Check if specified requirements meet user needs.
- Management: Handle changes to requirements.
RE in Agile: The same steps exist conceptually, but they are performed iteratively and continuously. Elicitation, specification, and validation happen frequently within iterations, often using User Stories and direct customer feedback instead of a large upfront SRS. Agile breaks work into short iterations (1-4 weeks), allowing flexibility and minimizing risk. Each iteration typically involves a full cycle: planning, analysis, design, coding, and testing.
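Since Agile RE leans on User Stories rather than a large upfront SRS, a minimal sketch of how a team might record stories for an iteration may help (the `UserStory` fields and the backlog entries are purely illustrative):

```python
from dataclasses import dataclass

@dataclass
class UserStory:
    """One requirement captured as an Agile user story (illustrative fields)."""
    role: str           # "As a <role>..."
    goal: str           # "...I want <goal>..."
    benefit: str        # "...so that <benefit>."
    estimate_days: int  # rough effort estimate used in iteration planning

    def card(self) -> str:
        return f"As a {self.role}, I want {self.goal}, so that {self.benefit}."

# Hypothetical backlog for one short iteration.
backlog = [
    UserStory("shopper", "to add items to a cart", "I can buy several at once", 3),
    UserStory("admin", "to export monthly sales", "I can report to finance", 2),
]
for story in backlog:
    print(story.card(), f"(estimate: {story.estimate_days} days)")
```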
SRS (Software Requirement Specification): A detailed document describing functional and non-functional requirements. Key characteristics of a good SRS include: Correctness, Completeness, Consistency, Unambiguousness, Modifiability, Verifiability, Traceability, Design Independence, Testability, Understandability.
Essay Questions
The following answers to the unit's essay questions are based on the slide material.
Requirement Engineering (RE) is defined as the process of defining, documenting, and maintaining requirements during the engineering design process. It provides mechanisms to understand customer desires, analyze needs, assess feasibility, negotiate solutions, specify clearly, validate specifications, and manage requirements as they evolve.
The slides outline a five-step RE process (visualized in slide 23):
- Feasibility Study: Assesses the practicality and reasons for developing the software, considering technical, operational, and economic factors. A Feasibility Report is often produced.
- Requirement Elicitation and Analysis: Involves gathering requirements from customers, stakeholders, and existing systems. This phase identifies needs, resolves conflicts, and understands the problem domain.
- Software Requirement Specification (SRS): Involves documenting the gathered requirements, often in an SRS document. This uses models like ER diagrams, DFDs, and data dictionaries to create a technical representation understood by the development team.
- Software Requirement Validation: Checks the documented requirements (e.g., in the SRS) for correctness, completeness, consistency, lack of ambiguity, and whether they truly meet user needs. Techniques include reviews, prototyping, and test-case generation.
- Software Requirement Management: Handles changes to requirements that inevitably occur during the development process, managing different priorities and evolving needs.
The Feasibility Study is the first step in the RE process, aiming to establish the rationale for developing the software, ensuring it's acceptable to users, flexible, and meets standards. The slides mention three specific types of feasibility (slides 24-25):
- Technical Feasibility: This evaluates the availability and capability of the current technologies needed to meet the customer's requirements within the given time and budget constraints. It asks: "Can we build it with the technology we have or can acquire?"
- Operational Feasibility: This assesses how well the proposed software solution will work within the organization's existing operational environment and meet the business needs and customer requirements. It considers how the software will fit into the workflow and solve the intended business problems.
- Economic Feasibility: This determines if the project is financially viable. It analyzes the costs of development versus the expected benefits (like financial profits) for the organization. It asks: "Is the project a good financial investment?"
The overall purpose is to determine if the project is practical and justifiable from technical, operational, and financial perspectives before significant resources are committed.
A Software Requirement Specification (SRS) is a document created by a software analyst after gathering requirements from various sources. It translates customer needs (often expressed in ordinary language) into a technical language that the development team can understand and use. It details what the software will do and how it's expected to perform, including functional and non-functional requirements, and may use models like ER diagrams and DFDs (Slides 28-29).
The slides list several characteristics of a good SRS (Slide 36). Here are three key ones with explanations derived from slides 37-48:
- Correctness (Slide 37): An SRS is correct if it accurately represents all the requirements that are truly expected from the system. It must cover the actual needs of the user. User reviews are essential to ensure this accuracy. Importance: Ensures the developed system actually solves the user's problem.
- Unambiguousness (Slide 40): Every requirement stated in the SRS must have only one possible interpretation. If terms or methods with multiple meanings are used, they must be clearly defined within the SRS context. Importance: Prevents misunderstandings between stakeholders and developers, leading to fewer errors and rework.
- Completeness (Slide 38): The SRS must include all essential requirements (functional, performance, design constraints, interfaces), define the software's response to all possible inputs and situations, and properly label/reference all diagrams, tables, and define all terms used. Importance: Ensures nothing critical is missed and provides a full picture for development and testing.
The slides categorize software requirements into two main types (Slides 34-35):
- Functional Requirements (Slide 34): These define the specific functions or behaviors that a system or system element must perform. They describe *what* the system does. They detail the operations, transformations, and processing the software must carry out, often related to specific user tasks or business processes. Example: A requirement stating "The system shall allow users to add items to a shopping cart" is functional.
- Non-functional Requirements (Slide 35): These specify criteria used to judge the *operation* of a system, rather than its specific behaviors. They define *how* the system should perform its functions or constrain the system's design. They describe qualities or properties of the system. The slides further divide these into:
  - Execution Qualities: Observable at runtime, such as security (e.g., access control) and usability (e.g., ease of use).
  - Evolution Qualities: Related to the static structure and long-term maintenance of the system, such as testability (ease of testing), maintainability (ease of modification/bug fixing), extensibility (ease of adding features), and scalability (ability to handle increased load).
Example: A requirement stating "The system login page must load within 2 seconds" or "The system must encrypt user passwords" is non-functional.
In essence, functional requirements define what the system *must do*, while non-functional requirements define *how well* the system must do it or impose constraints on it.
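One way to see the difference in practice: a functional requirement is checked by asserting what the system does, while a non-functional requirement is checked by asserting how well it does it. A minimal sketch, assuming a hypothetical `Cart` class and a 2-second response budget:

```python
import time
import unittest

# Hypothetical system under test.
class Cart:
    def __init__(self):
        self.items = []

    def add(self, item: str) -> None:
        self.items.append(item)

class TestRequirements(unittest.TestCase):
    def test_functional_add_to_cart(self):
        """Functional: 'The system shall allow users to add items to a cart.'"""
        cart = Cart()
        cart.add("book")
        self.assertIn("book", cart.items)

    def test_nonfunctional_response_time(self):
        """Non-functional: the operation must finish within a 2-second budget."""
        cart = Cart()
        start = time.perf_counter()
        cart.add("book")
        self.assertLess(time.perf_counter() - start, 2.0)

if __name__ == "__main__":
    unittest.main()
```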
Requirement Validation is a step in the Requirement Engineering process that occurs after the requirements have been specified (e.g., in an SRS document). Its purpose is to check that the documented requirements accurately reflect the customer's actual needs and are correct, complete, consistent, and feasible (Slide 30).
Validation is crucial because stakeholders might request impossible or conflicting features, or analysts might misinterpret needs. Without validation, the team might build a system based on flawed requirements, leading to a product that doesn't meet user expectations, requires extensive rework, or fails completely.
The slides mention four validation techniques (Slide 31):
- Requirements reviews/inspections: A systematic manual analysis of the requirements document by a team (including stakeholders, developers, testers) to find errors, ambiguities, and omissions.
- Prototyping: Creating an executable model or mock-up of the system (or parts of it) based on the requirements. Users can interact with the prototype to check if it matches their understanding and needs, providing early feedback.
- Test-case generation: Developing tests based on the requirements to check whether they are specific enough to be tested (testability).
- Automated consistency analysis: Using tools to check structured requirement descriptions for logical inconsistencies.
Of these, requirements reviews and prototyping are two common and effective techniques.
According to slide 39, an SRS is considered **Consistent** if and only if no subset of the individual requirements described in it conflicts with another. Consistency ensures that different parts of the specification do not contradict one another.
The slides highlight potential types of conflicts:
- Conflict in characteristics of real-world objects: Different requirements might describe the same entity or feature in incompatible ways.
- Logical or temporal conflict between specified actions: Two requirements might specify actions that cannot logically occur together or contradict each other in terms of process flow.
Example of Inconsistency (based on slide 39):
An SRS document contains the following two requirements:
- Requirement A: "The system shall generate the monthly sales summary report in a tabular format."
- Requirement B: "The monthly sales summary report must be presented as a free-form textual document."
This represents an inconsistency because the format of the *same* output (the monthly sales summary report) is described in conflicting ways (tabular vs. textual). A consistent SRS would specify only one format or clearly define conditions under which each format is used.
Another example provided: One requirement states the program will *add* two inputs, while another states the program will *multiply* them under the same conditions.
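In a very simple form, the automated consistency analysis technique mentioned earlier could detect exactly this kind of conflict, provided the requirements are recorded in a structured way. A toy sketch (the requirement records and their fields are invented):

```python
from collections import defaultdict

# Structured requirements: (id, output artifact, specified format) -- invented data.
requirements = [
    ("R1", "monthly sales summary report", "tabular"),
    ("R2", "monthly sales summary report", "free-form text"),
    ("R3", "invoice", "PDF"),
]

# Group the format specifications by artifact.
formats = defaultdict(set)
for req_id, artifact, fmt in requirements:
    formats[artifact].add((req_id, fmt))

# Flag any artifact whose format is specified in more than one way.
for artifact, specs in formats.items():
    if len({fmt for _, fmt in specs}) > 1:
        ids = ", ".join(req_id for req_id, _ in sorted(specs))
        print(f"Inconsistency: '{artifact}' has conflicting formats ({ids})")
```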
Requirement Management, as described in slide 32, is the process of managing **changing requirements** throughout the requirements engineering process and the subsequent system development lifecycle.
It involves activities like identifying, controlling, tracking, and reporting requirements and the changes made to them.
Requirement Management is necessary because:
- Requirements Evolve: Business needs, user expectations, market conditions, and technical possibilities change over time. Requirements defined at the start of a project are rarely static.
- Changing Priorities: The importance or priority of different requirements can shift during development due to feedback, budget constraints, or strategic decisions (as mentioned in slide 32).
- Stakeholder Understanding Improves: As the project progresses and stakeholders see prototypes or early versions, their understanding of their own needs may evolve, leading to new or modified requirements.
- Complexity: In large projects, managing numerous requirements and their interdependencies requires a structured approach to avoid chaos and ensure changes are handled consistently and their impact is understood.
Without effective requirement management, projects risk scope creep, confusion, inconsistencies, budget overruns, and delivering a system that doesn't meet the *final* user needs.
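A minimal sketch of the identify/control/track activities, assuming requirements are stored as structured records (all field names are illustrative):

```python
from dataclasses import dataclass, field

@dataclass
class ManagedRequirement:
    """A requirement under change control (illustrative structure)."""
    req_id: str
    text: str
    priority: str
    history: list = field(default_factory=list)  # audit trail of past versions

    def change(self, new_text: str, reason: str) -> None:
        """Record and apply an approved change request."""
        self.history.append((self.text, reason))  # keep the old text and the rationale
        self.text = new_text

req = ManagedRequirement("R1", "The report is emailed weekly.", "high")
req.change("The report is emailed daily.", "stakeholder feedback after iteration 2")
print(req.text, "| recorded changes:", len(req.history))
```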
Maintainability is defined in slide 19 as a quality attribute indicating how easily a software product can be modified. This includes several aspects:
- Correcting Bugs: Bugs discovered after release can be easily located and fixed.
- Adding New Tasks/Features: New functionality can be incorporated into the existing system without excessive effort or breaking existing parts.
- Modifying Functionality: Existing features can be adapted or changed to meet new requirements.
Essentially, a maintainable software product is easy to understand, modify, and enhance over its lifespan.
Maintainability is important because:
- Reduced Costs: Software maintenance (bug fixing, updates, enhancements) often consumes a significant portion of the total software lifecycle cost. High maintainability reduces this effort and cost.
- Faster Updates: Changes and fixes can be implemented more quickly, allowing the software to adapt to changing user needs or market conditions faster.
- Lower Risk: Modifying highly maintainable code is less likely to introduce new errors (regressions) compared to modifying complex, poorly structured code.
- Longer Lifespan: Software that is easy to maintain can remain useful and relevant for a longer period, maximizing the initial investment.
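As a toy illustration of the "easy to modify" idea (not from the slides): in the first version below, every new shipping zone means editing branching logic, while the table-driven version makes the same change a one-line data edit:

```python
# Harder to maintain: every new shipping zone means editing branching logic.
def shipping_cost_v1(zone: str) -> float:
    if zone == "domestic":
        return 5.0
    elif zone == "europe":
        return 12.0
    raise ValueError(f"unknown zone: {zone}")

# Easier to maintain: adding a zone is a one-line data change.
SHIPPING_RATES = {
    "domestic": 5.0,
    "europe": 12.0,
    "asia": 15.0,  # new feature added without touching any logic
}

def shipping_cost_v2(zone: str) -> float:
    try:
        return SHIPPING_RATES[zone]
    except KeyError:
        raise ValueError(f"unknown zone: {zone}")

print(shipping_cost_v2("asia"))  # -> 15.0
```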
Requirement Elicitation and Analysis is the process of gathering requirements from stakeholders and understanding the problem domain. Slide 26 highlights several common challenges faced during this phase:
- Identifying the Right People: Ensuring that all relevant stakeholders are involved, and only the relevant ones, can be difficult. Missing key stakeholders means missing requirements; involving irrelevant people wastes time.
- Stakeholders Don't Know What They Want: Often, stakeholders have a vague idea of their needs but struggle to articulate specific, complete, and consistent requirements, especially for innovative systems.
- Communication Barriers: Stakeholders often express requirements in their own domain-specific terms or natural language, which might be ambiguous or misunderstood by the analysts or development team.
- Conflicting Requirements: Different stakeholders may have different needs or priorities that conflict with each other. Resolving these conflicts requires negotiation and prioritization.
- Changing Requirements: Requirements often change even during the elicitation and analysis phase itself, as understanding evolves or external factors shift. This requires flexibility and management.
- Organizational and Political Factors: The requirements gathered can be influenced by internal politics, organizational structure, or individual agendas, potentially skewing the true needs of the system.
Overcoming these challenges requires skilled analysts who can use various elicitation techniques (interviews, workshops, observation, etc.), facilitate communication, manage conflicts, and iteratively refine understanding.
Traceability, as a characteristic of a good SRS (Slide 44), refers to the ability to follow the life of a requirement both forwards and backwards. An SRS is traceable if:
- The origin (source) of each requirement is clear.
- It facilitates referencing each requirement in future development phases (design, coding, testing) or subsequent documentation (e.g., enhancement specifications).
Traceability helps in understanding the purpose of a requirement, managing changes (assessing the impact of changing a requirement), and verifying that all requirements have been addressed in the final system.
The slides mention two types of traceability:
- Backward Traceability: This links each requirement in the SRS back to its source. The source could be an earlier document (like a business case or feasibility study), a specific stakeholder request, or a high-level system goal. It answers the question: "Why does this requirement exist?"
- Forward Traceability: This ensures each requirement in the SRS has a unique identifier (name or reference number) so that it can be referenced by design elements, code modules, test cases, and other downstream artifacts. It answers the question: "Where is this requirement implemented and tested?"
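A toy sketch of a traceability table that records both directions (all identifiers are invented for illustration): each requirement points back to its source and forward to the design elements and test cases that realize it:

```python
# Toy traceability table (all identifiers invented for illustration).
# "source" gives backward traceability; "design"/"tests" give forward traceability.
trace = {
    "REQ-01": {"source": "Business case, section 2.1",
               "design": ["MOD-cart"],
               "tests":  ["TC-101", "TC-102"]},
    "REQ-02": {"source": "Stakeholder interview #3",
               "design": ["MOD-report"],
               "tests":  []},
}

# Forward check: every requirement should be covered by at least one test case.
for req_id, links in trace.items():
    if not links["tests"]:
        print(f"{req_id} (origin: {links['source']}) has no test coverage yet")
```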