Thursday, December 30, 2010

Traceability Matrix

A requirements traceability matrix is a document that traces and maps user requirements [requirement IDs from the requirement specification document] to test case IDs. Its purpose is to make sure that all requirements are covered by test cases so that no functionality is missed during testing.
This document is prepared to satisfy the client that coverage is complete end to end. It consists of the Requirement/Baseline doc Ref No., Test case/Condition, and Defect/Bug ID. Using this document, a person can trace a requirement from its defect ID.
Note – We can turn it into a “Test case Coverage checklist” document by adding a few more columns. We will discuss this in later posts.
Types of Traceability Matrix:
  • Forward Traceability – Mapping of Requirements to Test cases
  • Backward Traceability – Mapping of Test Cases to Requirements
  • Bi-Directional Traceability – Mapping in both directions; a good traceability matrix has references from test cases to the baseline documentation and vice versa.
Why Bi-Directional Traceability is required?
Bi-Directional Traceability contains both Forward and Backward Traceability. Through the Backward Traceability Matrix, we can see which requirements each test case maps to.
This helps us identify test cases that do not trace to any coverage item — in which case the test case is not required and should be removed (or perhaps a specification such as a requirement should be added!). This “backward” traceability is also very helpful when you want to identify how many requirements a particular test case covers.
Through Forward Traceability, we can check which test cases cover each requirement, and whether every requirement is covered by a test case at all.
Forward Traceability Matrix ensures – We are building the Right Product.
Backward Traceability Matrix ensures – We are Building the Product Right.
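To make the two directions concrete, here is a minimal sketch (the requirement and test case IDs are hypothetical) that derives backward traceability and coverage gaps from a forward mapping:

```python
# Minimal sketch: building forward and backward traceability views.
# The requirement and test-case IDs below are hypothetical examples.
from collections import defaultdict

# Forward traceability: requirement ID -> test case IDs that cover it
forward = {
    "REQ-001": ["TC-101", "TC-102"],
    "REQ-002": ["TC-103"],
    "REQ-003": [],            # uncovered requirement -> coverage gap
}

# Backward traceability: test case ID -> requirement IDs it traces to
backward = defaultdict(list)
for req, test_cases in forward.items():
    for tc in test_cases:
        backward[tc].append(req)

# Requirements with no covering test case (missed functionality)
uncovered = [req for req, tcs in forward.items() if not tcs]
print("Uncovered requirements:", uncovered)

# Test cases present in the test repository but absent from 'backward'
# would be candidates for removal (they trace to no requirement).
print("Backward traceability:", dict(backward))
```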
A traceability matrix answers the following questions for any software project:
  • How is it feasible to ensure, for each phase of the SDLC, that I have correctly accounted for all the customer’s needs?
  • How can I certify that the final software product meets the customer’s needs? Only a traceability matrix lets us verify that every requirement is captured in the test cases.
Disadvantages of not using a Traceability Matrix [some possible (observed) impacts]:
No traceability or incomplete traceability results in:
1. Poor or unknown test coverage, and more defects found in production
2. Bugs missed in earlier test cycles that surface in later cycles, followed by a lot of discussions and arguments with other teams and managers before release
3. Difficult project planning and tracking, and misunderstandings between different teams over project dependencies, delays, etc.
Benefits of using Traceability Matrix
  • Makes it obvious to the client that the software is being developed as per the requirements.
  • Ensures that all requirements are included in the test cases.
  • Ensures that developers are not creating features that no one has requested.
  • Makes it easy to identify missing functionality.
  • If there is a change request for a requirement, we can easily find out which test cases need to be updated.
  • Helps prevent the completed system from containing “extra” functionality that was never specified in the design specification, which would waste manpower, time and effort.

Cost of Quality


The cost of quality includes all costs incurred in the pursuit of quality or in performing
quality-related activities. Cost of quality studies are conducted to provide a baseline
for the current cost of quality, identify opportunities for reducing the cost of quality,
and provide a normalized basis of comparison. The basis of normalization is
almost always dollars. Once we have normalized quality costs on a dollar basis, we
have the necessary data to evaluate where the opportunities lie to improve our
processes. Furthermore, we can evaluate the effect of changes in dollar-based terms.

Quality costs may be divided into costs associated with prevention, appraisal, and
failure. Prevention costs include
• quality planning
• formal technical reviews
• test equipment
• training

Appraisal costs include activities to gain insight into product condition the “first time
through” each process. Examples of appraisal costs include
• in-process and interprocess inspection
• equipment calibration and maintenance
• testing

Failure costs are those that would disappear if no defects appeared before shipping a
product to customers. Failure costs may be subdivided into internal failure costs and
external failure costs. Internal failure costs are incurred when we detect a defect in
our product prior to shipment. Internal failure costs include
• rework
• repair
• failure mode analysis

External failure costs are associated with defects found after the product has been
shipped to the customer. Examples of external failure costs are
• complaint resolution
• product return and replacement
• help line support
• warranty work

As expected, the relative costs to find and repair a defect increase dramatically as
we go from prevention to detection to internal failure to external failure costs; data collected across the industry illustrates this phenomenon.
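As a purely illustrative sketch (the dollar figures below are invented, not industry data), normalizing the four cost categories on a dollar basis might look like this:

```python
# Illustrative only: hypothetical dollar figures for one release,
# normalized so that each category can be compared on the same basis.
quality_costs = {
    "prevention": 20_000,         # planning, reviews, training
    "appraisal": 35_000,          # inspections, calibration, testing
    "internal_failure": 50_000,   # rework, repair, failure mode analysis
    "external_failure": 120_000,  # complaints, returns, help line, warranty
}

total = sum(quality_costs.values())
for category, cost in quality_costs.items():
    print(f"{category:>17}: ${cost:>9,} ({cost / total:.1%} of cost of quality)")
```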

 

FORMAL TECHNICAL REVIEWS


A formal technical review is a software quality assurance activity performed by software
engineers (and others). The objectives of the FTR are
(1) to uncover errors in function, logic, or implementation for any representation of the software;
(2) to verify that the software under review meets its requirements;
(3) to ensure that the software has been represented according to predefined standards;
(4) to achieve software that is developed in a uniform manner; and
(5) to make projects more manageable.

In addition, the FTR serves as a training ground, enabling junior engineers to observe different approaches to software analysis, design, and implementation. The FTR also serves
to promote backup and continuity because a number of people become familiar with
parts of the software that they may not have otherwise seen.

The FTR is actually a class of reviews that includes walkthroughs, inspections,
round-robin reviews and other small group technical assessments of software. Each
FTR is conducted as a meeting and will be successful only if it is properly planned,
controlled, and attended. In the sections that follow, guidelines similar to those for a
walkthrough are presented as a representative formal technical review.

 

Defect leakage


Defect leakage refers to a defect found or reproduced by the client or user that the tester was unable to find.
Defect leakage is the number of bugs that are found in the field that were not found internally. There are a few ways to express this:
* total number of leaked defects (a simple count)
* defects per customer: number of leaked defects divided by number of customers running that release
* % found in the field: number of leaked defects divided by number of total defects found in that release
In theory, this can be measured at any stage - number of defects leaked from dev into QA, number leaked from QA into beta certification, etc. I've mostly
used it for customers in the field, though.
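A quick sketch of the three expressions above, using made-up counts:

```python
# Hypothetical counts for one release.
leaked_defects = 12          # found in the field, missed internally
internal_defects = 188       # found by the test team before release
customers_on_release = 40

total_defects = leaked_defects + internal_defects

count = leaked_defects                                    # simple count
defects_per_customer = leaked_defects / customers_on_release
leakage_percentage = leaked_defects / total_defects * 100

print(f"Leaked defects:        {count}")
print(f"Defects per customer:  {defects_per_customer:.2f}")
print(f"% found in the field:  {leakage_percentage:.1f}%")
```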

Probe Testing


It is almost the same as exploratory testing. It is a creative, intuitive process. Everything testers do is optimized to find bugs fast, so plans often change as testers learn more about the product and its weaknesses. Session-based test management is one method to organize and direct exploratory testing; it allows testers to provide meaningful reports to management while preserving the creativity that makes exploratory testing work.

Automation Testing v/s Manual Testing Guidelines:

I met with my team’s automation experts a few weeks back to get their input on when to automate and when to test manually. The general rule of thumb has always been to use common sense. If you’re only going to run the test once or twice, or the test is really expensive to automate, it is most likely a manual test. But then again, what good is saying “use common sense” when you need to come up with a deterministic set of guidelines on how and when to automate?
Pros of Automation
• If you have to run a set of tests repeatedly, automation is a huge win for you
• It gives you the ability to run automation against code that frequently changes to catch regressions in a timely manner
• It gives you the ability to run automation in mainstream scenarios to catch regressions in a timely manner (see What is a Nightly)
• Aids in testing a large test matrix (different languages on different OS platforms). Automated tests can be run at the same time on different machines, whereas the manual tests would have to be run sequentially.
Cons of Automation
• It costs more to automate. Writing the test cases and writing or configuring the automation framework you’re using costs more initially than running the test manually.
• You can’t automate visual references; for example, if you can’t verify the font colour via code or the automation tool, it is a manual test.
Pros of Manual
• If the test case only runs twice per coding milestone, it most likely should be a manual test. It costs less than automating it.
• It allows the tester to perform more ad-hoc (random) testing. In my experience, more bugs are found via ad-hoc testing than via automation. And the more time a tester spends playing with the feature, the greater the odds of finding real user bugs.
Cons of Manual
• Running tests manually can be very time consuming
• Each time there is a new build, the tester must rerun all required tests - which after a while would become very mundane and tiresome.
Other deciding factors:
• What you automate depends on the tools you use. If the tools have any limitations, those tests are manual.
• Is the return on investment worth automating? Is what you get out of automation worth the cost of setting up and supporting the test cases, the automation framework, and the system that runs the test cases?
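One rough way to frame that ROI question is a simple break-even calculation; all the numbers below are hypothetical:

```python
# Rough, hypothetical break-even calculation for automating one test case.
manual_run_cost = 0.5        # hours of tester time per manual execution
automation_cost = 8.0        # hours to script and stabilize the test
maintenance_per_run = 0.05   # hours of upkeep per automated execution
planned_runs = 40            # expected executions over the product's life

manual_total = manual_run_cost * planned_runs
automated_total = automation_cost + maintenance_per_run * planned_runs

print(f"Manual total:    {manual_total:.1f} hours")
print(f"Automated total: {automated_total:.1f} hours")
print("Automate" if automated_total < manual_total else "Keep it manual")
```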

How to test a website by Manual Testing?

A: — Web Testing
When testing websites, the following areas should be considered.
Functionality
Performance
Usability
Server side interface
Client side compatibility
Security
Functionality:
When testing the functionality of a web site, the following should be tested.
Links
Internal links
External links
Mail links
Broken links (see the sketch at the end of this section)
Forms
Field validation
Functional chart
Error message for wrong input
Optional and mandatory fields
Database
Testing will be done on the database integrity.
Cookies
Testing will be done on the client system side, on the temporary internet files.
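As promised above, here is a minimal sketch of a broken-link check using only the Python standard library; the start URL is a placeholder, and a real crawl would need depth limits and politeness:

```python
# Minimal broken-link check using only the standard library.
import urllib.request
from html.parser import HTMLParser
from urllib.parse import urljoin

class LinkParser(HTMLParser):
    """Collects href values from <a> tags."""
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)

start_url = "http://example.com/"   # placeholder page to check
with urllib.request.urlopen(start_url, timeout=10) as resp:
    parser = LinkParser()
    parser.feed(resp.read().decode("utf-8", errors="replace"))

for link in parser.links:
    url = urljoin(start_url, link)
    try:
        with urllib.request.urlopen(url, timeout=10) as r:
            status = r.status
    except Exception as exc:
        status = f"BROKEN ({exc})"
    print(status, url)
```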
Performance:
Performance testing can be applied to understand the web site’s scalability, or to benchmark performance in the environment of third-party products such as servers and middleware being considered for purchase.
Connection speed:
Tested over various networks like dial-up, ISDN, etc.
Load
What is the number of users per unit of time?
Check peak loads and how the system behaves.
Large amounts of data accessed by the user.
Stress
Continuous load
Performance of memory, CPU, file handling, etc.
Usability:
Usability testing is the process by which the human-computer interaction characteristics of a system are measured, and weaknesses are identified for correction. Usability can be defined as the degree to which a given piece of software assists the person sitting at the keyboard to accomplish a task, as opposed to becoming an additional impediment to such accomplishment. The broad goal of usable systems is often assessed using several criteria:
Ease of learning
Navigation
Subjective user satisfaction
General appearance
Server side interface:
In web testing the server side interface should be tested.
This is done by verifying that communication happens properly.
Compatibility of server with software, hardware, network and database should be tested.
The client side compatibility is also tested in various platforms, using various browsers etc.
Security:
The primary reason for testing the security of a web site is to identify potential vulnerabilities and subsequently repair them.
The following types of testing are described in this section:
Network Scanning
Vulnerability Scanning
Password Cracking
Log Review
Integrity Checkers
Virus Detection
Performance Testing
Performance testing is a rigorous evaluation of a working system under realistic conditions to identify performance problems and to compare measures such as success
rate, task time and user satisfaction with requirements. The goal of performance testing is not to find bugs, but to eliminate bottlenecks and establish a baseline for future regression testing.
To conduct performance testing is to engage in a carefully controlled process of measurement and analysis. Ideally, the software under test is already stable enough so that this process can proceed smoothly. A clearly defined set of expectations is essential for meaningful performance testing.
For example, for a Web application, you need to know at least two things:
expected load in terms of concurrent users or HTTP connections
acceptable response time
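A minimal sketch of checking those two numbers together; the URL, user count and threshold are placeholders, and a real load test would use a proper tool:

```python
# Minimal concurrency/response-time check using only the standard library.
import time
import urllib.request
from concurrent.futures import ThreadPoolExecutor

URL = "http://example.com/"          # placeholder application URL
CONCURRENT_USERS = 20                # hypothetical expected load
ACCEPTABLE_RESPONSE_SECONDS = 2.0    # hypothetical requirement

def timed_request(_):
    start = time.perf_counter()
    with urllib.request.urlopen(URL, timeout=10):
        pass
    return time.perf_counter() - start

with ThreadPoolExecutor(max_workers=CONCURRENT_USERS) as pool:
    durations = list(pool.map(timed_request, range(CONCURRENT_USERS)))

slowest = max(durations)
print(f"Slowest of {CONCURRENT_USERS} concurrent requests: {slowest:.2f}s")
print("PASS" if slowest <= ACCEPTABLE_RESPONSE_SECONDS else "FAIL")
```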
Load testing:
Load testing is usually defined as the process of exercising the system under test by feeding it the largest tasks it can operate with. Load testing is sometimes called volume testing, or longevity/endurance testing
Examples of volume testing:
testing a word processor by editing a very large document
testing a printer by sending it a very large job
testing a mail server with thousands of user mailboxes
Examples of longevity/endurance testing:
testing a client-server application by running the client in a loop against the server over an extended period of time
Goals of load testing:
Expose bugs that do not surface in cursory testing, such as memory management bugs, memory leaks, buffer overflows, etc.
Ensure that the application meets the performance baseline established during performance testing. This is done by running regression tests against the application at a specified maximum load.
Although performance testing and load testing may seem similar, their goals are different. Performance testing uses load-testing techniques and tools for measurement and benchmarking purposes, and uses various load levels, whereas load testing operates at a predefined load level, usually the highest load the system can accept while still functioning properly.
Stress testing:
Stress testing is a form of testing that is used to determine the stability of a given system or entity. This is designed to test the software with abnormal situations. Stress testing attempts to find the limits at which the system will fail through abnormal quantity or frequency of inputs.
Stress testing tries to break the system under test by overwhelming its resources or by taking resources away from it (in which case it is sometimes called negative testing).
The main purpose behind this madness is to make sure that the system fails and recovers gracefully — this quality is known as recoverability.
The aim is not simply to break the system, but to observe how the system reacts to failure. Stress testing checks for the following:
Does it save its state or does it crash suddenly?
Does it just hang and freeze or does it fail gracefully?
Is it able to recover from the last good state on restart?
Etc.
Compatibility Testing
Compatibility testing ensures the compatibility of an application or web site with different browsers, operating systems and hardware platforms. Different versions, configurations, display resolutions, and Internet connection speeds can all affect the behavior of the product and introduce costly and embarrassing bugs. We test for compatibility using real test environments, that is, by testing how the system performs in a particular software, hardware or network environment. Compatibility testing can be performed manually or can be driven by an automated functional or regression test suite. The purpose of compatibility testing is to reveal issues related to the product's interaction with other software as well as hardware.
Product compatibility is evaluated by first identifying the hardware/software/browser components that the product is designed to support. Then a hardware/software/browser matrix is designed that indicates the configurations on which the product will be tested (a small sketch for generating such a matrix follows the list below). Then, with input from the client, a testing script is designed that is sufficient to evaluate compatibility between the product and the hardware/software/browser matrix. Finally, the script is executed against the matrix, and any anomalies are investigated to determine exactly where the incompatibility lies.
Some typical compatibility tests include testing your application:
On various client hardware configurations
Using different memory sizes and hard drive space
On various Operating Systems
In different network environments
With different printers and peripherals (e.g. zip drives, USB devices, etc.)
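A tiny sketch of generating such a hardware/software/browser matrix from the supported components (the component names are hypothetical):

```python
# Generate a test matrix from the supported components.
# The component lists are hypothetical examples.
from itertools import product

operating_systems = ["Windows", "Linux", "macOS"]
browsers = ["Internet Explorer", "Firefox", "Chrome"]
screen_resolutions = ["1024x768", "1920x1080"]

matrix = list(product(operating_systems, browsers, screen_resolutions))
print(f"{len(matrix)} configurations to cover:")
for os_name, browser, resolution in matrix:
    print(f"  {os_name} / {browser} / {resolution}")
```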

Wednesday, December 29, 2010

THE PROJECT PLAN

Each step in the software engineering process should produce a deliverable that can
be reviewed and that can act as a foundation for the steps that follow. The Software
Project Plan is produced at the culmination of the planning tasks. It provides baseline
cost and scheduling information that will be used throughout the software process.
The Software Project Plan is a relatively brief document that is addressed to a diverse
audience. It must (1) communicate scope and resources to software management,
technical staff, and the customer; (2) define risks and suggest risk aversion techniques;
(3) define cost and schedule for management review; (4) provide an overall approach
to software development for all people associated with the project; and (5) outline
how quality will be ensured and change will be managed.
A presentation of cost and schedule will vary with the audience addressed. If the
plan is used only as an internal document, the results of each estimation technique
can be presented. When the plan is disseminated outside the organization, a reconciled
cost breakdown (combining the results of all estimation techniques) is provided.
Similarly, the degree of detail contained within the schedule section may vary with
the audience and formality of the plan.
It is important to note that the Software Project Plan is not a static document. That
is, the project team revisits the plan repeatedly—updating risks, estimates, schedules
and related information—as the project proceeds and more is learned.

Component based development model

The technical foundation for a component-based process model for software engineering is provided by object-oriented technologies. The construction of classes that include both the data and the algorithms required to process the data is emphasized by the object-oriented paradigm. Object-oriented classes can be reused in a variety of applications and computer-based system architectures if they are correctly designed and implemented.

Several of the traits of the spiral model are incorporated into the component-based development (CBD) paradigm. Its evolving nature necessitates an iterative approach to software development. The component-based development methodology, on the other hand, builds applications out of prepackaged software components (referred to as classes).
The identification of potential classes is the first step in the engineering activity. Candidate classes are identified by examining the data that will be manipulated by the application and the algorithms that will be applied to that data.

The WINWIN Spiral Model


The spiral model suggests a framework activity that addresses customer communication.
 The objective of this activity is to elicit project requirements from the customer.
 In an ideal context, the developer simply asks the customer what is required and
the customer provides sufficient detail to proceed. Unfortunately, this rarely happens.
In reality, the customer and the developer enter into a process of negotiation,
 where the customer may be asked to balance functionality,
performance, and other product or system characteristics against cost and
time to market.

The best negotiations strive for a “win-win” result. That is, the customer wins by
getting the system or product that satisfies the majority of the customer’s needs and
the developer wins by working to realistic and achievable budgets and deadlines.

Boehm’s WINWIN spiral model  defines a set of negotiation activities at
the beginning of each pass around the spiral. Rather than a single customer communication
activity, the following activities are defined:
1. Identification of the system or subsystem’s key “stakeholders.”
2. Determination of the stakeholders’ “win conditions.”
3. Negotiation of the stakeholders’ win conditions to reconcile them into a set of
win-win conditions for all concerned (including the software project team).
Successful completion of these initial steps achieves a win-win result, which becomes
the key criterion for proceeding to software and system definition.

In addition to the emphasis placed on early negotiation, the WINWIN spiral model
introduces three process milestones, called anchor points , that help establish
the completion of one cycle around the spiral and provide decision milestones
before the software project proceeds.

In essence, the anchor points represent three different views of progress as the
project traverses the spiral. The first anchor point, life cycle objectives (LCO), defines
a set of objectives for each major software engineering activity. For example, as part
of LCO, a set of objectives establishes the definition of top-level system/product
requirements. The second anchor point, life cycle architecture (LCA), establishes objectives
that must be met as the system and software architecture is defined. For example,
as part of LCA, the software project team must demonstrate that it has evaluated
the applicability of off-the-shelf and reusable software components and considered
their impact on architectural decisions. Initial operational capability (IOC) is the third
anchor point and represents a set of objectives associated with the preparation of the
software for installation/distribution, site preparation prior to installation, and assistance
required by all parties that will use or support the software.

The Spiral Model

The spiral model, originally proposed by Boehm , is an evolutionary software
process model that couples the iterative nature of prototyping with the controlled and
systematic aspects of the linear sequential model. It provides the potential for rapid
development of incremental versions of the software. Using the spiral model, software
is developed in a series of incremental releases. During early iterations, the
incremental release might be a paper model or prototype. During later iterations,
increasingly more complete versions of the engineered system are produced.
A spiral model is divided into a number of framework activities, also called task
regions. Typically, there are between three and six task regions. A representative spiral model
contains six task regions:
Customer communication—tasks required to establish effective communication
between developer and customer.

Planning—tasks required to define resources, timelines, and other projectrelated
information.

Risk analysis—tasks required to assess both technical and management
risks.

Engineering—tasks required to build one or more representations of the
application.

Construction and release—tasks required to construct, test, install, and
provide user support (e.g., documentation and training).

Customer evaluation—tasks required to obtain customer feedback based
on evaluation of the software representations created during the engineering
stage and implemented during the installation stage.

Each of the regions is populated by a set of work tasks, called a task set, that are
adapted to the characteristics of the project to be undertaken. For small projects, the
number of work tasks and their formality is low. For larger, more critical projects,
each task region contains more work tasks that are defined to achieve a higher level
of formality.
As this evolutionary process begins, the software engineering team moves around
the spiral in a clockwise direction, beginning at the center. The first circuit around
the spiral might result in the development of a product specification; subsequent
passes around the spiral might be used to develop a prototype and then progressively
more sophisticated versions of the software. Each pass through the planning region
results in adjustments to the project plan. Cost and schedule are adjusted based on
feedback derived from customer evaluation. In addition, the project manager adjusts
the planned number of iterations required to complete the software.

Unlike classical process models that end when software is delivered, the spiral
model can be adapted to apply throughout the life of the computer software. An alternative
view of the spiral model can be considered by examining the project entry point
axis. Each cube placed along the axis can be used to represent
the starting point for different types of projects. A “concept development
project” starts at the core of the spiral and will continue (multiple iterations occur
along the spiral path that bounds the central shaded region) until concept development
is complete. If the concept is to be developed into an actual product, the process
proceeds through the next cube (new product development project entry point) and
a “new development project” is initiated. The new product will evolve through a number
of iterations around the spiral, following the path that bounds the region that has
somewhat lighter shading than the core. In essence, the spiral, when characterized
in this way, remains operative until the software is retired. There are times when the
process is dormant, but whenever a change is initiated, the process starts at the appropriate
entry point .

The spiral model is a realistic approach to the development of large-scale systems
and software. Because software evolves as the process progresses, the developer and
customer better understand and react to risks at each evolutionary level. The spiral model
uses prototyping as a risk reduction mechanism but, more important, enables the developer
to apply the prototyping approach at any stage in the evolution of the product. It
maintains the systematic stepwise approach suggested by the classic life cycle but incorporates
it into an iterative framework that more realistically reflects the real world. The
spiral model demands a direct consideration of technical risks at all stages of the project
and, if properly applied, should reduce risks before they become problematic.
But like other paradigms, the spiral model is not a panacea. It may be difficult to
convince customers (particularly in contract situations) that the evolutionary approach
is controllable. It demands considerable risk assessment expertise and relies on this
expertise for success. If a major risk is not uncovered and managed, problems will
undoubtedly occur. Finally, the model has not been used as widely as the linear
sequential or prototyping paradigms. It will take a number of years before efficacy of
this important paradigm can be determined with absolute certainty.