Monday, January 31, 2011

Test Case Design Techniques

Test case design techniques are broadly grouped into two categories, black box and white box, plus a few other techniques that do not fall under either category.
Black Box (Functional)
- Specification derived tests
- Equivalence partitioning
- Boundary Value Analysis
- State-Transition Testing

White Box (Structural)
- Branch Testing
- Condition Testing
- Data Definition - Use Testing
- Internal boundary value testing

Other
- Error guessing

Specification Derived Tests
As the name suggests, test cases are designed by walking through the relevant specifications. It is a positive test case design technique.

Equivalence Partitioning
Equivalence partitioning is the process of taking all of the possible test values and placing them into classes (partitions or groups). Test cases should be designed to test one value from each class. This way, the fewest test cases cover the maximum range of input requirements.
For example, suppose a program accepts integer values only from 1 to 10. The space of possible test values is the range of all integers, but all integers up to 0 and above 10 will cause an error. So it is reasonable to assume that if 11 fails, all values above it will fail too, and likewise for values below 1.
If an input condition is a range of values, let one valid equivalence class be the range itself (1 to 10 in this example), and let the values below and above the range form two invalid equivalence classes (represented by, say, 0 and 11). One value drawn from each of these three partitions is enough to cover the example.
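
To make this concrete, here is a minimal sketch in Python; accepts() is a hypothetical stand-in for the unit under test that implements the 1-to-10 rule:

    # Hypothetical validator standing in for the unit under test.
    def accepts(value):
        return 1 <= value <= 10

    # One representative value per partition is enough.
    assert accepts(5)        # valid class: 1 to 10
    assert not accepts(0)    # invalid class: values below 1
    assert not accepts(11)   # invalid class: values above 10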

Boundary Value Analysis
This is a selection technique where the test data are chosen to lie along the boundaries of the input domain or the output range. This technique is often called stress testing and incorporates a degree of negative testing into the test design by anticipating that errors will occur at or around the partition boundaries.
For example, suppose a field is required to accept amounts of money between $0 and $10. As a tester, you need to check whether this means up to and including $10, or only up to $9.99. The boundary values are therefore $0, $0.01, $9.99 and $10.
Now the following tests can be executed: a negative value should be rejected; $0 should be accepted (it lies on the boundary); $0.01 and $9.99 should be accepted; null should be rejected; and $10 should be rejected if the upper bound turns out to be exclusive. In this way, boundary value analysis uses the same concept of partitions as equivalence partitioning.
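
A minimal sketch of these boundary tests in Python, assuming a hypothetical accepts_amount() validator and an exclusive upper bound:

    # Hypothetical validator: accepts amounts from $0 up to (but excluding) $10.
    def accepts_amount(amount):
        return amount is not None and 0 <= amount < 10

    assert not accepts_amount(-0.01)  # just below the lower boundary: reject
    assert accepts_amount(0)          # on the lower boundary: accept
    assert accepts_amount(0.01)       # just inside the lower boundary: accept
    assert accepts_amount(9.99)       # just inside the upper boundary: accept
    assert not accepts_amount(10)     # on the exclusive upper boundary: reject
    assert not accepts_amount(None)   # null: reject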

State Transition Testing
As the name suggests, test cases are designed to test the transitions between states by generating the events that cause each transition.
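
For illustration, here is a toy example in Python; the turnstile states, events and transition table are assumptions invented for this sketch:

    # A two-state turnstile: inserting a coin unlocks it, pushing through locks it.
    TRANSITIONS = {
        ("LOCKED", "coin"): "UNLOCKED",
        ("UNLOCKED", "push"): "LOCKED",
    }

    def next_state(state, event):
        # An event with no defined transition leaves the state unchanged.
        return TRANSITIONS.get((state, event), state)

    assert next_state("LOCKED", "coin") == "UNLOCKED"  # valid transition
    assert next_state("UNLOCKED", "push") == "LOCKED"  # valid transition
    assert next_state("LOCKED", "push") == "LOCKED"    # invalid event: no change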

Branch Testing
In branch testing, test cases are designed to exercise control flow branches or decision points in a unit. This is usually aimed at achieving a target level of decision coverage. For branch coverage, both the IF and the ELSE branch need to be tested, and all branches and compound conditions (e.g. loops and array handling) within the branch should be exercised at least once.
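
A minimal sketch in Python; the discount rule is a made-up unit with a single IF/ELSE decision, so two test cases give full branch coverage:

    # Made-up unit under test with one decision point.
    def discount(order_total):
        if order_total >= 100:        # decision point
            return order_total * 0.9  # IF branch
        else:
            return order_total        # ELSE branch

    assert discount(150) == 135  # exercises the IF branch
    assert discount(50) == 50    # exercises the ELSE branch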

Condition Testing
The object of condition testing is to design test cases showing that the individual components of logical conditions, and combinations of those components, are correct. Test cases are designed to test the individual elements of logical expressions, both within branch conditions and within other expressions in a unit.
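
A minimal sketch in Python; the eligibility rule is a made-up compound condition, and the test cases vary each individual component in turn:

    # Made-up unit under test with a compound logical condition.
    def eligible(age, has_id):
        return age >= 18 and has_id

    assert eligible(20, True)        # both components true
    assert not eligible(20, False)   # first component true, second false
    assert not eligible(16, True)    # first component false, second true
    assert not eligible(16, False)   # both components false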

Data Definition – Use Testing
Data definition-use testing designs test cases to test pairs of data definitions and uses. A data definition is anywhere that the value of a data item is set; a data use is anywhere that a data item is read or used. The objective is to create test cases that drive execution through paths between specific definitions and uses.
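
A minimal sketch in Python; shipping_cost() is a made-up unit in which the variable cost has two definitions and one use:

    def shipping_cost(weight, express):
        cost = 5              # definition 1 of cost
        if express:
            cost = 15         # definition 2 of cost (redefines it)
        return cost + weight  # use of cost

    # One test per definition-use path:
    assert shipping_cost(2, express=False) == 7   # definition 1 -> use
    assert shipping_cost(2, express=True) == 17   # definition 2 -> use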

Internal Boundary Value Testing
In many cases, partitions and their boundaries can be identified from a functional specification for a unit, as described under equivalence partitioning and boundary value analysis above. However, a unit may also have internal boundary values that can only be identified from a structural specification.
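
A minimal sketch in Python; the batch size of 64 is an assumed structural detail that a functional specification would not reveal:

    BATCH_SIZE = 64  # internal detail, visible only in the code

    def batches(items):
        # Split items into internal batches of at most BATCH_SIZE.
        return [items[i:i + BATCH_SIZE] for i in range(0, len(items), BATCH_SIZE)]

    assert len(batches(list(range(63)))) == 1  # just below the internal boundary
    assert len(batches(list(range(64)))) == 1  # on the internal boundary
    assert len(batches(list(range(65)))) == 2  # just above the internal boundary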

Error Guessing
It is a test case design technique in which testers use their experience to guess the possible errors that might occur, and design test cases to uncover them.
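
One way to make this repeatable is to keep a list of historically error-prone inputs; in the Python sketch below, parse_quantity() is a hypothetical unit under test that should accept only integers from 1 to 999:

    def parse_quantity(text):
        value = int(text)  # may raise ValueError for non-numeric input
        if not 1 <= value <= 999:
            raise ValueError(text)
        return value

    # Inputs a tester might "guess" from experience with past failures.
    error_prone_inputs = ["", " ", "0", "-1", "1000", "1e9", "NaN", "9" * 100]

    for bad in error_prone_inputs:
        try:
            parse_quantity(bad)
            raised = False
        except ValueError:
            raised = True
        assert raised, f"expected rejection of {bad!r}"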

Using any one, or a combination, of the test case design techniques described above, you can develop effective test cases.

Thursday, January 27, 2011

Certification

Several certification programs exist to support the professional aspirations of software testers and quality assurance specialists. No certification currently offered actually requires the applicant to demonstrate the ability to test software, and none is based on a widely accepted body of knowledge. This has led some to declare that the testing field is not ready for certification. Certification itself cannot measure an individual's productivity, skill, or practical knowledge, and cannot guarantee their competence or professionalism as a tester.


Software testing certification types
Certifications can be grouped into two types: exam-based and education-based.

Exam-based certifications: these require passing an exam, for which candidates can also prepare by self-study (e.g. the ISTQB or QAI certifications).
Education-based certifications: these are instructor-led sessions, where each course has to be passed (e.g. those of the International Institute for Software Testing, IIST).

Testing certifications
Certified Associate in Software Testing (CAST) offered by the Quality Assurance Institute (QAI)
CATe offered by the International Institute for Software Testing (IIST)
Certified Manager in Software Testing (CMST) offered by the Quality Assurance Institute (QAI)
Certified Software Tester (CSTE) offered by the Quality Assurance Institute (QAI)
Certified Software Test Professional (CSTP) offered by the International Institute for Software Testing (IIST)
CSTP (TM) (Australian Version) offered by K. J. Ross & Associates
ISEB offered by the Information Systems Examinations Board
Certified Tester, Foundation Level (CTFL) offered by the International Software Testing Qualifications Board (ISTQB)
Certified Tester, Advanced Level (CTAL) offered by the International Software Testing Qualifications Board (ISTQB)
CBTS offered by the Brazilian Certification of Software Testing (ALATS)
TMPF TMap Next Foundation offered by the Examination Institute for Information Science
TMPA TMap Next Advanced offered by the Examination Institute for Information Science

Quality assurance certifications
CMSQ offered by the Quality Assurance Institute (QAI)
CSQA offered by the Quality Assurance Institute (QAI)
CSQE offered by the American Society for Quality (ASQ)
CQIA offered by the American Society for Quality (ASQ)


Monday, January 24, 2011

Error, Bug & Defect

Error:

An error is an undesirable deviation from requirements.

An error normally arises within the software.

An error changes the functionality of the program.

It is a deviation between the actual and the expected/theoretical value.

It is generated because of wrong logic, a faulty loop, or a syntax mistake.

It is the difference between expected and actual outputs/results.

Bug:

A bug is an error found BEFORE the application goes into production.

A bug identifies an error against the customer requirement.

Bug: an error found in the development environment before the product is shipped to the customer.
Bug: a programming error that causes a program to work poorly, produce incorrect results, or crash; an error in software or hardware that causes a program to malfunction.

Defect:

A defect is an error found AFTER the application goes into production.

A defect can be defined as a variance from expectations.

A defect is the difference between expected and actual results in the context of testing.

A defect is a deviation from the customer requirement.

Defect: an error found in the product itself after it is shipped to the customer.

Tuesday, January 11, 2011

The V model to W model

V-Model:


The V-model promotes the idea that the dynamic test stages (on the right hand side of the model) use the documentation identified on the left hand side as baselines for testing. The V-Model further promotes the notion of early test preparation.

The V-Model of testing

Early test preparation finds faults in baselines and is an effective way of detecting faults early. This approach is fine in principle. However, there are two problems with the V-Model as normally presented.



The V-Model with early test preparation

The first problem is that there is rarely a perfect, one-to-one relationship between the documents on the left hand side and the test activities on the right. For example, functional specifications don't usually provide enough information for a system test; system tests must often take account of aspects of the business requirements as well as physical design issues. System testing therefore usually draws on several sources of requirements information to be planned thoroughly.

The second problem is that the V-Model has little to say about static testing. It treats testing as a back-door activity on the right hand side of the model, with no mention of the potentially greater value and effectiveness of static tests such as reviews, inspections and static code analysis. This is a major omission: the V-Model does not support the broader view of testing as a constantly prominent activity throughout the development lifecycle.


Paul Herzlich introduced the W-Model approach in 1993. The W-Model attempts to address these shortcomings in the V-Model. Rather than focus on specific dynamic test stages, as the V-Model does, the W-Model focuses on the development products themselves. Essentially, every development activity that produces a work product is "shadowed" by a test activity, whose purpose is to determine whether the objectives of the development activity have been met and whether the deliverable meets its requirements.

In its most generic form, the W-Model presents a standard development lifecycle with every development stage mirrored by a test activity. On the left hand side, the deliverable of a development activity (for example, written requirements) is accompanied by a test activity ("test the requirements"), and so on. If your organization has a different set of development stages, the W-Model is easily adjusted to your situation. The important thing is this: the W-Model of testing focuses specifically on the product risks of concern at the point where testing can be most effective.


The W-Model and static test techniques.
If we focus on the static test techniques, you can see that there is a wide range of techniques available for evaluating the products of the left hand side. Inspections, reviews, walkthroughs, static analysis, requirements animation as well as early test case preparation can all be used.


The W-Model and dynamic test techniques.
If we consider the dynamic test techniques you can see that there is also a wide range of techniques available for evaluating executable software and systems. The traditional unit, integration, system and acceptance tests can make use of the functional test design and measurement techniques as well as the non-functional test techniques that are all available for use to address specific test objectives.
The W-Model removes the rather artificial constraint of having the same number of dynamic test stages as development stages. If there are five development stages concerned with the definition, design and construction of code in your project, it might be sensible to have only three stages of dynamic testing. Component, system and acceptance testing might fit your normal way of working, with the test objectives for the whole project distributed across three stages rather than five. There may be practical reasons for doing this, and the decision is based on an evaluation of product risks and how best to address them. The W-Model does not enforce a project "symmetry" that does not (or cannot) exist in reality, nor does it impose any rule that later dynamic tests must be based on documents created in specific stages (although earlier documentation products are nearly always used as baselines for dynamic testing).

More recently, the Unified Modeling Language (UML) described in Booch, Rumbaugh and Jacobsen's book [5], and the methodologies based on it, namely the Unified Software Process and the Rational Unified Process™ (described in [6-7]), have grown in importance. In projects using these methods, requirements and designs might be documented in multiple models, so system testing might be based on several of these models (spread over several documents).
We use the W-Model in test strategy as follows. Having identified the specific risks of concern, we specify the products that need to be tested; we then select the test techniques (static reviews or dynamic test stages) to be used on those products to address the risks; finally, we schedule test activities as close as practicable to the development activity that generated the products to be tested.

Monday, January 3, 2011

Check Lists for Functional Testing

Testing a web application is certainly different from testing a desktop or any other application. Within web applications, certain standards are followed in almost all applications. These standards make life easier for us, because they can be converted into checklists, and an application can then be tested easily against the checklists.

LINKS

Check that each link takes you to the page it says it will.
Ensure there are no orphan pages (pages that no other page links to).
Check all of your links to other websites.
Are all referenced web sites and email addresses hyperlinked?
If some pages have been removed from your own site, set up a custom 404 page that redirects visitors to your home page (or a search page) when they try to access a page that no longer exists.
Check all mailto links and that they reach the right address. (See the sketch after this list.)
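
A minimal sketch (Python, using the third-party requests and beautifulsoup4 packages) automating some of these checks; the start URL is a placeholder:

    import requests
    from bs4 import BeautifulSoup

    page = requests.get("https://example.com/")  # placeholder page under test
    soup = BeautifulSoup(page.text, "html.parser")

    for anchor in soup.find_all("a", href=True):
        href = anchor["href"]
        if href.startswith("mailto:"):
            continue  # mailto links still need a manual (or mocked) check
        if href.startswith("http"):
            status = requests.head(href, allow_redirects=True).status_code
            print(href, status)  # a status of 400 or above indicates a broken link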

FORMS

Acceptance of invalid input
Optional versus mandatory fields
Input longer than the field allows
Radio buttons
Default values on page load/reload (also, the terms and conditions checkbox should be unselected by default)
Can command buttons be used for hyperlinks and Continue links?
Are all the items inside combo/list boxes arranged in chronological order?
Are all the parts of a table or form present? Correctly laid out? Can you confirm that selected texts are in the right place?
Does a scrollbar appear if required? (See the sketch after this list.)
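
A minimal sketch of two of these checks using Selenium (a third-party package that also needs a matching browser driver); the URL and element ids are hypothetical:

    from selenium import webdriver
    from selenium.webdriver.common.by import By

    driver = webdriver.Chrome()
    driver.get("https://example.com/signup")  # hypothetical form page

    # Input longer than the field allows.
    driver.find_element(By.ID, "name").send_keys("A" * 500)  # hypothetical id

    # Terms and conditions should not be pre-selected on page load.
    assert not driver.find_element(By.ID, "accept-terms").is_selected()

    driver.find_element(By.ID, "submit").click()
    assert "error" in driver.page_source.lower()  # expect a validation message
    driver.quit()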

DATA VERIFICATION AND VALIDATION

Is the Privacy Policy clearly defined and available for user access?
At no point should the system behave unexpectedly when invalid data is fed in.
Check to see what happens if a user deletes cookies while in site
Check to see what happens if a user deletes cookies after visiting a site

DATA INTEGRATION

Check the maximum field lengths to ensure that no characters are truncated.
If numeric fields accept negative values, can these be stored correctly in the database, and does it make sense for the field to accept negative numbers?
If a particular set of data is saved to the database, check that each value gets saved fully; that is, beware of truncation of strings and rounding of numeric values.

DATE FIELD CHECKS

Assure that leap years are validated correctly & do not cause errors/miscalculations.
Assure that Feb. 28, 29, 30 are validated correctly & do not cause errors/ miscalculations.
Is the copyright notice for all sites, including Yahoo co-branded sites, up to date? (See the sketch after this list.)
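
The date rules above can be checked against the standard library, since datetime.date rejects impossible dates; a Python sketch:

    from datetime import date

    def is_valid_date(year, month, day):
        try:
            date(year, month, day)
            return True
        except ValueError:
            return False

    assert is_valid_date(2012, 2, 29)      # 2012 is a leap year
    assert not is_valid_date(2011, 2, 29)  # 2011 is not a leap year
    assert not is_valid_date(2011, 2, 30)  # Feb 30 never exists
    assert not is_valid_date(2100, 2, 29)  # century rule: 2100 is not a leap year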

NUMERIC FIELDS

Assure that lowest and highest values are handled correctly.
Assure that numeric fields with a blank in position 1 are processed or reported as an error.
Assure that fields with a blank in the last position are processed or reported as an error.
Assure that both + and - values are correctly processed.
Assure that division by zero does not occur.
Include value zero in all calculations.
Assure that upper and lower values in ranges are handled correctly (using BVA). (See the sketch after this list.)
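
A minimal sketch of a few of these checks in Python; percentage() is a hypothetical unit under test with a guarded division:

    def percentage(part, whole):
        if whole == 0:
            raise ZeroDivisionError("whole must be non-zero")
        return 100.0 * part / whole

    assert percentage(1, 4) == 25.0    # plain positive case
    assert percentage(-1, 4) == -25.0  # negative values processed correctly
    assert percentage(0, 4) == 0.0     # include the value zero in calculations
    try:
        percentage(1, 0)               # division by zero must be caught, not crash
    except ZeroDivisionError:
        pass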

ALPHANUMERIC FIELD CHECKS

Use blank and non-blank data.
Include lowest and highest values.
Include invalid characters & symbols.
Include valid characters.
Include data items with first position blank.
Include data items with last position blank.