Showing posts with label Test Cases. Show all posts

Monday, November 21, 2011

Test Cases Priorities for executions

After building and validating the testing models, several test cases are generated. The next big task is to decide the priority of executing them using some systematic procedure.

The process begins with the identification of "Static Test Cases" and "Dynamic Test Runs", a brief introduction to which follows.

Test Case: A collection of items and corresponding information that enables a test to be executed, i.e. a test run to be performed.

Test Run: The dynamic part of a specific testing activity in the overall sequence of testing on some specific test object.

Every time we invoke a static test case, we perform an individual dynamic test run. Hence every test case can correspond to several test runs.


Why & how do we prioritize?
Out of the large cluster of test cases in hand, we need to decide their order of execution based upon rational, non-arbitrary criteria. We carry out the prioritization activity with the objective of reducing the overall number of test cases in the total testing effort.

There are risks associated with prioritizing test cases: some of the application features may not undergo testing at all.

During prioritization we work out plans addressing following two key concepts:

Concept – 1: Identify the essential features that must be tested in any case.

Concept – 2: Identify the risk or consequences of not testing some of the features.

The decision making in selecting the test cases is therefore largely based upon assessing this risk first.

The objective of the test case prioritization exercise is to build confidence among the testers and the project leaders that the tests identified for execution are adequate from different angles.

The list of test cases selected for execution can be subjected to any number of reviews in case of doubts or risks associated with any of the omitted tests.

The following four schemes are quite common for prioritizing test cases.

All these methods are independent of each other and are aimed at optimizing the number of test cases. It is difficult to brand any one method as better than the others. We can use any one method as a standalone scheme, or in conjunction with another. When different prioritization schemes give similar results, the level of confidence increases.

Scheme – 1: Categorization of Priority.

Scheme – 2: Risk analysis.

Scheme – 3: Brainstorming to dig out the problematic areas.

Scheme – 4: Combination of different schemes.


Let us discuss the priority categorization scheme in greater detail here.

The easiest method of categorizing our tests is to assign a priority code directly to every test description, i.e. to assign a priority number to each and every test description.

A popular three-level priority categorization scheme is described below.

Priority - 1: Allocated to all tests that must be executed in any case.

Priority - 2: Allocated to tests that can be executed only if time permits.

Priority - 3: Allocated to tests whose omission will not cause big upsets.

After assigning priority codes, the tester estimates the time required to execute the tests in each category. If the estimated time lies within the allotted schedule, the tests have been successfully identified and the partitioning exercise is complete. If the time plan is exceeded, the partitioning exercise is carried further.
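The partitioning step above can be sketched in code. This is a minimal, hypothetical sketch; the test list, estimated hours, and allotted schedule are illustrative assumptions, not from any real project:

```python
# Group tests by priority code and check whether the Priority-1 workload
# fits the allotted schedule; if not, the partitioning continues.
from collections import defaultdict

tests = [
    ("login works", 1, 2.0),      # (description, priority, estimated hours)
    ("report layout", 3, 1.5),
    ("payment flow", 1, 4.0),
    ("help-page links", 2, 1.0),
]

by_priority = defaultdict(list)
for description, priority, hours in tests:
    by_priority[priority].append((description, hours))

allotted_hours = 8.0
priority1_hours = sum(h for _, h in by_priority[1])

if priority1_hours <= allotted_hours:
    print("Priority-1 set fits the schedule")
else:
    print("Partition further, e.g. onto the five-level scale")
```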

The above scheme can be extended to a new five-level scale that classifies test priorities further.

The Five-Level Priority scheme is as under

Priority-1a: Allocated to tests that must pass, otherwise the delivery date will be affected.

Priority-2a: Allocated to tests that must be executed before the final delivery.

Priority-3a: Allocated to tests that can be executed only if time permits.

Priority-4a: Allocated to tests that can wait and may be executed even after the delivery date.

Priority-5a: Allocated to tests that have a remote probability of ever being executed.

Testers plan to divide the tests into the various categories. For instance, tests from priority 2 may be further divided among priority levels 3a, 4a and 5a. Likewise, any test can be downgraded or upgraded.


Other considerations used while prioritizing or sequencing the test cases

a) Relative dependencies: Some test cases can run only after others, because one test is used to set up the state for the next. This applies especially to continuously operating systems, where a test run must start from the state created by the previous one.

b) Timing of defect detection: Applies where some problems can be detected only after many other problems have been found and fixed, e.g. in integration testing involving many components, each with problems at the individual component level.

c) Damage or accidents: Applies where acute problems or even severe damage can occur during testing if certain critical areas have not been checked before the present test run. For example, in embedded software involving safety-critical systems, testers prefer not to start testing the safety features before first testing the other related functions.

d) Difficulty levels: One of the most natural and commonly used sequences is to move from simple, easy test cases to difficult, complicated ones. This applies where complicated problems can be expected; testers prefer to execute the comparatively simpler test cases first to narrow down the problematic areas.

e) Combining test cases: Applies to the majority of large-scale software testing exercises, which involve interleaving and parallel testing to accelerate the testing process.
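The relative-dependencies consideration amounts to ordering the tests so that each runs only after the tests that set up its starting state. A minimal sketch using Python's standard-library topological sorter; the test names and the dependency graph are illustrative assumptions:

```python
# Order test cases so that each runs after its setup tests.
# The graph maps a test to the set of tests it depends on.
from graphlib import TopologicalSorter  # Python 3.9+

depends_on = {
    "transfer_funds": {"create_account", "login"},
    "create_account": {"login"},
    "login": set(),
}

order = list(TopologicalSorter(depends_on).static_order())
print(order)  # login comes first, transfer_funds last
```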

Wednesday, September 7, 2011

Test Cases Samples

Write test cases for copy & paste in MS Word

1. Verify that the text selected for copying gets highlighted.
2. Verify that when the selected text is right-clicked, the Copy option is enabled and the Paste option is disabled.
3. Verify that once the selected text is copied, the Paste option in the right-click menu becomes enabled.
4. Verify that if no text is selected, Cut and Copy are disabled in the right-click menu.
5. Verify that the shortcut keys Ctrl+C and Ctrl+V copy and paste the text.
6. Verify that the user is able to copy and paste text using the Edit menu.
7. Verify that if some text is selected and Paste is chosen from the right-click menu, the copied text overwrites the selected text.
8. Verify that Copy preserves the formatting of the copied content.
9. Verify that Paste can paste the content any number of times.

Test Cases for White Board

1. Verify the length and width of the board.
2. Verify the surface of the board.
3. Check whether you are able to write on the board.
4. Check that the written words are visible.
5. Try to erase the written words and write new words.



Test Cases for Save As Button in MS Office:


* On pressing Ctrl+S, the Save As dialog box should appear (for a document not yet saved).
* On going to File -> Save As, the Save As dialog box should appear.
* A File name field should be available to give the file name.
* A Save as type combo box should be available to choose the document type.
* Navigation buttons should be available to navigate to the desired path to save the file.
* A Change view button should be displayed to change the view of the folder icons.
* On clicking the Save button, the file should be saved at the given path.
* On clicking the Cancel button or the Close (x) button, or on pressing the Esc key, the Save As dialog box should close and the cursor should blink in the document.



Test Cases for Notepad "Save"
Click File -> Save (should open a window asking for the file name and path).

Check the file extension: it should be .txt only.

Check the shortcut key (Ctrl+S): it should behave the same as above (open a window asking for the file name and path).

Save As should open the same window, with the same file name and path.

If you change the file name or path in that window, it should be accepted.

If you do not give any name, the data should not be saved.

It should accept all alphanumeric characters, spaces and special characters.

Performance:
Add a huge amount of data to the file and test the time taken to save it.

Load:

Keep adding huge amounts of data, saving each time you add, and test how much data Notepad can save.

Stress:

Add an enormous amount of data (say, 100 MB) and keep adding until Notepad fails to save; test at what point the save fails.
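The load test above can be sketched as a script that saves progressively larger files and times each save. This writes files directly rather than driving Notepad's UI, so it only illustrates the measurement idea; the sizes are arbitrary assumptions:

```python
# Save ever-larger text files and record how long each save takes.
import os
import tempfile
import time

results = []
for megabytes in (1, 2, 4):
    data = "x" * (megabytes * 1024 * 1024)
    path = os.path.join(tempfile.gettempdir(), "load_test.txt")
    start = time.perf_counter()
    with open(path, "w") as f:   # stands in for the application's Save
        f.write(data)
    results.append((megabytes, time.perf_counter() - start))
    os.remove(path)

for megabytes, seconds in results:
    print(f"{megabytes} MB saved in {seconds:.3f} s")
```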

Monday, January 31, 2011

Test Case Design Techniques

The test case design techniques are broadly grouped into two categories, black box techniques and white box techniques, plus other techniques that do not fall under either category.
Black Box (Functional)
- Specification derived tests
- Equivalence partitioning
- Boundary Value Analysis
- State-Transition Testing

White Box (Structural)
- Branch Testing
- Condition Testing
- Data Definition - Use Testing
- Internal boundary value testing

Other
- Error guessing
Specification Derived Tests
As the name suggests, test cases are designed by walking through the relevant specifications. It is a positive test case design technique.

Equivalence Partitioning
Equivalence partitioning is the process of taking all of the possible test values and placing them into classes (partitions or groups). Test cases should be designed to test one value from each class. Thereby, it uses the fewest test cases to cover the maximum input requirements.
For example, suppose a program accepts integer values only from 1 to 10. The possible test values for such a program are all integers, and all integers below 1 or above 10 will cause an error. It is reasonable to assume that if 11 fails, all values above it will fail, and likewise for values below the range.
If an input condition is a range of values, let one valid equivalence class be the range itself (1 to 10 in this example), and let the values below and above the range be two invalid equivalence classes (e.g. 0 and 11). A representative value from each of these three classes can then be used as a test case.
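A minimal sketch of the three partitions above, with one representative value per class; `accepts` is an illustrative stand-in for the unit under test, not a real API:

```python
# One representative value per equivalence class for a field
# that accepts integers 1..10.
def accepts(value):
    return 1 <= value <= 10

cases = {
    "below range (invalid)": (0, False),
    "in range (valid)": (5, True),
    "above range (invalid)": (11, False),
}

for name, (value, expected) in cases.items():
    assert accepts(value) == expected, name
print("all three partitions behave as specified")
```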

Boundary Value Analysis
This is a selection technique where the test data are chosen to lie along the boundaries of the input domain or the output range. It incorporates a degree of negative testing into the test design by anticipating that errors will occur at or around the partition boundaries.
For example, suppose a field is required to accept amounts of money between $0 and $10. As a tester, you need to check whether that means up to and including $10, i.e. whether $10 itself is acceptable. Assuming the range is inclusive, the boundary values are $0, $0.01, $9.99 and $10.
Now the following tests can be executed: a negative value should be rejected; $0 should be accepted (it is on the boundary); $0.01, $9.99 and $10 should be accepted; null and $10.01 should be rejected. In this way, the technique uses the same concept of partitions as equivalence partitioning.
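The boundary tests above can be sketched as follows, assuming the $0 to $10 range is inclusive and representing amounts in cents to avoid floating-point issues; `accepts_amount` is an illustrative stand-in for the field's validation:

```python
# Values at and immediately around each boundary of $0.00..$10.00.
def accepts_amount(cents):
    return 0 <= cents <= 1000  # $0.00 .. $10.00 inclusive

boundary_cases = [
    (-1, False),    # just below the lower boundary
    (0, True),      # lower boundary
    (1, True),      # just above the lower boundary
    (999, True),    # just below the upper boundary
    (1000, True),   # upper boundary
    (1001, False),  # just above the upper boundary
]

for cents, expected in boundary_cases:
    assert accepts_amount(cents) == expected, cents
print("all boundary cases behave as specified")
```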

State Transition Testing
As the name suggests, test cases are designed to test the transition between the states by creating the events that cause the transition.
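A minimal sketch of the idea, using a hypothetical document workflow (draft, submitted, approved) as the state machine; the states and events are illustrative assumptions:

```python
# Each test fires an event and checks the resulting state transition.
transitions = {
    ("draft", "submit"): "submitted",
    ("submitted", "approve"): "approved",
    ("submitted", "reject"): "draft",
}

def next_state(state, event):
    # Invalid events leave the state unchanged.
    return transitions.get((state, event), state)

assert next_state("draft", "submit") == "submitted"
assert next_state("submitted", "approve") == "approved"
assert next_state("submitted", "reject") == "draft"
assert next_state("draft", "approve") == "draft"  # invalid transition ignored
```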

Branch Testing
In branch testing, test cases are designed to exercise control flow branches or decision points in a unit. This is usually aimed at achieving a target level of decision coverage: for branch coverage, both the IF and the ELSE branches need to be tested. All branches and compound conditions (e.g. loops and array handling) within the unit should be exercised at least once.
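A minimal sketch: two test cases that together exercise both branches of a simple IF/ELSE; `classify` is an illustrative unit under test, not from any real system:

```python
# A unit with one decision point: two branches, so two tests
# are needed for full branch coverage.
def classify(amount):
    if amount >= 0:
        return "credit"
    else:
        return "debit"

assert classify(10) == "credit"   # drives the IF branch
assert classify(-10) == "debit"   # drives the ELSE branch
```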

Condition Testing
The object of condition testing is to design test cases to show that the individual components of logical conditions and combinations of the individual components are correct. Test cases are designed to test the individual elements of logical expressions, both within branch conditions and within other expressions in a unit.

Data Definition – Use Testing
Data definition-use testing designs test cases to test pairs of data definitions and uses. Data definition is anywhere that the value of a data item is set. Data use is anywhere that a data item is read or used. The objective is to create test cases that will drive execution through paths between specific definitions and uses.
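A minimal sketch of one definition-use scenario; `discounted` is an illustrative unit in which a variable is defined on two different paths and used at the return:

```python
# `total` has two definitions (one per branch) and one use (the return).
# Def-use testing drives execution through each definition-use pair.
def discounted(price, is_member):
    if is_member:
        total = price * 0.9  # definition 1
    else:
        total = price        # definition 2
    return total             # use

assert discounted(100, True) == 90.0   # covers definition 1 -> use
assert discounted(100, False) == 100   # covers definition 2 -> use
```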

Internal Boundary Value Testing
In many cases, partitions and their boundaries can be identified from a functional specification for a unit, as described under equivalence partitioning and boundary value analysis above. However, a unit may also have internal boundary values that can only be identified from a structural specification.

Error Guessing
It is a test case design technique where the testers use their experience to guess the possible errors that might occur and design test cases accordingly to uncover them.

Using any one, or a combination, of the above test case design techniques, you can develop effective test cases.

Tuesday, December 21, 2010

Negative Test Cases

Top 10 Negative Test Cases

Negative test cases are designed to test the software in ways it was not intended to be used, and should be a part of your testing effort.  Below are the top 10 negative test cases you should consider when designing your test effort:
  1. Embedded Single Quote - Most SQL based database systems have issues when users store information that contains a single quote (e.g. John's car).  For each screen that accepts alphanumeric data entry, try entering text that contains one or more single quotes.

  2. Required Data Entry - Your functional specification should clearly indicate fields that require data entry on screens.  Test each field on the screen that has been indicated as being required to ensure it forces you to enter data in the field.

  3. Field Type Test - Your functional specification should clearly indicate fields that require specific data entry requirements (date fields, numeric fields, phone numbers, zip codes, etc).  Test each field on the screen that has been indicated as having special types to ensure it forces you to enter data in the correct format based on the field type (numeric fields should not allow alphabetic or special characters, date fields should require a valid date, etc).

  4. Field Size Test - Your functional specification should clearly indicate the number of characters you can enter into a field (for example, the first name must be 50 characters or less).  Write test cases to ensure that you can only enter the specified number of characters. Preventing the user from entering more characters than is allowed is more elegant than giving an error message after they have already entered too many characters.

  5. Numeric Bounds Test - For numeric fields, it is important to test for lower and upper bounds. For example, if you are calculating interest charged to an account, you would never have a negative interest amount applied to an account that earns interest; therefore, you should try testing it with a negative number.   Likewise, if your functional specification requires that a field be in a specific range (e.g. from 10 to 50), you should try entering 9 or 51; it should fail with a graceful message.

  6. Numeric Limits Test - Most database systems and programming languages allow numeric items to be identified as integers or long integers.  Normally, a 16-bit integer has a range of -32,768 to 32,767 and long integers can range from -2,147,483,648 to 2,147,483,647.  For numeric data entry that does not have specified bounds, work with these limits to ensure that it does not get a numeric overflow error.

  7. Date Bounds Test - For date fields, it is important to test for lower and upper bounds. For example, if you are checking a birth date field, it is probably a good bet that the person's birth date is no older than 150 years ago.  Likewise, their birth date should not be a date in the future.

  8. Date Validity - For date fields, it is important to ensure that invalid dates are not allowed (04/31/2007 is an invalid date).  Your test cases should also check for leap years (a year divisible by 4 is a leap year, but century years are leap years only when divisible by 400).

  9. Web Session Testing - Many web applications rely on the browser session to keep track of the person logged in, settings for the application, etc.  Most screens in a web application are not designed to be launched without first logging in.   Create test cases to launch web pages within the application without first logging in.  The web application should ensure it has a valid logged in session before rendering pages within the application.

  10. Performance Changes - As you release new versions of your product, you should have a set of performance tests that you run that identify the speed of your screens (screens that list information, screens that add/update/delete data, etc).   Your test suite should include test cases that compare the prior release performance statistics to the current release.  This can aid in identifying potential performance problems that will be manifested with code changes to the current release.
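The date bounds and date validity cases above can be sketched together; `accept_birth_date` and the 150-year cutoff are illustrative assumptions, and the standard library's `date` constructor does the calendar validation:

```python
# Reject invalid calendar dates, future dates, and implausibly old dates.
from datetime import date

def accept_birth_date(year, month, day):
    try:
        candidate = date(year, month, day)  # rejects 04/31, bad leap days
    except ValueError:
        return False
    today = date.today()
    if candidate > today:                   # no future birth dates
        return False
    return today.year - candidate.year <= 150  # no one older than ~150 years

assert accept_birth_date(1990, 4, 30) is True
assert accept_birth_date(2007, 4, 31) is False  # invalid calendar date
assert accept_birth_date(1900, 2, 29) is False  # 1900 was not a leap year
assert accept_birth_date(3000, 1, 1) is False   # future date
```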