Monday, December 20, 2010

Different types of testing

Performance Testing: Performance testing ensures that the application responds within the time limits set by the user.  If this is needed, the client must supply the benchmarks to measure against, and we must have a hardware environment that mirrors production.
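As a minimal sketch, a response-time check might look like this in Python (the 2-second benchmark and the fetch_report() operation are made-up placeholders; real benchmarks must come from the client):

    import time

    def fetch_report():
        # Placeholder for the operation under test (hypothetical).
        time.sleep(0.5)

    def test_response_time():
        # Measure wall-clock time and compare against the
        # client-supplied benchmark (assumed to be 2 seconds here).
        start = time.perf_counter()
        fetch_report()
        elapsed = time.perf_counter() - start
        assert elapsed < 2.0, "Response took %.2fs, benchmark is 2s" % elapsed

    test_response_time()
    print("Performance check passed")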
Windows / Internet GUI Standards: This testing is used to ensure that the application has a standardized look and feel.  It may be as simple as ensuring that accelerator keys work properly and that font type and size are consistent, or it could be as exhaustive as ensuring that the application could be assigned a Windows logo if submitted for one (there are strict guidelines for this).  Note: If this level of testing is needed, the client must provide their standards so that we can compare against them.
Platform Testing: Platform testing is used to verify that the application will run on multiple platforms (Win 95/98, Win NT, IE 4.0, IE 5.0, Netscape, etc.).
Localization: Localization testing is done to guarantee that the application will work properly in different languages (e.g. Win 95/98 English, German, Spanish, etc.).  This also involves ensuring that dates work in dd/mm/yy format for the UK.
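For example, a small sketch of a date-format check, assuming hypothetical per-locale format strings (a real test would use the locale data the application actually ships with):

    from datetime import date

    # Hypothetical per-locale date formats the application is assumed to use.
    FORMATS = {"en_US": "%m/%d/%y", "en_GB": "%d/%m/%y"}

    def format_for(locale_code, d):
        return d.strftime(FORMATS[locale_code])

    def test_uk_date_format():
        d = date(2010, 12, 20)
        # The UK expects day first: 20/12/10, not 12/20/10.
        assert format_for("en_GB", d) == "20/12/10"
        assert format_for("en_US", d) == "12/20/10"

    test_uk_date_format()
    print("Localization date checks passed")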
Conversion: Conversion testing is used to test any data that must be converted to ensure the application will work properly.  This could be conversion from a legacy system or changes needed for a new schema.
Parallel Testing:
Parallel testing is used to compare the functionality of the updated system against the functionality of the existing system. This is sometimes used to ensure that the changes did not corrupt existing functionality.
Regression of unchanged functionality:
If regression must occur for functional areas that are not being changed, specify the functional areas to regress and the level of regression needed (positive only or positive and negative testing).
Automated Testing:
Automated testing can be used to automate regression and functional testing.  This can be very helpful if the system is stable and does not change often. If the application is a new development project, automated testing generally does not pay big dividends.
Installation Testing: Installation testing exercises the setup routine to ensure that the product can be installed fresh, over an existing copy, and alongside other products. This will test different versions of OCXs and DLLs.
End to End / Interface Testing: End to end testing covers all inputs (super-systems) and outputs (sub-systems) along with the application itself.  A controlled set of transactions is used, and the test data is published prior to the test along with the expected results. This testing ensures that the application will interact properly with the other systems.
Security Testing: Security testing is performed to guarantee that only users with the appropriate authority are able to use the applicable features of the system.
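As a rough illustration, a role-based access check might be sketched like this (the permission table, can_access() helper, and feature names are all hypothetical):

    # Hypothetical permission table: feature -> roles allowed to use it.
    PERMISSIONS = {"view_report": {"user", "admin"}, "delete_account": {"admin"}}

    def can_access(role, feature):
        return role in PERMISSIONS.get(feature, set())

    def test_authorization():
        # An ordinary user may view reports but must not delete accounts.
        assert can_access("user", "view_report")
        assert not can_access("user", "delete_account")
        # An admin may do both.
        assert can_access("admin", "delete_account")

    test_authorization()
    print("Security checks passed")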

Network Testing: Network testing is done to determine how the application behaves when different amounts of network latency are applied.  It can uncover possible problems with slow network links, etc.

Exploratory Testing
Often taken to mean a creative, informal software test that is not based on formal test plans or test cases; testers may be learning the software as they test it.
Benefits Realization tests
With the increased focus on the value of business returns obtained from investments in information technology, this type of test or analysis is becoming more critical. The benefits realization test is a test or analysis conducted after an application is moved into production, in order to determine whether the application is likely to deliver the originally projected benefits. The analysis is usually conducted by the business user or client group who requested the project, and the results are reported back to executive management.

Mutation Testing
Mutation testing is a method for determining if a set of test data or test cases is useful, by deliberately introducing various code changes (‘bugs’) and retesting with the original test data/cases to determine if the ‘bugs’ are detected. Proper implementation requires large computational resources
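A toy illustration of the idea follows; the mutant is written by hand here, whereas real mutation tools inject such changes automatically:

    def absolute(x):
        # Original implementation.
        return x if x >= 0 else -x

    def absolute_mutant(x):
        # Deliberately injected 'bug': the comparison operator is mutated.
        return x if x > 0 else -x

    def run_tests(fn):
        # A weak test set: it never exercises x == 0, so the mutant survives.
        cases = [(5, 5), (-3, 3)]
        return all(fn(x) == expected for x, expected in cases)

    assert run_tests(absolute)
    if run_tests(absolute_mutant):
        print("Mutant survived: the test data misses the x == 0 boundary")

Adding a test case for x == 0 would 'kill' this mutant, which is exactly the feedback mutation testing is meant to provide.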
Sanity Testing
Typically an initial testing effort to determine if a new software version is performing well enough to accept it for a major testing effort. For example, if the new software is crashing systems every 5 minutes, bogging down systems to a crawl, or destroying databases, the software may not be in a ‘sane’ enough condition to warrant further testing in its current state.
Build Acceptance Tests
Build Acceptance Tests should take less than 2-3 hours to complete (15 minutes is typical). These test cases simply ensure that the application can be built and installed successfully. Other related test cases ensure that Testing received the proper Development Release Document plus other build-related information (drop point, etc.). The objective is to determine if further testing is possible. If any Level 1 test case fails, the build is returned to the developers untested.
Smoke Tests
Smoke Tests should be automated and take less than 2-3 hours (20 minutes is typical). These test cases verify the major functionality at a high level. The objective is to determine if further testing is possible. These test cases should emphasize breadth more than depth: all components should be touched, and every major feature should be tested briefly by the smoke test. If any Level 2 test case fails, the build is returned to the developers untested.
Bug Regression Testing
Every bug that was “Open” during the previous build, but marked as “Fixed, Needs Re-Testing” for the current build under test, will need to be regressed, or re-tested. Once the smoke test is completed, all resolved bugs need to be regressed. It should take between 5 minutes and 1 hour to regress most bugs.
Database Testing
Database testing is done manually in real time; it checks the data flow between the front end and the back end, observing whether operations performed on the front end take effect on the back end.
The approach is as follows:
When a record is added through the front end, check the back end to confirm that the addition took effect; do the same for delete and update operations. Other database testing includes checking for mandatory fields, checking the constraints and rules applied to the tables, and sometimes checking stored procedures using SQL Query Analyzer.
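A minimal sketch of this approach using Python's built-in sqlite3 module (the customer table and the add_customer() front-end stand-in are hypothetical; in practice the checks run against the application's real back end):

    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE customer (id INTEGER PRIMARY KEY, name TEXT NOT NULL)")

    def add_customer(name):
        # Stands in for the front-end 'add record' operation (hypothetical).
        conn.execute("INSERT INTO customer (name) VALUES (?)", (name,))
        conn.commit()

    # Front-end action, then a back-end query to confirm the record took effect.
    add_customer("Alice")
    count = conn.execute("SELECT COUNT(*) FROM customer WHERE name = 'Alice'").fetchone()[0]
    assert count == 1, "Record added on the front end was not found on the back end"

    # Constraint check: the mandatory field (name) must be enforced by the table.
    try:
        conn.execute("INSERT INTO customer (name) VALUES (NULL)")
        raise AssertionError("NOT NULL constraint was not enforced")
    except sqlite3.IntegrityError:
        print("Database checks passed")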
Functional Testing (or) Business functional testing
All the functions in the application should be tested against the requirements document to ensure that the product conforms to what was specified (i.e. meets the functional requirements). This verifies that the crucial business functions are working in the application. Business functions are generally defined in the requirements document. Each business function has certain rules that cannot be broken, whether they apply to the user interface behavior or to the data behind the application; both levels need to be verified. Business functions may span several windows or several menu options, so simply testing that all windows and menus can be used is not enough to verify the business functions. You must verify the business functions as discrete units of your testing. A typical approach is outlined below, with a short sketch of the equivalence-class and boundary-value steps following the list:
* Study the SRS
* Identify unit functions
* For each unit function:
* Take each input to the function
* Identify equivalence classes
* Form test cases
* Form test cases for boundary values
* Form test cases for error guessing
* Form a unit function vs. test cases cross-reference matrix
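To make the equivalence-class and boundary-value steps concrete, here is a small sketch for a hypothetical unit function that accepts ages 18 through 65:

    def accepts_age(age):
        # Hypothetical unit function: valid ages are 18 through 65 inclusive.
        return 18 <= age <= 65

    # One representative per equivalence class: below range, in range, above range.
    equivalence_cases = [(10, False), (40, True), (70, False)]
    # Boundary values and their off-by-one neighbours.
    boundary_cases = [(17, False), (18, True), (65, True), (66, False)]

    for value, expected in equivalence_cases + boundary_cases:
        assert accepts_age(value) == expected, "Failed for age %d" % value
    print("Equivalence and boundary cases passed")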
User Interface Testing (or) structural testing
It verifies that all the objects of the user interface design specification are met. It examines the spelling of button text, window titles, and labels; checks for consistency or duplication of accelerator-key letters; and examines the positions and alignment of window objects.
Volume Testing
Testing the application with voluminous amounts of data to see whether it produces the anticipated results (boundary value analysis).
Stress Testing: Stress testing is done to ensure that the application will respond appropriately with many users and activities happening simultaneously.  If this is needed, the number of users must be agreed upon beforehand, and the hardware environment for the system test must mirror production.
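A bare-bones sketch of simulating simultaneous users with Python threads (the 50-user count and the do_transaction() body are assumptions; real stress tests use the agreed-upon user numbers on production-mirroring hardware):

    import threading

    errors = []

    def do_transaction(user_id):
        # Placeholder for one simulated user's activity (hypothetical).
        try:
            total = sum(range(10000))  # stand-in for real work
            assert total >= 0
        except Exception as exc:
            errors.append((user_id, exc))

    # Launch 50 simulated users at once and wait for them all to finish.
    threads = [threading.Thread(target=do_transaction, args=(i,)) for i in range(50)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()

    assert not errors, "Failures under concurrent load: %r" % errors
    print("Stress run completed with no errors")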
Load Testing
It verifies the performance of the server under the stress of many clients requesting data at the same time.
Installation testing
The tester should install the system to determine whether the installation process is viable, based on the installation guide.
Configuration Testing
The system should be tested to determine whether it works correctly with the appropriate software and hardware configurations.
Compatibility Testing
The system should be tested to determine whether it is compatible with the other systems (applications) it needs to interface with.
Documentation Testing
It is performed to verify the accuracy and completeness of user documentation.
1. This testing is done to verify whether the documented functionality matches the software functionality.
2. It checks that the documentation is easy to follow, comprehensive, and well edited.
If the application under test has context-sensitive help, it must be verified as part of documentation testing.
Recovery/Error Testing
Testing how well a system recovers from crashes, hardware failures, or other catastrophic problems
Comparison Testing
Testing that compares software weaknesses and strengths to competing products
Acceptance Testing
Acceptance testing, which is black box testing, gives the client the opportunity to verify the system's functionality and usability before the system is moved to production. The acceptance test is the responsibility of the client; however, it is conducted with full support from the project team. The Test Team will work with the client to develop the acceptance criteria.
Alpha Testing
Testing of an application when development is nearing completion; minor design changes may still be made as a result of such testing. Alpha testing is typically performed by end-users or others, not by programmers or testers.
Beta Testing
Testing done when development and testing are essentially complete and the remaining bugs and problems need to be found before the final release. Beta testing is typically done by end-users or others, not by programmers or testers.
Regression Testing
The objective of regression testing is to ensure the software remains intact. A baseline set of data and scripts will be maintained and executed to verify that changes introduced during the release have not “undone” any previous code. Expected results from the baseline are compared to the results of the software being regression tested. All discrepancies will be highlighted and accounted for before testing proceeds to the next level.
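As a sketch of the baseline-comparison idea (the compute_totals() function and the baseline values are hypothetical; in practice the baseline comes from a known-good release):

    # Baseline results captured from the previous known-good release (assumed).
    BASELINE = {"order_total": 150.0, "tax": 12.0}

    def compute_totals():
        # Stands in for the function under regression test (hypothetical).
        return {"order_total": 150.0, "tax": 12.0}

    def test_regression():
        current = compute_totals()
        # Any discrepancy against the baseline must be highlighted and explained.
        diffs = {k: (BASELINE[k], current.get(k))
                 for k in BASELINE if current.get(k) != BASELINE[k]}
        assert not diffs, "Discrepancies vs. baseline: %r" % diffs

    test_regression()
    print("No regressions against the baseline")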
Incremental Integration Testing
Continuous testing of an application as new functionality is added. This may require that various aspects of an application’s functionality be independent enough to work separately before all parts of the program are completed, or that test drivers be developed as needed. This type of testing may be performed by programmers or by testers.
Usability: Usability testing ensures that the application is easy to work with, limits keystrokes, and is easy to understand.  The best way to perform this testing is to bring in experienced, intermediate, and novice users and solicit their input on the usability of the application.
Integration Testing
Upon completion of unit testing, integration testing, which is black box testing, will begin. The purpose is to ensure that distinct components of the application still work in accordance with customer requirements. Test sets will be developed with the express purpose of exercising the interfaces between the components. This activity is to be carried out by the Test Team. Integration testing will be termed complete when actual results and expected results are either in line or the differences are explainable/acceptable based on client input.
System Testing
Upon completion of integration testing, the Test Team will begin system testing. During system testing, which is a black box test, the complete system is configured in a controlled environment to validate its accuracy and completeness in performing the functions as designed. The system test will simulate production in that it will occur in the “production-like” test environment and test all of the functions of the system that will be required in production. The Test Team will complete the system test. Prior to the system test, the unit and integration test results will be reviewed by SQA to ensure all problems have been resolved. It is important for higher level testing efforts to understand unresolved problems from the lower testing levels. System testing is deemed complete when actual results and expected results are either in line or differences are explainable/acceptable based on client input
Parallel/Audit Testing
Testing in which the user reconciles the output of the new system to the output of the current system to verify that the new system operates correctly.
User’s Guide / Training Guides: This testing is done to ensure that the user, help and training guides are accurate and easy to use.
Guerrilla Testing: Guerrilla testing exercises the system with unstructured scenarios to ensure that it responds appropriately.  To accomplish this, you may ask someone to perform a function without telling them the steps for doing it.
Hardware Testing: Hardware testing involves testing with a bad disk drive, faulty network cards, etc.  If this type of testing is desired, be very specific about what is in scope for this testing.

Duplicate instances of application: This testing is done to determine whether bringing up multiple copies of the same application will cause blocking or other problems.
Year 2000: This testing is performed to ensure that the application will work in the year 2000 and beyond.
Temporal Testing: Temporal testing is done to guarantee that date-centric problems do not occur with the application.  For example, if many bills are created quarterly, you may want to set the server date to a quarter-end to test this date-centric event.
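One way to sketch this without changing the server date is to inject the clock into the code under test (the billing routine below is hypothetical):

    from datetime import date

    def is_quarter_end(today=None):
        # The 'today' parameter lets tests inject a date instead of the real clock.
        today = today or date.today()
        return (today.month, today.day) in [(3, 31), (6, 30), (9, 30), (12, 31)]

    def generate_quarterly_bills(today=None):
        # Hypothetical billing routine that only runs at quarter-end.
        return "bills created" if is_quarter_end(today) else "nothing to do"

    # Simulate a quarter-end and an ordinary day.
    assert generate_quarterly_bills(date(2010, 12, 31)) == "bills created"
    assert generate_quarterly_bills(date(2010, 12, 20)) == "nothing to do"
    print("Temporal checks passed")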
Disaster Recovery (Backup / Restore):
This testing is done to aid production support in ensuring that adequate procedures are in place for restoring the system after a disaster.
Input and Boundary Tests:
Testing designed to guarantee that the system accepts only valid input. This includes testing that the maximum number of characters for a field cannot be exceeded, boundary conditions such as valid ranges, “off-by-one”, “null”, “max”, and “min” values, tab order from field to field on the screen, etc.
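As a small sketch, off-by-one and null checks for a hypothetical field limited to 10 characters:

    MAX_LEN = 10  # assumed field limit

    def accepts_input(text):
        # Hypothetical validator: non-empty, at most MAX_LEN characters.
        return text is not None and 0 < len(text) <= MAX_LEN

    # Boundary conditions: empty, one char, at the limit, one past the limit, null.
    cases = [("", False), ("a", True), ("a" * 10, True), ("a" * 11, False), (None, False)]
    for value, expected in cases:
        assert accepts_input(value) == expected, "Failed for %r" % (value,)
    print("Input and boundary checks passed")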
Out of Memory Tests:
Testing designed to ensure that the application runs in the amount of memory specified in the technical documentation.  This testing will also detect memory leaks associated with starting and stopping the application many times.
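A rough leak-detection sketch using Python's standard-library tracemalloc module (the start_app()/stop_app() cycle is a hypothetical stand-in for starting and stopping the real application):

    import tracemalloc

    _cache = []

    def start_app():
        # Hypothetical app start; a leaky version would append and never clear.
        _cache.append(bytearray(1024))

    def stop_app():
        _cache.clear()  # a correct shutdown releases what start_app allocated

    tracemalloc.start()
    baseline, _ = tracemalloc.get_traced_memory()
    for _ in range(1000):
        start_app()
        stop_app()
    current, _ = tracemalloc.get_traced_memory()

    # After many start/stop cycles, memory use should not have grown much.
    growth = current - baseline
    assert growth < 100 * 1024, "Possible leak: grew %d bytes" % growth
    print("No significant growth after 1000 start/stop cycles")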
