Tuesday, December 21, 2010

What makes a good Software QA engineer?


* The same qualities a good tester has are useful for a QA engineer. Additionally, they must be able to understand the entire software development process and how it can fit into the business approach and goals of the organization. Communication skills and the ability to understand various sides of issues are important. In organizations in the early stages of implementing QA processes, patience and diplomacy are especially needed. An ability to find problems as well as to see 'what's missing' is important for inspections and reviews.

Testing:
* An examination of the behavior of a program by executing it on sample data sets.
* Testing comprises a set of activities to detect defects in the produced software.

Why Testing?
* To unearth and correct defects.
* To detect defects early and to reduce the cost of fixing them.
* To ensure that the product works as the user expects it to.
* To avoid users detecting problems.

Test Life Cycle
* Identify Test Candidates
* Test Plan
* Design Test Cases
* Execute Tests
* Evaluate Results
* Document Test Results
* Causal Analysis / Preparation of Validation Reports
* Regression Testing / Follow-up on reported bugs.

Testing Techniques
* Black Box Testing
* White Box Testing
* Regression Testing
* These principles & techniques can be applied to any type of testing.

Black Box Testing
* Testing of a function without knowledge of the internal structure of the program.
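* A minimal sketch of the idea, assuming a hypothetical word_count function whose specification says it returns the number of whitespace-separated words; the test cases are derived only from that specification, never from the code:

def word_count(text):
    # Implementation shown only so the example runs; a black box tester would not read it.
    return len(text.split())

def test_word_count_black_box():
    # Cases come straight from the specification: inputs and expected outputs.
    assert word_count("") == 0
    assert word_count("one") == 1
    assert word_count("two  words") == 2

test_word_count_black_box()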

White Box Testing
* Testing of a function with knowledge of the internal structure of the program.
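* A minimal sketch, assuming a hypothetical shipping_fee function; because the tester can see the two branches in the code, one test case is chosen to exercise each branch:

def shipping_fee(order_total):
    # Implementation visible to the tester: two branches.
    if order_total >= 100:
        return 0      # free-shipping branch
    return 10         # flat-fee branch

def test_shipping_fee_white_box():
    assert shipping_fee(150) == 0    # exercises the order_total >= 100 branch
    assert shipping_fee(50) == 10    # exercises the else branch

test_shipping_fee_white_box()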

Regression Testing
* To ensure that code changes have not had an adverse effect on other modules or on existing functionality.
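* A minimal sketch of the idea, assuming a hypothetical apply_discount function: the assertions pin behavior that was correct before the change, so re-running them after every change catches an accidental break:

def apply_discount(price, percent):
    # Existing, already-shipped behavior.
    return round(price * (1 - percent / 100.0), 2)

def test_apply_discount_regression():
    # Pinned expectations; a later code change that alters them fails here.
    assert apply_discount(200.0, 10) == 180.0
    assert apply_discount(99.99, 0) == 99.99

test_apply_discount_regression()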

Functional Testing
* Study SRS
* Identify Unit Functions
* For each unit function
* - Take each input
* - Identify Equivalence classes
* - Form Test cases
* - Form Test cases for boundary values
* - Form Test cases for Error Guessing
* Form Unit Function vs. Test Cases Cross-Reference Matrix
* Find the coverage (see the sketch below)
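* A minimal sketch of these steps, assuming a hypothetical unit function is_valid_age whose specification accepts integers from 18 to 60 inclusive:

def is_valid_age(age):
    # Hypothetical unit function: the spec says ages 18-60 inclusive are valid.
    return isinstance(age, int) and 18 <= age <= 60

# Equivalence classes: below range, in range, above range, wrong type.
equivalence_cases = [(5, False), (30, True), (75, False), ("thirty", False)]

# Boundary values: just outside and just inside each edge of the range.
boundary_cases = [(17, False), (18, True), (60, True), (61, False)]

# Error guessing: inputs a tester suspects might slip through.
error_guess_cases = [(-1, False), (0, False), (None, False)]

for age, expected in equivalence_cases + boundary_cases + error_guess_cases:
    assert is_valid_age(age) == expected, f"failed for input {age!r}"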

Unit Testing:
* The most 'micro' scale of testing, used to test particular functions or code modules. Typically done by the programmer and not by testers.
* Unit - smallest testable piece of software.
* A unit can be compiled, assembled, linked, and loaded, and put under a test harness.
* Unit testing is done to try to show that the unit does not satisfy its functional specification and/or that its implemented structure does not match the intended design structure.
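* A minimal test-harness sketch using Python's standard unittest module; the add_tax function is hypothetical:

import unittest

def add_tax(amount, rate=0.18):
    # Hypothetical unit under test: adds tax at the given rate.
    return round(amount * (1 + rate), 2)

class AddTaxTest(unittest.TestCase):
    # The harness exercises the unit in isolation from the rest of the system.
    def test_default_rate(self):
        self.assertEqual(add_tax(100.0), 118.0)

    def test_zero_amount(self):
        self.assertEqual(add_tax(0.0), 0.0)

if __name__ == "__main__":
    unittest.main()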

Integration Testing:
* Integration is a systematic approach to building the complete software structure specified in the design from unit-tested modules. Integration is performed in two ways, called Pre-test and Pro-test.
* Pre-test: the testing performed in the module development area is called Pre-test. Pre-test is required only if development is done in a module development area.
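* A minimal sketch, assuming two hypothetical unit-tested modules (an order-line parser and a pricing function); the integration test exercises the path through both together:

def parse_order(line):
    # Unit-tested module 1: "item,quantity,unit_price" -> dict.
    item, qty, price = line.split(",")
    return {"item": item, "qty": int(qty), "price": float(price)}

def order_total(order):
    # Unit-tested module 2: totals a parsed order.
    return order["qty"] * order["price"]

def test_parse_and_total_integration():
    # Integration: the output of one module feeds the other.
    order = parse_order("widget,3,2.50")
    assert order_total(order) == 7.5

test_parse_and_total_integration()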

Alpha testing:
* Testing of an application when development is nearing completion; minor design changes may still be made as a result of such testing. Typically done by end-users or others, not by programmers or testers.

Beta testing:
* Testing when development and testing are essentially completed and final bugs and problems need to be found before final release. Typically done by end-users or others, not by programmers.

System Testing:
* A system is the complete, integrated application, the largest component under test.
* System testing is aimed at revealing bugs that cannot be attributed to a single component as such, but rather to inconsistencies between components or to their planned interactions.
* Concern: issues, behaviors that can only be exposed by testing the entire integrated system (e.g., performance, security, recovery).

Volume Testing:
* The purpose of Volume Testing is to find weaknesses in the system with respect to its handling of large amounts of data during short time periods. For example, this kind of testing ensures that the system will process data across physical and logical boundaries such as across servers and across disk partitions on one server.
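* A rough sketch of the idea using the standard sqlite3 module; the table and the row count are arbitrary assumptions for illustration:

import sqlite3
import time

# Volume sketch: push a large batch of rows through in a short period
# and confirm the system still behaves correctly.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE events (id INTEGER PRIMARY KEY, payload TEXT)")

rows = [(i, f"payload-{i}") for i in range(500_000)]  # arbitrary large volume
start = time.time()
conn.executemany("INSERT INTO events VALUES (?, ?)", rows)
conn.commit()
elapsed = time.time() - start

count = conn.execute("SELECT COUNT(*) FROM events").fetchone()[0]
assert count == len(rows)
print(f"inserted {count} rows in {elapsed:.2f}s")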

Stress testing:
* This refers to testing system functionality while the system is under unusually heavy or peak load; it's similar to the validation testing mentioned previously but is carried out in a "high-stress" environment. This requires that you make some predictions about expected load levels of your Web site.
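* A rough sketch of the idea; the URL and request count are placeholder assumptions, and a real load test would normally use a dedicated tool:

import concurrent.futures
import urllib.request

URL = "http://localhost:8000/health"   # placeholder endpoint, assumed for illustration
REQUESTS = 200                         # arbitrary "peak load" for the sketch

def hit(_):
    try:
        with urllib.request.urlopen(URL, timeout=5) as resp:
            return resp.status == 200
    except Exception:
        return False

# Fire many requests concurrently and observe how many succeed under load.
with concurrent.futures.ThreadPoolExecutor(max_workers=50) as pool:
    results = list(pool.map(hit, range(REQUESTS)))

print(f"{sum(results)}/{REQUESTS} requests succeeded under load")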

Usability testing:
* Usability means that systems are easy and fast to learn, efficient to use, easy to remember, cause no operating errors, and offer a high degree of satisfaction for the user. Usability means bringing the usage perspective, the user's side of the system, into focus.

Security testing:
* If your site requires firewalls, encryption, user authentication, financial transactions, or access to databases with sensitive data, you may need to test these and also test your site's overall protection against unauthorized internal or external access.
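* One small example of the kind of check involved; the protected endpoint below is hypothetical:

import urllib.error
import urllib.request

PROTECTED_URL = "http://localhost:8000/admin/users"  # hypothetical protected endpoint

def test_unauthenticated_access_is_rejected():
    # A request carrying no credentials should be refused, not served.
    try:
        urllib.request.urlopen(PROTECTED_URL, timeout=5)
    except urllib.error.HTTPError as err:
        assert err.code in (401, 403)
    else:
        raise AssertionError("protected resource was served without authentication")

test_unauthenticated_access_is_rejected()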

Test Plan:
* A Test Plan is a detailed project plan for testing, covering the scope of testing, the methodology to be used, the tasks to be performed, resources, schedules, risks, and dependencies. A Test Plan is developed prior to the implementation of a project to provide a well defined and understood project roadmap.

Test Specification:
* A Test Specification defines exactly what tests will be performed and what their scope and objectives will be. A Test Specification is produced as the first step in implementing a Test Plan, prior to the onset of manual testing and/or automated test suite development. It provides a repeatable, comprehensive definition of a testing campaign.
