Friday, May 27, 2011

Testing Your Web Apps

A Quick 10-Step Guide

Interested in a quick checklist for testing a web application? The following 10 steps cover the most critical items that I have found important in making sure a web application is ready to be deployed. Depending on size, complexity, and corporate policies, modify the following steps to meet your specific testing needs.

Step 1 - Objectives

Establish your testing objectives up front and make sure they are measurable. Written objectives that your whole team can understand and rally around will make your life a lot easier. In addition to documenting your objectives, make sure they are prioritized. Ask yourself questions like "What is most important: minimal defects or time-to-market?"

Here are two examples of how to determine priorities:

If you are building a medical web application that will assist in diagnosing illnesses, and someone could potentially die based on how correctly the application functions, you may want to make testing the correctness of the business functionality a higher priority than testing for navigational consistency throughout the application.

If you are testing an application that will be used to solicit external funding, you may want to put testing the aspects of the application that impact the visual appeal as the highest testing priority.

Your web application doesn't have to be perfect; it just needs to meet your intended customer's requirements and expectations.

Step 2 – Process and Reporting
Make sure that everyone on your testing team knows his or her role. Who should report what to whom and when? In other words, define your testing process. Use the following questions to help you get started:

* How will issues be reported?
* Who can assign issues?
* How will issues be categorized?
* Who needs what report and when do they need it?
* Are team meetings scheduled in advance or scheduled as needed?

You may define your testing process and reporting requirements formally or informally, depending on your particular needs. The main point to keep in mind is to organize your team in a way that supports your testing objectives and takes into account the individual personalities on your team. One size never fits all when dealing with people.

Step 3 - Tracking Results
Once you start executing your test plans, you will probably generate a large number of bugs, issues, defects, etc. You will want a way to easily store, organize, and distribute this information to the appropriate technical team members. You will also need a way to keep management informed on the status of your testing efforts. If your company already has a system in place to track this type of information, don't try to reinvent the wheel. Take advantage of what's already in place.

If your company doesn't already have something in place, spend a little time investigating some of the easy-to-set-up online issue-tracking systems. By using an online system, you can make it much easier on yourself by eliminating the need to install and maintain an off-the-shelf package.
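If you do end up rolling your own lightweight tracker, even a small structured record beats free-form notes. The sketch below is illustrative only (the field names and severity codes are my own, not from any particular tool), but it shows the minimum worth capturing for each defect:

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class Defect:
    """One tracked issue; fields are illustrative, not from any specific tool."""
    defect_id: int
    title: str
    severity: str          # e.g. "S1" (worst) .. "S4" (minor)
    status: str = "open"   # "open", "in-progress", "closed"
    reported: date = field(default_factory=date.today)

def open_defects(defects):
    """Return defects that still need attention, worst severity first."""
    return sorted(
        (d for d in defects if d.status != "closed"),
        key=lambda d: d.severity,
    )

bugs = [
    Defect(1, "Login page crashes on empty password", "S1"),
    Defect(2, "Typo on About page", "S4"),
    Defect(3, "Report totals off by one", "S2", status="closed"),
]

for d in open_defects(bugs):
    print(d.defect_id, d.severity, d.title)
```

Even this much structure makes it easy to generate the status reports management will ask for.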

Step 4 - Test Environment

Set up a test environment that is separate from your development and production environments. This includes a separate web server, database server, and application server if applicable. You may or may not be able to utilize existing computers to set up a separate test environment.

Create an explicitly defined procedure for moving code to and from your test environment and make sure the procedure is followed. Also, work with your development team to make sure each new version of source code to be tested is uniquely identified.
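One cheap way to enforce unique build identification is to refuse to test anything that isn't stamped. The sketch below assumes your build process writes a VERSION file into the deployment directory; that convention is my own illustration, so adapt it to however your team marks builds:

```python
# Sketch: verify that the build deployed to the test environment is the one
# you intend to test. Assumes the build process writes a VERSION file into
# the deployment directory -- an illustrative convention, adjust to your own.
from pathlib import Path
import tempfile

def verify_build(deploy_dir: str, expected_version: str) -> bool:
    version_file = Path(deploy_dir) / "VERSION"
    if not version_file.exists():
        raise FileNotFoundError(
            f"No VERSION file in {deploy_dir}; refusing to test an unidentified build"
        )
    return version_file.read_text().strip() == expected_version

# Example with a temporary directory standing in for the test server:
with tempfile.TemporaryDirectory() as d:
    (Path(d) / "VERSION").write_text("1.4.2\n")
    match = verify_build(d, "1.4.2")
    mismatch = verify_build(d, "1.5.0")

print(match, mismatch)   # True False
```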

Step 5 - Usability Testing

In usability testing, you'll be looking at aspects of your web application that affect the user's experience, such as:

* How easy is it to navigate through your web application?
* Is it obvious to the user which actions are available to him or her?
* Is the look-and-feel of your web application consistent from page to page, including font sizes and colors?

The book "Don't Make Me Think! A Common Sense Approach to Web Usability" by Steve Krug provides a practical approach to the topic of usability. I refer to it often, and recommend it highly.

In addition to the traditional navigation and look-and-feel issues, Section 508 compliance is another area of importance. The 1998 Amendment to Section 508 of the Rehabilitation Act spells out accessibility requirements for individuals with certain disabilities.

For instance, if a user forgets to fill in a required field, you might think it is a good idea to present the user with a friendly error message and change the color of the field label to red or some other conspicuous color. However, changing the color of the field label would not help a user who has difficulty distinguishing colors. The use of color may help most users, but you would want to add an additional visual cue, such as placing an asterisk beside the field in question or making the label text bold.

If you are working with the United States federal government, Section 508 compliance is not only good design, it most likely is a legal requirement. There are online resources that can analyze your HTML pages for Section 508 compliance, along with techniques for accessibility evaluation and repair tools.
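Some accessibility checks are easy to automate and run on every build. The sketch below flags `<img>` tags that are missing an `alt` attribute, one of the most common Section 508 failures; a real audit needs far more than this, but simple spot-checks like it cost almost nothing:

```python
# Minimal automated accessibility spot-check: flag <img> tags with no alt
# attribute. A sketch only -- not a substitute for a full Section 508 audit.
from html.parser import HTMLParser

class MissingAltChecker(HTMLParser):
    def __init__(self):
        super().__init__()
        self.missing = []   # src values of images with no alt text

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if tag == "img" and "alt" not in attrs:
            self.missing.append(attrs.get("src", "<no src>"))

page = """
<html><body>
  <img src="logo.png" alt="Company logo">
  <img src="chart.png">
</body></html>
"""

checker = MissingAltChecker()
checker.feed(page)
print(checker.missing)   # ['chart.png']
```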

Step 6 – Unit Testing

Unit testing is focused on verifying small portions of functionality. For example, an individual unit test case might focus on verifying that the correct data has been saved to the database when the Submit button on a particular page is clicked.

An important subset of unit testing that is often overlooked is range checking. That is, making sure all the fields that collect information from the user, can gracefully handle any value that is entered. Most people think of range checking as making sure that a numeric field only accepts numbers. In addition to traditional range checking make sure you also check for less common, but just as problematic exceptions. For example, what happens when a user enters his or her last name and the last name contains an apostrophe, such as O'Brien? Different combinations of databases and database drivers handle the apostrophe differently, sometimes with unexpected results. Proper unit testing will help rid your web application of obvious errors that your users should never have to encounter.
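The apostrophe problem above is exactly what parameterized queries prevent: the database driver, not string concatenation, handles the quoting. A minimal sketch using Python's built-in SQLite support:

```python
# Sketch: names containing apostrophes round-trip safely through a
# parameterized query. Never build SQL by pasting user input into a string.
import sqlite3

def save_last_name(conn, last_name):
    # The "?" placeholder lets the driver quote the value correctly.
    conn.execute("INSERT INTO users (last_name) VALUES (?)", (last_name,))

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (last_name TEXT)")

# Range-check style inputs: apostrophes and the empty string.
for name in ["O'Brien", "D'Angelo", ""]:
    save_last_name(conn, name)

stored = [row[0] for row in conn.execute("SELECT last_name FROM users")]
print(stored)
```

A unit test that feeds values like these through every input field will catch the driver-specific surprises before your users do.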

Step 7 - Verifying the HTML

Hyper Text Markup Language (HTML) is the computer language sent from your web server to the web browser on your users' computers to display the pages that make up your web application. The World Wide Web Consortium (W3C) manages the HTML specification. One major objective of HTML is to allow anyone, from anywhere, to access information on the World Wide Web. This generally holds true if you conform strictly to the version of the HTML specification that you choose to support. Unfortunately, in the real world, it is easy for a developer to inadvertently use a proprietary HTML tag that may not work for all of your intended users.

Verifying HTML is simple in concept but can be very time consuming in practice. A good place to start is the World Wide Web Consortium's free HTML Validation Service. There are also other online and downloadable applications that help in this area, such as NetMechanic. There are two main aspects of verifying the validity of your HTML. First, you want to make sure that your syntax is correct, such as verifying that all opening and closing tags match. Second, you want to verify how your pages look in different browsers, at different screen resolutions, and on different operating systems. Create a profile of your target audience and decide which browsers you will support, on which operating systems, and at what screen resolutions.
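Before sending pages off to a full validator, a quick local pre-check can catch the most obvious syntax problem: mismatched open/close tags. The sketch below is deliberately naive (it ignores valid-but-unusual nesting and only knows a few void elements), so treat it as a smoke test, not a substitute for the W3C validator:

```python
# Naive tag-balance pre-check: reports a close tag that doesn't match the
# most recently opened tag. A sketch only -- use a real validator for the
# authoritative answer.
from html.parser import HTMLParser

VOID = {"br", "img", "hr", "input", "meta", "link"}   # tags with no close

class TagBalanceChecker(HTMLParser):
    def __init__(self):
        super().__init__()
        self.stack = []    # currently open tags
        self.errors = []   # mismatches found so far

    def handle_starttag(self, tag, attrs):
        if tag not in VOID:
            self.stack.append(tag)

    def handle_endtag(self, tag):
        if self.stack and self.stack[-1] == tag:
            self.stack.pop()
        else:
            self.errors.append(f"unexpected </{tag}>")

checker = TagBalanceChecker()
# </b> and </p> are crossed here, so the checker complains:
checker.feed("<html><body><p>Hello<b>world</p></b></body></html>")
print(checker.errors)
```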

In general, the later versions of Microsoft Internet Explorer are very forgiving. If your development team has only been using Internet Explorer 5.5 on high-resolution monitors, you may be unpleasantly surprised when you see your web application on a typical user's computer. The sooner you start verifying your HTML, the better off your web application will be.

Step 8 - Load Testing

In performing load testing, you want to simulate how users will use your web application in the real world. The earlier you perform load testing the better. Simple design changes can often make a significant impact on the performance and scalability of your web application. A good overview of how to perform load testing can be found on the Microsoft Developer Network (MSDN) website.
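At its core, load testing means issuing many concurrent requests and measuring the distribution of response times. The sketch below uses a stand-in handler so it runs anywhere; in practice you would replace `simulated_request` with a real HTTP call (or use a dedicated load-testing tool) against your test environment, never production:

```python
# Load-test skeleton: fire concurrent "requests" and report latency
# percentiles. simulated_request is a stand-in -- swap in a real HTTP call.
import time
import random
from concurrent.futures import ThreadPoolExecutor

def simulated_request(_):
    """Stand-in for one HTTP request; returns elapsed time in seconds."""
    start = time.perf_counter()
    time.sleep(random.uniform(0.01, 0.05))   # pretend network + server time
    return time.perf_counter() - start

# 20 concurrent "users" issuing 100 requests in total.
with ThreadPoolExecutor(max_workers=20) as pool:
    latencies = list(pool.map(simulated_request, range(100)))

latencies.sort()
print(f"median: {latencies[50] * 1000:.0f} ms")
print(f"95th percentile: {latencies[94] * 1000:.0f} ms")
```

Percentiles matter more than averages here: a fine median can hide a 95th percentile your slowest users will definitely notice.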

A topic closely related to load testing is performance tuning. Performance tuning should be tightly integrated with the design of your application. If you are using Microsoft technology, MSDN is also a great resource for understanding the specifics of tuning a web application.

People hate to wait for a web page to load. As a general rule, try to make sure that all of your pages load in 15 seconds or less. This rule will of course depend on your particular application and the expectations of the people using it.

Step 9 - User Acceptance Testing

By performing user acceptance testing, you are making sure your web application fits the use for which it was intended. Simply stated, you are making sure your web application makes things easier for the user, not harder. One effective way to handle user acceptance testing is to set up a beta test for your web application.

Step 10 - Testing Security

With the large number of highly skilled hackers in the world, security should be a huge concern for anyone building a web application. You need to test how secure your web application is from both external and internal threats. The security of your web application should be planned for and verified by qualified security specialists.

Some additional online resources to help you stay up to date on the latest Internet security issues include:

CERT Coordination Center

Computer Security Resource Center

After performing your initial security testing, make sure to also perform ongoing security audits to ensure your web application remains secure over time as people and technology change.

Testing a web application can be a totally overwhelming task. The best advice I can give you is to keep prioritizing and focusing on the most important aspects of your application and don't forget to solicit help from your fellow team members.

By following the steps above, coupled with your own expertise and knowledge, you will have a web application you can be proud of and that your users will love. You will also be giving your company the opportunity to deploy a web application that could become a runaway success and possibly make tons of money, save millions of lives, or slash customer support costs in half. Even better, because of your awesome web application, you may get profiled on CNN, and the killer job offers will start flooding in.

Proper testing is an integral part of creating a positive user experience, which can translate into the ultimate success of your web application. Even if your web application doesn't get featured on CNN, CNBC, or Fox News, you can take great satisfaction in knowing how you and your team's diligent testing efforts made all the difference in your successful deployment.

Tuesday, May 24, 2011

Accuracy of Quality Assurance

Quality Assurance and Control: Why Accuracy is Vital

Competition is intense in today’s business and industrial arenas. When products and services vie heavily for market shares, the ability to deliver quality products that stand up to their promises is often a determining factor when a customer chooses a vendor. The accuracy of that vendor’s quality assurance process is key to gaining or expanding footholds in market shares.

Quality Assurance (QA) can be defined as “the inspection and testing process a business conducts to verify manufacturing, assembly, or performance standards for a product or service in accordance with business, local, industry, or government guidelines.”

Quality Control (QC) often goes hand-in-hand with Quality Assurance. QC can be defined as “Steps, checks, and balances to detect, repair, and improve upon errors or defects in a manufactured product or a service.”

A QC/QA inspection process can be as simple as checking a document for typographical, grammatical, or punctuation errors. It can be as complex as disassembling a large-capacity airliner, inspecting every part, then reassembling it, testing performance as each step is completed.

Typical QC/QA Process Steps
The most popular Quality Assurance process is called the Shewhart Cycle, developed by Walter A. Shewhart and popularized by Dr. W. Edwards Deming. The cycle outlines a process that is not tailored to any specific business or industry.

Commonly abbreviated as PDCA, the Shewhart Cycle comprises four steps:
1. Plan. Outline specific and detailed inspection steps and expected results. Explain corrective actions at each step, should a product or service fail to pass inspection. Include the percentage of each product category to undergo Quality Assurance testing.
2. Do. Initiate and complete each action step within the Quality Assurance process.
3. Check. Using prequalified standards, monitor, test, and compare products for known quality points and potential defects or shortcomings.
4. Act. Adjust product generation processes to improve and correct products to meet or exceed expectations or requirements.

Above all, test and retest often.
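The four PDCA steps map naturally onto a loop: sample the output (Do), compare the defect rate against the planned threshold (Check), and trigger corrective action when it misses (Act). The sketch below makes that loop concrete; the threshold and batch sizes are illustrative only:

```python
# PDCA as a loop. Plan: set max_defect_rate. Do: inspect each sampled batch.
# Check: compare the batch's defect rate against the plan. Act: count the
# batches that need corrective action. Numbers are illustrative only.
def pdca_cycle(batches, max_defect_rate=0.02):
    adjustments = 0
    for batch in batches:                  # Do: inspect each batch
        rate = sum(batch) / len(batch)     # Check: a 1 marks a defective unit
        if rate > max_defect_rate:         # Act: this batch missed the plan
            adjustments += 1
    return adjustments

# Three batches of 100 units; a 1 marks a defective unit.
batches = [
    [0] * 99 + [1],        # 1% defective -- passes
    [0] * 95 + [1] * 5,    # 5% defective -- triggers corrective action
    [0] * 100,             # 0% defective -- passes
]
print(pdca_cycle(batches))   # 1
```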

The Shewhart Cycle versus ISO 9000
PDCA is a process to ensure accuracy in quality assurance and is conducted within the business structure. ISO 9000 is an international standard that ensures a company’s Quality Assurance plans and procedures are in place and effective.

Quality Often Lies in the Details
Focusing on details will enable thorough and accurate quality assurance processes. QC/QA in a manufacturing process could easily include not only the final product but the raw materials used in creating a product.
· Is the ordered material of the correct chemical composition? Is the steel strong and dense enough? Is the liquid too acidic, too sweet, too watery?
· Is the board precisely the correct length, or is it slightly off the required dimension? Is the glass too thick or too thin? Is it strong enough? Is it made from the right wood?
· Is the color or shade correct? Is it of the correct dye lot?
· Is that bolt under the sealed covering torqued to specifications? Is it too tight or too loose? Is it the right type, length, or width? Is it the color specified or does it matter?
· Is the switch high enough or low enough? Is it rheostat control or levered?

Details Beyond Construction
Accurate Quality Control and Quality Assurance processes offer more to businesses and manufacturers than product quality. QC and QA procedures test product generation efficiency and cost analysis data, as well.
· Can efficiency of manufacturing of Widget X increase if the business moves the tooling machinery? How quickly will the move pay for itself?
· Can less costly materials be used and still maintain quality standards? Is material the cost-cutting tool, or is the vendor?
· Can Shipping and/or Receiving Departments’ procedural changes assist in Quality Assurance improvements?
· What faster manufacturing process exists?
· Will better lighting help?
· Will additional personnel be worthwhile?
· Will fewer personnel be worthwhile?
· Would more automation improve product quality?
· Are employees encouraged to forward ideas for QA improvement? Are they rewarded or recognized for it?

Costs of Inaccuracy
Inattentive inspections or overlooked processes can cause product errors and faults, slowdowns or stoppages of the manufacturing process, and even injuries. When quality products are not available, sales decline, and the business finds itself unable to maintain its workforce or its market share.

Product Liability
When a product fails, breaks, or injures someone, a lawyer is second on the call list, right after an ambulance or the vendor. When the potential injury is catastrophic, civil damages can total in the millions of dollars. The Occupational Safety and Health Administration (OSHA) could cite the organization using the product, which in turn might sue the vendor. No business can afford repeated large cash outlays simply because it failed to ensure the availability and accuracy of its QA processes. It costs considerably less to develop, maintain, and improve quality than to pay for the lack of it.

Long-Term Impact
Slipshod Quality Control and Quality Assurance processes that do not detect degrading quality or performance will heavily impact any business's bottom line. Customers will cancel orders, and possibly return products already sold, because no one likes sub-par quality, regardless of the amount paid for it. When products stay on the shelves, no business can survive.

But the opposite can be true, as well. High-quality products or services that meet or exceed customers' needs and expectations can sell for higher prices, and more steadily, than products of lesser quality. Ensuring such high benchmarks is the goal and gold standard of Quality Control and Quality Assurance.

A huge market may await a business’ product or service, but unless the business can deliver a product that meets or exceeds customers’ expectations in a safe, timely, and effective manner, others claim that market share, and the business is left behind.

Assuring quality of products and services is paramount, and maintaining accuracy and consistency of those standards is vital to keeping or expanding customer bases.

Monday, May 9, 2011

Functional & Non-Functional Testing

Functional Testing: Testing the application against business requirements. Functional testing is done using the functional specifications provided by the client, or using design specifications, such as use cases, provided by the design team.

Functional Testing covers:

* Unit Testing
* Smoke Testing / Sanity Testing
* Integration Testing (Top-Down, Bottom-Up)
* Interface & Usability Testing
* System Testing
* Regression Testing
* Pre-User Acceptance Testing (Alpha & Beta)
* User Acceptance Testing
* White Box & Black Box Testing
* Globalization & Localization Testing

Non-Functional Testing: Testing the application against the client's performance and other non-functional requirements. Non-functional testing is done based on the requirements and test scenarios defined by the client.

Non-Functional Testing covers:

* Load and Performance Testing
* Ergonomics Testing
* Stress & Volume Testing
* Compatibility & Migration Testing
* Data Conversion Testing
* Security / Penetration Testing
* Operational Readiness Testing
* Installation Testing
* Security Testing (Application Security, Network Security, System Security)

Sunday, May 1, 2011

What is the difference between severity and priority?

Severity Value: S1: Catastrophic (Blocking):
The use case cannot be completed with any level of workaround. The problem causes data loss, corruption, or distortion. There is defective program functionality with no workaround. A system or product crash or fatal error occurs. The program produces an incorrect result, particularly when the result could cause an incorrect business decision to be made. The problem causes an inability to use the product further, or results in considerable performance degradation (i.e., 3+ times longer than required). The error will definitely cause a customer support call. A severity 1 bug would prevent the product from being released.

Severity Value: S2: Serious (Critical/Major):
The failure is severe but can be avoided with some level of workaround. There must be some means of completing the use case; however, the workaround is undesirable. The program does not function according to product or design specifications. The product produces data or results that are misleading to the user. The problem results in performance degradation that is less than required (i.e., takes twice as long). There is a serious inconsistency or inconvenience. The problem is likely to cause a customer support call.

Severity Value: S3: Normal:
Soft failure: the main flow of the use case works, but there are behavioral or data problems. Performance may be less than desired. The problem is of low intensity or is a typographical issue. There is a minor functional problem with a reasonably satisfactory workaround. The problem is a minor inconvenience, or a non-intuitive or clumsy function, and is not likely to cause a customer support call. A defect that causes the failure of non-critical aspects of the system. The product may be released if the defect is documented, but the existence of the defect may cause customer dissatisfaction.

Severity Value: S4: Small (Minor):
There is no failure; the use case completes as designed. A fix would slightly improve the use case's behavior or performance. A minor condition or documentation error that has no significant effect on the customer's operations. Also: requests for new features and suggestions, which are defined as new functionality in existing licensed software. Generally, the product could be released and most customers would be unaware of the defect's existence or only slightly dissatisfied. Such defects do not affect the release of the product.

Priority Value: P1:
Fix the bug ASAP.

Priority Value: P2:
The bug can be fixed in the current iteration/minor release.

Priority Value: P3:
The bug is very minor. A fix for this defect is not expected during the current iteration/minor release.

Priority Value: P4:
This defect does not need to be addressed in the current release.

Type of Defect:
High Severity & Low Priority (S1/S2 & P4):
1) An application generates banking-related reports weekly, monthly, quarterly, and yearly by doing some calculations. A fault in the yearly report calculation is high severity but low priority [exception: if the defect is discovered during the last week of the year], because it can be fixed in the next release as a change request.
2) An inability to access a rarely used menu option may be of low priority to the business, but the severity is high, because a series of tests, all dependent on access to the option, cannot be executed.

High Severity & High Priority (S1/S2 & P1):
1) A fault in the weekly report calculation is a high severity and high priority fault, because it will block the functionality of the application within a week. It should be fixed urgently.

Low Severity & High Priority (S4 & P1):
1) A spelling mistake or content issue on the homepage of a website that receives hundreds of thousands of hits daily. Although the fault does not affect the website's functionality, the status and popularity of the website in a competitive market make it a high priority fault.
2) A spelling mistake would be deemed low severity by the tester, but if the mistake occurs in the company name or address, the business would class it as high priority.

Low Severity & Low Priority (S4 & P4):
1) A spelling mistake on pages that receive very few hits throughout the month. This fault can be considered low severity and low priority.

Severity vs. Priority

* Severity depends on the harshness of the bug. Priority depends on the urgency with which the bug needs to be fixed.
* Severity is an internal characteristic of the particular bug; examples of high severity bugs include the application failing to start, crashing, or causing data loss. Priority is an external characteristic (based on someone's judgment); examples of high priority bugs include no user being able to log in, a particular functionality not working, or the client logo being incorrect. A high priority bug can have a high, medium, or low severity.
* Severity is based more on the needs of the end-users. Priority is based more on the needs of the business.
* Severity takes only the particular bug into account; the bug may be in an obscure area of the application but still have a high severity. Priority depends on a number of factors, such as the likelihood of the bug occurring, the severity of the bug, and the priorities of other open bugs.
* Severity is (usually) set by the bug reporter. Priority is initially set by the bug reporter, but can be changed by someone else (e.g. management or a developer) at their discretion.
* Severity is objective and therefore less likely to change. Priority is subjective (based on judgment); its value can change over time as the project situation changes.
* A high severity bug may be marked for a fix immediately or later. A high priority bug is marked for a fix immediately.
* The team usually needs only a handful of severity values (e.g. Showstopper, High, Medium, Low). New priority values may be designed (typically by management) on a fairly constant basis; if there are too many high priority defects, a single High value may be split into values such as Fix by the end of the day, Fix in the next build, and Fix in the next release.
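One way to make the distinction concrete in a tracker is to store severity and priority as separate fields and drive the fix queue off priority, keeping severity as supporting information. The field names and numeric scales below are illustrative, not from any particular tool:

```python
# Severity and priority as independent fields; the fix queue is ordered by
# priority first, with severity only breaking ties. Scales are illustrative.
from dataclasses import dataclass

@dataclass
class Bug:
    title: str
    severity: int   # 1 = catastrophic .. 4 = minor
    priority: int   # 1 = fix ASAP .. 4 = can wait

bugs = [
    Bug("Yearly report miscalculates totals", severity=1, priority=4),
    Bug("Company name misspelled on homepage", severity=4, priority=1),
    Bug("Weekly report crashes", severity=1, priority=1),
]

fix_queue = sorted(bugs, key=lambda b: (b.priority, b.severity))
for b in fix_queue:
    print(f"P{b.priority}/S{b.severity}: {b.title}")
```

Note that the high-severity yearly-report bug correctly lands at the bottom of the queue: exactly the S1/P4 case from the examples above.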
I hope that you are now clear about the difference between severity and priority and can explain the difference to anyone with ease.

Test Strategy & Test Plan

Test Strategy
A Test Strategy document is a high-level document, normally developed by the project manager. This document defines the "Testing Approach" used to achieve testing objectives. The Test Strategy is normally derived from the Business Requirement Specification document.

The Test Strategy document is a static document, meaning that it is not updated often. It sets the standards for testing processes and activities, and other documents, such as the Test Plan, draw their contents from the standards set in the Test Strategy document.

Some companies include the "Test Approach" or "Strategy" inside the Test Plan, which is fine and is usually the case for small projects. For larger projects, however, there is one Test Strategy document and a number of Test Plans, one for each phase or level of testing.

Components of the Test Strategy document

* Scope and Objectives
* Business issues
* Roles and responsibilities
* Communication and status reporting
* Test deliverables
* Industry standards to follow
* Test automation and tools
* Testing measurements and metrics
* Risks and mitigation
* Defect reporting and tracking
* Change and configuration management
* Training plan

Test Plan
The Test Plan document, on the other hand, is derived from the Product Description, the Software Requirement Specification (SRS), or Use Case documents.
The Test Plan document is usually prepared by the Test Lead or Test Manager and the focus of the document is to describe what to test, how to test, when to test and who will do what test.

It is not uncommon to have one Master Test Plan, a common document covering all test phases, with each test phase having its own Test Plan document.

There is much debate as to whether the Test Plan document should be a static document, like the Test Strategy document mentioned above, or should be updated often to reflect changes in the direction of the project and its activities.
My own personal view is that when a testing phase starts and the Test Manager is "controlling" the activities, the test plan should be updated to reflect any deviation from the original plan. After all, planning and control are continuous activities in the formal test process.

Components of the Test Plan document

* Test Plan ID
* Introduction
* Test items
* Features to be tested
* Features not to be tested
* Test techniques
* Testing tasks
* Suspension criteria
* Feature pass/fail criteria
* Test environment (Entry criteria, Exit criteria)
* Test deliverables
* Staff and training needs
* Responsibilities
* Schedule

This is a standard approach to preparing test plan and test strategy documents, but things can vary from company to company.