Thursday, December 30, 2010

Traceability Matrix

A requirements traceability matrix is a document that traces and maps user requirements (requirement IDs from the requirement specification document) to test case IDs. Its purpose is to make sure that all requirements are covered by test cases so that no functionality is missed during testing.
This document is prepared to satisfy the client that coverage is complete end to end. It consists of the Requirement/Baseline document reference number, the Test case/Condition, and the Defect/Bug ID, so a reader can trace back to the requirement from a given defect ID.
Note – We can make it a “Test case Coverage checklist” document by adding a few more columns. We will discuss this in later posts.
Types of Traceability Matrix:
  • Forward Traceability – Mapping of Requirements to Test cases
  • Backward Traceability – Mapping of Test Cases to Requirements
  • Bi-Directional Traceability - Mapping in both directions, from test cases back to the baseline documentation and from requirements forward to test cases; a good traceability matrix supports both.
Why is Bi-Directional Traceability required?
Bi-Directional Traceability combines Forward and Backward Traceability. Through the Backward Traceability Matrix, we can see which requirements each test case maps to.
This helps us identify test cases that do not trace to any coverage item; in that case the test case is not required and should be removed (or perhaps a specification, such as a requirement or two, should be added!). This “backward” traceability is also very helpful when you want to find out how many requirements a particular test case covers.
Through Forward Traceability, we can check which test cases cover each requirement, and whether every requirement is covered by a test case at all.
Forward Traceability Matrix ensures – We are building the Right Product.
Backward Traceability Matrix ensures – We are Building the Product Right.
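To make the forward and backward mappings concrete, here is a minimal Python sketch. The requirement and test case IDs are invented placeholders, and in practice the matrix usually lives in a spreadsheet or test management tool rather than in code.

```python
# Minimal sketch of a bi-directional traceability matrix.
# All requirement and test case IDs below are hypothetical.
from collections import defaultdict

# Forward traceability: requirement -> test cases that cover it
forward = {
    "REQ-001": ["TC-101", "TC-102"],
    "REQ-002": ["TC-103"],
    "REQ-003": [],                      # not yet covered by any test case
}

all_test_cases = {"TC-101", "TC-102", "TC-103", "TC-999"}

# Backward traceability: test case -> requirements it covers
backward = defaultdict(list)
for req, test_cases in forward.items():
    for tc in test_cases:
        backward[tc].append(req)

uncovered_requirements = [r for r, tcs in forward.items() if not tcs]
orphan_test_cases = [tc for tc in all_test_cases if tc not in backward]

print("Requirements with no test case:", uncovered_requirements)   # ['REQ-003']
print("Test cases tracing to no requirement:", orphan_test_cases)  # ['TC-999']
print("Requirements covered by TC-101:", backward["TC-101"])       # ['REQ-001']
```

The two lookups answer the questions above: the forward map shows whether every requirement has at least one test case, and the backward map shows which test cases are candidates for removal (or which requirements may be missing).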
The traceability matrix answers the following questions for any software project:
  • How is it feasible to ensure, for each phase of the SDLC, that I have correctly accounted for all the customer’s needs?
  • How can I certify that the final software product meets the customer’s needs? In practice, the traceability matrix is how we make sure every requirement is captured in the test cases.
Disadvantages of not using a Traceability Matrix [some possible, observed impacts]:
No traceability or incomplete traceability results in:
1. Poor or unknown test coverage, and more defects found in production.
2. Bugs missed in earlier test cycles that surface in later test cycles, followed by lengthy discussions and arguments with other teams and managers before release.
3. Difficult project planning and tracking, misunderstandings between teams over project dependencies, delays, etc.
Benefits of using Traceability Matrix
  • Makes it obvious to the client that the software is being developed as per the requirements.
  • Ensures that all requirements are covered by test cases.
  • Ensures that developers are not creating features that no one has requested.
  • Makes it easy to identify missing functionality.
  • If there is a change request for a requirement, we can easily find out which test cases need to be updated.
  • Helps flag “extra” functionality in the completed system that was never specified in the design specification and would otherwise waste manpower, time, and effort.

Cost of Quality


The cost of quality includes all costs incurred in the pursuit of quality or in performing
quality-related activities. Cost of quality studies are conducted to provide a baseline
for the current cost of quality, identify opportunities for reducing the cost of quality,
and provide a normalized basis of comparison. The basis of normalization is
almost always dollars. Once we have normalized quality costs on a dollar basis, we
have the necessary data to evaluate where the opportunities lie to improve our
processes. Furthermore, we can evaluate the effect of changes in dollar-based terms.

Quality costs may be divided into costs associated with prevention, appraisal, and
failure. Prevention costs include
• quality planning
• formal technical reviews
• test equipment
• training

Appraisal costs include activities to gain insight into product condition the “first time
through” each process. Examples of appraisal costs include
• in-process and interprocess inspection
• equipment calibration and maintenance
• testing

Failure costs are those that would disappear if no defects appeared before shipping a
product to customers. Failure costs may be subdivided into internal failure costs and
external failure costs. Internal failure costs are incurred when we detect a defect in
our product prior to shipment. Internal failure costs include
• rework
• repair
• failure mode analysis

External failure costs are associated with defects found after the product has been
shipped to the customer. Examples of external failure costs are
• complaint resolution
• product return and replacement
• help line support
• warranty work

As expected, the relative costs to find and repair a defect increase dramatically as
we go from prevention to detection to internal failure to external failure costs; data collected across many projects and reported by a number of authors illustrates this phenomenon.
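As a rough illustration of normalizing quality costs on a dollar basis, the short sketch below tags invented cost line items with the four categories and totals them; every figure is hypothetical and only the arithmetic is the point.

```python
# Minimal sketch: grouping quality costs by category on a dollar basis.
# The line items and amounts are invented for illustration only.
cost_items = [
    ("quality planning",         "prevention",       12_000),
    ("formal technical reviews", "prevention",       18_000),
    ("testing",                  "appraisal",        40_000),
    ("equipment calibration",    "appraisal",         5_000),
    ("rework",                   "internal failure", 25_000),
    ("warranty work",            "external failure", 60_000),
]

totals = {}
for _name, category, dollars in cost_items:
    totals[category] = totals.get(category, 0) + dollars

grand_total = sum(totals.values())
for category, dollars in sorted(totals.items(), key=lambda kv: -kv[1]):
    share = dollars / grand_total
    print(f"{category:18s} ${dollars:>8,}  ({share:.0%} of total quality cost)")
```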

 

FORMAL TECHNICAL REVIEWS


A formal technical review is a software quality assurance activity performed by software
engineers (and others). The objectives of the FTR are
(1) to uncover errors in function, logic, or implementation for any representation of the software;
(2) to verify that the software under review meets its requirements;
(3) to ensure that the software has been represented according to predefined standards;
(4) to achieve software that is developed in a uniform manner; and
(5) to make projects more manageable.

In addition, the FTR serves as a training ground, enabling junior engineers to observe different approaches to software analysis, design, and implementation. The FTR also serves
to promote backup and continuity because a number of people become familiar with
parts of the software that they may not have otherwise seen.

The FTR is actually a class of reviews that includes walkthroughs, inspections,
round-robin reviews and other small group technical assessments of software. Each
FTR is conducted as a meeting and will be successful only if it is properly planned,
controlled, and attended. In the sections that follow, guidelines similar to those for a
walkthrough are presented as a representative formal technical review.

 

Defect leakage


Defect leakage refers to defects found or reproduced by the client or end user that the tester was unable to find.
Defect leakage is the number of bugs that are found in the field that were not found internally. There are a few ways to express this:
* total number of leaked defects (a simple count)
* defects per customer: number of leaked defects divided by number of customers running that release
* % found in the field: number of leaked defects divided by number of total defects found in that release
In theory, this can be measured at any stage - number of defects leaked from dev into QA, number leaked from QA into beta certification, etc. I've mostly
used it for customers in the field, though.
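The three expressions above are simple ratios; the sketch below shows the arithmetic with invented numbers for a single release.

```python
# Minimal sketch of the three defect-leakage measures described above.
# All counts are invented for illustration.
leaked_defects   = 12    # defects found in the field for this release
internal_defects = 188   # defects found internally (dev + QA) for the same release
customers        = 40    # customers running that release

total_leaked         = leaked_defects
defects_per_customer = leaked_defects / customers
percent_in_field     = leaked_defects / (leaked_defects + internal_defects)

print(f"Total leaked defects: {total_leaked}")
print(f"Defects per customer: {defects_per_customer:.2f}")
print(f"% found in the field: {percent_in_field:.1%}")
```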

Probe Testing


It is almost the same as exploratory testing. It is a creative, intuitive process. Everything testers do is optimized to find bugs fast, so plans often change as testers learn more about the product and its weaknesses. Session-based test management is one method to organize and direct exploratory testing. It allows us to provide meaningful reports to management while preserving the creativity that makes exploratory testing work, typically through session reports and simple metrics produced from those reports.

Automation Testing v/s Manual Testing Guidelines:

I met with my team’s automation experts a few weeks back to get their input on when to automate and when to test manually. The general rule of thumb has always been to use common sense. If you’re only going to run the test once or twice, or the test is really expensive to automate, it is most likely a manual test. But then again, what good is saying “use common sense” when you need to come up with a deterministic set of guidelines on how and when to automate?
Pros of Automation
• If you have to run a set of tests repeatedly, automation is a huge win for you
• It gives you the ability to run automation against code that frequently changes to catch regressions in a timely manner
• It gives you the ability to run automation in mainstream scenarios to catch regressions in a timely manner (see What is a Nightly)
• Aids in testing a large test matrix (different languages on different OS platforms). Automated tests can be run at the same time on different machines, whereas the manual tests would have to be run sequentially.
Cons of Automation
• It costs more to automate. Writing the test cases and writing or configuring the automation framework you’re using costs more initially than running the test manually.
• Visual checks can’t be automated; for example, if you can’t verify the font colour via code or the automation tool, it is a manual test.
Pros of Manual
• If the test case only runs twice per coding milestone, it most likely should be a manual test; it costs less than automating it.
• It allows the tester to perform more ad-hoc (random) testing. In my experience, more bugs are found via ad-hoc testing than via automation. And, the more time a tester spends playing with the feature, the greater the odds of finding real user bugs.
Cons of Manual
• Running tests manually can be very time consuming
• Each time there is a new build, the tester must rerun all required tests - which after a while would become very mundane and tiresome.
Other deciding factors:
• What you automate depends on the tools you use. If the tools have any limitations, those tests are manual.
• Is the return on investment worth automating? Is what you get out of automation worth the cost of setting up and supporting the test cases, the automation framework, and the system that runs the test cases?
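One way to answer the ROI question is a simple break-even comparison between automating a test and running it by hand. The effort figures below are invented, and the model ignores real-world factors such as flaky-test triage and framework upkeep, so treat it as a starting point rather than a rule.

```python
# Minimal break-even sketch for "is automation worth it?".
# All effort figures (in hours) are assumptions for illustration.
cost_to_automate    = 16.0   # write the test and wire it into the framework
cost_per_auto_run   = 0.1    # amortized maintenance/triage per automated run
cost_per_manual_run = 1.5    # tester time per manual execution
planned_runs        = 30     # e.g. nightly runs over one milestone

manual_total    = planned_runs * cost_per_manual_run
automated_total = cost_to_automate + planned_runs * cost_per_auto_run

print(f"Manual:    {manual_total:.1f} hours")
print(f"Automated: {automated_total:.1f} hours")
print("Automate it" if automated_total < manual_total else "Keep it manual")
```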

How to test a website by Manual Testing?

A: — Web Testing
While testing websites, the following areas should be considered.
Functionality
Performance
Usability
Server side interface
Client side compatibility
Security
Functionality:
In testing the functionality of a web site, the following should be tested.
Links
Internal links
External links
Mail links
Broken links (a minimal check is sketched below)
Forms
Field validation
Functional chart
Error message for wrong input
Optional and mandatory fields
Database
Testing will be done on database integrity.
Cookies
Testing will be done on the client side, on the temporary internet files.
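As a small illustration of the link checks listed above, the following sketch sends a HEAD request to each URL and reports broken or unreachable links. The URLs are placeholders; in practice you would feed it the set of internal, external, and mail links crawled from the site under test.

```python
# Minimal broken-link check (sketch only, not a crawler).
# The URLs below are placeholders.
import urllib.error
import urllib.request

links_to_check = [
    "https://example.com/",             # internal link (placeholder)
    "https://example.com/contact",      # another page (placeholder)
    "https://example.com/no-such-page", # likely broken (placeholder)
]

for url in links_to_check:
    try:
        request = urllib.request.Request(url, method="HEAD")
        with urllib.request.urlopen(request, timeout=10) as response:
            print(f"OK     {response.status}  {url}")
    except urllib.error.HTTPError as err:   # server answered with 4xx/5xx
        print(f"BROKEN {err.code}  {url}")
    except urllib.error.URLError as err:    # DNS failure, refused connection, etc.
        print(f"ERROR  {err.reason}  {url}")
```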
Performance:
Performance testing can be applied to understand the web site’s scalability, or to benchmark the performance in an environment of third-party products, such as servers and middleware, for a potential purchase.
Connection speed:
Tested over various networks like dial-up, ISDN, etc.
Load
What is the number of users per unit of time?
Check for peak loads & how system behaves.
Large amount of data accessed by user.
Stress
Continuous load
Performance of memory, CPU, file handling, etc.
Usability:
Usability testing is the process by which the human-computer interaction characteristics of a system are measured, and weaknesses are identified for correction. Usability can be defined as the degree to which a given piece of software assists the person sitting at the keyboard to accomplish a task, as opposed to becoming an additional impediment to such accomplishment. The broad goal of usable systems is often assessed using several
criteria:
Ease of learning
Navigation
Subjective user satisfaction
General appearance
Server side interface:
In web testing the server side interface should be tested.
This is done by verifying that communication is carried out properly.
Compatibility of server with software, hardware, network and database should be tested.
Client-side compatibility is also tested on various platforms, using various browsers, etc.
Security:
The primary reason for testing the security of a web site is to identify potential vulnerabilities and subsequently repair them.
The following types of testing are described in this section:
Network Scanning
Vulnerability Scanning
Password Cracking
Log Review
Integrity Checkers
Virus Detection
Performance Testing
Performance testing is a rigorous evaluation of a working system under realistic conditions to identify problems and to compare measures such as success rate, task time, and user satisfaction with requirements. The goal of performance testing is not to find bugs, but to eliminate bottlenecks and establish a baseline for future regression testing.
To conduct performance testing is to engage in a carefully controlled process of measurement and analysis. Ideally, the software under test is already stable enough so that this process can proceed smoothly. A clearly defined set of expectations is essential for meaningful performance testing.
For example, for a Web application, you need to know at least two things:
expected load in terms of concurrent users or HTTP connections
acceptable response time
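To turn those two numbers into a quick check, the sketch below fires a batch of concurrent requests at a placeholder URL and compares the slowest response time against an assumed threshold. It is only a rough smoke test, not a replacement for a dedicated load testing tool.

```python
# Minimal concurrent-response-time sketch; URL and limits are assumptions.
import time
import urllib.request
from concurrent.futures import ThreadPoolExecutor

URL = "https://example.com/"     # system under test (placeholder)
CONCURRENT_USERS = 20            # expected load (assumption)
ACCEPTABLE_SECONDS = 2.0         # acceptable response time (assumption)

def timed_request(_):
    start = time.perf_counter()
    with urllib.request.urlopen(URL, timeout=30) as response:
        response.read()
    return time.perf_counter() - start

with ThreadPoolExecutor(max_workers=CONCURRENT_USERS) as pool:
    timings = list(pool.map(timed_request, range(CONCURRENT_USERS)))

slowest = max(timings)
print(f"Slowest of {CONCURRENT_USERS} concurrent requests: {slowest:.2f}s")
print("PASS" if slowest <= ACCEPTABLE_SECONDS else "FAIL")
```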
Load testing:
Load testing is usually defined as the process of exercising the system under test by feeding it the largest tasks it can operate with. Load testing is sometimes called volume testing, or longevity/endurance testing.
Examples of volume testing:
testing a word processor by editing a very large document
testing a printer by sending it a very large job
testing a mail server with thousands of user mailboxes
Examples of longevity/endurance testing:
testing a client-server application by running the client in a loop against the server over an extended period of time
Goals of load testing:
Expose bugs that do not surface in cursory testing, such as memory management bugs, memory leaks, buffer overflows, etc. Ensure that the application meets the performance baseline established during Performance testing. This is done by running regression tests against the application at a specified maximum load.
Although performance testing and load testing can seem similar, their goals are different. Performance testing uses load testing techniques and tools for measurement and benchmarking purposes and uses various load levels, whereas load testing operates at a predefined load level, usually the highest load that the system can accept while still functioning properly.
Stress testing:
Stress testing is a form of testing that is used to determine the stability of a given system or entity. This is designed to test the software with abnormal situations. Stress testing attempts to find the limits at which the system will fail through abnormal quantity or frequency of inputs.
Stress testing tries to break the system under test by overwhelming its resources or by taking resources away from it (in which case it is sometimes called negative testing).
The main purpose behind this madness is to make sure that the system fails and recovers gracefully — this quality is known as recoverability.
The point is not simply to break the system, but to observe how the system reacts to failure. Stress testing looks for the following:
Does it save its state or does it crash suddenly?
Does it just hang and freeze or does it fail gracefully?
Is it able to recover from the last good state on restart?
Etc.
Compatibility Testing
Testing to ensure compatibility of an application or Web site with different browsers, operating systems, and hardware platforms. Different versions, configurations, display resolutions, and Internet connection speeds can all impact the behavior of the product and introduce costly and embarrassing bugs. We test for compatibility using real test environments; that is, we test how the system performs in a particular software, hardware, or network environment. Compatibility testing can be performed manually or can be driven by an automated functional or regression test suite. The purpose of compatibility testing is to reveal issues related to the product’s interaction with other software as well as hardware. The product’s compatibility is evaluated by first identifying the hardware/software/browser components that the product is designed to support. Then a hardware/software/browser matrix is designed that indicates the configurations on which the product will be tested. Then, with input from the client, a testing script is designed that will be sufficient to evaluate compatibility between the product and the hardware/software/browser matrix. Finally, the script is executed against the matrix, and any anomalies are investigated to determine exactly where the incompatibility lies.
Some typical compatibility tests include testing your application:
On various client hardware configurations
Using different memory sizes and hard drive space
On various Operating Systems
In different network environments
With different printers and peripherals (e.g., zip drives, USB drives, etc.)
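A simple way to enumerate the hardware/software/browser matrix described above is to take the cross product of the supported components. The component lists below are examples only; in practice the configurations the client has committed to support define them.

```python
# Minimal sketch of a compatibility matrix as a cross product.
# The component lists are illustrative, not a real support statement.
from itertools import product

operating_systems = ["Windows 7", "Windows XP", "Ubuntu 10.04"]
browsers          = ["IE 8", "Firefox 3.6", "Chrome 8"]
resolutions       = ["1024x768", "1280x800"]

matrix = list(product(operating_systems, browsers, resolutions))
print(f"{len(matrix)} configurations to cover, for example:")
for os_name, browser, resolution in matrix[:3]:
    print(f"  {os_name} / {browser} / {resolution}")
```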

Wednesday, December 29, 2010

THE PROJECT PLAN

Each step in the software engineering process should produce a deliverable that can
be reviewed and that can act as a foundation for the steps that follow. The Software
Project Plan is produced at the culmination of the planning tasks. It provides baseline
cost and scheduling information that will be used throughout the software process.
The Software Project Plan is a relatively brief document that is addressed to a diverse
audience. It must (1) communicate scope and resources to software management,
technical staff, and the customer; (2) define risks and suggest risk aversion techniques;
(3) define cost and schedule for management review; (4) provide an overall approach
to software development for all people associated with the project; and (5) outline
how quality will be ensured and change will be managed.
A presentation of cost and schedule will vary with the audience addressed. If the
plan is used only as an internal document, the results of each estimation technique
can be presented. When the plan is disseminated outside the organization, a reconciled
cost breakdown (combining the results of all estimation techniques) is provided.
Similarly, the degree of detail contained within the schedule section may vary with
the audience and formality of the plan.
It is important to note that the Software Project Plan is not a static document. That
is, the project team revisits the plan repeatedly—updating risks, estimates, schedules
and related information—as the project proceeds and more is learned.

The WINWIN Spiral Model


The spiral model suggests a framework activity that addresses customer communication.
 The objective of this activity is to elicit project requirements from the customer.
 In an ideal context, the developer simply asks the customer what is required and
the customer provides sufficient detail to proceed. Unfortunately, this rarely happens.
In reality, the customer and the developer enter into a process of negotiation,
 where the customer may be asked to balance functionality,
performance, and other product or system characteristics against cost and
time to market.

The best negotiations strive for a “win-win” result. That is, the customer wins by
getting the system or product that satisfies the majority of the customer’s needs and
the developer wins by working to realistic and achievable budgets and deadlines.

Boehm’s WINWIN spiral model  defines a set of negotiation activities at
the beginning of each pass around the spiral. Rather than a single customer communication
activity, the following activities are defined:
1. Identification of the system or subsystem’s key “stakeholders.”
2. Determination of the stakeholders’ “win conditions.”
3. Negotiation of the stakeholders’ win conditions to reconcile them into a set of
win-win conditions for all concerned (including the software project team).
Successful completion of these initial steps achieves a win-win result, which becomes
the key criterion for proceeding to software and system definition.

In addition to the emphasis placed on early negotiation, the WINWIN spiral model
introduces three process milestones, called anchor points , that help establish
the completion of one cycle around the spiral and provide decision milestones
before the software project proceeds.

In essence, the anchor points represent three different views of progress as the
project traverses the spiral. The first anchor point, life cycle objectives (LCO), defines
a set of objectives for each major software engineering activity. For example, as part
of LCO, a set of objectives establishes the definition of top-level system/product
requirements. The second anchor point, life cycle architecture (LCA), establishes objectives
that must be met as the system and software architecture is defined. For example,
as part of LCA, the software project team must demonstrate that it has evaluated
the applicability of off-the-shelf and reusable software components and considered
their impact on architectural decisions. Initial operational capability (IOC) is the third
anchor point and represents a set of objectives associated with the preparation of the
software for installation/distribution, site preparation prior to installation, and assistance
required by all parties that will use or support the software.

The Spiral Model

The spiral model, originally proposed by Boehm , is an evolutionary software
process model that couples the iterative nature of prototyping with the controlled and
systematic aspects of the linear sequential model. It provides the potential for rapid
development of incremental versions of the software. Using the spiral model, software
is developed in a series of incremental releases. During early iterations, the
incremental release might be a paper model or prototype. During later iterations,
increasingly more complete versions of the engineered system are produced.
A spiral model is divided into a number of framework activities, also called task
regions. Typically, there are between three and six task regions. A typical spiral model
contains six task regions:
Customer communication—tasks required to establish effective communication
between developer and customer.

Planning—tasks required to define resources, timelines, and other project-related
information.

Risk analysis—tasks required to assess both technical and management
risks.

Engineering—tasks required to build one or more representations of the
application.

Construction and release—tasks required to construct, test, install, and
provide user support (e.g., documentation and training).

Customer evaluation—tasks required to obtain customer feedback based
on evaluation of the software representations created during the engineering
stage and implemented during the installation stage.

Each of the regions is populated by a set of work tasks, called a task set, that are
adapted to the characteristics of the project to be undertaken. For small projects, the
number of work tasks and their formality is low. For larger, more critical projects,
each task region contains more work tasks that are defined to achieve a higher level
of formality.
As this evolutionary process begins, the software engineering team moves around
the spiral in a clockwise direction, beginning at the center. The first circuit around
the spiral might result in the development of a product specification; subsequent
passes around the spiral might be used to develop a prototype and then progressively
more sophisticated versions of the software. Each pass through the planning region
results in adjustments to the project plan. Cost and schedule are adjusted based on
feedback derived from customer evaluation. In addition, the project manager adjusts
the planned number of iterations required to complete the software.

Unlike classical process models that end when software is delivered, the spiral
model can be adapted to apply throughout the life of the computer software. An alternative
view of the spiral model can be considered by examining the project entry point
axis. Each cube placed along the axis can be used to represent
the starting point for different types of projects. A “concept development
project” starts at the core of the spiral and will continue (multiple iterations occur
along the spiral path that bounds the central shaded region) until concept development
is complete. If the concept is to be developed into an actual product, the process
proceeds through the next cube (new product development project entry point) and
a “new development project” is initiated. The new product will evolve through a number
of iterations around the spiral, following the path that bounds the region that has
somewhat lighter shading than the core. In essence, the spiral, when characterized
in this way, remains operative until the software is retired. There are times when the
process is dormant, but whenever a change is initiated, the process starts at the appropriate
entry point.

The spiral model is a realistic approach to the development of large-scale systems
and software. Because software evolves as the process progresses, the developer and
customer better understand and react to risks at each evolutionary level. The spiral model
uses prototyping as a risk reduction mechanism but, more important, enables the developer
to apply the prototyping approach at any stage in the evolution of the product. It
maintains the systematic stepwise approach suggested by the classic life cycle but incorporates
it into an iterative framework that more realistically reflects the real world. The
spiral model demands a direct consideration of technical risks at all stages of the project
and, if properly applied, should reduce risks before they become problematic.
But like other paradigms, the spiral model is not a panacea. It may be difficult to
convince customers (particularly in contract situations) that the evolutionary approach
is controllable. It demands considerable risk assessment expertise and relies on this
expertise for success. If a major risk is not uncovered and managed, problems will
undoubtedly occur. Finally, the model has not been used as widely as the linear
sequential or prototyping paradigms. It will take a number of years before efficacy of
this important paradigm can be determined with absolute certainty.

The Incremental Model

The incremental model combines elements of the linear sequential model (applied
repetitively) with the iterative philosophy of prototyping. The
incremental model applies linear sequences in a staggered fashion as calendar time
progresses. Each linear sequence produces a deliverable “increment” of the software.
For example, word-processing software developed using the incremental
paradigm might deliver basic file management, editing, and document production
functions in the first increment; more sophisticated editing and document production
capabilities in the second increment; spelling and grammar checking in the third
increment; and advanced page layout capability in the fourth increment. It should be
noted that the process flow for any increment can incorporate the prototyping paradigm.
When an incremental model is used, the first increment is often a core product.
That is, basic requirements are addressed, but many supplementary features (some
known, others unknown) remain undelivered. The core product is used by the customer
(or undergoes detailed review). As a result of use and/or evaluation, a plan is
developed for the next increment. The plan addresses the modification of the core
product to better meet the needs of the customer and the delivery of additional
features and functionality. This process is repeated following the delivery of each
increment, until the complete product is produced.

The incremental process model, like prototyping  and other evolutionary
approaches, is iterative in nature. But unlike prototyping, the incremental
model focuses on the delivery of an operational product with each increment. Early
increments are stripped down versions of the final product, but they do provide capability
that serves the user and also provide a platform for evaluation by the user.
Incremental development is particularly useful when staffing is unavailable for a
complete implementation by the business deadline that has been established for the
project. Early increments can be implemented with fewer people. If the core product
is well received, then additional staff (if required) can be added to implement the next
increment. In addition, increments can be planned to manage technical risks.

 For example, a major system might require the availability of new hardware that is under
development and whose delivery date is uncertain. It might be possible to plan early
increments in a way that avoids the use of this hardware, thereby enabling partial
functionality to be delivered to end-users without inordinate delay.

Rapid application development (RAD)

Rapid application development (RAD) is an incremental software development process
model that emphasizes an extremely short development cycle. The RAD model is a
“high-speed” adaptation of the linear sequential model in which rapid development
is achieved by using component-based construction. If requirements are well understood
and project scope is constrained, the RAD process enables a development team
to create a “fully functional system” within very short time periods (e.g., 60 to 90 days).
Used primarily for information systems applications, the RAD approach
encompasses the following phases :

Business modeling. The information flow among business functions is modeled in
a way that answers the following questions: What information drives the business
process? What information is generated? Who generates it? Where does the information
go? Who processes it?

Data modeling. The information flow defined as part of the business modeling phase
is refined into a set of data objects that are needed to support the business. The
characteristics (called attributes) of each object are identified and the relationships between
these objects defined.

Process modeling. The data objects defined in the data modeling phase are transformed
to achieve the information flow necessary to implement a business function.
Processing descriptions are created for adding, modifying, deleting, or retrieving a
data object.

Application generation. RAD assumes the use of fourth generation techniques.
Rather than creating software using conventional third generation
programming languages, the RAD process works to reuse existing program components
(when possible) or create reusable components (when necessary). In all cases,
automated tools are used to facilitate construction of the software.

Testing and turnover. Since the RAD process emphasizes reuse, many of the program
components have already been tested. This reduces overall testing time. However,
new components must be tested and all interfaces must be fully exercised.
Obviously, the time constraints imposed on a RAD project demand “scalable scope”.
If a business application can be modularized in a way that enables each major function to be
completed in less than three months (using the approach described previously), it is a candidate
for RAD. Each major function can be addressed by a separate RAD team and then
integrated to form a whole.

Like all process models, the RAD approach has drawbacks:
• For large but scalable projects, RAD requires sufficient human resources to
create the right number of RAD teams.

• RAD requires developers and customers who are committed to the rapid-fire
activities necessary to get a system complete in a much abbreviated time
frame. If commitment is lacking from either constituency, RAD projects will
fail.

• Not all types of applications are appropriate for RAD. If a system cannot be
properly modularized, building the components necessary for RAD will be
problematic. If high performance is an issue and performance is to be
achieved through tuning the interfaces to system components, the RAD
approach may not work.

• RAD is not appropriate when technical risks are high. This occurs when a new
application makes heavy use of new technology or when the new software
requires a high degree of interoperability with existing computer programs.


 

Prototype Model

Often, a customer defines a set of general objectives for software but does not identify
detailed input, processing, or output requirements. In other cases, the developer
may be unsure of the efficiency of an algorithm, the adaptability of an operating system,
or the form that human/machine interaction should take. In these, and many
other situations, a prototyping paradigm may offer the best approach.

The prototyping paradigm begins with requirements gathering. Developer
and customer meet and define the overall objectives for the software, identify
whatever requirements are known, and outline areas where further definition is
mandatory. A "quick design" then occurs. The quick design focuses on a representation
of those aspects of the software that will be visible to the customer/user (e.g.,
input approaches and output formats). The quick design leads to the construction of
a prototype. The prototype is evaluated by the customer/user and used to refine
requirements for the software to be developed. Iteration occurs as the prototype is
tuned to satisfy the needs of the customer, while at the same time enabling the developer
to better understand what needs to be done.

Ideally, the prototype serves as a mechanism for identifying software requirements.
If a working prototype is built, the developer attempts to use existing program fragments
or applies tools (e.g., report generators, window managers) that enable working
programs to be generated quickly.



 

But what do we do with the prototype when it has served the purpose just
described? Brooks [BRO75] provides an answer:
In most projects, the first system built is barely usable. It may be too slow, too big, awkward
in use or all three. There is no alternative but to start again, smarting but smarter, and build
a redesigned version in which these problems are solved . . . When a new system concept
or new technology is used, one has to build a system to throw away, for even the best planning
is not so omniscient as to get it right the first time. The management question, therefore,
is not whether to build a pilot system and throw it away. You will do that. The only
question is whether to plan in advance to build a throwaway, or to promise to deliver the
throwaway to customers . . .
The prototype can serve as "the first system." The one that Brooks recommends
we throw away. But this may be an idealized view. It is true that both customers and
developers like the prototyping paradigm. Users get a feel for the actual system and
developers get to build something immediately. Yet, prototyping can also be problematic
for the following reasons:

1. The customer sees what appears to be a working version of the software,
unaware that the prototype is held together “with chewing gum and baling
wire,” unaware that in the rush to get it working no one has considered overall
software quality or long-term maintainability. When informed that the
product must be rebuilt so that high levels of quality can be maintained, the
customer cries foul and demands that "a few fixes" be applied to make the
prototype a working product. Too often, software development management
relents.

2. The developer often makes implementation compromises in order to get a
prototype working quickly. An inappropriate operating system or programming
language may be used simply because it is available and known; an
inefficient algorithm may be implemented simply to demonstrate capability.
After a time, the developer may become familiar with these choices and forget
all the reasons why they were inappropriate. The less-than-ideal choice
has now become an integral part of the system.

Although problems can occur, prototyping can be an effective paradigm for software
engineering. The key is to define the rules of the game at the beginning; that is,
the customer and developer must both agree that the prototype is built to serve as a
mechanism for defining requirements. It is then discarded (at least in part) and the
actual software is engineered with an eye toward quality and maintainability.