Wednesday, December 29, 2010

The Incremental Model

The incremental model combines elements of the linear sequential model (applied
repetitively) with the iterative philosophy of prototyping. The
incremental model applies linear sequences in a staggered fashion as calendar time
progresses. Each linear sequence produces a deliverable “increment” of the software.
For example, word-processing software developed using the incremental
paradigm might deliver basic file management, editing, and document production
functions in the first increment; more sophisticated editing and document production
capabilities in the second increment; spelling and grammar checking in the third
increment; and advanced page layout capability in the fourth increment. It should be
noted that the process flow for any increment can incorporate the prototyping paradigm.
When an incremental model is used, the first increment is often a core product.
That is, basic requirements are addressed, but many supplementary features (some
known, others unknown) remain undelivered. The core product is used by the customer
(or undergoes detailed review). As a result of use and/or evaluation, a plan is
developed for the next increment. The plan addresses the modification of the core
product to better meet the needs of the customer and the delivery of additional
features and functionality. This process is repeated following the delivery of each
increment, until the complete product is produced.

The incremental process model, like prototyping and other evolutionary
approaches, is iterative in nature. But unlike prototyping, the incremental
model focuses on the delivery of an operational product with each increment. Early
increments are stripped down versions of the final product, but they do provide capability
that serves the user and also provide a platform for evaluation by the user.
Incremental development is particularly useful when staffing is unavailable for a
complete implementation by the business deadline that has been established for the
project. Early increments can be implemented with fewer people. If the core product
is well received, then additional staff (if required) can be added to implement the next
increment. In addition, increments can be planned to manage technical risks.

 For example, a major system might require the availability of new hardware that is under
development and whose delivery date is uncertain. It might be possible to plan early
increments in a way that avoids the use of this hardware, thereby enabling partial
functionality to be delivered to end-users without inordinate delay.
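
As a rough sketch of this planning idea (the feature names are simply taken from the word-processing example above, not from any real plan), the increments can be thought of as successive feature sets, each one delivered as an operational product that builds on the earlier deliveries:

# Hypothetical sketch of incremental delivery planning for the
# word-processing example above; names are illustrative only.
INCREMENTS = [
    {"name": "Increment 1 (core product)",
     "features": ["basic file management", "basic editing", "document production"]},
    {"name": "Increment 2",
     "features": ["sophisticated editing", "advanced document production"]},
    {"name": "Increment 3",
     "features": ["spelling checking", "grammar checking"]},
    {"name": "Increment 4",
     "features": ["advanced page layout"]},
]

def delivered_features(up_to_increment: int) -> list[str]:
    """Return every feature delivered once the given increment has shipped."""
    features = []
    for increment in INCREMENTS[:up_to_increment]:
        features.extend(increment["features"])
    return features

if __name__ == "__main__":
    # After the second delivery the customer has an operational, if
    # stripped-down, product containing the first two feature sets.
    print(delivered_features(2))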

Rapid application development (RAD)

Rapid application development (RAD) is an incremental software development process
model that emphasizes an extremely short development cycle. The RAD model is a
“high-speed” adaptation of the linear sequential model in which rapid development
is achieved by using component-based construction. If requirements are well understood
and project scope is constrained, the RAD process enables a development team
to create a “fully functional system” within very short time periods (e.g., 60 to 90 days).
Used primarily for information systems applications, the RAD approach
encompasses the following phases:

Business modeling. The information flow among business functions is modeled in
a way that answers the following questions: What information drives the business
process? What information is generated? Who generates it? Where does the information
go? Who processes it?

Data modeling. The information flow defined as part of the business modeling phase
is refined into a set of data objects that are needed to support the business. The
characteristics (called attributes) of each object are identified and the relationships between
these objects are defined.

Process modeling. The data objects defined in the data modeling phase are transformed
to achieve the information flow necessary to implement a business function.
Processing descriptions are created for adding, modifying, deleting, or retrieving a
data object.
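
As a minimal, hypothetical sketch of these two phases (the object and attribute names below are invented for illustration), a data object identified during data modeling can be described by its attributes, and process modeling then adds processing descriptions for adding, modifying, deleting, or retrieving it:

from dataclasses import dataclass

# Data modeling: a data object ("Customer") described by its attributes.
@dataclass
class Customer:
    customer_id: int
    name: str
    city: str

# Process modeling: processing descriptions for adding, modifying,
# deleting, and retrieving the data object.
class CustomerProcess:
    def __init__(self) -> None:
        self._store: dict[int, Customer] = {}

    def add(self, customer: Customer) -> None:
        self._store[customer.customer_id] = customer

    def retrieve(self, customer_id: int):
        return self._store.get(customer_id)

    def modify(self, customer_id: int, **changes) -> None:
        customer = self._store[customer_id]
        for attribute, value in changes.items():
            setattr(customer, attribute, value)

    def delete(self, customer_id: int) -> None:
        self._store.pop(customer_id, None)

process = CustomerProcess()
process.add(Customer(1, "Acme Traders", "Chennai"))
process.modify(1, city="Bangalore")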

Application generation. RAD assumes the use of fourth generation techniques
(Section 2.10). Rather than creating software using conventional third-generation
programming languages, the RAD process works to reuse existing program components
(when possible) or create reusable components (when necessary). In all cases,
automated tools are used to facilitate construction of the software.

Testing and turnover. Since the RAD process emphasizes reuse, many of the program
components have already been tested. This reduces overall testing time. However,
new components must be tested and all interfaces must be fully exercised.
Obviously, the time constraints imposed on a RAD project demand “scalable scope”.
If a business application can be modularized in a way that enables each major function to be
completed in less than three months (using the approach described previously), it is a candidate
for RAD. Each major function can be addressed by a separate RAD team and then
integrated to form a whole.

Like all process models, the RAD approach has drawbacks:
• For large but scalable projects, RAD requires sufficient human resources to
create the right number of RAD teams.

• RAD requires developers and customers who are committed to the rapid-fire
activities necessary to get a system complete in a much abbreviated time
frame. If commitment is lacking from either constituency, RAD projects will
fail.

• Not all types of applications are appropriate for RAD. If a system cannot be
properly modularized, building the components necessary for RAD will be
problematic. If high performance is an issue and performance is to be
achieved through tuning the interfaces to system components, the RAD
approach may not work.

• RAD is not appropriate when technical risks are high. This occurs when a new
application makes heavy use of new technology or when the new software
requires a high degree of interoperability with existing computer programs.


 

Prototype Model

Often, a customer defines a set of general objectives for software but does not identify
detailed input, processing, or output requirements. In other cases, the developer
may be unsure of the efficiency of an algorithm, the adaptability of an operating system,
or the form that human/machine interaction should take. In these, and many
other situations, a prototyping paradigm may offer the best approach.

The prototyping paradigm (Figure 2.5) begins with requirements gathering. Developer
and customer meet and define the overall objectives for the software, identify
whatever requirements are known, and outline areas where further definition is
mandatory. A "quick design" then occurs. The quick design focuses on a representation
of those aspects of the software that will be visible to the customer/user (e.g.,
input approaches and output formats). The quick design leads to the construction of
a prototype. The prototype is evaluated by the customer/user and used to refine
requirements for the software to be developed. Iteration occurs as the prototype is
tuned to satisfy the needs of the customer, while at the same time enabling the developer
to better understand what needs to be done.

Ideally, the prototype serves as a mechanism for identifying software requirements.
If a working prototype is built, the developer attempts to use existing program fragments
or applies tools (e.g., report generators, window managers) that enable working
programs to be generated quickly.
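
For instance (a purely illustrative sketch, not taken from the text), a throwaway prototype might mock up just the customer-visible input approach and output format with hard-coded data, so the customer can react to it long before any real processing exists:

# Throwaway prototype: only the customer-visible output format is mocked.
# All data is hard-coded; there is no real processing behind it.
SAMPLE_ORDERS = [
    ("A-1001", "Widget", 3, 19.50),
    ("A-1002", "Gadget", 1, 42.00),
]

def show_report() -> None:
    print(f"{'Order':<8}{'Item':<10}{'Qty':>4}{'Total':>10}")
    for order_no, item, qty, price in SAMPLE_ORDERS:
        print(f"{order_no:<8}{item:<10}{qty:>4}{qty * price:>10.2f}")

if __name__ == "__main__":
    show_report()  # the customer evaluates the format, not the logic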



 

But what do we do with the prototype when it has served the purpose just
described? Brooks [BRO75] provides an answer:
In most projects, the first system built is barely usable. It may be too slow, too big, awkward
in use or all three. There is no alternative but to start again, smarting but smarter, and build
a redesigned version in which these problems are solved . . . When a new system concept
or new technology is used, one has to build a system to throw away, for even the best planning
is not so omniscient as to get it right the first time. The management question, therefore,
is not whether to build a pilot system and throw it away. You will do that. The only
question is whether to plan in advance to build a throwaway, or to promise to deliver the
throwaway to customers . . .
The prototype can serve as "the first system." The one that Brooks recommends
we throw away. But this may be an idealized view. It is true that both customers and
developers like the prototyping paradigm. Users get a feel for the actual system and
developers get to build something immediately. Yet, prototyping can also be problematic
for the following reasons:

1. The customer sees what appears to be a working version of the software,
unaware that the prototype is held together “with chewing gum and baling
wire,” unaware that in the rush to get it working no one has considered overall
software quality or long-term maintainability. When informed that the
product must be rebuilt so that high levels of quality can be maintained, the
customer cries foul and demands that "a few fixes" be applied to make the
prototype a working product. Too often, software development management
relents.

2. The developer often makes implementation compromises in order to get a
prototype working quickly. An inappropriate operating system or programming
language may be used simply because it is available and known; an
inefficient algorithm may be implemented simply to demonstrate capability.
After a time, the developer may become familiar with these choices and forget
all the reasons why they were inappropriate. The less-than-ideal choice
has now become an integral part of the system.

Although problems can occur, prototyping can be an effective paradigm for software
engineering. The key is to define the rules of the game at the beginning; that is,
the customer and developer must both agree that the prototype is built to serve as a
mechanism for defining requirements. It is then discarded (at least in part) and the
actual software is engineered with an eye toward quality and maintainability.

People CMM

The People CMM is a maturity framework that describes the key elements of managing and developing the workforce of an organization. It describes an evolutionary improvement path from an ad hoc approach to managing the work-force, to a mature, disciplined development of the knowledge, skills, and motivation of the people that fuels enhanced business performance.
The People CMM helps organizations to
  • characterize the maturity of their human resource practices
  • set priorities for improving the competence of their workforce
  • integrate competence growth with process improvement
  • establish a culture of workforce excellence
The People CMM publications and training will support incorporating people management capabilities into software improvement programs by communicating a model that complements the Capability Maturity Model Integration (CMMI), and by making available an appraisal method that can be used alone or integrated with existing process appraisal methods.
The People CMM is designed to guide organizations in selecting activities for improving their workforce practices based on the current maturity of their workforce practices. By concentrating on a focused set of practices and working aggressively to install them, organizations can steadily improve their level of talent and make continuous and lasting gains in their performance. The People CMM guides an organization through a series of increasingly sophisticated practices and techniques for developing its overall work-force. These practices have been chosen from experience as those that have significant impact on individual, team, and organizational performance.
Framework of the People CMM
 The People Capability Maturity Model (People CMM) adapts the maturity framework of the Capability Maturity Model for Software (CMM) [Paulk 95], to managing and developing an organization's work force. The motivation for the People CMM is to radically improve the ability of software organizations to attract, develop, motivate, organize, and retain the talent needed to continuously improve software development capability. The People CMM is designed to allow software organizations to integrate work-force improvement with software process improvement programs guided by the SW-CMM. The People CMM can also be used by any kind of organization as a guide for improving their people-related and work-force practices.
Based on the best current practices in fields such as human resources and organizational development, the People CMM provides organizations with guidance on how to gain control of their processes for managing and developing their work force. The People CMM helps organizations to characterize the maturity of their work-force practices, guide a program of continuous work-force development, set priorities for immediate actions, integrate work-force development with process improvement, and establish a culture of software engineering excellence. It describes an evolutionary improvement path from ad hoc, inconsistently performed practices, to a mature, disciplined development of the knowledge, skills, and motivation of the work force, just as the CMM describes an evolutionary improvement path for the software processes within an organization.
The People CMM consists of five maturity levels that lay successive foundations for continuously improving talent, developing effective teams, and successfully managing the people assets of the organization. Each maturity level is a well-defined evolutionary plateau that institutionalizes a level of capability for developing the talent within the organization.
Except for Level 1, each maturity level is decomposed into several key process areas that indicate the areas an organization should focus on to improve its workforce capability. Each key process area is described in terms of the key practices that contribute to satisfying its goals. The key practices describe the infrastructure and activities that contribute most to the effective implementation and institutionalization of the key process area.
The five maturity levels of the People CMM are:
1) Initial.
2) Repeatable. The key process areas at Level 2 focus on instilling basic discipline into workforce activities. They are:
  • Work Environment
  • Communications
  • Staffing
  • Performance Management
  • Training
  • Compensation
3) Defined. The key process areas at Level 3 address issues surrounding the identification of the organization's primary competencies and aligning its people management activities with them. They are:
  • Knowledge and Skills Analysis
  • Workforce Planning
  • Competency Development
  • Career Development
  • Competency-Based Practices
  • Participatory Culture
4) Managed. The key process areas at Level 4 focus on quantitatively managing organizational growth in people management capabilities and in establishing competency-based teams. They are:
  • Mentoring
  • Team Building
  • Team-Based Practices
  • Organizational Competency Management
  • Organizational Performance Alignment
5) Optimizing. The key process areas at Level 5 cover the issues that address continuous improvement of methods for developing competency, at both the organizational and the individual level. They are:
  • Personal Competency Development
  • Coaching
  • Continuous Workforce Innovation

The Software Engineering Institute has released three documents describing the People CMM. These documents describe the People CMM and the key practices that correspond to each of its maturity levels, and give information on how to apply the People CMM in guiding organizational improvement. They contain an elaboration of what is meant by work-force capability (i.e., maturity) at each maturity level, and describe how the People CMM can be applied by an organization in two primary ways: as a standard for assessing work-force practices, and as a guide in planning and implementing improvement activities.

CMM Level 5 companies list

List of CMM-5 certified software service companies in India, listed in no particular order. The purpose of this list is to quickly provide a list of potential outsourcing companies, so companies that have simply relocated their own staff and services to India are not considered. Please send corrections to:

Sl no. Company Location
1 ANZ Operations & Technology Private Limited Bangalore
2 Applitech Solution Limited Ahmedabad
3 CBS India Chennai/Bangalore
4 CGI Information Systems and Management Consultants Private Ltd Bangalore
5 CG-Smith Software Limited Bangalore
6 Citicorp Overseas Software Limited Mumbai
7 Cognizant Technology Solutions Bangalore
8 Covansys India Pvt. Ltd. Bangalore
9 DCM Technologies Hyderabad
10 Engineering Analysis Center of Excellence Pvt. Ltd. (EACoE) Bangalore
11 FCG Software Services (India) Pvt. Ltd. Bangalore
12 Future Software Ltd Chennai
13 HCL Perot Systems Noida/Bangalore
14 HCL Technologies Limited Chennai
15 Hewlett Packard India Software Operations Limited Bangalore
16 Hexaware Technologies Limited Chennai and Mumbai
17 Honeywell India S/w Operations Bangalore
18 Hughes Software Systems Bangalore
19 IBM Global Services Bangalore
20 i-flex solutions limited, IT Services Divisions Mumbai and Bangalore
21 Information Technologies (India) Ltd. New Delhi
22 Infosys Technologies Limited Bangalore
23 InfoTech Enterprises Limited Hyderabad
24 Intergraph Consulting Pvt. Ltd., Hyderabad
25 International Computers (India) Ltd., Pune/Mumbai
26 ITC Infotech Ltd. Bangalore
27 Intelligroup Asia PVT.Ltd., Hyderabad
28 IT Solutions (India) Private Limited Bangalore and Chennai
29 Kshema technologies Ltd Bangalore
30 Larsen & Turbo Infotech Limited, Mumbai and Navi Mumbai
31 LG Soft India Pvt. Ltd Bangalore
32 MphasiS-BFL Limited Bangalore
33 Mastek Limited Mumbai
34 Motorola India Electronics Ltd., Bangalore
35 Network Systems & Technologies (P) Ltd., Trivandrum
36 NIIT, Software Solutions Bangalore
37 NeST Information Technology (P) Ltd.,
38 Patni Computer Systems Ltd Mumbai
39 Philips Software Centre Private Bangalore
40 Phoenix Global Solutions (I) Pvt. Ltd. Bangalore
41 Sasken Communication Technologies Limited. Bangalore
42 Satyam Computer Services Ltd. Hyderabad
43 SignalTree Solutions (India) Ltd. Hyderabad
44 SkyTECH Solutions Pvt Ltd. Kolkata and Mumbai,
45 Sobha Renaissance Information Technology Pvt. Ltd. Bangalore
46 Sonata Software Limited Bangalore
47 SSI Technologies Chennai
48 Syntel, Inc. (India)
49 Siemens Information Systems Ltd., Bangalore
50 Tata Consultancy Services Bangalore
51 Tata Elxsi Limited Bangalore
52 Tata Interactive Systems Mohali
53 TCG Software Services Pvt. Ltd Calcutta
54 Trigyn Technologies Ltd., Mumbai
55 Wipro Technologies Bangalore
56 Software Paradigms(I) Pvt.Ltd Mysore
57 Robert Bosch India Limited Bangalore
58 LG CNS Global Pvt.Ltd Bangalore/Delhi
59 Xicron Technology (http://xicrontech.com/)

Thursday, December 23, 2010

Change control

Change control is vital. But the forces that make it necessary also make it annoying. We
worry about change because a tiny perturbation in the code can create a big failure in the
product. But it can also fix a big failure or enable wonderful new capabilities. We worry
about change because a single rogue developer could sink the project; yet brilliant ideas
originate in the minds of those rogues, and a burdensome change control process could
effectively discourage them from doing creative work.

Version control

Version control combines procedures and tools to manage different versions of configuration
objects that are created during the software process.
Configuration management allows a user to specify alternative configurations of the software
system through the selection of appropriate versions. This is supported by associating
attributes with each software version, and then allowing a configuration to be specified
[and constructed] by describing the set of desired attributes.
These "attributes" mentioned can be as simple as a specific version number that is
attached to each object or as complex as a string of Boolean variables (switches) that
indicate specific types of functional changes that have been applied to the system.
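
A minimal sketch of that idea (the object names and attribute switches below are hypothetical): each stored version of a configuration object carries a version number plus a set of Boolean attributes, and a configuration is specified by asking for the versions whose attributes match the desired set:

# Hypothetical sketch: versions of a configuration object are tagged with
# a version number and Boolean attribute switches; a configuration is
# selected by describing the attributes it must have.
VERSIONS = [
    {"object": "report_module", "version": "1.2", "unicode_support": False, "audit_trail": True},
    {"object": "report_module", "version": "1.3", "unicode_support": True,  "audit_trail": True},
    {"object": "report_module", "version": "2.0", "unicode_support": True,  "audit_trail": False},
]

def select_configuration(desired: dict) -> list:
    """Return the versions whose attributes match every desired attribute."""
    return [v for v in VERSIONS
            if all(v.get(attr) == value for attr, value in desired.items())]

# Specify a configuration by its attributes rather than by version number.
print(select_configuration({"unicode_support": True, "audit_trail": True}))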

TEST PLAN OUTLINE

(IEEE 829 FORMAT)
1) Test Plan Identifier
2) References
3) Introduction
4) Test Items
5) Software Risk Issues
6) Features to be Tested
7) Features not to be Tested
8) Approach
9) Item Pass/Fail Criteria
10) Suspension Criteria and Resumption Requirements
11) Test Deliverables
12) Remaining Test Tasks
13) Environmental Needs
14) Staffing and Training Needs
15) Responsibilities
16) Schedule
17) Planning Risks and Contingencies
18) Approvals
19) Glossary

1 TEST PLAN IDENTIFIER
Some type of unique company generated number to identify this test plan, its level and the
level of software that it is related to. Preferably the test plan level will be the same as the
related software level. The number may also identify whether the test plan is a Master plan, a
Level plan, an integration plan or whichever plan level it represents. This is to assist in
coordinating software and testware versions within configuration management.
Keep in mind that test plans are like other software documentation, they are dynamic in nature
and must be kept up to date. Therefore, they will have revision numbers.
You may want to include author and contact information including the revision history
information as part of either the identifier section or as part of the introduction.


2 REFERENCES
List all documents that support this test plan. Refer to the actual version/release number of
the document as stored in the configuration management system. Do not duplicate the text
from other documents as this will reduce the viability of this document and increase the
maintenance effort. Documents that can be referenced include:
• Project Plan
• Requirements specifications
• High Level design document
• Detail design document
• Development and Test process standards
• Methodology guidelines and examples
• Corporate standards and guidelines



3 INTRODUCTION
State the purpose of the Plan, possibly identifying the level of the plan (master etc.). This is
essentially the executive summary part of the plan.
You may want to include any references to other plans, documents or items that contain
information relevant to this project/process. If preferable, you can create a references section
to contain all reference documents.
Identify the Scope of the plan in relation to the Software Project plan that it relates to. Other
items may include resource and budget constraints, the scope of the testing effort, how testing
relates to other evaluation activities (Analysis & Reviews), and possibly the process to be
used for change control and the communication and coordination of key activities.
As this is the “Executive Summary” keep information brief and to the point.


4 TEST ITEMS (FUNCTIONS)
These are things you intend to test within the scope of this test plan. Essentially, something
you will test, a list of what is to be tested. This can be developed from the software
application inventories as well as other sources of documentation and information.
This can be controlled and defined by your local Configuration Management (CM) process if
you have one. This information includes version numbers, configuration requirements where
needed (especially if multiple versions of the product are supported). It may also include key
delivery schedule issues for critical elements.
Remember, what you are testing is what you intend to deliver to the Client.
This section can be oriented to the level of the test plan. For higher levels it may be by
application or functional area, for lower levels it may be by program, unit, module or build.



5 SOFTWARE RISK ISSUES
Identify what software is to be tested and what the critical areas are, such as:
A. Delivery of a third party product.
B. New version of interfacing software
C. Ability to use and understand a new package/tool, etc.
D. Extremely complex functions
E. Modifications to components with a past history of failure
F. Poorly documented modules or change requests
There are some inherent software risks such as complexity; these need to be identified.
A. Safety
B. Multiple interfaces
C. Impacts on Client
D. Government regulations and rules
Another key area of risk is a misunderstanding of the original requirements. This can occur at
the management, user and developer levels. Be aware of vague or unclear requirements and
requirements that cannot be tested.
The past history of defects (bugs) discovered during Unit testing will help identify potential
areas within the software that are risky. If the unit testing discovered a large number of
defects or a tendency towards defects in a particular area of the software, this is an indication
of potential future problems. It is the nature of defects to cluster and clump together. If it
was defect ridden earlier, it will most likely continue to be defect prone.
One good approach to define where the risks are is to have several brainstorming sessions.
• Start with ideas such as, "What worries me about this project/application?"


6 FEATURES TO BE TESTED
This is a listing of what is to be tested from the USERS viewpoint of what the system does.
This is not a technical description of the software, but a USERS view of the functions.
Set the level of risk for each feature. Use a simple rating scale such as (H, M, L): High,
Medium and Low. These types of levels are understandable to a User. You should be
prepared to discuss why a particular level was chosen.
It should be noted that Section 4 and Section 6 are very similar. The only true difference is the
point of view. Section 4 is a technical type description including version numbers and other
technical information and Section 6 is from the User’s viewpoint. Users do not understand
technical software terminology; they understand functions and processes as they relate to their
jobs.


7 FEATURES NOT TO BE TESTED
This is a listing of what is NOT to be tested from both the Users viewpoint of what the system
does and a configuration management/version control view. This is not a technical
description of the software, but a USERS view of the functions.
Identify WHY the feature is not to be tested; there can be any number of reasons.
• Not to be included in this release of the Software.
• Low risk, has been used before and is considered stable.
• Will be released but not tested or documented as a functional part of the release of this
version of the software.
Sections 6 and 7 are directly related to Sections 5 and 17. What will and will not be tested are
directly affected by the levels of acceptable risk within the project, and what does not get
tested affects the level of risk of the project.


8 APPROACH (STRATEGY)
This is your overall test strategy for this test plan; it should be appropriate to the level of the
plan (master, acceptance, etc.) and should be in agreement with all higher and lower levels of
plans. Overall rules and processes should be identified.
• Are any special tools to be used and what are they?
• Will the tool require special training?
• What metrics will be collected?
• Which level is each metric to be collected at?
• How is Configuration Management to be handled?
• How many different configurations will be tested?
• Hardware
• Software
• Combinations of HW, SW and other vendor packages
• What levels of regression testing will be done and how much at each test level?
• Will regression testing be based on severity of defects detected?
• How will elements in the requirements and design that do not make sense or are
untestable be processed?
If this is a master test plan the overall project testing approach and coverage requirements
must also be identified.
Specify if there are special requirements for the testing.
• Only the full component will be tested.
• A specified segment or grouping of features/components must be tested together.
Other information that may be useful in setting the approach includes:
• MTBF, Mean Time Between Failures - if this is a valid measurement for the test involved
and if the data is available.
• SRE, Software Reliability Engineering - if this methodology is in use and if the
information is available.
How will meetings and other organizational processes be handled?
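
If the MTBF measure mentioned above is used, a simple way to compute it (a brief sketch, not part of the IEEE 829 outline itself) is to divide total operating time by the number of failures observed in that time:

def mean_time_between_failures(total_operating_hours: float, failure_count: int) -> float:
    """MTBF = total operating time / number of failures observed in that time."""
    if failure_count == 0:
        raise ValueError("MTBF is undefined when no failures have been observed")
    return total_operating_hours / failure_count

# Example: 1,200 hours of test operation with 4 failures -> MTBF of 300 hours.
print(mean_time_between_failures(1200, 4))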




9 ITEM PASS/FAIL CRITERIA
What are the Completion criteria for this plan? This is a critical aspect of any test plan and
should be appropriate to the level of the plan.
• At the Unit test level this could be items such as:
• All test cases completed.
• A specified percentage of cases completed with a percentage containing some number
of minor defects.
• Code coverage tool indicates all code covered.
• At the Master test plan level this could be items such as:
• All lower level plans completed.
• A specified number of plans completed without errors and a percentage with minor
defects.
These could be individual test-case-level criteria, unit-level plan criteria, or general
functional requirements for higher-level plans.

What is the number and severity of defects located?
• Is it possible to compare this to the total number of defects? This may be impossible, as
some defects are never detected.
• A defect is something that may cause a failure, and may be acceptable to leave in the
application.
• A failure is the result of a defect as seen by the User, the system crashes, etc.
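
As an illustration only (the thresholds below are invented, not prescribed by the standard), completion criteria like those above can be expressed as a simple check against the executed test results:

def plan_exit_criteria_met(total_cases: int, executed: int, passed: int,
                           open_major_defects: int,
                           min_executed_pct: float = 100.0,
                           min_pass_pct: float = 95.0) -> bool:
    """Sample completion check: all cases executed, pass rate above a threshold,
    and no major defects left open. Thresholds are illustrative only."""
    executed_pct = 100.0 * executed / total_cases
    pass_pct = 100.0 * passed / executed if executed else 0.0
    return (executed_pct >= min_executed_pct
            and pass_pct >= min_pass_pct
            and open_major_defects == 0)

# 200 planned cases, all executed, 194 passed, one major defect still open.
print(plan_exit_criteria_met(200, 200, 194, open_major_defects=1))  # False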



10 SUSPENSION CRITERIA AND RESUMPTION REQUIREMENTS
Know when to pause in a series of tests.
If the number or type of defects reaches a point where the follow-on testing has no value,
it makes no sense to continue the test; you are just wasting resources.
Specify what constitutes stoppage for a test or series of tests and what is the acceptable level
of defects that will allow the testing to proceed past the defects.
Testing after a truly fatal error will generate conditions that may be identified as defects but
are in fact ghost errors caused by the earlier defects that were ignored.



11 TEST DELIVERABLES
What is to be delivered as part of this plan?
• Test plan document.
• Test cases.
• Test design specifications.
• Tools and their outputs.
• Simulators.
• Static and dynamic generators.
• Error logs and execution logs.
• Problem reports and corrective actions.
One thing that is not a test deliverable is the software itself; that is listed under test items and
is delivered by development.


12 REMAINING TEST TASKS
If this is a multi-phase process or if the application is to be released in increments there may
be parts of the application that this plan does not address. These areas need to be identified to
avoid any confusion should defects be reported back on those future functions. This will also
allow the users and testers to avoid incomplete functions and prevent waste of resources
chasing non-defects.
If the project is being developed as a multi-party process, this plan may only cover a portion
of the total functions/features. This status needs to be identified so that those other areas have
plans developed for them and to avoid wasting resources tracking defects that do not relate to
this plan.
When a third party is developing the software, this section may contain descriptions of those
test tasks belonging to both the internal groups and the external groups.



13 ENVIRONMENTAL NEEDS
Are there any special requirements for this test plan, such as:
• Special hardware such as simulators, static generators etc.
• How will test data be provided? Are there special collection requirements or specific
ranges of data that must be provided?
• How much testing will be done on each component of a multi-part feature?
• Special power requirements.
• Specific versions of other supporting software.
• Restricted use of the system during testing.



14 STAFFING AND TRAINING NEEDS
Training on the application/system.
Training for any test tools to be used.
What is to be tested and who is responsible for the testing and training.



15 RESPONSIBILITIES
Who is in charge?
This issue includes all areas of the plan. Here are some examples:
• Setting risks.
• Selecting features to be tested and not tested.
• Setting overall strategy for this level of plan.
• Ensuring all required elements are in place for testing.
• Providing for resolution of scheduling conflicts, especially, if testing is done on the
production system.
• Who provides the required training?
• Who makes the critical go/no go decisions for items not covered in the test plans?




16 SCHEDULE
The schedule should be based on realistic and validated estimates. If the estimates for the development of
the application are inaccurate, the entire project plan will slip, and the testing, as part of the
overall project plan, will slip with it.
• As we all know, the first area of a project plan to get cut when it comes to crunch time at
the end of a project is the testing. It usually comes down to the decision, ‘Let’s put
something out even if it does not really work all that well’. And, as we all know, this is
usually the worst possible decision.
How slippage in the schedule is to be handled should also be addressed here.
• If the users know in advance that a slippage in the development will cause a slippage in
the test and the overall delivery of the system, they just may be a little more tolerant, if
they know it’s in their interest to get a better tested application.
• By spelling out the effects here you have a chance to discuss them in advance of their
actual occurrence. You may even get the users to agree to a few defects in advance, if the
schedule slips.
At this point, all relevant milestones should be identified with their relationship to the
development process identified. This will also help in identifying and tracking potential
slippage in the schedule caused by the test process.
It is always best to tie all test dates directly to their related development activity dates. This
prevents the test team from being perceived as the cause of a delay. For example, if system
testing is to begin after delivery of the final build, then system testing begins the day after
delivery. If the delivery is late, system testing starts from the day of delivery, not on a
specific date. This is called dependent or relative dating.
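
A small sketch of dependent (relative) dating, with an invented test duration: the system test window is computed from the actual delivery date rather than fixed on the calendar, so a late build moves the test dates instead of squeezing them:

from datetime import date, timedelta

def system_test_window(delivery_date: date, test_duration_days: int = 15):
    """System testing starts the day after delivery of the final build and
    keeps its planned duration, however late the delivery is."""
    start = delivery_date + timedelta(days=1)
    end = start + timedelta(days=test_duration_days - 1)
    return start, end

# Planned delivery vs. a delivery that slips by a week: the test window
# moves with it instead of being cut against a fixed end date.
print(system_test_window(date(2010, 12, 1)))
print(system_test_window(date(2010, 12, 8)))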



17 PLANNING RISKS AND CONTINGENCIES
What are the overall risks to the project with an emphasis on the testing process?
• Lack of personnel resources when testing is to begin.
• Lack of availability of required hardware, software, data or tools.
• Late delivery of the software, hardware or tools.
• Delays in training on the application and/or tools.
• Changes to the original requirements or designs.

Specify what will be done for various events, for example:
• Requirements definition will be complete by January 1, 19XX, and, if the requirements
change after that date, the following actions will be taken.
• The test schedule and development schedule will move out an appropriate number of
days. This rarely occurs, as most projects tend to have fixed delivery dates.
• The number of tests performed will be reduced.
• The number of acceptable defects will be increased.
• These two items could lower the overall quality of the delivered product.
• Resources will be added to the test team.
• The test team will work overtime.
• This could affect team morale.
• The scope of the plan may be changed.
• There may be some optimization of resources. This should be avoided, if possible,
for obvious reasons.
• You could just QUIT.
A rather extreme option to say the least. Management is usually reluctant to accept scenarios such as the one above even though they have seen it happen in the past. The important thing to remember is that, if you do nothing at all, the usual result is that testing is cut back or omitted completely, neither of which should be an acceptable option.


18 APPROVALS
Who can approve the process as complete and allow the project to proceed to the next level
(Depending on the level of the plan)?
At the master test plan level, this may be all involved parties.
When determining the approval process, keep in mind who the audience is.
• The audience for a unit test level plan is different than that of an integration, system or
Master level plan.
• The levels and type of knowledge at the various levels will be different as well.
• Programmers are very technical but may not have a clear understanding of the overall
Business process driving the project.
• Users may have varying levels of business acumen and very little technical skills.
• Always be wary of users who claim high levels of technical skills and programmers that
claim to fully understand the business process. These types of individuals can cause more
harm than good if they do not have the skills they believe they possess.


19 GLOSSARY
Used to define terms and acronyms used in the document, and testing in general, to eliminate
confusion and promote consistent communications.

Can you explain Co-habiting software?

When we install the application at the end client's site, it is very possible that
other applications also exist on the same PC. It is also very possible that those
applications share common DLLs, resources, etc., with your application. There
is a huge chance in such situations that your changes can affect the cohabiting
software. So the best practice is, after you install your application or make any
changes, to ask the other application owners to run a test cycle on their applications.

Which test cases are written first: white box or black box?

Black box test cases are normally written first. Black box test cases do not require system understanding,
but white box testing needs more structural understanding, and structural
understanding becomes clearer in the later part of the project, i.e., while designing or
executing it. For black box testing you only need to analyze the functional
perspective, which is easily available from a simple requirements document.

Can you explain calibration?

Calibration is a part of the ISO 9001 quality model. It includes tracing the accuracy of the devices used in production, development, and testing. The devices used must be maintained and calibrated to ensure that they are working in good order. The records are maintained in the quality system database. Each record includes:

  • Tracking number
  • Equipment description, type, model
  • Location
  • Calibration Intervals
  • Calibration procedure
  • Calibration history
  • Calibration Due
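
A minimal sketch of such a record (the field names simply mirror the list above; they are not taken from any particular quality system):

from dataclasses import dataclass
from datetime import date

@dataclass
class CalibrationRecord:
    """One entry in the quality system database; fields mirror the list above."""
    tracking_number: str
    equipment_description: str
    equipment_type: str
    model: str
    location: str
    calibration_interval_months: int
    calibration_procedure: str
    calibration_history: list
    calibration_due: date

record = CalibrationRecord(
    tracking_number="CAL-0042",
    equipment_description="Digital multimeter",
    equipment_type="Measurement",
    model="DM-110",
    location="Test lab 2",
    calibration_interval_months=12,
    calibration_procedure="QP-CAL-07",
    calibration_history=[date(2009, 12, 15)],
    calibration_due=date(2010, 12, 15),
)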

What are the different test plan documents in project?

The test plan documents that are prepared during a project are:
Central/Project test plan
Unit test plan
System test plan
Integration test plan
Acceptance test plan

Central/Project test plan: This is the main test plan which outlines the
complete test strategy of the software project. This document should be
prepared before the start of the project and is used until the end of the
software development lifecycle.


Acceptance test plan: This test plan is normally prepared with the end
customer. This document commences during the requirement phase and is
completed at final delivery.


System test plan: This test plan starts during the design phase and proceeds
until the end of the project.


Integration and unit test plan: Both of these test plans start during the
execution phase and continue until the final delivery.

Can you explain the concept of baseline in software development?

A baseline is a software configuration management concept that helps us to control change without seriously impeding justifiable change.

A specification or product that has been formally reviewed and agreed upon, that thereafter serves as the basis for further development, and that can be changed only through formal change control procedures.

One way to describe a baseline is through analogy: Consider the doors to the kitchen in a large restaurant. One door is marked OUT and the other is marked IN. The doors have stops that allow them to be opened only in the appropriate
direction. If a waiter picks up an order in the kitchen, places it on a tray and then realizes he has selected the wrong dish, he may change to the correct dish quickly and informally before he leaves the kitchen. If, however, he leaves the kitchen, gives the customer the dish and then is informed of his error, he must follow a set procedure: (1) look at the check to determine if an error has
occurred, (2) apologize profusely, (3) return to the kitchen through the IN door, (4) explain the problem, and so forth.

A baseline is analogous to the kitchen doors in the restaurant.
Before a software configuration item becomes a baseline, change may be made quickly and informally. However, once a baseline is established, we figuratively pass through a swinging one-way door. Changes can be made, but a specific, formal procedure must be applied to evaluate and verify each change.

Can you explain regression testing and confirmation testing?

Confirmation testing: If we fix a defect in an existing application, we use
confirmation testing (also called retesting) to verify that the defect has actually been removed.

Regression testing: Because of that defect fix, or of other changes to the
application, other sections of the application may be affected. Regression testing
re-runs existing tests (after conducting an impact analysis) to verify that the bug
fixes work as specified and that no other feature has been broken by them.
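
A small example makes the distinction concrete (the function and test names are invented): after a fix, the confirmation test is the one that re-checks the exact reported defect, while the regression tests are the existing tests around it that are re-run to make sure the fix broke nothing else.

# Code under test: a routine that previously failed for zero quantity
# (hypothetical defect) and has now been fixed.
def order_total(unit_price: float, quantity: int) -> float:
    if quantity <= 0:
        return 0.0          # the fix: previously this path raised an error
    discount = 0.1 if quantity >= 10 else 0.0
    return unit_price * quantity * (1 - discount)

# Confirmation test: re-runs the exact scenario of the reported defect.
def test_zero_quantity_defect_fixed():
    assert order_total(5.0, 0) == 0.0

# Regression tests: existing behaviour that the fix must not have broken.
def test_normal_order_unchanged():
    assert order_total(5.0, 2) == 10.0

def test_bulk_discount_unchanged():
    assert order_total(5.0, 10) == 45.0

if __name__ == "__main__":
    test_zero_quantity_defect_fixed()
    test_normal_order_unchanged()
    test_bulk_discount_unchanged()
    print("all tests passed")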

What is configuration management?

In software engineering, software configuration management (SCM) is the task of tracking and controlling changes in the software. Configuration management practices include revision control and the establishment of baselines.
SCM concerns itself with answering the question "Somebody did something, how can one reproduce it?" Often the problem involves not reproducing "it" identically, but with controlled, incremental changes. Answering the question thus becomes a matter of comparing different results and of analysing their differences. Traditional configuration management typically focused on controlled creation of relatively simple products. Now, implementers of SCM face the challenge of dealing with relatively minor increments under their own control, in the context of the complex system being developed.

The goals of SCM are generally:
  • Configuration identification - Identifying configurations, configuration items and baselines.
  • Configuration control - Implementing a controlled change process. This is usually achieved by setting up a change control board whose primary function is to approve or reject all change requests that are sent against any baseline.
  • Configuration status accounting - Recording and reporting all the necessary information on the status of the development process.
  • Configuration auditing - Ensuring that configurations contain all their intended parts and are sound with respect to their specifying documents, including requirements, architectural specifications and user manuals.
  • Build management - Managing the process and tools used for builds.
  • Process management - Ensuring adherence to the organization's development process.
  • Environment management - Managing the software and hardware that host the system.
  • Teamwork - Facilitating team interactions related to the process.
  • Defect tracking - Making sure every defect has traceability back to the source.

What is the difference between Software Testing and Debugging?


Testing is the process of locating or identifying the errors or bugs in a software system, whereas debugging is the process of fixing the identified bugs. Debugging involves analyzing and rectifying the syntax errors, logic errors, and all other types of errors identified during the process of testing.

Wednesday, December 22, 2010

What’s the difference between Inspections and Walkthroughs?

A walkthrough is an informal meeting for evaluation or informational purposes. A walkthrough is also a process at an abstract level: it is the process of inspecting software code by following paths through the code (as determined by input conditions and choices made along the way). The purpose of code walkthroughs is to ensure that the code fits its purpose. Walkthroughs also offer opportunities to assess an individual's or team's competency.


An inspection is a formal meeting, more formalized than a walkthrough, and typically consists of 3-10 people including a moderator, a reader (the author of whatever is being reviewed), and a recorder (to make notes in the document). The subject of the inspection is typically a document such as a requirements document or a test plan. The purpose of an inspection is to find problems and see what is missing, not to fix anything. The result of the meeting should be documented in a written report. Attendees should prepare for this type of meeting by reading through the document before the meeting starts; most problems are found during this preparation. Preparation for inspections is difficult, but it is one of the most cost-effective methods of ensuring quality, since bug prevention is more cost effective than bug detection.

What are different types of verifications?

Software verification is a broader and more complex discipline of software engineering whose goal is to assure that software fully satisfies all the expected requirements.
There are two fundamental approaches to verification:
  • Dynamic verification, also known as Test or Experimentation - This is good for finding bugs
  • Static verification, also known as Analysis - This is useful for proving the correctness of a program, although it may result in false positives

There are four levels of verification:
1. Component Testing: Verifying the design and implementation of one software element, such as a unit or module, or a group of software elements.
2. Integration Testing: An orderly progression of testing in which various software and/or hardware elements are integrated together and tested; it continues until the complete system has been integrated.
3. System Testing: Testing of the integrated software and hardware system to verify whether the system meets the specified requirements.
4. Acceptance Testing: A testing process that determines whether a system satisfies the acceptance criteria, enabling the customer to determine whether or not to accept the system.

What is coverage and what are the different types of coverage techniques?

  
Coverage is a measurement used in software testing to describe the degree
to which the source code is tested. The three basic types of coverage
techniques are:
Statement coverage: This coverage ensures that each line of
source code has been executed and tested.

 Decision coverage: This coverage ensures that every decision
(true/false) in the source code has been executed and tested.

 Path coverage: In this coverage we ensure that every possible
route through a given part of code is executed and tested.
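
A small example (an invented function) makes the difference concrete: statement coverage only needs every line to run, decision coverage needs each condition to evaluate both true and false, and path coverage needs every combination of outcomes:

def label(a: int, b: int) -> list:
    labels = []
    if a > 0:            # decision 1
        labels.append("a positive")
    if b > 0:            # decision 2
        labels.append("b positive")
    return labels

# Statement coverage: label(1, 1) alone executes every statement.
# Decision coverage:  label(1, 1) and label(0, 0) make each decision
#                     evaluate to both True and False.
# Path coverage:      all four combinations are needed:
#                     label(1, 1), label(1, 0), label(0, 1), label(0, 0).
print(label(1, 1), label(1, 0), label(0, 1), label(0, 0))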

On what basis is the Acceptance plan prepared?

The acceptance plan is based on a formal product evaluation performed by the
customer as a condition of purchase. The testing can be based upon the
User Requirements Specification, to which the system should
conform. User acceptance testing is black box testing.

User Acceptance Testing is the last stage of the Software
Development Lifecycle. In user acceptance testing we
validate that all the user requirements specified in the SRS are
met after the code freeze. The acceptance test is the
responsibility of the client/customer or project manager;
however, it is conducted with the full support of the
project team. It determines, before the software goes live, whether the
software is satisfactory to the end-user or customer.
The status of any bug found would depend on the bug itself, but
it would normally be given high priority.

What does entry and exit criteria mean in a project?

Entry criteria in testing means the criteria which have to be satisfied for the testing activity to be started. For example, if you have to start testing an application, then the following are some of the entry criteria:
1. Application should be deployed
2. Version number should be changed
3. Access needed for testing the application should be given to the appropriate testing team
4. Database should be accessible
5. Test plan should be approved

The testing activity will be considered complete when all the activities mentioned in the exit criteria are satisfied. The following are some exit criteria for the completion of a smoke test on an application.
1. Major functionalities should be tested.
2. All the smoke test results should be passed.
3. Smoke Test execution results should be approved.

How will you do a risk analysis during software testing?

Risk Analysis

In this tutorial you will learn about Risk Analysis, Technical Definitions, Risk Analysis, Risk Assessment, Business Impact Analysis, Product Size Risks, Business Impact Risks, Customer-Related Risks, Process Risks, Technical Issues, Technology Risk, Development Environment Risks, Risks Associated with Staff Size and Experience.

Risk Analysis is one of the important concepts in the Software Product/Project Life Cycle. Risk analysis is broadly defined to include risk assessment, risk characterization, risk communication, risk management, and policy relating to risk. Risk assessment is also called security risk analysis.

Technical Definitions:

Risk Analysis: A risk analysis involves identifying the most probable threats to an organization and analyzing the related vulnerabilities of the organization to these threats.

Risk Assessment: A risk assessment involves evaluating existing physical and environmental security and controls, and assessing their adequacy relative to the potential threats of the organization.

Business Impact Analysis: A business impact analysis involves identifying the critical business functions within the organization and determining the impact of not performing the business function beyond the maximum acceptable outage. Types of criteria that can be used to evaluate the impact include: customer service, internal operations, legal/statutory and financial.
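
One common way to quantify the assessment step (not spelled out in the definitions above, so treat it as a supplementary sketch with invented figures) is to compute a risk exposure for each identified risk as probability times cost of impact, and then rank the risks:

# Supplementary sketch: risk exposure = probability of the risk occurring
# multiplied by the cost (impact) if it does. Figures are illustrative.
risks = [
    {"risk": "Late delivery of third-party component", "probability": 0.40, "cost": 25000},
    {"risk": "Key tester unavailable",                  "probability": 0.20, "cost": 12000},
    {"risk": "Requirements change after design freeze", "probability": 0.50, "cost": 40000},
]

for r in risks:
    r["exposure"] = r["probability"] * r["cost"]

# Rank risks by exposure so the highest-exposure items get attention first.
for r in sorted(risks, key=lambda r: r["exposure"], reverse=True):
    print(f"{r['risk']:<45} exposure = {r['exposure']:>8.0f}")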

Risks for a software product can be categorized into various types. Some of them are:

Product Size Risks:

The following risk item issues identify some generic risks associated with product size:

  • Estimated size of the product and confidence in estimated size? 
  • Estimated size of product? 
  • Size of database created or used by the product? 
  • Number of users of the product? 
  • Number of projected changes to the requirements for the product?
Risk will be high when a large deviation occurs between expected values and previous experience. All the expected information must be compared to previous experience for the analysis of risk.

Business Impact Risks:

The following risk item issues identify some generic risks associated with business impact:

  • Effect of this product on company revenue? 
  • Reasonableness of delivery deadline? 
  • Number of customers who will use this product and the consistency of their needs relative to the product? 
  • Number of other products/systems with which this product must be interoperable? 
  • Amount and quality of product documentation that must be produced and delivered to the customer? 
  • Costs associated with late delivery or a defective product?

Customer-Related Risks:

Different customers have different needs. Customers have different personalities. Some customers accept what is delivered and others complain about the quality of the product. In some cases, customers may have a very good association with the product and the producer, and other customers may not. A bad customer represents a significant threat to the project plan and a substantial risk for the project manager.

The following risk item checklist identifies generic risks associated with different customers:

  • Have you worked with the customer in the past? 
  • Does the customer have a solid idea of what is required? 
  • Will the customer agree to spend time in formal requirements gathering meetings to identify project scope? 
  • Is the customer willing to participate in reviews? 
  • Is the customer technically sophisticated in the product area? 
  • Does the customer understand the software engineering process?

Process Risks:

If the software engineering process is ill-defined or if analysis, design and testing are not conducted in a planned fashion, then risks are high for the product.

  • Has your organization developed a written description of the software process to be used on this project? 
  • Are the team members following the software process as it is documented? 
  • Are the third party coders following a specific software process and is there any procedure for tracking their performance? 
  • Are formal technical reviews done regularly by both the development and testing teams? 
  • Are the results of each formal technical review documented, including defects found and resources used? 
  • Is configuration management used to maintain consistency among system/software requirements, design, code, and test cases? 
  • Is a mechanism used for controlling changes to customer requirements that impact the software?

Technical Issues:

  • Are specific methods used for software analysis? 
  • Are specific conventions for code documentation defined and used? 
  • Are any specific methods used for test case design? 
  • Are software tools used to support planning and tracking activities? 
  • Are configuration management software tools used to control and track change activity throughout the software process? 
  • Are tools used to create software prototypes? 
  • Are software tools used to support the testing process? 
  • Are software tools used to support the production and management of documentation? 
  • Are quality metrics collected for all software projects? 
  • Are productivity metrics collected for all software projects?

Technology Risk:

  • Is the technology to be built new to your organization? 
  • Does the software interface with new hardware configurations? 
  • Does the software to be built interface with a database system whose function and performance have not been proven in this application area? 
  • Is a specialized user interface demanded by product requirements? 
  • Do requirements demand the use of new analysis, design or testing methods? 
  • Do requirements put excessive performance constraints on the product?

Development Environment Risks:

  • Is a software project and process management tool available? 
  • Are tools for analysis and design available? 
  • Do analysis and design tools deliver methods that are appropriate for the product to be built? 
  • Are compilers or code generators available and appropriate for the product to be built? 
  • Are testing tools available and appropriate for the product to be built? 
  • Are software configuration management tools available? 
  • Does the environment make use of a database or repository? 
  • Are all software tools integrated with one another? 
  • Have members of the project team received training in each of the tools?

Risks Associated with Staff Size and Experience:

  • Are the best people available, and are there enough of them for the project? 
  • Do the people have the right combination of skills? 
  • Are staff committed for the entire duration of the project?