System Analysis & Design and Software Engineering
TRANSCRIPT
Sub: SAD Chapter-1 SAD And Software Engineering B.C.A Sem-3
Page 1 of 25
➢ Introduction to SAD
• SAD stands for System Analysis & Design.
• This subject is needed by system developers. Before computers were available, all records were maintained manually; record-keeping systems were large, inefficient, and error-prone. With computers available, we build the system on a computer so we do not need to calculate and store records manually.
• So the purpose of SAD is to understand a system and convert it into computer software or a program.
• For this we have to collect useful data and understand user requirements.
• From data you can get information, which must be accurate and delivered quickly.
➢ What is Information
• Def.: When we process data and convert it into a form that is useful and meaningful to the decision maker, it becomes information.
• When a person invests money in any technology, he also expects some reward. For this purpose we have to develop the system properly: it should fulfil the users' demands and be user friendly.
• A computer cannot know whether information is right or wrong; it simply accepts whatever we input into it.
• So we have to decide whether the information is accurate or not.
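As a small illustration of data becoming information, raw numbers mean little until they are processed into a form a decision maker can use. A minimal sketch in Python (the sales figures are invented for illustration):

```python
# Raw data: daily sales figures for one week (invented numbers)
daily_sales = [1200, 950, 1430, 1100, 1680]

# Processing turns the data into information for a decision maker:
total = sum(daily_sales)
average = total / len(daily_sales)
best_day = daily_sales.index(max(daily_sales)) + 1  # 1-based day number

print(total, round(average, 1), best_day)  # → 6360 1272.0 5
```

The list of numbers is data; the total, the average, and "day 5 was the best day" are information.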
1. System Analysis & Design and Software Engineering, Concepts of Quality Assurance
Topics Covered
1. Introduction
2. System, subsystem, business system, information system (definitions only)
3. System analyst (roles: information analyst, system designer & programmer analyst)
4. SDLC
5. Fact-finding techniques (interview, questionnaire, record review and observation)
6. Tools for documenting procedures and decisions: decision trees and decision tables
7. Data flow analysis tools: DFD (context and zero level) and data dictionary
8. Software engineering (brief introduction)
9. Introduction to QA
Definitions:
(1) System: "A system is a set of components that interact with each other to accomplish a specific task or achieve some goal."
Ex. 1) Human body
2) Computer
• The human body consists of various parts that are related to each other, such as the heart, hands, and legs. The human body tries to achieve the goal of living life well, so the human body can be called a 'system'.
• A computer is made of many individual components such as hardware, software, data, and procedures. These components are used to achieve the computer's goal, so a computer is also called a system.
(2) Subsystem: "Some components of a system are themselves a kind of system; each such component is called a subsystem."
• Any system consists of components that can be called subsystems.
• More than one subsystem together make one system.
• Suppose we call a computer one system. Components like the CPU and the monitor are also systems in their own right, so the CPU and monitor are called subsystems of the computer system.
Example:
In this example the company is an example of a system, and the departments of the company are called subsystems.
(3) Business System: "A business is a system because its components (marketing, manufacturing, sales, research, shipping, accounting, and personnel) all work together to create a profit that benefits the employees and stockholders of the company."
• Each of these components is itself called a business system.
• For example, the accounting department may consist of accounts payable, accounts receivable, billing, auditing, and so on.
[Figure: Company as a system, with the Marketing, Finance, and Production departments as its subsystems]
(4) Information system: "A system which purely depends on information, and which provides you information, is called an information system."
• An information system consists of subsystems including hardware, software, and data storage for files and databases.
• A particular set of subsystems, i.e. the specific equipment, programs, files, and procedures used, constitutes an information system application.
• So an information system can have purchasing, accounting, or sales applications.
• Every business system depends on more or less abstract entities called information systems.
• This system is the means by which data flow from one person or department to another.
• Information systems serve all the systems of a business, linking the different components in such a way that they effectively work toward the same purpose.
Ex. Weather report, cricket match score
➢ System Analysis and Design:
The system development process is divided into two major areas:
1. System Analysis
2. System Design
1) System Analysis:
Analysis is the process of gathering data and filtering out the useful data. Analysis is considered a theoretical activity; it includes fact finding, feasibility study, and various types of analysis techniques.
2) System Design:
System design implements the analyzed data to complete the system. It includes data flow diagrams, output design, input design, and file and database design. Analysis specifies what the system should do; design states how to accomplish the objective.
➢ Definition of System Analyst:
A person who conducts a methodical study and evaluation of an activity, such as a business, to identify its desired objectives and to determine the procedures by which those objectives can be attained.
➢ Responsibilities of a System Analyst:
1) System analysis only:
In this role the analyst's responsibility is to conduct system studies to gather information and determine requirements about a business activity.
2) System analysis & design:
The analyst analyzes the complete system and now also has the responsibility of designing the new system. Such analysts are system designers or application developers.
3) System analysis, design & programming:
For this type of responsibility, the system analyst first gathers information about the system, then determines the requirements and performs a feasibility study. After this work is done, he develops the design specification and programs the software. This type of work is done by programmer analysts.
➢ Functions of a System Analyst:
• The task of the system analyst is to elicit needs and resource constraints and to translate these into a viable operation.
• His primary responsibility is to identify the information needs of an organization and obtain a logical design of an information system which will meet those needs.
1) Defining requirements:
• The first, most important, and most difficult task is to understand the user's requirements.
• Requirements change from time to time.
• In this step the analyst uses interview, questionnaire, document review, and on-site observation techniques.
2) Prioritizing requirements by agreement:
• After defining the requirements, there is a need to set priorities among the requirements of the users.
• For this the analyst has to conduct meetings with all the users and arrive at an acceptable decision.
• For this task the analyst must have good interpersonal relations and diplomacy; he must be able to win the cooperation of all the users.
3) Analysis & evaluation:
• In this step the analyst analyzes the working of the current information system, and then filters the facts and opinions gathered from the users.
• Through this step the analyst rejects redundant data and focuses on the important data.
• Graphical means of data analysis are useful in this task.
4) Gathering data, facts & opinions of users:
• The analyst has to gather information regarding the users' needs and priorities.
• This information is the result of the users' long experience and expertise, so the users must be aware of how the system will be designed from their information.
• The users therefore have to check whether the information gathered by the analyst is true or false.
• The analyst has to take the opinions of the users on this, and only then can he carry on with his project.
5) Solving problems:
• It is quite likely that during analysis the analyst will find some difficulties.
• The analyst must therefore study the problem in depth and suggest alternative solutions to the management.
• The manager can then pick what he considers the best solution.
• To inform the manager, the analyst may produce tables or graphs that provide some comparison.
6) Drawing up functional specifications:
• The analyst now draws up the functional specifications of the system to be designed.
• The specification must be non-technical, so that users and managers understand it, yet precise and detailed enough that it can be used by the system implementers.
7) Designing the system:
• Once the specifications are accepted, the analyst designs the system.
• The design must be understandable to the system implementer and must be easy to change.
• The analyst must know the latest design tools, and he must create a system test plan.
8) Evaluating the system:
• An analyst must evaluate a system after it has been in use for a reasonable period of time.
• He must keep an open mind to accept valid criticism.
• This enables him to carry out the necessary improvements.
➢ Roles of the System Analyst:
1) Change agent:
• A candidate system is designed to introduce change and reorientation in how the user organization handles information or makes decisions, so it is necessary that the users accept the changes.
• For user acceptance, analysts prefer user participation during design and implementation.
• So, in the role of change agent, system analysts may use different approaches to introduce changes to the user organization.
2) Investigator & monitor:
• If any system is failing, or there is any fault in it, the system analyst must investigate to find the reason for it.
• As an investigator he must extract the problems from that system and create information structures that uncover previously unknown trends that may be useful for the organization.
• As a monitor, he must undertake and successfully complete a project with regard to time, cost, and quality.
3) Architect:
• As an architect, the analyst must create the detailed physical design of candidate systems, or design the system architecture, on the basis of the end-user requirements.
• These designs become the blueprint for the programmers.
4) Psychologist:
• In the role of psychologist he must interpret people's thoughts: he should understand the thinking of the people he meets, assess their behavior, and draw conclusions from these interactions.
• This role is played mostly during fact finding.
5) Motivator:
• System acceptance is achieved through user participation in its development, effective user training, and proper motivation to use the system.
• The motivator's work is done mainly during the first few weeks after implementation, and also when turnover results in new people being trained to work with the system.
6) Intermediary:
• In implementing a candidate system, the analyst tries to get all parties present and involved.
• With diplomacy in dealing with people, they can accept the change and improvement, and the analyst will achieve his goal.
7) Salesperson:
• While doing analysis, the critical activity, the analyst should also be capable of acting as a salesperson; this contributes to the success of the system.
8) Politician:
• The analyst works with different categories of people such as managers, accountants, programmers, and clerks.
• Diplomacy and finesse in dealing with people can improve acceptance of the system.
9) Innovator:
• The systems analyst must separate the symptoms of the user's problem from the true causes.
• With his or her knowledge of computer technology, the analyst must help the user search for useful new applications of computers.
➢ Systems Designer:
• The systems designer is the person (or group of people) who receives the output of the systems analysis work. His or her job is to transform a technology-free statement of user requirements into a high-level architectural design that provides the framework within which the programmer can work.
• In many cases, the systems analyst and the systems designer are the same person, or members of the same unified group of people. It is important for the systems analyst and systems designer to stay in close touch throughout the project.
➢ Programmer Analyst:
• Particularly on large systems development projects, the systems designers are likely to be a "buffer" between the systems analysts and the programmers.
• The systems analysts deliver their product to the system designers, and the system designers deliver their product to the programmers.
• There is another reason why the systems analyst and the programmer may have little or no contact with each other: in many systems development projects the work is performed in a strictly serial sequence. Thus the work of systems analysis takes place first and is completely finished before the work of programming begins.
➢ Information Analyst:
• An information analyst is one who gathers all the information as per the requirement.
• The work of the information analyst is only to gather all the information about particular data and then give the collected information to the system designer.
➢ Explain SDLC in detail.
• The definition of SDLC is as follows:
• When the systems approach is applied to the development of an information system, a multistep cycle emerges. This multistep cycle is known as the System Development Life Cycle (SDLC).
• SDLC is classically thought of as the set of activities that analysts, designers, and users carry out to develop and implement an information system.
• The phases of SDLC are shown in the figure below.
• These phases are also known as the stages of the System Development Life Cycle.
• We have to consider the below-mentioned activities to solve the problems:
o Understanding the problem
o Deciding the plan for the solution
o Coding the planned solution
o Testing the actual program
• The various phases to be performed for developing a software system are as follows:
o Preliminary Investigation
o Requirement Analysis
o System Design
o Coding Phase
o System Testing
o System Maintenance and Evolution Phase
[Figure: SDLC phases: Preliminary Investigation → Requirement Analysis → System Design Phase → Coding Phase → System Testing Phase → Maintenance and Evolution Phase]
➢ Preliminary Investigation:
o The first stage is the preliminary investigation. The main aim of this stage is to identify the particular problem. It is also known as the initial investigation. The preliminary investigation is needed because only after this stage can we move further in our analysis.
o Suppose the manager of a company wants to create software for his company; the software is known as a 'problem' in technical terms because it has its limitations and needs.
o With the preliminary analysis we can determine the nature and scope of the problem, and we can solve the problem and get possible solutions.
o When the request is made, the preliminary investigation begins with these activities:
A. Request Clarification
B. Feasibility Study
• Technical Feasibility
• Operational Feasibility
• Economic Feasibility
• Legal Feasibility
C. Request Approval
A. Request Clarification
o Many requests from employees and users in an organization are not clearly stated. Therefore, before any further steps, the project request must be clearly stated.
o If a request is not clearly stated, the systems analyst has to get a clarification (by phone call or a personal meeting) from the user regarding exactly what the user wants.
o If a request is made without any clarification, it is difficult to understand it.
o The project request must be examined to determine the actual need or idea.
B. Feasibility Study
o The process of developing a large system can be very costly, so the investigation stage may require a preliminary study known as a feasibility study.
o An important outcome of the preliminary investigation is the determination of whether the requested system is feasible.
o Not all requests are possible to fulfill.
o The system analyst is supposed to find out whether the system is feasible or not.
o The main aim of the feasibility study is to check whether the product is feasible.
o The feasibility study includes analysis of the problem and collection of the data which would be input to the system.
o The different aspects which are important in a feasibility study are as follows:
• Technical Feasibility
• Economic Feasibility
• Legal Feasibility
• Operational Feasibility
• Technical Feasibility:
o Whether it is technically possible with the existing technology or with the intended (planned) new technology.
o It involves considering whether enough equipment, software technology, and people are available for doing the project; whether any new technology or other
resources are needed to develop the system; and whether, once developed, the system can be expanded for any new need in the future.
o It is carried out by a small group of people who are familiar with information systems.
• Economic Feasibility:
o It includes the calculations of the cost of the full system, the cost of hardware and software, the cost of different alternative solutions, and the cost if nothing changes.
o It determines whether the cost is acceptable for the expected benefit.
o This feasibility study is carried out by a small team of 2-3 persons experienced and skilled in this activity.
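The cost-versus-benefit comparison at the heart of economic feasibility can be sketched with simple arithmetic. A toy example in Python (all figures are invented for illustration, not from the text):

```python
# Hypothetical one-time and recurring figures for a candidate system
development_cost = 200000      # hardware + software + building the system
annual_running_cost = 20000    # operations, maintenance
annual_benefit = 90000         # expected savings plus extra revenue

# Net benefit per year, and how long it takes to recover the investment
annual_net_benefit = annual_benefit - annual_running_cost
payback_years = development_cost / annual_net_benefit

print(round(payback_years, 2))  # → 2.86
```

If the payback period is acceptable to management, the request passes the economic feasibility check.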
• Operational Feasibility:
o Is it operationally feasible, considering the users' degree of resistance and other operational problems?
o Whether the system is operable by the operating people; whether the system is beneficial to the organization; and whether the proposed system will produce the required results under the particular circumstances: these are the key points of the operational feasibility study.
• Legal Feasibility:
o Legal feasibility studies the legal issues arising out of the development of the system.
o The possible considerations might include copyright law, labour law, antitrust legislation, foreign trade regulation, etc.
o Legal feasibility plays a major role in formulating contracts between vendors and users.
o Another important legal aspect is that when the vendor and the user are not in the same country, the tax laws, foreign currency transfer regulations, etc. have to be taken care of.
C. Request Approval
o After the request clarification and feasibility study, the request must be approved by the higher-level managers or directors before going on to further steps to develop the system.
o Not all requested projects are desirable or feasible.
➢ Requirement Analysis:
o Requirement analysis is not a preliminary study; it is an in-depth study of end-user information that produces the user requirements.
o Requirement analysis includes the following tasks:
o Organizational analysis.
o Analysis of the present system.
o Functional requirements analysis:
▪ User interface requirements.
▪ Processing requirements.
▪ Storage requirements.
▪ Control requirements.
➢ System Design Phase:
o System design specifies how the system will accomplish the objective.
o System design includes the following components:
▪ User interface design
▪ Input and output design
▪ Data design
▪ Process or program design
▪ Technical system specification
o System design is the initial step in moving from the problem domain to the solution domain.
o The designing is mostly divided into two phases:
o System design (top-level design)
o Detailed design
o The design phase should be understandable to the developer, so that when any update to the problem occurs, the programmer can easily handle it.
➢ Coding Phase:
o The system design needs to be implemented to make it a workable system. This demands the coding of the design into an understandable language.
o The coding affects both the testing of the program and its maintenance. With good coding we can easily develop and write the program.
o The main aims of this phase are simplicity and clarity.
➢ System Testing Phase:
• Before any system is used, it must be tested.
• System testing exercises the system experimentally to ensure that the software does not fail.
• The system must run according to its specification.
• Special test data are input for processing, and the results are examined.
• After the system is verified, it is used.
The following are some types of testing:
o Data set testing.
o Unit testing
o System testing
o Integration testing
o Black box testing
o White box testing
o Regression testing
o Automation testing
o User acceptance testing
o Performance testing
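To make one of these concrete, unit testing checks a single program component in isolation. A minimal sketch in Python using the standard `unittest` module (the `apply_discount` function is a hypothetical example, not from the text):

```python
import unittest

def apply_discount(amount, rate):
    """Return the amount after deducting the given discount rate."""
    return amount - amount * rate

class TestApplyDiscount(unittest.TestCase):
    def test_four_percent(self):
        # 4% off 10000 leaves 9600
        self.assertEqual(apply_discount(10000, 0.04), 9600.0)

    def test_zero_rate(self):
        # a zero rate leaves the amount unchanged
        self.assertEqual(apply_discount(500, 0.0), 500.0)

# Run the tests without exiting the interpreter
unittest.main(argv=["ignored"], exit=False)
```

Each test feeds the unit special test data and examines the result, exactly as the phase description above requires; system and integration testing do the same at larger scale.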
➢ Maintenance and Evolution Phase:
o This phase is concerned with removing faults after delivery and maintaining the system.
o Faults will be discovered long after the system is installed, and as these faults are detected they have to be removed.
o Maintenance activities related to the fixing of errors fall under corrective maintenance.
o It is an important duty of the developer to provide training to the users on how to use the software application.
o Another activity of this phase is changing the input data, the system environment, and the output formats.
o These changes require modification of the system.
o These maintenance activities fall under adaptive maintenance.
o This phase is based on the existing software and also involves understanding its documentation.
o In this phase the review of the system is done for:
▪ Knowing all the capabilities of the system.
▪ Knowing the required changes.
▪ Studying the performance.
o A new project has to be set up to carry out the changes.
➢ Fact-Finding Techniques:
• Fact finding is an important activity in system investigation.
• The functioning of the system has to be understood by the system analyst in order to design the proposed system.
• These techniques are also known as data and fact gathering techniques.
• Through them the analyst fully understands the current system.
• He/she uses these techniques to investigate the requirements.
• They help to collect accurate and specific information regarding the proposed system.
• The techniques used to gather this data are known as fact-finding or data gathering techniques.
• Various kinds of techniques are used; the most popular among them are given below:
1. Interview
2. Questionnaires
3. Record reviews
4. Observation
1. Interviews:
• This is an important technique, as the analyst directly contacts the system and the potential users of the proposed system.
• The interviewer should establish a rapport and understanding with the interviewee.
• The analyst should prefer day-to-day language instead of jargon and technical terms.
• This method is used to collect information from a group or a single user.
• The information is quite accurate and reliable.
Guidelines for the interviewer:
• Set the stage
o Establish a rapport
o Phrase questions clearly and briefly
o Be a good listener; avoid arguments
o Evaluate the outcome of the interview
• There are two types of interviews:
o Structured
o Unstructured
• Structured Interviews
o The interviewee is asked a standard set of questions in a particular order.
o All interviewees are asked the same set of questions.
The questions are asked in two types of format:
• Open response format
o The respondent is free to answer in his own words.
• Closed response format
o This limits the respondents to giving their answer from a set of already prescribed choices.
• Unstructured Interviews
o Unstructured interviews are undertaken in a free question-and-answer format.
o They are more flexible in nature than structured interviews.
o They are rightly used to gather general information about the system.
o Here the respondents are free to answer in their own words (their views are not restricted).
2. Questionnaires
• Another way of information gathering, in which the potential users of the system are given questionnaires to fill up and return to the analyst.
• Useful when the analyst needs to gather information from a large number of people.
• It is not possible to interview each individual.
• There are two types of questionnaires:
• Open response based
• Closed response based
• Open response based
• Gathers information and data about the essential and critical design of the system.
• An open-ended question requires no response direction or specific response.
• Used to learn about the feelings, opinions, and experiences of the respondents.
• Helps to make the system effective.
• Closed response based
• Collects factual information about the system.
• It deals with how the system behaves and how comfortable users are with it.
• In this type of questionnaire the answer to each question is a multiple choice or a logical type of data, i.e. yes/no or true/false.
3. Record reviews:
• Records and reports are collections of information.
• They can also throw light on the requirements of the system and the modifications it has undergone.
• They have limitations if they are not up to date.
• The analyst may analyze the records at the beginning of the study, which may give him a fair introduction to the system.
• One drawback of using this method for gathering information is that in practice the functioning of the system is often different from the procedures shown in the records.
• So the analyst should be careful when gathering information using this method.
4. Observation: There are two types of observation:
1) Official observation
2) Unofficial observation
1) Official observation:
• It is not a good method to observe every single element while collecting information to develop the system.
• The future system you are building may be expected to change the current way of working.
• Moreover, those you are observing may feel uncomfortable and may behave unusually, which will affect the quality of your survey.
2) Unofficial observation:
• In order to get an overview of an organization, look at its quantity of paper and documents, interruptions of work, unreasonable timings, and the positive signs of a good working environment.
• It is also important to know the quantity and quality of the data that need to be processed, and to predict how they will change over time.
• Researching through documents is the final good method to get important information.
➢ Tools for Documenting Procedures and Decisions:
• A tool is any device, object, or operation used to accomplish a specific task.
• The system analyst relies on such tools.
• These tools help the analyst in many different ways.
• To explain or document procedures there are two tools:
- (a) Decision tree
- (b) Decision table
• When the analyst starts the study of any information system, the first question is: what are the possibilities? Or, what can happen? That is, he/she is asking about the conditions for taking any appropriate action.
• In real situations the problem is not always the same; the conditions vary for different problems and different situations, so they are sometimes referred to as decision variables.
• When all possible conditions are known, the analyst next determines what to do when a certain condition occurs.
• Actions are the alternatives: the steps, activities, or procedures that an individual may decide to take when confronted with a set of conditions.
• The action may be simple or complex in different situations.
Decision tree:
• A single matter can be explained in many different ways.
• For example, a company might give a discount at three different rates depending on the size of the order, and on whether payment occurs within 10 days:
over 10000: 4%; between 5000 and 10000: 3%; below 5000: 2%.
The same conditions can be stated in other ways:
- Greater than 10000; greater than or equal to 5000 but less than or equal to 10000; and below 5000.
- Not less than 10000; not more than 10000 but at least 5000; and not 5000 or more.
Having such different ways of saying the same thing can create difficulties in communication during the system study.
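The discount rules above are a sequence of nested decisions, which is exactly what a decision tree expresses. A minimal sketch in Python (the function name is illustrative, and since the text does not say how the 10-day payment condition interacts with the rates, its handling here is an assumption):

```python
def discount_rate(order_amount, paid_within_10_days):
    """Walk the decision tree: payment terms first, then order size."""
    if not paid_within_10_days:
        return 0.0          # assumed: no early-payment discount at all
    if order_amount > 10000:
        return 0.04         # over 10000 -> 4%
    if order_amount >= 5000:
        return 0.03         # 5000 to 10000 -> 3%
    return 0.02             # below 5000 -> 2%

# Following one branch from the root to an action leaf:
print(discount_rate(12000, True))   # → 0.04
```

Each `if` is a condition node of the tree; each `return` is an action leaf reached by one root-to-leaf path.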
• The decision tree is one of the methods for describing decisions while avoiding such difficulties in communication.
• A decision tree is a diagram that presents conditions and actions sequentially, and it shows which condition to consider first, which second, and so on.
• It is also a method of showing the relationship between each condition and its permissible actions.
• The diagram resembles the branches of a tree.
Structure of a decision tree:
• The root of the tree, on the left of the diagram, is the starting point of the decision sequence.
• The particular branch to be followed depends on the conditions that exist and the decision to be made.
[Figure: a decision tree, with the root on the left branching through condition nodes to action leaves on the right]
• Progression from left to right along a particular branch is the result of making a sequence
of decisions.
• Each decision point leads to the next set of decisions to be considered.
• The nodes of the tree represent conditions and indicate that a determination must be made
about which condition exists before the next path can be chosen.
• The right side of the tree lists the actions to be taken, which depend on the
sequence of conditions that is followed.
• A decision tree starts from a root and proceeds through various possibilities, called nodes or
sub-nodes.
• The size of the tree depends upon the number of conditions and actions.
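The discount example above can be sketched as a decision tree in code — each nested `if` is a condition node and each `return` a leaf action. (Checking the 10-day payment condition first is an assumption about the intended reading of the example.)

```python
def discount_rate(order_amount, paid_within_10_days):
    """Walk the discount decision tree: root condition first, then branches."""
    if not paid_within_10_days:      # root condition: payment terms not met
        return 0.0                   # leaf action: no discount
    if order_amount > 10000:         # branch: large order
        return 0.04
    if order_amount >= 5000:         # branch: medium order (5000..10000)
        return 0.03
    return 0.02                      # leaf action: small order, below 5000
```

Reading the function top to bottom mirrors reading the tree from root to leaf.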
Advantages of decision tree :
• A decision tree can express if-then-else conditions.
• A decision tree is useful when we have to express the logic of nested decisions.
• A decision tree describes the conditions and actions of the system, so the analyst can find the actual
decision.
• Decision trees are also useful to verify logical problems whose solution requires
complex decisions and a number of actions.
Disadvantages of decision tree :
• If the problem is more complex, then we have to follow a larger number of sequential steps.
• So it becomes hard to understand the problem, and it may confuse the system analyst or
the team.
• In a decision tree we cannot convey all the information, because conditions and actions
are not described in full statements or sentences.
Decision Table: • A decision table is a matrix of rows and columns, rather than a tree, that shows conditions
and actions.
• Decision rules, included in a decision table, state what procedure to follow when certain
conditions exist.
The decision table is made up of four section:
- Condition Statements
- Condition entry
- Action Statement
- Action entry.
• The condition statement identifies the relevant conditions.
• Condition entries tell which value, if any, applies for a particular condition.
• The action statement lists the set of all steps that can be taken when a certain condition occurs.
• The action entry shows which specific actions in the set to take when a selected condition or
combination of conditions is true.
• Sometimes notes are added below the table to indicate when to use the table or to
distinguish it from other decision tables.
The four sections fit together as follows:

|                      | Decision Rules    |
|----------------------|-------------------|
| Condition Statements | Condition Entries |
| Action Statements    | Action Entries    |
• The columns on the right side of the table, linking conditions and actions, form decision
rules, which state the conditions that must be satisfied for a particular set of actions to be
taken.
| Conditions / Actions         | 1 | 2 | 3 | 4 |
|------------------------------|---|---|---|---|
| C1: Basic health insurance   | Y | Y | N | N |
| C2: Social health insurance  | Y | N | Y | N |
| A1: Pay only visit charge    |   | X |   |   |
| A2: Pay nothing              | X |   | X |   |
| A3: Pay full amount          |   |   |   | X |

Table T-1 :
• The above decision table describes the action to take for payment to a doctor.
• There are two types of insurance:
– Basic health insurance (condition 1)
– Social health insurance (condition 2)
• If the patient has only health insurance, he/she has to pay the visit charge.
• If the patient has both types of insurance, he/she has to pay nothing.
• If the patient does not have any insurance, he/she has to pay the full amount.
The above matter is stated in the decision table: there are two condition statements
with four corresponding condition entries, and three action statements with corresponding
action entries.
� Building decision table : To develop a decision table, the analyst should use the following steps:
1. Identify the conditions in the decision. Each condition selected should have the
potential to either occur or not occur; partial occurrence is not possible.
2. Determine the actions.
3. Study the combinations of conditions that are possible. For N conditions there are 2^N
combinations.
4. Fill in the table with decision rules.
5. Mark the action entries with an X for each action to take; leave a cell blank where no action
applies.
6. Examine the table for redundant rules and for contradictions within rules.
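The steps above can be sketched as a small decision-table evaluator. The encoding of Table T-1 as a Python dictionary is an assumption for illustration: each condition tuple (basic, social) maps to its action entry, and the completeness check verifies step 3 (2^N rules for N conditions).

```python
from itertools import product

# Decision rules for Table T-1: condition entries -> action entry
RULES = {
    (True,  True):  "pay nothing",
    (True,  False): "pay only visit charge",
    (False, True):  "pay nothing",
    (False, False): "pay full amount",
}

def decide(basic, social):
    """Look up the action entry for a given combination of condition entries."""
    return RULES[(basic, social)]

# Step 3: for N = 2 conditions there must be 2^N = 4 decision rules.
assert set(RULES) == set(product([True, False], repeat=2))
```

Holding the rules as data, rather than nested ifs, makes the completeness check a one-liner.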
After constructing a table, the analyst must verify it for correctness and completeness, to
ensure that the table includes all the conditions,
along with the decision rules that relate them to the actions.
The analyst should also examine the table for redundancy and contradictions.
� Eliminating redundancy : - A decision table can become too large and unwieldy if allowed to grow in an uncontrolled
fashion.
- Removing redundant entries can help to manage table size.
- Redundancy occurs when both of the following are true:
1. Two decision rules are identical except for one condition row.
2. The actions for the two rules are identical.
In that case the action entry does not depend on the entry in the differing condition row; hence the two rules are
redundant and can be combined into one rule.
The condition row where they differ is replaced by a “–”, as shown in the above table.
� Removing Contradiction : • Decision rules contradict each other when two or more rules have the same set of
conditions but the actions are different.
• Contradictions mean either that the analyst’s information is incorrect or that there is an error in
the construction of the table. In table T-2, contradictory rules are shown with an ‘X’ on the top.
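Both checks can be mechanized once rules are held as data. A minimal sketch, assuming each rule is a (condition-tuple, action) pair with 'Y'/'N'/'-' entries:

```python
def differ_in_one_row(c1, c2):
    """True if two condition tuples are identical except for one condition row."""
    return sum(a != b for a, b in zip(c1, c2)) == 1

def find_redundancy(rules):
    """Yield rule pairs that differ in one condition row but share an action."""
    for i, (c1, a1) in enumerate(rules):
        for c2, a2 in rules[i + 1:]:
            if differ_in_one_row(c1, c2) and a1 == a2:
                yield (c1, a1), (c2, a2)

def find_contradictions(rules):
    """Yield rule pairs with identical conditions but different actions."""
    for i, (c1, a1) in enumerate(rules):
        for c2, a2 in rules[i + 1:]:
            if c1 == c2 and a1 != a2:
                yield (c1, a1), (c2, a2)
```

Each redundant pair found this way can be collapsed into one rule with a “–” in the differing row; each contradictory pair signals incorrect information or a construction error.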
� Data flow Analysis : While developing a system, the analyst wants to know the answers to four specific questions:
• What processes make up the system?
• What data are used in each process?
• What data are stored?
• What data enter and leave the system?
Data drives the business activities.
The system analyst recognizes the central role of business data in an organization.
Data flow analysis studies the use of data in each activity.
A data flow diagram graphically shows the relation between processes and data.
A data dictionary formally describes the system’s data and where they are used.
� Data Flow Diagram : (DFD) – A graphical tool used to describe and analyze the movement of data through a
system, including the processes, stores of data, and delays in the system.
– A DFD shows the relation between data & processes. It provides a compact top-down
representation of a system, which makes it easier for users and analysts to imagine
the whole system.
– It is commonly used for documenting the system.
– DFDs are the central tool, from which the other components are developed; through
processes, the transformation of data from input to output may be described.
– DFDs in which processes are described logically, independently of the physical
components, are called logical DFDs.
– In contrast, physical DFDs show the actual implementation and the movement of
data between people, departments, and workstations.
– A logical DFD can be completed using only 4 simple notations. The symbols are as
follows.
� Special symbols used in DFD :
The special symbols or icons, and the annotations that associate them with our system, are as
under:
1) Data flow: Data move in a specific direction from an origin to a destination, in the form
of a document or any other medium.
2) Process: People, procedures, or devices that use or produce (transform)
data.
3) Database: Used for a store of data, which may represent a computerized or non-
computerized device.
4) Entity: This symbol is used for the input or output of data; it is interchangeable with source
and destination.
• Each component in a DFD is labeled with a descriptive name. Processes are additionally
identified with a number.
• The number assigned to a specific process does not represent the sequence of processes.
• It is strictly for identification, and it takes on added value when we study the
components that make up a specific process.
• As the name suggests, DFDs concentrate on the data moving through the system, not on
devices or equipment.
� Developing Data flow Diagram :
– The system analyst must first study the current system, that is, the actual activities and
processes that occur.
– In the terminology of structured analysis, this is a study of the physical system.
– The physical system is translated into a logical description that focuses on data and
processes.
– It emphasizes data and processes in order to focus on the actual activities that occur and
the resources needed to perform them, rather than on who performs the work.
� Types of DFD : – Physical data flow diagram
– Logical data flow diagram
� Physical data flow diagram : – It is an implementation-dependent view of the current system, showing what tasks are
carried out and how they are performed.
– Its characteristics include transaction files, equipment and devices used, locations, names of
procedures, etc.
– It shows the actual implementation and movement of data between people, departments,
and workstations.
� Logical data flow diagram : – It is an implementation-independent view of a system, focusing on the flow of data
between processes without any concern for the specific devices, storage locations, or people
in the system.
Drawing context diagram:
– The first step in requirements analysis is to learn about the general characteristics of
the business process under investigation.
(Figures: the context-level diagram and the 0-level DFD.)
� General rules for drawing logical DFD :
• Any data flow leaving a process must be based on data that are input to the process.
• All data flows are named; the name reflects the data flowing between processes, data stores,
sources, or sinks.
• Consider only those data which are needed in the process.
• Maintain consistency between processes: nothing should appear at a lower level that is
inconsistent with the higher-level diagram.
• However, within an exploded process, new flows and data stores may be identified.
• Follow meaningful leveling conventions: leveling refers to the handling of local files;
data stores and data flows that are relevant only to the inside of a process are concealed
until that process is exploded into greater detail.
• Assign meaningful labels: the descriptions assigned to data flows and processes should
tell the reader what is going on; all data flows should be named to reflect their content
accurately.
• Data flow naming: the name assigned to a data flow should reflect the data of interest to
the analyst, not the document on which they reside.
• Process naming: all processes should be assigned names that tell the reader something
specific about the nature of the activities in the process.
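Several of these rules can be checked mechanically once a DFD is held as data. The data model below (dictionaries of processes, stores, entities, and named flow edges) is an assumption for illustration, not a standard notation:

```python
# A DFD held as data: processes are numbered for identification only,
# and every flow is a named edge between two components.
dfd = {
    "processes": {"1": "Verify order", "2": "Update inventory"},
    "stores": ["Inventory"],
    "entities": ["Customer"],
    "flows": [
        ("Customer", "1", "order details"),
        ("1", "2", "accepted order"),
        ("2", "Inventory", "stock adjustment"),
    ],
}

def check_flow_names(dfd):
    """Rule: all data flows are named; return any unnamed flows."""
    return [f for f in dfd["flows"] if not f[2].strip()]

def check_process_io(dfd):
    """Rule: any flow leaving a process must be based on some input to it."""
    bad = []
    for p in dfd["processes"]:
        has_in = any(dst == p for _, dst, _ in dfd["flows"])
        has_out = any(src == p for src, _, _ in dfd["flows"])
        if has_out and not has_in:
            bad.append(p)
    return bad
```

Both checks return empty lists for a well-formed diagram, so they can be run after each leveling step.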
� Data Dictionary :
• When the volume of data is very large, it is very difficult for the analyst to manage the
data definitions.
• If the information system is very big, more than one person may be working on the
same data at the same time; data defined by one person can be used by another,
hence they need a definition or description of the data.
• A data dictionary is a catalog – a repository – of the elements in a system. As the name
suggests, these elements center around data and the way they are structured to meet user
requirements and organization needs.
• It contains a list of all the elements composing the data flowing through a system.
• The major elements are data flows, data stores, and processes. The data dictionary stores
details and descriptions of these elements.
• If the data dictionary is developed properly, the answer to any data-related question can be
extracted from it.
• It is developed during data flow analysis and assists the analyst; the stored details are
used during system design.
• It is a centralized repository of information about data, such as meaning, relationships to
other data, origin, usage, and format.
• A data dictionary consists of data about data; it stores details &
descriptions of the data elements. Without a data dictionary, a DBMS cannot access data.
• If the analyst wants to know which data item is referenced in the system, he is able to find
the answer from a properly developed data dictionary.
� Importance of Data Dictionary :
• To manage details in large system.
• To communicate a common meaning for all system elements.
• To facilitate analysis of the details in order to evaluate characteristics and determine what
system changes should be made.
• To locate errors and omissions in the system.
• To document the features of the system.
� Rules for Data Dictionary :
• Names of data elements should be meaningful,
e.g. Post_Name, which holds the name of a post and stores data about all the
designations in a department; not a name like ABCD or PN. You
cannot use names of that type.
• Each name must be unique within a table.
• Aliases are allowed when 2 or more entries have the same meaning,
e.g. a vendor no. can also be called a customer no.
� What does Data Dictionary Record :
It contains two types of descriptions for the data.
1) Data Elements : Data elements are the building blocks for all other data in the system,
e.g. Post_Name, Post_Id, Post_Detail.
2) Data Structures : A data structure is a set of data items related to one another, e.g. an Organization
consists of Departments, Posts, Employees, etc.
� Describing Data Elements :
Each entry in the data dictionary consists of a set of details: Data Name, Data
Values, Length, and Aliases.
o Data Names :
It is the name by which each element is referred to throughout the systems development
process, so data names must be meaningful & understandable.
o Data Values :
For some data names only specific data values are permissible. By recording data values,
entries for that element are restricted to a specific range. E.g., if the price of any
product sold by a firm does not exceed Rs. 450/-, that annotation belongs in the data
dictionary.
o Length :
When systems design features are developed later in the systems development process,
it is important to know the amount of space needed for each data item.
o Aliases :
Additional names are called aliases; a data name can be referred to by different
names, e.g. employee_name can be referred to as Stock_Holder_name. A meaningful data
dictionary will include all aliases.
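A data-dictionary entry like the one described above can be held as a small record type. The field names and the `is_valid` helper are illustrative assumptions, showing how recorded data values and length restrict what an element may hold:

```python
from dataclasses import dataclass, field

@dataclass
class DataElement:
    """One data-dictionary entry: name, length, permissible values, aliases."""
    name: str                         # meaningful data name, e.g. "Post_Name"
    max_length: int                   # space needed for the data item
    value_range: tuple = None         # permissible (low, high) values, if any
    aliases: list = field(default_factory=list)

    def is_valid(self, value):
        """Check a value against the recorded length and range restrictions."""
        if len(str(value)) > self.max_length:
            return False
        if self.value_range is not None:
            low, high = self.value_range
            return low <= value <= high
        return True

# The Rs. 450/- price annotation from the text, recorded as an entry:
price = DataElement("Product_Price", max_length=6, value_range=(0, 450))
```

With entries in this form, the "any data-related question" lookups described above become simple attribute reads.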
Software Engineering
• The software industry is unique.
• Anyone who has a good idea and can write a computer program can deliver a
sound, marketable product.
• What makes a computer perform a certain task is the software, which consists of electronic
instructions.
Program
• A specific set of instructions that drives a computer to perform a specific task is
called a program.
What is software
• Computer software is a product made by programmers and engineers.
• It includes programs that execute within a computer and produce output from the provided
input.
• Software engineering is concerned with the theories, methods, and tools which are
needed to develop software for these computers.
• There is a difference between programming and software engineering.
• Software engineering includes activities like cost estimation, time estimation,
designing, coding, documentation, maintenance, quality assurance, testing of
software, etc.,
• whereas programming includes only the coding part.
• Thus, it can be said that the programming activity is only a subset of software
development activities.
System software
• Some misunderstand system software as being the operating system; however, it is not so.
• System software is a collection of programs written to service other programs,
• for example a compiler or an editor.
• System software deals with computer hardware and resources like networking and
other I/O devices.
Business software
• A business unit has many activities, such as purchase, sales, investment, banking
transactions, employee transactions, customer/supplier relationships, and so on; business
software supports these activities.
Real Time Software
• Software that monitors real-world events as they occur is called real-time software.
• Real-time software includes a data-gathering component that collects and formats
information from the external environment,
• an analysis component, a control/output component that responds to the external
environment, and a monitoring component that coordinates all the components,
• so that a real-time response (ranging from 1 millisecond to 1 second) can be
maintained.
Engineering and scientific software
• Engineering and scientific software has been characterized by “number crunching”
algorithms.
• Computer-aided design, system simulation, and other interactive applications have begun to
take on real-time characteristics.
Embedded software
• Embedded software means attaching electronic/electrical equipment to a CPU,
• so that you can control all the attached equipment from your keyboard or any
other input device.
• E.g. a mobile attached to a PC,
• or starting and stopping a washing machine.
Web based software
• Today’s internet age has brought about a large demand for services that can be
delivered over the network.
Artificial Intelligence software
• Artificial intelligence software makes use of non-numerical algorithms to solve
complex problems,
• e.g. pattern recognition, game playing.
Introduction to QA
Quality : Quality can be defined as “a characteristic or attribute of something” — any
measurable characteristic by which we can judge against a standard.
Quality Assurance : Quality assurance consists of a set of auditing & reporting functions
that assess the effectiveness & completeness of quality control activities. The goal of QA is to
provide management with the data necessary to be informed about product quality, to get a good
s/w product. If quality assurance identifies problems, then it is necessary to apply
resources to resolve them.
Quality Factors : There are some quality factors on the basis of which it is decided whether s/w
provides quality assurance or not. There are 5 main quality factors, which are as follows:
1. Portability: S/w should be portable, such that it can easily run on any
operating system.
2. Usability: S/w may be used by expert users or novice users. The s/w should be usable such
that both can easily invoke the functions of the s/w product.
3. Reusability: S/w is not developed for one-time use; s/w, or different modules of it, should be
easily reusable.
4. Correctness: All the functions & actions of s/w must be correctly implemented. There
must not be any incorrect o/p for any action.
5. Maintainability: Any s/w should be easy to modify; errors should be corrected easily,
& new functions & actions should be added easily.
Quality Control : Variation control may be equated to quality control. But how do we achieve quality
control? Quality control involves the series of inspections, reviews, and tests used throughout
the software process to ensure each work product meets the requirements placed upon it.
Quality control includes a feedback loop to the process that created the work product. The
combination of measurement and feedback allows us to tune the process when the work
products created fail to meet their specifications. This approach views quality control as part of
the manufacturing process. Quality control activities may be fully automated, entirely manual,
or a combination of automated tools and human interaction. A key concept of quality control is
that all work products have defined, measurable specifications to which we may compare the
output of each process. The feedback loop is essential to minimize the defects produced. The
main goal of quality control is user satisfaction.
Difference between Q & QA

| Quality (Q) | Quality Assurance (QA) |
|---|---|
| It is a characteristic or attribute of something. | It consists of a set of auditing & reporting functions. |
| It is related to the standard only. | It is related to the standard and to management also. |
| It doesn’t define any strategy. | It defines the strategy to obtain a good s/w product. |
Quality paradigm (Example) : QA methods:
(Figure: product assurance evolving through Inspection, Quality Control, Quality Assurance, and Total Quality Management.)
Quality Assurance Activities (QA Activities):
Software quality assurance (SQA) is composed of a variety of tasks associated
with two different constituencies:
1. The software engineers, who do the technical work, apply technical methods, &
perform testing.
2. The SQA group, which has responsibility for quality assurance planning, oversight, record keeping, analysis, and reporting.
All SQA activities are performed by an independent SQA group that conducts the following
activities.
1. Prepares an SQA plan for a project: The plan is developed during the project
planning phase. Quality assurance activities to be performed by the software engineering
team and the SQA group are covered in the plan. The plan identifies
• Evaluations to be performed
• Audits and reviews to be performed
• Standards that are applicable to the project
• Procedures for error reporting and tracking
• Documents to be produced by the SQA group
• Amount of feedback provided to the software project team
2. Participates in the development of the project’s software process
description: The software team selects a process for the work to be performed.
The SQA group reviews the process description for compliance with organizational
policy, internal software standards, externally imposed standards (e.g., ISO-9001),
and other parts of the software project plan.
3. Reviews software engineering activities to verify compliance with the
defined software process: The SQA group identifies, documents, and tracks
deviations from the process and verifies that corrections have been made.
What is the role of an SQA group?
1. Audits designated software work products to verify compliance with
those defined as part of the software process: The SQA group reviews
selected work products; identifies, documents, and tracks deviations; verifies that
corrections have been made; and periodically reports the results of its work to the
project manager.
2. Ensures that deviations in software work and work products are
documented and handled according to a documented procedure: Deviations may be encountered in the project plan, process description, applicable
standards, or technical work products.
3. Records any noncompliance and reports to senior management: Noncompliance items are tracked until they are resolved.
![Page 26: System Analysis & Design AND Software Engineering](https://reader031.vdocuments.site/reader031/viewer/2022012511/618903dd5317745272597faa/html5/thumbnails/26.jpg)
Sub.: SAD Chapter – 3 Basics of Software Testing B.C.A. Sem - 3
Page 1 of 17
2 Basics of Software Testing, Types of Software
Testing, Verification and Validation
Topics Covered
1. Introduction
2. Software faults & failures (Bug/error/defect/faults/failure)
3. Testing Artifacts
o Test case
o Test script
o Test plan
o Test harness
o Test suite
4. Static Testing
- Informal Review
- Walkthrough
- Technical Review
- Inspection
5. Dynamic Testing
6. Test Levels
- Unit Testing
- Integration Testing
- System Testing
- Acceptance Testing
� Techniques of Software Testing
1. Black Box Testing
- Equivalence Partitioning
- Boundary Value Analysis
- Decision Table testing
- State Transition testing
2. White Box Testing
- Statement Testing and coverage
- Decision Testing and coverage
3. Grey Box Testing
7. Non-Functional Testing
- Performance Testing
- Stress Testing
- Load Testing
- Usability Testing
- Security Testing
� Introduction :
o Software testing means executing a program in order to understand its behavior,
o i.e., to check whether it executes according to the user’s requirements or not.
o More technically, we check the failure rate, response time, throughput for certain data sets,
accuracy, system design, and many more.
o The test effort is employed after the requirements have been defined and the coding process
has been completed.
o Depending on the testing method employed, software testing can be carried out at any
time in the development process.
o The main aim of testing is to find errors.
o The software engineer should test the software while designing and before implementation.
o Hence, a test should fulfill the basic characteristic of a good test:
o i.e., finding the most errors with a minimum of effort.
Testability
o Software testability is simply how easily a computer program can be tested.
Operability
o “The better it works, the more efficiently it can be tested.”
o This holds if the system is designed and implemented with quality in mind.
Observability
o “What you see is what you test.”
o Inputs provided as part of testing produce distinct outputs.
o System states and variables are visible during execution.
o Internal errors are automatically detected and reported.
o Source code is accessible.
Controllability
o “The better we can control the software, the more the testing can be automated and optimized.”
o Software and hardware states and variables can be controlled directly by the test
engineer.
Decomposability
o By controlling the scope of testing, we can more quickly isolate problems and perform smarter retesting.
Simplicity
o “The less there is to test, the more quickly we can test it.”
o The program should have functional simplicity, structural simplicity, and code simplicity.
Stability
o “The fewer the changes, the fewer the disruptions to testing.”
o Changes to the software are infrequent and controlled when they do occur.
o The software recovers well from failures.
Understandability
o “The more information we have, the smarter we will test.”
o The architectural design and the dependencies (relationships) between internal,
external, and shared components are well understood.
o Technical documentation is instantly accessible, well organized, specific, detailed,
and accurate. Changes to the design are communicated to testers.
� Software faults & failures :
• Some types of software faults are: bug, error, defect, fault, and failure.
• In computer terminology, each differs from the others.
1) Bug : It is a coding error in a computer program. A program that contains a large number
of bugs that interfere with the functionality of the software is said to be buggy.
Bugs can have a wide variety of effects, with varying levels of inconvenience to the user of the program. Some bugs have only a subtle effect on the program’s functionality and may thus lie undetected for a long time. More serious bugs may cause the program to crash.
• Types of bugs: 1) Team-working bugs 2) Co-programming bugs 3) Resource bugs 4) Syntax bugs 5) Logic bugs 6) Mathematical bugs
For example:
o If the requirement is “5 plus 3, divided by 4”,
o the programmer may code it to add 5 and 3 first and then divide the result [8] by 4, arriving at
the result 2.
o But if the application requirement is to first divide 3 by 4 and then add 5,
o then, with the expected result being 5.75, this is called a classic design/code error.
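The arithmetic in that example can be checked directly; the two function names here are illustrative, contrasting the coded behavior with the required one:

```python
def as_coded(a, b, c):
    """What the programmer wrote: add first, then divide -> (5 + 3) / 4 = 2.0"""
    return (a + b) / c

def as_required(a, b, c):
    """What the requirement meant: divide first, then add -> 5 + 3 / 4 = 5.75"""
    return a + b / c
```

Both functions are syntactically fine; only comparison against the requirement exposes the design/code error.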
2) Failure : It occurs when there is a difference between the observed behavior of a program and the expected output. It occurs when the delivered service no longer complies with the specifications.
3) Fault : A fault is an incorrect step, process, or data definition in a computer program; faults are the source of failures.
4) Error : It is a problem found before the software is released to end users.
5) Defect : It is a problem found only after the software has been released to end users.
A programmer makes an error, which results in a defect in the software source code. If this defect is executed in certain situations, the system will produce wrong results & cause a failure. It is not necessary that all defects will result in failures.
� Testing Artifacts :
1) Test case : • A test case in software engineering is a set of conditions or variables under which a
tester will determine whether an application or software system is working correctly or not.
• Technically defined, a test case is an input and an expected result.
• This can be as terse as “for x your expected result is y”, whereas other test cases
describe the input scenario and the expected result in more detail.
Formal test cases
• In order to fully test that all the requirements of an application are met, there must be at least two test cases for each requirement:
• one positive test, • one negative test. • What characterizes a formal, written test case is that there is a known input and an expected output.
Informal Test Cases
• For applications or systems without formal requirements, test cases can be written based on the accepted normal operation of programs of a similar class.
• In some cases, test cases are not written at all; the activities and results are reported after the tests have been run.
Example
• Test case p1: open -> setup -> deposit -> withdraw -> withdraw -> close
• Test case p2: open -> setup -> deposit -> summarize -> creditlimit -> withdraw -> close
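A formal version of test case p1 — known input, expected output — might look like the sketch below. The `Account` class is an assumption for illustration; only the operation names come from the example above:

```python
class Account:
    """Minimal account supporting the operations named in test case p1."""
    def __init__(self):
        self.balance = 0
        self.is_open = False
    def open(self):    self.is_open = True
    def setup(self):   pass                    # placeholder configuration step
    def deposit(self, amt):  self.balance += amt
    def withdraw(self, amt):
        if amt > self.balance:                 # negative path: rejected
            raise ValueError("insufficient funds")
        self.balance -= amt
    def close(self):   self.is_open = False

def test_case_p1():
    """Positive test: open -> setup -> deposit -> withdraw -> withdraw -> close."""
    a = Account()
    a.open(); a.setup(); a.deposit(100)
    a.withdraw(40); a.withdraw(30)
    a.close()
    return a.balance            # known input 100, 40, 30 -> expected output 30
```

A matching negative test would attempt to withdraw more than the balance and expect the rejection.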
2) Test Script : • A test script is the combination of a test case, a test procedure, and test data. • A test script can be manual, automated, or a combination of both. • A test script in software testing is a set of instructions that will be performed on the system. • A test script written as a short program can be written either using a specialized automated GUI test tool (e.g. HP QuickTest Professional) or in a well-known programming language (such as C++, C#, Java, PHP).
Advantages � The major advantages of automated testing are that
• tests may be executed continuously without the need for human intervention, • it is easily repeatable, • and it is very useful to automate tests if they are to be executed several times.
Disadvantages � The disadvantages of automated testing are that
• automated tests may be poorly written and can break during playback, • there is no human interaction, • and automated tests can only examine what they have been programmed to examine.
3) Test plan : • A test plan is a systematic approach to testing software • The plan typically contains a details understanding of what the workflow will be • A test plan documents the strategy that will be used to verify and ensure that system meets its design specification and other requirement
A test plan may include one or more of the following:
• Design verification: to be performed during the development or approval stage.
• Manufacture: to be performed during preparation in an ongoing manner, for the purposes of performance verification and quality control.
• Acceptance test: to be performed at the time of delivery or installation of the product.
• Service and repair test: to be performed as required over the service life of the product.
• Regression test: to be performed on an existing operational product (e.g. when upgrading the platform/O.S. on which an existing application runs).
4) Test Harness :
• In software testing, a test harness is a collection of software and test data configured to test a program unit by running it.
• The objectives of a test harness are to:
  • Automate the testing process
  • Execute test suites of test cases
  • Generate associated test reports
• The test harness calls the functions with supplied parameters, prints out the results, and compares them to the desired values.
• A harness is usually written for one language or runtime (C, C++, Java, .NET).
• For example, for a GUI application on Windows, the harness will include code to start the program, open any required files, and select which cases are run (giving a proper message if any operation fails).
• Sometimes a test case is just a function or method; this function does all of the actual work of the test case and reports whether the case passed or failed.
5) Test Suite :
• The most common term for a collection of test cases is a test suite.
• The test suite often also contains more detailed instructions or goals for each collection of test cases.
• Test suites are used to group similar test cases together.
• It typically contains a section where the tester identifies the system configuration used during testing.
• For example, a test suite for primality testing might consist of a list of numbers and their expected primality (prime or not). The testing subroutine would supply each number in the list to the primality tester and verify that the result of each test is correct.
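The primality example can be sketched directly; `is_prime` below is an assumed naive implementation of the subroutine under test:

```python
# The suite is a list of (number, expected_primality) pairs; the runner feeds
# each number to the subroutine under test and collects any mismatches.

def is_prime(n):
    """Subroutine under test: naive trial-division primality check."""
    if n < 2:
        return False
    for d in range(2, int(n ** 0.5) + 1):
        if n % d == 0:
            return False
    return True

# Test suite: similar cases grouped together with their expected results.
prime_suite = [(2, True), (3, True), (4, False), (17, True), (1, False), (25, False)]

def run_suite(suite):
    # Returns the failing cases; an empty list means every case passed.
    return [(n, exp) for n, exp in suite if is_prime(n) != exp]

assert run_suite(prime_suite) == []
```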
Types of Software Testing, verification & validation
Verification & validation:
• Verification is the process of evaluating a system or component at the end of a development phase to determine whether it satisfies the conditions imposed (compulsory) at the start of that phase; in other words, are we building the system right?
• Verification is used to find errors.
• It is performed by executing a program in a simulated (computer-generated) environment.
• Validation refers to the process of using software in a live environment in order to find errors.
• Validation may continue for several months.
• During the course of validation of the system, failures may occur and the software will be changed.
• System validation checks the quality of the software in both simulated (virtual) and live environments.
We can say • Verification: “Are we building the product right?” • Validation: “Are we building the right product?”
| Verification | Validation |
| --- | --- |
| 1. Verification represents static testing techniques. | 1. Validation represents dynamic testing techniques. |
| 2. Verification ensures that the software documents comply with the organization's standards; it is a static analysis technique. | 2. Validation ensures that the software operates as planned in the requirements phase by executing it, running predefined test cases, and measuring the output against the expected results. |
| 3. Verification answers the question "Is the software built according to the specifications?" | 3. Validation answers the question "Did we build software fit for purpose, and does it provide a solution to the problem?" |
1. Static Testing: Static testing is a form of software testing where you do not execute the code being examined; for this reason it is also called a non-execution technique. It primarily consists of syntax checking of the code, or of manually reviewing the code, requirements documents, design documents, etc. to find errors. The fundamental objective of static testing is to improve the quality of software products by finding errors in the early stages of the software development life cycle.
Following are the main Static Testing techniques used:
1. Informal Review:
- No formal process
- May take the form of pair programming or a technical lead reviewing designs and code
- Results may be documented
- Varies in usefulness depending on the reviewers
- Main purpose: an inexpensive way to get some benefit
2. Walkthrough:
- Meeting led by author - May take the form of scenarios, dry runs, peer group participation - Open-ended sessions
o Optional pre-meeting preparation of reviewers
![Page 32: System Analysis & Design AND Software Engineering](https://reader031.vdocuments.site/reader031/viewer/2022012511/618903dd5317745272597faa/html5/thumbnails/32.jpg)
Sub.: SAD Chapter – 3 Basics of Software Testing B.C.A. Sem - 3
Page 7 of 17
o Optional preparation of a review report including a list of findings
- Optional scribe (who is not the author)
- May vary in practice from quite informal to very formal
- Main purposes: learning, gaining understanding, finding defects
3. Technical Review:
- Documented, defined defect-detection process that includes peers and technical experts with optional management participation
- May be performed as a peer review without management participation
- Ideally led by a trained moderator (not the author)
- Pre-meeting preparation by reviewers
- Optional use of checklists
- Preparation of a review report which includes the list of findings, the verdict on whether the software product meets its requirements and, where appropriate, recommendations related to findings
- May vary in practice from quite informal to very formal
- Main purposes: discussing, making decisions, evaluating alternatives, finding defects, solving technical problems, and checking conformance to specifications, plans, regulations, and standards
4. Inspection:
- Led by a trained moderator (not the author)
- Usually conducted as a peer examination
- Defined roles
- Includes metrics gathering
- Formal process based on rules and checklists
- Specified entry and exit criteria for acceptance of the software product
- Pre-meeting preparation
- Inspection report including a list of findings
- Formal follow-up process (with optional process improvement components)
- Optional reader
- Main purpose: finding defects
2. Dynamic Testing:-
Dynamic testing is used to test the software by executing it. Dynamic testing is also known as dynamic analysis; this technique is used to test the dynamic behavior of the code. In dynamic testing the software must be compiled and executed; the analysis covers variable quantities such as memory usage, CPU usage, response time, and the overall performance of the software.

Dynamic testing involves working with the software: input values are given and the output values are checked against the expected output. Dynamic testing is the validation part of Verification and Validation.
Some of the Dynamic Testing Techniques are given below:
1. Unit Testing:-
A unit is the smallest testable part of the software system. Unit testing is done to verify that the lowest independent entities in the software are working correctly. The smallest testable part is isolated from the rest of the code and tested to determine whether it works correctly.

2. Integration Testing:-

In integration testing the individually tested units are grouped together and the interface between them is tested. Integration testing identifies the problems that occur when individual units are combined, i.e. it detects problems in the interface between two units. Integration testing is done after unit testing.
There are mainly three approaches to do integration testing.
Top-down Approach
The top-down approach tests the integration from top to bottom, following the architectural structure. Example: integration can start with the GUI, and missing components are substituted by stubs as integration goes on.
Bottom-up approach
In the bottom-up approach, testing takes place from the bottom of the control flow upwards; the higher-level components are substituted with drivers.
Big bang approach
In big bang approach most or all of the developed modules are coupled together to form a complete system and then used for integration testing.
3. System Testing:-
Testing the behavior of the whole software/system as defined in the software requirements specification (SRS) is known as system testing; its main focus is to verify that the customer requirements are fulfilled.

System testing is done after integration testing is complete. System testing should cover both the functional and the non-functional requirements of the software.

The following types of testing should be considered during the system testing cycle. The test types followed in system testing differ from organization to organization; however, this list covers some of the main testing types which need to be covered:
Sanity Testing, Usability Testing, Stress Testing, Load Testing, Performance Testing, Regression Testing, Maintenance Testing, Security Testing, Accessibility Testing
4. Acceptance Testing:-
Acceptance testing is performed after system testing is done and all or most of the major defects have been fixed. The goal of acceptance testing is to establish confidence that the delivered software/system meets the end user's/customer's requirements and is fit for use. Acceptance testing is done by the user/customer and some of the project stakeholders.

Acceptance testing is done in a production-like environment.

For Commercial Off-The-Shelf (COTS) software that is meant for the mass market, testing needs to be done by the potential users. There are two types of acceptance testing for COTS software:
Alpha Testing
Alpha testing is mostly applicable to software developed for the mass market, i.e. Commercial Off-The-Shelf (COTS) software, where feedback is needed from potential users. Alpha testing is conducted at the developer's site: potential users and members of the developer's organization are invited to use the system and report defects.
Beta Testing
Beta testing is also known as field testing. It is done by potential or existing users/customers at an external site, without the developer's involvement, to determine that the software satisfies the end users'/customers' needs. This testing is also done to acquire feedback from the market.
• Test levels :
- Unit Test
- Integration Test
- System Test
As explained above in dynamic testing…
• Techniques of software testing :
• Black box testing :
Black box testing tests functional and non-functional characteristics of the software without referring to the internal code of the software.
Black Box testing doesn’t require knowledge of internal code/structure of the system/software.
It uses external descriptions of the software like SRS(Software Requirements Specification), Software Design Documents to derive the test cases.
Advantages:
• More efficient on larger units of code
• The tester needs no knowledge of the implementation (coding)
• The tester and the programmer are independent of each other
• Tests are done from the user's point of view
Disadvantages:
• Only a small number of the possible inputs can actually be tested
• Without clear specifications, test cases are hard to design
• There may be unnecessary repetition of test inputs
• Testing may leave many program paths untested
Black box Test Design Techniques
Typically Black box Test Design Techniques include:
1. Equivalence Partitioning:-
Equivalence partitioning (EP) is a black box testing technique. The technique is very common, and most testers use it informally. Equivalence partitions are also known as equivalence classes.

As the name suggests, equivalence partitioning divides (partitions) a set of test conditions into sets or groups that can be considered the same by the software system.

Exhaustive testing is not feasible for complex software, so using the equivalence partitioning technique we need to test only one condition from each partition, because it is assumed that all the conditions in one partition will be treated in the same way by the software. If one condition works, then all the conditions within that partition are expected to work the same way; conversely, if one condition fails in a partition, then the other conditions in that partition are expected to fail as well.

These assumptions may not always hold; however, testers can choose better partitions and also test some additional conditions within those partitions to confirm that the selection of the partition is sound.
Let's take an example:

A store in the city offers different discounts depending on the purchases made by an individual. In order to test the software that calculates the discounts, we can identify the ranges of purchase values that earn the different discounts. For example, a purchase in the range of $1 up to $50 has no discount, a purchase over $50 and up to $200 has a 5% discount, purchases of $201 up to $500 have a 10% discount, and purchases of $501 and above have a 15% discount.

Now we can identify 4 valid equivalence partitions and 1 invalid partition, as shown below.
| Invalid Partition | Valid Partition (No Discount) | Valid Partition (5%) | Valid Partition (10%) | Valid Partition (15%) |
| --- | --- | --- | --- | --- |
| $0.01 | $1-$50 | $51-$200 | $201-$500 | $501 and above |
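The partitions above can be exercised with one representative value each, which is the core assumption of equivalence partitioning. The `discount_rate` function below is a hypothetical implementation of the store's rules, written only so the partitions have something to run against:

```python
# Equivalence partitioning sketch: one representative test value per
# partition. The discount_rate implementation is assumed, not from the notes.

def discount_rate(purchase):
    if purchase < 1:
        raise ValueError("invalid purchase amount")
    if purchase <= 50:
        return 0.00    # $1-$50: no discount
    if purchase <= 200:
        return 0.05    # over $50 up to $200
    if purchase <= 500:
        return 0.10    # $201-$500
    return 0.15        # $501 and above

# One representative per valid partition; EP assumes the rest behave alike.
representatives = {25: 0.00, 100: 0.05, 350: 0.10, 800: 0.15}
for value, expected in representatives.items():
    assert discount_rate(value) == expected

# Invalid partition ($0.01, below $1) should be rejected.
try:
    discount_rate(0.01)
    assert False, "expected ValueError for invalid partition"
except ValueError:
    pass
```

Five test values cover all five partitions, instead of testing every possible purchase amount.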
2. Boundary Value Analysis:-
What is a Boundary Value
A boundary value is any input or output value on the edge of an equivalence partition.
![Page 36: System Analysis & Design AND Software Engineering](https://reader031.vdocuments.site/reader031/viewer/2022012511/618903dd5317745272597faa/html5/thumbnails/36.jpg)
Sub.: SAD Chapter – 3 Basics of Software Testing B.C.A. Sem - 3
Page 11 of 17
Let us take an example to explain this:
Suppose you have software which accepts values between 1 and 1000. The valid partition will be (1-1000), and the equivalence partitions will be:

| Invalid Partition | Valid Partition | Invalid Partition |
| --- | --- | --- |
| 0 | 1-1000 | 1001 and above |

The boundary values will be 1 and 1000 from the valid partition, and 0 and 1001 from the invalid partitions.
Boundary Value Analysis is a black box test design technique in which test cases are designed using boundary values; BVA is used in range checking.
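The boundary values above translate directly into test cases. The `accepts` function is an assumed one-line implementation of the "accepts values between 1 and 1000" requirement:

```python
# Boundary value analysis sketch for the 1-1000 example: test cases sit
# exactly on the edges of the partitions (0, 1, 1000, 1001).

def accepts(value):
    """Assumed validator for 'accepts values between 1 and 1000'."""
    return 1 <= value <= 1000

boundary_cases = {
    0:    False,  # just below the valid range
    1:    True,   # lower boundary of the valid partition
    1000: True,   # upper boundary of the valid partition
    1001: False,  # just above the valid range
}

for value, expected in boundary_cases.items():
    assert accepts(value) == expected
```

Off-by-one mistakes (e.g. writing `1 < value` instead of `1 <= value`) are exactly the defects these four boundary cases would catch.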
Example 2:

A store in the city offers different discounts depending on the purchases made by an individual. In order to test the software that calculates the discounts, we can identify the ranges of purchase values that earn the different discounts: a purchase in the range of $1 up to $50 has no discount, a purchase over $50 and up to $200 has a 5% discount, purchases of $201 up to $500 have a 10% discount, and purchases of $501 and above have a 15% discount.

We can identify 4 valid equivalence partitions and 1 invalid partition, as shown below.

| Invalid Partition | Valid Partition (No Discount) | Valid Partition (5%) | Valid Partition (10%) | Valid Partition (15%) |
| --- | --- | --- | --- | --- |
| $0.01 | $1-$50 | $51-$200 | $201-$500 | $501 and above |
From this table we can identify the boundary values of each partition. We assume that two decimal digits are allowed.
Boundary values for the invalid partition: 0.00
Boundary values for the valid partition (No Discount): 1, 50
Boundary values for the valid partition (5% discount): 51, 200
Boundary values for the valid partition (10% discount): 201, 500
Boundary values for the valid partition (15% discount): 501, and the maximum number allowed in the software application

3. State Transition Testing:-
State transition testing is used for systems where some aspect of the software can be described as a 'finite state machine'. This means that the system can be in a finite number of different states, and the transitions from one state to another are determined by the rules of the 'machine'.
What is a finite State System
Any system where you get a different output for the same input, depending on what has happened before, is a finite state system. A finite state system is often shown as a state diagram.
Let us take an example to explain this in detail:
Suppose you want to withdraw $500 from a bank ATM, you may be given cash. After some time you again try to withdraw $500 but you may be refused the money (because your account balance is insufficient). This refusal is because your bank account state has changed from having sufficient funds to cover withdrawal to having insufficient funds. The transaction that caused your account to change its state was probably the earlier withdrawal. A state diagram can represent a model from the point of view of the system or the customer.
A state transition model has four basic parts:
1. The states that the software may occupy (funded/insufficient funds) 2. The transitions from one state to another (all transitions are not allowed) 3. The events that cause a transition (like withdrawing money) 4. The actions that result from a transition (an error message or being given your cash)
Please note that in any given state, one event can cause only one action, but that the same event from a different state may cause a different action and a different end state.
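The four parts above can be sketched as a tiny state machine for the ATM example. The $500 threshold and the messages are illustrative assumptions, not part of the notes:

```python
# State transition sketch of the ATM example: states (funded/insufficient),
# an event (withdraw), and the actions that result. Values are illustrative.

class AccountFSM:
    def __init__(self, balance):
        self.balance = balance

    @property
    def state(self):
        # State is derived from whether the balance can cover a $500 withdrawal.
        return "funded" if self.balance >= 500 else "insufficient"

    def withdraw(self, amount=500):
        # Same event, different action depending on the current state.
        if self.balance >= amount:
            self.balance -= amount
            return "cash dispensed"
        return "withdrawal refused"

acct = AccountFSM(balance=700)
assert acct.state == "funded"
assert acct.withdraw() == "cash dispensed"      # first $500 withdrawal succeeds
assert acct.state == "insufficient"             # the event changed the state
assert acct.withdraw() == "withdrawal refused"  # same event, different action
```

The last two assertions show the defining property of a finite state system: the same input (`withdraw`) produces a different output depending on what happened before.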
4. Decision Table Testing:-

It is a table which shows different combinations of inputs with their associated outputs; it is also known as a cause-effect table.

In EP and BVA we have seen that those techniques apply only to specific conditions or inputs. However, we may have different inputs which result in different actions being taken; in other words, we may have a business rule to test where different combinations of inputs result in different actions.
For testing such rules or logic decision table testing is used.
It is a black box test design technique.
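A decision table can be represented directly as a mapping from condition combinations to actions. The loan-approval rule below is invented purely to show the structure:

```python
# Minimal decision table sketch: each combination of condition values maps
# to one action. The business rule here is hypothetical.

# Conditions: (has_account, good_credit) -> action
decision_table = {
    (True,  True):  "approve loan",
    (True,  False): "manual review",
    (False, True):  "open account first",
    (False, False): "reject",
}

def decide(has_account, good_credit):
    return decision_table[(has_account, good_credit)]

# Decision table testing exercises every rule (every combination) once.
for conditions, expected_action in decision_table.items():
    assert decide(*conditions) == expected_action
```

With two boolean conditions there are four rules, so four test cases give complete coverage of the business logic.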
• White box testing :
White box testing tests the structure of the software or software component; it checks what is going on inside the software. It is also known as clear box testing, glass box testing, or structural testing. It requires knowledge of the internal code structure and good programming skills. It tests paths within a unit, and also the flow between units during their integration.
Advantages :
• As knowledge of the internal coding structure is a prerequisite, it becomes easy to find out which type of input/data can help in testing the application effectively
• It helps in removing the extra lines of code, which can bring in hidden defects
Disadvantage :
• A skilled tester is needed to carry out this type of testing, which increases the cost
• It is impossible to look into every bit of code to find hidden errors
WhiteBox Test Design Techniques
Typically Whitebox Test Design Techniques include:
1. Line Coverage or Statement Coverage
2. Decision Coverage
3. Condition Coverage
4. Multiple Condition Decision Coverage
5. Multiple Condition Coverage
1. Line Coverage or Statement Coverage:-
Statement coverage is also known as line coverage. The formula to calculate statement coverage is:

Statement Coverage = (Number of statements exercised / Total number of statements) * 100

Studies in the software industry have shown that black-box testing may actually achieve only 60% to 75% statement coverage, which leaves around 25% to 40% of the statements untested. To illustrate the principles of code coverage, let us take some pseudo-code which is not specific to any programming language; each line is counted as one statement for the purposes of the example, though this may not always be correct.

READ X
READ Y
IF X > Y
    PRINT "X is greater than Y"
ENDIF
Let us see how we can achieve 100% statement coverage for this pseudo-code: we can get 100% coverage with just one test set, in which variable X is greater than variable Y.
TEST SET 1: X=10, Y=5
A statement may be a single line, or it may be spread over several lines; a single line can also contain more than one statement. Some code coverage tools group statements that are always executed together into a block and consider them as one statement.
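The statement-coverage formula can be checked by hand on the pseudo-code above. Tracking executed lines with a set is an illustrative device, not how real coverage tools work:

```python
# Statement coverage of the pseudo-code, computed by hand: with TEST SET 1
# (X=10, Y=5) every one of the 5 statements executes, giving 100% coverage.

def compare(x, y, executed):
    executed.add(1)          # READ X
    executed.add(2)          # READ Y
    executed.add(3)          # IF X > Y
    if x > y:
        executed.add(4)      # PRINT "X is greater than Y"
    executed.add(5)          # ENDIF

executed = set()
compare(10, 5, executed)     # TEST SET 1: X=10, Y=5

total_statements = 5
coverage = len(executed) / total_statements * 100
assert coverage == 100.0
```

Running the same tracker with X=2, Y=10 would skip statement 4, giving 4/5 = 80% statement coverage, which is why one passing test set suffices here.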
2. Decision Coverage:-
Decision coverage is also known as branch coverage. Whenever there are two or more possible exits from a statement, such as an IF statement, a DO-WHILE, or a CASE statement, it is known as a decision. With a loop control statement like DO-WHILE, or with an IF statement, the outcome is either TRUE or FALSE, and decision coverage ensures that each outcome (i.e. TRUE and FALSE) of the control statement has been executed at least once.
Alternatively, you can say that the control statement IF has been evaluated both to TRUE and to FALSE. The formula to calculate decision coverage is:

Decision Coverage = (Number of decision outcomes executed / Total number of decision outcomes) * 100%

Research in the industry has shown that even when thorough functional testing has been done, it achieves only 40% to 60% decision coverage.

Decision coverage is stronger than statement coverage, and it requires more test cases to achieve 100% decision coverage. Let us take one example to explain decision coverage:
READ X
READ Y
IF X > Y
    PRINT "X is greater than Y"
ENDIF
To get 100% statement coverage only one test case is sufficient for this pseudo-code.
TEST CASE 1: X=10 Y=5
However, this test case won't give you 100% decision coverage, as the FALSE outcome of the IF statement is not exercised. In order to achieve 100% decision coverage we need to exercise the FALSE outcome, which is covered when X is less than Y. So the final test set for 100% decision coverage will be:
TEST CASE 1: X=10, Y=5
TEST CASE 2: X=2, Y=10
Note: 100% decision coverage guarantees 100% statement coverage but 100% statement coverage does not guarantee 100% decision coverage.
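The two test cases can be verified to give 100% decision coverage; outcome tracking with a set is again an illustrative device:

```python
# Decision coverage of the same pseudo-code: statement coverage needs only
# TEST CASE 1, but decision coverage needs both outcomes of the IF.

def compare(x, y, outcomes):
    if x > y:
        outcomes.add(True)
    else:
        outcomes.add(False)

outcomes = set()
compare(10, 5, outcomes)           # TEST CASE 1: exercises the TRUE outcome
assert outcomes == {True}          # 1 of 2 outcomes so far: 50% decision coverage

compare(2, 10, outcomes)           # TEST CASE 2: exercises the FALSE outcome
decision_coverage = len(outcomes) / 2 * 100
assert decision_coverage == 100.0
```

This also illustrates the note above: after test case 1 alone, statement coverage is already 100% while decision coverage is only 50%.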
• Gray Box Testing :
• Gray-box testing (International English spelling: grey-box testing) is a combination of white-box testing and black-box testing.
• The aim of this testing is to search for the defects if any due to improper structure or improper usage of applications.
• Gray-box testing is also known as translucent testing.
• Gray box testing is a software testing technique that uses a combination of black-box testing and white-box testing
• Gray box testing is not black box testing, because the tester does know some of the internal working of the software under test
• Gray box testing involves having access to internal data structures and algorithms for designing the test cases. Grey box testing is used when one has some knowledge, but not full knowledge, of the internals of the product.
• Non-functional testing :
Non-functional testing tests the characteristics of the software, such as how fast the response is, or how much time the software takes to perform an operation.
Some examples of Non-Functional Testing are:
1. Performance Testing 2. Load Testing 3. Stress Testing 4. Usability Testing 5. Security Testing
Non-functional testing focuses on the software's performance, i.e. how well it works.
Advantage :
• Creates confidence in your system
• Creates confidence in your offering to your customers
• Enables better planning of infrastructure
• Demonstrates compliance with legal requirements
• Keeps your IT group happy!
1. Performance Testing
Performance Testing is done to determine the software characteristics like response time, throughput or MIPS (Millions of instructions per second) at which the system/software operates.
Performance testing is done by generating some activity on the system/software; this is done with the performance test tools available. The tools are used to create different user profiles and inject different kinds of activity on the server, which replicates the end-user environment.

The purpose of performance testing is to ensure that the software meets the specified performance criteria, and to figure out which part of the software is causing the software's performance to go down.
Performance testing tools should have the following characteristics:
• They should generate load on the system being tested
• They should measure the server response time
• They should measure the throughput
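The three characteristics above can be sketched in miniature: generate load, record per-request response times, and compute throughput. `server_request` is a stand-in function; real tools drive an actual system instead:

```python
# Rough sketch of what a performance test tool measures. The "server" here
# is simulated with a 1 ms sleep; names and numbers are illustrative.

import time

def server_request():
    """Stand-in for the system under test."""
    time.sleep(0.001)  # simulated 1 ms of server work

def run_load(requests=50):
    start = time.perf_counter()
    response_times = []
    for _ in range(requests):
        t0 = time.perf_counter()
        server_request()
        response_times.append(time.perf_counter() - t0)
    elapsed = time.perf_counter() - start
    throughput = requests / elapsed               # requests per second
    avg_response = sum(response_times) / requests  # average response time (s)
    return throughput, avg_response

throughput, avg_response = run_load()
assert throughput > 0 and avg_response >= 0.001
```

A real load tool would additionally run many such loops concurrently (different user profiles) and report percentiles, not just the average.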
2. Load testing
• Load testing tests the software or component with increasing load: the number of concurrent users or transactions is increased, and the behavior of the system is examined to check what load the software can handle.
• The main objective of load testing is to determine the response time of the software for critical transactions and to make sure that it is within the specified limit.
• It is a type of performance testing.
• Load testing is non-functional testing.
3. Stress Testing
Stress testing tests the software with a focus on checking that the software does not crash when hardware resources (like memory, CPU, disk space) are insufficient.

Stress testing puts the hardware resources under extensive levels of stress in order to ensure that the software is stable under normal conditions.

In stress testing we load the software with a large number of concurrent users/processes which cannot be handled by the system's hardware resources.
Stress Testing is a type of performance testing and it is a non-functional testing.
Examples:
1. A stress test of the CPU can be done by running the software application at 100% CPU load for some days, which will help ensure that the software runs properly under normal usage conditions.

2. Suppose some software has a minimum memory requirement of 512 MB of RAM; the software application is then tested on a machine which has 512 MB of memory under extensive loads, to find out the system/software behavior.
4.Usability Testing
Usability means the software's capability to be learned and understood easily, and how attractive it looks to the end user. Usability testing is a black box testing technique. Usability testing tests the following features of the software:
1. How easy it is to use the software.
2. How easy it is to learn the software.
3. How convenient is the software to end user.
5.Security Testing
Security Testing tests the ability of the system/software to prevent unauthorized access to the resources and data.
Security Testing needs to cover the six basic security concepts: confidentiality,
integrity, authentication, authorization, availability and non-repudiation.
Confidentiality:-
A security measure which protects against the disclosure of information to parties other than the intended recipient. It is by no means the only way of ensuring security.
Integrity:-
A measure intended to allow the receiver to determine that the information it receives is correct.

Integrity schemes often use some of the same underlying technologies as confidentiality schemes, but they usually involve adding additional information to a communication to form the basis of an algorithmic check, rather than encoding the entire communication.
Authentication
The process of establishing the identity of the user.
Authentication can take many forms including but not limited to: passwords, biometrics, radio frequency identification, etc.
Authorization
The process of determining that a requester is allowed to receive a service or perform an operation.
Access control is an example of authorization.
Availability
Assuring information and communications services will be ready for use when expected. Information must be kept available to authorized persons when they need it.
Non-repudiation
A measure intended to prevent the later denial that an action happened or that a communication took place.
In communication terms this often involves the interchange of authentication information combined with some form of provable time stamp.
![Page 43: System Analysis & Design AND Software Engineering](https://reader031.vdocuments.site/reader031/viewer/2022012511/618903dd5317745272597faa/html5/thumbnails/43.jpg)
Sub.: SAD Chapter – 5 Software Development Life Cycle Models B.C.A. Sem - 3
Page 1 of 18
1. Waterfall Model: The waterfall model was the first process model to be introduced, and it is the earliest SDLC approach used for software development. It illustrates the software development process in a linear sequential flow; hence it is also referred to as a linear-sequential life cycle model. It is very simple to understand and use. In a waterfall model, each phase must be completed before the next phase can begin: any phase in the development process begins only when the previous phase is complete, and the phases do not overlap.
Waterfall Model design: The waterfall approach was the first SDLC model to be used widely in software engineering to ensure the success of a project. In "The Waterfall" approach, the whole process of software development is divided into separate phases. Typically, the outcome of one phase acts as the input for the next phase, sequentially.
3 Software Development Life Cycle Models, Automated Testing

Topics Covered
1. Waterfall Model
2. Iterative Model
3. V-Model
4. Spiral Model
5. Big Bang Model
6. Prototyping Model
7. Introduction: concept of freeware, shareware and licensed tools
8. Theory and practical case-study of testing tools:
   WinRunner
   LoadRunner
   QTP
   Rational Suite
Following is a diagrammatic representation of different phases of waterfall model.
The sequential phases in the Waterfall model are:
• Requirements
The first phase in the Waterfall model is requirements gathering: the end-user requirements are captured and a feasibility study is done. After this, a software requirements specification (SRS) document is prepared.
• Design
High-level and low-level software design is done in this phase.
• Implementation
Developers code the software and complete development in this phase.
• Testing
After developers complete coding and deliver the final build to the testers, testing starts in this phase.
• Deployment
After testing is done and the software is released, it is deployed in the customer's environment.
• Maintenance
In the maintenance phase, maintenance activities are performed for the deployed software.
All these phases are cascaded, with progress flowing steadily downwards (like a waterfall) through the phases. The next phase is started only after the defined set of goals for the previous phase is achieved and signed off, hence the name "Waterfall Model". In this model, phases do not overlap.
Advantages & Disadvantages of Waterfall Model:
The advantage of waterfall development is that it allows for departmentalization and
control. A schedule can be set with deadlines for each stage of development and a product can
proceed through the development process model phases one by one. Development moves from
concept, through design, implementation, testing, installation, troubleshooting, and ends up at
operation and maintenance. Each phase of development proceeds in strict order.
The disadvantage of waterfall development is that it does not allow for much reflection or revision. Once an application is in the testing stage, it is very difficult to go back and change something that was not well documented or thought through in the concept stage.
2. Iterative Model:
In the iterative model, the process starts with a simple implementation of a small set of the software requirements and iteratively enhances the evolving versions until the complete system is implemented and ready to be deployed. The iterative development model is also known as the incremental development model.
In the iterative development model there are a number of smaller, self-contained life cycle phases for the same project; it does not consist of one large development cycle as in the waterfall model. There are many variants of the iterative development model.
The delivery of software is divided into increments or builds, with each increment adding new functionality to the software product. Each subsequent increment needs testing of the new functionality, plus regression testing and integration testing of the new and existing functionality.
An iterative life cycle model does not attempt to start with a full specification of requirements. Instead, development begins by specifying and implementing just part of the software, which is then reviewed in order to identify further requirements. This process is then repeated, producing a new version of the software at the end of each iteration of the model.
Iterative Model design:
The iterative process starts with a simple implementation of a subset of the software requirements and iteratively enhances the evolving versions until the full system is implemented. At each iteration, design modifications are made and new functional capabilities are added. The basic idea behind this method is to develop a system through repeated cycles (iterative) and in smaller portions at a time (incremental).
Following is the pictorial representation of Iterative and Incremental model:
In the incremental model, the whole requirement is divided into various builds. During each iteration, the development module goes through the requirements, design, implementation and testing phases. Each subsequent release of the module adds functionality to the previous release. The process continues until the complete system is ready as per the requirements.
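As an illustrative sketch (the build contents and the trivial test stub are hypothetical, not from the source), each increment below adds functionality to the previous release and is followed by a regression pass over everything delivered so far:

```python
# Hypothetical feature sets delivered by successive builds/increments.
builds = [
    {"login"},                      # increment 1
    {"login", "search"},            # increment 2 adds search
    {"login", "search", "report"},  # increment 3 adds reporting
]

def regression_test(features):
    """Placeholder regression suite: re-test every feature delivered so far."""
    return all(isinstance(f, str) for f in features)

previous = set()
for number, build in enumerate(builds, start=1):
    new = build - previous          # functionality added by this increment
    assert regression_test(build)   # new AND existing functionality re-tested
    print(f"Build {number} adds {sorted(new)}; total features: {len(build)}")
    previous = build
```

The point of the sketch is the shape of the process: every increment re-runs the tests for the old features as well as the new ones, which is the regression/integration testing the model calls for.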
Iterative Model Pros and Cons:
The advantage of this model is that a working model of the system exists at a very early stage of development, which makes it easier to find functional or design flaws. Finding issues at an early stage of development makes it possible to take corrective measures within a limited budget.
The disadvantage of this SDLC model is that it is applicable mainly to large software development projects, because it is hard to break a small software system into further small serviceable increments/modules.
3. V-Model:
The V-Model is an SDLC model where execution of processes happens in a sequential manner in a V-shape. It is also known as the Verification and Validation model.
The issues seen in the traditional waterfall model gave birth to the V-Model; it was developed with the intention of addressing some of the problems found in the waterfall model. In the waterfall model, defects were found very late in the development life cycle because testing was not involved until the end of the project.
In the V-Model, testing begins as early as possible in the project life cycle; it is always good practice to involve testers in the earlier phases of the product life cycle. There are a variety of test activities that need to be carried out before the end of the coding phase. These activities should be carried out in parallel with the development activities so that testers can produce a set of test deliverables.
The V-Model illustrates that testing activities (verification and validation) can be integrated into each phase of the product life cycle. The verification part of testing is integrated into the earlier phases of the life cycle, which includes reviewing end-user requirements, design documents, etc.
The V-Model is an extension of the waterfall model and is based on the association of a testing phase with each corresponding development stage. This means that for every single phase in the development cycle there is a directly associated testing phase. This is a highly disciplined model, and the next phase starts only after completion of the previous phase.
V-Model design:
Under the V-Model, the testing phase corresponding to each development phase is planned in parallel with it. So there are Verification phases on one side of the 'V' and Validation phases on the other, and the Coding phase joins the two sides of the V.
The figure below illustrates the different phases in the V-Model of the SDLC.
Verification Phases:
In the above figure, the Verification phases are:
Business Requirement Analysis
System Design
Architectural Design
Module Design
Validation Phases:
In the above figure, the Validation phases are:
Unit Testing
Integration Testing
System Testing
Acceptance Testing
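The pairing between the verification (development) phases and validation (test) phases listed above can be sketched as a simple mapping; the pairing follows the usual reading of the V-Model figure and is an illustrative assumption, not taken verbatim from the source:

```python
# Each development phase on the left arm of the 'V' is directly associated
# with a test level on the right arm.
V_MODEL = {
    "Business Requirement Analysis": "Acceptance Testing",
    "System Design": "System Testing",
    "Architectural Design": "Integration Testing",
    "Module Design": "Unit Testing",
}

# Test planning for each development phase happens in parallel with that phase.
for dev_phase, test_phase in V_MODEL.items():
    print(f"{dev_phase} -> validated by {test_phase}")
```

Reading the mapping top to bottom traces the left arm of the V down to coding; reading the values bottom to top traces the right arm back up.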
V-Model Pros and Cons:
The advantage of the V-Model is that it is very easy to understand and apply. The simplicity of this model also makes it easier to manage.
The disadvantage is that the model is not flexible to changes: if there is a requirement change, which is very common in today's dynamic world, it becomes very expensive to make the change.
4. Spiral Model:
The spiral model was proposed by Boehm. It combines the idea of iterative development with the systematic, controlled aspects of the waterfall model: it is a combination of the iterative development process model and the sequential linear development model (the waterfall model), with a very high emphasis on risk analysis. It allows for incremental releases of the product, or incremental refinement through each iteration around the spiral.
In the spiral model, the radial dimension represents the cumulative cost incurred in finishing the steps so far, and the angular dimension represents the progress made in completing each cycle of the spiral.
In the first quadrant of the spiral model, each cycle begins with the identification of objectives for that cycle, the possible alternatives for achieving the objectives, and the constraints. The next step is to evaluate these alternatives against the objectives and constraints; the evaluation in this step is based on the risk perception of the project. The next step is to develop strategies that resolve the risks; this step involves activities like benchmarking and simulation. After this, the software is developed keeping the risks in mind, and finally the next stages are planned.
Following is a diagrammatic representation of spiral model listing the activities in each phase
Spiral Model Pros and Cons:
The advantage of the spiral life cycle model is that it allows elements of the product to be added when they become available or known. This assures that there is no conflict with previous requirements and design. This method is consistent with approaches that have multiple software builds and releases, and it allows an orderly transition to a maintenance activity. Another positive aspect is that the spiral model forces early user involvement in the system development effort.
On the other hand, it takes very strict management to complete such products, and there is a risk of running the spiral in an indefinite loop. So the discipline of change, and the extent to which change requests are taken, is very important for developing and deploying the product successfully.
5. Big Bang Model:
The Big Bang model is an SDLC model in which no specific process is followed. Development just starts with the required money and effort as the input, and the output is the developed software, which may or may not be as per customer requirements.
In the Big Bang model no formal development process is followed and very little planning is required. Even the customer is not sure about what exactly he wants, and the requirements are implemented on the fly without much analysis. Usually this model is followed for small projects where the development teams are very small.
Big Bang Model design and Application:
Big Bang Model design and Application:
The Big Bang model comprises focusing all possible resources on software development and coding, with very little or no planning. The requirements are understood and implemented as they come. Any change required may or may not need a revamp of the complete software.
This model is ideal for small projects with one or two developers working together, and is also useful for academic or practice projects. It is an ideal model for a product whose requirements are not well understood and for which no final release date is given.
Big Bang Model Pros and Cons:
The advantage of the Big Bang model is that it is very simple and requires very little or no planning. It is easy to manage, and no formal procedures are required.
However, the Big Bang model is a very high-risk model: changes in the requirements or misunderstood requirements may even lead to a complete reversal or scrapping of the project. It is suitable only for repetitive or small projects with minimum risks.
6. Prototyping Model:
The prototyping model was developed to counter the limitations of the waterfall model. The basic idea behind the prototyping model is that, instead of freezing the requirements before any design or coding can begin, a throwaway prototype is built to understand the requirements. This prototype is built based on the currently known requirements.
The throwaway prototype undergoes design, coding and testing, but none of these phases is formal. The prototype is developed and delivered to the client; the client uses the prototype and gets the actual feel of the system. By interacting with the system, the client gains a better understanding of the requirements of the desired system, which ultimately results in more stable requirements from the client.
The prototyping model is mostly used for projects where the requirements are not very clear from the client. When a prototype developed from the known but unclear requirements is given to the customer, he uses it, gets a feel of the software system, and then produces concrete requirements.
In the prototyping model, the focus of development is on the features that are not properly understood, as the prototype is to be discarded anyway. The well-known requirements therefore need not be implemented in the prototype.
Development Approach of Prototyping Model:
The development approach for the prototyping model is quick and dirty; the main focus is on quick development rather than a high-quality prototype. Minimal documentation is produced: documents such as the test plan, design documents and test case documents are not prepared.
Minimal testing is done, as testing consumes a major part of the expenditure.
Advantages of Prototyping Model:
We have seen that in the waterfall model the requirements need to be frozen before any other phase can proceed.
1. In the prototyping model, requirements are frozen at a later stage, by which time they are likely to be stable.
2. The client or end users gain experience with the prototype system, so it is more likely that the requirements specified after the prototype will be very close to the actual requirements.
3. It reduces the risks associated with the project.
Automated Testing
Introduction:-
Using automation tools to write and execute test cases is known as automation testing. No manual intervention is required while executing an automated test suite. Testers write test scripts and test cases using the automation tool and then group them into test suites.
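As a minimal sketch of test cases grouped into a suite and run without manual intervention, here is an example using Python's built-in unittest module (not one of the commercial tools discussed below); the `add` function stands in for a hypothetical application function under test:

```python
import unittest

# Hypothetical application function under test (an assumption for illustration).
def add(a, b):
    return a + b

class AddTests(unittest.TestCase):
    """A test case: several checks of one unit of functionality."""
    def test_positive(self):
        self.assertEqual(add(2, 3), 5)
    def test_negative(self):
        self.assertEqual(add(-2, -3), -5)

# Group the test cases into a suite and let the runner execute them unattended.
suite = unittest.TestLoader().loadTestsFromTestCase(AddTests)
result = unittest.TextTestRunner(verbosity=0).run(suite)
print(result.wasSuccessful())  # prints True when every test passes
```

Once written, the same suite can be re-run on every new build of the application, which is exactly the repeatability and reusability the sections below describe.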
Advantages of Automation Testing
If you have tested software or web applications manually, you will be aware of the drawbacks of manual testing. Manual testing is time-consuming, tedious, and requires heavy investment in human resources.
Time constraints often make it impossible to manually test every feature thoroughly before a software or web application is released. This leaves you wondering whether serious defects have gone undetected.
To address these issues, automation testing is used: you can create tests that check all aspects of the application and then execute those tests every time the application changes.
Benefits of Automation Testing
• Fast: Runs tests significantly faster than human users.
• Repeatable: Testers can test how the website or software reacts after repeated execution of the same operation.
• Reusable: Tests can be re-used on different versions of the software.
• Reliable: Tests perform precisely the same operation each time they are run, thereby eliminating human error.
• Comprehensive: Testers can build suites of tests that cover every feature in the software application.
• Programmable: Testers can program sophisticated tests that bring out hidden information.
| Manual Testing | Automation Testing |
| --- | --- |
| 1. Time consuming and tedious: since test cases are executed by human resources, it is very slow and tedious. | 1. Fast: automation runs test cases significantly faster than human resources. |
| 2. Huge investment in human resources: as test cases need to be executed manually, more testers are required. | 2. Less investment in human resources: test cases are executed by an automation tool, so fewer testers are required. |
| 3. Less reliable: manual testing is less reliable, as tests may not be performed with precision each time because of human error. | 3. More reliable: automated tests perform precisely the same operation each time they are run. |
| 4. Non-programmable: no programming can be done to write sophisticated tests that fetch hidden information. | 4. Programmable: testers can program sophisticated tests to bring out hidden information. |
Freeware:-
Freeware is copyrighted computer software which is made available for use free of charge, for an unlimited time. Authors of freeware often want to "give something to the community", but also want to retain control of any future development of the software.
Shareware:-
Shareware refers to commercial software that is copyrighted but may be copied and passed to others so that they can try it out, with the understanding that they will pay for it if they continue to use it.
Comparison chart

| | Freeware | Shareware |
| --- | --- | --- |
| About | Freeware refers to software that anyone can download from the Internet and use for free. | Shareware gives users a chance to try the software before buying it. |
| Inception | The term freeware was first used by Andrew Fluegelman in 1982, when he wanted to sell a communications program named PC-Talk. | In 1982, Bob Wallace produced PC-Write, a word processor, and distributed it as shareware. The term was first used in 1970 in InfoWorld magazine. |
| License and Copyright | A user license or EULA (End User License Agreement) is an important part of freeware. Each license is specific to the freeware. Copyright laws also apply to freeware. | Copyright laws also apply to shareware, but the copyright holder or author holds all the rights, with a few specific exceptions. |
| Features | All the features are free. | Most of the time, not all features are available, or they have limited use. To use all the features of the software, the user has to purchase it. |
| Distribution | Freeware programs can be distributed free of cost. | Shareware may or may not be distributed freely. In many cases, the author's permission is needed to distribute the shareware. |
| Example | Adobe PDF, Google Talk, Yahoo Messenger, VLC, MSN Messenger | WinZip, CuteFTP, AVG Antivirus |
Licensed Tools:-
A software license typically defines:
• What software functionality can be used. Functions provided by the software can be separately licensed; the licensed functions are referred to as features. When multiple features are defined, different versions of the product can be licensed by including different feature sets. For example, the license for the 'demo' version of the product could include the feature 'trial'; the 'standard' version the features 'trial' and 'basic'; and the 'professional' version the features 'trial', 'basic' and 'extend'.
• What versions of the software can be used.
• How many copies of the software can be running.
• The systems on which the software can be used.
• The period during which the software can be used.
These and other items in the license define how the software can be used and are collectively referred to as a license model.
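The feature-set idea in the example above can be sketched as follows; the feature names 'trial', 'basic' and 'extend' come from the example, while the data structure and function are illustrative assumptions, not any real licensing API:

```python
# Hypothetical license model: each product edition licenses a set of features.
EDITIONS = {
    "demo": {"trial"},
    "standard": {"trial", "basic"},
    "professional": {"trial", "basic", "extend"},
}

def is_licensed(edition, feature):
    """Return True if the given feature is licensed for the edition."""
    return feature in EDITIONS.get(edition, set())

print(is_licensed("standard", "basic"))   # the 'standard' edition licenses 'basic'
print(is_licensed("demo", "extend"))      # the 'demo' edition does not license 'extend'
```

The same product binary can thus ship to every customer, with the license model alone deciding which feature set unlocks.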
The license can be stored:
• In a license file: a text file, file_name.lic, whose contents are protected by signatures that are authenticated by the FlexNet Publisher licensing components.
• In trusted storage: a secure location whose contents are encrypted. Licenses are stored as fulfillment records, which can be read only by FlexNet Publisher licensing components.
The FlexEnabled application can obtain a license directly, either from a license file or from local trusted storage on the same machine. Some license models, described as served, provide licenses that are held centrally by a license server and used by FlexEnabled applications connected to the license server across a TCP/IP network.
WinRunner:-
WinRunner is an automated functional GUI testing tool that allows a user to record and play back UI interactions as test scripts. WinRunner is functional testing software for enterprise IT applications. It captures, verifies and replays user interactions automatically, so you can identify defects and determine whether business processes work as designed. WinRunner is one of the most widely used automated software testing tools.
Main features of WinRunner:
• Developed by Mercury Interactive.
• A functionality testing tool.
• Supports client/server and web technologies such as VB, VC++, D2K, Java, HTML, PowerBuilder, Delphi and Siebel (ERP).
• To support .NET, XML, SAP, PeopleSoft, Oracle applications and multimedia, we can use QTP.
• WinRunner runs on Windows only.
• XRunner runs only on UNIX and Linux.
• The tool was developed in C in a VC++ environment.
• To automate manual tests, WinRunner uses TSL (Test Script Language, a C-like language).
The main testing process in WinRunner is:
1) Learning: recognition of the objects and windows in our application by WinRunner is called learning. WinRunner 7.0 follows auto-learning.
2) Recording: WinRunner records our manual business operations in TSL.
3) Edit Script: depending on the corresponding manual test, the test engineer inserts checkpoints into the recorded script.
4) Run Script: during test script execution, WinRunner compares the tester-given expected values with the application's actual values and returns results.
5) Analyze Results: the tester analyzes the results given by the tool, to concentrate on defect tracking if required.
WinRunner (Features & Benefits):-
• Test functionality using multiple data combinations in a single test:
WinRunner's DataDriver Wizard eliminates programming to automate testing for large volumes of data. This saves testers significant amounts of time preparing scripts and allows for more thorough testing.
• Significantly increase the power and flexibility of tests without any programming:
The Function Generator presents a quick and error-free way to design tests and enhance scripts without any programming knowledge. Testers can simply point at a GUI object, and WinRunner will examine it, determine its class and suggest an appropriate function to be used.
• Use multiple verification types to ensure sound functionality:
WinRunner provides checkpoints for text, GUI, bitmaps, URL links and the database, allowing testers to compare expected and actual outcomes and identify potential problems with numerous GUI objects and their functionality.
• Verify data integrity in your back-end database:
Built-in database verification confirms values stored in the database and ensures transaction accuracy and the data integrity of records that have been updated, deleted and added.
• View, store and verify at a glance every attribute of tested objects:
WinRunner's GUI Spy automatically identifies, records and displays the properties of standard GUI objects and ActiveX controls, as well as Java objects and methods. This ensures that every object in the user interface is recognized by the script and can be tested.
• Maintain tests and build reusable scripts:
The GUI map provides a centralized object repository, allowing testers to verify and modify any tested object. These changes are then automatically propagated to all appropriate scripts, eliminating the need to build new scripts each time the application is modified.
• Test multiple environments with a single application:
WinRunner supports more than 30 environments, including Web, Java, Visual Basic, etc. In addition, it provides targeted solutions for such leading ERP/CRM applications as SAP, Siebel, PeopleSoft and a number of others.
LoadRunner:-
• LoadRunner is a performance testing tool; it can be used for load testing, stress testing and endurance testing.
• LoadRunner is available in Windows and UNIX versions (LoadRunner for Windows and LoadRunner for UNIX).
• We can install LoadRunner in two ways:
1) LoadRunner full installation
2) Load Generator installation
• LoadRunner is a protocol-based test tool, whereas QTP is an object-based test tool. We can select single or multiple protocols.
• LoadRunner was developed by Mercury Interactive in the 1990s and later taken over by HP in 2006.
• LoadRunner is a leading tool in the performance testing sector, where it has more than a 60% market share.
• Other competitor tools for performance testing are IBM RPT (Rational Performance Tester), Borland Silk Performer, QAWebload, JMeter, etc.
LoadRunner has three external components and one internal component.
External Components:
1) VUser Generator (Virtual User Generator)
2) Controller
3) Analysis
Internal Component:
1) Remote Agent Process

1) VUser Generator:
It is used for script generation, editing and deletion. A LoadRunner script is divided into three parts:
a) vuser_init (for recording the application launching operation)
b) Action (for recording the main action to be tested)
c) vuser_end (for recording the application closing operation)
For generating tests, recording is the only method in LoadRunner; LoadRunner uses a C-like Vuser script, which we can edit. We can insert transaction points (start and end) and rendezvous points in LoadRunner.
2) Controller:
It is the hub of LoadRunner; here we can generate and run scenarios. In the Controller we can create VUsers, add load generators and schedule our tests. Several schedules are available in LoadRunner, and we can choose our desired schedule:
General schedule: start all vusers at a time and stop all at a time.
Ramp-up: start some number of vusers at specified intervals (for example, 100 users every 10 seconds) and stop all vusers at a time.
Ramp-down: start all vusers at a time and stop vusers step by step (for example, 100 users every 10 seconds).
Ramp-up and Ramp-down: start and stop vusers step by step; that is, start some number of vusers at specified intervals (for example, 100 users every 10 seconds) and stop some number of vusers at specified intervals (for example, 100 users every 10 seconds).
Endurance testing: apply continuous load for a specific period of time, for example 10,000 vusers loaded continuously for 6 hours. With this approach we can test the reliability of our software application.
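The ramp-up arithmetic above can be sketched as a small function (a simplification for illustration, not LoadRunner's actual scheduler): with 100 users added every 10 seconds, the active vuser count grows in steps until it reaches the scenario total.

```python
def active_vusers(elapsed_seconds, step_users=100, step_seconds=10, total_users=1000):
    """Active vuser count under a ramp-up schedule: step_users are started
    every step_seconds until total_users is reached (assumed parameters)."""
    steps_completed = elapsed_seconds // step_seconds + 1  # first batch starts at t=0
    return min(steps_completed * step_users, total_users)

print(active_vusers(0))    # 100: the first batch starts immediately
print(active_vusers(35))   # 400: three more 10-second steps have completed
print(active_vusers(600))  # 1000: capped at the scenario's total vusers
```

A ramp-down schedule is the mirror image: the count starts at the total and decreases by the same step size at each interval.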
3) Analysis:
This component is used for viewing results, analyzing results and reporting results. LoadRunner provides result reporting in several formats, such as Document, HTML and Crystal Reports. After analyzing the results, we can report defects.
QTP – Introduction:-
• QTP stands for QuickTest Professional, a product of Hewlett-Packard (HP).
• This tool helps testers perform automated functional testing seamlessly, without monitoring, once script development is complete.
• HP QTP uses Visual Basic Scripting (VBScript) to automate applications.
• The scripting engine need not be installed separately, as it is available as part of the Windows OS.
• The current version of VBScript is 5.8, which is available as part of Windows 7.
• VBScript is not an object-oriented language but an object-based language.
• QTP is a functional testing tool that is best suited for regression testing of applications.
• QTP is a licensed/commercial tool owned by HP and is one of the most popular tools available in the market.
• It compares the actual and expected results and reports them in the execution summary.
Testing Tools:-
From a software testing context, a tool can be defined as a product that supports one or more test activities, right from planning and requirements through creating a build, test execution, defect logging and test analysis.
CLASSIFICATION OF TOOLS:-
Tools can be classified based on several parameters, including:
• The purpose of the tool
• The activities that are supported within the tool
• The type/level of testing it supports
• The kind of licensing (open source, freeware, commercial)
• The technology used
TYPES OF TOOLS:

| S.No | Tool Type | Used for | Used by |
| --- | --- | --- | --- |
| 1 | Test management tools | Test managing, scheduling, defect logging, tracking and analysis | Testers |
| 2 | Configuration management tools | Implementation, execution, tracking changes | All team members |
| 3 | Static analysis tools | Static testing | Developers |
| 4 | Test data preparation tools | Analysis and design, test data generation | Testers |
| 5 | Test execution tools | Implementation, execution | Testers |
| 6 | Test comparators | Comparing expected and actual results | All team members |
| 7 | Coverage measurement tools | Providing structural coverage | Developers |
| 8 | Performance testing tools | Monitoring performance and response time | Testers |
| 9 | Project planning and tracking tools | Planning | Project managers |
| 10 | Incident management tools | Managing the tests | Testers |
QTP History and Evolution:
QuickTest Professional was originally owned by Mercury Interactive and was later acquired by HP. Its original name was Astra QuickTest; it was later renamed QuickTest Professional, and the latest version is known as Unified Functional Testing (UFT).
VERSION HISTORY:

| Version | Vendor | Timeline |
| --- | --- | --- |
| Astra QuickTest v1.0 to v5.5 | Mercury Interactive | May 1998 to Aug 2001 |
| QuickTest Professional v6.5 to v9.0 | Mercury Interactive | Sep 2003 to Apr 2006 |
| HP QuickTest Professional v9.1 to v11.0 | Acquired and released by HP | Feb 2007 to Sep 2010 |
| HP Unified Functional Testing v11.5 to v11.53 | HP | 2012 to Nov 2013 |
Advantages:
• Developing automated tests using VBScript does not require a highly skilled coder and is
relatively easy compared with other object-oriented programming languages.
• Easy to use, with easy navigation, results validation and report generation.
• Readily integrates with a test management tool (HP Quality Center), which enables easy
scheduling and monitoring.
• Can also be used for Mobile Application Testing.
• Since it is an HP product, full support is provided by HP and its forums for addressing
technical issues.
Disadvantages:
• Unlike Selenium, QTP works in Windows operating system only.
• Not all browser versions are supported; testers need to wait for a patch to be released for
each major version.
• Being a commercial tool, its licensing cost is very high.
• Even though scripting time is low, execution time is relatively high, as it puts load on the
CPU and RAM.
Rational Suite:-
• Rational Suite editions are sets of tools customized for every member of your team.
Each Suite edition contains the tools from the Rational Suite Team Unifying Platform.
The Team Unifying Platform is a common set of tools that focus on helping your team
perform more effectively. Each Rational Suite edition also contains tools selected for a
specific practitioner on your development team. The following sections describe each
Suite edition and the tools they contain.
• To put these software development principles to work, Rational Software offers
Rational Suite, a family of market-leading software development tools supported by
the Rational Unified Process. These tools help you throughout the project lifecycle.
• Rational Suite packages the tools and the process into several editions, each of which
is customized for specific practitioners on your development team, including
analysts, developers, and testers.
• Alone, these tools have helped organizations around the world successfully create
software.
• Integrated into Rational Suite, they:
• Unify your team by enhancing communication and providing common tools.
• Optimize individual productivity with market-leading development tools
packaged in Suite editions that are customized for the major roles on your team.
• Simplify adoption by providing a comprehensive set of integrated tools that
deliver simplified installation, licensing, and user support plans
• Manage change. It is important to manage change in a trackable, repeatable, and
predictable way. Change management includes facilitating parallel development,
tracking and handling enhancement and change requests, defining repeatable
development processes, and reliably reproducing software builds.
As change propagates throughout the life of a project, clearly defined and repeatable
change process guidelines help facilitate clear communication about progress and,
more importantly, allow you to control the risks associated with change more
effectively.
Sub.: SAD Chapter – 7 Project Economics B.C.A. Sem - 3
Page 1 of 27
CONCEPTS OF PROJECT MANAGEMENT:
The term “Project” is very wide in meaning. A project is completed by performing a set of
activities. For example, construction of a building is a project. A project consumes resources.
Following resources are required for completing a project.
-Men
-Material
-Money
-Time
Thus, resources are limited and scarce in nature. If a person wants to build software, the
first thing that comes to mind is the financial budget within which the work should be
completed. Thus, resource constraint is a feature of all projects. If one wants to construct software
at an estimated cost of Rs. 20 lacs and within a period of 1 year, the project should be completed
subject to these constraints.
Thus, project is an organized programme of pre defined group of activities that are non-routine in
nature and that must be completed using the available resources within the given time limit.
According to the Encyclopaedia of Management, a project is an organized unit dedicated to the
attainment of a goal – the successful completion of a development project on time, within budget,
in conformance with pre-determined programme specifications.
4 Project Economics, Project Scheduling and Tracking
Topics Covered
1. Concept of project management
2. Project costing based on metrics
3. Empirical project estimation techniques
4. Decomposition techniques
5. Algorithmic methods
6. Automated estimation tools
7. Concepts of project scheduling and tracking
8. Effort estimation techniques
9. Task network and scheduling methods
10. Timeline chart
11. PERT chart
12. Monitoring and controlling progress
13. Graphical reporting tools
Project management involves the scientific application of modern tools and techniques in
planning, financing, implementing, monitoring, controlling and coordinating unique activities or
tasks to produce desirable outputs in accordance with the predetermined objectives within the
constraints of time and cost.
Project management consists of the following stages
1. planning
2. scheduling
3. implementation
4. controlling
5. monitoring
Project management aims at optimum utilization of resources. As a project becomes larger, its
complexities – planning, scheduling, implementing, controlling and monitoring – increase. For
effective management of large and complex projects, systematic techniques have to be followed.
Concepts of Project Management:
1. Objectives: A project has a set of objectives or a mission. Once the objectives are
achieved, the project is treated as completed.
2. Life cycle: A project has a life cycle. The life cycle consists of the following stages :
a. Design stage : Where detailed designs of the different project areas are worked out.
b. Implementation stage : Where the project is implemented as per the design.
c. Commissioning stage : Where the project is commissioned after implementation.
Commissioning of a project indicates the end of its life cycle.
3. Definite Time Limit: A project has a definite time limit that must be considered.
4. Team Work: A project normally spans diverse areas, each with persons specialized
in that area. Co-ordination among these diverse areas calls for teamwork.
Hence a project can be implemented only with teamwork.
5. Complexity: A project is a set of complex activities. Surveying the technology, choosing
the appropriate technology, procuring the appropriate machinery and equipment, hiring
the right kind of people, arranging for financial resources, and executing the project on
time through proper scheduling of the different activities all contribute to the complexity
of the project.
6. Sub-Contracting: Some of the activities are entrusted to sub-contractors to reduce the
complexity of the project. Subcontracting is advantageous if it reduces the complexity of
the project so that the project manager can coordinate the remaining activities of the
project more effectively.
7. Risk and Uncertainty: Risk and uncertainty go hand in hand with a project. A risk-free
project cannot be thought of. Even if a project appears to be risk free, it only means that
the risk element is not apparently visible on the surface; it remains hidden below the
surface. The risk factor will come to the surface when conditions become conducive to it.
Some risk elements can be foreseen, whereas others cannot.
8. Change: A project is not rigid over its life span. Changes occur throughout the life of a
project, varying from minor changes with very little impact to major changes with a big
impact on the project.
During the course of implementation, the technology may improve further, and
equipment with the latest technology may already have started arriving. In such a case, if
the equipment originally planned had not yet been procured, it would be wise to switch
over to the equipment with the latest technology.
Characteristics of a Good Project Manager:
1. Planning and organizational skills
2. Personnel management skills
3. Communications skills
4. Problem solving ability
5. Ability to take suggestions
6. Knowledge of project management methods and tools
7. Effective time management
8. Solving issues/problems immediately without postponing them
9. Risk taking ability
10. Familiarity with the organization
11. Tolerance for difference of opinion, delay, uncertainty
12. Knowledge of technology
13. Team building skills
14. Resource allocation skills
PROJECT COSTING BASED ON METRICS:
The success of project management may depend on whether you have adopted a
metrics program. A metric, by definition, is any type of measurement used to check some
quantifiable component of performance. A metric can be collected directly through observation,
such as the number of days late or the number of software defects found, or it can be derived
from directly observable quantities, such as defects per thousand lines of code or a cost
performance index (CPI). When used in a monitoring system to assess project or program health,
a metric is called an indicator, or a key performance indicator (KPI).
1. Metrics Management Defined:
Intense interest in metrics within the project management community has defined an entire subfield
called metrics management. Project metrics fall into three main categories:
1. Accurate project management measurements
2. Indicators of projects success (Customer satisfaction)
3. Indicators of business success
At the macro level, metrics management means identifying and tracking strategic objectives. This
is often done by the Project Management Office, if one exists. One project management
practitioner has even suggested that corporations should have a Chief Performance Officer (CPO),
who is responsible for metrics collection and analysis, and for communicating those metrics to
management for strategic decision making.
While reporting metrics to management, it is important to keep the time factor in mind. True
success or true failure may not be measurable until the project is formally closed. For example, a
new software application may succeed only six months into production, when it finally reaches its
planned usage targets.
At the micro level, metrics management means identifying and tracking planned objectives. It is
only by looking at task-level metrics that the status of higher-level work packages can be
ascertained and then reported to project managers and customers. Different types of
projects will require different types of metrics.
The following criteria are the most common tactical measures people want to be updated about :
| Tactical Measure | Question Answered | Sample Indicator |
|------------------|-------------------|------------------|
| Time | How are we doing against the schedule? | Schedule Performance Index (SPI) = Earned Value / Planned Value |
| Cost | How are we doing against the budget? | Cost Performance Index (CPI) = Earned Value / Actual Cost |
| Resources | Are we within anticipated limits of staff-hours spent? | Amount of hours overspent per software iteration |
| Scope | Have the scope changes been more than expected? | Number of change requests |
| Quality | Are the quality problems being fixed? | Number of defects fixed per user acceptance test |
| Action items | Are we keeping up with our action item list? | Number of action items behind schedule for resolution |
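The two earned-value indices above can be computed directly. A minimal sketch in Python; the rupee figures are illustrative, not from the text:

```python
# Earned-value indicators: SPI and CPI as defined in the table above.
def spi(earned_value, planned_value):
    """Schedule Performance Index: > 1 means ahead of schedule."""
    return earned_value / planned_value

def cpi(earned_value, actual_cost):
    """Cost Performance Index: > 1 means under budget."""
    return earned_value / actual_cost

ev, pv, ac = 80_000, 100_000, 90_000   # hypothetical figures in rupees
print(round(spi(ev, pv), 2))   # 0.8  -> behind schedule
print(round(cpi(ev, ac), 2))   # 0.89 -> over budget
```

Values below 1 flag trouble: here the project has earned only 80% of the value planned so far, at a higher cost than earned.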
A common saying you may hear about metrics is: “If it cannot be measured, it cannot be
managed.” Clearly, if metrics are not proper, it is difficult for a project manager to make
effective decisions.
If you want to put an effective metrics program in place, set aside time to plan the following
items in the following order:
What information are you going to collect?
How are you going to collect the information?
What methods will you use to process and analyze the information?
The best way to showcase your information is usually the simplest. Some project management
software packages include an automated dashboard feature, which may or may not fit your needs.
Visual displays, such as a simple graph to illustrate trends, or the classic “traffic light”, are
effective ways to show the status of key metrics indicators. A simple traffic light chart can be built
in Excel, using colours to show status. For example:
• Green means “So far so good”.
• Yellow means “Warning – keep an eye on me”.
• Red means “Urgent attention needed.”
Your traffic light report should show detailed indicators and one rollup indicator for status at a
glance.
If using a traffic light format, one has to know when to change the colours of the lights.
For example, for a schedule-based indicator, the rule can be: “Turn the indicator yellow when the
number of overdue tasks exceeds an agreed threshold.” Indicators can also be split into monthly
target ranges so that trends in progress can be visualized gradually. It is better to turn the traffic
light yellow when the overall schedule is five days late during Month 1 than to turn it yellow when
you are 15 days late during Month 3, when it is too late to react.
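The threshold logic described above can be sketched as follows; the monthly limits are illustrative assumptions, not values from the text:

```python
# Hypothetical monthly thresholds (days late) for a schedule indicator.
# Limits are tight early in the project so slippage is flagged while
# there is still time to react.
YELLOW_THRESHOLD = {1: 5, 2: 10, 3: 15}   # month -> days late

def traffic_light(days_late, month):
    yellow = YELLOW_THRESHOLD.get(month, 15)
    if days_late >= 2 * yellow:
        return "Red"       # urgent attention needed
    if days_late >= yellow:
        return "Yellow"    # warning - keep an eye on me
    return "Green"         # so far so good

print(traffic_light(6, month=1))   # "Yellow": caught early, in time to react
print(traffic_light(6, month=3))   # "Green": the same slippage passes unnoticed
```

The point of the monthly ranges is visible here: six days of slippage triggers a warning in Month 1 but would be invisible under the looser Month 3 limit.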
2. Metrics Benchmarking :
If you continue to collect metrics about the projects in your company’s portfolio, you are creating a
valuable database of internal benchmarking data. We can compare our metrics with other projects in
our portfolio to see where process improvements can be made, or where compliance requirements
should be introduced. We can also compare our metrics to benchmarked project data from other
companies in the same industry.
The challenge is to make sure that the project status includes metrics that express the value of
project management. As you can see, there are many tools and techniques available to
communicate and manage metrics at the project level.
SOFTWARE COST ESTIMATION TECHNIQUES
To decrease product cost, to monitor and identify cost and schedule risk factors, and to
increase the skills of key staff members, a software cost estimation process is followed.
This process is responsible for tracking and refining cost estimates throughout the project life
cycle. It also helps in developing a clear understanding of the factors which influence
software development costs.
The cost of estimating software varies according to the nature and type of the product to be
developed. The cost of estimating an operating system, for example, will be higher than that of
an application program. Depending upon the nature of the project to be estimated, different
estimation techniques can be used.
1. Empirical Techniques : Empirical estimation techniques are based on making an educated
guess of the project parameters. While using this technique, prior experience with development
of similar products is helpful. Although empirical estimation techniques are based on common
sense, different activities involved in estimation have been formalized over the years. Two
popular empirical estimation techniques are :
Expert Judgment Technique & Delphi Cost Estimation
2. Heuristic Techniques : Estimation in these techniques is performed with the help of
mathematical equations which are based on historical data or theory. In order to estimate costs
accurately, various inputs, including software size and other parameters, are provided to these
techniques.
3. Analytical Techniques : Estimation in these techniques has a scientific basis. Certain
assumptions regarding the project to be estimated are made in order to derive the required
results.
a. Expert Judgment Technique :
Expert judgment is one of the most widely used estimation techniques. In this approach, an
expert makes an educated guess of the problem size after analyzing the problem thoroughly.
Usually, the expert estimates the cost of the different components (i.e. modules or subsystems) of
the system and then combines them to arrive at the overall estimate. However, this technique is
subject to human error and individual bias (misunderstandings).
Also, it is possible that the expert may overlook some factors inadvertently (without realizing).
Further, an expert making an estimate may not have experience and knowledge of all aspects of a
project. For example, he may be conversant with the database and user interface parts but may not
be very knowledgeable about the computer communication part.
A more refined form of expert judgement is estimation made by a group of experts. Estimation
by a group of experts minimizes factors such as individual oversight, lack of familiarity with a
particular aspect of a project, personal bias, and the desire to win a contract through overly
optimistic estimates.
However, the estimate made by a group of experts may still exhibit bias on issues where the
entire group of experts may be biased due to reasons such as political considerations. Also, the
decision made by the group may be dominated by overly assertive members.
b. Delphi Cost Estimation :
The Delphi cost estimation approach tries to overcome some of the shortcomings of the expert
judgment approach. Delphi estimation is carried out by a team comprising a group of experts
and a coordinator. In this approach, the coordinator provides each estimator with a copy of the
software requirements specification (SRS) document and a form for recording his cost estimates.
Estimators complete their individual estimates anonymously and submit them to the coordinator.
In their estimates, the estimators mention any unusual characteristics of the product that have
influenced their estimates. The coordinator prepares and distributes a summary of the responses
of all the estimators, including any unusual rationale noted by any of them.
Based on this summary, the estimators re-estimate. This process is iterated for several rounds.
However, no discussion among the estimators is allowed during the entire estimation process. The
idea behind this is that if any discussion is allowed among the estimators, then many estimators
may easily get influenced by the rationale of an estimator who may be more experienced or senior.
After the completion of several iterations of estimations, the coordinator takes the responsibility of
compiling the results and preparing the final estimate.
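The Delphi rounds described above can be sketched numerically. The estimates below are invented for illustration; the point is that the spread typically narrows round by round even though the estimators never discuss their figures directly:

```python
import statistics

# Hypothetical anonymous estimates (person-months) over three Delphi rounds.
# Between rounds the coordinator circulates only the summary.
rounds = [
    [10, 25, 14, 40],   # round 1: wide spread
    [14, 22, 16, 28],   # round 2: narrower after seeing the summary
    [17, 20, 18, 22],   # round 3: close to consensus
]

for i, estimates in enumerate(rounds, start=1):
    median = statistics.median(estimates)
    spread = max(estimates) - min(estimates)
    print(f"Round {i}: median = {median}, spread = {spread}")

# The coordinator compiles the final estimate from the last round.
final_estimate = statistics.median(rounds[-1])
```

Here the spread falls from 30 person-months in round 1 to 5 in round 3, while the median stays near 19: the summary-only feedback damps outliers without letting a senior voice dominate.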
Software project effort can be estimated using three macro-estimation techniques :
• Estimation using equations
• Estimation using comparison
• Estimation using analogy.
It is important that you do not rely on a single estimation method for a project. Using a
combination of both micro and macro estimation techniques has proven to give the most accurate
results. In addition, a formal risk assessment is an essential prerequisite to project estimation.
Estimating using Equations: One technique for software project estimation involves the use of
regression equations. These equations allow you to calculate an estimate for a particular project
metric, such as effort or duration, by simply inserting the calculated or estimated size of your
project into the appropriate equation.
This estimation technique is commonly used to produce indicative project estimates early in the
life of a project. This technique is not sufficiently accurate to produce an estimate that could be
relied on for quoting or business case requirements. This estimate can be used for an early
indication of whether a project idea is feasible, or when you are short of time and detailed
information.
A set of regression equations derived from repository data may be used to calculate the
following project metrics:
• Project Delivery Rate (person hours per unit)
• Effort (person hours)
• Duration (elapsed hours)
• Speed of delivery (units delivered per elapsed calendar month)
Equations are provided for:
• Platform (Mainframe, Mid-range, PC & Mixed)
• Language Type (3GL, 4GL & Application Generator)
Given the combination of platform and language type, it is easy to use the equations. Having
selected the appropriate equation, you insert the functional size of your project and/or the
maximum team size to produce your estimate.
Estimating using Comparison: Estimation using comparison allows you to achieve more detailed
estimates than can be gained using regression equations. Estimates using comparison are aligned
more specifically to the attributes of the project being planned rather than being based on those of
the “average” project in the repository.
Estimation using comparison involves a technique based on comparison of your planned project
with a number of projects in the repository that have similar attributes to the planned project.
Comparison based estimation involves considering the attributes of the project to be estimated,
selecting projects with similar attributes from the repository, then using the median values for
effort, duration etc. from the selected group of projects to produce an estimate of project delivery
rate and speed of delivery, and consequently project effort and duration.
The steps are as follows.
1. Define the platform applicable to your project and identify that subset of data for
estimating, benchmarking and research.
2. Define the other attributes of the project to be estimated.
3. Search the identified sub set of data for projects with the same attributes.
4. For each of the planned project’s attributes, obtain the median project delivery rate and speed
of delivery for all the projects in the repository exhibiting that attribute.
5. Determine the average of the medians of the project delivery rate and speed of delivery.
6. The result is your estimate.
Because the resulting values are aligned to the specific attributes of the project to be estimated,
they are better estimates of that project’s project delivery rate and speed of delivery than the values
obtained from the equations that reflected the ‘average’ project in the database.
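The six steps can be sketched as follows. The repository rows, attribute names and sizes are hypothetical stand-ins for real benchmark data:

```python
import statistics

# A minimal sketch of comparison-based estimation over a tiny, invented
# repository. "pdr" is project delivery rate (person hours per unit).
repository = [
    {"platform": "PC",        "language": "4GL", "pdr": 8.0,  "speed": 60},
    {"platform": "PC",        "language": "4GL", "pdr": 10.0, "speed": 50},
    {"platform": "PC",        "language": "3GL", "pdr": 14.0, "speed": 35},
    {"platform": "Mainframe", "language": "3GL", "pdr": 20.0, "speed": 25},
]

def estimate_by_comparison(planned, size_units):
    # Step 3: select repository projects sharing the planned project's attributes.
    similar = [p for p in repository
               if p["platform"] == planned["platform"]
               and p["language"] == planned["language"]]
    # Steps 4-5: take the median project delivery rate of the similar projects.
    pdr = statistics.median(p["pdr"] for p in similar)
    # Step 6: effort in person hours for the planned size.
    return pdr * size_units

effort = estimate_by_comparison({"platform": "PC", "language": "4GL"}, size_units=200)
print(effort)  # 1800.0 person hours (median PDR 9.0 x 200 units)
```

Because the median comes only from projects matching the planned attributes, the result tracks the planned project more closely than an all-repository average would.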
Estimating using Analogy: Analogy based estimation is another technique for early life cycle
macro-estimation. Analogy based estimation involves selecting one or two completed projects that
most closely match the characteristics of your planned project. The chosen project(s) or analogues
are then used as the base for your new estimate.
Analogy based estimation differs from the comparison based estimation above in that
comparison based estimation uses the medians from a group of similar projects, whereas analogy
operates with one, or perhaps two, past projects selected on the basis of their close similarity to
the proposed project. Comparing a planned project to a past project is commonly used informally
when “guesstimating”; consequently it is a familiar technique to the practitioner. Estimating
software project effort by analogy involves a number of steps :
1. Establish the attributes of your planned project. (e.g. size, language, type etc)
2. Measure or estimate the values of those project attributes.
3. Search the repository for a project that closely matches the attributes of your planned project.
4. Use the known development effort from the selected project (analogue) as an initial estimate
for the target project.
5. Compare each of the chosen attributes. (size, platform etc.)
6. Establish or adjust the initial effort estimate in light of the differences between the analogue
and your planned project.
It is very important that you use your judgment to exclude inappropriate analogues and not be
tempted to adopt a “likely” analogue without due care.
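The analogy steps can be sketched similarly; the past-project data and the simple linear size adjustment in step 6 are illustrative assumptions:

```python
# A sketch of analogy-based estimation: pick the single closest past project
# and adjust its known effort for the size difference.
past_projects = [
    {"name": "Payroll",  "size": 120, "language": "4GL", "effort": 1500},
    {"name": "Billing",  "size": 300, "language": "3GL", "effort": 5200},
    {"name": "Helpdesk", "size": 150, "language": "4GL", "effort": 1900},
]

def estimate_by_analogy(size, language):
    # Step 3: the analogue is the closest-sized past project in the same language.
    candidates = [p for p in past_projects if p["language"] == language]
    analogue = min(candidates, key=lambda p: abs(p["size"] - size))
    # Steps 4-6: use the analogue's known effort, scaled by the size ratio.
    return analogue["effort"] * size / analogue["size"]

print(round(estimate_by_analogy(size=140, language="4GL")))  # 1773 person hours
```

Note that the `min(...)` selection is exactly where judgment is needed in practice: an automated "closest match" can pick an inappropriate analogue, which is the caution stated above.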
DECOMPOSITION TECHNIQUES:
Decomposition involves dividing each project deliverable into smaller and smaller pieces until
there is enough detail to support scheduling, estimating and control.
Software project estimation is a form of problem solving and in most cases, the problem to be
solved is too complex to be considered in one piece. For this reason, we decompose the problem,
re-characterizing it as a set of smaller problems.
Decomposition of project scope generally involves the following activities.
1. Determine your main project deliverables.
2. Create a high-level Work Breakdown Structure (WBS) by ‘chunking’ work into smaller
tasks.
3. Continue to break down high-level tasks into smaller tasks.
4. Create a system for tracking each task.
5. Verify that the resulting tasks are manageable.
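The activities above can be sketched with a small, hypothetical WBS represented as a nested dictionary (the deliverable and task names are invented for illustration):

```python
# Step 2-3: a high-level WBS 'chunked' into smaller tasks as a nested dict.
wbs = {
    "Billing System": {
        "Requirements": {"Interview users": {}, "Write SRS": {}},
        "Design":       {"Database design": {}, "UI design": {}},
        "Construction": {"Code modules": {}, "Unit test": {}},
    }
}

def leaf_tasks(node):
    """Step 4: walk the WBS and list the lowest-level (trackable) tasks."""
    tasks = []
    for name, children in node.items():
        tasks.extend(leaf_tasks(children) if children else [name])
    return tasks

print(leaf_tasks(wbs))
# ['Interview users', 'Write SRS', 'Database design', 'UI design',
#  'Code modules', 'Unit test']
```

The leaves of the tree are the work packages that get tracked and verified as manageable (steps 4 and 5); stopping decomposition at this level avoids the excessive breakdown warned about below.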
(Figure: decomposition of a project into phases, deliverables, work packages and activities.)
Excessive decomposition may lead to more work without much value for the time spent. It can also
lead to inefficient use of resources and decreased work efficiency. So, knowing a few basics about
work packages helps in deciding the level of decomposition.
As you decompose requirements, keep in mind that you can structure and manage the project
deliverables in various ways. Some of the most common approaches for different types of
organizations include:
1. Spreading deliverables across project phases : This is typical for projects conducted in
a waterfall fashion and for schedule-oriented projects.
2. Spreading deliverables across knowledge areas : Project teams organized by areas of
expertise (such as operations, legal or finance) usually use this technique in their projects.
3. Spreading deliverable across processes : This is typical for process-oriented projects.
4. Spreading deliverables across sub-projects : This is most effective for big projects with
added complexity, where deliverables are specific to parts of the project rather than to the
project as a whole.
5. Organizing deliverables hierarchically, from major deliverables to sub-deliverables :
This is best for product-oriented projects.
ALGORITHMIC METHODS
1. Constructive Cost Model (COCOMO)
Software cost estimation is an important part of the software development process, and
COCOMO offers a powerful instrument to predict software costs.
The Constructive Cost Model is an algorithmic software cost estimation model developed by
Barry W. Boehm. The model uses a basic regression formula with parameters that are derived from
historical project data and current as well as future project characteristics.
COCOMO II provides more support for modern software development processes and an updated
project database. The need for the new model arose as software development technology moved
from mainframe and overnight batch processing to desktop development, code reusability, and
the use of off-the-shelf software components.
Boehm proposed three levels of the model:
• Basic
• Intermediate
• Detailed
The first level, Basic COCOMO, is good for quick, early, rough order-of-magnitude estimates of
software costs, but its accuracy is limited by its lack of factors to account for differences in
project attributes.
The Intermediate COCOMO model computes software development effort as a function of
program size and a set of fifteen “cost drivers” that include subjective assessments of product,
hardware, personnel and project attributes.
The Advanced or Detailed COCOMO model incorporates all characteristics of the intermediate
version, with an assessment of the cost drivers’ impact on each step (analysis, design, etc.) of the
software engineering process.
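The Basic COCOMO level can be applied directly. The coefficients below are Boehm's published constants for the three development modes; the 32-KLOC example project is illustrative:

```python
# Basic COCOMO: effort E = a * (KLOC)^b person-months, and development
# time D = c * E^d months, with Boehm's constants per project mode.
COEFFS = {
    # mode: (a, b, c, d)
    "organic":       (2.4, 1.05, 2.5, 0.38),
    "semi-detached": (3.0, 1.12, 2.5, 0.35),
    "embedded":      (3.6, 1.20, 2.5, 0.32),
}

def basic_cocomo(kloc, mode="organic"):
    a, b, c, d = COEFFS[mode]
    effort = a * kloc ** b        # person-months
    duration = c * effort ** d    # elapsed months
    return round(effort, 1), round(duration, 1)

effort, months = basic_cocomo(32, mode="organic")   # a 32-KLOC organic project
print(effort, months)
```

A 32-KLOC organic-mode project comes out at roughly 91 person-months over about 14 months, the kind of rough order-of-magnitude figure the text says Basic COCOMO is suited for.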
Advantages of COCOMO 81:
1. COCOMO is transparent – one can see how it works unlike other models such as SLIM.
2. Drivers are particularly helpful to the estimator to understand the impact of different factors that
affect project costs.
Disadvantages of COCOMO 81:
1. It is hard to accurately estimate KDSI (thousand delivered source instructions) early in the
project, when most effort estimates are required.
2. KDSI is actually not a size measure; it is a length measure.
3. It is extremely vulnerable to mis-classification of the development mode.
4. Success depends largely on tuning the model to the needs of the organization, using historical
data which is not always available.
Constructive Cost Model II (COCOMO II) is a model that allows one to estimate cost, effort, and
schedule when planning a new software development activity. COCOMO II is the latest major
extension to the original COCOMO 81 model published in 1981. It consists of three sub-models,
each offering increased fidelity the further along one is in the project planning and design
process. Listed in increasing fidelity, these sub-models are called the Application Composition,
Early Design, and Post-Architecture models.
COCOMO II can be used for the following major decision situations.
1. Making investment or other financial decisions involving a software development effort.
2. Setting project budgets and schedules as a basis for planning and control.
3. Deciding on or negotiating trade-offs among software cost, schedule, functionality,
performance or quality factors.
4. Making software cost and schedule risk management decisions.
5. Deciding which parts of a software system to develop, reuse, lease or purchase.
6. Making legacy software inventory decisions : what parts to modify, phase out, outsource, etc.
7. Setting mixed investment strategies to improve the organization’s software capability, via
reuse, tools, process maturity, outsourcing, etc.
2. SLIM (Software Lifecycle Management) Model :
SLIM is one of the first algorithmic cost models. It is based on the Norden/Rayleigh function and
is generally known as a macro-estimation model (it is meant for large projects). SLIM uses
historical data from past projects for estimation, and it also considers other project parameters,
characteristics, attributes and KLOC in its estimation calculation. SLIM enables a software cost
estimator to perform the following functions :
• Calibration: Fine-tuning the model to represent the local software development environment by
interpreting a historical database of past projects.
• Build: Building an information model of the software system by collecting software characteristics,
personnel attributes, computer attributes, etc.
• Software Sizing: SLIM uses an automated version of the lines of code (LOC) costing technique.
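The Norden/Rayleigh basis of SLIM is usually expressed through Putnam's software equation, Size = C · K^(1/3) · td^(4/3). Solving it for effort gives a minimal sketch of the kind of calculation SLIM automates; the productivity parameter and project values below are illustrative assumptions, not QSM data:

```python
def slim_effort(size_loc, productivity, schedule_years):
    """Effort from Putnam's software equation.

    Size = C * K**(1/3) * td**(4/3), so K = (Size / (C * td**(4/3))) ** 3,
    where K is life-cycle effort (person-years), C is a productivity
    parameter calibrated from past projects, and td is the development
    time in years.
    """
    return (size_loc / (productivity * schedule_years ** (4 / 3))) ** 3

# Illustrative inputs only: 100 KLOC, productivity parameter 5000,
# 2-year schedule.
effort_person_years = slim_effort(100_000, 5000, 2.0)
```

Because K varies with the inverse fourth power of td, the equation makes SLIM's best-known point vividly: compressing the schedule even slightly inflates effort dramatically, which is also why the model is so sensitive to the technology (productivity) factor, as noted below.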
Advantages:
• It provides a set of software development management tools that support the entire program life
cycle.
• Offers value-added planning
• It simplifies strategic decision making.
• Supports “what-if” analysis.
• It allows report and graph generation.
Disadvantages:
Sub.: SAD Chapter – 7 Project Economics B.C.A. Sem - 3
• It works well only on large projects.
• The software size needs to be estimated in advance in order to use this model.
• Estimates are extremely sensitive to the technology factor.
• Model is also sensitive to size estimate.
• Tool is considered to be fairly complex.
• It assumes a waterfall life cycle and does not cover the spiral model.
AUTOMATED ESTIMATION TOOLS :
The right tools can make all the difference in helping to deliver better software faster. We
should focus on providing effective tools that help estimate, measure and report software project
performance, manage function point data and utilize historical and industry benchmark data. Tools
are designed to support a software measurement program from project inception through final
performance reporting.
Some of the popular automated tools are given below :
Q/P Management Group has established the world's largest functional-size-based software metrics
benchmark database. Q/P has been collecting data since 1990. Project and application data are
added to the database annually after rigorous analysis and verification to ensure the highest degree
of data integrity in the industry. The database consists of statistics on thousands of projects and
applications, including:
• Project productivity for new development and enhancement efforts.
• Project cost and labour rates.
• Application maintenance productivity.
• Application support cost.
• Application and project quality.
• Time to market – schedule duration.
• Project staffing.
Q/P Management Group’s estimating model provides a structured framework to collect and
analyze the data required to estimate your organization’s software projects. The rough order of
magnitude estimates are based on current industry average benchmark data which are driven by the
adjusted function point size of a project.
This model produces an estimate that has not been calibrated for project risk or other factors that
can impact project productivity and quality. The estimate will include Project Effort, Project
Schedule and Project Staffing.
The required input data for the estimating model includes the project adjusted function point size,
the selection of the type of project (new development or enhancement) and identification of the
development delivery platform for the project.
Software Measurement, Reporting and Estimating (SMRS) is a tool that automates software
project estimating and the reporting of project performance metrics. Organizations can use SMRS
to estimate project size, effort, schedule and staffing early in the lifecycle using in-house and/or
industry benchmarks.
Once the project is complete SMRS is used to capture project data, report the performance of
development projects, and compare the performance to in-house and/or industry benchmarks.
SMRS's intuitive interface allows a user to quickly develop project estimates, enter key project
statistics, compare performance to benchmarks, analyze the results of the comparisons and
publish the report in either Word or PowerPoint format. SMRS has been designed to work with
PQMPlus and the Function Point WORKBENCH in order to share relevant data to aid in the
production of software measurement reports.
The Function Point WORKBENCH is a network-ready Windows-based software tool which
makes it easy for an organization to implement the Function Point Analysis technique for sizing,
estimating and evaluating software.
The Function Point WORKBENCH is specifically designed to be scalable for effective use by
individual counters as well as for large distributed IT environments.
The Function Point WORKBENCH and SMRS have been designed to work together to share
relevant data to aid in the production of software measurement reports.
PQMPlus : The intelligent Software Measurement and Estimating Tool
PQMPlus is a productivity/quality measurement system developed for software development
project managers and measurement specialists. PQMPlus is a benchmarking and measurement tool
with a robust function point repository that provides project estimating based on historical data,
project scheduling and risk assessments. PQMPlus and SMRS have been designed to work together
to share relevant data to aid in the production of software measurement reports.
1. Project Scheduling
The objective of software project scheduling is to create a set of engineering tasks that
will enable the team to complete the job on time.
Once a network of software engineering tasks is developed, responsibilities can be
assigned for each task, their execution can be tracked and controlled, and risks can be
addressed if necessary.
Building of large software systems usually involves a large number of interdependent
tasks, which are difficult to understand and manage without a schedule. The progress of a
software project cannot be evaluated in practice without a schedule.
The steps for performing project scheduling, after effort and size estimation, include
allocation of effort and duration to each task and design of a task (activity) network to
enable the team to meet the established delivery deadline.
Principles of Software Project Scheduling
- compartmentalization: the project must be decomposed into manageable activities
and tasks;
- interdependency: the relationships between the tasks have to be established, because
some activities will depend on others, while other activities may
occur independently;
- time allocation: each task must be allocated a number of time units, and possibly a
start date and a completion date;
- effort validation: every project has a defined number of staff, and the schedule must
not demand more effort than is available;
- responsibilities: every task should be assigned to a specific team member;
- outcomes: every task should have a defined result;
- milestones: every task should be associated with a milestone.
Relationship Between People and Effort
• Adding people to a project after it is behind schedule often causes the schedule to slip further
• The relationship between the number of people on a project and overall productivity is not
linear (e.g. 3 people do not produce 3 times the work of 1 person, if the people have to work
in cooperation with one another)
• The main reasons for using more than 1 person on a project are to get the job done more
rapidly and to improve software quality.
Project Effort Distribution
• The 40-20-40 rule:
o 40% front-end analysis and design
o 20% coding
o 40% back-end testing
Effort Allocation
• Generally accepted guidelines are:
o 2-3% planning
o 10-25% requirements analysis
o 20-25% design
o 15-20% coding
o 30-40% testing and debugging
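A quick sketch of applying these guidelines: taking the midpoint of each range (an assumption of convenience, since the ranges do not sum exactly to 100%) and normalizing gives a rough per-phase allocation for a project's total effort:

```python
def allocate_effort(total_hours):
    """Split total effort using midpoints of the guideline ranges above."""
    shares = {  # midpoint of each guideline range (fractions of the total)
        "planning": 0.025,
        "requirements analysis": 0.175,
        "design": 0.225,
        "coding": 0.175,
        "testing and debugging": 0.35,
    }
    # Normalize so the allocation sums to total_hours even though the
    # midpoints do not add up to exactly 1.0.
    scale = total_hours / sum(shares.values())
    return {phase: round(fraction * scale, 1)
            for phase, fraction in shares.items()}

allocation = allocate_effort(1000)  # e.g. a 1000-hour project
```

Notice that coding receives one of the smallest shares, which is the whole point of the 40-20-40 rule above.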
• Basic Principles for SE Scheduling
• Compartmentalization – define distinct tasks
• Interdependency- parallel and sequential tasks
• Time allocation - assign person-days, start time, and ending time
• Effort validation - be sure resources are available
• Defined responsibilities — people must be assigned
• Defined Outcomes- each task must have an output
• Defined milestones - review for quality
Software Project Types
1. Concept development - initiated to explore new business concept or new application of
technology
2. New application development - new product requested by customer
3. Application enhancement - major modifications to function, performance, or interfaces
(observable to user)
4. Application maintenance - correcting, adapting, or extending existing software (not
immediately obvious to user)
5. Reengineering - rebuilding all (or part) of a legacy system
Factors Affecting Task Set
• Size of project
• Number of potential users
• Mission criticality
• Application longevity
• Requirement stability
• Ease of customer/developer communication
• Maturity of applicable technology
• Performance constraints
• Embedded/non-embedded characteristics
• Project staffing
• Reengineering factors
Concept Development Tasks
• Concept scoping - determine overall project scope
• Preliminary concept planning - establishes the development team's ability to undertake the
proposed work
• Technology risk assessment - evaluates the risk associated with the technology implied by
the software scope
• Proof of concept - demonstrates the feasibility of the technology in the
software context
• Concept implementation - concept represented in a form that can be used to sell it to
the customer
• Customer reaction to concept - solicits feedback on new technology from customer
Scheduling
• Task networks (activity networks) are graphic representations of task interdependencies
and can help define a rough schedule for a particular project.
• Scheduling tools should be used to schedule any non-trivial project.
• Program evaluation and review technique (PERT) and critical path method (CPM) are
quantitative techniques that allow software planners to identify the chain of dependent tasks
in the project work breakdown structure (WBS) that determines the project duration.
• Timeline (Gantt) charts enable software planners to determine which tasks will need to be
conducted at a given point in time (based on estimates for effort, start time, and duration of
each task).
• The best indicator of progress is the completion and successful review of a defined software
work product.
• Time-boxing is the practice of deciding a priori the fixed amount of time that can be spent on
each task. When the task's time limit is exceeded, development moves on to the next task
(with the hope that a majority of the critical work was completed before time ran out).
Tracking Project Schedules
• Periodic project status meetings with each team member reporting progress and problems
• Evaluation of results of all work product reviews
• Comparing actual milestone completion dates to scheduled dates
• Comparing actual project task start-dates to scheduled start-dates
• Informal meetings with practitioners to have them subjectively assess progress to date and
future problems
• Use earned value analysis to assess progress quantitatively
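The earned value analysis mentioned in the last bullet reduces to a few standard ratios. The sketch below uses the classic BCWS/BCWP/ACWP terminology; the numeric values are illustrative only:

```python
def earned_value_metrics(bcws, bcwp, acwp):
    """Classic earned-value indices.

    bcws: budgeted cost of work scheduled (planned value)
    bcwp: budgeted cost of work performed (earned value)
    acwp: actual cost of work performed
    """
    return {
        "SPI": bcwp / bcws,  # schedule performance index (<1 = behind schedule)
        "CPI": bcwp / acwp,  # cost performance index (<1 = over budget)
        "SV": bcwp - bcws,   # schedule variance
        "CV": bcwp - acwp,   # cost variance
    }

# Illustrative figures: 200 hours planned, 180 earned, 220 actually spent.
m = earned_value_metrics(bcws=200, bcwp=180, acwp=220)
```

Here SPI below 1 and CPI below 1 together signal a project that is both behind schedule and over budget, quantifying what the status meetings above can only report anecdotally.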
Tracking Increment Progress for OO Projects
• Technical milestone: OO analysis complete
o All hierarchy classes defined and reviewed
o Class attributes and operations are defined and reviewed
o Class relationships defined and reviewed
o Behavioral model defined and reviewed
o Reusable classes identified
• Technical milestone: OO design complete
o Subsystems defined and reviewed
o Classes allocated to subsystems and reviewed
o Task allocation has been established and reviewed
o Responsibilities and collaborations have been identified
o Attributes and operations have been designed and reviewed
o Communication model has been created and reviewed
• Technical milestone: OO programming complete
o Each new design model class has been implemented
o Classes extracted from the reuse library have been implemented
o Prototype or increment has been built
• Technical milestone: OO testing complete
o The correctness and completeness of the OOA and OOD models has been reviewed
o Class-responsibility-collaboration network has been developed and reviewed
o Test cases are designed and class-level tests have been conducted for each class
o Test cases are designed, cluster testing is completed, and classes have been integrated
o System level tests are complete
Effort Estimation
• Estimating
o The process of forecasting or approximating the time and cost of completing
project deliverables.
o The task of balancing the expectations of stakeholders and the need for control
while the project is implemented.
• Types of Estimates
o Top-down (macro) estimates: analogy, group consensus, or mathematical
relationships
o Bottom-up (micro) estimates: estimates of elements of the work breakdown
structure
Estimating Techniques
The following estimating techniques fit into either the top-down or bottom-up approach. No one
estimating technique is ideal for all situations; each has its own strengths and weaknesses. When
estimating a project, you need to decide which technique is appropriate and what adjustments, if
any, are needed.
1. Ballpark Estimating
With this estimating technique you use a combination of time, effort, and peak staff values
derived from the QSM SLIM completed-projects database. Each row represents a consistent set of
estimates that may be determined based on any one of the variables, estimated using expert
judgment.
This technique can be used at any point in the lifecycle. It can be used early in the lifecycle,
even when no historical information is available. Once the estimate is developed, a comparative
estimate can be developed using a proportional technique. The final estimates can be compared
to other estimates for analysis.
Using the Business Case documentation, a proposed solution is visualized, and using expert
judgment, the Modules, Interfaces, Configuration Items, or Programs in the visualized solution
can be identified and entered into Top Down Estimate by CI worksheet.
Using the factors table, a size can be determined between Very Very Small and Very Large. At
the same time the Category with Size can be determined. Estimates are developed using the Peak
Staff, Time in Months, and Effort in Hours columns for guidance in determining each size
estimate.
Using the size and category within size, the table is completed by locating the effort hours and
ESLOC in the top down factors table and entering them into the Effort Estimate and ESLOC
columns in the Top Down Estimate by CI worksheet. After all Configuration Items are
estimated, the totals can be calculated.
If there are logical groupings of configuration items, they may be numbered. The groupings can
then be used to combine individual configuration items into packages of work for estimation by
group.
The list of configuration items can be sorted by group and combined into a single estimate.
The total of the group can be used to create one estimate for the group using the top down factors
table to locate the total. The group total can then be used for a single estimate for size (ESLOC).
The ESLOC estimate is based on 100 lines of code per function point, with a productivity
index assumed to be slightly less than the average productivity index of companies in the SLIM
database at SEI CMM Level 2.
2. Proportional Percentage Estimating
With this estimating technique you use the size of one component to proportionally estimate the
size of another. For example, the Design effort might be estimated as 22% of the Requirements
effort; Construction 45% of Requirements effort, and Testing/Pilot 33% of Requirements effort.
This technique is very effective when used appropriately, when the estimated value really does
depend proportionally on another factor. There are different proportional models for different
types of life cycles, which must be considered in developing proportional estimates.
Consideration needs to be given to whether the current estimate is for an effort that is more like a
Development/Enhancement effort or a Maintenance effort. If any portion of the labor
distribution is estimated, it can be used to expand the known portion into a total estimate.
For example:
Labor Distribution Standard for "Development / Enhancement" Work Types:
- Project Management (start-up, manage, close): 15.00%
- Quality Assurance Reviews: 5.00%
- Analysis (Requirements) [Solution Definition]: 15.00%
- External Design: 13.00%
- Internal Design: 12.00%
- Procedures and Training: 6.00%
- Construction (Code/Unit Test) [Solution Generation]: 27.00%
- Test [Solution Validation]: 18.00%
- Implementation [Solution Deployment]: 9.00%
Labor Distribution Standard for "Maintenance" Work Types:
- Project Management (start-up, manage, close): 12.00%
- Quality Assurance Reviews: 8.00%
- Analysis (Requirements) [Solution Definition]: 9.00%
- External Design: 9.00%
- Internal Design: 18.00%
- Procedures and Training: 4.00%
- Construction (Code/Unit Test) [Solution Generation]: 40.00%
- Test [Solution Validation]: 15.00%
- Implementation [Solution Deployment]: 5.00%
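These distribution tables let you expand a single known phase estimate into a project total. A minimal sketch (the 540-hour construction figure is illustrative):

```python
def expand_from_phase(known_hours, phase_fraction):
    """Expand a known phase effort into a total-project estimate.

    If Construction is 27% of the labor distribution and has been
    estimated at 540 hours, the total is 540 / 0.27, roughly 2000 hours.
    """
    return known_hours / phase_fraction

total_hours = expand_from_phase(540, 0.27)
design_hours = 0.13 * total_hours  # External Design at 13% of the total
```

Once the total is known, every other row of the distribution table follows by simple multiplication, which is what makes this technique so quick when one phase has already been sized carefully.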
3. Comparative
Using this estimating technique, you compare the project at hand, the target project, with other
projects similar in scope and type to produce an estimate. The comparison is normally
performed at a high-level with little reference to detail. This technique relies heavily on the
experience of the estimators and their ability to gauge the target project in relation to the
comparative data available.
For example, you have been asked to estimate the custom development for a new
telecommunications system. You also happen to know of a similar type of project that was also
custom developed. Since this reference project covered roughly 50% of the functionality needed
by the new system, you could develop a comparative estimate for the new telecommunications
systems by doubling the actual effort from your reference project.
You could even add an additional percentage of effort to account for some of the unknowns in
the new system. The comparison does not have to be at a project or phase level. You can use
this technique for lower-level tasks such as developing a reporting sub-system or a customer
maintenance window.
This technique is useful as a “sanity check” for an estimate produced by another method. It can
also be useful for estimating low-level components such as documentation, printer volume,
processor capacity, or programming a specific system component.
The major weakness of this technique is that a project is not thoroughly assessed. Therefore, it
should be used only if time is limited or a relatively large uncertainty in the estimate can be
tolerated. This technique also requires some type of historical data to compare against.
4. Expert Judgment
This technique relies on the extensive experience and judgment of the estimator to compare the
requirements for the component being estimated against all projects in his/her previous
experience. It differs from the comparative technique in that the reference projects are not
explicitly identified.
5. Proportional Estimating
With this estimating technique you use the size of one component to proportionally estimate the
size of another. For example, Quality Assurance might be estimated as 3% of the total project
effort; the design effort might be estimated as 40% of the coding effort; the number of printers
might be estimated as one for every 6 users. Previous personal experience or estimating
guidelines can help provide these proportionality factors.
This technique is very effective when used appropriately, when the estimated value really does
depend proportionally on another factor. However, it should not be used as a crutch to pass the
estimating responsibility on to some other component. Using this technique will magnify
estimating errors being made elsewhere.
Proportional estimates can be used in combination with other estimating techniques. For
example, you might use widget counting to derive the estimate for the Requirements phase of a
project and then use proportional factors to estimate the Design, Code/Unit Test, System and
Integration Testing, Implementation, and Deployment phases.
6. Widget Counting
Using this estimating technique, you identify project characteristics that can be counted and that
are performed on a recurring basis (the “widget”), estimate the effort for each type of widget, and
determine the total effort by applying these estimates against the total number of widgets.
Typical widgets may be menu choices, windows, screens, reports, database entities, database
fields, requirement specifications, pages of documentation, and test cases. You may assign
complexity factors to each type of widget (simple, medium, complex) and weight the effort
accordingly.
Use the following criteria when determining whether you should use this estimating
technique:
• There must be enough detailed information to allow you to identify and count the widgets.
• The effort to develop or complete the project must be reasonably proportional to the
number of widgets, even though the project is not necessarily made up purely of widgets.
• You must be able to produce an estimate for the effort of each widget type. This is
typically done by using the comparative approach based on historical metrics data or by
prototyping the implementation of one of the widgets.
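A minimal sketch of widget counting under these criteria; the widget types and hours-per-widget values below are illustrative assumptions, not metrics data:

```python
# Hours per widget by (type, complexity): illustrative values only.
EFFORT_PER_WIDGET = {
    ("screen", "simple"): 8,
    ("screen", "complex"): 24,
    ("report", "simple"): 6,
    ("report", "complex"): 16,
}

def widget_count_estimate(inventory):
    """Total effort = sum over widget types of (count * unit effort)."""
    return sum(count * EFFORT_PER_WIDGET[key]
               for key, count in inventory.items())

hours = widget_count_estimate({
    ("screen", "simple"): 10,   # 10 simple screens
    ("screen", "complex"): 3,   # 3 complex screens
    ("report", "simple"): 5,    # 5 simple reports
})  # 10*8 + 3*24 + 5*6 = 182 hours
```

The complexity-weighted table is where the third criterion bites: each unit-effort entry must itself come from historical data or a prototype, or the whole estimate inherits its error.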
7. Function Point Analysis
This estimating technique is suited for projects that are based on straightforward database input,
output, maintenance, and inquiry, with low algorithmic processing complexity. Function Point
Analysis is the basis for several automated estimating tools. The basic steps involved in this
estimating technique include:
1. Decomposing the project or application into a defined set of function types, described
below.
2. Assigning a complexity to each of these function types.
3. Tallying the function types and applying pre-defined weighting factors to these totals to
derive a single unadjusted function point count.
4. Adjusting this function point count based on the overall project complexity.
5. Translating the function point count to an effort estimate based on a function point
delivery rate. (This is probably the most difficult step.)
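Steps 1-4 can be sketched directly. The complexity weights below are the standard published IFPUG values; the sample counts and the 1.05 adjustment factor are illustrative:

```python
# Standard IFPUG complexity weights per function type.
FP_WEIGHTS = {
    "input":          {"simple": 3, "average": 4, "complex": 6},
    "output":         {"simple": 4, "average": 5, "complex": 7},
    "inquiry":        {"simple": 3, "average": 4, "complex": 6},
    "internal_file":  {"simple": 7, "average": 10, "complex": 15},
    "interface_file": {"simple": 5, "average": 7, "complex": 10},
}

def function_points(counts, complexity_adjustment=1.0):
    """Unadjusted FP = sum(count * weight); adjusted FP = UFP * VAF."""
    ufp = sum(n * FP_WEIGHTS[ftype][cplx]
              for (ftype, cplx), n in counts.items())
    return ufp * complexity_adjustment

# Illustrative counts for a small system.
fp = function_points({
    ("input", "average"): 10,
    ("output", "average"): 8,
    ("internal_file", "simple"): 4,
}, complexity_adjustment=1.05)
```

Step 5 (converting the adjusted count to effort) is deliberately left out: it requires a function point delivery rate calibrated from your own historical data, which is why the text calls it the most difficult step.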
Function points are viewed from the perspective of the system boundary and comprise the
following types:
• Input—Any data or control information provided by the user that adds or changes data
held by the system. An input can originate directly from the user or from user-generated
transactions from an intermediary system. Inputs exclude transactions or files that enter
the system as a result of an independent process.
• Output—Any unique unit of data or control information that is procedurally generated by
the system for the benefit of the user. This would include logical units forming part of
printed reports, output display screens, audit trails, and messages.
• Inquiry—Each unique input/output combination, in which the online user defines an
inquiry as input and the system responds immediately with an output. An inquiry is
distinct from an output in that it is not procedurally generated. The result of an inquiry
may be a display/report or a transaction file that is accessible by the user.
• Logical Internal File—Any logical group of data held by the system. This includes
database tables and records on physical files describing a single logical object. A logical
file may span many physical files (e.g., index, data, and overflow); however, it is treated
as a single logical internal file for sizing purposes.
• External Interface File—Each logical group of data that is input to or output from the
system boundary to share that data with another system.
Advantages of using Function Point Analysis include:
• The project is viewed from the perspective of the user rather than the developer, that is, in
terms of user functions rather than programs, files, or objects.
• The estimates can be developed from knowledge of the requirements without a detailed
design solution being known. This provides a level of independence from the specific
hardware platform, languages, developer's skill level, and the organization's line of
business.
• The use of Function Point Analysis is accepted internationally. There is also a users
group, the International Function Point Users Group (IFPUG), which has established
standards to help encourage consistency in counting function points.
Disadvantages of using this estimating approach include:
• This approach does not accurately estimate systems that are largely algorithmic, such as
military systems, space systems, robotics, process control, and middleware.
• Function Points can be complicated to administer. Formal training is needed before you
can consistently count, and therefore track, function points.
• The use of function points is not widely accepted within IT Services. As a result, we
have not gathered any estimating guidelines or metrics for function point estimating.
• Since the concept of Function Point Analysis was developed with older technologies and
development approaches, it is not certain how well this concept applies to newer
technologies and development approaches such as object-oriented development.
However, variations of Function Point Analysis are being developed to address the newer
technologies and development approaches.
8. Feature Points
This estimating technique is an extension to the function point analysis technique. It involves
adding a number of algorithms with an average complexity weight and changing the function
point weighting in other areas.
For typical management information systems, there is little difference in the results between
Function Points and Feature Points; both techniques result in nearly the same number of "points".
For real-time or highly algorithmic systems, however, the results can be significantly different
between these two techniques: the Function Point count for such systems totals only 60 to 80
percent of the Feature Point count. Note: Before using this estimating technique, you should read
one of the published books on this subject.
Estimation Techniques: Strengths and Weaknesses

Comparative
Strengths:
• Estimate can be very accurate if a suitable analogy can be identified.
Weaknesses:
• Historical data repository required.
• Often difficult to find comparable projects.

Expert Judgment
Strengths:
• Estimate can be extremely accurate.
• Identifies areas where requirements clarification is needed.
• Identifies requirements tradeoffs.
Weaknesses:
• Must be verified by another method.
• High risk; may not be repeatable by anyone other than the "expert".
• Single data point.

Proportional
Strengths:
• Effective when the estimated value really does depend proportionally on another factor
(e.g., software management, quality assurance, configuration management).
Weaknesses:
• Requires previous personal experience or experience-based guideline metrics for
proportionality factors.
• Can magnify estimating errors made in other areas.

Widget Counting
Strengths:
• Effective for systems that can be characterized by widgets.
Weaknesses:
• Magnifies size errors if widget effort estimates are incorrect.
• Assumes effort to develop the system is proportional to the number of widgets, even
though the system is not necessarily made up purely of widgets.

Function Point Analysis
Strengths:
• Well suited for standard Management Information System projects with little internal
processing complexity, especially those using 4GL, report writer, or CASE tool
environments.
• Project viewed from user, not developer, perspective (e.g., user functions rather than
programs, files).
• Estimates can be developed from knowledge of requirements without a detailed design
solution being known.
• Provides independence from hardware platform, languages, developers' skill at code
efficiency, business of organization.
• Consistency encouraged through established international standards for function point
counting.
Weaknesses:
• Does not accurately estimate systems that are largely algorithmic such as military
systems, space systems, robotics, and process control.
• Does not have overall acceptance within IT Services.
• Can be complicated to administer.
• Requires formal training.

Feature Point
Strengths:
• Same strengths as Function Point Analysis, with the added benefit of accounting for
algorithms and internal processing complexity.
Weaknesses:
• Does not yet have overall acceptance.
• Can be complicated to administer.
• Requires formal training.
3. Defining a Task Network
3.1 Task Set Selection
A task set is a collection of software engineering tasks, milestones
and deliverables that must be accomplished to complete the project.
3.2 Development of a Task Network
A task network, also called an activity network, is a graphic representation of the task flow
of a project. It depicts the major software engineering tasks from the selected process model,
arranged sequentially or in parallel.
Consider the task of developing a software library information system. The scheduling of this system must account for the following requirements (the subtasks are given in italic):
- initially, the work should start with the design of a control terminal (T0) class, for no more than eleven working days;
- next, the classes for the student user (T1) and the faculty user (T2) should be designed in parallel, assuming that the elaboration of the student user takes no more than six days, while the faculty user needs four days;
- when the design of the student user completes, the network protocol (T4) has to be developed, a subtask that requires eleven days, and simultaneously the network management routines (T5) have to be designed, for up to seven days;
- after the termination of the faculty user subtask, a library directory (T3) should be made, for nine days, to maintain information about the different users and their addresses;
- the completion of the network protocol and management routines should be followed by the design of the overall network control (T7) procedures, for up to eight days;
- the library directory design should be followed by a subtask, elaboration of library staff (T6), which takes eleven days;
- the software engineering process terminates with testing (T8), for no more than four days.
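The durations and dependencies above are enough to compute each subtask's earliest finish day, and hence the minimum project duration; a minimal sketch:

```python
from functools import lru_cache

# Durations (working days) and dependencies for subtasks T0-T8,
# taken from the requirements above.
DURATION = {"T0": 11, "T1": 6, "T2": 4, "T3": 9, "T4": 11,
            "T5": 7, "T6": 11, "T7": 8, "T8": 4}
DEPENDS_ON = {"T0": [], "T1": ["T0"], "T2": ["T0"], "T3": ["T2"],
              "T4": ["T1"], "T5": ["T1"], "T6": ["T3"],
              "T7": ["T4", "T5"], "T8": ["T6", "T7"]}

@lru_cache(maxsize=None)
def earliest_finish(task):
    """Earliest day a task can finish: the longest predecessor
    chain plus the task's own duration."""
    start = max((earliest_finish(p) for p in DEPENDS_ON[task]), default=0)
    return start + DURATION[task]

for t in sorted(DURATION):
    print(t, earliest_finish(t))
print("minimum project duration:", earliest_finish("T8"))  # 40 days
```

T8 cannot finish before day 40, because T7 (through T0, T1, T4) forms the longest chain of predecessors: 11 + 6 + 11 + 8 + 4 = 40.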
[Figure: Task network for the library information system. Tasks T0 (Make Control Terminal), T1 (Make Student User), T2 (Make Faculty User), T3 (Make Library Directory), T4 (Develop Network Protocol), T5 (Design Network Management), T6 (Elaborate Library Staff), T7 (Design Network Control) and T8 (Testing) are connected through milestones M1–M5, with dates running from 1/9/2009 at the start to 9/10/2009 at completion.]
4. Timeline Charts
Timeline charts, also called Gantt charts, are developed for the entire project, for tracking and control of all the activities that need to be performed during project development.
The timeline chart is a kind of table with the following elements:
- the left-hand column contains the project tasks;
- the horizontal bars indicate the duration of each task;
- the diamonds indicate milestones.
Gantt Chart Basics
Gantt charts are a project planning tool that can be used to represent the timing of tasks required to
complete a project. Because Gantt charts are simple to understand and easy to construct, they are used by
most project managers for all but the most complex projects.
• In a Gantt chart, each task takes up one row.
• Dates run along the top in increments of days, weeks or months, depending on the total length of
the project.
• The expected time for each task is represented by a horizontal bar whose left end marks the
expected beginning of the task and whose right end marks the expected completion date.
• Tasks may run sequentially, in parallel or overlapping.
• As the project progresses, the chart is updated by filling in the bars to a length proportional to the
fraction of work that has been accomplished on the task. This way, you can get a quick reading of
project progress by drawing a vertical line through the chart at the current date.
• Completed tasks lie to the left of the line and are completely filled in.
• Current tasks cross the line and are behind schedule if their filled-in section is to the left of the
line and ahead of schedule if the filled-in section stops to the right of the line.
• Future tasks lie completely to the right of the line.
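The bar-per-task layout described above can be sketched as a simple text rendering; the task names and day numbers here are invented for illustration:

```python
def gantt_rows(tasks):
    """One row per task: the name, then a bar running from the
    expected start day to start + duration."""
    width = max(start + dur for _, start, dur in tasks)
    return [f"{name:<8}|{' ' * start + '#' * dur:<{width}}|"
            for name, start, dur in tasks]

# Illustrative schedule (day units), not a real plan; Test overlaps Build.
tasks = [("Design", 0, 5), ("Build", 5, 8), ("Test", 9, 4)]
for row in gantt_rows(tasks):
    print(row)
# Design  |#####        |
# Build   |     ########|
# Test    |         ####|
```

Reading the rows top to bottom shows the sequencing at a glance: Build starts when Design ends, while Test runs partly in parallel with Build.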
In constructing a Gantt chart, keep the tasks to a manageable number (no more than 15 or 20) so that the
chart fits on a single page. More complex projects may require subordinate charts which detail the timing
of all the subtasks which make up one of the main tasks. For team projects, it often helps to have an
additional column containing numbers or initials which identify who on the team is responsible for the
task.
Often the project has important events which you would like to appear on the project timeline, but which are not tasks. For example, you may wish to highlight when a prototype is complete or the date of a
design review. You enter these on a Gantt chart as "milestone" events and mark them with a special
symbol, often an upside-down triangle.
[Figure: Gantt chart for the library project. Each task T0–T8 occupies one row, with horizontal bars spanning its scheduled days and diamonds marking milestones M0–M5; the day scale runs from day 0 through project completion.]
![Page 88: System Analysis & Design AND Software Engineering](https://reader031.vdocuments.site/reader031/viewer/2022012511/618903dd5317745272597faa/html5/thumbnails/88.jpg)
Sub.: SAD Chapter – 9 CAD Project Management Tool B.C.A. Sem - 3
Page 1 of 10
Introduction:
Computer-aided design (CAD) is the use of computer systems to assist in the creation, modification, analysis, or optimization of a design. CAD software is used to increase the productivity of the designer, improve the quality of design, improve communications through documentation, and create a database for manufacturing. CAD output is often in the form of electronic files for print, machining, or other manufacturing operations.
Computer-aided design is used in many fields. Its use in electronic design is known as Electronic Design Automation (EDA). Its use in mechanical design is known as Mechanical Design Automation (MDA); it is also known as computer-aided drafting (CAD), which describes the process of creating a technical drawing with the use of computer software.
CAD software for mechanical design uses either vector-based graphics to depict the objects of traditional drafting, or raster graphics showing the overall appearance of designed objects. However, it involves more than just shapes. As in the manual drafting of technical and engineering drawings, the output of CAD must convey information, such as materials, processes, dimensions, and tolerances, according to application-specific conventions.
CAD may be used to design curves and figures in two-dimensional (2D) space; or curves, surfaces,
and solids in three-dimensional (3D) space.
CAD is an important industrial art extensively used in many applications, including the automotive, shipbuilding, and aerospace industries; industrial and architectural design; prosthetics; and many more. CAD is also widely used to produce computer animation for special effects in movies, advertising, and technical manuals, often called DCC (digital content creation).
9 CAD Project Management Tool and UML
Topic Covered
1. MS VISIO for designing and documentation of project
2. MS Project for Controlling and managing project
3. Steps to insert the Visio drawings into other Microsoft Office
documents.
4. Steps for creating new diagram with Visio.
5. UML designing and skill based tools
• Class diagram
• Use case diagram
• Activity diagram
MS-VISIO FOR DESIGNING AND DOCUMENTATION
• Microsoft Office Visio is a diagramming and vector graphics application and is part of the Microsoft Office suite.
• The product was first introduced in 1992 by the Shapeware Corporation. It was acquired by Microsoft in 2000.
• One of the versions, MS Visio 2010 for Windows, is available in three editions: Standard, Professional and Premium.
• The Standard and Professional editions share the same interface, but the Professional edition has additional templates for more advanced diagrams and layouts, as well as unique capabilities intended to make it easy for users to connect their diagrams to data sources and display their data graphically.
• The Premium edition features three additional diagram types, as well as intelligent rules, validation and subprocesses (diagram breakdown).
The new Microsoft Visio has:
- Features designed to make it easier to create diagrams including quicker access to frequently
used tools, new & updated shapes and patterns, and improved themes and effects.
- Tools to make teamwork easy, such as the ability to work together on the same diagram at the
same time.
- Improved touch support, including for Windows 8 and Visio Services in the new Microsoft
SharePoint.
- Options to make your diagrams more dynamic by linking shapes to real-time data.
- The ability to share your diagrams with others through a browser (even if they don't have Visio installed) through Microsoft Office 365 or SharePoint.
• Let's discuss an example of drawing a flow chart using Microsoft Visio:
Step 1: To open a new Visio drawing, go to the Start Menu and select Programs -> Microsoft Office -> Microsoft Visio 2007 (Figure 1).
Step 2: Move your cursor over "Template Category" and select "Flowchart".
![Page 90: System Analysis & Design AND Software Engineering](https://reader031.vdocuments.site/reader031/viewer/2022012511/618903dd5317745272597faa/html5/thumbnails/90.jpg)
Sub.: SAD Chapter – 9 CAD Project Management Tool B.C.A. Sem - 3
Page 3 of 10
• Creating a new diagram:
Step 1: Select a shape from the Shapes menu, and drag it to the workspace.
Step 2: On the toolbar, click the Connector tool. It will appear highlighted and will remain active until it is deselected.
Step 3: With the first shape still selected, drag a second shape to the workspace. The shapes are connected automatically when the Connector tool is turned on.
Step 4: Continue adding shapes until you have enough to include all of the steps in the business process being outlined. The example here illustrates a multi-step process.
Step 5: Shapes can be resized or moved, and the connectors will remain intact. At this point, your diagram should look something like the following example.
• Adding text to a diagram and formatting the text:
Step 1: Double-click on a shape to enter text. There is no need to create a text box (as required with Microsoft Word or PowerPoint shapes); Visio does this automatically for you.
Step 2: The default format for text in Visio is Arial 8-point font. The most efficient way to format is to enter all of the text, then format all of the shapes at once. To do this, click on one of the shapes to select it. Hold down the Shift key, and click on the other shapes you wish to format.
• Creating a background:
Step1: From the menu on the left side of the screen, click on “Backgrounds”.
Step 2: Click on a design, drag it over your drawing, and drop it on your workspace.
• Modifying the color scheme:
Step 1: Right click on your workspace and select “Color Schemes”.
Step 2: The color scheme menu will open. Select a scheme from the menu and click Apply. Preview different color schemes, then select "OK" when you find one you like.
Step 3: Sometimes you need to change the color of one or two shapes for impact. To do this, select the shape you wish to change, then click on the paint bucket tool, located in the formatting toolbar. Select a color by clicking on it, and only the shape you selected will change.
• Visio drawing can be printed out just like any other Microsoft document. From the toolbar, select
File->Print.
Steps to insert the Visio drawings into other Microsoft Office documents:
• Visio drawings can also be inserted into other Microsoft Office documents such as PowerPoint or Word.
Step 1: From the Visio toolbar, select Edit->Copy drawing.
Step 2: Open your PowerPoint presentation or Word document, and position your cursor where you
would like to insert the Visio drawing.
Step 3: Select Edit -> Paste.
Step 4: To change your drawing, double-click on it (while still in PowerPoint or Word), and Visio will open within PowerPoint or Word for what is called in-place editing.
MS-PROJECT FOR CONTROLLING AND MANAGING PROJECT:
• Microsoft Project is a project management software program developed and sold by Microsoft.
• It is designed to help a project manager in project planning, assigning resources to tasks, tracking progress, managing the budget, and analyzing workload.
• Although it is part of the Microsoft Office family, it has never been included in any of the Office suites. It is currently available in two editions: Standard and Professional.
• Project creates budgets based on assignment work and the cost of resources. As resources are assigned to tasks and assignment work is estimated, the program calculates the cost (equal to the work times the rate), which rolls up to the task level, then to any summary tasks, and finally to the project level.
• Resources like people, equipment and materials can be shared between projects using a shared resource pool. Each resource maintains its own calendar, which defines what days and shifts a resource is available.
• Each resource can be assigned to multiple tasks in multiple plans, and each task can be assigned multiple resources; the application schedules tasks based on the resource availability as defined in the resource calendars.
• All resources can be defined in labels without limit. Therefore, it cannot determine how many finished products can be produced with a given amount of raw materials.
• This makes Microsoft Project unsuitable for solving problems of available-materials constrained production. Additional software is necessary to manage a complex facility that produces physical goods.
• The application creates critical path schedules; critical chain and event chain methodology third-party add-ons are also available. Schedules can be resource-leveled, and chains are visualized in a Gantt chart.
• Additionally, Microsoft Project can identify different classes of users. These different classes of users can have differing access levels to projects, views and other data.
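The cost roll-up described above (cost = work times rate, rolled up from tasks to summary tasks to the project) can be sketched in a few lines; the resource names, rates and hours below are hypothetical, not taken from Microsoft Project:

```python
# Cost roll-up as described above: cost = work (hours) x resource rate,
# summed from individual assignments up to summary tasks and then the
# project. Resource names, rates and hours are made up for illustration.
RATE = {"analyst": 50.0, "developer": 40.0}   # cost per hour

project = {   # summary task -> [(resource, assignment work in hours), ...]
    "Design": [("analyst", 20), ("developer", 10)],
    "Build":  [("developer", 60)],
}

def task_cost(assignments):
    """Cost of one summary task: sum of work times rate."""
    return sum(RATE[resource] * hours for resource, hours in assignments)

summary_costs = {name: task_cost(a) for name, a in project.items()}
project_cost = sum(summary_costs.values())
print(summary_costs)   # {'Design': 1400.0, 'Build': 2400.0}
print(project_cost)    # 3800.0
```

Changing a resource's rate or an assignment's hours automatically changes the task, summary and project totals, which is exactly the roll-up behavior the bullets describe.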
• Custom objects such as calendars, views, tables, filters and fields are stored in an enterprise global
which can be shared by all users.
What is UML ?
The Unified Modeling Language (UML) is a standard language for specifying, visualizing,
constructing, and documenting the artifacts of software systems, as well as for business modeling and
other non-software systems.
The UML represents a collection of best engineering practices that have proven successful in the
modeling of large and complex systems.
The UML is a very important part of developing object oriented software and the software
development process.
The UML uses mostly graphical notations to express the design of software projects.
Using the UML helps project teams communicate, explore potential designs, and validate the
architectural design of the software.
Goals of UML
1. Provide users with a ready-to-use, expressive visual modeling language so they can develop
and exchange meaningful models.
2. Provide extensibility and specialization mechanisms to extend the core concepts.
3. Be independent of particular programming languages and development processes.
4. Provide a formal basis for understanding the modeling language.
5. Encourage the growth of the OO tools market.
6. Support higher-level development concepts such as collaborations, frameworks, patterns and components.
7. Integrate best practices.
Why Use UML?
As the strategic value of software increases for many companies, the industry looks for techniques to
automate the production of software and to improve quality and reduce cost and time-to-market.
These techniques include component technology, visual programming, patterns and frameworks.
Businesses also seek techniques to manage the complexity of systems as they increase in scope and
scale.
In particular, they recognize the need to solve recurring architectural problems, such as physical
distribution, concurrency, replication, security, load balancing and fault tolerance.
Additionally, the development for the World Wide Web, while making some things simpler, has
exacerbated these architectural problems. The Unified Modeling Language (UML) was designed to
respond to these needs.
Class Diagrams
Class diagrams are widely used to describe the types of objects in a system and their relationships. Class diagrams model class structure and contents using design elements such as classes, packages and objects.
Class diagrams describe three different perspectives when designing a system: conceptual, specification, and implementation.
Classes are composed of three things: a name, attributes, and operations. Below is an example of a class.
Class diagrams also display relationships such as containment, inheritance, associations and others. Below is an example of an associative relationship:
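The three compartments of a class box (name, attributes, operations) map directly onto a class definition in code. The BankAccount class below is a hypothetical example invented for illustration, not one from the text:

```python
# A UML class box has three compartments: the class name, its
# attributes, and its operations. Each is marked below.
class BankAccount:                      # compartment 1: class name
    def __init__(self, owner):
        self.owner = owner              # compartment 2: attributes
        self.balance = 0.0

    def deposit(self, amount):          # compartment 3: operations
        self.balance += amount

    def withdraw(self, amount):
        if amount > self.balance:
            raise ValueError("insufficient funds")
        self.balance -= amount

acct = BankAccount("Asha")
acct.deposit(100.0)
acct.withdraw(30.0)
print(acct.balance)   # 70.0
```

An association in a class diagram would correspond to one class holding a reference to another, e.g. a Customer object holding a list of BankAccount objects.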
Example of Class Diagrams:
Use Case Diagrams
A use case is a set of scenarios that describe an interaction between a user and a system. A use case diagram displays the relationship among actors and use cases. The two main components of a use case diagram are use cases and actors.
[Figure: the actor symbol (a stick figure) and the use case symbol (an ellipse).]
An actor represents a user or another system that will interact with the system you are modeling. A use case is an external view of the system that represents some action the user might perform in order to complete a task.
Activity Diagrams
• An activity diagram shows the flow from activity to activity.
• An activity is an ongoing execution within the software. An activity ultimately results in some action, which is one type of computation result.
• An action may in turn call another operation, send a signal, or create or destroy an object.
• It gives the dynamic view of a system. It shows activity within a state.
The following symbols are used in activity diagrams.
How to Draw: Activity Diagrams
• Activity diagrams show the flow of activities through the system.
• Diagrams are read from top to bottom and have branches and forks to describe conditions and parallel activities.
• A fork is used when multiple activities are occurring at the same time.
• This indicates that both activity2 and activity3 are occurring at the same time. After activity2 there is a branch.
• The branch describes which activities will take place based on a set of conditions.
• All branches at some point are followed by a merge to indicate the end of the conditional behavior started by that branch.
• After the merge, all of the parallel activities must be combined by a join before transitioning into the final activity state.
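The fork/join behavior described above can be sketched with threads: after an initial activity, a fork runs activity2 and activity3 at the same time, and a join waits for both before the final activity state. The activity names follow the text's activity2/activity3 example; the rest is illustrative:

```python
import threading

log = []
lock = threading.Lock()

def activity(name):
    with lock:               # serialize appends from parallel activities
        log.append(name)

activity("activity1")        # initial activity

# Fork: activity2 and activity3 occur at the same time.
threads = [threading.Thread(target=activity, args=(name,))
           for name in ("activity2", "activity3")]
for t in threads:
    t.start()
# Join: wait for both parallel activities before moving on.
for t in threads:
    t.join()

activity("final")            # final activity state
print(log[0], "...", log[-1])   # activity1 ... final
```

Note that the order of activity2 and activity3 in the log is not fixed, which is exactly what "occurring at the same time" means in the diagram; only the join guarantees both are done before the final activity.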
Example of Activity Diagrams: