Software Testing: Manual Testing Overview
Agenda
• Software Development Life Cycle (SDLC)
• Definition of testing
• Principles of testing
• Testing techniques and types
• Software Testing Life Cycle (STLC)
• Defect Life Cycle (DLC)
SDLC
• A coherent set of activities for specifying, designing, implementing and testing software systems
• A structured set of activities required to develop a software system

SDLC models include:
• Waterfall model
• Spiral model
• V model
• V & V model
• Agile methodology
SDLC — Waterfall model
The standard waterfall model (also called the life cycle model for systems development) is an approach that goes through the following steps:
• Document the system concept
• Identify system requirements and analyze them
• Break the system into pieces (architectural design)
• Design each piece (detailed design)
• Code the system components and test them individually (coding, debugging and unit testing)
• Integrate the pieces and test the system (system testing)
• Deploy the system and operate it
Note: this model is useful when all the requirements for the system development are clear.
[Diagram: Waterfall model — Requirements → Design → Code & Unit Test → Test & Integration → Operation & Maintenance]
V & V Model
[Diagram: the V model — for each specification level on the left, tests are designed early and run later at the corresponding test level on the right]
• Business Requirements ↔ Acceptance Testing
• Project Specification ↔ Integration Testing in the Large
• System Specification ↔ System Testing
• Design Specification ↔ Integration Testing in the Small
• Code ↔ Component Testing
Agile Methodology
• "We are uncovering better ways of developing software by doing it and helping others do it. Through this work we have come to value:
– Individuals and interactions over processes and tools
– Working software over comprehensive documentation
– Customer collaboration over contract negotiation
– Responding to change over following a plan
That is, while there is value in the items on the right, we value the items on the left more."
Principles of Agile Software
• Partner with customers
• Work toward a shared vision
• Deliver incremental value
• Working software is the primary measure of progress
• Requirements evolve
• Embrace change
• Sustainable development
• Invest in quality and technical excellence
• Empower team members
• Interact with the business on a daily basis
• Establish clear accountability
• Learn from all experiences
• Foster open communications
Agile — Roles in an Agile Project
• The Customer (or Customer Proxy), who is responsible for defining the requirements and priorities and for accepting delivery of the completed User Stories.
• The Project Manager, who is responsible for delivering the completed system to the Customer.
• The Business Analyst (who often acts as a Customer Proxy), responsible for ensuring that the requirements are fully formed into proper User Stories with accompanying Acceptance Criteria.
• The Designer, responsible for ensuring a coherent technical design with appropriate levels of quality, performance, etc. that satisfies the Customer's requirements.
• The Developer, responsible for delivering software code that fulfils User Stories by meeting their Acceptance Criteria.
• The Tester, responsible for ensuring that the Acceptance Criteria tests are run and that they pass, and also for the overall quality of the system.
Agile — Additional Roles
• The Agile Coach, an experienced Agile practitioner responsible for helping the team adopt Agile practices. The Agile Coach may also take on the Iteration Manager role.
• The Agile Enablers, who work within the team in specific roles. Because Agile favors learning via interactions between individuals, teams benefit greatly if they are seeded with experienced Agile Enablers in areas such as the Developer, Business Analyst and Tester teams.
• The Build Master, usually a Developer with specific experience and skills in setting up and maintaining a Continuous Integration environment.
• The Iteration Manager, who takes on the inward-facing parts of the Project Manager role, specifically ensuring that the team is productive and that the iterative process runs smoothly.
• The Technical Lead, usually a senior Developer within the Developer team, responsible for maintaining the technical Common Vision. The Technical Lead works closely with the Designer both to communicate design goals to the team and to give the Designer feedback on implementation issues.
Agile Process Flow
[Diagram: Requirements & Planning → Development Iteration (2–4 weeks) → Deployment]
• Requirements & Planning: User Stories, Estimation, Release Planning, Iteration Zero
• Development Iteration: Iteration Planning, Iteration Kick-off, Daily Activities, Showcase / Iteration Demo, Iteration Retrospective
• Deployment: Continuous CAT (UAT), Production Deployment
• Supporting practices: Continuous Integration, Automated Testing, Tracking Development, Automated Builds, Automated Deployment, Daily Stand-ups, Iteration Tracking, Simple Design, Refactoring, Coding, Automated Unit Testing, Automated Functional Testing
Testing Defined
• Testing is the process of exercising software to verify that it satisfies specified requirements and to detect faults.
• The purpose of testing is to show that a program performs its intended functions correctly.
• Testing is the process of executing a program with the intent of finding errors.
• Software testing validates the behavior of a program, with a finite set of test cases, against the specified expected behavior.
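The last definition above — executing a program with a finite set of test cases and comparing the actual behaviour against the specified expected behaviour — can be sketched in a few lines. The function under test and its specification here are hypothetical examples, not from the slides:

```python
def absolute(x):
    """Spec: return x if x >= 0, otherwise -x."""
    return x if x >= 0 else -x

# A finite set of (input, expected output) test cases derived from the spec.
test_cases = [(5, 5), (-5, 5), (0, 0)]

def run_tests(fn, cases):
    """Execute fn on each case; return (input, expected, actual, passed) tuples."""
    results = []
    for arg, expected in cases:
        actual = fn(arg)
        results.append((arg, expected, actual, actual == expected))
    return results

results = run_tests(absolute, test_cases)
assert all(passed for (_, _, _, passed) in results)
```

A test "passes" exactly when the actual output matches the specified expected output for that input.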
Principles of Testing
• What do software faults cost?
• What is a bug?
• Error – fault – failure
• Reliability versus faults
• Why do faults occur in software?
• Why is testing necessary?
• Why not just test everything (complete testing)?
• The most important principle
What do software faults cost? (Cost of Quality)
• Cost of quality is the term used to quantify the total of the failure, appraisal and prevention costs associated with the production of software.
• The cost of quality will vary from one organization to the next.
• The goal is to optimize the production process to the extent that rework is eliminated and inspection is built into the production process.
• Applying the concepts of continuous testing to the systems development process can reduce the cost of quality.
What is a bug? Any deviation from requirements.
• Error: a human action that produces an incorrect result.
• Fault: a manifestation of an error in software; also known as a defect or bug. If executed, a fault may cause a failure.
• Failure: deviation of the software from its expected delivery or service.
• Software faults become software failures only when the exact computation conditions are met and the faulty portion of the code is executed on the CPU.
A failure is an event; a fault is a state of the software, caused by an error.
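The error → fault → failure chain can be made concrete with a small sketch. The discount rule and the off-by-one mistake below are hypothetical, chosen to show that a fault produces a failure only under its exact triggering conditions:

```python
# ERROR: the programmer wrote ">=" where the spec needs ">".
# That mistake leaves a FAULT (defect) in the code.
def discounted_total(total):
    """Spec: orders of MORE than 100 get a 10% discount (integer amounts)."""
    if total >= 100:          # FAULT: should be "total > 100"
        return total * 9 // 10
    return total

assert discounted_total(50) == 50    # faulty line never executed: no failure
assert discounted_total(200) == 180  # faulty line executed, result still correct

# The FAILURE appears only at the exact boundary input the fault mishandles:
assert discounted_total(100) == 90   # actual behaviour; the spec expected 100
```

Every input except 100 behaves to spec, which is why the fault can sit unnoticed until testing (or production) hits the one condition that exposes it.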
Error – Fault – Failure
A person makes an error… that creates a fault in the software… that can cause a failure in operation.
Reliability versus faults
Reliability: the probability that software will not cause the failure of the system for a specified time under specified conditions.
• Can a system be fault-free? (zero faults, right first time)
• Can a software system be reliable but still have faults?
• Is a "fault-free" software application always reliable?
Why do faults occur in software?
• Software is written by human beings, who know something but not everything, who have skills but aren't perfect, and who do make mistakes (errors).
• Developers are under increasing pressure to deliver to strict deadlines: there is no time to check, assumptions may be wrong, and systems may be incomplete.
• Faults also come from incomplete and misunderstood system requirements, errors in design, and poor test coverage.
• A good test case is one that has a high probability of finding an as-yet-undiscovered error.
So why is testing necessary?
• Because software is likely to have faults
• To learn about the reliability of the software
• To fill the time between delivery of the software and the release date
• To prove that the software has no faults
• Because testing is included in the project plan
• Because failures can be very expensive
• To avoid being sued by customers
• To stay in business
Why not just "test everything"? (complete testing)
Consider a system with 20 screens, an average of 4 menus with 3 options per menu, and an average of 10 fields per screen with 2 input types per field (a date as "Jan 3" or "3/1", a number as integer or decimal) and around 100 possible values per field.
Total for "exhaustive" testing: 20 × 4 × 3 × 10 × 2 × 100 = 480,000 tests.
At 1 second per test that is 8,000 minutes, or 133 hours, or 17.7 days (not counting finger trouble, faults or retests). At 10 seconds per test it is 34 weeks; at 1 minute, 4 years; at 10 minutes, 40 years.
So exhaustive testing is not possible.
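The slide's arithmetic can be checked with a short snippet:

```python
# Reproduce the exhaustive-testing estimate from the slide.
screens, menus, options = 20, 4, 3
fields, input_types, values = 10, 2, 100

total_tests = screens * menus * options * fields * input_types * values
assert total_tests == 480_000

seconds = total_tests            # at 1 second per test
minutes = seconds / 60           # 8000.0 minutes
hours = minutes / 60             # ~133.3 hours
print(total_tests, minutes, round(hours, 1))
```

Even this toy system is already out of reach at one test per second, which is the argument for selecting tests with techniques rather than enumerating everything.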
The most important principle of testing
Prioritise tests so that, whenever you stop testing, you have done the best testing in the time available.
Testing technique
• A procedure for selecting or designing tests
• Based on a structural or functional model of the software
• Successful at finding faults
• "Best" practice
• A way of deriving good test cases
• A way of objectively measuring a test effort

Static testing: inspections, walkthroughs and reviews; the inspection process; benefits of inspection; static analysis.
Dynamic testing: black box testing and white box testing.
Using techniques makes testing much more effective.
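As one concrete black-box technique, equivalence partitioning with boundary value analysis derives a small, systematic set of test cases instead of exhaustive input enumeration. The "valid age is 18–65" rule below is a hypothetical requirement used only for illustration:

```python
def is_valid_age(age):
    """Hypothetical requirement: an age is valid if it is between 18 and 65."""
    return 18 <= age <= 65

# Three equivalence partitions: below range, in range, above range.
# Boundary value analysis picks values at and adjacent to each boundary.
boundary_cases = {
    17: False,  # just below the lower boundary
    18: True,   # lower boundary
    19: True,   # just above the lower boundary
    64: True,   # just below the upper boundary
    65: True,   # upper boundary
    66: False,  # just above the upper boundary
}

for age, expected in boundary_cases.items():
    assert is_valid_age(age) == expected
```

Six well-chosen values exercise every partition and every boundary, which is where off-by-one faults (like `<` written for `<=`) tend to hide.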
Types of Testing — Unit Testing
Unit testing verifies the smallest piece of a program (a module) to determine whether the actual structure is correct and whether the function the code defines operates correctly and reliably (without crashing or hanging).
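A minimal unit test in Python's built-in unittest framework looks like this; the function under test (`word_count`) is a hypothetical stand-in for "the smallest piece of a program":

```python
import unittest

def word_count(text):
    """Return the number of whitespace-separated words in text."""
    return len(text.split())

class WordCountTest(unittest.TestCase):
    def test_simple_sentence(self):
        self.assertEqual(word_count("manual testing overview"), 3)

    def test_empty_string(self):
        self.assertEqual(word_count(""), 0)

    def test_extra_whitespace(self):
        self.assertEqual(word_count("  spaced   out  "), 2)

# Run the tests programmatically so the example is self-contained.
suite = unittest.defaultTestLoader.loadTestsFromTestCase(WordCountTest)
result = unittest.TextTestRunner(verbosity=0).run(suite)
assert result.wasSuccessful()
```

Each test exercises the unit in isolation, with no other modules integrated yet.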
Types of Testing — Smoke Testing
• Smoke testing (sometimes called a sanity test) is non-exhaustive software testing that ascertains that the most crucial functions of a program work, without bothering with finer details. The term comes to software testing from a similarly basic type of hardware testing, in which the device passed the test if it didn't catch fire the first time it was turned on. A daily build and smoke test is among the industry best practices advocated by the IEEE (Institute of Electrical and Electronics Engineers).
Who: usually done by the QA team.
When: before actual testing.
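The idea can be sketched as a short list of crucial checks run against each new build before deeper testing begins. The application functions here (`login`, `load_home_page`) are hypothetical stand-ins for real application calls:

```python
def login(user, password):
    """Hypothetical stand-in for the application's login call."""
    return user == "demo" and password == "demo"

def load_home_page():
    """Hypothetical stand-in returning an HTTP-like status code."""
    return 200

# Only the most crucial functions are checked -- no finer details.
smoke_checks = [
    ("user can log in", lambda: login("demo", "demo") is True),
    ("home page loads", lambda: load_home_page() == 200),
]

def run_smoke_suite(checks):
    """Return (passed, failed_names); any smoke failure rejects the build."""
    failed = [name for name, check in checks if not check()]
    return (len(failed) == 0, failed)

ok, failed = run_smoke_suite(smoke_checks)
assert ok, f"build rejected, smoke failures: {failed}"
```

If any smoke check fails, the build goes back to development rather than into full testing.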
Types of Testing — System Testing & Integration Testing
System test: as soon as an integrated set of modules has been combined to form the application, system testing can be performed. System testing verifies the system-level reliability and functionality of the product by testing the application in the integrated system.
Note: the QA team performs system testing; it is done by an independent test group.
Integration test: integration testing is used to test the reliability and functionality of groups of units (modules) that have been combined into larger segments. The most efficient method of integration is to slowly and progressively combine the separate modules into small segments, rather than merging all the units into one large component.
Note: test engineers perform integration testing in the development phase.
Types of Testing — Regression Testing
• Regression testing is the re-running of all tests after a fix, change or enhancement has been made to the code and a new build of the AUT (application under test) has been delivered to QA. Regression testing verifies that previously identified problems have been fixed and that changes to one part of the application have not introduced new problems elsewhere. Changing a line of code might cause a ripple effect that produces an unexpected result in another part of the application; if you do not re-run all of your test cases after the application has been changed, you cannot be certain about the quality of the entire system.
Who: usually done by the test engineers.
When: after changes have been incorporated for existing functionality.
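One common way to organize this is to keep a permanent test for every previously fixed defect and re-run the whole suite on each new build. The function and defect IDs below are hypothetical:

```python
def normalize_name(name):
    """Hypothetical function whose past defects are pinned by regression tests.

    Earlier builds crashed on empty input (BUG-101, fixed) and mishandled
    surrounding whitespace (BUG-117, fixed).
    """
    return name.strip().title() if name else ""

# Each fixed defect keeps a test, alongside the baseline behaviour,
# so a new change cannot silently re-break an old fix.
regression_suite = {
    "BUG-101 empty input no longer crashes": lambda: normalize_name("") == "",
    "BUG-117 whitespace is trimmed": lambda: normalize_name("  ada  ") == "Ada",
    "baseline behaviour still holds": lambda: normalize_name("grace hopper") == "Grace Hopper",
}

failures = [name for name, test in regression_suite.items() if not test()]
assert not failures, failures
```

Re-running the entire suite, not just the tests around the change, is what catches the ripple effects the slide describes.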
Types of Testing — Ad Hoc Testing
• Ad hoc testing is testing without a formal test plan, or outside of a test plan. On some projects this type of testing is carried out as an adjunct to formal testing. If carried out by a skilled tester, it can often find problems that are not caught in regular testing. Sometimes, if testing occurs very late in the development cycle, this will be the only kind of testing that can be performed. Ad hoc testing is sometimes referred to as exploratory testing.
Who: usually done by a skilled tester.
When: after normal testing.
Types of Testing — Compatibility Testing
• Compatibility testing checks how well one product works with another, for example whether they can efficiently share the same data files or reside in the same computer's memory simultaneously.
Who: usually done by the test engineer.
When: in the testing phase.
Types of Testing — Usability Testing
• Usability is a quality attribute that assesses how easy user interfaces are to use. "Usability testing" refers to testing done to improve ease of use during the design process.
Who: usually done by the testers, from the user's perspective.
When: before the actual release.
Types of Testing — Security Testing
• Security testing asks how easy it would be for an unauthorized user to gain access to the program. It covers testing of database and network software in order to keep company data and resources secure from mistaken or accidental users, hackers, and other malevolent attackers.
Who: usually done by a network/security expert.
When: before the actual release.
Types of Testing — User Acceptance Testing (UAT)
Acceptance test: the objective of acceptance testing is to verify that the application is fit for deployment. Acceptance testing may include verifying that the application is reliable, meets the business requirements, performs well, and has a consistent look and feel.
Note: acceptance testing is generally done by a QA person at the client location.
Types of Testing — Performance Testing
Performance testing is testing with the intent of determining how quickly a product handles a variety of events. Automated test tools geared specifically to test and fine-tune performance are used most often for this type of testing. Load, stress and volume testing come under this heading.
Who: a performance test automation expert.
When: conducted so as to meet the performance criteria stated by the client.
Software Testing Life Cycle (STLC)
• Plan Test → Test Plan
• Design Test → Test Cases
• Automate Tests → Test Automation Architecture, Automated Test Scripts
• Execute Test → Test Log
• Evaluate Test → Test Report
[Flow: Plan Test → Design Test → Automate Tests (if automation is chosen) → Execute Test → Evaluate Test; testing loops back until there is adherence to the exit criteria]
Workflow Detail: Plan Test
• Activity: the Test Lead creates the Test Plan (QATP).
• Inputs: DSRS, project approach, change requests.
• Entry criteria: project kickoff meeting held; DSRS approved.
• Exit criteria: reviewed, approved and baselined QATP.
Workflow Detail: Design Test
• Activity: the Test Engineer designs the test cases (QATC) and performs work load analysis.
• Inputs: architecture document, DSTD, QATP, project engineering guidelines, change requests.
• Entry criteria: baselined QATP, DSRS and DSTD.
• Exit criteria: all the requirements are mapped to test cases that meet the test objective; peer review comments are tracked to closure; the traceability matrix is updated.
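The traceability-matrix exit criterion — every requirement mapped to at least one test case — amounts to a simple coverage check. Requirement and test-case IDs below are hypothetical:

```python
requirements = ["REQ-1", "REQ-2", "REQ-3"]

# Traceability matrix: requirement -> test cases that cover it.
traceability = {
    "REQ-1": ["TC-01", "TC-02"],
    "REQ-2": ["TC-03"],
    "REQ-3": ["TC-04", "TC-05"],
}

# The exit criterion fails if any requirement has no covering test case.
uncovered = [r for r in requirements if not traceability.get(r)]
assert not uncovered, f"requirements without test cases: {uncovered}"
```

The same matrix read in reverse also answers the impact question during regression: which test cases must re-run when a requirement changes.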
Workflow Detail: Automate Test
• Activity: the Test Lead creates the test automation architecture (guided by the test automation architecture guidelines and the QATP); the Test Engineer then creates the automated test scripts from the QATC, the build, and the test automation architecture, producing the automated test suite.
• Create Test Automation Architecture — entry criteria: all test cases that are to be automated are identified; existing test scripts (if any) are validated. Exit criteria: the test suite covers all the requirements; the structure of the test suite is completed and approved.
• Create Automated Test Scripts — entry criteria: signed-off / baselined test automation architecture; frozen test suite; build ready. Exit criteria: the planned test scripts are developed; the stability of the scripts used for automation is checked.
Workflow Detail: Execute Test
• Activity: the Test Engineer executes the test suite (QATC) against the build, producing the test log.
• Entry criteria: unit testing completed by the engineering team; baselined QATP and QATC; software build ready for testing; test data available.
• Exit criteria: execution of test cases and test scripts is complete; test results are captured.
Workflow Detail: Evaluate Test
• Activity: the Test Lead evaluates the test results, using the test suite, test log, QATC and QATP, and produces the test report.
• Entry criteria: system testing completed; results logged in the test log; test log reviewed by peers/lead.
• Exit criteria: test report generated and distributed to stakeholders.
Roles and Responsibilities for the Software Test Process

Activity | Artifact | Owner | Reviewer | Approver
Create Test Plan | QATP | Test Lead | Test Analyst, PM, Test Team | PM, Client
Design Test Cases | QATC | Test Engineer | Test Lead, Test Team | Test Lead, PM, Client
Create Test Automation Architecture | Test Automation Architecture, Automated Test Suite | Test Lead | Test Analyst | PM
Create Automated Test Scripts | Test Scripts | Test Engineer | Test Lead, Test Team | Test Lead, PM, Client
Execute Test | Test Log | Test Engineer | Test Lead, Test Team | Test Lead, PM, Client
Evaluate Test | Test Report | Test Lead | Test Analyst | PM, Client
Defect Life Cycle (DLC)
• The tester first identifies the defect.
• The tester characterizes the defect based on its severity and priority.
• The defect is reported to the development team.
• Once the bug is fixed by the developer, the tester retests the same test case for validation.
• The tester also runs the related test cases to check for any adverse effects.
Defect Life Cycle (DLC) — statuses
New → Open → In Progress → Fixed → Verified Fixed → Closed
Other possible statuses: Deferred, Cannot Reproduce, Documented, Is Duplicate, As Designed
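The life cycle above can be modeled as a small state machine. The exact set of legal transitions varies by organization and tool, so the table below is one plausible reading of the slide, not a standard:

```python
# Defect life cycle as a transition table: status -> set of legal next statuses.
# Assumed transitions -- adjust to your organization's actual workflow.
TRANSITIONS = {
    "New": {"Open", "Deferred", "Cannot Reproduce", "Documented",
            "Is Duplicate", "As Designed"},
    "Open": {"In Progress", "Deferred"},
    "In Progress": {"Fixed"},
    "Fixed": {"Verified Fixed", "Open"},       # reopened if the retest fails
    "Verified Fixed": {"Closed"},
    "Deferred": {"Open"},
    "Cannot Reproduce": {"Open", "Closed"},
    "Documented": {"Closed"},
    "Is Duplicate": {"Closed"},
    "As Designed": {"Closed"},
    "Closed": set(),                            # terminal state
}

def move(status, new_status):
    """Validate a defect status change against the life cycle table."""
    if new_status not in TRANSITIONS[status]:
        raise ValueError(f"illegal transition {status} -> {new_status}")
    return new_status

# Walk a defect along the happy path from New to Closed.
s = "New"
for nxt in ("Open", "In Progress", "Fixed", "Verified Fixed", "Closed"):
    s = move(s, nxt)
assert s == "Closed"
```

Encoding the transitions makes the tracker reject nonsense moves (e.g. reopening a closed defect directly) instead of relying on convention.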
DLC — Reasons for a defect
• Defects may occur because of the following reasons:
– Time pressure
– Code complexity
– Change in technologies
– Change in requirements
– Programming errors
– Miscommunication
– Wrong interpretation
– Unavailability of software resources
– Lack of human resources
Thank You
Questions?