Test Factor Test Technique Matrix


Post on 10-Apr-2015


Page 1: Test Factor Test Technique Matrix

Siddiq 1 SOFTWARE QUALITIES

1. Meet Customer Requirements (MCR)
2. Meet Customer Expectation (MCE)
3. Cost of Purchase (CP)
4. Time to Release (TR)

Meet Customer Requirements and Meet Customer Expectation are called technical qualities; Cost of Purchase and Time to Release are called non-technical qualities.

Definitions:
1. Meet Customer Requirements (MCR): in terms of functionality.
2. Meet Customer Expectation (MCE): in terms of performance/usability/capability etc.
3. Cost of Purchase (CP): borne by the customer.
4. Time to Release (TR): taken by the development organization.

SOFTWARE DEVELOPMENT PROCESS

REQUIREMENT GATHERING

ANALYSIS AND PLANNING

DESIGN

CODING

TESTING

RELEASE & MAINTENANCE

SOFTWARE DEVELOPMENT PROCESS MODELS

1. WATERFALL MODEL: used when customer requirements are clear and constant.
2. PROTOTYPE MODEL: used when customer requirements are ambiguous; the S/W organization first develops a sample model, which guides the development of the real S/W.
3. SPIRAL MODEL: used when the requirements are enhancing (growing version by version).
4. AGILE MODEL: used when the customer requirements change suddenly.
Note 1: All S/W development models are derived from the waterfall (linear sequential) model.
Note 2: All of the above S/W development models maintain a single stage of testing, and that too by the same development people.

SOFTWARE QUALITY ASSURANCE & SOFTWARE QUALITY CONTROL

SQA: Monitoring and measuring the strength of the development process is called SQA.

SQC: Validation of the S/W product with respect to customer requirements and expectations is called SQC.

BRS: The BRS defines the requirements of the customer to be developed as new software. This document is also known as CRS/URS.


SRS: The software requirements specification defines the functional requirements to be developed and the system requirements to be used.

REVIEW: Determining the completeness and correctness of documents by responsible people, through walkthrough, inspection, and peer review, is called REVIEW.
Walkthrough: studying a document from first line to last line.
Inspection: searching a document for a specific factor (defect).
Peer review: comparing one document with another document, point by point and word by word.

HLD: Designing the overall architecture of the system, from the root module to the leaf modules. Ex: login, ATM, chat. The HLD is also known as the architectural design or external design.

LLD: The LLD defines the internal architecture of a corresponding module (or) functionality. LLDs are also known as internal design documents.

PROTOTYPE: A sample model of the software is called a prototype. It consists of interfaces (screens) without functionality.

PROGRAM: A set of executable statements for performing a process and displaying outputs.

MODULE (OR) UNIT: a combination of programs in the S/W.

WHITE BOX OR GLASS BOX OR OPEN BOX TESTING TECHNIQUES: These are program-based testing techniques, also known as glass box or open box testing. Responsible people use these techniques to verify the internal structure of the corresponding program.

BLACK BOX TESTING: A system-level testing technique. Responsible people use this technique to validate external functionality.
BUILD: An executable (.exe) form of the system is called a build, i.e. the finally integrated set of all modules.
V-MODEL: V stands for verification and validation. This model defines a conceptual mapping between development stages and testing stages.

The V-model consists of multiple stages of the development process, each embedded with multiple stages of the testing process. Following this model, most organizations maintain a separate testing team only for the system testing phase, because this stage is the bottleneck stage of software development. After system testing, the organization plans to release the S/W to the customer site.

[V-model diagram: development stages 1 to 6 with review/testing stages 2A, 3A, 4A, system testing stage 5, and stage 7 for testing S/W changes]


1. Requirement gathering
2. Analysis; 2A. Review
3. Design; 3A. Review
4. Coding; 4A. White box testing
5. System testing (or) black box testing
6. Maintenance
7. Test S/W changes

1, 2, 2A, 3, 3A: Verification (or) SQA
4, 4A, 5, 6, 7: Validation (or) SQC

I. REVIEW IN ANALYSIS

Generally the software development process starts with requirements gathering and analysis. In this phase business analysts develop the BRS/SRS documents. For completeness and correctness of the documents, the same business analysts conduct a review. In the review, the points below are considered.

BRS & SRS:
- Are they right requirements?
- Are they complete requirements?
- Are they achievable requirements?
- Are they reasonable requirements?
- Are they testable requirements?

II. REVIEWS IN DESIGN

After completion of analysis & review, designers develop the HLD & LLDs. To verify the completeness and correctness of those documents, the same designers conduct a review meeting. In this review, they concentrate on the factors below.

HLD & LLDs:
- Are they understandable designs?
- Are they designing the right requirements?
- Are they designing the complete requirements?
- Are they followable designs?
- Are they handling errors?

III. UNIT TESTING

1. Basis Path Testing: The programmer verifies whether the program is running or not. In basis path testing, programmers follow the procedure below to test a complete program:
- Draw a flow diagram for that program.
- Calculate the number of independent paths in that program (cyclomatic complexity).
- Run that program more than one time, to cover all independent paths in that program.

2. Control Structure Testing: Programmers concentrate on the correctness and completeness of the corresponding program's outputs. They check every statement, including 'if' conditions, loops, and memory allocations.
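The basis path procedure above can be sketched in Python. The `grade` function below is a hypothetical example, not from the original notes; its flow graph has two decision nodes, so the cyclomatic complexity is 2 + 1 = 3, and three runs cover all independent paths:

```python
# A small program under test (hypothetical example).
# Two decision nodes -> cyclomatic complexity = 2 + 1 = 3,
# so 3 independent paths must each be executed at least once.
def grade(score):
    if score < 0 or score > 100:   # path 1: invalid input
        return "invalid"
    if score >= 60:                # path 2: passing score
        return "pass"
    return "fail"                  # path 3: failing score

# Basis path testing: run the program once per independent path.
assert grade(150) == "invalid"   # covers path 1
assert grade(75) == "pass"       # covers path 2
assert grade(40) == "fail"       # covers path 3
```

Running all three assertions exercises every independent path, which is exactly what the third step of the procedure asks for.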


3. Program Technique Testing: Programmers verify the execution speed of the corresponding program. In this testing, programmers take the help of monitors and profilers. If the program speed is not good, programmers make changes in the structure of the program without disturbing its functionality.

4. Mutation Testing: Mutation means a change in a program. Programmers make deliberate changes in a program and run the same tests separately on the original and on each changed copy. If a test still passes on a changed program, the test set is incomplete. In this way programmers verify the completeness and correctness of the tests on the program.

TESTING TERMINOLOGY

1. Testing Strategy: A document that defines the required testing approach to be followed by the testing people.

2. Test Plan: A document that provides work allocation in terms of schedule.

3. Test Case: Defines a test condition to validate a functionality in terms of completeness and correctness.

4. Test Log: Defines the result of a test case, in terms of passed (or) failed, after executing it on the application build.

5. Error, Defect (Issue) & Bug: a) A mistake in the coding is called an ERROR. b) That mistake, found by a test engineer during testing, is called a DEFECT/ISSUE. c) A defect/issue reviewed & accepted by the development team for resolution is called a BUG.

6. Re-Testing: Also known as data-driven (or) iterative testing. Test engineers repeat the same test on the same application build with multiple input values. This type of test repetition is called re-testing.

7. Regression Testing: The re-execution of scheduled test cases on a modified build, to ensure that a bug fix works without any side effects, is called regression testing.
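The data-driven re-testing idea in point 6 can be sketched in Python (the `absolute` function and its test data are assumed examples):

```python
# Hypothetical function under test: absolute value.
def absolute(x):
    return -x if x < 0 else x

# Re-testing: the same test repeated on the same build with
# multiple input values (data-driven / iterative testing).
test_data = [(5, 5), (-5, 5), (0, 0), (-1.5, 1.5)]
for value, expected in test_data:
    assert absolute(value) == expected
```

The test logic never changes; only the input values do, which is what distinguishes re-testing from regression testing on a modified build.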


System testing process:

The S/W development process along with the S/W testing process:

- Requirement gathering (BRS)
- Analysis & project planning (SRS & project plan)
- Design & review (HLD, LLD): test initiation
- Coding & unit testing (white box): test planning
- Integration testing: test design
- Initial build: test execution (with test cases) and test reporting
- User acceptance test
- Sign off

The testing process itself runs through these stages:

1. Test initiation
2. Test planning
3. Test design
4. Test execution
5. Test closure
6. Test reporting


SYSTEM TEST INITIATION

Generally, every organization's system testing process starts with test initiation. In this phase the project manager (or) test manager prepares the test strategy document. This document defines the required testing approach to be followed by the testing team.

A) Components of the test strategy:

1. Scope & objective: the importance of testing & its milestones.

2. Business issues: cost allocation between the development process and the testing process.

3. Test approach: the list of test factors (or test issues) to be applied by the testing team on the corresponding S/W build. This selection depends on the requirements of that S/W build, the scope of those requirements, and the risks involved in our project testing.

4. Roles & responsibilities: the names of the jobs in the testing team & their responsibilities.

5. Communication & status reporting: the required negotiation between every two consecutive jobs in our testing team.

6. Test automation and testing tools: the purpose of automation & the tools available in our organization.

7. Defect reporting & tracking: the required negotiation between the testing team & the development team to review and resolve defects during testing.

8. Testing measurements and metrics: to estimate quality, capability & status, the testing team uses a set of measurements & metrics.

9. Risks & assumptions: the expected list of problems and the solutions to overcome them.

10. Change & configuration management.

11. Training plan: the required number of training sessions for the testing team to understand customer requirements (or) business logic.


12. Test deliverables: the names of the test documents to be prepared by the testing team during testing. Ex: test plans, test cases, test logs, and defect reports.

13. Test factors & testing issues: to define quality S/W, the testing team uses 15 issues (or) topics. All of these topics are not mandatory in every project.

I. Authorization: validity of users.

II. Access control: permissions of users to use specific functionality.

III. Audit trail: the correctness of metadata (data about data).

IV. Continuity of processing: integration of processing.

V. Data integrity: correctness of I/P values. Ex: compose mail (in terms of size & time).

VI. Correctness of O/P values and manipulations.

VII. Coupling: co-existing with other S/W to share common resources.

VIII. Ease of use: user-friendly screens.

IX. Ease of operation: installation, uninstallation, dumping, downloading, uploading.

X. Portability: running on different platforms.

XI. Performance: speed of processing.

XII. Reliability: recovery from abnormal situations.

XIII. Service levels: the order of functionality or services to be given to customer-side people.

XIV. Maintainable: whether our S/W is long-time serviceable to customer-site people (or) not.

XV. Methodology: whether our testing team is following the predefined testing approach properly (or) not.

TEST PLAN

Test factors Vs Test Technique

Test factors indicate testing issues (or) topics. To apply these topics, our project testing team follows the set of testing techniques mapped below.

1. Authorization: security testing

2. Access control: security testing

3. Audit trail: functionality testing

4. Continuity of processing: integration testing (by programmers)

5. Data integrity: I/P domain testing

6. Correctness of O/P values and manipulation: functionality testing

7. Coupling: inter-system testing

8. Ease of use: user interface testing / manual support testing


9. Ease of operation: installation testing

10. Portability: compatibility / configuration testing

11. Performance: load/stress/data volume testing

12. Reliability: recovery/stress testing

13. Service levels: S/W changes / regression testing (by the CCB)

14. Maintainable: compliance testing

15. Methodology: compliance testing

Compliance testing checks whether the project team is following our organization's standards or not.

Case study:

15 test factors; -4 (outside the project requirements) → 11
11; +1 (scope of the requirements) → 12
12; -3 (risks in project testing) → 9
9 finalized test factors/issues.

In the above example the project manager / test manager finalized 9 testing topics (or issues) to be applied by the testing team on the S/W build.


IV. INTEGRATION TESTING

After completion of program development and unit testing, programmers connect the programs to form a complete software build. During this integration of programs, programmers verify the interfaces between every two programs or modules. There are four approaches to integrate modules:

1. Top-Down Approach: In this model programmers interconnect the main module to sub modules. In place of under-construction sub modules, programmers use temporary programs called stubs (or) called programs.

2. Bottom-Up Approach: In this approach programmers interconnect sub modules without using the under-construction main module. In place of that main module, programmers use a temporary program called a driver (or) calling program.

3. Hybrid Approach: In this approach we combine the top-down and bottom-up approaches. It is also known as the sandwich approach.


[Diagrams: top-down integration (Main connected to Sub 1 and Sub 2, with a Stub in place of an unfinished sub module); bottom-up integration (a Driver calling Sub 1 and Sub 2 in place of the unfinished Main); hybrid integration (a Driver above Main and a Stub below Sub 3, combining both approaches)]


4. Big-Bang Approach: In this model programmers interconnect the programs only after completion of the complete coding.
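The stub and driver ideas from the integration approaches above can be sketched in Python (the module names here are illustrative assumptions, not from the notes):

```python
# Top-down: the real main module calls a stub standing in for an
# under-construction sub module.
def interest_stub(amount):
    return 0.0                      # stub: temporary "called" program

def main_module(amount, interest=interest_stub):
    return amount + interest(amount)

assert main_module(100.0) == 100.0  # interface main -> sub is exercised

# Bottom-up: a driver (temporary "calling" program) exercises a real
# sub module while the main module is still under construction.
def interest_real(amount):
    return amount * 0.05

def driver():
    return interest_real(200.0)

assert driver() == 10.0
```

In the hybrid (sandwich) approach both tricks are used at once: drivers above the middle layer, stubs below it.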

V. SYSTEM TESTING

After completion of integration testing, the development people release the S/W build to the separate testing team. This separate testing team validates the S/W build with respect to customer requirements. In this level of testing the separate testing team uses black-box testing techniques. These techniques are classified into three categories:

1. Usability / Accessibility Testing
2. Functional Testing
3. Non-Functional Testing

1. Usability / Accessibility Testing: Generally, system test execution starts with usability testing. During this test, test engineers validate the user-friendliness of every screen in the application build.

Usability testing consists of two sub-techniques:

A) User-Interface Testing:

During this test, test engineers apply the below three factors to every screen of our application window:

1. Ease of use: understandable screens.
2. Look & feel: attractive screens.
3. Speed of interface: fewer events to complete a task. Ex: short navigation.

B) Manual Support Testing: During this test, test engineers study the help of the application build to estimate its context sensitiveness. Generally, the company's technical writers develop user manuals before the S/W is released to the customer. For this reason, manual support testing comes into the picture at the end of system testing.

USABILITY TESTING order: receive S/W from developers → UI testing → functional & non-functional testing → manual support testing.

2. Functional Testing: This is the mandatory testing level in system testing. During functional testing, test engineers concentrate on meeting customer requirements. Functional testing is classified into two sub-techniques.


A) Functionality Testing covers:
1) GUI (behavioral) coverage
2) Error-handling coverage
3) I/P domain coverage
4) Manipulation (O/P) coverage
5) Back-end coverage
6) Order-of-functionality coverage

During this test, test engineers verify whether every functionality is working correctly or not. In this testing, test engineers construct the coverages below.

GUI coverage: checking the changing properties of screen objects.
Error-handling coverage: verifying the prevention of wrong operations.
I/P domain coverage: verifying the size and type of every I/P object's values.
Manipulation coverage: verifying the correctness of O/P values.
Back-end coverage: verifying the impact of front-end operations on back-end tables.
Order-of-functionality coverage: verifying the order of functionalities.

B) Sanitation Testing: Also known as garbage testing. During this test, test engineers find extra functionality in the application build with respect to customer requirements.

3. Non-Functional Testing: During non-functional testing, the testing team concentrates on the extra characteristics of the S/W build expected by the customer.

A) Compatibility Testing: Also known as portability testing. During this testing, test engineers validate whether the application build runs on the customer-expected platforms or not.

Platform means O/S, compilers, browsers and other system S/W.

B) Configuration Testing: Also known as H/W compatibility testing. During this test, test engineers run the application build with various H/W devices to estimate H/W compatibility.

Example: different printers, different networks, different topologies.

C) Recovery Testing: Also known as reliability testing. During this test, test engineers validate whether the application build changes from abnormal status back to normal status or not.

D) Inter-System Testing: Also known as interoperability (or) end-to-end testing. During this test, test engineers validate whether our application co-exists with other S/W applications to share common resources.

E) Security Testing: Also known as penetration testing. During this test, test engineers validate the below three factors:

1. Authorization
2. Access control
3. Encryption/decryption

In authorization testing, test engineers validate whether the application accepts valid users and rejects invalid users or not. In access control testing, test engineers validate the permissions of users to utilize the application.


In encryption/decryption testing, test engineers try to trace the cipher text back to the original text.

In security testing, the authorization and access control tests are reasonable for application test engineers, but encryption/decryption testing is conducted by separate security testing people.
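The encryption/decryption check can be illustrated with a toy Caesar cipher in Python (this cipher is only a stand-in for whatever encryption the product really uses; the point is the two checks at the end):

```python
# Toy Caesar cipher over lower-case letters, used only to illustrate
# the En/De test: cipher text must differ from the plain text, and
# decryption must restore the original message.
def encrypt(text, shift=3):
    return "".join(chr((ord(c) - 97 + shift) % 26 + 97) for c in text)

def decrypt(cipher, shift=3):
    return encrypt(cipher, -shift)

plain = "transfer"
cipher = encrypt(plain)
assert cipher != plain           # cipher text is not trivially readable
assert decrypt(cipher) == plain  # round trip restores the original text
```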

F) Data Volume Testing: Also known as storage testing or memory testing. During this test, test engineers find the peak limit of data handled by our application build. Example: an MS-Access database supports a maximum of 2 GB.

G) Load Testing: Load means the number of concurrent users accessing the application at the same time. The execution of the application on the customer-expected configuration under the customer-expected load is called load testing. Ex: websites such as Yahoo and G-mail.

H) Stress Testing: Estimating the performance of the application on the customer-expected configuration at various load levels, to judge its stability, is called stress testing.

I) Installation Testing: During this test, the testing team installs the S/W build and the supporting S/W needed to run our application build, as at the customer site.

J) Parallel Testing: Also known as comparative (or) competitive testing. During this test, test engineers compare the S/W with old versions of the same S/W and with other companies' similar products to estimate competitiveness. This testing is applicable to S/W products only.

VI) USER ACCEPTANCE TESTING

After completion of system testing, the project management concentrates on user acceptance testing to collect feedback from real customers or model customers. There are two ways to conduct user acceptance testing:

α ALPHA TESTING: by real customers, at the development site, for S/W applications.
β BETA TESTING: by model customers, at model customer sites, for S/W products.

After completion of user acceptance testing, project management concentrates on the S/W release.

VII) MAINTENANCE (OR) SUPPORT TESTING

Project management forms a release team with developers, test engineers and H/W engineers. This release team conducts port testing (or) deployment testing (or) release testing. During this test the release team observes the factors below.

- Complete installation
- Overall functionality
- I/P device handling
- O/P device handling
- Secondary storage device handling
- O/S error handling
- Co-existence with other S/W applications


After completion of the above observations, the release team gives training to customer-site people on the software and comes back. During utilization of the S/W, customer-side people send change requests to the organization. A responsible team in the organization handles these changes; this team is also known as the Change Control Board (CCB).

CHANGE REQUEST handling:
- Enhancement: impact analysis → perform S/W changes → test S/W changes.
- Missed defect: impact analysis → perform S/W changes → test S/W changes → improve test efficiency.

TESTING CASE STUDY

Testing phase | Testing technique | Responsibility
Testing in analysis | Review | Business analysts
Testing in design | Review / prototype | Designers / architects
Unit testing | White box testing techniques | Programmers
Integration testing | Top-down / bottom-up / hybrid / big-bang | Programmers
System testing | Black box testing | Test engineers
User acceptance testing | Alpha / beta testing | Real / model customers
Port testing | Complete installation, overall functionality | Release team
Test S/W changes in maintenance | Regression testing | CCB (Change Control Board)

Planned Testing Versus Ad-Hoc Testing

Generally every testing process is planned, to conduct complete system testing with respect to the project requirements. Sometimes the testing team may not be able to conduct complete planned testing due to risks or challenges.

Ex: lack of time, lack of knowledge, lack of resources, lack of documents, etc.

AD-HOC TESTING: Due to the above risks, the testing team follows informal testing methods.

A) Monkey Testing: In this style of testing, testing people concentrate only on the main activities of the S/W, due to lack of time for testing. This style of testing is also known as chimpanzee (or) random testing.

B) Buddy Testing: In this style of testing, test engineers are grouped with developers, due to lack of time for testing. A buddy means a group of one programmer & one tester.


C) Exploratory Testing: Generally the testing team conducts system testing depending on the functional and system requirements in the SRS. If the SRS is incomplete, then test engineers depend on past experience, discussions with others, similar projects, browsing etc. to collect the complete requirements. This style of testing is called exploratory testing.

D) Pair Testing: In this style of testing, junior test engineers are grouped with senior test engineers to share testing knowledge. This style of testing is called pair testing.

E) De-bugging: The development people deliberately add bugs to the coding and release the build to the testing team. This is useful to estimate the efficiency of the testing team. It is also known as defect seeding/feeding.

TEST PLANNING

After finalization of the test strategy, the test lead develops the test plan. In this stage the test lead develops the system test plan & divides that plan into module test plans. The plan defines 'what to test', 'how to test', 'when to test', and 'who will test'.

To develop this plan, the test lead follows the process below:

Project plan → testing team formation → identify tactical risks → prepare test plan → review test plan.
Inputs: development documents (SRS) and the test strategy.

A) Testing Team Formation: Generally the test planning task starts with testing team formation. In this stage the test lead depends on the factors below:

- Project size
- Lines of code (or) functionalities
- Availability of test engineers
- Test time/duration
- Test environment

Case study (system testing duration):
- Client/server, website, ERP projects: 3 to 5 months of system testing
- System S/W (networking, mobile): 7 to 9 months of system testing
- Machine-critical S/W (SAT): 12 to 15 months of system testing

B) Identify Tactical Risks: After completion of testing team formation, the test lead concentrates on risk analysis & exceptions for the formed testing team. Example:

Risk 1: Lack of knowledge of test engineers in that project domain.
Risk 2: Lack of time.
Risk 3: Lack of documents.


Risk 4: Delays in delivery.
Risk 5: Lack of rigor in the development process.
Risk 6: Lack of resources.
Risk 7: Lack of communication.

C) Prepare Test Plan: After completion of team formation & risk analysis, the test lead prepares the test plan document. In this stage the test lead follows the IEEE 829 test plan document format.

Format: 1. Test plan ID: the title of the test plan document, for future reference.

2. Introduction: About project.

3. Test items: the list of modules in the project.

4. Features to be tested: the list of modules (or) functions to be tested.

5. Features not to be tested: the list of modules which were already tested in previous versions.

6. Approach: the list of testing techniques to be applied on the modules (from the test strategy).

7. Test deliverables: the required testing documents to be prepared by the testing team.

8. Test environment: the required H/W and S/W to test the modules.

9. Entry criteria: test engineers are able to start test execution after:
a. test cases are developed & reviewed;
b. the test environment is established;
c. the S/W build is received from development.

10. Suspension criteria: sometimes test engineers stop test execution, due to:
a. the testing environment not working;
b. too many reported defects pending at the development side (quality gap).

11. Exit criteria: defines the exit point of the test execution process:
a. all requirements are tested;
b. all major bugs are resolved;
c. the final build is stable with respect to customer requirements.

What to test: items 3 to 5
How to test: items 6 to 11
Who will test: items 12 to 13
When to test: item 14

12. Staff & training needs: the selected test engineers' names & the required number of training sessions.

13. Responsibilities: the mapping between the names of test engineers & their requirements.

14. Schedule: the dates and times of the project tasks.

15. Risks & assumptions: the list of expected risks and the assumptions/solutions to overcome them.

16. Approvals: the signatures of the project manager (or) test manager and the test lead.


D) Review Test Plan: After completion of the test plan document preparation, the test lead conducts a review meeting to estimate the completeness and correctness of the document. The testing team members of that project are also involved in this review.

In the review meeting the testing people depend on the factors below:
a. requirements-based coverage (what to test);
b. testing-technique-based coverage (how to test);
c. risk-based coverage (who will test and when to test).

TEST DESIGN

After completion of test planning, the selected test engineers concentrate on test design, test execution, and test reporting. Generally the selected test engineers design test cases for every project. In test design, every test engineer studies all the requirements of the project and prepares test cases for the selected requirements only, with respect to the test plan. Test engineers use three types of test case design methods to prepare test cases for their responsible requirements:

1. Functional and system specification based test case design.
2. Use-case based test case design.
3. Application build based test case design.

TEST CASE DEFINITION:

Every test case defines a unique test condition; every test case is self-standing and clear. To ensure understandability, test engineers start every test case with the word 'verify' or 'check'. Every test case is traceable to requirements.

Functional and System Specification Based Test Case Design

[Diagram: BRS → SRS (functional & system requirements) → HLD & LLD → coding → .exe build; test cases are prepared from the SRS]

As the diagram above shows, test engineers prepare the maximum number of test cases depending on the functional and system requirements in the SRS. To write these test cases, test engineers follow the approach below.

Step 1: Collect the functional & system specifications (requirements).
Step 2: Select one specification from that list.
  2.1: Identify the entry point.
  2.2: Identify the I/P required.
  2.3: Study the normal flow.
  2.4: Identify the O/P & outcomes.
  2.5: Identify the exit point.
  2.6: Identify alternative flows & exceptions (rules).
Step 3: Prepare test case titles (or test scenarios).


Step 4: Review the test case titles for completeness & correctness.
Step 5: Prepare a complete document for every test case title.
Step 6: Go to Step 2 until all specifications are covered by test cases.

Functional specification 1

A login process involves user ID & password authorization. The user ID accepts alphanumerics in lower case, 4 to 16 characters long. The password accepts alphabets in lower case, 4 to 8 characters long.

Prepare test case titles (or) scenarios.

Test case 1: Verify user ID value.

Boundary value analysis, BVA (size):
  min = 4 chars: pass
  min-1 = 3 chars: fail
  min+1 = 5 chars: pass
  max = 16 chars: pass
  max-1 = 15 chars: pass
  max+1 = 17 chars: fail

Equivalence class partitioning, ECP (type):
  Valid: a-z, 0-9
  Invalid: A-Z, special characters, blank field

Test case 2: Verify password value.

BVA (size):
  min = 4 chars: pass
  min-1 = 3 chars: fail
  min+1 = 5 chars: pass
  max = 8 chars: pass
  max-1 = 7 chars: pass
  max+1 = 9 chars: fail

ECP (type):
  Valid: a-z
  Invalid: A-Z, 0-9, special characters, blank field

Test case 3: Verify login operation.

Decision table:
  User ID | Password | Criteria
  valid   | valid    | pass
  valid   | invalid  | fail
  invalid | valid    | fail
  valid   | blank    | fail (field validation)
  blank   | valid    | fail (field validation)
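The BVA, ECP and decision-table rules for this login can be turned into an executable sketch in Python. The validators below are hypothetical implementations of the stated rules (lower-case alphanumeric user ID of 4-16 characters, lower-case alphabetic password of 4-8 characters):

```python
import re

# Hypothetical validators for the rules in specification 1.
def valid_user_id(user_id):
    return re.fullmatch(r"[a-z0-9]{4,16}", user_id) is not None

def valid_password(password):
    return re.fullmatch(r"[a-z]{4,8}", password) is not None

def login(user_id, password):
    return valid_user_id(user_id) and valid_password(password)

# BVA on user ID size: min-1 fails, min passes, max passes, max+1 fails.
assert not valid_user_id("abc")        # 3 chars: fail
assert valid_user_id("abcd")           # 4 chars: pass
assert valid_user_id("a" * 16)         # 16 chars: pass
assert not valid_user_id("a" * 17)     # 17 chars: fail

# Decision table: only valid/valid passes.
assert login("user1", "secret")
assert not login("user1", "BAD!")      # invalid password type
assert not login("", "secret")         # blank user ID
```

Each assertion corresponds to one row of the BVA or decision table above, so the tables double as an executable checklist.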


Functional specification 2

An insurance application offers different types of policies. When a user selects a type, the insurance system asks for the age of that user. The age value should be greater than 16 years and less than 80 years. Prepare test case titles (or) scenarios.

TC1: Verify the selection of insurance type.
TC2: Verify focus on age when the user selects an insurance type.
TC3: Verify age value.

BVA (range):
  min = 17: pass
  min-1 = 16: fail
  min+1 = 18: pass
  max = 79: pass
  max-1 = 78: pass
  max+1 = 80: fail

ECP (type):
  Valid: 0-9
  Invalid: a-z, A-Z, special characters, blank field
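The six classic BVA test values for any numeric range can be generated mechanically; a minimal Python sketch (the helper name is an assumption, applied here to the insurance age range of 17 to 79):

```python
# Generic boundary value analysis helper: for a valid range [lo, hi]
# it yields the six classic test values used in the tables above.
def bva_values(lo, hi):
    return {
        "pass": [lo, lo + 1, hi - 1, hi],   # min, min+1, max-1, max
        "fail": [lo - 1, hi + 1],           # min-1, max+1
    }

# Age rule: greater than 16 and less than 80 -> valid range [17, 79].
cases = bva_values(17, 79)
assert cases["pass"] == [17, 18, 78, 79]
assert cases["fail"] == [16, 80]
```

The same helper reproduces the earlier login tables too, e.g. `bva_values(4, 16)` for the user ID length.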

Functional specification 3

A door opens when a person comes in front of the door, and the door closes when that person goes inside. Prepare test case titles for this scenario.

TC1: Verify door open.
  Person  | Door   | Criteria
  present | open   | pass
  present | closed | fail
  absent  | open   | fail
  absent  | closed | pass

TC2: Verify door close.
  Person | Door   | Criteria
  inside | closed | pass
  inside | open   | fail

TC3: Verify door operation when the person is standing in the middle of the door.
  Person in middle | Door   | Criteria
  present          | open   | pass
  present          | closed | fail
  not present      | open   | fail
  not present      | closed | pass


Functional specification 4

A computer shutdown operation. Prepare test case titles (or) scenarios.
TC1: Verify shutdown option selection using the Start menu.
TC2: Verify shutdown option selection using Alt+F4.
TC3: Verify the shutdown operation.
TC4: Verify the shutdown operation when a process is running.
TC5: Verify shutdown using the power ON/OFF button.

Functional specification 5

In a shopping application, users purchase different types of items. In a purchase order, the system allows users to select an item number and enter a quantity up to 10. The purchase order returns the total amount along with one item's price. Prepare test case titles (or) scenarios.

TC1: Verify the selection of item numbers.
TC2: Verify the quantity value.
TC3: Verify the calculation: total = item price × given quantity.

Functional specification 6
Washing machine operation: prepare test case titles or scenarios.
TC1: Verify power supply.
TC2: Verify door open.
TC3: Verify water filling with detergent.
TC4: Verify clothes filling.
TC5: Verify door closing.
TC6: Verify door closing failure due to clothes overflow.
TC7: Verify washing settings.
TC8: Verify washing operation.
TC9: Verify washing operation with low voltage.
TC10: Verify washing operation with clothes overloaded inside.
TC11: Verify washing operation with the door opened in the middle of the process.
TC12: Verify washing operation with lack of water.
TC13: Verify washing operation with water leakage.
TC14: Verify washing operation with improper settings.
TC15: Verify washing operation with a machine problem.

BVA on range:
  Min (1): pass
  Max (10): pass
  Min-1 (0): fail
  Min+1 (2): pass
  Max-1 (9): pass
  Max+1 (11): fail

ECP on type:
  Valid: 0-9
  Invalid: a-z, A-Z, special characters, blank field
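The quantity rule and TC3's calculation can be sketched together. Both function names below are assumptions for illustration, not the application's real API:

```python
# Hedged sketch of the purchase-order rules above:
# quantity must be a number from 1 to 10; total = price x quantity.
def is_valid_quantity(value: str) -> bool:
    return value.isdigit() and 1 <= int(value) <= 10

def order_total(price: float, quantity: int) -> float:
    return price * quantity

# BVA from the table: 0 and 11 fail; 1, 2, 9, 10 pass
assert all(is_valid_quantity(q) for q in ("1", "2", "9", "10"))
assert not is_valid_quantity("0") and not is_valid_quantity("11")
assert order_total(25.0, 4) == 100.0   # TC3: total = price x quantity
```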


Functional specification 7
In an e-banking application, users connect to the bank server through an internet connection. In this application, users fill in the below fields to connect to the bank server.

Password: 6-digit number.
Area code: 3-digit number.
Prefix: 3-digit number, but does not start with 0 or 1.
Suffix: 6-character alphanumeric.
Commands: cheque deposit, money transfer, mini statement, and bill pay.

Prepare test case titles or scenarios.
TC1: Verify the password value.

BVA on size:
  Min = Max = 6 digits: pass
  Min-1 (5 digits): fail
  Min+1 (7 digits): fail
  Max+1 (7 digits): fail
  Max-1 (5 digits): fail

ECP on type:
  Valid: 0-9
  Invalid: a-z, A-Z, special characters, blank field

TC2: Verify the area code value.

BVA on size:
  Min = Max = 3 digits: pass
  Min-1 (2 digits): fail
  Min+1 (4 digits): fail
  Max+1 (4 digits): fail
  Max-1 (2 digits): fail

ECP on type:
  Valid: 0-9, blank field
  Invalid: a-z, A-Z, special characters

TC3: Verify the prefix value.

BVA on range:
  Min (200): pass
  Max (999): pass
  Min-1 (199): fail
  Min+1 (201): pass
  Max+1 (1000): fail
  Max-1 (998): pass

ECP on type:
  Valid: 0-9
  Invalid: a-z, A-Z, special characters, blank field

TC4: Verify the suffix value.

BVA on size:
  Min = Max = 6 characters: pass
  Min-1 (5 characters): fail
  Min+1 (7 characters): fail
  Max+1 (7 characters): fail
  Max-1 (5 characters): fail

ECP on type:
  Valid: 0-9, a-z, A-Z
  Invalid: special characters, blank field


TC5: Verify the connection to the bank server.
TC6: Verify the selection of commands such as cheque deposit, money transfer, mini statement and bill pay.

Field values                                   Criteria
All are valid values                           Pass
Any one is an invalid value                    Fail
Any one is blank (except area code)            Fail
All are valid values and area code is blank    Pass
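The four field rules and the decision table above can be sketched with simple regular expressions. All function names are illustrative assumptions; note that a blank area code is treated as valid, per the decision table:

```python
import re

# Hedged validators for the e-banking fields described above.
def valid_password(v):  return re.fullmatch(r"\d{6}", v) is not None
def valid_area_code(v): return v == "" or re.fullmatch(r"\d{3}", v) is not None
def valid_prefix(v):    return re.fullmatch(r"[2-9]\d{2}", v) is not None  # no leading 0/1
def valid_suffix(v):    return re.fullmatch(r"[0-9a-zA-Z]{6}", v) is not None

def can_connect(password, area_code, prefix, suffix):
    # Mirrors the decision table: all valid -> pass; blank area code still passes.
    return (valid_password(password) and valid_area_code(area_code)
            and valid_prefix(prefix) and valid_suffix(suffix))

assert can_connect("123456", "040", "212", "ab12cd")      # all valid -> pass
assert can_connect("123456", "", "212", "ab12cd")         # blank area code -> pass
assert not can_connect("12345", "040", "212", "ab12cd")   # 5-digit password -> fail
assert not can_connect("123456", "040", "012", "ab12cd")  # prefix starts with 0 -> fail
```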

Functional specification 8
A computer restart operation. Prepare test case titles or scenarios.

Functional specification 9
Money withdrawal from an ATM machine. Prepare test case titles.

1. Verify card insertion
2. Verify card insertion at a wrong or improper angle
3. Verify card insertion with an improper account
4. Verify PIN number entry
5. Verify operation when a wrong PIN number is entered 3 times
6. Verify language selection
7. Verify account type selection
8. Verify operation when an invalid account type is selected with respect to the inserted card
9. Verify withdrawal option selection
10. Verify amount entry
11. Verify withdrawal operation (correct amount, right receipt, and able to take back the card)
12. Verify withdrawal operation with wrong denominations in the amount
13. Verify withdrawal operation when the requested amount is greater than the possible amount
14. Verify withdrawal operation with lack of amount in the ATM
15. Verify withdrawal operation when the requested amount is greater than the day limit
16. Verify withdrawal operation when the current transaction number is greater than the day limit on number of transactions
17. Verify withdrawal operation when there is a network problem
18. Verify cancel after insertion of card
19. Verify cancel after entry of PIN number
20. Verify cancel after selection of language
21. Verify cancel after selection of account type
22. Verify cancel after entry of amount

Test case documentation: After completing test case scenario selection, test engineers document the test cases with complete information. In this test case document, test engineers use the IEEE 829 format (IEEE: Institute of Electrical and Electronics Engineers).


Test case format:
1. Test case ID: unique number or name
2. Test case name: the title or scenario of the corresponding test case
3. Feature to be tested: corresponding module or function or service
4. Test suite ID: the name of the test batch in which this test case is a member
5. Priority: the importance of the test case in terms of functionality. P0: basic functionality, P1: general functionality, P2: cosmetic functionality
6. Test environment: required H/W and S/W to execute this test case and the application build
7. Test effort: expected time to execute this test case on the build. Example: 20 minutes is an average time
8. Test duration: approximate date and time
9. Precondition or test setup: necessary tasks to do before starting this test case execution
10. Test procedure or data matrix: a step-by-step procedure:

Step No | Action | I/P required | Expected | Actual | Result | Defect ID

In the above table, the first four columns are called test design and the remaining three columns are called test execution.

Data matrix:

I/P Object | ECP (type) Valid | ECP (type) Invalid | BVA (size) Min | BVA (size) Max

11. Test case pass or fail criteria: when this case is treated as passed and when it is treated as failed.

Note 1: The above 11-field test case format is not mandatory, because some field values are common to most test cases and some field values are easy to remember or derive.
Note 2: Generally, test cases cover objects and operations. If a test case covers object values, then test engineers prepare a data matrix.
Note 3: If a test case covers an operation or execution, then test engineers prepare a test procedure from base state to end state.

Functional specification 10
A login process allows a user ID and password to authorize users. The user ID takes lowercase alphanumerics, 4 to 16 characters long. The password object accepts lowercase alphabets, 4 to 8 characters long. Prepare a test case document.

Document 1
1. Test case ID: TC_login_arjun_1
2. Test case name: verify user ID
3. Test suite ID: TS_Login
4. Priority: P0
5. Precondition: user ID object takes values from the keyboard
6. Data matrix:

I/P Object | ECP (type) Valid | ECP (type) Invalid  | BVA (size) Min | BVA (size) Max
User ID    | a-z, 0-9         | A-Z, special, blank | 4 (man)        | 16 (sasidhar)

Document 2
1. Test case ID: TC_login_arjun_2


2. Test case name: verify password
3. Test suite ID: TS_Login
4. Priority: P0
5. Precondition: password object takes values from the keyboard
6. Data matrix:

I/P Object | ECP (type) Valid | ECP (type) Invalid       | BVA (size) Min | BVA (size) Max
Password   | a-z              | 0-9, A-Z, special, blank | 4 (man)        | 8 (sasidhar)
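The user ID and password rules from this specification can be expressed as one-line regex checks (function names are assumptions):

```python
import re

# Sketch of the login rules above: user ID is lowercase alphanumeric,
# 4-16 characters; password is lowercase a-z, 4-8 characters.
def valid_user_id(v):  return re.fullmatch(r"[a-z0-9]{4,16}", v) is not None
def valid_password(v): return re.fullmatch(r"[a-z]{4,8}", v) is not None

assert valid_user_id("man1") and valid_user_id("a" * 16)        # min 4, max 16
assert not valid_user_id("abc") and not valid_user_id("a" * 17)
assert valid_password("sasidhar")                               # 8 chars: max passes
assert not valid_password("Sasi") and not valid_password("sasidhar1")  # uppercase / too long
```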

Document 3
1. Test case ID: TC_login_arjun_3
2. Test case name: verify login operation
3. Test suite ID: TS_Login
4. Priority: P0
5. Precondition: user ID and password objects take values from the keyboard
6. Test procedure:

Step No | Action                | I/P required       | Expected
1       | Focus to login window | None               | User ID object focused
2       | Fill fields           | User ID & password | 'OK' button enabled
3       | Click 'OK'            | valid-valid        | Next window
        |                       | valid-invalid      | Error message
        |                       | invalid-valid      | --do--
        |                       | valid-blank        | --do--
        |                       | blank-valid        | --do--

2. USE CASES BASED TEST CASE DESIGN

Use case based: The other method for test case selection is use case based test case design. This method is preferable for outsourcing testing companies. Generally, most testers prepare test cases depending on the functional and system specifications in the corresponding project SRS; sometimes the testing people prepare test cases depending on use cases as well. Use cases are more elaborate and understandable than functional and system specifications.

[Diagram: BRS -> SRS -> HLD -> LLDs -> CODING (BUILD); test cases depend on the use cases in the SRS]


From the above diagram, the testing team receives use cases from project management to prepare test cases. Every use case describes functionality with all required information, and every use case follows a standard format, unlike a theoretical functional specification.

FORMAT:
1. Use case name: the name of the use case, for future reference
2. Use case description: summary of functionality
3. Actors: names of the actors who participate in the corresponding function
4. Related use cases: names of use cases which have a dependency with this use case
5. Preconditions: list of necessary tasks to do before starting this functionality testing
6. Activity flow diagram: the graphical notation of the corresponding functionality
7. Primary scenario: step-by-step actions to perform the corresponding functionality
8. Alternative scenario: alternative list of actions to perform the same functionality
9. Post conditions: specifies the exit point of the corresponding functionality
10. User interface mockup: model screen or prototype
11. Special requirements: list of rules to be followed, if any

From the above use case format, project management provides all functionality documents with complete details. Depending on those use cases, test engineers prepare test case titles and then documentation using the IEEE 829 format.

3. APPLICATION BUILD BASED TEST CASE DESIGN

Generally, test engineers prepare test cases depending on functional and system specifications or use cases. After completing maximum test case selection, test engineers prepare some test cases depending on the application build received from the development team. These new test cases concentrate on the usability of the screens in an application: ease of use, look and feel, speed of the interface, and user manual correctness.

Examples of test cases:
- Verify spelling in every screen
- Verify contrast of each object in every screen
- Verify alignment of objects in every screen
- Verify color commonness in all screens
- Verify font commonness in all screens
- Verify size commonness in all screens
- Verify functionality-grouped objects in screens
- Verify borders of functionality-grouped objects
- Verify tool tips (Ex: messages about icons in screens)
- Verify the placement of multiple-data objects in screens (Ex: list boxes, table grids, ActiveX control menus and data windows)
- Verify scroll bars
- Verify labels of objects in every screen (in init cap)
- Verify keyboard access in the application build
- Verify abbreviations in all screens (Ex: shortcuts)
- Verify information representation in screens (Ex: date of birth as dd/mm/yyyy vs mm/dd/yyyy)
- Verify the help document (help menu contents in the application build)


Note: Generally, test engineers prepare maximum test cases depending on the functional and system specifications in the SRS. The remaining test cases are prepared using the application build, because the functional and system specifications do not provide complete information about every small issue in the project. Sometimes the testing people use use cases instead of the functional and system specifications in the SRS.

Review test cases: After completing test case selection and documentation, the test lead conducts a review meeting along with the test engineers. In this review, the test lead concentrates on the completeness and correctness of the test cases. In coverage analysis, the test lead considers 2 types of factors:

1. Requirement-based test case coverage
2. Testing-technique-based test case coverage

After completion of this review meeting, test engineers concentrate on test execution.

IV. TEST EXECUTION

In test execution, test engineers concentrate on test case execution, defect reporting and tracking. In this stage the testing team conducts a small meeting with the development team for version controlling and establishment of the test environment.

[Diagram: Development releases initial builds to Testing for sanity/smoke testing (Level 0); the stable build undergoes Level 1 (comprehensive/exhaustive) testing with defect reporting; bug fixing and resolving produce a modified build for Level 2 (regression); the master build undergoes Level 3 (final regression / post-mortem testing)]

1. Version control: During test execution, developers assign unique version numbers to software builds after performing required changes. This version numbering system should be understandable to the testing people. For build version controlling, the development people use version control software. Example: VSS (Visual SourceSafe).

LEVELS OF TEST EXECUTION Vs TEST CASES

Level 0: selected test cases for basic functionality
Level 1: total functionality
Level 2: selected test cases w.r.t. modification
Level 3: selected test cases w.r.t. bug density

Note:
Level 0 runs on the initial build
Level 1 runs on the stable build
Level 2 runs on the modified build
Level 3 runs on the master build; the golden build is released to the customer

Level 0: Generally, testers start test execution with Level 0 testing. It is also known as sanity or smoke or octangle or tester acceptance or build verification or testability testing. In this testing level, test engineers concentrate on the below 8 factors by operating the corresponding initial build:
1. Understandable
2. Operatable
3. Observable
4. Controllable
5. Consistency
6. Simplicity
7. Maintainable
8. Automatable

Level 1 (comprehensive testing):

After receiving a stable build from the development team, test engineers execute all test cases sequentially, either manually or in automation. In manual test execution, test engineers compare the test case's specified expected values with the build's actual values. During test execution, test engineers prepare a test log. This document consists of 3 types of entries:

1. Passed: all expected values of the test case are equal to the actual values of the build.
2. Failed: any one expected value varies from the corresponding actual value of the build.
3. Blocked: a dependent test case's execution is postponed to the next cycle due to wrong parent functionality.
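The three test-log entries can be sketched as a small classification function (a hypothetical helper, not part of any standard tool):

```python
# Sketch of the test-log rule above: a test case is Blocked when its
# parent functionality failed; otherwise it is Passed only if every
# expected value equals the corresponding actual value.
def log_entry(expected, actual, parent_ok=True):
    if not parent_ok:
        return "Blocked"   # execution postponed to the next cycle
    return "Passed" if all(e == a for e, a in zip(expected, actual)) else "Failed"

assert log_entry(["Next window"], ["Next window"]) == "Passed"
assert log_entry(["Next window"], ["Error message"]) == "Failed"
assert log_entry(["Next window"], ["Next window"], parent_ok=False) == "Blocked"
```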

Level 2 (regression testing): During Level 1 comprehensive testing, the testing people report mismatches between the test cases' expected values and the build's actual values to the development team as defect reports. After reviewing and resolving a defect, the developers release a modified build along with a release note. The responsible test engineers study the release note to understand the modifications in the modified build, and then concentrate on regression testing to ensure those modifications.


[Diagram: all test cases in queue -> execution; each test case is marked Skip, Blocked, Passed, Partial (pass/fail) or Fail, and is finally closed]


[Diagram: modified build check-in -> Level 0 -> Level 1 -> Level 2 (regression) -> check-out]

From the above diagram, test engineers conduct regression testing on the modified build with respect to the modifications mentioned in the release note. They study the release note and consider the severity of the resolved bug.

Severity:   High        Medium       Low
            All P0      All P0       Some P0
            All P1      Max P1       Some P1
            Max P2 TC   Some P2 TC   Some P2 TC

Case 1: If the development team resolves a bug whose severity is high, then test engineers re-execute all P0, all P1, and carefully selected max P2 test cases on the modified build, with respect to the modifications mentioned in the release note.

Case 2: If the development team resolves a bug whose severity is medium, then test engineers re-execute all P0, carefully selected max P1, and some P2 test cases on the modified build, w.r.t. the modifications mentioned in the release note.

Case 3: If the development team resolves a bug whose severity is low, then test engineers re-execute some P0, P1 and P2 test cases on the modified build, w.r.t. the modifications mentioned in the release note.

Case 4: If the development team releases a modified build due to sudden changes in customer requirements, then test engineers re-execute all P0, all P1, and carefully selected max P2 test cases on the modified build, w.r.t. the modifications mentioned in the release notes.
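The severity cases can be summarized as a lookup table; this sketch models "all"/"max"/"some" as labels rather than concrete test selections, which is an illustrative simplification:

```python
# Hedged sketch of regression test selection by resolved-bug severity,
# mirroring the three severity columns above.
def regression_selection(severity):
    table = {
        "high":   {"P0": "all",  "P1": "all",  "P2": "max"},
        "medium": {"P0": "all",  "P1": "max",  "P2": "some"},
        "low":    {"P0": "some", "P1": "some", "P2": "some"},
    }
    return table[severity]

assert regression_selection("high")["P1"] == "all"
assert regression_selection("medium")["P2"] == "some"
assert regression_selection("low")["P0"] == "some"
```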

V. TEST REPORTING

During Level 1 and Level 2 test execution, test engineers report mismatches between the test cases' expected values and the build's actual values as defect reports to the development team. In test reporting, the development team receives defect reports from the testing team in a standard format. This format is followed by every test engineer during test execution to report defects.

IEEE 829 defect report format:

1. Defect ID: unique number or name, for future reference
2. Description: summary of the defect
3. Build version number: the version number of the current build, in which the test engineer detected this defect
4. Feature: the name of the module or function where the test engineer found the defect
5. Test case name: the name of the failed test case, during whose execution the test engineer found this defect
6. Status: New (reported for the first time) / Reopen (re-reported)
7. Reproducible: Yes (the defect appears every time in test execution) / No (the defect appears rarely in test execution)
8. If yes, attach the test procedure
9. If no, attach a snapshot and strong reasons
10. Severity: the seriousness of the defect in terms of functionality

High: not able to continue remaining testing without resolving this defect (show-stopper).
Medium: able to continue remaining testing, but mandatory to resolve.
Low: able to continue remaining testing, and it may or may not be resolved.

11. Priority: the importance of resolving the defect in terms of the customer (high, medium, low)
12. Detected by: the name of the test engineer
13. Detected on: date and time of defect reporting
14. Assigned to: the name of the responsible person on the development side who receives the defect report
15. Suggested fix (optional): reasons to accept and resolve this defect
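The report fields can be mirrored as a simple record type; this is a paraphrase of the fields above for illustration, not an official IEEE 829 schema, and the field names are assumptions:

```python
from dataclasses import dataclass
from typing import Optional

# Hypothetical record mirroring the defect report format described above.
@dataclass
class DefectReport:
    defect_id: str
    description: str
    build_version: str
    feature: str
    test_case_name: str
    status: str = "New"           # New / Reopen
    reproducible: bool = True
    severity: str = "medium"      # high / medium / low
    priority: str = "medium"      # high / medium / low
    detected_by: str = ""
    assigned_to: str = ""
    suggested_fix: Optional[str] = None

d = DefectReport("D1", "Total is wrong", "1.2.0", "Purchase order", "TC3",
                 severity="high", priority="high")
assert d.status == "New" and d.severity == "high"
```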

Resolution type: After receiving a defect report from the testing team, the responsible development people conduct a review meeting and then send a resolution to the responsible testing team.

There are 12 types of resolutions:

1. Enhancement: the reported defect is rejected because this defect relates to future requirements of the customer.

2. Duplicate: the reported defect is rejected because the defect is similar to a previously accepted defect.

3. H/W limitation: the reported defect is rejected because the defect arose due to limitations of the H/W devices.

4. S/W limitation: the reported defect is rejected because the defect arose due to limitations of the corresponding S/W technology.

5. Not applicable: the reported defect is rejected because the defect has improper meaning.

6. Functions as designed: the reported defect is rejected because the coding is correct w.r.t. the design documents.

7. Need more information: the reported defect is neither accepted nor rejected, but the developers request more information about the defect in order to understand it.

8. Unreproducible: the reported defect is neither accepted nor rejected, but the developers require the correct procedure to reproduce the defect.

9. No plan to fix it: the reported defect is neither accepted nor rejected, but the development people require some more extra time.

10. Open: the reported defect is accepted, and the development people are ready to resolve it through changes in coding.

11. Deferred: the reported defect is accepted, but postponed to a future release due to low severity and low priority.


12. User direction: the reported defect is accepted, but the developer provides valid information about the defect to the customer-side people through the application build screens.

DEFECT TRACKING PROCEDURE

[Diagram: defect tracking hierarchy with Test engineer -> Test lead -> Test manager -> PM on the testing side and PM -> Test lead -> Programmer on the development side; steps 1-5 are defect reporting, steps 6-10 are defect resolution]

Note: Generally, test engineers judge the severity and priority of a defect during reporting; the development people generally assign them low priority and low severity.

[Diagram: alternative hierarchy with Test engineer -> Test lead -> PM -> Team lead -> Programmer; steps 1-4 are defect reporting, steps 5-8 are defect resolution]

DEFECT LIFE CYCLE OR BUG LIFE CYCLE

[Diagram: New -> Open / Rejected / Deferred; resolved defects move to Closed; a closed defect may be Reopened]
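The life cycle can be sketched as an allowed-transition map. How Rejected and Deferred reach Closed is my assumption from the diagram, not something the notes state explicitly:

```python
# Hedged sketch of the bug life cycle as a state-transition table.
TRANSITIONS = {
    "New":      {"Open", "Rejected", "Deferred"},
    "Open":     {"Closed"},
    "Rejected": {"Closed"},      # assumption: rejection ends in Closed
    "Deferred": {"Closed"},      # assumption: deferred bugs close in a later release
    "Closed":   {"Reopen"},
    "Reopen":   {"Open", "Rejected", "Deferred"},  # treated like a fresh report
}

def can_move(src, dst):
    return dst in TRANSITIONS.get(src, set())

assert can_move("New", "Open")
assert can_move("Closed", "Reopen")
assert not can_move("New", "Closed")   # a new defect cannot jump straight to Closed
```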

Types of defects: Generally, black box testers find the following types of defects during system testing:


1. User interface defects

2. Boundary related defects

3. Error handling defects

4. Calculations defects

5. Race condition defects

6. Load condition defects

7. H/W related defects

8. ID control bugs

9. Version control bugs

10. Source bugs

1. User interface defects (low severity):
Example 1: spelling mistake (low severity and high priority)
Example 2: improper right alignment (low severity and low priority)

2. Boundary-related defects (medium severity):
Example 1: an object does not take valid types of values as input (medium severity and high priority)
Example 2: an object also takes invalid types (medium severity and low priority)

3. Error handling bugs (medium severity):
Example 1: does not return an error message to prevent a wrong operation on the build (medium severity and low priority)
Example 2: returns an error message, but it is complex to understand (medium severity and low priority)

4. Calculation bugs (high severity):
Example 1: a dependent I/P is wrong (application show-stopper) (high severity and high priority)
Example 2: the final O/P is wrong (module show-stopper) (high severity and low priority)

5. Race condition bugs (high severity):
Example 1: deadlock or hang (application show-stopper) (high severity and high priority)
Example 2: does not run on other customer-expected platforms (high severity and low priority)

6. Load condition bugs (high severity):
Example 1: does not allow multiple users (application show-stopper) (high severity and high priority)
Example 2: does not allow the customer-expected load (high severity and low priority)

7. H/W related bugs (high severity):
Example 1: does not activate a required H/W device (application show-stopper) (high severity and high priority)
Example 2: does not support all customer-expected H/W technologies (high severity and low priority)

8. ID control bugs (medium severity):
Example: wrong logo, missing logo, missing copyright window, wrong version number, missing development and testing people's names, etc.


9. Version control bugs (medium severity): Example: invalid differences between the old build version and the current build version.

10. Source bugs (medium severity): Example: mistakes in the help document.


TEST CLOSURE

After completing a reasonable number of test execution cycles, the test lead concentrates on test closure to estimate the completeness and correctness of test execution and bug resolution. In the review meeting, the test lead considers some factors to review the testing team's responsibility.

1. Coverage analysis: requirement coverage or module coverage; testing technique coverage.

2. Defect density:

Module name   No. of defects
A             20%
B             20%
C             40%  (needs regression)
D             20%

3. Analysis of deferred (postponed) defects: whether the deferred defects are genuinely postponable or not. After completing the closure review, the testing team concentrates on postmortem testing (final regression testing or pre-acceptance testing), if required.

4. User acceptance testing: After completing testing and its reviews, project management concentrates on user acceptance testing to collect feedback from real customers or model customers. There are two ways to conduct UAT: α-testing and β-testing.

5. Sign off: After completing user acceptance testing and modifications, project management declares the release team and CCB (Change Control Board). Both teams involve a few developers and test engineers along with the project manager. In the sign-off stage, the testing team submits all prepared testing documents to the project manager.

[Diagram: select high-defect-density module -> plan regression -> effort estimation -> regression testing -> test reporting]


Test strategy, test plans, test case titles/test scenarios, test case documents, test logs, and test defect reports: the combination of all the above documents is also known as the Final Test Summary Report (FTSR).

Requirements Traceability Matrix

This is a document whose creation and updating are done by the test lead. It starts with test planning and ends with test closure.

Requirement ID   Test case ID   Pass/Fail   Defect ID   Closed/Deferred
Requirement 1    TC1            Passed      --          --
                 TC2            Passed      --          --
                 TC3            Passed      --          --
                 TC4            Failed      D1          Closed

The above matrix is also known as the requirement validation matrix.
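The matrix rows can be held in a small in-memory structure for illustration (real teams typically keep the RTM in a spreadsheet or a test-management tool; the structure below is my assumption):

```python
# Sketch of the traceability matrix above as plain Python data.
rtm = {
    "Requirement1": [
        {"tc": "TC1", "result": "Passed", "defect": None},
        {"tc": "TC2", "result": "Passed", "defect": None},
        {"tc": "TC3", "result": "Passed", "defect": None},
        {"tc": "TC4", "result": "Failed", "defect": "D1"},  # D1 was later closed
    ],
}

# The matrix makes it easy to trace which test cases failed per requirement.
failed = [row["tc"] for row in rtm["Requirement1"] if row["result"] == "Failed"]
assert failed == ["TC4"]
```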

Testing Measurements & Metrics

A measurement is a basic unit; metrics are compound units. In system testing, project management and test management use 3 types of measurements & metrics.

I. Quality assessment measurements: used by the project manager or test manager during testing (monthly once).

a) Stability
b) Sufficiency: requirements coverage and testing technique coverage
c) Defect severity distribution: organization trend limit check

II. Test management measurements: these measurements are used by the test lead category of people during testing (weekly once).

a) Test status:
   - No. of test cases executed and their outputs
   - No. of test cases yet to execute

b) Delays in delivery:
   - Defect arrival rate
   - Defect resolution rate
   - Defect ageing

c) Test efficiency:
   - No. of defects detected per person-day
   - No. of test cases prepared per person-day
   - No. of test cases executed per person-day

III. Process capability measurements: these measurements are used by the project manager & test manager to improve the testing team's effort.

a) Test effectiveness: requirement coverage and testing technique coverage.

b) Defect escapes: type-phase analysis, and defect removal efficiency = A / (A + B), where A is the number of defects detected by the testing team and B is the number of bugs faced by the customer side.
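The formula can be computed directly; for example, 90 defects found in testing and 10 found by the customer gives a DRE of 0.9:

```python
# Defect removal efficiency as defined above: DRE = A / (A + B), where A is
# defects found by the testing team and B is bugs found at the customer side.
def defect_removal_efficiency(a: int, b: int) -> float:
    return a / (a + b)

assert defect_removal_efficiency(90, 10) == 0.9
```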

MANUAL TESTING vs TEST AUTOMATION

In manual testing, a test engineer executes test cases without using any third-party testing tool.

In test automation, a test engineer executes test cases with the help of a testing tool. Test engineers prefer automation over manual testing for test repetition and compatibility. Two types of testing tools are available in the market: functionality testing tools and load/stress (performance) testing tools.

Examples:

FUNCTIONALITY TESTING TOOLS: WinRunner, XRunner, QTP, Rational Robot, SilkTest

LOAD/STRESS TESTING TOOLS: LoadRunner, Rational LoadTest, SilkPerformer, etc.

34