Basics of Software Testing Part 2



    Black Box Testing

    Black Box Testing Definition, Example, Application, Techniques, Advantages and Disadvantages:

    DEFINITION

    Black Box Testing, also known as Behavioral Testing, is a software testing method in

    which the internal structure/design/implementation of the item being tested is not known to

    the tester. These tests can be functional or non-functional, though usually functional.

    This method is so named because the software program, in the eyes of the tester, is like a black box, inside which one cannot see.

    Black Box Testing is contrasted with White Box Testing. View Differences between Black

    Box Testing and White Box Testing.

    This method attempts to find errors in the following categories:

    Incorrect or missing functions

    Interface errors

    Errors in data structures or external database access

    Behavior or performance errors

    Initialization and termination errors

    EXAMPLE

    A tester, without knowledge of the internal structures of a website, tests the web pages by

    using a browser, providing inputs (clicks, keystrokes) and verifying the outputs against the expected outcome.

    LEVELS APPLICABLE TO

    The Black Box Testing method is applicable to all levels of the software testing process:

    Unit Testing


    Integration Testing

    System Testing

    Acceptance Testing

    The higher the level, and hence the bigger and more complex the box, the more the black box testing method comes into use.

    BLACK BOX TESTING TECHNIQUES

    Following are some techniques that can be used for designing black box tests.

    Equivalence Partitioning

    Equivalence Partitioning is a software test design technique that involves dividing input values into valid and invalid partitions and selecting representative values from each partition as test data.
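
    For illustration, here is a minimal sketch in Python; the age field, its 18-60 valid range, and the accepts_age function are hypothetical examples, not taken from the text above.

        # Hypothetical input field: accepts integer ages from 18 to 60 inclusive.
        # Partitions: below 18 (invalid), 18 to 60 (valid), above 60 (invalid).
        partitions = {
            "invalid_low": range(0, 18),
            "valid": range(18, 61),
            "invalid_high": range(61, 120),
        }

        def accepts_age(age):
            # Stand-in for the system under test.
            return 18 <= age <= 60

        # One representative value per partition is enough test data.
        for name, values in partitions.items():
            representative = values[len(values) // 2]
            expected = (name == "valid")
            assert accepts_age(representative) == expected, name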

    Boundary Value Analysis

    Boundary Value Analysis is a software test design technique that involves determination of boundaries for input values and selecting values that are at the boundaries and just inside/outside of the boundaries as test data.
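
    Continuing the hypothetical age field above (valid range 18 to 60), a sketch of boundary value selection in Python:

        # Test at each boundary, just inside it, and just outside it.
        lower, upper = 18, 60
        boundary_values = [lower - 1, lower, lower + 1, upper - 1, upper, upper + 1]

        def accepts_age(age):
            # Stand-in for the system under test.
            return lower <= age <= upper

        for age in boundary_values:
            expected = lower <= age <= upper
            assert accepts_age(age) == expected, f"boundary case {age} failed"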

    Cause Effect Graphing

    Cause Effect Graphing is a software test design technique that involves identifying the causes (input conditions) and effects (output conditions), producing a Cause-Effect Graph, and generating test cases accordingly.
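
    As a minimal sketch of the idea, assume two hypothetical causes (user is registered, cart is non-empty) that jointly produce one effect (checkout is enabled); every combination of causes in the graph becomes a test case:

        from itertools import product

        def checkout_enabled(registered, cart_nonempty):
            # Hypothetical cause-effect relationship: E1 = C1 AND C2.
            return registered and cart_nonempty

        # Enumerate the cause combinations; each row is a derived test case.
        for c1, c2 in product([True, False], repeat=2):
            expected = c1 and c2
            assert checkout_enabled(c1, c2) == expected
            print(f"registered={c1}, cart_nonempty={c2} -> enabled={expected}")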

    BLACK BOX TESTING ADVANTAGES

    Tests are done from a user's point of view and will help in exposing discrepancies in the specifications

    The tester need not know programming languages or how the software has been implemented

    Tests can be conducted by a body independent from the developers, allowing for an objective perspective and the avoidance of developer bias

    Test cases can be designed as soon as the specifications are complete

    BLACK BOX TESTING DISADVANTAGES

    Only a small number of possible inputs can be tested and many program paths will

    be left untested

    Without clear specifications, which is the situation in many projects, test cases will

    be difficult to design


    Tests can be redundant if the software designer/developer has already run a test

    case.

    Ever wondered why a soothsayer closes the eyes when foretelling events? Such is almost the case in Black Box Testing.

    Definition by ISTQB

    black box testing: Testing, either functional or non-functional, without reference to the internal structure of the component or system.

    black box test design technique: Procedure to derive and/or select test cases based on an analysis of the specification, either functional or non-functional, of a component or system without reference to its internal structure.

    White Box Testing

    White Box Testing Definition, Example, Application, Advantages and Disadvantages:

    DEFINITION

    White Box Testing (also known as Clear Box Testing, Open Box Testing, Glass Box Testing, Transparent Box Testing, Code-Based Testing or Structural Testing) is a software testing method in which the internal structure/design/implementation of the item being tested is known to the tester. The tester chooses inputs to exercise paths through the code and determines the appropriate outputs. Programming know-how and implementation knowledge are essential. White box testing is testing beyond the user interface and into the nitty-gritty of a system.

    This method is so named because the software program, in the eyes of the tester, is like a white/transparent box, inside which one clearly sees.

    White Box Testing is contrasted with Black Box Testing. View Differences between Black Box Testing and White Box Testing.

    EXAMPLE

    A tester, usually a developer as well, studies the implementation code of a certain field on a webpage, determines all legal (valid and invalid) AND illegal inputs, and verifies the outputs against the expected outcomes, which are also determined by studying the implementation code.

    LEVELS APPLICABLE TO


    The White Box Testing method is applicable to the following levels of software testing:

    Unit Testing: For testing paths within a unit

    Integration Testing: For testing paths between units

    System Testing: For testing paths between subsystems

    However, it is mainly applied to Unit Testing.

    WHITE BOX TESTING ADVANTAGES

    Testing can be commenced at an earlier stage. One need not wait for the GUI to be

    available.

    Testing is more thorough, with the possibility of covering most paths.

    WHITE BOX TESTING DISADVANTAGES

    Since tests can be very complex, highly skilled resources are required, with thorough knowledge of programming and implementation.

    Test script maintenance can be a burden if the implementation changes too frequently.

    Since this method of testing is closely tied to the application being tested, tools to cater to every kind of implementation/platform may not be readily available.

    White Box Testing is like the work of a mechanic who examines the engine to see

    why the car is not moving.

    Definition by ISTQB

    white-box testing: Testing based on an analysis of the internal structure of the component or system.

    white-box test design technique: Procedure to derive and/or select test cases based on an analysis of the internal structure of a component or system.

    Gray Box Testing

    Gray Box Testing Definition, Example:

    DEFINITION

    Gray Box Testing is a software testing method which is a combination of the Black Box Testing method and the White Box Testing method. In Black Box Testing, the internal structure of the item being tested is unknown to the tester, and in White Box Testing the internal structure is known. In Gray Box Testing, the internal structure is partially known.

    This involves having access to internal data structures and algorithms for purposes of

    designing the test cases, but testing at the user, or black-box, level.

    Gray Box Testing is so named because the software program, in the eyes of the tester, is like a gray/semi-transparent box, inside which one can partially see.

    EXAMPLE

    An example of Gray Box Testing would be when the code for two units/modules is studied (White Box Testing method) for designing test cases and actual tests are conducted using the exposed interfaces (Black Box Testing method).

    LEVELS APPLICABLE TO

    Though the Gray Box Testing method may be used at other levels of testing, it is primarily

    useful in Integration Testing.

    SPELLING

    Note that Gray is also spelt as Grey. Hence Grey Box Testing and Gray Box Testing mean

    the same.

    Smoke Testing

    Smoke Testing Definition, Elaboration, Advantages, Details:

    DEFINITION

    Smoke Testing, also known as Build Verification Testing, is a type of software testing that comprises a non-exhaustive set of tests aimed at ensuring that the most important functions work. The results of this testing are used to decide if a build is stable enough to proceed with further testing.

    The term smoke testing, it is said, came to software testing from a similar type of hardware testing, in which the device passed the test if it did not catch fire (or smoke) the first time it was turned on.


    ELABORATION

    Smoke testing covers most of the major functions of the software but none of them in

    depth. The result of this test is used to decide whether to proceed with further testing. If the

    smoke test passes, go ahead with further testing. If it fails, halt further tests and ask for a new build with the required fixes. If an application is badly broken, detailed testing might

    be a waste of time and effort.

    A smoke test helps in exposing integration and other major problems early in the cycle. It can be conducted on both newly created software and enhanced software. A smoke test is performed manually or with the help of automation tools/scripts. If builds are prepared frequently, it is best to automate smoke testing.
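
    For instance, an automated smoke test can be as shallow as checking that a few critical pages respond at all; the URL, paths, and the use of pytest with the requests library below are illustrative assumptions, not from the text:

        import pytest
        import requests

        BASE = "http://localhost:8000"  # hypothetical application under test
        CRITICAL_PATHS = ["/", "/login", "/search"]

        @pytest.mark.parametrize("path", CRITICAL_PATHS)
        def test_critical_page_responds(path):
            # Non-exhaustive check: the build is stable enough if these load.
            response = requests.get(BASE + path, timeout=5)
            assert response.status_code == 200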

    As and when an application becomes mature, with the addition of more functionalities etc., the smoke test needs to be made more expansive. Sometimes, it takes just one incorrect character in the code to render an entire application useless.

    ADVANTAGES

    It exposes integration issues.

    It uncovers problems early.

    It provides some level of confidence that changes to the software have not

    adversely affected major areas (the areas covered by smoke testing, of course)

    LEVELS APPLICABLE TO

    Smoke testing is normally used at the Integration Testing, System Testing, and Acceptance Testing levels.

    NOTE

    Do not consider smoke testing to be a substitute for functional/regression testing.

    Regression Testing

    DEFINITION

    Regression testing is a type of software testing that intends to ensure that changes

    (enhancements or defect fixes) to the software have not adversely affected it.

    ELABORATION

    The likelihood of any code change impacting functionalities that are not directly associated with the code is always there, and it is essential that regression testing is conducted to make sure that fixing one thing has not broken another thing.

    During regression testing, new test cases are not created; previously created test cases are re-executed.

    LEVELS APPLICABLE TO

    Regression testing can be performed during any level of testing (Unit, Integration, System,

    or Acceptance) but it is mostly relevant during System Testing.

    EXTENT

    In an ideal case, a full regression test is desirable, but oftentimes there are time/resource constraints. In such cases, it is essential to do an impact analysis of the changes to identify areas of the software that have the highest probability of being affected by the change and that have the highest impact on users in case of malfunction, and to focus testing around those areas.

    Due to the scale and importance of regression testing, more and more companies and

    projects are adopting regression test automation tools.
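
    As one way of doing this (assuming pytest, which the text does not name), previously created test cases can be tagged once and then re-executed as a suite after every change:

        import pytest

        def login(user, password):
            # Hypothetical stand-in for existing, supposedly unchanged functionality.
            return bool(user) and bool(password)

        # Existing test cases are marked as regression tests...
        @pytest.mark.regression  # register the marker in pytest.ini to avoid warnings
        def test_login_still_works():
            assert login("user", "secret") is True

        # ...and re-executed after each change, without writing new tests:
        #   pytest -m regression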

    LITERAL MEANING OF REGRESSION

    Regression [noun]: the act of going back to a previous place or state; return or reversion.

    ANALOGY

    You can consider regression testing as similar to the moonwalk or backslide, a dance technique (popularized by Michael Jackson) that gives the illusion of the dancer being pulled backwards while attempting to walk forward. Well, many will not agree with this analogy, but what the heck! Let's have some fun!


    Test Plan

    Test Plan Definition, Types, Template, and Guidelines:

    TEST PLAN DEFINITION

    A Software Test Plan is a document describing the testing scope and activities. It is the

    basis for formally testing any software/product in a project.

    ISTQB Definition

    test plan: A document describing the scope, approach, resources and schedule of intended test activities. It identifies amongst others test items, the features to be tested, the testing tasks, who will do each task, degree of tester independence, the test environment, the test design techniques and entry and exit criteria to be used, and the rationale for their choice, and any risks requiring contingency planning. It is a record of the test planning process.

    master test plan: A test plan that typically addresses multiple test levels.

    phase test plan: A test plan that typically addresses one test phase.


    TEST PLAN TYPES

    One can have the following types of test plans:

    Master Test Plan: A single high-level test plan for a project/product that unifies all other test plans.

    Testing Level Specific Test Plans: Plans for each level of testing.

    o Unit Test Plan

    o Integration Test Plan


    o System Test Plan

    o Acceptance Test Plan

    Testing Type Specific Test Plans: Plans for major types of testing like

    Performance Test Plan and Security Test Plan.

    TEST PLAN TEMPLATE

    The format and content of a software test plan vary depending on the processes, standards, and test management tools being implemented. Nevertheless, the following format, which is based on the IEEE standard for software test documentation, provides a summary of what a test plan can/should contain.

    Test Plan Identifier:

    Provide a unique identifier for the document. (Adhere to the Configuration Management System if you have one.)

    Introduction:

    Provide an overview of the test plan.

    Specify the goals/objectives.

    Specify any constraints.

    References:

    List the related documents, with links to them if available, including the following:

    o Project Plan

    o Configuration Management Plan

    Test Items:

    List the test items (software/products) and their versions.

    Features to be Tested:

    List the features of the software/product to be tested.

    Provide references to the Requirements and/or Design specifications of the features to be tested.

    Features Not to Be Tested:

    List the features of the software/product which will not be tested.

    Specify the reasons these features won't be tested.


    Approach:

    Mention the overall approach to testing.

    Specify the testing levels [if it's a Master Test Plan], the testing types, and the testing methods [Manual/Automated; White Box/Black Box/Gray Box].

    Item Pass/Fail Criteria:

    Specify the criteria that will be used to determine whether each test item

    (software/product) has passed or failed testing.

    Suspension Criteria and Resumption Requirements:

    Specify criteria to be used to suspend the testing activity.

    Specify testing activities which must be redone when testing is resumed.

    Test Deliverables:

    List test deliverables, and links to them if available, including the following:

    o Test Plan (this document itself)

    o Test Cases

    o Test Scripts

    o Defect/Enhancement Logs

    o Test Reports

    Test Environment:

    Specify the properties of the test environment: hardware, software, network etc.

    List any testing or related tools.

    Estimate:

    Provide a summary of test estimates (cost or effort) and/or provide a link to the

    detailed estimation.

    Schedule:

    Provide a summary of the schedule, specifying key test milestones, and/or provide a link to the detailed schedule.

    Staffing and Training Needs:

    Specify staffing needs by role and required skills.

    Identify training that is necessary to provide those skills, if not already acquired.


    Responsibilities:

    List the responsibilities of each team/role/individual.

    Risks:

    List the risks that have been identified.

    Specify the mitigation plan and the contingency plan for each risk.

    Assumptions and Dependencies:

    List the assumptions that have been made during the preparation of this plan.

    List the dependencies.

    Approvals:

    Specify the names and roles of all persons who must approve the plan. Provide space for signatures and dates (if the document is to be printed).

    TEST PLAN GUIDELINES

    Make the plan concise. Avoid redundancy and superfluousness. If you think you do not need a section that has been mentioned in the template above, go ahead and delete that section in your test plan.

    Be specific. For example, when you specify an operating system as a property of a

    test environment, mention the OS Edition/Version as well, not just the OS Name.

    Make use of lists and tables wherever possible. Avoid lengthy paragraphs.

    Have the test plan reviewed a number of times prior to baselining it or sending it for approval. The quality of your test plan speaks volumes about the quality of the testing you or your team is going to perform.

    Update the plan as and when necessary. An outdated and unused document stinks and is worse than not having the document in the first place.

    Test Case

    Test Case Definition, Template, Example, Tips

    DEFINITION

    A test case is a set of conditions or variables under which a tester will determine whether a

    system under test satisfies requirements or works correctly.


    The process of developing test cases can also help find problems in the requirements or

    design of an application.

    TEST CASE TEMPLATE

    A test case can have the following elements. Note, however, that normally a test management tool is used by companies and the format is determined by the tool used.

    Test Suite ID: The ID of the test suite to which this test case belongs.

    Test Case ID: The ID of the test case.

    Test Case Summary: The summary / objective of the test case.

    Related Requirement: The ID of the requirement this test case relates/traces to.

    Prerequisites: Any prerequisites or preconditions that must be fulfilled prior to executing the test.

    Test Procedure: Step-by-step procedure to execute the test.

    Test Data: The test data, or links to the test data, that are to be used while conducting the test.

    Expected Result: The expected result of the test.

    Actual Result: The actual result of the test; to be filled in after executing the test.

    Status: Pass or Fail. Other statuses can be Not Executed (if testing is not performed) and Blocked (if testing is blocked).

    Remarks: Any comments on the test case or test execution.

    Created By: The name of the author of the test case.

    Date of Creation: The date of creation of the test case.

    Executed By: The name of the person who executed the test.

    Date of Execution: The date of execution of the test.

    Test Environment: The environment (Hardware/Software/Network) in which the test was executed.

    TEST CASE EXAMPLE / TEST CASE SAMPLE

    Test Suite ID: TS001

    Test Case ID: TC001

    Test Case Summary: To verify that clicking the Generate Coin button generates coins.

    Related Requirement: RS001

    Prerequisites:
    1. User is authorized.
    2. Coin balance is available.

    Test Procedure:
    1. Select the coin denomination in the Denomination field.
    2. Enter the number of coins in the Quantity field.
    3. Click Generate Coin.

    Test Data:
    1. Denominations: 0.05, 0.10, 0.25, 0.50, 1, 2, 5
    2. Quantities: 0, 1, 5, 10, 20

    Expected Result:
    1. A coin of the specified denomination should be produced if the specified quantity is valid (1, 5).
    2. A message "Please enter a valid quantity between 1 and 10" should be displayed if the specified quantity is invalid.

    Actual Result:
    1. If the specified quantity is valid, the result is as expected.
    2. If the specified quantity is invalid, nothing happens; the expected message is not displayed.

    Status: Fail

    Remarks: This is a sample test case.

    Created By: John Doe

    Date of Creation: 01/14/2020

    Executed By: Jane Roe

    Date of Execution: 02/16/2020

    Test Environment: OS: Windows Y; Browser: Chrome N

    WRITING GOOD TEST CASES

    As far as possible, write test cases in such a way that you test only one thing at a time. Do not overlap or complicate test cases. Attempt to make your test cases atomic (a sketch of one such atomic test appears after this list).

    Ensure that all positive scenarios and negative scenarios are covered.

    Language:

    o Write in simple and easy-to-understand language.

    o Use active voice: Do this, do that.

    o Use exact and consistent names (of forms, fields, etc.).

    Characteristics of a good test case:

    o Accurate: Tests exactly what it is meant to test.

    o Economical: No unnecessary steps or words.

    o Traceable: Capable of being traced to requirements.

    o Repeatable: Can be used to perform the test over and over.


    o Reusable: Can be reused if necessary.
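
    As a sketch of the atomicity advice above, the Generate Coin example from the previous section could be automated as two separate tests, one behavior each; generate_coin here is a hypothetical stand-in for the real feature, not code from the text:

        def generate_coin(denomination, quantity):
            # Hypothetical implementation of the feature under test.
            if not 1 <= quantity <= 10:
                return "Please enter a valid quantity between 1 and 10"
            return [denomination] * quantity

        def test_valid_quantity_generates_coins():
            # Tests one thing only: a valid quantity produces coins.
            assert generate_coin(0.25, 5) == [0.25] * 5

        def test_invalid_quantity_shows_message():
            # Tests one thing only: an invalid quantity produces the message.
            assert generate_coin(0.25, 0) == "Please enter a valid quantity between 1 and 10"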

    Test Script

    Test Script Definition, Explanation, Example:

    A Test Script is a set of instructions (written using a scripting/programming language) that

    is performed on a system under test to verify that the system performs as expected. Test

    scripts are used in automated testing.

    Sometimes, a set of instructions (written in a human language), used in manual testing, is also called a Test Script, but a better term for that would be a Test Case.

    Some scripting languages used in automated testing are:

    JavaScript

    Perl

    Python

    Ruby

    Tcl

    Unix Shell Script

    VBScript

    There are also many Test Automation Tools/Frameworks that generate the test scripts for you, without the need for actual coding. Many of these tools have their own scripting languages (some of them based on core scripting languages). For example, Sikuli, a GUI automation tool, uses Sikuli Script, which is based on Python. A test script can be as simple as the one below:

    def sample_test_script(self):
        type("TextA")
        click(ImageButtonA)
        assertExist(ImageResultA)

    A test execution engine and a repository of test scripts (along with test data) are collectively known as a Test Harness.

    Defect

    Software Defect / Bug: Definition, Explanation, Classification, Details:

    DEFINITION

    A Software Defect / Bug is a condition in a software product which does not meet a

    software requirement (as stated in the requirement specifications) or end-user expectations


    (which may not be specified but are reasonable). In other words, a defect is an error in

    coding or logic that causes a program to malfunction or to produce incorrect/unexpected

    results.

    A program that contains a large number of bugs is said to be buggy.

    Reports detailing bugs in software are known as bug reports. (See Defect Report.)

    Applications for tracking bugs are known as bug tracking tools.

    The process of finding the cause of bugs is known as debugging.

    The process of intentionally injecting bugs in a software program, to estimate test coverage by monitoring the detection of those bugs, is known as bebugging.

    Software Testing proves that defects exist but NOT that defects do not exist.

    CLASSIFICATION

    Software Defects/ Bugs are normally classified as per:

    Severity / Impact (See Defect Severity)

    Probability / Visibility (See Defect Probability)

    Priority / Urgency (See Defect Priority)

    Related Dimension of Quality (See Dimensions of Quality)

    Related Module / Component

    Phase Detected

    Phase Injected

    Related Module / Component

    Related Module / Component indicates the module or component of the software where the

    defect was detected. This provides information on which module / component is buggy or

    risky.

    Module/Component A

    Module/Component B

    Module/Component C


    Phase Detected

    Phase Detected indicates the phase in the software development lifecycle where the defect

    was identified.

    Unit Testing

    Integration Testing

    System Testing

    Acceptance Testing

    Phase Injected

    Phase Injected indicates the phase in the software development lifecycle where the bug was introduced. Phase Injected is always earlier in the software development lifecycle than Phase Detected. Phase Injected can be known only after a proper root-cause analysis of the bug.

    Requirements Development

    High Level Design

    Detailed Design

    Coding

    Build/Deployment

    Note that the categorizations above are just guidelines and it is up to the project/organization to decide on what kind of categorization to use. In most cases, the categorization depends on the defect tracking tool that is being used. It is essential that project members agree beforehand on the categorization (and the meaning of each categorization) so as to avoid arguments, conflicts, and unhealthy bickering later.

    NOTE: We prefer the term Defect over the term Bug because Defect is more comprehensive.

    Defect Severity

    Defect/Bug Severity Definition, Classification, Caution:

    Defect Severity or Impact is a classification of a software defect (bug) to indicate the degree of negative impact on the quality of software.

    ISTQB Definition

    severity: The degree of impact that a defect has on the development or operation of

    a component or system.


    DEFECT SEVERITY CLASSIFICATION

    The actual terminologies, and their meaning, can vary depending on people, projects, organizations, or defect tracking tools, but the following is a normally accepted classification.

    Critical: The defect affects critical functionality or critical data. It does not have a

    workaround. Example: Unsuccessful installation, complete failure of a feature.

    Major: The defect affects major functionality or major data. It has a workaround, but the workaround is not obvious and is difficult. Example: A feature is not functional from one module, but the task is doable if 10 complicated indirect steps are followed in another module(s).

    Minor: The defect affects minor functionality or non-critical data. It has an easy

    workaround. Example: A minor feature that is not functional in one module but the

    same task is easily doable from another module.

    Trivial: The defect does not affect functionality or data. It does not even need a workaround. It does not impact productivity or efficiency. It is merely an inconvenience. Example: Petty layout discrepancies, spelling/grammatical errors.

    Severity is also denoted as:

    S1 = Critical

    S2 = Major

    S3 = Minor

    S4 = Trivial

    CAUTION:

    Defect Severity is one of the most common causes of feuds between Testers and Developers. A typical situation is where a Tester classifies the Severity of a Defect as Critical or Major but the Developer refuses to accept that: he/she believes that the defect is of Minor or Trivial severity.

    Though we have provided you some guidelines in this article on how to interpret each level of severity, this is still a very subjective matter and chances are high that one will not agree with the definition of the other. You can however lessen the chances of differing opinions in your project by discussing (and documenting, if necessary) what each level of severity means and by agreeing to at least some standards (substantiating with examples, if necessary).

    ADVICE: Go easy on this touchy defect dimension and good luck!

    For more details on defects, see Defect.

    Defect Probability


    Defect Probability / Visibility: Definition, Categorization, Details:

    Defect Probability (Defect Visibility or Bug Probability or Bug Visibility) indicates the

    likelihood of a user encountering the defect / bug.

    High: Encountered by all or almost all the users of the feature

    Medium: Encountered by about 50% of the users of the feature

    Low: Encountered by very few or no users of the feature

    Defect Probability can also be denoted in percentage (%).

    The measure of Probability/Visibility is with respect to the usage of a feature and not the overall software. Hence, a bug in a rarely used feature can have a high probability if the bug is easily encountered by users of the feature. Similarly, a bug in a widely used feature can have a low probability if the users rarely detect it.

    For more details on defects, see Defect.


    Defect Life Cycle

    Life cycle of a Software Defect/Bug:

    Defect Life Cycle (Bug Life Cycle) is the journey of a defect from its identification to its closure. The Life Cycle varies from organization to organization and is governed by the software testing process the organization or project follows and/or the defect tracking tool being used.

    Nevertheless, the life cycle in general resembles the following:


    Status       Alternative Status

    NEW          -
    ASSIGNED     OPEN
    DEFERRED     -
    DROPPED      REJECTED
    COMPLETED    FIXED, RESOLVED, TEST
    REASSIGNED   REOPENED
    CLOSED       VERIFIED

    Defect Status Explanation

    NEW: Tester finds a defect and posts it with the status NEW. This defect is yet to be studied/approved. The fate of a NEW defect is one of ASSIGNED, DROPPED, and DEFERRED.

    ASSIGNED / OPEN: The Test / Development / Project lead studies the NEW defect and, if it is found to be valid, it is assigned to a member of the Development Team. The assigned Developer's responsibility is now to fix the defect and have it COMPLETED. Sometimes, ASSIGNED and OPEN can be different statuses. In that case, a defect can be open yet unassigned.

    DEFERRED: If a valid NEW or ASSIGNED defect is decided to be fixed in upcoming releases instead of the current release, it is DEFERRED. This defect is ASSIGNED when the time comes.

    DROPPED / REJECTED: The Test / Development / Project lead studies the NEW defect and, if it is found to be invalid, it is DROPPED / REJECTED. Note that the specific reason for this action needs to be given.

    COMPLETED / FIXED / RESOLVED / TEST: The Developer fixes the defect that is ASSIGNED to him or her. Now, the fixed defect needs to be verified by the Test Team, and the Development Team assigns the defect back to the Test Team. A COMPLETED defect is either CLOSED, if fine, or REASSIGNED, if still not fine.

    If a Developer cannot fix a defect, some organizations may offer the following statuses:

    o Won't Fix / Can't Fix: The Developer will not or cannot fix the defect due to some reason.

    o Can't Reproduce: The Developer is unable to reproduce the defect.

    o Need More Information: The Developer needs more information on the defect from the Tester.

    REASSIGNED / REOPENED: If the Tester finds that the fixed defect is in fact not

    fixed or only partially fixed, it is reassigned to the Developer who fixed it. A

    REASSIGNED defect needs to be COMPLETED again.

    CLOSED / VERIFIED: If the Tester / Test Lead finds that the defect is indeed fixed and is no longer of any concern, it is CLOSED / VERIFIED. This is the happy ending.

    Defect Life Cycle Implementation Guidelines

    Make sure the entire team understands what each defect status exactly means. Also,

    make sure the defect life cycle is documented.

    Ensure that each individual clearly understands his/her responsibility as regards

    each defect.

    Ensure that enough detail is entered in each status change. For example, do not simply DROP a defect but provide a reason for doing so.

    If a defect tracking tool is being used, avoid entertaining any defect-related requests without an appropriate change in the status of the defect in the tool. Do not let anybody take shortcuts. Or else, you will never be able to get up-to-date defect metrics for analysis. (See the sketch after this list.)
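
    As an illustration of the last point, a minimal sketch in Python of how a tool might enforce the life cycle; the transition map mirrors the statuses described above, but the code itself is an assumption, not any real tool's API:

        # Allowed transitions between defect statuses, per the life cycle above.
        TRANSITIONS = {
            "NEW": {"ASSIGNED", "DROPPED", "DEFERRED"},
            "ASSIGNED": {"COMPLETED", "DEFERRED"},
            "DEFERRED": {"ASSIGNED"},
            "COMPLETED": {"CLOSED", "REASSIGNED"},
            "REASSIGNED": {"COMPLETED"},
            "DROPPED": set(),
            "CLOSED": set(),
        }

        def change_status(defect, new_status, reason):
            # No shortcuts: every change needs a valid transition and a reason.
            if new_status not in TRANSITIONS[defect["status"]]:
                raise ValueError(f"{defect['status']} -> {new_status} not allowed")
            if not reason:
                raise ValueError("a reason must be given for every status change")
            defect["status"] = new_status
            defect.setdefault("history", []).append((new_status, reason))

        defect = {"id": 1, "status": "NEW"}
        change_status(defect, "ASSIGNED", "valid defect; assigned to a developer")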

    Defect Report

    Defect Report Description, Template, Guidelines:


    After uncovering a defect (bug), testers generate a formal defect report. The purpose of a

    defect report is to state the problem as clearly as possible so that developers can replicate

    the defect easily and fix it.

    DEFECT REPORT TEMPLATE

    In most companies, a defect reporting tool is used and the elements of a report can vary. However, in general, a defect report can consist of the following elements.

    ID: Unique identifier given to the defect. (Usually automated.)

    Project: Project name.

    Product: Product name.

    Release Version: Release version of the product. (e.g. 1.2.3)

    Module: Specific module of the product where the defect was detected.

    Detected Build Version: Build version of the product where the defect was detected. (e.g. 1.2.3.5)

    Summary: Summary of the defect. Keep this clear and concise.

    Description: Detailed description of the defect. Describe as much as possible but without repeating anything or using complex words. Keep it simple but comprehensive.

    Steps to Replicate: Step-by-step description of the way to reproduce the defect. Number the steps.

    Actual Result: The actual result you received when you followed the steps.

    Expected Results: The expected results.

    Attachments: Attach any additional information like screenshots and logs.

    Remarks: Any additional comments on the defect.

    Defect Severity: Severity of the defect. (See Defect Severity.)

    Defect Priority: Priority of the defect. (See Defect Priority.)

    Reported By: The name of the person who reported the defect.

    Assigned To: The name of the person assigned to analyze/fix the defect.

    Status: The status of the defect. (See Defect Life Cycle.)

    Fixed Build Version: Build version of the product where the defect was fixed. (e.g. 1.2.3.9)

    REPORTING DEFECTS EFFECTIVELY

    It is essential that you report defects effectively so that time and effort are not unnecessarily wasted in trying to understand and reproduce the defect. Here are some guidelines:

    Be specific:

    o Specify the exact action: Do not say something like "Select ButtonB". Do you mean "Click ButtonB" or "Press ALT+B" or "Focus on ButtonB and press ENTER"? Of course, if the defect can be arrived at by using all the three ways, it's okay to use a generic term like Select, but bear in mind that you might just get the fix for the "Click ButtonB" scenario. [Note: This might be a highly unlikely example but it is hoped that the message is clear.]

    o In case of multiple paths, mention the exact path you followed: Do not say something like "If you do A and X or B and Y or C and Z, you get D." Understanding all the paths at once will be difficult. Instead, say "Do A and X and you get D." You can, of course, mention elsewhere in the report that D can also be got if you do B and Y or C and Z.

    o Do not use vague pronouns: Do not say something like "In ApplicationA, open X, Y, and Z, and then close it." What does the "it" stand for: Z, Y, X, or ApplicationA?

    Be detailed:

    o Provide more information (not less). In other words, do not be lazy. Developers may or may not use all the information you provide, but they sure do not want to beg you for any information you have missed.

    Be objective:

    o Do not make subjective statements like "This is a lousy application" or "You fixed it real bad."

    o Stick to the facts and avoid the emotions.

    Reproduce the defect:

    o Do not be impatient and file a defect report as soon as you uncover a defect. Replicate it at least once more to be sure. (If you cannot replicate it again, try recalling the exact test condition and keep trying. However, if you cannot replicate it again after many trials, finally submit the report for further investigation, stating that you are unable to reproduce the defect anymore and providing any evidence of the defect you had gathered.)

    Review the report:

    o Do not hit Submit as soon as you write the report. Review it at least once. Remove any typos.

    Defect Age

    Defect Age Definition, Elaboration, Formula, and Uses:

    Defect Age can be measured in terms of any of the following:

    Time

    Phases

    DEFECT AGE (IN TIME)

    Definition


    Defect Age (in Time) is the difference in time between the date a defect is detected and the

    current date (if the defect is still open) or the date the defect was fixed (if the defect is

    already fixed).

    Elaboration

    The defects are confirmed and assigned (not just reported).

    Dropped defects are not counted.

    The difference in time can be calculated in hours or in days.

    "Fixed" means that the defect is verified and closed, not just completed by the developer.

    Defect Age Formula

    Defect Age (in Time) = Defect Fix Date (or Current Date) - Defect Detection Date

    Normally, the average age of all defects is calculated.

    Example

    If a defect was detected on 01/01/2009 10:00:00 AM and closed on 01/04/2009 12:00:00

    PM, the Defect Age is 74 hours.
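
    A quick check of this arithmetic in Python:

        from datetime import datetime

        detected = datetime(2009, 1, 1, 10, 0, 0)
        closed = datetime(2009, 1, 4, 12, 0, 0)

        # Defect Age (in Time) = Defect Fix Date - Defect Detection Date
        age_hours = (closed - detected).total_seconds() / 3600
        print(age_hours)  # 74.0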

    Uses

    For determining the responsiveness of the development/testing team. The lesser the age, the better the responsiveness.

    DEFECT AGE (IN PHASES)

    Definition

    Defect Age (in Phases) is the difference in phases between the defect injection phase and

    the defect detection phase.

    Elaboration

    The defect injection phase is the phase in the software life cycle where the defect was introduced.

    The defect detection phase is the phase in the software life cycle where the defect was identified.

    Defect Age Formula

    Defect Age (in Phases) = Defect Detection Phase - Defect Injection Phase

    Normally, the average age of all defects is calculated.


    Example

    Let's say the software life cycle has the following phases:

    1. Requirements Development
    2. High-Level Design
    3. Detail Design
    4. Coding
    5. Unit Testing
    6. Integration Testing
    7. System Testing
    8. Acceptance Testing

    If a defect is identified in System Testing and the defect was introduced in Requirements Development, the Defect Age is 6.
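
    The same computation in Python, numbering the phases as in the list above:

        PHASES = [
            "Requirements Development", "High-Level Design", "Detail Design",
            "Coding", "Unit Testing", "Integration Testing",
            "System Testing", "Acceptance Testing",
        ]

        # Defect Age (in Phases) = Detection Phase - Injection Phase
        age = PHASES.index("System Testing") - PHASES.index("Requirements Development")
        print(age)  # 6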

    Uses

    For assessing the effectiveness of each phase and of any review/testing activities. The lesser the age, the better the effectiveness.

    Defect Density

    Defect Density Definition, Elaboration, Formula, and Uses:

    DEFINITION

    Defect Density is the number of confirmed defects detected in a software/component during a defined period of development/operation, divided by the size of the software/component.

    ELABORATION

    The defects are:

    confirmed and agreed upon (not just reported).

    Dropped defects are not counted.

    The period might be for one of the following:

    for a duration (say, the first month, the quarter, or the year).

    for each phase of the software life cycle.

    for the whole of the software life cycle.

    The size is measured in one of the following:


    Function Points (FP)

    Source Lines of Code

    DEFECT DENSITY FORMULA
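
    In symbols, restating the definition above:

    Defect Density = Number of Confirmed Defects / Size of the Software/Component

    For instance, a sketch of the computation in Python, assuming size is measured in KLOC (thousand source lines of code) and using made-up numbers:

        def defect_density(confirmed_defects, size_kloc):
            # Dropped defects are excluded before calling this.
            return confirmed_defects / size_kloc

        # e.g. 30 confirmed defects in a 15 KLOC component -> 2.0 defects/KLOC
        print(defect_density(30, 15))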

    USES

    For comparing the relative number of defects in various software components so

    that high-risk components can be identified and resources focused towards them.

    For comparing software/products so that quality of each software/product can be

    quantified and resources focused towards those with low quality.

    Defect Detection Efficiency

    Defect Detection Efficiency Definition, Elaboration, Formula, Unit, Target Value, Uses,

    Example:

    DEFINITION

    Defect Detection Efficiency (DDE) is the number of defects detected during a phase/stage that were injected during that same phase, divided by the total number of defects injected during that phase.

    ELABORATION

    defects:

    o Are confirmed and agreed upon (not just reported).

    o Dropped defects are not counted.

    phase:

    o Can be any phase in the software development life cycle where defects can be injected AND detected. For example, Requirement, Design, and Coding.

    injected:

    o The phase a defect is injected in is identified by analyzing the defects. [For instance, a defect can be detected in the System Testing phase but the cause of the defect can be due to wrong design. Hence, the injected phase for that defect is the Design phase.]

    FORMULA


    DDE = (Number of Defects Injected AND Detected in a Phase / Total Number of

    Defects Injected in that Phase) x 100 %

    UNIT

    Percentage (%)

    TARGET VALUE

    The ultimate target value for Defect Detection Efficiency is 100%, which means that all defects injected during a phase are detected during that same phase and none are transmitted to subsequent phases. [Note: the cost of fixing a defect at a later phase is higher.]

    USES

    For measuring the quality of the processes (process efficiency) within the software development life cycle, by evaluating the degree to which defects introduced during a phase/stage are eliminated before they are transmitted into subsequent phases/stages.

    For identifying the phases in the software development life cycle that are the weakest in terms of quality control and for focusing on them.

    EXAMPLE

    Phase Injected        Defects Injected   Phase-Specific Detection Activity   Defects Detected   Detected Defects Injected in Same Phase   Defect Detection Efficiency

    Requirements          10                 Requirement Review                  4                  4                                         40.00% [= 4 / 10]
    Design                24                 Design Review                       16                 15                                        62.50% [= 15 / 24]
    Coding                155                Code Review                         23                 22                                        14.19% [= 22 / 155]
    Unit Testing          0                  Unit Testing                        25                 -                                         -
    Integration Testing   0                  Integration Testing                 30                 -                                         -
    System Testing        0                  System Testing                      83                 -                                         -
    Acceptance Testing    0                  Acceptance Testing                  5                  -                                         -
    Operation             0                  Operation                           3                  -                                         -

    The DDE of the Requirements phase is 40.00%, which can definitely be bettered. Requirement Review can be strengthened.

    The DDE of the Design phase is 62.50%, which is relatively good but can be bettered.

    The DDE of the Coding phase is only 14.19%, which can be bettered. The DDE for this phase is usually low because most defects get injected during this phase, but one should definitely aim higher by strengthening Code Review. [Note: sometimes, Coding and Unit Testing phases are combined.]

    The other phases, like Integration Testing etc., do not have a DDE because defects do not get injected during these phases. (These values are recomputed in the sketch below.)
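
    As a cross-check, the DDE values in the example can be recomputed in Python directly from the injected and same-phase-detected counts in the table:

        # (defects injected in the phase, detected defects that were injected
        # in that same phase) -- taken from the example table above
        phases = {
            "Requirements": (10, 4),
            "Design": (24, 15),
            "Coding": (155, 22),
        }

        for phase, (injected, detected_same_phase) in phases.items():
            dde = detected_same_phase / injected * 100
            print(f"{phase}: DDE = {dde:.2f}%")  # 40.00, 62.50, 14.19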