Social Hub Test Plan

Upload: rohit-hambar

Posted on 05-Apr-2018


  • 8/2/2019 Social Hub Test Plan


    Social Hub Release 1.0

    Software Test Plan

    Version TP 1.0

    March 19, 2012

    Submitted By:


    Revision History

Date | Version | Description | Author


Table of Contents

1. Test Plan Identifier
2. Introduction
   2.1 Team Interaction
3. Objective and Scope
   3.1 Objective
       3.1.1 Primary Objective
       3.1.2 Secondary Objective
   3.2 Scope
4. Risks
   4.1 Schedule
   4.2 Technical
   4.3 Management
   4.4 Personnel
   4.5 Requirements
5. Features and Functions to Test
6. Features and Functions Not to Test
7. Process Overview
8. Testing Process
9. Testing Strategy
   9.1 Usability Testing
   9.2 Unit Testing
       9.2.1 White Box Testing
       9.2.2 Black Box Testing
   9.3 Iteration/Regression Testing
   9.4 System Testing
       9.4.1 Performance Testing
   9.5 Final Release Testing
   9.6 Testing Completeness Criteria
10. Test Levels
    10.1 Build Tests
        10.1.1 Level 1 - Build Acceptance Tests
        10.1.2 Level 2 - Smoke Tests
        10.1.3 Level 2a - Bug Regression Testing
    10.2 Milestone Tests
        10.2.1 Level 3 - Critical Path Tests
    10.3 Release Tests
        10.3.1 Level 4 - Standard Tests
        10.3.2 Level 5 - Suggested Tests
11. Suspension / Exit Criteria
12. Resumption Criteria
13. Bug Tracking / Bug Process
    13.1 Various Roles in Bug Resolution
14. Test Deliverables
    14.1 Deliverables Matrix
    14.2 Documents
        14.2.1 Test Approach Document
        14.2.2 Test Plan
        14.2.3 Test Schedule
        14.2.4 Test Specifications Requirements Traceability Matrix
15. Remaining Test Tasks
16. Resource & Environment Needs
    16.1 Data Entry Workstations
    16.2 Main Frame
17. Staffing and Training Needs
18. Roles and Responsibilities
19. Schedule
20. Planning Risks and Contingencies
21. Approvals
22. Glossary
23. References


1. Test Plan Identifier

Social Hub Release 1.0 TP 1.0

Note: the structure of this document is primarily based on the IEEE 829-2008 Standard for Software Test Documentation.

2. Introduction

This document is a high-level overview defining the testing strategy for the Social Hub (Release 1.0) project. Its objective is to communicate project-wide quality standards and procedures. It represents a snapshot of the project as of the end of the planning phase.

This document will address the different standards that will apply to the unit, integration, and system testing of the specified application. We will utilize testing criteria under the white box, black box, and system-testing paradigm. This paradigm will include, but is not limited to, the testing criteria, methods, and test cases of the overall design. Throughout the testing process we will be applying the test documentation specifications described in the IEEE Standard 829 for Software Test Documentation.

2.1 Team Interaction

The following describes the level of team interaction necessary to have a successful product.

The Test Team will work closely with the Development Team to achieve high-quality design and user interface specifications based on customer requirements.

The Test Team is responsible for visualizing test cases and raising quality issues and concerns during meetings, to address issues early enough in the development cycle.

The Test Team will work closely with the Development Team to determine whether or not the application meets standards for completeness. If an area is not acceptable for testing, the code complete date will be pushed out, giving the developers additional time to stabilize the area.


3 Objective and Scope

3.1 Objective

3.1.1 Primary Objective

A primary objective of testing application systems is to assure that the system meets the full requirements, including quality requirements (also known as non-functional requirements) and fit metrics for each quality requirement, satisfies the use case scenarios, and maintains the quality of the product. At the end of the project development cycle, the user should find that the project has met or exceeded all of their expectations as detailed in the requirements.

Any changes, additions, or deletions to the Requirements Document, Functional Specification, or Design Specification will be documented and tested at the highest level of quality allowed within the remaining time of the project and within the ability of the test team.

3.1.2 Secondary Objective

The secondary objective of testing application systems will be to identify and expose all issues and associated risks, communicate all known issues to the project team, and ensure that all issues are addressed in an appropriate manner before release. As an objective, this requires careful and methodical testing of the application to first ensure all areas of the system are scrutinized and, consequently, that all issues (bugs) found are dealt with appropriately.

3.2 Scope

The Social Hub (Release 1) Test Plan defines the unit, integration, system, regression, and Client Acceptance testing approach. The test scope includes the following:

Testing of all functional, application performance, security, and use case requirements listed in the Use Case document.

Quality requirements and fit metrics for the project.

End-to-end testing and testing of interfaces of all systems that interact with the project.

4 Risks

4.1 Schedule


The schedule for each phase is very aggressive and could affect testing. A slip in the schedule in one of the other phases could result in a subsequent slip in the test phase. Close project management is crucial to meeting the forecasted completion date.

4.2 Technical

Since this is a new system intended to assist the public, there is no old system that can be used as a fallback in the event of a failure. We will run our tests on the new application without depending on an existing system.

4.3 Management

Management support is required so that when the project falls behind, the test schedule does not get squeezed to make up for the delay. Management can reduce the risk of delays by supporting the test team throughout the testing phase and by assigning people with the required skill set to this project.

4.4 Personnel

Due to the aggressive schedule, it is very important to have experienced testers on this project. Unexpected turnover can impact the schedule. If attrition does happen, all efforts must be made to replace the experienced individual.

4.5 Requirements

The test plan and test schedule are based on the current Requirements Document. Any changes to the requirements could affect the test schedule and will need to be approved.

5 Features and Functions to Test

Testing will consist of several phases; each phase may or may not include testing of any one or more of the following aspects of Social Hub (Release 1.0) (listed alphabetically):

    Accessibility

    Audit

    Availability

Coding standards

Compatibility

    Content

    Functional

    Legal

    Marketing

    Navigation


    Performance

    Reliability

    Scalability

    Security

    Site recognition

    Usability

6 Features and Functions Not to Test

It is the intent that all of the individual test cases contained in each test plan will be performed. However, if time does not permit, some of the low-priority test cases may be dropped.

7 Process Overview

The following represents the overall flow of the testing process:

    1. Identify the requirements to be tested. All test cases shall be derived using the

    current Program Specification.

    2. Identify which particular test(s) will be used to test each module.

    3. Review the test data and test cases to ensure that the unit has been thoroughly

    verified and that the test data and test cases are adequate to verify proper

    operation of the unit.

4. Identify the expected results for each test.

    5. Document the test case configuration, test data, and expected results.

    6. Perform the test(s).

    7. Document the test data, test cases, and test configuration used during the testing

    process. This information shall be submitted via the Unit/System Test Report

    (STR).

    8. Successful unit testing is required before the unit is eligible for component

    integration/system testing.

9. Unsuccessful testing requires a Bug Report Form to be generated. This document shall describe the test case, the problem encountered, its possible cause, and the sequence of events that led to the problem. It shall be used as a basis for later technical analysis.


10. Test documents and reports shall be submitted. Any specifications to be reviewed, revised, or updated shall be handled immediately.

    8 Testing Process

    Figure 1: Test Process Flow

    The diagram above outlines the Test Process approach that will be followed.

    a. Organize Project involves creating a System Test Plan, Schedule & Test

    Approach, and assigning responsibilities.

b. Design/Build System Test involves identifying Test Cycles, Test Cases, Entrance & Exit Criteria, Expected Results, etc. In general, test conditions/expected results will be identified by the Test Team in conjunction with the Development Team. The Test Team will then identify the Test Cases and the data required. The test conditions are derived from the Program Specifications Document.

    c. Design/Build Test Procedures includes setting up procedures such as Error

    Management systems and Status reporting.

    d. Build Test Environment includes requesting/building hardware, software and

    data set-ups.

e. Execute System Tests involves executing the tests identified in the Design/Build Test Procedures step. All results will be documented, and Bug Report Forms will be filled out and given to the Development Team as necessary.

f. Signoff happens when all pre-defined exit criteria have been achieved.

9 Testing Strategy

The following outlines the types of testing that will be done for unit, integration, and system testing. While it includes what will be tested, the specific use cases that determine how the testing is done will be detailed in the Test Design Document. The template that will be used for designing use cases is shown in Figure 2.



Tested By:
Test Type:
Test Case Number:
Test Case Name:
Test Case Description:

Item(s) to be tested:
1.
2.

Specifications
Input:
Expected Output/Result:

Procedural Steps:
1.
2.
3.
4.
5.
6.
7.

    Figure 2: Test Case Template
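For teams that keep test cases in code, the Figure 2 template could be represented as a simple data structure. This sketch, including its field names and the sample values, is an illustration only and not part of the plan.

```python
# Possible in-code representation of the Figure 2 test case template,
# so cases can be stored and reported uniformly. Field names mirror
# the form fields above; the example values are hypothetical.
from dataclasses import dataclass, field

@dataclass
class TestCase:
    tested_by: str
    test_type: str
    number: str
    name: str
    description: str
    items_to_test: list = field(default_factory=list)
    input_spec: str = ""
    expected_output: str = ""
    procedural_steps: list = field(default_factory=list)

# A filled-in instance (values invented for illustration).
tc = TestCase(
    tested_by="QA Team",
    test_type="Unit",
    number="TC-001",
    name="Login validation",
    description="Verify rejection of empty credentials",
)
tc.procedural_steps.append("Open the login page")
```
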


9.1 Usability Testing

The purpose of usability testing is to ensure that the new components and features will function in a manner that is acceptable to the customer.

Development will typically create a non-functioning prototype of the UI components to evaluate the proposed design. Usability testing can be coordinated by the test team, but the actual testing must be performed by non-testers (as close to end-users as possible). The test team will review the findings and provide the project team with its evaluation of the impact these changes will have on the testing process and on the project as a whole.

9.2 Unit Testing (Multiple)

Unit testing is conducted by the developer during the code development process to ensure that proper functionality, freedom from language-specific programming errors such as bad syntax and logic errors, and code coverage have been achieved by each developer, both during coding and in preparation for acceptance into iteration testing. The unit test cases shall be designed to verify the program's correctness.

The following example areas of the project must be unit-tested and signed off before being passed on to regression testing:

Databases, Stored Procedures, Triggers, Tables, and Indexes

NT Services

Database conversion

.OCX, .DLL, .EXE and other binary-formatted executables

9.2.1 White Box Testing

In white box testing, the UI is bypassed. Inputs and outputs are tested directly at the code level and the results are compared against specifications. This form of testing ignores the function of the program under test and focuses only on its code and the structure of that code. Test case designers shall generate cases that not only cause each condition to take on all possible values at least once, but that cause each such condition to be executed at least once. To ensure this happens, we will be applying Branch Testing. Because the functionality of the program is relatively simple, this method will be feasible to apply.
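The branch-testing idea above can be sketched as follows. The function under test is a hypothetical stand-in, not part of Social Hub; the point is that every outcome of every decision point is executed at least once.

```python
# Branch testing sketch: each decision in the function under test is
# driven to both outcomes by the cases below. The function itself is
# invented for illustration.

def classify_post_length(text: str, limit: int = 280) -> str:
    """Toy function with two decision points."""
    if not text:                 # decision 1: empty vs. non-empty
        return "empty"
    if len(text) > limit:        # decision 2: over vs. within the limit
        return "too long"
    return "ok"

# decision 1 true
assert classify_post_length("") == "empty"
# decision 1 false, decision 2 true
assert classify_post_length("x" * 300) == "too long"
# decision 1 false, decision 2 false: all branches now covered
assert classify_post_length("hello") == "ok"
```
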

9.2.2 Black Box Testing

Black box testing typically involves running through every possible input to verify that it results in the right outputs, using the software as an end-user would. It includes performing Equivalence Partitioning and Boundary Value Analysis testing.
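A minimal boundary value analysis sketch: inputs are split into equivalence classes and tested at each boundary. The username rule here (3-20 alphanumeric characters) is invented for illustration and is not taken from the requirements.

```python
# Equivalence partitioning / boundary value analysis sketch against a
# hypothetical validator. One representative per equivalence class,
# plus values at and just beyond each boundary.

def is_valid_username(name: str) -> bool:
    """Assumed rule for illustration: 3-20 alphanumeric characters."""
    return 3 <= len(name) <= 20 and name.isalnum()

cases = {
    "": False,           # empty (invalid class)
    "ab": False,         # just below lower boundary
    "abc": True,         # lower boundary
    "a" * 20: True,      # upper boundary
    "a" * 21: False,     # just above upper boundary
    "user name": False,  # invalid character class
}

for value, expected in cases.items():
    assert is_valid_username(value) == expected
```
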

9.3 Iteration/Regression Testing


During the repeated cycles of identifying bugs and taking receipt of new builds (containing bug-fix code changes), there are several processes which are common to this phase across all projects. These include the various types of tests: functionality, performance, stress, configuration, etc. There is also the process of communicating results from testing and ensuring that new drops/iterations contain stable fixes (regression). The project should plan for a minimum of 2-3 cycles of testing (drops/iterations of new builds).

At each iteration, a debriefing should be held. Specifically, the report must show that, to the best degree achievable during the iteration testing phase, all identified severity 1 and severity 2 bugs have been communicated and addressed. At a minimum, all priority 1 and priority 2 bugs should be resolved prior to entering the beta phase.

9.4 System Testing

The goals of system testing are to detect faults that can only be exposed by testing the entire integrated system or some major part of it. Generally, system testing is mainly concerned with areas such as performance, security, validation, load/stress, and configuration sensitivity, but in our case we'll focus only on function validation and performance. In both cases we will use the black-box method of testing.

9.4.1 Performance Testing

This test will be conducted to evaluate whether the system fulfills its specified performance requirements. It will be done using the black-box testing method and will be performed by:

Storing the maximum amount of data in the file and then trying to insert more, observing how the application performs when it is out of boundary.

Deleting data and checking that the right sorting algorithm is applied to the resulting data or output.

Trying to store new data and checking whether it overwrites the existing ones.

Trying to load the data while it is already loaded.
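The out-of-boundary checks above can be sketched as follows. The DataStore class and its 100-record capacity are hypothetical stand-ins for the application's storage, invented for illustration.

```python
# Capacity test sketch in the spirit of the bullets above: fill a store
# to its assumed maximum, then verify inserts beyond the limit are
# rejected cleanly rather than overwriting existing data.

class DataStore:
    """Hypothetical bounded store standing in for the application's file."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.records = []

    def insert(self, record):
        if len(self.records) >= self.capacity:
            return False          # reject instead of overwriting
        self.records.append(record)
        return True

store = DataStore(capacity=100)
for i in range(100):
    assert store.insert(f"record-{i}")

# Out-of-boundary insert must fail without touching existing data.
assert not store.insert("overflow")
assert len(store.records) == 100
assert store.records[0] == "record-0"
```
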

9.5 Final Release Testing

The testing team, together with end-users, participates in this milestone process as well, by providing confirmation feedback on new issues uncovered and input based on identical or similar issues detected earlier. The intention is to verify that the product is ready for distribution and acceptable to the customer, and to iron out potential operational issues.

Assuming critical bugs are resolved during previous iteration testing, bug fixes throughout the Final Release test cycle will be focused on minor and trivial bugs (severity 3 and 4). Testing will continue its process of verifying the stability of the application through regression testing (existing known bugs, as well as existing test cases).


9.6 Testing Completeness Criteria

Release for production can occur only after the successful completion of the application under test throughout all of the phases and milestones discussed above.

The milestone target is to place the release/app (build) into production after it has been shown that the app has reached a level of stability that meets or exceeds the client expectations as defined in the Requirements and Functional Specifications.

10 Test Levels

Testing of an application can be broken down into three primary categories and several sub-levels. The three primary categories include tests conducted every build (Build Tests), tests conducted every major milestone (Milestone Tests), and tests conducted at least once every project release cycle (Release Tests). The test categories and test levels are defined below:

10.1 Build Tests

10.1.1 Level 1 - Build Acceptance Tests

Build Acceptance Tests should take less than 2-3 hours to complete (15 minutes is typical). These test cases simply ensure that the application can be built and installed successfully. Other related test cases ensure that adopters received the proper Development Release Document plus other build-related information (drop point, etc.). The objective is to determine if further testing is possible. If any Level 1 test case fails, the build is returned to the developers untested.

10.1.2 Level 2 - Smoke Tests

Smoke Tests should be automated and take less than 2-3 hours (20 minutes is typical). These test cases verify the major functionality at a high level.

The objective is to determine if further testing is possible. These test cases should emphasize breadth more than depth. All components should be touched, and every major feature should be tested briefly by the Smoke Test. If any Level 2 test case fails, the build is returned to the developers untested.
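As a sketch of what such an automated, breadth-first smoke suite might look like: each check touches one major feature briefly. The feature functions here are invented stand-ins; a real suite would call the application's actual entry points.

```python
# Breadth-over-depth smoke test sketch. The three "features" below are
# hypothetical stand-ins for the application's major components.

def create_account(name):
    return {"name": name, "active": True}

def create_post(account, text):
    return {"author": account["name"], "text": text}

def search_posts(posts, term):
    return [p for p in posts if term in p["text"]]

def run_smoke_suite():
    # Touch every major component once; fail fast on any breakage.
    account = create_account("alice")
    assert account["active"]
    post = create_post(account, "hello hub")
    assert post["author"] == "alice"
    assert len(search_posts([post], "hub")) == 1
    return "smoke passed"

run_smoke_suite()
```

If any assertion fails, the suite stops immediately, mirroring the rule above that a failed Level 2 case returns the build to the developers untested.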

10.1.3 Level 2a - Bug Regression Testing

Every bug that was Open during the previous build but marked as Fixed, Needs Re-Testing for the current build under test will need to be regressed, or re-tested. Once the smoke test is completed, all resolved bugs need to be regressed. It should take between 5 minutes and 1 hour to regress most bugs.


10.2 Milestone Tests

10.2.1 Level 3 - Critical Path Tests

Critical Path test cases are targeted at features and functionality that the user will see and use every day.

Critical Path test cases must pass by the end of every 2-3 Build Test Cycles. They do not need to be tested every drop, but must be tested at least once per milestone. Thus, the Critical Path test cases must all be executed at least once during the Iteration cycle, and once during the Final Release cycle.

10.3 Release Tests

10.3.1 Level 4 - Standard Tests

These are test cases that need to be run at least once during the entire test cycle for this release. They are run once, not repeated as the test cases in previous levels are. This level covers Functional Testing and Detailed Design Testing (Functional Spec and Design Spec test cases, respectively); these can be tested multiple times for each Milestone Test Cycle (Iteration, Final Release, etc.). Standard test cases usually include Installation, Data, GUI, and other test areas.

10.3.2 Level 5 - Suggested Tests

These are test cases that would be nice to execute, but may be omitted due to time constraints.

Most Performance and Stress test cases are classic examples of Suggested test cases (although some should be considered Standard test cases). Other examples of Suggested test cases include WAN, LAN, Network, and Load testing.

11 Suspension / Exit Criteria

If any defects are found which seriously impact test progress, the QA manager may choose to suspend testing. Criteria that will justify test suspension are:

Hardware/software is not available at the times indicated in the project schedule.

Source code contains one or more critical defects which seriously prevent or limit testing progress.

Assigned test resources are not available when needed by the test team.

12 Resumption Criteria


If testing is suspended, resumption will only occur when the problem(s) that caused the suspension have been resolved. When a critical defect is the cause of the suspension, the fix must be verified by the test department before testing is resumed.

13 Bug Tracking / Bug Process

During testing, the testing team members normally encounter behavior that goes against a specified or implied design requirement in the product. When this happens, we will document and reproduce the bugs for the developers.

Expectations for a bug report:

Keep track of which version of the application the bug was found in.

Determine if the bug has already been written up.

Indicate the steps to reproduce the bug: write enough detail for others looking at the bug to be able to duplicate it; exclude unnecessary steps (i.e., if the access point is irrelevant, be more general in your steps).

Actual results: be specific about your findings.

Expected results: how the product should behave based on the specified or implied requirements.

Implications: how does the defect affect the quality of the product?

The following chart defines the impact levels to be used when entering bugs.

Impact Definitions

1 - Fatal (Test Stopper): You can't access a function and need the bug to be fixed immediately. The defect prevents QA from testing the feature area, sub-area, or functionality of the feature.

2 - Serious (Beta Stopper): A bug that users would experience, such as data corruption, calculation errors, incorrect data, UEs and system crashes on common user scenarios, significant QA risk, and major UI defects.

3 - Minor (Live Release): A bug that must be fixed before the product is officially completed: UEs or crashes, content, and UI and graphic changes required for release.
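If these impact levels were encoded for a bug tracker, one possible sketch follows; the enum values mirror the chart above, while the helper function and its triage rule are hypothetical illustration.

```python
# Encoding the impact chart for a (hypothetical) bug tracker. Only the
# three numeric levels come from the plan; everything else is assumed.
from enum import IntEnum

class Impact(IntEnum):
    FATAL = 1    # Test Stopper: prevents QA from testing the feature area
    SERIOUS = 2  # Beta Stopper: data corruption, crashes on common scenarios
    MINOR = 3    # Live Release: must be fixed before official completion

def blocks_testing(impact):
    """Per the chart, only level 1 defects stop testing outright."""
    return impact == Impact.FATAL

assert blocks_testing(Impact.FATAL)
assert not blocks_testing(Impact.SERIOUS)
```
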

13.1 Various Roles in Bug Resolution

Author: the person who wrote the bug; this will be someone on the QA team.

Resolver: normally an Engineer assigned to a specific area of the application.

Verifier: normally a QA Engineer responsible for testing the fix and closing the bug.

14 Test Deliverables


Testing will provide specific deliverables during the project. These deliverables fall into three basic categories: Documents, Test Cases / Bug Write-ups, and Reports. Here is a diagram indicating the dependencies of the various deliverables:

As the diagram shows, there is a progression from one deliverable to the next. Each deliverable has its own dependencies, without which it is not possible to fully complete the deliverable.

Program function specifications

Program source code

Test plan document: this document should address testing objectives, criteria, standards, schedule and assignments, and testing tools.

Unit Testing Plan

Integration Plan

System Testing Plan

Test Design Document

Unit white-box test design: covers white-box testing criteria, methods, and test cases

(Deliverables dependency diagram; nodes: Requirements [PM]; Project Plan [PM]; Functional Spec [PM]; Detailed Design [Dev]; Test Plan; Test Spec/Outline; Test Cases; Test Case Results; TC Coverage Reports; Bugs; Bug Results; Bug Reports; Weekly Status Reports.)


Unit black-box test design: covers black-box testing criteria, methods, and test cases

System test design: covers system test criteria, methods, test cases, and scripts

Test report document

Unit white-box test report: covers unit white-box test results, problems, summary, and analysis

Unit black-box test report: covers unit black-box test results, problems, summary, and analysis

System test report: covers system test results, problems, summary, and analysis

15 Remaining Test Tasks

Task | Assigned To | Status
Create Acceptance Test Plan | TM, PM, Client |
Create System/Integration Test Plan | TM, PM, Dev. |
Define Unit Test rules and procedures | TM, PM, Dev. |
Define turnover procedures for each level | TM, Dev |
Verify prototypes of Screens | Dev, Client, TM |
Verify prototypes of Reports | Dev, Client, TM |

16 Test Environments

There are essentially two parts to the Social Hub application in production: the client side, which Social Hub has little control over because the application is going to be accessed over the Internet by members of the general public, and the server side, which (initially) will be comprised of a single cluster of servers residing at Social Hub's corporate center.

Available Client-side Environment

Social Hub will utilize a set of desktop and laptop machines, which currently consists of the following machine specifications:

Low-end PC

Processor: Pentium 4 (2.4 GHz)


RAM: 512 MB

Hard Disk Drive: 40 GB

Monitor: 17" color screen (default 1024 x 768, 16-bit color)

56.6 kbps modem or 100 Mbps Ethernet Internet connection, typically running Windows XP or Windows 2000 Professional

The following Windows-based browsers are readily available for installation on any of the client platforms:

Internet Explorer from Microsoft Corporation

Firefox (also called Mozilla Firefox) from Mozilla Corporation

Chrome from Google

Opera from Opera Software ASA

Netscape Navigator from Netscape Communications Corporation (now part of AOL)

SeaMonkey from the Mozilla Foundation

Maxthon Browser from Maxthon

Browser settings (cache size, number of connections, font selection, etc.) were left unchanged where possible, i.e. the installation defaults were used for all testing. No optional plug-ins will be installed.

Available Server-side Environments

In addition to the cluster of servers used for production, two functionally exact replicas of the server-side production environment will be created and maintained. The development team will use one replica for unit and integration testing, while the second replica will be reserved for system testing by the testing team. Prior to a new release being put into production, the Web application will be moved to a staging area on the production system where a final series of acceptance tests can be performed.

The replica systems will be functionally the same as the production environment, e.g. the same system software installed in the same order, with the same installation options selected, etc. Due to budget constraints, however, the replicas will be scaled-down versions of the production system (e.g. instead of several Web servers, there will only be one), and in the case of the unit/integration replica, the hardware specifications may not be exactly the same.

In addition, several network file and print servers will be made available (on a limited basis) for the testing team to use as load generators during performance tests.

Available Testing Tools

The following 3rd-party free tools were available to scan the Web site and provide feedback:

Bobby (accessibility, performance & HTML syntax) - cast.org

Freeappraisal (performance from 35 different cities) - keynote.com

Scrubby (meta tag analyzer) - scrubtheweb.com

Site analysis (search engine ratings) - site-see.com

Stylet (style sheet validation) - microsoft.com


Tune up (performance & style checker) & GIF lube (GIF analyzer) - websitegarage.com

Websat (usability) - nist.gov

Web metasearch (search engine ratings) - dogpile.com

Webstone (performance benchmarking tool) - mindcraft.com

Windiff (file comparison) - microsoft.com

W3C validation service (HTML and CSS syntax) - w3c.org

In addition, the following commercial tools were available:

Aetgweb (pair-wise combinations) from Telcordia/Argreenhouse

Astra Site Manager (linkage) from Mercury Interactive

eTester suite (capture/replay, linkage & performance) from RSW - 100 virtual user license

FrontPage (spell checking) from Microsoft

Ghost (software configuration) from Symantec

KeyReadiness (large-scale performance testing) from Keynote Systems

LinkBot Enterprise (link checking, HTML compliance and performance estimates) from Watchfire

Prophecy (large-scale performance testing) from Envive

WebLoad (performance) from Radview - 1000 virtual user license

Word (readability estimates) from Microsoft

A manual digital stopwatch was also available.

17 Staffing and Training Needs

If a separate test person is not available, the project manager/test manager will assume this role.

In order to provide complete and proper testing, the following areas need to be addressed in terms of training, if the staff are not already experienced:

General development & testing techniques

Web site development lifecycle methodology

All development and automated testing tools that they may be required to use

The administration staff will require training on the new screens and reports.

18 Roles and Responsibilities

Testing Team

Test Manager

Ensure Phase 1 is delivered to schedule and quality

Produce high-level and detailed test conditions

Produce expected results

Report progress at regular status reporting meetings

Co-ordinate review and signoff of test conditions

Manage individual test cycles and resolve tester queries/problems


Tester

Identify test data

Execute test conditions and mark off results

Prepare software error reports

Administer the error measurement system

Ensure test system outages/problems are reported immediately and followed up

Ensure entrance criteria are achieved prior to system test start

Ensure exit criteria are achieved prior to system test signoff

19 Schedule

This section contains the overall project schedule. It discusses the phases and key milestones as they relate to quality assurance, and the testing goals and standards that we'd like to achieve for each phase of testing that will be deployed, e.g., Usability Testing, Code Complete Acceptance, Beta Testing, Integration Testing, Regression Testing, System Testing. The key dates for overall development and testing are outlined below.

    Milestone: Planning Phase
    End Date: 00/00/00
    Notes: At this milestone, the high-level planning should be completed. Some of the deliverables are: Project Plan, Program function specifications.
    QA Deliverables/Roles: High-level test planning activities, which include preliminary development of the Master QA Plan (this document) and the QA schedule.

    Milestone: Design Phase
    Notes: This is a feature-driven milestone where the requirements and initiatives are further defined and solutions are finalized. The deliverables for this phase are Program source code and other design-related documents.
    QA Deliverables/Roles: Development and Test engineers participate actively in feature design by inspecting and reviewing the requirements and design documents. As the design documents are completed, the test engineers are encouraged to start working on the Test Plan document and test design planning.

    Milestone: Code Complete - Infrastructure
    Notes: This milestone is when all infrastructure development and functions should be complete. The testing team should have performed unit & integration testing before checking the code into any build.
    QA Deliverables/Roles: The Test Engineers should have completed, or be in the final stages of, their preliminary Infrastructure Test Plan, test cases and other QA documents related to test execution for each feature or component, such as test scenarios, expected results, data sets, test procedures, scripts and applicable testing tools.

    Milestone: Code Complete - Function
    Notes: This milestone includes unit testing and code review of each function component prior to checking the code into the test phase. The deliverables include the system testing specification, unit testing specifications and integration plan.
    QA Deliverables/Roles: The Test Engineers should have provided the Code Complete Assessment Test to the Development Engineer one week prior to the Code Complete Review date. The Test Engineers should also have completed, or be in the final stages of, their preliminary White Box Test Plan, test cases and other QA documents related to test execution for each feature or component, such as test scenarios, expected results, data sets, test procedures, scripts and applicable testing tools.

    Milestone: Beta Ready
    Notes: This milestone represents that all features are ready for Beta release.
    QA Deliverables/Roles: 2 weeks regression of Binary Tree features to Beta and preparation for Beta Shutdown.

    Milestone: Feature Complete
    Notes: This phase allows for feature clean-up to verify remaining bug fixes and regression testing around the bug fixes. This milestone indicates that the feature is ready for Beta regression.
    QA Deliverables/Roles: All bugs verified and QA documentation finalized. The Test Engineers should assess that Binary Tree features are ready for Beta regression and have started their preliminary Test Summary Reports.

    Milestone: Regression Test
    Notes: This milestone represents that the project is ready for Regression Testing.
    QA Deliverables/Roles: Complete regression test execution of the complete system and update Test Summary Reports for regression.

    Milestone: Ship/Live
    End Date: 04/03/03
    Notes: Product is out.
    QA Deliverables/Roles: Any unfinished testing documents should be complete.

    20 Planning Risks and Contingencies

    The following seeks to identify some of the more likely project risks and propose possible contingencies:

    Web site becomes unavailable - Testing will be delayed until this situation is rectified. May need to recruit more staff to do the testing or reduce the number of test cases.

    Web testing software is not available/does not work (e.g. the Web site uses cookies and the tool cannot handle cookies) - This will delay the introduction of automated testing and result in more manual testing. May need to recruit more staff to do the testing or reduce the number of test cases.

    Testing staff shortages/unavailability - Many of the test staff are part-time and have other higher priorities; in addition, no slack time is allocated for illness or vacation. May need to recruit more staff to do the testing or reduce the number of test cases.

    A large number of defects/incidents makes it functionally impossible to run all of the test cases.


    Not enough time to complete all test cases - If time cannot be extended, individual test cases will be skipped, starting with the lowest priority.
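The last contingency (cut lowest-priority cases first when time runs short) can be expressed as a simple selection rule. This is an illustrative sketch only - the case names, priority values and per-case hour estimates are hypothetical:

```python
# Hypothetical sketch of the contingency above: when time runs short, drop
# test cases starting from the lowest priority (larger number = lower priority).

def select_cases(cases, hours_available):
    """cases: list of (name, priority, estimated_hours); returns the subset
    that fits the time budget, keeping higher-priority cases first."""
    chosen, used = [], 0.0
    for name, priority, hours in sorted(cases, key=lambda c: c[1]):
        if used + hours <= hours_available:
            chosen.append(name)
            used += hours
    return chosen

suite = [  # placeholder test cases, not from this plan
    ("login_smoke", 1, 2.0),
    ("profile_edit", 2, 3.0),
    ("report_layout", 3, 4.0),
]
print(select_cases(suite, 5.0))  # → ['login_smoke', 'profile_edit']
```

Here priority 1 means highest; the rule simply fills the available hours in priority order, which matches skipping the lowest-priority cases first.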

    21 Approvals

    Name (Print) Signature Date

    1.

    2.

    3.

    4.

    5.


    22 Glossary

    TERM/ACRONYM    DEFINITION

    API - Application Program Interface

    BCP - Business Continuity Plan

    Branch testing - A white box test design technique in which test cases are designed to execute branches.

    CAT - Client Acceptance Testing

    Coverage - The degree, expressed as a percentage, to which a specified coverage item has been exercised by a test suite.

    End-to-End Testing - Tests user scenarios and various path conditions by verifying that the system runs and performs tasks accurately with the same set of data from beginning to end, as intended.

    N/A - Not Applicable

    QA - Quality Assurance

    Severity - The degree of impact that a defect has on the development or operation of a component or system.

    SME - Subject Matter Expert

    SOP - Standard Operating Procedure

    STR - System Test Report

    TBD - To Be Determined

    TSR - Test Summary Report

    23 References

    Pressman, Roger S. Software Engineering: A Practitioner's Approach. Fifth edition. The McGraw-Hill Companies, Inc.

    Kaner, C., Falk, J., Nguyen, H.-Q. Testing Computer Software. Wiley Computer Publishing, 1999.
