Testing SAP R/3: A Manager’s Step-by-Step Guide
JOSE FAJARDO
ELFRIEDE DUSTIN
John Wiley & Sons, Inc.
This book is printed on acid-free paper.
Copyright © 2007 by John Wiley & Sons, Inc. All rights reserved.
Wiley Bicentennial Logo: Richard J. Pacifico
Published by John Wiley & Sons, Inc., Hoboken, New Jersey.
Published simultaneously in Canada.
No part of this publication may be reproduced, stored in a retrieval system, or transmitted in any form or by any means, electronic, mechanical, photocopying, recording, scanning, or otherwise, except as permitted under Section 107 or 108 of the 1976 United States Copyright Act, without either the prior written permission of the Publisher, or authorization through payment of the appropriate per-copy fee to the Copyright Clearance Center, Inc., 222 Rosewood Drive, Danvers, MA 01923, 978-750-8400, fax 978-646-8600, or on the Web at www.copyright.com. Requests to the Publisher for permission should be addressed to the Permissions Department, John Wiley & Sons, Inc., 111 River Street, Hoboken, NJ 07030, 201-748-6011, fax 201-748-6008, or online at http://www.wiley.com/go/permissions.

Limit of Liability/Disclaimer of Warranty: While the publisher and author have used their best efforts in preparing this book, they make no representations or warranties with respect to the accuracy or completeness of the contents of this book and specifically disclaim any implied warranties of merchantability or fitness for a particular purpose. No warranty may be created or extended by sales representatives or written sales materials. The advice and strategies contained herein may not be suitable for your situation. You should consult with a professional where appropriate. Neither the publisher nor author shall be liable for any loss of profit or any other commercial damages, including but not limited to special, incidental, consequential, or other damages.

For general information on our other products and services, or technical support, please contact our Customer Care Department within the United States at 800-762-2974, outside the United States at 317-572-3993 or fax 317-572-4002.

Wiley also publishes its books in a variety of electronic formats. Some content that appears in print may not be available in electronic books.
For more information about Wiley products, visit our Web site at www.wiley.com.

Fajardo, Jose, 1974–
Testing SAP R/3 : a manager’s step-by-step guide / Jose Fajardo, Elfriede Dustin.
p. cm.
Includes index.
ISBN: 978-0-470-05573-1 (cloth : acid-free paper)
1. SAP R/3—Testing. 2. Business enterprises—Computer programs—Testing. 3. Client/server computing. I. Dustin, Elfriede. II. Title.
HF5548.4.R2F34 2007
650.0285’53—dc22
2006036651
Printed in the United States of America
10 9 8 7 6 5 4 3 2 1
This book is dedicated to the loving memory of my mother, Maria T. Arregoces
Contents
CHAPTER 1  Introduction
CHAPTER 2  Status Quo Review of Existing Testing Practices
CHAPTER 3  Requirements
CHAPTER 4  Estimating Testing Costs
CHAPTER 5  Functional Test Automation
CHAPTER 6  Test Tool Review and Usage
CHAPTER 7  Quality Assurance Standards
CHAPTER 8  Assembling the QA/Test Team
CHAPTER 9  Establishing and Maintaining Testing Resources
CHAPTER 10  Planning and Construction of Test Cases
CHAPTER 11  Capacity Testing
CHAPTER 12  Test Execution
CHAPTER 13  Management of Test Results and Defects
CHAPTER 14  Testing in an SAP Production Environment
CHAPTER 15  Outsourcing the SAP Testing Effort
APPENDIX A  Advanced Testing Concepts
APPENDIX B  Case Study: Accelerating SAP Testing
Index
About the Authors
Jose Fajardo is a former SAP consultant with PricewaterhouseCoopers LLP and Computer Sciences Corporation (CSC). He has worked as an independent SAP consultant for Fortune 100 companies utilizing automated testing strategies, in particular while implementing SAP R/3. His competency in automated test tools includes products from Mercury Interactive as well as Rational Corporation.

With subject matter expertise in validating and managing testing of ERP systems, Fajardo has participated in verification of customized implementations of SAP R/3, SAP R/3 bolt-ons, custom applications, and non-SAP applications interfacing with SAP R/3. Fajardo has been instrumental in guiding and mentoring Fortune 100 companies in the development of testing strategies and methodologies, creating testing standards, documenting Test Readiness Review checklists, documenting entrance/exit/release criteria, implementing testing best practices, creating quality assurance (QA) teams and QA processes, mentoring junior programmers, staffing testing efforts with resources, performing verification and validation activities, managing outsourcing agreements, preparing for audits of testing results, managing the execution of test scripts, and implementing automated testing strategies.

Fajardo has published several articles on automation strategy, performance testing, regression testing, functional testing, implementing testing best practices, and testing standards and procedures.
Elfriede Dustin is author of Effective Software Testing and lead author of Automated Software Testing and Quality Web Systems, books that have been translated into various languages and have sold tens of thousands of copies throughout the world. Her latest book, The Art of Software Security Testing, coauthored with security experts Chris Wysopal, Lucas Nelson, and Dino Dai Zovi, was published by Symantec Press in November 2006. Dustin has also authored various white papers on the topic of software testing, teaches various testing tutorials, and is a frequent speaker at various software testing conferences. She is the cochair of VERIFY, an international software testing conference held in the Washington, D.C., area. In support of software test efforts, Dustin has been responsible for implementing automated testing, or has performed as the lead consultant/manager guiding the implementation of automated and manual software testing efforts. Dustin has a BS in computer science with over 15 years of information technology experience and currently works as an independent consultant in the Washington, D.C., area. You can reach her via her Web site at www.effectivesoftwaretesting.com or at [email protected].
About the Contributors
Lorrie Collins is a national solutions director for Spherion Corporation. She leads the Software Quality Management Practice, which provides Quality Assurance, Validation and Testing, and Test Automation services to help clients maximize their technology investments. Collins is certified in information technology (IT) project management and has over 20 years of IT experience across numerous industries, technical platforms, and environments.

Bob Koche began his career developing software and evolved into a writer and speaker on software development practices. As a software entrepreneur he is associated with a number of category firsts, including the first SQL database on a PC (acquired by IBM), the first Web QA tool (acquired by Microsoft), and now the first SAP-centric test automation tool.

Linda G. Hayes is the CTO of WorkSoft, Inc., developer of next-generation test automation solutions. She is the founder of three software companies, including AutoTester, the first PC-based test automation tool. Hayes holds degrees in accounting, tax, and law and is a frequent industry speaker and award-winning author on software quality. She has been named one of Fortune magazine’s People to Watch and one of the Top 40 under 40 by the Dallas Business Journal. She is a columnist for ComputerWorld, Datamation, and StickyMinds.com; authored the Automated Testing Handbook; and coedited Dare to Be Excellent with Alka Jarvis on best practices in the software industry. Her article “Quality Is Everyone’s Business” won a Most Significant Contribution award from the Quality Assurance Institute and was published as part of the Auerbach Systems Development Handbook. You can contact her at [email protected].
Preface
Planning, preparing, scheduling, and executing SAP test cycles is a time-consuming and resource-intensive endeavor that requires participation from several project members. SAP projects are prone to have informal, ad hoc test approaches that decrease the stability of the production environment and tend to increase the cost of ownership for the SAP system. Many SAP project and test managers cannot provide answers for questions such as how many requirements have testing coverage, the exit criteria for a test phase, the audit trails for test results, the dependencies and correct sequence for executing test cases, or the cost figures for a previously executed test cycle. Fortunately, through established testing techniques predicated on guidelines and methodologies (i.e., the ASAP SAP Roadmap methodology, IBM’s Ascendant methodology, and Deloitte’s ThreadManager methodology), enforcement of standards, application of objective testing criteria, test case automation, implementation of a requirements traceability matrix (RTM), independent testing, and formation of centralized test teams, many of the testing risks that plague existing or initial SAP programs can be significantly reduced.

This book is written for SAP managers, SAP consultants, SAP testers, and team leaders who are tasked with supporting, managing, implementing, and monitoring testing activities related to test planning, test design, test automation, test tool management, execution of test cases, reporting of test results, test outsourcing, planning a budget for testing activities, enforcing testing standards, and resolving defects.
The book revisits testing standards and techniques supported by the Software Engineering Institute, the Institute of Electrical and Electronics Engineers, and the Unified Modeling Language (UML), which have dominated the landscape for producing software-based applications. The book provides the reader with information for incorporating proven software testing standards and techniques when planning a major SAP testing cycle for either an SAP upgrade or an initial SAP installation. The methods and techniques described in this book offer the reader a different (not new) way to look at SAP testing deliverables.

The approaches and methodologies advocated in the book for SAP testing are recommended for teams of people involved in different aspects of SAP testing. Typically, in SAP implementations there is much confusion and obfuscation in determining which project resources are responsible for testing tasks, and test results in particular, for testing cycles such as performance and user acceptance testing, and the book addresses this prevalent problem. A major SAP test cycle such as an integration, regression, or performance test may require the expertise and contributions of subject matter experts (SMEs), business analysts, system architects, integration managers, functional team leaders, and Basis team, test team, and development (advanced business application programming [ABAP]) team members. The book aims to logically define and identify the roles and responsibilities for all expected stakeholders affiliated with an SAP test cycle and makes arguments in favor of adopting centralized test teams and enforcing quality assurance (QA) standards.

The book also provides much needed industry guidance for companies that want to establish an automated testing framework, construct an RTM, adhere to industry regulations (i.e., Sarbanes-Oxley), participate in an outsourced agreement for SAP testing, reduce and compress the testing schedule with the concept of orthogonal arrays, and learn about recent trends in SAP testing vis-à-vis the concept of SAP accelerators.

The methodology presented in these pages is not offered as a panacea for SAP testing. It is simply a reiteration of powerful, straightforward, proven testing techniques and approaches for every aspect of SAP testing.
CHAPTER 1  Introduction

SAP testing is complex, difficult, and esoteric. The perils and risks to the intended SAP production system are maximized when the project team does not have enough skilled testers and a robust approach for testing the system, tracing the entire system design and architecture to testable requirements, and eliminating defects. A comprehensive plan or approach for testing an SAP system, even for a small project, includes assembling a test team, acquisition of test tools, establishment of a test lab, construction of a test calendar, monitoring of testing activities, training for testing participants, and completion of a test cycle based on predefined criteria.

A test plan for SAP includes the approach; description; roles and responsibilities for conducting usability testing; white box and black box testing for interfaces; negative testing; security testing for roles; integration, scenario, unit, and performance testing; user acceptance testing; and regression testing. Few companies implementing SAP are structured or organized to address all these various types of testing.

This book offers guidance and assistance to project managers, test managers, and configuration leaders who want to establish a testing framework from the ground up based on industry-established principles for evaluating requirements, providing coverage for requirements, diagramming processes, test planning, estimating testing budgets and resource allocation, establishing an automation framework, and reviewing case studies from large SAP implementations.
WHY THIS BOOK?
SAP is by far the world’s largest enterprise resource planning (ERP) application, and this position is not likely to be relinquished anytime soon. SAP traces its origins to its mainframe-based R/2 version from the 1970s. Even though SAP and other vendors have developed implementation methodologies for SAP that emphasize the activities needed to design the system and to go live, many of these methodologies fall short of addressing robust testing practices for SAP, particularly in light of the prominence of automated test tools and the outsourcing of testing activities.

The roles and activities for SAP configuration, SAP advanced business application programming (ABAP) development, and SAP Basis are for the most part well understood and clearly defined. In contrast, when it comes to SAP testing, the roles and activities are a mystery and subject to interpretation. It is possible for a person who is considering becoming an SAP consultant to take courses on how to configure a particular SAP module, how to build security roles, or how to develop ABAP programming code, but not on how to test SAP R/3. Functional testing of SAP is often left to individuals without a testing background whose main tasks are to configure the system. Other types of SAP testing, such as technical and system testing for system performance and backup and recovery, are also left to individuals without a testing background whose main responsibilities are to design and maintain the technical architecture for SAP.

This book was written to help SAP projects address weaknesses in the SAP testing life cycle, define testing and quality assurance activities, and overcome misconceptions about SAP testing. The book contains contributions from industry leaders in the fields of SAP testing and test tool automation, as well as templates to help project managers and test managers establish immediate testing best practices. It covers all aspects of SAP testing from preparation to resolution of defects.
WHAT DOES THIS BOOK OFFER ME?
This book is written from the point of view of the company or entity requesting SAP services from a systems integrator. The book emphasizes testing practices predicated on the following principles:

■ Building a system with quality as opposed to merely testing for quality.
■ Adhering to testing practices from SAP’s ASAP Roadmap methodology, IBM’s Ascendant™ methodology and guide for implementing SAP, Institute of Electrical and Electronics Engineers (IEEE) standards, and the Capability Maturity Model (CMM) from the Software Engineering Institute (SEI).
■ Drafting of clearly stated requirements that can be validated with test cases.
■ Construction of a requirements traceability matrix (RTM) to verify all in-scope requirements.
■ Supporting each testing effort with exit, entrance, and suspension criteria.
■ Validation of production support changes through thorough regression testing.
■ Subjecting all test results to third-party verification and approval (sign-offs) from appropriate project stakeholders.
■ Diagramming processes and requirements with Unified Modeling Language (UML) notation.
■ Documentation of and adherence to test plans and test strategies that are subjected to version control.
■ Early formation of a test team that participates in design workshops during the blueprint phase, change control board meetings, and the “go/no go” decision.
■ Inclusion and enforcement of quality assurance (QA) standards.
■ Peer reviews and inspections for testable requirements.
■ Independent verification and validation of system design.
■ Independent “hands-on” testing with participation from end users who execute test cases.
■ Functional testing with manual testing and automated test tools.
■ Maintaining testing deliverables such as test cases, test results, and testing defects in a test management tool that includes security features and audit trails.
■ Compliance with company and industry audits.
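The RTM named in the principles above is, at its core, a mapping from each in-scope requirement to the test cases that cover it, so coverage gaps become queryable rather than anecdotal. The following is a minimal sketch; the requirement IDs, descriptions, and test case names are hypothetical illustrations, not examples from the book:

```python
# Minimal requirements traceability matrix (RTM) sketch.
# Requirement IDs, descriptions, and test case IDs are illustrative only.
from dataclasses import dataclass, field


@dataclass
class Requirement:
    req_id: str
    description: str
    test_cases: list = field(default_factory=list)  # IDs of covering test cases


class RTM:
    def __init__(self):
        self.requirements = {}

    def add_requirement(self, req_id, description):
        self.requirements[req_id] = Requirement(req_id, description)

    def link_test_case(self, req_id, test_case_id):
        # Trace a test case back to the requirement it verifies.
        self.requirements[req_id].test_cases.append(test_case_id)

    def uncovered(self):
        """Requirements with no covering test case -- a coverage gap."""
        return [r.req_id for r in self.requirements.values() if not r.test_cases]


rtm = RTM()
rtm.add_requirement("FIN-001", "Post a vendor invoice")
rtm.add_requirement("SEC-001", "Restrict invoice posting to the AP role")
rtm.link_test_case("FIN-001", "TC-017")
print(rtm.uncovered())  # SEC-001 still lacks a covering test case
```

An RTM of this shape, kept under version control alongside the test plan, is what lets a manager answer "how many requirements have testing coverage" objectively.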
The book is a primer for testing SAP from unit testing through regression testing for production support. The intended audience for the book includes test managers, project managers, integration leaders, test team members, QA personnel, and auditing groups. The book provides templates, case studies, and criteria to establish a framework for SAP testing, establish a test case automation strategy, mentor junior testers, and identify tasks and activities that the SAP system integrator is expected to accomplish for the client requesting SAP services. Specifically, this book will address the following activities that are prevalent yet poorly conducted at most SAP projects:
■ Identifying the testable requirements.
■ How to ensure that requirements are testable, unambiguous, clearly defined, necessary, prioritized, and consistent with corporate policies or industry standards.
■ How to retain testing documentation to support audits (i.e., Section 404 of Sarbanes-Oxley).
■ Defining the scope of testing.
■ Creating a test plan and test strategy.
■ Developing a strategy for acquiring automated test tools and for automating processes that includes verification at the graphical user interface (GUI) and back-end layers.
■ How to create a library of automated test cases for production regression testing that can be executed unattended.
■ How to define and verify service-level agreements (SLAs).
■ Boundary testing for negative and positive testing.
■ How to apply quality standards from the SEI and the Rational Unified Process (RUP).
■ Creating robust flow process diagrams that include narratives.
■ Techniques for estimating the testing schedule, the duration of testing activities, and the number of testers needed.
■ How to compress or reduce the necessary number of test cases with the technique of orthogonal arrays (OATS) for projects implementing SAP variant configuration or projects that have multiple variations for end-to-end processes and are time constrained.
■ Defining criteria for deciding which processes or scenarios are suitable for testing.
■ How to estimate testing costs and budget.
■ Creating an RTM to ensure that coverage has been provided for all types of testable requirements (i.e., functional, security, performance, archiving, technical, and development requirements).
■ Creating a test schedule and a test calendar.
■ Defining objective criteria to commence, finish, and stop testing.
■ Managing, categorizing, and resolving test results.
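The orthogonal-array compression named in the list above can be illustrated with a deliberately tiny sketch: for three process factors with two variants each, exhaustive testing needs 8 cases, while an L4(2^3) orthogonal array covers every pairwise combination of variants in 4. The factor names are invented for illustration; real SAP variant configuration involves far more factors, which is where the compression becomes dramatic:

```python
# Sketch of test-case compression with an orthogonal array (OATS).
# Factor names and variants are hypothetical illustrations.
from itertools import combinations, product

factors = {
    "order_type": ["standard", "rush"],
    "payment": ["invoice", "credit_card"],
    "shipping": ["domestic", "export"],
}

full = list(product(*factors.values()))  # exhaustive: 2 * 2 * 2 = 8 cases

# L4(2^3) orthogonal array: 4 runs whose rows cover every pair of
# factor levels at least once (entries index each factor's variant list).
l4 = [(0, 0, 0), (0, 1, 1), (1, 0, 1), (1, 1, 0)]
runs = [tuple(v[i] for v, i in zip(factors.values(), row)) for row in l4]

# Verify: for any two factors, all four level pairs appear in the runs.
for a, b in combinations(range(3), 2):
    assert len({(r[a], r[b]) for r in runs}) == 4

print(len(full), "exhaustive cases vs", len(runs), "orthogonal-array cases")
```

Pairwise coverage is the rationale: most defects are triggered by a single variant or the interaction of two, so covering all pairs catches the bulk of them at a fraction of the execution cost.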
CHALLENGES IN SAP TESTING
Established commercial implementation methodologies for SAP typically fail to address how requirements will be met, the criteria for testing, the framework for utilizing test tools, the necessary resources for testing, estimating testing budgets, specific testing roles and responsibilities, and how test defects will be managed and resolved. Furthermore, many factors hamper successful testing at most SAP projects, such as unclear requirements, inability to trace the system design to requirements, missing dedicated test teams, waiving defects without appropriate workarounds, and inadequate involvement of needed participants for testing, such as subject matter experts (SMEs) for capturing requirements and end users for user acceptance testing. Despite these testing challenges, many SAP project managers perceive that their SAP implementation is successful or “fine” even when the production help desk team is flooded with complaints that the system does not perform necessary functionality, the production system does not meet intended performance SLAs, security roles are not defined and implemented correctly, the system produces short dumps because it cannot perform exception handling or not enough negative testing was conducted, data is not converted properly from legacy systems, and end users cannot find even the most basic data or necessary reports.

The SAP arena is replete with functional, development, and technical consultants who moonlight and parade as SAP testers for various testing efforts but often lack sufficient knowledge to establish a successful testing strategy and framework. What is more puzzling and baffling at SAP projects is that the individuals with the least amount of knowledge and skill in the area of testing are often the ones in charge of leading and managing the testing effort, since many SAP projects do not have dedicated test managers or centralized test teams. Admittedly, testing at any SAP project is an integrated effort that requires the expertise and skills of several resources such as SMEs, functional configuration resources, ABAP developers, and business analysts. Yet executing testing activities without the guidance and help of testing professionals is analogous to taking a trip without knowing what the final destination will be.
Frequently, an individual moonlighting as an SAP tester will state that “testing is breaking or exploring the system” or that he “knows how to test,” which undoubtedly leads to a misconception about what SAP testing is really all about. The truth is that many companies fail to adequately test an SAP system and instead deploy the system into production because testing is taking too long, which consequently forces the production support team to fix the system for the first six to eight months after the system is deployed because it was never properly tested prior to its release into the production environment. Whether by accident or on purpose, the modus operandi of many corporations is to deploy an unstable and/or poorly tested SAP system into production because defects and system problems can be dealt with at a later date in the production environment, even while there is substantial empirical data demonstrating that removing system defects is least expensive when done in the early stages of testing.

Industry data shows that removing system defects in a live production environment is at least 20 to 40 times more expensive than doing so in the unit-testing phase or during the requirements-gathering phase. Many defects can be eliminated or prevented altogether with thorough evaluation and peer review of requirements. Many corporations pay expensive consulting fees to fix production problems arriving at the production help desk rather than address these problems or defects during the applicable testing phase. The main reason this occurs is that SAP projects often do not spend the time or have the appropriate resources to ensure that the captured requirements are peer reviewed and evaluated with objective criteria, or to construct an RTM to provide coverage for all requirements and establish objective testing criteria for each testing phase. Another critical and overlooked reason that defects that should have been resolved during testing slip into the production environment is that individuals acting as SAP testers cannot reach consensus on testing nomenclature or the test approach.
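The 20-to-40-times multiplier cited above turns into stark arithmetic over even a modest defect count. The per-defect cost and defect count in this sketch are hypothetical; only the multiplier range comes from the text:

```python
# Illustrative cost of deferring defect removal to production, using the
# 20x-40x industry multiplier cited in the text. Dollar figures are hypothetical.
cost_to_fix_early = 100   # cost ($) to fix one defect at unit-test/requirements time
defects_deferred = 50     # defects that slip into the production environment

early_total = defects_deferred * cost_to_fix_early  # $5,000 if fixed early
for multiplier in (20, 40):
    production_total = early_total * multiplier
    print(f"{multiplier}x: ${production_total:,} in production vs ${early_total:,} early")
```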
The mere term testing in the SAP world is in and of itself broad enough to create ambiguity, since different individuals will have different perceptions and experiences about what testing means. Testing encompasses many activities, such as requirements gathering and traceability, test planning, test design, test execution, test reporting, test results, and resolution of defects, to cover a wide range of testing efforts such as unit, boundary, scenario, development, white/black box, security, smoke, integration, performance, user acceptance, and regression testing. Rarely, if ever, do two or more individuals from the configuration, development, or technical teams have the same nomenclature or understanding for a particular type of test. Chaos and inconsistency are the ensuing results of misunderstanding about what the term testing entails or what activities are associated with testing. Dedicated test teams can establish consistency for all testing terms for all project members based on established guidelines and nomenclature from credible sources such as the Software Engineering Institute (SEI), IEEE standards, and the SAP ASAP Roadmap methodology.

A common theme repeated at many SAP projects is that conclusive evidence is missing to show that requirements have been met before releasing the system into the production environment. Most project managers or functional managers cannot answer with any degree of confidence or objectivity whether the in-scope requirements captured during the requirements phase have been met before releasing or deploying a system. This occurs because the concept of an RTM is not embedded within most, if not all, of the mainstream or conventional methodologies for implementing SAP, for either initial SAP implementations or SAP upgrades.
Test tools pose a challenge for many SAP projects. Test tools hold the promise of unattended test case playback at any time, increased testing coverage, testing of processes with multiple data and process variations, verification of objects and calculations, and generation of automated test logs with time stamps for audit purposes and compliance. Many SAP system integrators and test tool vendors are adept at convincing companies to spend hundreds of thousands of dollars in acquiring automated test tools and test tool training, only to have the test tools gather dust. Test tools can sit idle because the company acquiring the test tools is missing an automation framework and thus cannot successfully engage the appropriate resources to maintain, install, and utilize the test tools. The payback period or return on investment (ROI) for test tools is not maximized or even reached until a series or library of automated test cases can be constructed and reused frequently for future system releases or to support production regression testing, to the point where automated execution is cheaper than doing the same tasks with manual labor hours. Constructing a library of automated test cases is rarely achieved even by companies that have had SAP in the production environment for years, because they do not allocate the necessary skilled resources to maintaining and utilizing the test tools. Companies that commit hundreds of thousands of dollars to the acquisition of test tools that sit idle compromise their testing budgets. Consequently, these companies resort to testing SAP exclusively with manual testers.
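The payback argument above reduces to a break-even computation: automation pays for itself at the regression cycle where the cumulative cost of manual execution overtakes the up-front tool, training, and scripting investment plus per-cycle maintenance. All cost figures in this sketch are hypothetical:

```python
# Break-even sketch for test tool ROI. All cost figures are hypothetical.
tool_and_scripting_cost = 200_000  # tools, training, building the test case library
manual_cost_per_cycle = 30_000     # labor for one manual regression cycle
automated_cost_per_cycle = 5_000   # maintenance + execution per automated cycle


def breakeven_cycle():
    """First regression cycle at which automation is no more expensive overall."""
    cycle = 0
    manual_total = 0
    while True:
        cycle += 1
        manual_total += manual_cost_per_cycle
        automated_total = tool_and_scripting_cost + cycle * automated_cost_per_cycle
        if automated_total <= manual_total:
            return cycle


print("automation breaks even at regression cycle", breakeven_cycle())
```

If the test case library is never built, which is the "tools gather dust" scenario described above, the break-even cycle is never reached and the investment is simply sunk.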
Another common challenge for testing SAP is inadequate training at all levels for either cross-matrixed testing resources or dedicated testing resources. Training is needed for testers who are participating in one-time testing efforts such as user acceptance testing, or participating in all testing efforts for execution of test cases and resolution of defects. The test manager needs to develop the procedures for mentoring and educating all project resources who are expected to participate in testing activities. Training consists of the following activities:

■ Training dedicated testers on how to maintain and install automated test tools and test management tools, and how to develop automated test cases.
■ Training testing participants on test procedures for logging defects and reporting test results.
■ Training on how to evaluate and peer review requirements.
■ Training on testing nomenclature to standardize testing terms for all project members.
■ Training on roles and responsibilities for resolving defects.

The challenges mentioned above are some of the most prevalent problems and issues that permeate most SAP projects. By no means are these the only challenges present at SAP projects. Many SAP projects suffer from poor documentation and configuration management for testing deliverables or work products, and an inability to successfully meet audits or design a solution that is in compliance with industry regulations and requirements. The aforementioned challenges are used as illustrations that highlight the need to establish robust testing techniques, methodologies, strategies, and frameworks.
EARLY TESTING MATTERS
It is never too early to implement and establish the testing program.In fact, for readers familiar with the SAP ASAP Roadmap methodol-ogy, Exhibit 1.1 shows that testing strategies are defined as early asthe project preparation phase. For readers familiar with the differentsoftware development life cycles including the waterfall model, thesoftware industry has developed a similar model known as the V-shaped model that emphasizes testing as a consideration throughoutthe development life cycle. Furthermore, industry standards suggestand manifest that testing early helps to decrease costs since identify-ing and resolving defects early on during the initial software devel-opment life cycle is much more economical than troubleshooting andresolving defects once the system has been deployed into the produc-tion environment.
Testing early and often is instrumental to reducing developmentcosts, ensuring fulfillment of in-scope requirements, and aligningwith the project’s scope statement. The most effective testing pro-grams start at the beginning of a project, long before any programcode has been written. The requirements documentation is verifiedfirst; then, in the later stages of the project, testing can concentrate onensuring the quality of the application code. Expensive reworking isminimized by eliminating requirements-related defects early in theproject’s life, prior to detailed design or coding work.
The requirements specifications for a software application or system must ultimately describe its functionality in great detail. In SAP, requirements for initial implementations are typically captured during the blueprint phase in workshops where various stakeholders state what they expect SAP to accomplish for them; the same applies to existing SAP implementations that are undergoing major upgrades or implementing previously deferred requirements. One of the most challenging aspects of requirements development is communicating with the people who are supplying the requirements. Each requirement should be stated precisely and clearly, so it can be understood in the same way by everyone who reads it.
If there is a consistent way of documenting requirements, it is possible for the stakeholders responsible for requirements gathering to participate effectively in the requirements process. As soon as a requirement is made visible, it can be tested and clarified by asking the
EXHIBIT 1.1 Testing and Quality Assurance Activities for an Initial SAP Implementation

Project phases: Preparation, Blueprint, Realization, Final Preparation, Go-Live Support.

Activities: define testing strategies; present QA standards and processes; review requirements; attend workshops for gathering requirements; conduct requirements peer reviews; enforce QA processes; assemble test team; set up test lab; procure test tools; test tool training for functional users; customize test tools; define baseline test cases; create test plan for baseline; kick-off presentation; test baseline; define final-scope test cases; create test plan for final scope; test final scope; conduct development testing; conduct integration testing; prepare for system testing; UAT preparation and execution; conduct system testing; continuous improvement; define regression testing strategy; define change control processes; automate test scripts; modify existing test scripts; participate in CCB meetings; execute test cases; document test findings; support test tools; support SOX; execute stress/load/volume/performance testing; generate and produce system testing scripts; gather and interpret system testing results.

Deliverables: testing strategy paper; automation standards; Quality Assurance Plan; established project methodologies, tools, and governance standards; checklists for evaluating specifications and diagrams; test lab with installed test tools; RTM (requirements traceability matrix); test plan and criteria; baseline test cases; test readiness review; test cases; test results; test report; developed automated scripts; execution calendar; lessons learned; automation framework; regression testing paper; refined library of automated test scripts; functional design specs; technical design specs; flow process diagrams; BPPs with test conditions; test scenario template; BPML; stress testing strategy; stress/volume testing sample plans.

Tools: LoadRunner, QuickTest Pro, Kintana, testing strategy white papers.

Note: During the realization phase, 50% or more of all project costs are dedicated to testing activities.
stakeholders detailed questions. Whether your team develops requirements using UML and some form of use case, or writes "the system shall" statements, a variety of requirement tests can be applied to ensure that each requirement is relevant and that everyone has the same understanding of its meaning. UML is a widely accepted technique for requirements gathering or reverse engineering an existing system, and its notation comprises multiple diagramming techniques such as use-case, activity, class, and sequence diagrams.
In order to introduce the concept of "test early and test often," it is important to recognize the following two items: (1) involve testers from the beginning, and (2) verify the requirements.
Involve Testers from the Beginning1
Testers need to be involved from the beginning of a project's life cycle so they can understand exactly what they are testing and can work with other stakeholders to create testable requirements.
Not only can testers verify the testability of the requirements, but they will also learn the thought process that went into each requirement as it applies to the application under test (AUT), making the tester more knowledgeable about the AUT.
A defect occurs when an executed test case produces test results that do not match the expected test results. Defect prevention is the use of techniques and processes that can help detect and avoid errors before they propagate to later development phases. Defect prevention is most effective during the requirements phase, when the impact of a change required to fix a defect is low: the only modifications will be to the requirements documentation and possibly to the testing plan, which is also being developed during this phase. If testers (along with other stakeholders) are involved from the beginning of the development life cycle, they can help recognize omissions, discrepancies, ambiguities, and other problems that may affect the requirements' testability, correctness, and other qualities.
1 Adapted from Elfriede Dustin, Effective Software Testing, Reading, MA: Addison-Wesley, 2002.
A requirement can be considered testable if it is possible to design a procedure in which the functionality being tested can be executed, the expected output is known, and the output can be programmatically or visually verified.
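As a minimal sketch of this definition, a testable requirement can be expressed directly as an executable check. The function and profit-center values below are hypothetical stand-ins, not SAP APIs:

```python
# Illustrative only: a toy stand-in, not a real SAP transaction. The
# requirement "posting a material to a valid profit center succeeds" is
# testable because the procedure can be executed, the expected output
# ("posted") is known in advance, and the output can be verified
# programmatically.

VALID_PROFIT_CENTERS = {"PC1000", "PC2000"}

def post_material(profit_center: str) -> str:
    """Stand-in for an SAP posting transaction."""
    if profit_center not in VALID_PROFIT_CENTERS:
        raise ValueError("invalid profit center")
    return "posted"

def test_posting_to_valid_profit_center() -> None:
    # The expected output is specified up front, so verification is mechanical.
    assert post_material("PC1000") == "posted"

test_posting_to_valid_profit_center()
```

A requirement that cannot be phrased this way (executed, with a known, checkable outcome) is a signal that the requirement itself needs rework.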
Testers need a solid understanding of the product so they can devise better and more complete test plans, designs, procedures, and cases. Early test-team involvement can eliminate confusion about functional behavior later in the project life cycle. In addition, early involvement allows the test team to learn over time which aspects of the application are the most critical to the end user and which are the highest-risk elements. This knowledge enables testers to focus on the most important parts of the application first, avoiding overtesting rarely used areas and undertesting the more important ones.
Some organizations regard testers strictly as consumers of the requirements and other software development work products, requiring them to learn the application and domain as software builds are delivered, instead of involving them during the earlier phases. This may be acceptable in smaller projects, but in complex environments it is not realistic to expect testers to find all significant defects if their first exposure to the application comes after it has already been through requirements, analysis, design, and some software implementation. More than just understanding the "inputs and outputs" of the software, testers need deeper knowledge that can come only from understanding the thought process used during the specification of product functionality. Such understanding not only increases the quality and depth of the test procedures developed, but also allows testers to provide feedback regarding the requirements.
Verify the Requirements
In his work on specifying the requirements for buildings, Christopher Alexander describes setting up a quality measure for each requirement: "The idea is for each requirement to have a quality measure that makes it possible to divide all solutions to the requirement into two classes: those for which we agree that they fit the requirement and those for which we agree that they do not fit the requirement." In other words, if a quality measure is specified for a requirement, any
solution that meets this measure will be acceptable, and any solution that does not meet the measure will not be acceptable. Quality measures are used to test the new system against the requirements.
Attempting to define the quality measure for a requirement helps to eliminate requirements not suitable for implementation, and thus for testing. For example, everyone would agree with a statement like "the system must provide good value," but each person may have a different interpretation of "good value." In devising the scale that must be used to measure "good value," it will become necessary to identify what that term means. Sometimes requiring the stakeholders to think about a requirement in this way will lead to defining an agreed-upon quality measure. In other cases, there may be no agreement on a quality measure. One solution would be to replace one vague requirement with several unambiguous requirements, each with its own quality measure.
It is important that guidelines for requirement development and documentation be defined at the outset of the project. In all but the smallest programs, careful analysis is required to ensure that the system is developed properly. Use cases from UML notation are one way to document functional requirements, and can lead to more thorough system designs and test procedures. (Here, the broad term requirement will be used to denote any type of specification, whether a use case or another type of description of the functional aspects of the system.)
In addition to functional requirements, it is also important to consider nonfunctional requirements, such as performance and security, early in the process, since they can determine technology choices and areas of risk. Nonfunctional requirements do not endow the system with any specific functions, but rather constrain or further define how the system will perform any given function. Functional requirements should be specified along with their associated nonfunctional requirements.
Chapter 3 offers a checklist that testers can use during the requirements phase to verify the quality of the requirements. Using this checklist is a first step toward trapping requirements-related defects as early as possible, so they don't propagate to subsequent phases, where they would be more difficult and expensive to find and correct.
TYPES OF SAP TESTS
Traditionally, the main types of SAP tests include unit, development, scenario, integration, performance, and regression testing. These tests are described below in greater detail to clarify what each type of test entails.
Unit Testing
This is the lowest level of testing, performed at the SAP transaction level. Unit testing includes boundary testing for positive and negative cases. Negative testing should be performed for custom fields and transactions to ensure that the system only allows valid input and can adequately perform exception handling. An example of a negative test for a process would be attempting to process an order with the wrong status.
Unit testing also includes testing security roles. The configuration team owns the unit-testing effort and is responsible for planning and executing unit testing. The main focuses of unit testing are:
■ Master data
■ Negative-positive testing
■ Transaction functionality
■ Security roles and profiles
Negative testing is performed on security roles and profiles, custom fields, objects, and processes. Each negative test needs two elements:
1. Intentionally specify conditions that will cause the software to generate an error.
2. Ensure that the generated error is handled in a specified manner.
An example of a negative test condition would be "Attempting to post a material to an invalid profit center should produce an error message." Another negative testing example, for security roles and segregation of duties, would be "An inventory clerk attempts to approve a million-dollar purchase order when he is only permitted to approve purchase orders for a maximum of $500,000."
Negative testing will be designed to address the following situations:

■ Check exception handling and error messages.
■ Prove that the system will deal with program exceptions and erroneous data.
■ Limit or prevent an end user from trying to do something he should not.
■ Demonstrate that the system does not do anything that it is not supposed to do.
■ Confirm that users are permitted to perform only those actions allowed by their authorizations, position roles, and permissions.
Development Testing
This is the testing of reports, interfaces, conversions, enhancements, workflows, and forms (RICEWF) development objects, developed primarily with ABAP code. Testing of development objects includes testing for security authorizations, performance, extracts, data transfer rules, reconciliations, and batch scheduling jobs. In many SAP projects, third-party tools such as Control-M and AutoSys are acquired to schedule reports and interfaces with dependencies, and these scheduled jobs need to be tested prior to releasing the system into the production environment. Development testing should also ensure that data can be tested through the intended target system. The owner(s) of the target system can specify the applicable or representative sets of data needed to test interfaces and conversions, which allows the development or ABAP team to conduct white box and black box testing on ABAP programs.
The development or ABAP team is responsible for planning and executing the development tests, but the configuration team is responsible for approving the results of the development tests.
Development testing ensures that interfaced data originating from legacy systems can be effectively transferred into SAP or sent from SAP into a legacy system. In order to design test cases for
RICEWF objects, technical specifications, which can contain pseudocode, will need to be developed. The development test cases need to reflect the testable conditions from the technical specifications.
Business Warehouse (BW) testing is also part of the development tests. BW testing includes testing the InfoCubes, queries, reports, and MultiCubes. The main types of tests for BW testing are:
■ Reconciliations. Are financial calculations rolling up correctly?
■ Extracts. Is there a match between the number of extracted records and the number of received records?
■ Performance. How fast can a query be performed, and does it conform to established performance SLAs?
■ Security. Who is permitted to slice and dice the data in the BEx Analyzer? What are the established roles for generating queries?
■ Data transfer rules. Is data transformed correctly for all fields from the source system to the target system?
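The extract check, for example, reduces to a record-count comparison between source and target. The sketch below is illustrative; a real reconciliation would query the legacy source system and the BW target rather than use in-memory lists:

```python
# Illustrative extract reconciliation: compare the number of records
# extracted from the source with the number received by the target.
# The record lists are hypothetical stand-ins for real system queries.

def reconcile_counts(source_records, target_records):
    """Return (match, source_count, target_count)."""
    return (len(source_records) == len(target_records),
            len(source_records),
            len(target_records))

source = [{"doc": 1}, {"doc": 2}, {"doc": 3}]   # extracted from legacy system
target = [{"doc": 1}, {"doc": 2}]               # received by BW

match, n_src, n_tgt = reconcile_counts(source, target)
print(match, n_src, n_tgt)  # False 3 2 -> one record lost in transfer
```

A count mismatch is only the first-line check; a fuller reconciliation would also compare key fields and financial totals, as the reconciliations bullet suggests.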
Scenario Testing
The equivalent of a string test, scenario testing is the testing of chains of SAP transactions that make up a process within a single area or module. Scenario testing includes testing a process with data from external systems and applicable SAP roles/profiles.
Scenario testing is primarily a manual effort but can include some partial automation with test tools for processes that are stable, frozen, and proven to have worked manually. Scenario testing is owned by the configuration teams but includes participation from SMEs and members of the test team and development team.
Integration Testing
Integration testing is the testing of chains of SAP transactions that make up an end-to-end process cutting across multiple modules (for instance, order-to-cash, purchase-to-pay, and hire-to-retire) with external data and converted data. Integration testing includes testing through the external systems and SAP bolt-ons with security roles and workflow, and consists of multiple iterations. The dedicated test team is the owner of the integration test. Integration testing requires participation from members of the configuration and development teams for defect resolution. Additionally, SMEs and end users participate in the integration test as reviewers and approvers of test results.
Integration testing is mostly a manual effort but can include some partial automation with test tools.
Performance Testing
Performance testing encompasses load, volume, and stress testing to determine system bottlenecks and degradation points. A performance test helps to determine the optimal system settings to meet and fulfill the established SLAs.
The dedicated test team is the owner of the performance test. Performance tests are conducted primarily with automated test tools. In theory it is possible to conduct performance testing with manual test cases, but this proves highly impractical, since it is not easily repeatable and requires both human and hardware resources that are often not available. A performance test, even if automated, can still include manual execution of interfaces, batch jobs, and external processes that send data into SAP.
The Basis, database, and infrastructure teams help monitor the performance test, whereas the configuration team helps to identify test data and document test cases that are suitable for the performance test.
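At its core, a load test fires many concurrent transactions and checks each response time against the SLA. The sketch below is a toy Python illustration (the transaction is a stub and the SLA value is hypothetical); real SAP performance tests use dedicated tools such as LoadRunner:

```python
# Toy load-test sketch: execute transactions concurrently and compare each
# response time against an SLA threshold. execute_transaction is a stub,
# not a real SAP call.

import time
from concurrent.futures import ThreadPoolExecutor

SLA_SECONDS = 0.5  # hypothetical response-time SLA

def execute_transaction(_: int) -> float:
    """Return the elapsed time of one (stubbed) transaction round trip."""
    start = time.perf_counter()
    time.sleep(0.01)  # stand-in for the real SAP round trip
    return time.perf_counter() - start

# 50 transactions across 10 concurrent workers, mimicking concurrent users.
with ThreadPoolExecutor(max_workers=10) as pool:
    timings = list(pool.map(execute_transaction, range(50)))

violations = [t for t in timings if t > SLA_SECONDS]
print(f"{len(timings)} runs, {len(violations)} SLA violations")
```

The point of the sketch is the shape of the test (concurrency, timing, SLA comparison), not the numbers; a real test would also ramp load to find the degradation point.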
User Acceptance Testing
User acceptance testing allows the system's end users to independently execute test cases from the perspective of how they plan to perform tasks in the production environment. The owners of user acceptance testing are the end users; the configuration and test team members resolve defects identified during the user acceptance test. The test team and change management team members help train end users and prepare them for the user acceptance test.
Regression Testing
Regression testing ensures that previously working system functionality is not adversely affected by the introduction of new system changes. System changes targeted for the production environment need to be analyzed for impact and cascading effects on other processes. Since SAP R/3 is an integrated system, a single system change (whether a hot package, an OSS note, or a transport to resolve a defect) can have far-reaching consequences for other processes, and thus regression testing is needed to ensure that "nothing is broken" as a result of a new system change. Regression testing is primarily an automated testing effort: a library of automated test cases is constructed and played back to ensure that system transports do not break or alter system functionality.
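The playback idea can be pictured as follows. This is a purely illustrative sketch; the lambdas stand in for automated SAP test cases recorded with a test tool, and the names are hypothetical:

```python
# Illustrative regression playback: each library entry pairs an automated
# script with its previously verified expected result. After a system change,
# the whole library is replayed and any mismatch is reported as a regression.

def run_regression(library):
    failures = []
    for name, script, expected in library:
        actual = script()
        if actual != expected:
            failures.append((name, expected, actual))
    return failures

# Toy scripts standing in for recorded SAP test cases.
library = [
    ("create_sales_order", lambda: "order created", "order created"),
    ("post_goods_issue", lambda: "posted", "posted"),
]

print(run_regression(library))  # [] -> the change broke nothing
```

An empty failure list is the "nothing is broken" exit criterion; any entry in the list points at functionality the transport altered.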
The test team owns the execution of the regression test. Determining the impact of a system change is primarily the responsibility of the integration team and the change control board (CCB).
Other types of SAP tests include usability, archiving, data migration, and technical tests. Usability testing is discussed in Appendix A. Technical tests, such as backup and recovery, printing, faxing, electronic data interchange (EDI), availability, and so on, are also needed, particularly for initial SAP implementations and/or global SAP rollouts. Technical testing is beyond the scope of this book.
Data migration testing for established SAP implementations refers to SAP projects that have global SAP rollouts or multiple business units and want to introduce SAP to other company divisions or business segments. For example, a company may have designed the order-to-cash business process within SAP for one division and may have plans to extend the same or a slightly modified version of the order-to-cash business process to a different division that has different data values; thus the new data values need to be tested.
Depending on contractual obligations, the scope of the project, project oversight, or industry regulations, the SAP tests described above may need to be either very formal and structured or casual. The tests described will at a minimum require identification of valid test data; rewriting of test cases or creation of new test cases; manual testing; peer reviews; and approvals at the end of each testing cycle.
CHAPTER 2
Status Quo Review of Existing Testing Practices
In order to establish a successful SAP testing program, it is important to determine what strengths and weaknesses the existing testing approach offers. The status quo for many SAP projects is outdated testing documentation, underutilized automated test tools, overlooked lessons learned, and testing resources who are fillers or stand-ins until a formal and dedicated test team is established.
Reviewing the status quo requires eliminating flawed testing practices and furthering successful ones. Naturally, this is easier to discern when lessons learned are captured. However, even in the absence of lessons learned, it is possible to detect the program's testing strengths and weaknesses by reviewing the project's methodologies, reviewing the reported defects, holding testing seminars, or hiring third-party organizations that specialize in software testing.
HOW ARE YOU TESTING?
Project managers at most SAP installations have the perception that their projects follow structured testing approaches, that their project members "know how to test," and that they successfully completed testing for previous system releases. While this notion is prevalent at many SAP installations, most cannot accurately answer the following 10 questions:
1. What is the number of fulfilled testable requirements for a previous testing cycle, or how does the system design trace back to all captured functional, development, and technical requirements?
2. What is the number of test cases that need to be planned for the next testing cycle?
3. What is the expected cost associated with all testing activities?
4. What is the number of resources needed to test the system, either for a major system upgrade or for an initial system installation?
5. What business processes are suitable for test automation based on predefined criteria? How are automated test cases reused from one testing cycle to the next?
6. How is the system configuration compliant with company policies, corporate business rules, and/or industry regulations?
7. How many defects remain outstanding from a system release, what is their priority, and how will they be resolved in a future system release?
8. What are the assessment criteria for evaluating proposed system changes, and what objects are impacted that require regression testing as a result of the introduction of a system change?
9. What are the components of a test exit criterion, and how is a test readiness review (TRR) conducted?
10. What testing metrics are captured for each system release, and/or what testing documentation is retained at the end of each testing cycle to support company audits?
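Question 1, for instance, is answerable only if requirements are traced to test cases. Below is a minimal sketch of a coverage metric computed from a requirements traceability matrix; the requirement and test-case IDs are hypothetical:

```python
# Hypothetical RTM: requirement IDs mapped to the test cases that cover them.
rtm = {
    "REQ-001": ["TC-01", "TC-02"],
    "REQ-002": ["TC-03"],
    "REQ-003": [],  # captured but never tested
}

# A requirement counts as covered if at least one test case traces to it.
covered = [req for req, test_cases in rtm.items() if test_cases]
coverage = len(covered) / len(rtm)
print(f"{len(covered)}/{len(rtm)} requirements covered ({coverage:.0%})")
```

Even this trivial computation surfaces REQ-003 as a requirement with no planned verification, which is exactly the gap the question is probing.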
This is only a partial list of the questions that most SAP installations struggle to answer when planning and executing their testing tasks. The status quo at most SAP installations is to interpret or define the intention of captured requirements, configure the system, develop Advanced Business Application Programming (ABAP) objects without code walk-throughs, rush or compress the testing schedule with project resources who are fillers or devoid of a testing background, transport objects into the production environment to meet project deadlines, and confront system defects through the production help desk. This approach of compressing the testing schedule and rushing transports into the production environment increases project costs, since resolving and eliminating system defects after they have been introduced into the production environment is much more expensive than doing so in the earlier stages of the software life cycle. But the practice at many SAP installations is to place the burden on SAP production support for resolving defects that were
missed, overlooked, or waived from a previous system release, in the hopes of dealing with system defects at a later date.
In the current era of strict compliance with government acts and industry regulations, many companies implementing or maintaining SAP installations must show how their SAP projects tie in with the project charter, scope statement, and captured requirements, and must retain testing artifacts such as test logs and test results, which requires a robust and comprehensive testing approach. In order to maximize the effectiveness of the project's testing resources, it is necessary to review and analyze existing testing practices in order to address a wide range of situations that may impact testing cycles, such as:
■ Does the project have centralized or decentralized test teams? Is the test team, or what passes for a test team, composed of "fillers" or individuals from other project teams who are moonlighting as testers?
■ How are process flow diagrams constructed to meet and enhance understanding of testable requirements? How are links and interdependencies among process flow diagrams captured, maintained, and depicted? Do process flow diagrams include swimlanes and expected SAP production roles for each swimlane?
■ Can the project manually execute all planned test cases and test scenarios?
■ Do test cases contain sufficient information and test conditions to verify and validate SAP profiles, segregation of duties, workflow, inbound data from legacy systems, reports, and company policies?
■ Does the project have documented, peer-reviewed, and approved test plans and test strategies? If so, how do project members adhere to such documentation?
■ How are the documents referenced for configuring the system, such as business process procedures (BPPs), flow process diagrams, functional and technical specifications, and requirements, managed and updated so that the system configuration settings and ABAP objects stay in sync and in harmony with the project's documentation?
■ How are lessons learned from a previous testing cycle applied to future testing efforts? Or are lessons learned gathered and collected only to gather dust and never be applied?
■ Do project resources show resistance to change when new testing initiatives or approaches are introduced?
The inability to address these situations can substantially compromise and undermine all testing tasks and deliverables. For SAP projects that can successfully address all the preceding situations, the next path forward is continuous process improvement for testing, in the never-ending quest of streamlining the testing effort without compromising system quality and/or increasing project costs. However, practical experience shows that few companies can address all the aforementioned conditions; rather, most excel at meeting some of the conditions while completely ignoring the rest. SAP installations that recognize the gaps and disconnects in their testing methodology, even when their SAP system has been in production support for a number of years, can reduce project costs, increase the chances of verifying testable requirements, and minimize end-user-logged defects about the production environment.
TEST TEAM STRUCTURE
Reviewing the status quo for SAP testing requires the project manager and/or test manager (if one exists) to determine the makeup and composition of the testing team. In SAP projects, the testing team is centralized, decentralized, or "outsourced." After the composition of the test team is determined, the next steps include deciding whether the current team makeup is suitable for the project's testing needs, or what it would take to shift the team structure.
Centralized test teams are dedicated test teams whose main or only responsibilities are to maintain automated test tools, execute manual and automated test cases, report defects, design test cases, and enforce testing standards. Centralized test teams are under the control of, and report directly to, the appointed test manager, and interface heavily with members of the configuration and development teams to document and design test cases, execute test cases, and resolve defects. They tend to bring consistency to the creation of testing deliverables, since testers have a single reporting hierarchy (the test manager) and adhere to the testing standards implemented by
the test manager. Centralized test teams conduct hands-on testing for regression, performance, and integration testing. They are most suitable for projects that meet the following criteria:
■ There is a large functional scope.
■ Data is migrated from multiple legacy systems and/or SAP is heavily customized.
■ They are subject to audits and industry standards.
■ They have automated test tools.
■ They can allocate the necessary budget for the test team.
■ They have constructed a requirements traceability matrix (RTM).
■ They have experienced a high volume of complaints and/or requests for changes about the system functionality and performance from the production end users.
■ They are paying a system integrator to deliver SAP services and require some "independent" testing to verify and validate the deliverables from the system integrator.
Decentralized test teams are by far the most common structure at SAP implementations, whereby members of the functional and development teams act as filler testers as required by the project testing schedule. For example, under decentralized test teams, a functional SAP configuration team member or ABAP team member becomes a "tester" when he develops and executes test cases to test a functional process. Admittedly, initial testing efforts such as unit testing are usually conducted with members of the configuration and ABAP teams, but these individuals are not dedicated testers and have a proclivity to produce deliverables of varying quality.
With decentralized test teams, different standards, methodologies, or templates may be used for testing SAP, which gives rise to multiple "test leads" based on the functional SAP area being tested. Accountability and ownership tend to fall through the cracks, since no single individual is responsible for test results, maintaining automated test tools, ensuring that requirements have traceability to test cases, reporting testing metrics to measure testing progress, or documenting test plans and lessons learned. Decentralized test teams are suitable for projects with limited budgets that do not have automated test tools, when the SAP implementation is plain vanilla and implemented primarily out of the box, or when the project has a
limited functional scope. Decentralized test teams yield results that vary widely across the functional teams and hinder the ability of the project manager to evaluate the exit criteria for each testing effort. These test teams create confusion over testing deliverables, roles, and responsibilities for testing the system, and produce results that are difficult to leverage for future testing efforts during future system releases.
Outsourced test teams are the latest industry hype for lowering testing costs. With an outsourcing agreement, a project hands off testing activities to a third-party company that specializes in testing, typically located in a country with markedly lower labor costs. For example, an SAP installation may turn over all test automation tasks for developing, constructing, and executing automated test cases to a third-party company in the hopes of lowering the costs of performing the same automation tasks in-house. Outsourcing agreements also require at a minimum a project liaison to monitor, guide, and verify the deliverables that the outsourcing company is producing. The outsourcing team is a subset of a centralized test team, since in theory the outsourced testers were hired based on their testing expertise and their ability to follow the same approach, templates, methodology, and practices for producing deliverables, which adds consistency. They also report to a single manager. Exhibit 2.1 highlights the drawbacks and benefits of the different test team structures.
In addition to determining the makeup of the test team, it is also important to determine the makeup of the configuration test teams. The test team needs to be a reflection or mirror image of the makeup of the functional and development teams. For instance, if the SAP project has a "hire-to-retire" functional team, then the test team needs to have a tester assigned to test the cases for the "hire-to-retire" team, which include negative, security, workflow, migrated data, and user exit testable conditions. SAP implementations structure their functional teams either to emulate an end-to-end production business process (i.e., hire-to-retire, order-to-cash, purchase-to-pay, etc.) or to emulate standalone SAP modules (i.e., Human Resources module, Sales & Distribution module, Materials Management module, etc.). Setting up functional teams according to end-to-end processes is more effective than a team structure that emulates standalone SAP modules since it takes into account the SAP integration points ("touch points") as well as the variations (whether data or process driven) for each end-to-end process.
24 TESTING SAP R/3: A MANAGER'S STEP-BY-STEP GUIDE

EXHIBIT 2.1 Benefits and Drawbacks of Different Test Team Compositions

Decentralized
Concept: Each team (RICE, Functional, Security) conducts its own testing without coordination and standards.
Benefits: Perceived greater control over testing; perception that testing goes "faster"; greater flexibility to test without interference from a third party (group).
Drawbacks: Redundancy; inconsistency; testing rushed to meet deadlines; potential conflict of interest; limited managerial visibility; lack of standards; limited test coverage; missing testing metrics and results.

Centralized
Concept: QA and testing teams are established for defining and enforcing the standards and procedures for the various testing cycles and documenting the test plan, test results, etc. QA is geared toward prevention; testing is geared toward detection.
Benefits: Uniformity and consistency of artifacts (i.e., same templates); increased testing expertise; test tool knowledge and industry certifications (i.e., CSTE, Mercury, etc.); can build a center of excellence; independent point of view; facilitates go/no-go decisions; increased test coverage.
Drawbacks: More resources, higher payroll; cultural shift; can slow the process to enforce QA.
The review of the existing test team structure is the first step in deciding how future testing cycles will be conducted. In order to change the test team structure, the project manager will need to review existing project deadlines, budget constraints, the project's testing methodology and test plans, and the project's charter. However, before making any decisions about revamping the structure of the test team, it is necessary to review any documented lessons learned from previous testing cycles and how those lessons learned have been applied.
REVIEW LESSONS LEARNED
One of the most critical components of a testing program is documenting lessons learned and implementing corrective actions to address any deficiencies identified in the documentation. Recognized industry methodologies and models such as the Capability Maturity Model (CMM) and the Project Management Institute (PMI) place a premium on leveraging lessons learned. Lessons learned, however, are rarely documented because most projects do not have the bandwidth or discipline to record "what went wrong." Skipping this step is shortsighted, as most companies have a proclivity to repeat the same mistakes in future testing cycles. Lessons learned are needed for the following shortcomings that plague many SAP testing programs:
■ Inability or difficulty in collecting and reporting testing metrics.
■ Making little or no use of automated test tools and test management tools even when a significant financial investment has been made in these tools.
■ Not allowing sufficient time to conduct a performance test or have trial runs for a performance test.
■ Having a shared test environment that is subject to frequent changes and transports that can adversely affect the test execution of manual test cases or the automation of test cases.
■ Obsolete and outdated documentation for BPPs, flow process diagrams, and functional specifications.
■ Missing peer reviews and signoffs for test cases.
■ Missing test execution calendar.
■ Not managing testable requirements or creating an RTM.
■ Not documenting workarounds for defects that do not have a resolution prior to cut-over or go-live.
■ Not having predefined exit criteria for each testing cycle.
■ Not implementing or testing a solution that traces to the intent of the original scope statement.
■ Poorly tested SAP roles/profiles or having documented test cases that do not consider SAP roles for the SAP transactions that need to be conducted.
■ Ignoring negative testing conditions.
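The first shortcoming above, the collection and reporting of testing metrics, is largely mechanical to address. As a minimal sketch (the status values and record fields are illustrative, not drawn from any SAP test tool), execution progress can be summarized from a list of test-case records:

```python
from collections import Counter

def execution_metrics(test_cases):
    """Summarize test-execution progress from a list of records.

    Each record is a dict with a 'status' key whose value is one of
    'passed', 'failed', 'blocked', or 'not_run' (hypothetical values).
    """
    counts = Counter(tc["status"] for tc in test_cases)
    executed = counts["passed"] + counts["failed"]  # blocked/not_run are unexecuted
    total = len(test_cases)
    return {
        "total": total,
        "executed": executed,
        "pass_rate": counts["passed"] / executed if executed else 0.0,
        "percent_executed": 100.0 * executed / total if total else 0.0,
        "blocked": counts["blocked"],
    }

cases = [
    {"id": "TC-01", "status": "passed"},
    {"id": "TC-02", "status": "failed"},
    {"id": "TC-03", "status": "passed"},
    {"id": "TC-04", "status": "blocked"},
    {"id": "TC-05", "status": "not_run"},
]
print(execution_metrics(cases))
```

A summary like this, recomputed daily from the test management tool, is often enough to satisfy the progress-reporting need described above.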
After every testing cycle the test manager is expected to document all lessons learned for continuous process improvement. Lessons from testing typically describe which areas of testing require improvement or were not performed in accordance with approved standards. Lessons learned are needed because SAP projects usually need to compress the testing schedule, which causes testers to cut corners in order to meet deadlines, or because the test manager encountered unplanned situations during testing that forced hasty decisions, which did not yield the expected results.
In the event that the SAP project does not have a dedicated test manager, an organization implementing SAP can hire a third-party organization to document "what went wrong" after the fact. These third-party organizations analyze test results and apply software life-cycle techniques to discern why testing was not conducted as expected or why a large number of defects slipped into the production environment. A third-party organization is typically brought in to document testing lessons learned in two situations: when the original SAP system integrator is dismissed in favor of a new system integrator, and when the SAP production support team is expected to resolve all defects that were missed during previous testing cycles, which causes the workload for the production team to surge in the first six months following a major SAP system upgrade or the initial go-live.
Documenting lessons learned can help programs meet future audits for test results, reduce rework costs, increase the stability of the production environment, and provide a template for planning the
next test cycle. Captured lessons, whether documented by the test manager or a third-party company, must be reviewed at the end of the testing cycle, which may lead to modifications to test plans, test strategies, the automation approach, the design and construction of future test cases, and the approach for resolving and closing defects. For large or global organizations implementing SAP in multiple sites with potentially different system integrators, it is important to communicate the lessons learned company-wide and to post them in a common repository such as a company-sponsored intranet.
A company's methodology, whether created in-house or adopted from another body, needs to be aligned with the need to capture lessons learned.
EXISTING METHODOLOGY
When reviewing the status quo, companies implementing SAP need to assess what software methodology or approach guides the work products and deliverables of the SAP resources, including the SAP testing team.
Large SAP system integrators such as Deloitte Consulting and IBM offer methodologies and implementation guides such as ThreadManager and Ascendant™ for either upgrading or initial installations of SAP. SAP itself offers the SAP Roadmap methodology embedded within the Solution Manager platform. Recognized bodies such as IEEE, SEI, and the U.S. Department of Defense (DoD), with its 5000 series directives for life-cycle management and acquisition, to name a few, also provide software methodologies for implementing an ERP/Customer Relationship Management (CRM) solution such as SAP R/3.
Corporations that lack a recognized methodology for implementing SAP can rely on software approaches that conform to the waterfall, spiral, and evolutionary models. These models offer different approaches for implementing software, including prototyping, dealing with programs that have a large scope, and handling unstable requirements. Depending on the size of the corporation implementing SAP, it is possible that the corporation already has other large software initiatives, and a successful life cycle for delivering them, that can be leveraged for implementing SAP.
A successful software methodology, whether created in-house or adopted from another body, needs to have templates, accelerators, and white papers for testing ERP applications. Methodologies specifically designed for building software from scratch or from the ground up may not be suitable for implementing an out-of-the-box solution such as SAP and thus may not offer any relevant guidance for testing SAP. Methodologies can differ in style, nomenclature, or deliverables, but for testing purposes, guidelines need to be clearly identified for the testing tasks that need to be performed. For instance, in the DoD the acquisition 5000 series governs by law what the government requires for a user acceptance test (UAT), which can differ drastically from the recommendations and guidelines of other SAP methodologies for conducting UATs. In this example, methodologies differ in what is necessary to conduct a UAT, but what matters is that they address the need for UAT testing and provide some guidance for conducting it.
The project and test managers must pay special attention to the project's methodologies and how existing testing activities and tasks conform and align to them. If no formal methodology exists within the project, then efforts must be taken to ensure that the testing approach and test plans are adequate for the project to help fulfill testing exit criteria, comply with testing audits, document lessons learned, and ensure that the system successfully traces its design to the in-scope requirements.
MANAGING TESTING CHANGES
An appointed or chartered committee such as a change control board or project management office (PMO) can formally convene the project stakeholders who are authorized to introduce project or system changes affecting project resources, schedule, quality, or scope. But finding a similar committee for introducing testing changes at SAP projects can prove difficult. SAP or other ERP projects need a testing committee consisting of members such as solution architects, the integration manager, the test manager, configuration and development managers, release managers, and the project manager, who are capable and authorized to
introduce changes to the test plan, test strategy, or test approach. The test manager can chair the testing committee, and members can meet with a specified frequency (i.e., monthly) in order to review testing changes for acceptance, rejection, introduction, implementation, and enforcement across all project teams. The concept of a testing steering committee or chartered testing group is often ignored at many SAP projects, which causes the testing life cycle to remain static, inefficient, or useless.
One of the most pressing and frequent challenges that a testing program at an SAP project experiences is how to introduce change to an organization's testing artifacts, deliverables, and standards. Most SAP projects lack an appointed or chartered committee for introducing, implementing, and enforcing new changes to either the testing approach or testing standards. A test manager who does not have authority over the functional (configuration) and development (technical) resources who fill testing roles or are appointed to testing tasks may encounter resistance from various team leaders or project managers whose direct reports are impacted by the introduction of testing changes. A test manager may propose an ostensibly "minor" change to the testing life cycle or testing methodology only to find that the configuration team leaders or configuration manager will reject or resist the proposed change.
It is often the case that the test manager cannot align the testing objectives and consequences with the individuals in charge of managing the resources appointed to testing tasks. Under these circumstances, the test manager would need to convince a project manager or several team leaders (who may not be authorized to make decisions), in a series of ongoing meetings, that a test change is necessary, of the rationale for the change, and of the benefits of introducing it. An approach whereby a test manager must convince several project entities and stakeholders that a testing change is needed is typically time consuming, and likely to delay the implementation of the change or cause it to be scrapped because the project members cannot reach agreement on it.
During the implementation or support of an SAP system it will be necessary for the test manager to propose changes, at a minimum, to the following artifacts or work products: testing methodology, test plans, automation strategy, testing templates, testing presentations,
the approach for defect resolution, and test execution. Introducing changes to a test artifact, however, has the potential to affect project members responsible for or assigned to testing tasks, the project scope, training of resources, the project schedule, and project costs. Examples of testing changes that a test manager may propose to the project, and their consequences, include the following:
■ Customization changes to a test management tool (i.e., adding new fields, screens, or validation logic to the management tool), which can trigger the creation of new custom training materials for the changes introduced to the test management tool, and the training of project resources who may have different learning curves for becoming familiar with the changes.
■ Applying lessons learned from a previous testing cycle to a future testing cycle, which can over the long run increase quality or improve the testing methodology, but may cause a cultural shift among testing resources who resist the adoption of a new test approach derived from lessons learned.
■ Conducting requirements-based testing as opposed to executing test cases that do not map to a requirement, which can cause the configuration team members or test team members to allocate time from their schedules toward constructing an RTM and updating all test cases to map to a valid requirement.
■ Peer-reviewing test cases. Peer reviews are recognized in the CMM. Peer reviewing helps to refine the test cases and ensure that the documented test conditions are valid. However, introducing peer reviews would require at a minimum a form to collect feedback from the peer-review session and the assignment of project resources. The consequences of peer reviews can include conflict among testing resources, rework, and the need for training, since a peer review may cause one test team member to critique and evaluate the work of another.
■ Introducing a test readiness review (TRR) checklist. A TRR brings a disciplined approach to assessing preparedness for testing prior to the start of a testing cycle. A TRR is a checklist of items that must be met or have workarounds before the test execution phase is commenced. Items not met for a TRR may indicate that the project is not ready to begin execution for the testing cycle or that project members have not fulfilled all their responsibilities associated with a testing cycle. Under a TRR, different individuals are assigned items from a checklist that they must address in meetings prior to the start of testing. A TRR requires different project resources to provide status for their assigned tasks before a large audience, which can expose project members to political considerations and greater transparency for their assigned tasks.
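The TRR logic described above can be sketched as a simple checklist evaluation, in which an item blocks the start of test execution unless it is either met or has a documented workaround. The checklist items, owners, and field names below are hypothetical:

```python
def trr_ready(checklist):
    """Evaluate a test readiness review checklist.

    Each item is a dict: {'item': str, 'owner': str, 'met': bool,
    'workaround': str or None}. Returns (ready, blockers), where
    blockers are items that are neither met nor worked around.
    """
    blockers = [i for i in checklist if not i["met"] and not i.get("workaround")]
    return (len(blockers) == 0, blockers)

checklist = [
    {"item": "Test data loaded in QA client", "owner": "Basis",
     "met": True, "workaround": None},
    {"item": "Roles/profiles assigned to testers", "owner": "Security",
     "met": False, "workaround": "Testers share a temporary composite role"},
    {"item": "Test cases peer reviewed and signed off", "owner": "Test lead",
     "met": False, "workaround": None},
]

ready, blockers = trr_ready(checklist)
print(ready)  # False: one item is neither met nor worked around
```

Attaching an owner to each item supports the status-reporting meetings described above, since every unmet item points to a responsible individual.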
■ Enhancing flow process diagrams with narratives. It is often the case that a flow process diagram representing either an entire end-to-end scenario or a portion of one is outdated or does not include a narrative. A narrative describes the actors, preconditions, postconditions, description of the process, and assumptions associated with a flow process diagram. In UML (Unified Modeling Language) notation, use cases are constructed with narratives that describe the modeled process. Narratives for diagrams are particularly useful for projects that experience high levels of turnover or have complex (highly customized) business processes. Enhancing a flow process diagram with narratives would require the author of the diagram to describe different attributes of the modeled diagram, which can cause the author to shift attention from other tasks in order to document the narratives.
■ Automating testing processes. Automation is a useful technique for providing greater testing coverage, in particular for regression testing to support new project releases, system transports, or system upgrades. However, initial automation efforts are time consuming, require robust documentation of test cases and the conditions to be validated, and most likely require functional support from subject matter experts and SAP configuration members. Projects are often reluctant to free up SAP configuration members for extended periods of time, or any time at all, to construct automated test cases or update documentation such as test cases or business process procedures.
The aforementioned examples are situations that have training, political, rework, and labor-hour implications for project members assigned to testing tasks. Adopting these changes or similar testing
changes within an SAP project has the potential to end in futility, as decisions on testing changes move at a slow pace, if at all. Project managers must formally establish a test engineering committee, with participation from project members empowered and authorized to accept or reject testing changes, to avoid the inertia encountered when testing changes are introduced.
CHAPTER 3
Requirements
Total quality management (TQM) proponent Philip Crosby defined quality as "conformance to requirements." At the time it was coined, Crosby's definition applied primarily to statistical process control and manufacturing processes, but it is also applicable to software projects, where software requirements are mapped to test cases and ultimately to the designed software solution deployed into the production environment.
Our definition of quality is "a system that performs as expected, makes the required system features available in a consistent fashion, and demonstrates high availability under any type of constraint (i.e., stress, concurrency, security breach, etc.); a system that consistently meets the user's expectations and satisfies the system's user needs can be considered to be of high quality."
Whose responsibility is it to build high-quality systems? All stakeholders that take part in the software development process are responsible for system quality and need to have tasks assigned accordingly to be able to contribute to the quality of a system.
How is this type of quality achieved? Simply put, by documenting the system's user requirements and needs. In practice, however, it is more complicated than that. For example, many SAP projects do not have a stringent and effective methodology for drafting, capturing, managing, and verifying requirements, nor the schedules and budgets to implement one effectively. It can also be counterproductive to document every single requirement detail, something that is just not feasible unless you are working in an environment with no deadlines.
Even simple concepts such as the basic requirements traceability matrix (RTM1) and requirements-based testing are often obscure or
1 An RTM links all requirements to the SAP implementation and to test cases to allow for measuring completeness.
esoteric at most SAP implementations. So can be efforts to prioritize requirements. When working on a project where all requirements are considered high priority, it is not possible to implement or test based on requirement risk.
The inability to successfully manage and prioritize requirements based on risk, or to link requirements to SAP components or test cases, due to unrealistic deadlines and other internal project pressures and political ramifications, can lead to deployment of an SAP system that severely lacks quality (i.e., it does not meet the client's needs or goals, or violates the company's rules). Companies that do not verify all requirements cannot answer the simple yet critical question: "Have we built the system correctly?"
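Answering that question requires traceability. As a minimal sketch of the check behind an RTM, using hypothetical requirement and test-case identifiers, the following flags requirements that no test case verifies:

```python
def untraced_requirements(requirements, rtm):
    """Return requirement IDs with no test case mapped in the RTM.

    requirements: list of requirement IDs
    rtm: dict mapping requirement ID -> list of test-case IDs
    """
    return [r for r in requirements if not rtm.get(r)]

reqs = ["REQ-001", "REQ-002", "REQ-003"]
rtm = {
    "REQ-001": ["TC-010", "TC-011"],  # order entry, positive and negative cases
    "REQ-002": [],                    # captured but never mapped to a test
}

gaps = untraced_requirements(reqs, rtm)
print(gaps)  # ['REQ-002', 'REQ-003']
```

Until the gap list is empty (or the remaining gaps are consciously accepted by risk), the project cannot claim the system was verified against its requirements.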
Recent industry surveys also show that many enterprise resource planning (ERP) implementations (e.g., SAP) are judged unsuccessful. Projects that cannot answer "have we built the system completely and correctly based on prioritized, traceable, and testable requirements?" engage in the inexact art of hunches and wild guesses to assess whether the delivered SAP solution or functionality can be deployed into a production environment on time and within budget, which is the equivalent of sticking a thumb in the air to determine the wind velocity.
While perfect requirements may never be attained, with the help of requirements management tools, risk analysis, peer reviews, RTMs, and stakeholder involvement, companies implementing or upgrading SAP can design robust solutions capable of meeting end-user (client) expectations.
REQUIREMENTS BACKGROUND
According to Wikipedia, the free encyclopedia:
By definition a requirement is a description of what a system should do. A requirement is a singular documented need of what a particular product or service should be or do. In the classical engineering approach, sets of requirements are used as inputs into the design stages of product development.
A requirement specifies what the system will do but not how it will do it.
In SAP projects, requirements are often morphed into terms such as business processes, business scenarios, business models and diagrams, the business process master list (BPML) of transaction codes, functional specifications, and so on. Requirements allow the SAP project team to design, configure, and develop a solution that meets the end users' expectations as well as the entity's business rules and procedures. Requirements can help drive the project's scope, personnel resources, and budget.
In ERP packages such as SAP, functional requirements fall into the category of the tasks that the system's intended end users must perform in order to complete their job functions. An SAP end user may perform tasks such as creating invoices, sales orders, deliveries, requisitions, outline agreements, purchase orders, general ledger entries, and performance appraisals in order to complete his job functions. These tasks are considered the "what" that the SAP system must meet. These job tasks and functions that make up "what" the system should do for end users must be captured as requirements in some form, along with the associated company rules and policies.
Functional requirements are only a subset of the universe of requirements for an SAP system. In addition to functional requirements, SAP systems will have requirements for the following:
■ System performance, which can lead to service-level agreements (SLAs)
■ Security
■ Development objects (i.e., reports, interfaces, conversions, enhancements, forms, workflow)
■ Usability
■ Industry regulations
■ Other
For instance, as part of the implementation of an SAP system, the security components related to segregation of duties, roles, profiles, and security authorizations may need to be captured and verified as testable requirements.
In SAP, requirements can come from various sources for either an initial or an existing implementation. For an existing implementation, requirements can come from one or more of the following events: (1) a previous system release where some of the requirements were deferred, (2) help desk tickets where end users report problems or new
features that the system needs, (3) gap analysis and site surveys whereby end users cannot effectively perform their tasks, (4) adding a new system module, (5) implementing a new industry-specific solution, (6) adding a new company division, and (7) initially misunderstood requirements. All these events could cause the SAP project team to capture requirements to implement new system features, enhancements, modifications, or fixes.
An initial SAP implementation, however, will need to capture requirements from workshop participants, derive requirements from "as-is" documentation detailing the functionality of legacy systems, ensure that requirements comply with industry regulations (i.e., the Food and Drug Administration, the Federal Energy Regulatory Commission, the Sarbanes-Oxley Act, the Department of Transportation, etc.), and draw on the client's feedback about expected system functionality, end-user surveys, the scope statement, and so on. Gathering initial requirements requires participation from several stakeholders and is often facilitated through the use of workshops. Once requirements are captured, they can be moved into a requirements management tool and subjected to inspections. Requirements can be managed with the procedures and policies of the Change Control Board (CCB). In SAP jargon, requirements are often thought of as business scenarios, process steps, security roles, performance expectations, business rules, and so on.
Interpreting the intent of requirements is often a source of confusion and friction during the testing phases. However, these problems can be mitigated with requirements inspections, guidelines, and standards for drafting the requirements. For example, Karl Wiegers provides the following guidelines:
"Avoid using intrinsically subjective and ambiguous words when you write requirements. Terms like minimize, maximize, optimize, rapid, user-friendly, easy, simple, often, normal, usual, large, intuitive, robust, state-of-the-art, improved, efficient, and flexible are particularly dangerous. Avoid 'and/or' and 'etc.' like the plague. Requirements that include the word 'support' are not verifiable; define just what the software must do to 'support' something. It's fine to include 'TBD' (to be determined) markers in your SRS to indicate current uncertainties, but make sure you resolve them before proceeding with design and construction."2
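Wiegers's word list lends itself to a crude automated screen of draft requirements. The sketch below abbreviates his list of dangerous terms; it is a first-pass filter, not a substitute for a requirements inspection:

```python
import re

# Abbreviated from Wiegers's list of dangerous words; extend as needed.
AMBIGUOUS = {"minimize", "maximize", "optimize", "rapid", "user-friendly",
             "easy", "simple", "robust", "flexible", "support", "etc",
             "and/or", "tbd"}

def flag_ambiguous(requirement_text):
    """Return the ambiguous terms found in a requirement, sorted and lowercased."""
    # Tokenize on letters plus '/' and '-' so "and/or" and "user-friendly" survive.
    words = re.findall(r"[a-z/-]+", requirement_text.lower())
    return sorted(set(words) & AMBIGUOUS)

req = "The system shall support rapid order entry and/or be user-friendly."
print(flag_ambiguous(req))  # ['and/or', 'rapid', 'support', 'user-friendly']
```

A requirement that survives the filter, such as "The system shall post the invoice within 2 seconds," is not automatically good, but it has at least avoided the most common verifiability traps.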
2 Karl Wiegers, Software Requirements, Microsoft Press, 1999.
A requirement is not a mere flow process diagram, an SAP transaction code, a picture of an interface, a survey response, a broad generic statement in a scope document, or verbose, unwieldy documentation in a text editor that cannot be verified. A requirement focuses on what the system and user tasks are and validates them with test cases. Requirements form a hierarchy: a business requirement evolves into user requirements, which are further refined into functional requirements. Requirements can help determine or refine a project's scope, which subsequently determines personnel needs and budget. The next section provides some techniques for collecting, inspecting, and managing requirements.
METHODS FOR GATHERING REQUIREMENTS
In an SAP implementation there are various methods for collecting and gathering requirements. Some of the methods and techniques that lead to the capturing of new SAP requirements include workshops during the blueprint phase where customer input (CI) templates are populated, user surveys, the company's business rules, use cases, government regulations, deferred requirements from a previous release, help desk tickets, prototypes, gap analysis, "as-is" documentation from legacy systems, interviews with subject matter experts (SMEs), and the scope statement.
In order to successfully design, configure, and implement an out-of-the-box or prepackaged solution such as an ERP system, it is necessary to capture, manage, and verify requirements. Requirements can be captured for various categories such as functionality, security, performance, usability, development objects, and workflow. Captured requirements need to be evaluated, and requirements changes need to be managed in order to avoid scope creep.
Most initial SAP implementations will capture requirements during the workshops held in the blueprint phase, whereas existing SAP implementations may collect requirements from site surveys, gap analysis, end-user feedback, the addition of a module, and so on. Independent of the method for capturing requirements, the objective and goal of the test team and project manager is to ensure that the system conforms to all documented and in-scope requirements.
For companies either implementing or upgrading SAP with the implementation guide Solution Manager or IBM's Ascendant™ guide,
a methodology is presented that allows end users, SMEs, and business analysts to voice their system expectations, expected system tasks, and requirements. Solution Manager offers CI templates that contain fields and sections in which requirements can be identified, documented, and organized under three categories: (1) organizational units, (2) master data, and (3) business processes. CI templates can be modified as needed to meet the project's needs. Furthermore, companies can identify guidelines and standards for populating, storing, version-controlling, and applying statuses to a CI template.
An example of modifying a CI template could entail the inclusion of fields that capture how frequently a business process is executed, its expected business volume, its priority, and error-handling conditions to compensate for end-user mistakes. Examples of guidelines and standards for populating a CI template include identifying which fields are mandatory, identifying the stakeholders who must review the contents of the CI template, ensuring that the CI template does not have any fields that are left blank, criteria for considering when a CI template is "completed," and so on.
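The mandatory-field and no-blank-field guidelines above are straightforward to enforce programmatically. A sketch with hypothetical field names (actual CI template fields vary by project and Solution Manager configuration):

```python
# Hypothetical mandatory fields for a project's CI template standard.
MANDATORY = ["process_description", "process_owner", "security_authorizations",
             "expected_volume", "error_handling"]

def ci_template_gaps(template):
    """Return the mandatory fields that are missing or left blank."""
    return [f for f in MANDATORY
            if not str(template.get(f, "")).strip()]

template = {
    "process_description": "Create sales order from quotation",
    "process_owner": "Order-to-cash team",
    "security_authorizations": "VA01 with sales-org restriction",
    "expected_volume": "",  # left blank -- not yet estimated
}

print(ci_template_gaps(template))  # ['expected_volume', 'error_handling']
```

A check like this can feed the "completed" status criterion: a CI template qualifies only when its gap list is empty and the required reviewers have signed off.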
The CI templates can contain fields that allow the functional an-alyst to capture the following information for a given SAP businessprocess:
■ Security authorizations
■ Estimated number of users
■ Description of the business process
■ Ownership of the process
■ A high-level bulleted list of the requirements and expectations for the process
■ Development object considerations (e.g., reports, interfaces, conversions, enhancements)
■ Rationale for a requirement
■ Associated business process model
■ Master data
■ Dependencies
■ Impact to the existing organization
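As a rough illustration, a CI template of this kind can be sketched as a simple record structure. Everything below (the class name, field names, and the completeness rule) is a hypothetical sketch for illustration only, not Solution Manager's actual schema:

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class CITemplate:
    # Illustrative fields drawn from the list above; names are assumptions.
    process_name: str
    description: str
    process_owner: str
    estimated_users: int
    security_authorizations: List[str] = field(default_factory=list)
    requirements: List[str] = field(default_factory=list)          # high-level bullets
    development_objects: List[str] = field(default_factory=list)   # reports, interfaces, conversions, enhancements
    master_data: List[str] = field(default_factory=list)
    dependencies: List[str] = field(default_factory=list)

    def is_complete(self) -> bool:
        # One possible "completed" criterion: no mandatory field left blank.
        return bool(self.process_name and self.description
                    and self.process_owner and self.requirements)

ci = CITemplate("Order-to-cash", "Sales order through billing",
                "Sales process owner", estimated_users=250,
                requirements=["Support multiple order types"])
print(ci.is_complete())  # True
```

A project would tailor the mandatory-field rule in `is_complete` to its own standards for when a CI template counts as finished.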
After the CI template for a business process is chosen from the drop-down list, the user is presented with a form (the CI template itself) in which to enter information for the business process.
40 TESTING SAP R/3: A MANAGER’S STEP-BY-STEP GUIDE
Well-documented CI templates can lead to the identification of SAP touch points, which are areas of integration for SAP scenarios, and also to the identification of business scenarios. A scenario is a subset of requirements; a scenario may provide coverage for one or more requirements. For instance, the order-to-cash scenario can integrate functionality from four or more SAP modules with data from interfaces, involve extended components such as remote function calls (RFCs) and multiple SAP roles, trigger workflow, and ultimately transfer data based on business logic from one SAP transaction code to the next. In this example the order-to-cash scenario may provide coverage for functional, security, workflow, and development requirements. With the help of a CI template, the functional and business analysts can determine the touch points for a scenario such as order-to-cash.
Scenario variations identified from CI templates can be driven by the input data or by the process. For instance, the order-to-cash scenarios may verify that SAP transaction "VA01" has been documented with multiple order types. The IBM tool Ascendant provides an example of a scenario, "Sale from Stock Using Scheduling Agreement," which has six different variations (cases or scenarios) based on factors such as the type of sales order, the source of stock, whether stock is pulled from inventory or configured from scratch, and exceptions.
The following example for commercial orders was drawn from IBM's Ascendant methodology:
1. Commercial order: standard order flow with no exceptions
2. Commercial order: ship from stock
3. Commercial order: configure to order
4. International order: standard order flow with no exceptions
5. International order: ship from stock, one half from Mexico and one half from Canada
6. Intercompany order: standard order flow with no exceptions, and so on
The requirements identified from the CI templates can lead to the creation of scenarios, such as order-to-cash and sale from stock using scheduling agreement, that have different variations. Completed CI templates can also help the functional and business analysts develop functional and technical specifications, security roles, performance targets for system response times, SLAs, data elements, and development objects that determine the project's scope. Consequently, CI templates can lead to the creation of user requirements that can be verified through the execution of test cases.
Workshops held during the SAP blueprint phase provide a forum for populating the CI templates. Populating the CI templates helps to establish, confirm, or refine system requirements, and the completed templates describe what the system will do once it is deployed. Workshops are sessions in which project members affected by the design of the SAP solution participate in seminars, supported by scribes, to identify what the SAP system will do for them. For existing or deployed SAP production-based systems, gap analysis, change requests, or end-user tickets logged through the production help desk may provide opportunities to gather and collect requirements, which can be further analyzed in workshops.
The workshop participants can consist of SMEs, business analysts, a scribe, configuration experts, test team members, developers, and end users. Each participant plays a vital role in ensuring that the requirement is captured, peer reviewed, inspected, and finalized during the requirements-gathering stage, which is part of the SAP phase known as blueprint. For example, during an initial workshop for an initial SAP implementation, the SAP functional consultant may describe the capabilities of an SAP module, and the subject matter expert and end user can describe how the legacy system works in relation to those capabilities, or what SAP functionality needs to be extended to meet the needed functionality; this information can be captured in the CI templates that reside within the SAP Solution Manager platform.
The SAP Roadmap implementation methodology embedded within the Solution Manager platform provides hints and procedures for conducting a workshop. In particular, Section 2.4 of the SAP Roadmap focuses on the mechanics of conducting a workshop to collect SAP requirements. Exhibit 3.1 shows specific steps from the SAP Roadmap methodology for drafting requirements during the blueprint phase.
During a workshop, expected SAP functionality and requirements can be gathered and collected for a given SAP process within a single module, enterprise area, or end-to-end process. Typically, the SAP workshop gives the SAP configuration expert (who acts as the workshop facilitator) the opportunity to describe to the workshop participants the capabilities of the SAP system. As the workshop progresses, the facilitator probes the participants for information to determine their business needs, business functions, and expected tasks, and incorporates the information within the body of the CI template. A workshop facilitator can rely on a dedicated scribe to capture all feedback and comments from the workshop participants. Furthermore, the workshop scribe can document issues, concerns, or questions that the workshop participants raise so that they can be resolved at a later time.
Depending on the size and complexity of the SAP implementation, the project may need multiple iterations (rounds) of workshops to complete and finalize the CI templates. For processes and requirements that are captured but remain subject to interpretation and ambiguity, it might be necessary to illustrate the requirements with process flow diagrams containing swim lanes. The workshop facilitators and participants need to ensure that the requirements captured within the CI templates are consistent with their company's policies, business rules, and industry regulations. For instance, industries such as pharmaceuticals, airlines, and utilities may have their business rules and logic governed by regulations from the Food and Drug Administration (FDA), the Department of Transportation (DOT), and the Federal Energy Regulatory Commission (FERC). As another example of considering acts and policies when capturing system requirements, federal agencies and Department of Defense units within the United States have financial requirements governed by the Federal Financial Management Improvement Act of 1996 (FFMIA) and the Joint Financial Management Improvement Program (JFMIP), and thus the implemented SAP solution must be FFMIA and JFMIP compliant.

EXHIBIT 3.1 Activities to Support Gathering of Requirements from SAP's Roadmap Methodology

Phase: Blueprint

Activities:
• 2.4.2 General Requirements Workshops
• 2.4.3 Business Process Workshops
• 2.4.4 Gap Analysis
• 2.5.2 Development Requirements Review
• 2.7 SAP Feasibility Check
• 2.8 Authorization Requirements and Design
• 2.8.2 User Roles and Authorization Design

Deliverables:
• Functional Design Specs
• Flow Process Diagrams
• Technical Design Specs
• Customer Input (CI) Templates

Tools:
• Identify Criteria for Evaluating Requirements
• Create a Requirements Traceability Matrix (RTM)
• Requirements Management Tool
• Throwaway Prototypes
After requirements are documented, they should be inspected, peer reviewed, and subjected to a disciplined approval process. Exhibit 3.2 shows some of the roles associated with gathering and managing requirements. Well-documented and enforced criteria and standards can improve the quality of the documented requirements. Industry experts also recommend that test cases be drafted in parallel with the documentation of requirements as a means of improving requirement quality. As requirements are captured within a workshop, the functional test team member can develop test cases to verify each requirement before it is coded or configured within SAP. Illustrating the requirements with flow process diagrams, or demonstrating them with a throwaway prototype, can further enhance the quality of a requirement and reduce requirement ambiguity.

EXHIBIT 3.2 Roles for Managing and Collecting SAP Requirements

Drafting and managing requirements is an integrated team effort shared among the CCB, SAP experts, testers, and SMEs:

CCB
• Manage changes to requirements
• Freeze ("lock down") requirements
• Communicate requirement changes

SAP Expert
• Lead/manage workshops
• Fill out CI templates
• Construct functional and technical specs
• Diagram requirements
• Author requirements

Testers
• Help construct the requirements traceability matrix (RTM)
• Peer-review requirements
• Develop test cases to verify requirements

SMEs
• Peer-review requirements
• Ensure requirements align with the company's policies and business rules
• Sign off on requirements
• Participate in requirements
• Ensure requirements fulfill "as-is" system functionality
Under the SAP Roadmap methodology, opportunities are provided to refine and clarify requirements and to address issues and gaps within them. One of these opportunities is known as the SAP feasibility check (Section 2.7 of the Roadmap). The SAP feasibility check brings experienced SAP resources to the project to perform services such as evaluation of documented business processes and risks, determination of expected business volumes, and risk assessment. According to the SAP Roadmap, the following activities are major outputs of the SAP feasibility check:

■ "Mapping of the core business processes with SAP standard functionality and planned developments that identifies:
● Major gaps and modification requirements
● Critical functions and critical integration requirements
■ Check any functional risk of the planned solution, including business processes and gap analysis
■ Determination of expected business volume and number of users across the different components in the solution landscape
● Check of sizing and performance
● Assessment of availability requirements and management demands"

The SAP feasibility check concludes with written reports and presentations from the SAP experts. An SAP feasibility check may reveal that the requirements cannot be implemented, need to be modified, or can be implemented as stated. For example, the experts conducting the SAP feasibility check may show that a company's plan to have 10,000 company codes in a production environment may deteriorate the system's response times.
In the absence of CI templates or Solution Manager, another requirements elicitation technique is the user questionnaire or interview. This approach presents a questionnaire to legacy and production users that allows further decomposition of the project's scope statement. Most projects develop scope statements at a high level that need further decomposition in order for the SAP functional and development teams to discern what needs to be configured or coded. The owner of the questionnaire can provide instructions, deadlines, and guidelines for filling it out. The elicitation can be documented within a text editor or spreadsheet, and it queries end users on their expected production tasks. Exhibit 3.3 is a sample questionnaire for the configuration of the SAP human resources (HR) module for a global SAP rollout to multiple countries.
EXHIBIT 3.3 Questionnaire to Collect Requirements for the SAP HR Module

Questions (responses to be recorded alongside each item):
• Do you use SAP for HR? If not, what do you use?
• Do you use SAP for Payroll? If not, what do you use?
• Who enters time at your location? (Specify: 1. Timekeepers, 2. Employees, 3. Administrators, 4. Contractors, 5. All of them, or 6. Other)
• Define the people who are responsible for time entry and timesheet approval.
• What system do you use to enter time? Describe your current process for entering time (include diagrams if necessary).
• What locations and sites do you enter time for?
• What Payroll areas do you enter time for?
• What type of identification number is required in your timesheets (Social Security number, employee number, etc.)?
• How many users are expected to enter timesheets at your location?
• Describe any target or source systems where the timesheet data is received from or sent to.
• List any reports that are necessary to review timesheet entries.

Responses to the questionnaire in Exhibit 3.3 can help the SAP HR functional expert define or refine the project's scope statement, develop business processes, draft high-level requirements, and consequently configure the system. After the questionnaire is completely filled out and turned in, the functional expert can illustrate the questionnaire responses with diagrams and/or system prototypes to confirm understanding of the end users' responses. The same principle of peer reviewing and inspecting the requirements that originated from CI templates applies to the requirements (responses) obtained from the sample end-user questionnaire. SAP projects can develop similar questionnaires for other SAP modules.
Another technique for capturing SAP requirements is the creation of Unified Modeling Language (UML) use cases. Use cases allow analysts to capture what the system does, but not how it does it. The use case approach focuses on the end user's tasks and system actions, and use cases can be used to derive functional requirements. For instance, for a Web site bookseller, a use case may show that an Internet shopper (the actor) accesses the Web site to buy books (the use case). Use cases are accompanied by narratives that provide attributes and information for the use case, such as priority, frequency, description, preconditions, postconditions, primary actors, error handling, and variations. A use case that is missing a narrative is deficient and incomplete. Use cases can be verified with test cases. UML notation and diagrams such as use cases can be drawn with software such as Rational Rose, Altova UModel, or SmartDraw, to name a few. The ARIS™ software from IDS Scheer, which integrates with SAP's Solution Manager, offers the capability to design UML diagrams and SAP business process diagrams. Intellicorp offers the LiveModel™ solution, a graphical repository for documenting SAP business processes that also comes with a prebuilt 4.7 or 5.0 SAP reference model preloaded. Exhibit 3.4 shows an example of a use case for ordering CDs from a Web site.
For existing production-based SAP implementations, requirements can come from a gap analysis, requirements deferred from a previous release, or help desk-reported errors. An example is a large corporation that has rolled out SAP to some of its divisions or to a specific geographical area; the SAP project team now needs to include new regions or divisions and must take into account requirements, business processes, and business rules from a different set of users. Gap analysis is the term used for identifying what new functionality will be included or modified from an existing release to meet the requirements of a new set of users. User surveys or questionnaires such as the one shown in Exhibit 3.3 can assist in documenting the information gathered from a gap analysis. An existing SAP project can also expect to inherit new requirements from a previous system release. For instance, an SAP project that is subject to multiple go-lives and releases may have had some requirements deferred from one system go-live to the next due to budget or schedule considerations; the deferred requirements now need to be included in the system design. For projects that have allowed much time to elapse between SAP releases, it may be necessary to reevaluate all previously deferred requirements for consistency, necessity, and completeness.
EXHIBIT 3.4 Use Case for Ordering CDs from a Web Site
[Use case diagram. Actors: Customer, Vendors, and EM. Internet Sales System use cases: Look for CD, Browse by Category, Search by Specific CD, Order Checkout, Credit Card Verification, Customer Notification, E-mail Marketing Material, Review Marketing Material E-mails, Place Information on Web, CD Database C.R.U.D., and Marketing Database C.R.U.D. Distribution System use cases: Orders Pulled out of DB, Send CD to Customer, Notify Customer when CD Is Sent via UPS/Mail, and Run Weekly Report. Use cases are linked by <<includes>> and <<extends>> relationships.]

In an existing production environment, the SAP production team and help desk may get calls from end users reporting deficiencies or software problems. Depending on the nature of the reported problem, its solution may require the application of an OSS (Online Service System) note, a graphical user interface (GUI) upgrade, a patch, a configuration change, or a new system enhancement. These reported problems create opportunities to add new or improve existing system functionality; they can represent a feature that was not addressed by previously gathered requirements or constitute a new requirement. Complex enhancements or system changes may require various areas of the system to be reconfigured, new development objects to be designed, and new security roles to be added. For system requests originating from the help desk, it is highly recommended that a CCB be in place to evaluate the merits, costs, effort, and rationale associated with implementing the request, in order to avoid scope creep or gold plating.
The methods, techniques, and approaches described in the next section can help SAP functional experts determine what processes are in scope for the next SAP go-live or cutover. The aforementioned requirement-gathering techniques can lead to the creation of a BPML that identifies which SAP transactions are in scope, the associated SAP roles/profiles for accessing the SAP transaction codes, the functional and technical specifications, and the end-to-end scenarios, including integration areas (touch points) and the associated reports, interfaces, conversions, and enhancements (RICE) objects. Collecting and drafting requirements is important, but managing them is arguably just as important.
TOOLS FOR MANAGING REQUIREMENTS
While gathering and collecting requirements is important in order to design a system that meets the end users' expectations, another critical element that needs to be considered is how the requirements will be managed, tracked, monitored, and changed after they have been captured. In an SAP implementation a requirement may pass through many statuses during its life cycle. For instance, after a proposed requirement has been approved, it may enter other states, including changed, deleted, deferred, and rejected; a requirement may also be verified and implemented. Additionally, the typical SAP implementation will have requirements that fall under various categories, such as: (1) functionality, (2) performance, (3) security, (4) workflow, (5) development objects, and (6) usability.
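The requirement life cycle just described (proposed, approved, changed, deleted, deferred, rejected, implemented, verified) can be pictured as a small state machine, which is essentially what a requirement management tool enforces. The transition table below is a hypothetical sketch; the specific allowed transitions are assumptions for illustration, not a formal standard:

```python
# Hypothetical sketch of the requirement life cycle described above.
# The allowed transitions are illustrative assumptions.
ALLOWED = {
    "proposed":    {"approved", "rejected"},
    "approved":    {"changed", "deleted", "deferred", "implemented"},
    "changed":     {"approved"},
    "deferred":    {"approved"},
    "implemented": {"verified"},
    "verified":    set(),   # terminal states: no further transitions
    "rejected":    set(),
    "deleted":     set(),
}

def transition(status: str, new_status: str) -> str:
    """Move a requirement to a new status, enforcing the life cycle."""
    if new_status not in ALLOWED.get(status, set()):
        raise ValueError(f"Cannot move a {status!r} requirement to {new_status!r}")
    return new_status

s = transition("proposed", "approved")
s = transition(s, "implemented")
s = transition(s, "verified")
print(s)  # verified
```

The point of encoding transitions explicitly is that an invalid move, such as reviving a rejected requirement without re-proposing it, fails loudly instead of slipping through a spreadsheet.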
Managing the status of requirements in disconnected spreadsheets and text editors, with no version control or audit trails and with restricted access from remote areas, is a logistical nightmare that can compromise the quality of the requirements and subsequently the system's design. Fortunately, many commercial vendors offer solutions for managing requirements within a single repository that includes the necessary security features, version control, audit trails, history logs, and approvals for changing the requirements. Other benefits of using requirement management tools include the ability to link requirements to test cases and integration with third-party test management tools. For instance, Rational's RequisitePro integrates with Mercury Interactive's Quality Center (TestDirector). Other software solutions for requirement management include DOORS, Serena RTM, and CaliberRM.
The full promise of a requirement management tool is not unleashed until a process is defined for managing the requirements, one that takes into account evaluating the impact of a change to a requirement, communicating requirement changes, identifying criteria for prioritizing a requirement, identifying the owner of the requirement, inspecting requirements, and establishing a CCB.
The promise of a requirement management tool is realized when the project stakeholders understand the documented procedures and formal process for changing a requirement and recognize that the requirement management tool is the only official repository for all project requirements. After requirements have been captured and stored in a requirement management tool, they can be evaluated against predefined criteria.
EVALUATING REQUIREMENTS
In her book Effective Software Testing: 50 Specific Ways to Improve Your Testing,3 Elfriede Dustin presents several criteria and provides a checklist to help peer reviewers and authors of requirements verify and measure the quality of a requirement. Dustin measures requirements based on the following nine attributes:

1. Correctness
2. Completeness
3. Consistency
4. Testability
5. Feasibility
6. Necessity
7. Prioritization
8. Unambiguousness
9. Traceability

These attributes are essential for evaluating the quality of captured SAP requirements that subsequently become business processes, scenarios, transaction codes, interfaces, reports, conversions, security roles, workflow logic, and so on.

3Dustin, Elfriede, Effective Software Testing: 50 Specific Ways to Improve Your Testing, Addison-Wesley Professional, December 2002.
Requirements are evaluated through inspections and peer reviews. In an inspection, various stakeholders meet and subject the requirement to the criteria; requirements not meeting the criteria are either modified or deleted. The following descriptions of the attributes help in evaluating the quality of a requirement:

1. Correctness. Ensures that the requirement does not conflict with the company's business rules, policies, standards, regulations, or other previously approved requirements. This attribute ensures that the user's voice (expectation) for the system functionality or expected system task is captured correctly during the requirements-gathering phase.
2. Completeness. All requirement information and elements should be included; a requirement cannot have any missing elements. Since many SAP functional requirements focus on what the user does or the tasks that the user performs, the likelihood of overlooking a requirement is minimized.
3. Consistency. Requirements that are consistent do not conflict with each other. Since SAP is an integrated system with integrated areas (touch points), one has to ensure that the requirements are in harmony with one another.
4. Testability. A requirement needs to be verifiable with one of the methods for software verification (e.g., inspection, demonstration, test, or analysis). If a requirement cannot be verified with any verification method, including the execution of test cases, then it falls short of the definition of a requirement.
5. Feasibility. This criterion ensures that the requirement can be implemented and tested given the project's resources, technology, budget, and time frame. Functional, security, workflow, and development SAP resources can work with the workshop participants to determine which requirements can be built or customized in the SAP system with user exits, ABAP development, or system configuration.
6. Necessity. A requirement needs to add value and have merit for the project. The requirements must address the needs and expectations of the system stakeholders and end users. Question all requirements, asking "What would happen if this requirement did not exist?" If the answer is "Nothing would happen," that is a sign that the requirement is not needed or does not add any value. Requirements that cannot be traced to any origin are probably not needed or are out of scope for the existing implementation.
7. Prioritization. While all requirements are important, some are more so than others. Assigning priorities to requirements allows the CCB and project manager to respond to requirement changes driven by events such as descoping, unexpectedly compressed testing time frames, or budget cuts. Ordinal scales that rank requirements on a sliding scale (e.g., 1 to 5) are helpful for prioritizing the importance of a requirement; alternatively, one can prioritize a requirement as low, medium, or high. Requirements that are crucial to running the business and have no workarounds are the most important, whereas requirements that have workarounds, or that would not bring down the business if left unverified, can be considered of medium importance or "nice to have." For instance, security requirements are extremely important since their implementation helps companies comply with SOX regulations. In addition, functional requirements for making payroll runs, creating sales orders, and making a materials requirements planning (MRP) run are also extremely important because they allow the entity to operate; if they are poorly implemented, the company will suffer business disruptions. The following criteria from IBM's Ascendant help to rank and prioritize requirements:

● Frequency (How often does the process occur?)
● Impact (What is the effect if the process is down?)
● Difficulty (What is the probability that problems occur?)

8. Unambiguousness. Requirements that can be interpreted differently by different sets of users are too broad, not decomposed enough, or ambiguous. Often, during user acceptance testing (UAT), the SAP end user will say, "This is not what the system is supposed to do" after a test case is executed, and the configuration expert will quip, "But that's how I interpreted your requirement/business scenario." This situation is a sign that the requirements were not thoroughly inspected. Requirements need to be stated precisely, without leaving room for doubt or confusion. Requirements cannot be subjective, meaning person X interprets the requirement one way and person Y interprets it differently. For example, ambiguous statements such as "Make the system as fast as possible for response times," "Provide error messages for invalid input from the end user," and "Provide financial reports for month-end closing activities" can be interpreted in multiple ways, which will cause many design changes and testing defects. In the case of the performance requirement, one could build a system with average response times of 10 seconds per SAP transaction code screen, or response times of two minutes per screen.
9. Traceability. A requirement needs to refer back to the source that originated it. For instance, requirements captured in a requirement management tool need to trace to their origin, which can be workshops, surveys, help desk tickets, end-user questionnaires, CI templates, use cases, flow process diagrams, prototypes, and so on. Traceability tells the project where the requirement came from, which helps to justify the existence of the requirement.
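An inspection against these nine attributes amounts to a checklist pass per requirement. The sketch below is purely illustrative (the function and data shapes are assumptions, not Dustin's actual checklist); it flags the ambiguous performance requirement discussed under attribute 8 as failing both testability and unambiguousness:

```python
# Minimal sketch of an inspection checklist over the nine attributes above.
# The function and scoring scheme are illustrative assumptions.
ATTRIBUTES = ["correctness", "completeness", "consistency", "testability",
              "feasibility", "necessity", "prioritization",
              "unambiguousness", "traceability"]

def inspect(requirement: str, checklist: dict) -> list:
    """Return the attributes a requirement fails during peer review."""
    return [attr for attr in ATTRIBUTES if not checklist.get(attr, False)]

# The vague performance requirement from the text fails two checks:
failures = inspect(
    "Make the system as fast as possible for response times",
    {attr: True for attr in ATTRIBUTES} | {"unambiguousness": False,
                                           "testability": False},
)
print(failures)  # ['testability', 'unambiguousness']
```

A requirement with an empty failure list passes the inspection; anything else goes back to its author for modification, exactly as the inspection process above prescribes.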
BUILDING A REQUIREMENTS TRACEABILITY MATRIX
After requirements are drafted, entered into the requirements management tool, and peer reviewed against corporate policies, business rules, and the project scope document, one can proceed to create an RTM. The RTM is a representation of user requirements aligned against system functionality. The RTM is used to ensure that all requirements (not just functional requirements) are being met by system deliverables. The RTM maps requirements to test cases.
The benefits of the RTM include:
■ Bridging requirements to functional and technical design.
■ Addressing the verification question: "Are we building the system right?"
■ Ensuring full requirements coverage: linking all requirements to design, development, and test cases.
■ Keeping track of changes to requirements for retesting (when a requirement is changed, it needs to be retested).

Without an RTM, it is difficult to determine whether the proposed solution fulfills all end-user requirements, which forces project managers to make uneducated, subjective guesses as to whether the system can be deployed into production, rather than basing that decision on verification of the entire set of requirements.
An RTM can be constructed with the help of a requirements management tool in which requirements are stored within a single repository. Constructing RTMs in text editors or spreadsheets is a pointless and fruitless exercise, as requirement changes cannot be effectively communicated or managed, which causes chaos, design flaws, and scope creep.
A traceability matrix is created by associating requirements with the test cases or scenarios that verify them. Each requirement, including parent and child requirements, should have a unique identifier. Test cases or scenarios are associated with the requirements that they represent, and the test scenarios represent the model of the system to be built. A requirement may be verified with one or more test cases; however, a complex, lengthy requirement that needs to be verified with multiple test cases is usually a sign that the requirement has not been decomposed sufficiently. The construction of an RTM may require input and feedback from several stakeholders, although one team, such as the test team, can take ownership of the RTM.
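At its core, an RTM is a mapping from requirement identifiers to the test cases that verify them, and two of the benefits listed earlier (coverage checking and retest tracking) fall directly out of that mapping. The identifiers and structure below are hypothetical, for illustration only:

```python
# Minimal RTM sketch: map requirement IDs to verifying test cases
# and flag gaps. IDs and names are hypothetical.
rtm = {
    "REQ-001": ["TC-010"],            # verified by one test case
    "REQ-002": ["TC-011", "TC-012"],  # verified by two test cases
    "REQ-003": [],                    # no coverage yet: a traceability gap
}

def uncovered(rtm: dict) -> list:
    """Requirements not yet linked to any test case."""
    return sorted(req for req, cases in rtm.items() if not cases)

def retest_set(rtm: dict, changed_reqs: set) -> set:
    """Test cases to re-execute when requirements change."""
    return {tc for req in changed_reqs for tc in rtm.get(req, [])}

print(uncovered(rtm))                        # ['REQ-003']
print(sorted(retest_set(rtm, {"REQ-002"})))  # ['TC-011', 'TC-012']
```

A real requirement management tool maintains this mapping with version control and audit trails, but the coverage and retest queries it answers are conceptually the two functions shown here.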
CHANGE CONTROL BOARD
A CCB is instrumental in enforcing change control policies for requirements. CCBs can enforce policies for the following potential situations:

■ The end user creates a help desk ticket for what is perceived to be faulty system functionality, and the CCB determines that the help desk ticket is, in fact, a system enhancement feature that was not in scope or captured in the original functional requirements and that implementing the feature would cause scope creep.
■ During the UAT phase, after test cases are executed, the end user reports that the system is "missing" functionality, and the CCB determines that the functionality is "missing" because it has been deferred to a future release and thus will not be added during the existing UAT phase.
■ The project experiences budget cuts, and the CCB determines which requirements have to be deferred, modified, or rejected.
■ After prototypes are shown, the end user requests system changes that cause new "proposed" requirements to be considered for the project scope, and the CCB determines the impact, resources, time frames, costs, and benefits of adding the new requirements to the system.
■ After system changes are made and tested, the CCB ensures that all procedures, project documentation, and approvals are adhered to before the object (system change) is transported into a production environment. The CCB also ensures that objects are transported in the correct order and sequence to the target environment.

The CCB has the responsibility of communicating system changes to the project and ensuring that the requirement management tool is the official repository for all project requirements. The CCB determines the priority of a system change and which system changes need to be transported on an emergency basis to avoid system failure or disruption to the business.
Members of a CCB can include functional team leaders, integration manager, project manager, test team manager, Basis leader, security and development team leaders, and SMEs. The CCB plays a vital role in the life cycle of the requirements after they have been captured, baselined, and approved. Without a CCB, the requirements may be modified or deleted without appropriate communication and impact analysis. This may cause project delays, budget overruns, and system failure.
INDEPENDENT VERIFICATION OF REQUIREMENTS
Independent verification allows companies requesting SAP services from system integrators to inspect, from a neutral point of view, testable requirements before the system is designed and configured, and also to verify that the system fulfills the testable requirements before the design or solution is deployed into a live production environment. Ideally, independent testing efforts meet the definition in the Institute of Electrical and Electronics Engineers (IEEE) standards and are performed by individuals who do not have a vested interest in the success or failure of the SAP implementation.
According to IEEE standards,4 the definition of independent verification and validation (IV&V) is as follows:
Independent verification and validation (IV&V) is performed by an individual or organization that is technically, managerially, and financially independent of the development organization.
Given the limited budgets, resources, and stringent deadlines that accompany most SAP implementations and SAP upgrades, it may prove difficult to establish IV&V activities at an SAP project per established IEEE guidelines. SAP projects may have quasi-independent testers that meet one or two of the independence criteria as established by IEEE for IV&V, but not all three. For example, a client may pay a third-party agency not affiliated with the SAP system integrator or client to independently verify testable requirements and test results, which could meet the criteria of technical and managerial independence per IEEE, but the third-party agency may get its funding from the project management office (PMO) or the project manager, and thus would fail to meet the criterion of financial independence.
4. 729/610.12, Glossary of Software Engineering Terminology, New York, 1990.
Despite the inability of most SAP projects to fully satisfy independent verification and validation per IEEE standards, the concept of “independent” testing needs to be introduced to SAP projects whereby a system integrator is paid to implement SAP, thus reducing the notion of a conflict of interest between the system integrator and the client requesting SAP services. A conflict of interest arises when a system integrator is paid to deliver SAP services and is also placed in the position of finding defects in its own delivered SAP design and functionality, which could hamper the system integrator’s ability to meet deadlines and collect incentive bonuses from the client.
The trend and pattern for implementing or upgrading SAP is that the company seeking to implement SAP will spend millions of dollars for an implementation partner to analyze, design, construct, test, implement, and deploy the solution, and provide end-user training. While this approach and model for implementing SAP has been emulated at thousands of SAP projects, it raises the question of independence. In other words, how does the client company know that the implementation partner has properly tested the solution? Is it possible that the implementation partner has a conflict of interest in getting the SAP solution deployed as quickly as possible to meet project deadlines and bonuses, causing testing activities related to test case design, test case execution, and test defect reporting to take a backseat? Even if the implementation partner does not have ulterior motives for improperly testing the system and verifying the testable requirements, can one reasonably assume that the implementation partner has a robust testing methodology, knowledge of automated test tools, and the available resources or expert resources to test and verify all captured requirements?
After pondering all these questions and considering compliance with regulations such as Sarbanes-Oxley (SOX) Section 404 and agencies such as the FDA, FERC, and the Securities and Exchange Commission (SEC), one reaches the inevitable conclusion that having the company that designs and installs the system also test the system is the equivalent of “putting the fox in the chicken coop.”
To mitigate the risk of having an implementation partner that does not verify all requirements or has a conflict of interest, the following recommendations are made:
■ Construct and build an RTM with a tool specifically designed for an RTM, such as Serena RTM, which is preferred over constructing an RTM in disconnected spreadsheets.
■ Hire a firm that provides independent testing services and is not under the authority or control of the implementation partner. The independent firm should verify all requirements, not just the functional requirements. Requirements related to performance, security, disaster recovery, usability, and development objects should be verified. The independent firm has the right to challenge and question the implementation partner’s design of the solution and configuration of the system based on the drafted requirements.
■ Conduct a thorough user acceptance test with qualified candidates representing the end users and SMEs. UAT should prove that the solution works from the point of view of the end users. Merely having system prototypes or presentations is not sufficient to demonstrate to the end users that the solution meets their expectations. UAT is more valuable when end users can execute actual hands-on test cases and report defects where applicable.
■ Implement exit criteria and certification processes at the end of each testing phase.
■ Avoid the use of waivers and exceptions that the implementation partner typically proposes to the client for unfulfilled or unverified requirements in order to promote the system into production.
An RTM provides a unique identifier for each requirement and ensures that each requirement is mapped to a test case. Reports from an RTM can demonstrate which requirements are “orphans” because they have not been tested or verified with a test case. For requirements that are mapped to test cases, the client should seek and request evidence that the test case has actually been executed, with an audit trail and printouts of screen shots where appropriate. Every time a requirement has the status of “verified,” the client company can ask to see which test case was executed and its results to verify the requirement.
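An orphan report of the kind just described can be sketched as a simple query over the matrix rows; the requirement IDs, test case IDs, and status values below are hypothetical illustrations, not output from any real RTM tool.

```python
# Hypothetical RTM rows: each requirement maps to a test case (or None).
rtm_rows = [
    {"req": "REQ-010", "test_case": "TC-0201", "status": "verified"},
    {"req": "REQ-011", "test_case": "TC-0202", "status": "failed"},
    {"req": "REQ-012", "test_case": None,      "status": "not covered"},
]

# "Orphan" requirements have no test case mapped to them.
orphans = [r["req"] for r in rtm_rows if r["test_case"] is None]
print("Orphan requirements:", orphans)

# For "verified" requirements, the client should be able to pull the
# executed test case and its results as audit evidence.
evidence_needed = [r for r in rtm_rows if r["status"] == "verified"]
for r in evidence_needed:
    print(f'{r["req"]}: request execution log for {r["test_case"]}')
```

The same query, run against every row of a project RTM, produces the coverage report the client can use during an audit.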
Implementation partners vary in expertise, breadth, and number of resources when implementing a solution such as SAP. An implementation partner that excels in developing SAP interfaces may not have expertise in developing test plans, exit/entrance criteria, creating automated test cases, designing test cases, developing testing standards, gathering and managing requirements, and so on. Out of political considerations, some implementation partners hide defects identified during the testing phases from the client to minimize client concerns over the system design. Bringing in a firm that has no allegiance to the implementation partner can mitigate the risk of promoting into production a system with defects. Independent firms are not paid by the implementation partner, and thus the implementation partner is unlikely to exert any political pressure on them. An independent firm that specializes in SAP testing will report system defects when the requirements are not verified through the execution of test cases, which can cause delays to the go-live date, but it will also reduce the impact of having an unstable production system and shift the burden onto the implementation partner to build the system correctly based on the approved requirements.
A comprehensive UAT—one that follows the integration-testing phase—allows the end users to interact with the system before it is deployed. A good approach for UAT is to have a preselected list of test scenarios previously executed during integration testing reexecuted during UAT by specially trained members of the end-user community. The Roadmap methodology implies that the integration test should have participants known as “extended users,” who can be end users. However, that in and of itself does not create an independent test. UAT should be a dedicated testing effort where the end users have the opportunity to report errors and defects with the application and challenge the validity of “verified” requirements. UAT participants should perform their testing based on their designated SAP roles (i.e., inventory clerk) as opposed to testing the system with SAP_ALL access. Problems, errors, and defects unresolved from UAT should be taken into account before making a go/no-go decision to deploy the application.
Exit criteria also allow the project to put a safeguard in place to ensure that requirements have been verified. Exit criteria define the conditions under which a testing effort may come to an end before moving on to another testing effort or project phase. Exit criteria can specify that no defects exist or that all requirements have been verified before the testing ends.
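An exit-criteria check of this kind can be sketched as a small gate function; the criteria and counts below are illustrative assumptions, not thresholds prescribed by the book.

```python
# Sketch of an exit-criteria gate run before closing a test phase.
# The specific criteria (all requirements verified, zero open defects)
# and the counts used in the example are illustrative assumptions.

def exit_criteria_met(total_reqs, verified_reqs, open_defects):
    """Exit only when every requirement is verified and no defects remain open."""
    return verified_reqs == total_reqs and open_defects == 0

# Two requirements unverified or one open defect -> the phase stays open.
print(exit_criteria_met(total_reqs=120, verified_reqs=118, open_defects=1))  # False
print(exit_criteria_met(total_reqs=120, verified_reqs=120, open_defects=0))  # True
```

A real project would usually add severity-based thresholds (e.g., no open severity-1 defects) rather than a blanket zero-defect rule.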
Finally, avoid the use of “waivers” and “exceptions” for in-scope requirements that were not fulfilled. Implementation partners that cannot fully design the solution rely heavily on client waivers as a means of expediting project tasks and dealing with problems “later on” or having “the production team” fix the problems. Waivers do have merit when the implementation partner faces a situation outside of its control, such as waiting for a patch from the vendor. However, when used haphazardly for waiving requirements, they increase the risk and instability of the production system. All waivers need to be accompanied by a mitigation or workaround strategy, an expected resolution date, and ownership so that they do not fall off the edge of the earth. The CCB can evaluate how a waiver to a requirement can impact other requirements.
CHAPTER 4
Estimating Testing Costs
Testing is an activity that typically consumes the most resources and budget in any SAP implementation or SAP upgrade. When broken down into smaller subtasks, testing is expensive because it requires labor hours from multiple project resources for test planning, test automation, test execution, recording test results, resolving defects, impact analysis, retesting the application, and applying lessons learned from testing. For example, the creation, peer review, and approval of a single SAP test case may require labor hours from the business analyst, SAP consultant, system architect, quality assurance (QA) representative, subject matter expert (SME), and test team member.
The costs associated with SAP testing are primarily attributed to the following activities and/or events:
■ Hardware equipment (i.e., machines, servers, laptops, printers, etc.)
■ Software costs (i.e., automation, version control, and test management software)
■ Billable hours for testing activities such as test planning, test design, test execution, and defect resolution (i.e., hourly rate for contractors, employees’ labor costs)
■ Outsourcing agreements (i.e., paying a third-party entity to conduct independent or automated testing)
■ Training costs (i.e., learning test procedures for reporting defects)
■ Person-hours spent enforcing quality standards and lessons learned
Accurate estimation of SAP testing costs for a given SAP test cycle (i.e., regression, performance, integration, etc.) is complex because few companies implementing SAP maintain historical records of the time actually spent planning and executing test cases in previous test cycles, and because testing is typically viewed as an activity that is subject to truncation or compression when the project experiences budget cuts or falls behind schedule. In order to properly plan and estimate the costs for testing within an SAP environment, it is necessary to recognize all factors and conditions that contribute to testing costs, such as historical data from previous testing cycles, expert estimates of duration times for test activities, the use and maintenance of automated test tools, the thoroughness needed for documenting test cases and test results, and the level of expertise of the assigned SAP testers. After the testing costs have been estimated and planned, the project manager and test manager can spread the testing costs across the project’s schedule to produce a budget and validate the budget against project constraints.
CHALLENGES IN COST ESTIMATING
The key factors that help to conceal the true testing costs are as follows:
■ Basing cost estimates on poorly defined testing activities and work packages
■ Not considering the cost of hardware, human, and software resources needed to conduct testing (i.e., automated test tools, test management tools, test labs, contractors’ billable rates for conducting a stress test, etc.)
■ Tying testing costs to an ill-developed project schedule (missing dependencies, erroneous activity duration times, missing activities, etc.)
One of the factors contributing the most to the miscalculation of testing costs at most SAP implementations is that many activities related to test execution and test planning are not always clearly identified in a project schedule. For instance, the following activities and events are related to testing but may not be clearly tracked or identified as such in an SAP implementation:
■ Purchasing automated test tools and test management tools
■ Maintaining test tools and test management tools
■ Prototyping testing concepts (i.e., prototype of an automated test case for a project that has recently purchased an automated test tool)
■ Applying lessons learned from a previous testing cycle
■ Ongoing and year-round regression tests to support system upgrades, OSS notes, system enhancements, hot packs, etc.
■ Resolving end-user tickets reported to the production help desk
■ Reconfiguring and retesting the system because the original requirements were not captured or interpreted correctly
■ Training resources on test procedures and test standards
■ Enforcing testing standards
■ Time spent retesting identified defects
■ Time spent developing automated test cases, which can require support from subject matter experts, business analysts, and configuration experts
■ Time allocated for a performance/stress test, which can require assistance and support from the Basis team, DBAs (database administrators), infrastructure team, business analysts, SAP functional experts, advanced business application programming (ABAP) developers, etc.
■ Time and resources allocated to establishing the test environment
■ Time spent executing manual test cases
■ Costs to establish and maintain a test lab, which can include machines, printers, phones, a separate local area network (LAN), etc.
■ Costs of collecting, gathering, and managing test results
■ Time spent maintaining and updating testing deliverables (i.e., test plans, test strategies, test cases, test scripts)
These activities and events are only a partial listing of what it would take to establish and maintain a testing program at an SAP implementation, and they can hide or disguise the “true” testing costs. The biggest obstacle in estimating testing costs, however, is that most projects do not have historical data to recycle from previous testing cycles or a breakdown of testing activities with sufficient granularity to determine the time spent on individual testing subtasks. SAP testing is time consuming and resource intensive for both initial and existing SAP implementations. For an initial SAP implementation, up to 50 percent of all project resources and budget may be dedicated to supporting all testing activities, either directly or indirectly. For an existing SAP implementation with a large functional scope, one can expect to spend over 5,000 person-hours in planning test cases, executing test cases, recording test cases, logging test results, and resolving testing defects for a regression test. But exactly how much time is allocated to each testing activity and testing subtask based on previous testing cycles is an enigma at most projects, since this information is not stored or maintained anywhere. For instance, the creation and execution of a test case to validate a new system change (i.e., creation of a custom SAP transaction with different screen and validation rules) to be transported into the production environment may require multiple activities and person-hours from various project resources, which may not be individually tracked or monitored in a project schedule. The potential testing activities associated with a new system change are as follows:
■ Identification, modification, or creation of test requirement, business process procedure (BPP), and flow process diagram (resources: subject matter expert, business analyst, SAP consultants, end user, system architect, and test team member)
■ Construction of test case with test steps, test conditions, and expected results (resources: SAP consultant, test team member)
■ Rehearsal of the documented test case to ensure that it was documented properly (resources: SAP consultant, test team member)
■ Peer review and approval of test case (resources: SME, end user)
■ Manual execution of test case (resources: test team member)
■ Recording of test results (resources: test team member)
■ Resolution of defects (resources: business analyst, SAP consultant)
■ Retesting of change if defects were identified (resources: test team member)
■ Automation of new process with automated test tool (resources: contractor from the test team who specializes in the test tool, SAP consultants, SME)
All the aforementioned testing activities for promoting a system change into production may consume hundreds of person-hours from project resources, which translate into thousands of dollars, yet these testing costs might be overlooked because the test manager or project management office does not document or track these individual activities in a project schedule. For larger testing efforts such as performance, integration, or regression testing, omitting or overlooking individual testing subtasks and work packages from a scheduling system would obscure the true cost of all testing activities.
Overlooking testing subtasks is one of the many factors that hide the true cost of testing. The cost estimates used for test planning and test execution can also be impacted by other factors, such as the level of SAP expertise of the individuals planning and executing test cases; the enforcement of QA standards for documenting test cases and test results, which can cause rework of testing deliverables; the stability of the system configuration, which may cause rework of automated test cases; and compliance with industry acts such as good manufacturing practices (GMPs) or Sarbanes-Oxley (SOX) that strictly govern the documentation for recording and validating test results. Furthermore, hardware and software resources needed to facilitate and enhance the planning and execution of test cases are costs that must be taken into consideration when estimating expected testing costs.
To overcome the challenges of estimating testing costs, it is necessary to identify all activities, deliverables, and tasks related to testing; the labor hours allocated to each testing task; the rate per hour; and the costs of training, hardware, and software resources needed to support testing. The entire scope of testing is in fact far more encompassing than the mere execution of test cases, which is often viewed as the only activity that produces testing costs.
SCOPE OF COSTS
The scope of testing costs includes all activities associated with requirements gathering, test planning, test design, test execution, test reporting, and defect resolution. Testing activities occur in multiple phases, which include unit, development (reports, interfaces, conversions, and enhancements), scenario, integration, performance, stress, and regression testing.
To implement SAP from scratch or maintain an SAP system, it is necessary to conduct continuous testing cycles and commit hardware, software, and human resources in support of testing tasks. For an established or production-based SAP system, it is possible that a project may have resources assigned to tasks from both the production-based team and the development team. For example, a new system enhancement will require unit, string, integration, and regression testing by the SAP development team before it is deployed into the production environment, and subsequently the new system enhancement will be tested by the production team after they assume ownership of the promoted system enhancement.
After the SAP system has been promoted into production, the scope of testing costs includes modifying test cases, authoring new test cases, identification of data sets, construction of a test environment, costs of testing software and hardware, and staff-hours needed to execute test cases, record test results, and resolve defects. In an SAP production-based mode, other testing costs may include costs from outsourcing agreements to test the application remotely or with automated test cases developed in offshore locations, and also the testing costs associated with the rework and redevelopment of previously implemented system functionality that was promoted but does not behave as expected. For production-based environments, the testing costs are also manifested in other qualifiers such as maintaining the testing software, maintaining and updating testing documentation, applying testing lessons learned, generating reports, and showing testing evidence to support testing audits. Given the inherent nature of the patches, OSS notes, hot packs, system upgrades, and system enhancements that frequent an SAP production system, it is conceivable that multiple test types would need to be executed in order to support a production-based SAP installation.
For initial SAP implementations, the extent and scope of testing costs include the establishment and enforcement of testing standards, staffing the testing team, acquiring and procuring necessary testing resources (hardware, software), executing test cases, recording test results, and resolving defects. Initial SAP implementations may have reduced or limited functional scope, which in turn may decrease the testing costs and the size of the testing team. However, as the SAP implementation grows in functionality, number of modules, modifications, and SAP bolt-on components (i.e., business warehousing) after it has been promoted into the production environment, the size of the test team and the overall testing costs will proportionally increase to account for testing from both the production and the development teams before changes are deployed into the production environment.
The costs of testing for a production-based SAP implementation are primarily a by-product of the amount of regression testing that is needed before new functionality is deployed into the production environment. Companies that want to reduce the manual testing costs of moving objects into the production environment are shifting to test automation or outsourcing agreements as a means of executing and playing back large, functional SAP scenarios unattended and within a short time span before promoting an object into production. On the other hand, companies relying solely on manual testing and recording of test results for moving objects and new functionality into the production environment run an increased risk of not having all the necessary resources and bandwidth to test all impacted combinations of business scenarios and end-to-end functionality before an object is transported into the production environment, which increases the testing risk and testing costs, decreases the stability of the production environment, and creates rework.
TECHNIQUES FOR ESTIMATING COSTS
Estimating testing costs can be facilitated with two models: (1) expert judgment or (2) historical information. With expert judgment, a project member who is experienced with testing activities can estimate the resources and hours needed to conduct a testing task, such as the time needed to design a test case that involves five transaction codes, or the total time needed to execute an end-to-end scenario. A person estimating testing costs with expert judgment may rely on established industry guidelines and benchmarks, software methodologies, or his or her own hands-on experience. For instance, IBM’s Ascendant SAP methodology shows that documenting test results for test cases with an average complexity level may take as long as 30 minutes per person.
On the other hand, historical information provides a repository of information for the number of time units needed by each project-specific resource to conduct a testing task. For instance, historical information may show that a given project member takes on average two business days to resolve a defect with a severity level of one, or that test cases for a particular business area (i.e., warehouse management) take on average five hours to execute per resource. Alternatively, for companies relying on earned value calculations, historical data may show that the testing costs for a previous testing phase required 5,000 person-hours at a total cost of $2 million, which can serve as a basis of estimate for planning testing costs.
Both the historical and expert judgment models may fail to take into account other testing costs that facilitate and support testing activities. The acquisition, purchasing, and maintenance costs of software test tools and hardware resources may not be captured correctly with either model. This makes it imperative that, independent of the technique used for estimating testing costs, the estimate include labor costs at hourly rates, hardware costs, software costs, the number of labor hours needed, traveling costs for remote participants traveling to conduct testing tasks, and the costs from an outsourcing agreement if one exists.
Aggregating and decomposing all testing costs allows the test manager to create a more accurate budget for all testing activities.
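The aggregation described in this section can be sketched as follows. Every rate, hour figure, and cost category amount below is an invented placeholder, not a benchmark from the book.

```python
# Sketch: aggregate testing cost components into one budget figure.
# All rates, hours, and dollar amounts are hypothetical placeholders.

labor = {
    "test planning":     {"hours": 400,  "rate": 95.0},
    "test execution":    {"hours": 1200, "rate": 85.0},
    "defect resolution": {"hours": 600,  "rate": 110.0},
}
hardware_costs = 40_000.0      # test lab machines, servers, printers
software_costs = 75_000.0      # test tool licenses and maintenance
travel_costs = 12_000.0        # remote participants traveling to test
outsourcing_costs = 150_000.0  # third-party testing agreement

# Labor cost = sum of (hours x hourly rate) per testing task.
labor_total = sum(t["hours"] * t["rate"] for t in labor.values())

budget = (labor_total + hardware_costs + software_costs
          + travel_costs + outsourcing_costs)
print(f"Estimated testing budget: ${budget:,.2f}")
```

Decomposing the budget this way also lets the test manager show which share of the total is labor versus tools, travel, and outsourcing.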
CHAPTER 5
Functional Test Automation*
Functional testing assures that your implementation of SAP meets your business requirements. Given the highly configurable and tightly integrated nature of the SAP modules, as well as the probability that you will also integrate in-house applications or third-party plug-ins, it is a critical and challenging task requiring the verification of hundreds or even thousands of business processes and the rules that govern them.
This chapter explores the business case for automating your functional testing, the alternative automation approaches to consider, and organizational considerations and techniques for maintaining and managing your test automation assets.
WHY AUTOMATE?
Test automation is not a panacea, but it can make a dramatic difference in the quality and stability of your SAP deployment over the long term. The key is to understand when automation works and when it does not, and how to assure your success.
Business Case for Automation
There are three key benefits to automation:
1. Expand your test coverage.
2. Save time and resources.
3. Retain knowledge.
*This chapter was authored by Linda Hayes, CTO of WorkSoft, Inc.
Expanding your test coverage is one of the most valuable benefits of automation because it translates into higher quality and thus lower costs associated with downtime, errors, and rework. Over the life of your SAP deployment you will likely experience an increase in the number of business processes it supports, either through the implementation of additional modules or integration with other systems.
As a result, each successive implementation or modification affects a greater number of business processes, which increases the risk and opportunity for failure. Even a 10 percent increase in total functionality still requires testing of 100 percent of the process inventory due to the risk of unexpected impact. The tightly integrated nature of SAP increases this risk.
As Exhibit 5.1 shows, a manual test process cannot keep pace with this expanding workload because the time and resources available for testing are either fixed or even declining. In this exhibit, the lighter arrow indicates the processes that need to be tested and the darker arrow indicates the number of test resources. This combination of an increasing number of processes to test with a static number of testers leads to increased risk and potential cost of failure.
Under the scenario represented in Exhibit 5.1, automation is the only practical answer. It enables one to capture tests as repeatable assets that can be executed for each successive release or deployment, so that the inventory of tests can keep pace with the inventory of business processes at risk.
This repeatability saves time and resources as well. Instead of requiring repetitive manual effort to reverify processes each time changes are introduced, tests can be automatically executed in an unattended mode. This allows your resources to focus on adding new tests to support new functionality instead of constantly repeating existing tests.
[EXHIBIT 5.1 Test Workload Compared to Test Resources: a chart plotting number (#) against time, with lines for business processes, test resources, and risk]
Ironically, when test time is short, testers will often sacrifice regression testing in favor of testing new features. The irony is that the greatest risk to the user is in the existing features, not the new ones. If a business process that the enterprise depends on stops working—or worse, starts doing the wrong thing—then you could halt operations. The loss of a new feature may be inconvenient or even embarrassing, but it is unlikely to be devastating.
This benefit will be lost if the automated tests are not designed to be maintainable as the application changes. If they either have to be rewritten or require significant modifications to be reused, you will keep starting over instead of building on prior efforts. Therefore, it is essential to adopt an approach to test library design that supports maintainability over the life of the application.
Finally, the process of automating your test cases introduces discipline and formality to testing, which results in the capture of application knowledge in the form of test assets. You cannot automate what is not defined. By defining your business processes and rules as test cases, you are converting the experience of subject matter experts (SMEs) and business analysts (BAs) into an asset that can be preserved and reused over the long term, protecting you from the inevitable loss of expertise due to turnover.
When to Automate
Conventional wisdom holds that you should automate your tests only for regression testing; that is, the initial deployment should be performed manually and only successive changes automated. This belief arises from the historical record/playback approach to test automation, which requires that the software be completed and stable before scripts can be captured.
New approaches exist, however, that allow automated tests to be developed well in advance of configuration or code completion. These approaches are further described later in the Test Automation Approaches section.
Using these new approaches, automated tests can serve a dual purpose: They can provide documentation of the “to be” business process as well as deliver test automation. This collapses two steps—documentation and automation—into one, thus further conserving time and resources.
What to Automate
Automation is all about predictability. If you cannot express the precise inputs and expected outputs, you cannot automate a test. This means that it should be used to verify what is known or predicted. Typically this means positive tests, as in assuring that the business process is executed successfully, but it can also be applied to negative tests that verify that when business or field edit rules are violated—such as by invalid data types or out-of-range values—the data is rejected and an error message given. Think of these tests as “making sure” that processes work as expected.
In the context of SAP, the obvious automation candidates are the “to-be” processes: processes that are executed frequently, are critical to the business, and contain integration points (touch points). For SAP-based production systems, SAP transaction ST03 allows for quick filtering of which SAP transaction codes are actually used in production and to what extent/volume.
Further, for each process, the data variations that exercise business and edit rules can also be automated. Applying data-driven techniques to automation enables you to quickly expand your test cases by adding data. This also means, however, that automation is not appropriate for ad hoc, random, or destructive testing. These tests must be performed manually because by their very nature they introduce unexpected or intentionally random conditions. Think of these tests as covering “what-if” scenarios.
Ad hoc tests are uniquely suited to manual testing because they require creativity and are deliberately unpredictable. By allowing automation to take care of what you expect to work, you can free your experts to try to break the system.
Critical Success Factors
Successful test automation requires:

■ Management commitment
■ Planning and training
■ Dedicated resources
■ Controlled test environment
■ Pilot project
No project can succeed without management commitment, and automation is no exception. In order for management to manage, they must know where things stand and what to expect. By letting management know up front what investment you need to succeed with automation, then keeping them informed every step of the way, you can get their commitment and keep it. This requires a comprehensive project plan.
Your automation plan must clearly identify the total costs and benefits of the project up front, provide a detailed project plan with the required resources, timelines, and related activities, then track results and report to management regularly. Costs include selecting and licensing the right tool, training the team, establishing a test environment, developing your test library, and maintaining both the tests and the tool. The number and type of resources you need, the time required, and the specific activities will depend on the approach you adopt.
If and when obstacles are encountered, let management know right away. Get bad news out as early as possible and good news out as soon as you can back it up. Nothing is more disconcerting for management than investing resources without seeing progress or, worse, being hit by sudden surprises. Also keep focus on the fact that the test automation project will last as long as SAP is being used and maintained. Every successive release, update, patch, or new integration will need to be tested and the automated test assets accordingly maintained and reexecuted.
No matter how easy to use the tool is claimed to be, plan for training as well, and perhaps consulting. Learning a tool through trial and error is costly and time consuming, and it is better to get off on the right foot. Since it is easier to get money allocated all at once instead of piecemeal, be careful not to buy the software first and then decide later you need training or additional services.
Although the promise of automation is exciting, realize that test tools do not work by themselves. Buying a test tool is like buying a treadmill—the only weight you lose is in your wallet! You must use the equipment, do the exercises, and sweat it out to get the benefits. Also understand that even though test automation saves time and resources in the long run, in the short term it will require more than manual testing. Make sure management understands this, or you may find yourself with a tool and no one to implement it.
Not only must you have the right resources, you must also commit to a controlled test environment that supports predictable data. Automation is all about repeatability, and you cannot repeat the same tests if the data keeps changing. In most cases the data values are the key to the expected results. Identifying, creating, and maintaining the proper data is not a trivial problem to address and often represents more than half of the overall effort. Do not wait until you are ready to start testing to implement your strategy.
The ideal test data strategy is to have a known data state that can be archived and refreshed for each test cycle. If this is not possible or practical, you may consider using automation to “seed” or condition the data, creating or modifying it to meet your needs.
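The archive-and-refresh idea can be sketched in a few lines. This is an illustrative sketch, not code from the book: the directory layout, the file name, and the `refresh` helper are all assumptions, standing in for whatever backup/restore mechanism your environment provides.

```python
import shutil
import tempfile
from pathlib import Path

base = Path(tempfile.mkdtemp())

# Archived "golden" data state (file names are illustrative).
golden = base / "golden"
golden.mkdir()
(golden / "customers.csv").write_text("Acme Signs\n")

def refresh(env_dir):
    # Discard whatever the last cycle left behind and restore the
    # known data state from the archive.
    if env_dir.exists():
        shutil.rmtree(env_dir)
    shutil.copytree(golden, env_dir)

env = base / "test_env"
refresh(env)
(env / "customers.csv").write_text("mutated by a test cycle\n")
refresh(env)  # the next cycle starts from the same known state
restored = (env / "customers.csv").read_text()
```

Because every cycle starts from the same archived state, the same tests can be repeated with the same expected results.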
If this is your first automation effort, start with a small pilot project to test your project plan assumptions. Invest two to four weeks and a couple of resources in automating a representative subset of your business processes, and carefully document the effort and results during the pilot, as these results can be used to estimate a larger implementation. Since you can be sure you do not know what you do not know, it is better to learn your lessons on a small scale.
Also be sure to commit the right type of resources. As described in the following section on test automation approaches, depending on the approach you adopt you will need a mix of skills that may or may not be part of your existing test group. Do not imagine that having a tool means you can get by with less skill or knowledge: The truth is exactly the opposite.
Common Mistakes
Pitfalls to avoid when automating your SAP testing include:
■ Selecting the wrong tool.
■ Using record and play techniques.
■ Writing programs to test programs.
■ Ignoring standards and conventions.
In order to select the right test tool you must perform an evaluation in your environment with your team. This is the only way to assure that the tool is compatible with your SAP implementation—including any gap applications—and especially that your team has the right skill set to be productive with it. A scripting tool that requires programming skills, for example, will not be successful unless you have technical resources available on your team.
For purposes of this evaluation, make sure you understand how the tool handles not only test development but also test management and especially maintenance, since these are critical to long-term productivity. Do not settle for a simplistic record-and-play script. Insist on understanding how to write robust tests that are well structured, documented, reliable, and easy to maintain.
Record and play is a very attractive approach: Simply perform a manual process and have it automatically recorded into a script. But while these scripts are easy to record, they are unstable when executed and all but impossible to maintain. They do not have the documentation and structure to be readable, and they lack any logic to detect and respond to the inevitable errors and changes that will occur. Even variations in the response time of an SAP transaction can cause failures.
Another drawback to recorded scripts is that they contain hard-coded data. Recording the process of creating a hundred invoices, for example, will yield a script containing the same steps 100 times over. This means if a configuration change is made to any step of the process, it must be made 100 times. Since this is impractical, recorded scripts are rarely reused after changes and must often be re-recorded. Thus, the value of automation is lost.
While the issues with capture/playback can be resolved using advanced scripting code, this leads to the other extreme: writing programs to test programs. This technique requires programming skills, which may exclude your most experienced testers. Further, if each test case is converted to script code, you will have more code than the SAP module does. This approach results in custom code that is also difficult to maintain, especially by anyone other than the original author.
Balancing the trade-offs between ease of use and coding is the subject of the discussion of test automation approaches in the next section.
The last common mistake is to ignore the need for naming standards and test case conventions. If each tester is permitted to adopt their own approach and store their tests wherever they wish, it will be impossible to implement a centralized, unified test library where tests can be easily located and shared. Treat your automated tests as the asset they are and ensure that they are easily found, understood, and maintained.
TEST AUTOMATION APPROACHES
Test automation has steadily evolved over the past two decades (longer if you count mainframes) from record and play, which is all code and no data, to code-free approaches that are all data and little or no code. This trend reflects the fact that code is more costly to develop and maintain than data.
This evolution has followed these four stages:
1. Record and play
2. Data-driven
3. Frameworks
4. Code-free automation
These represent varying combinations of code and data used to construct test cases, and each has different advantages and drawbacks.
Record and Play
Record and play appears to be easy but turns out to be difficult. Recorded scripts usually have a very short useful life because they are hard to read, unstable when executed, and almost impossible to maintain. The time that is saved during the initial development is more than offset by the downstream costs of debugging failed sessions or re-recording after changes. Exhibit 5.2 shows an example of a recorded script.
The ideal use of record and play, oddly enough, is to capture the results of manually executed tests. This assists the tester in documenting results and perhaps reproducing the exact steps that led to any errors.
Traditional test automation tools that cost thousands of dollars per user are overkill for this use. Instead, look for simple session recorders that are available for as low as $100.
Data-Driven
Data-driven techniques address the hard-coded data issue of record and play by removing the data from the scripts and instead reading it from an external file. Typically, a process is recorded once, then script code is added to substitute variables for literal data, read the variable values from a file, and loop until all records are completed.
This approach reduces overall code volume and allows test cases to be added quickly as data records, but requires programming skills to implement. It also results in custom code for each process that must be maintained as the application changes. Exhibit 5.3 reflects the type of changes introduced into a recorded script in order to make it data-driven.
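The data-driven pattern can be sketched in a few lines of Python. This is an illustrative sketch, not code from the book: the file name, the column names, and the `enter_order` placeholder, which stands in for the recorded SAP steps, are all assumptions.

```python
import csv

# Create a small external data file (hypothetical columns).
with open("orders.csv", "w", newline="") as f:
    w = csv.writer(f)
    w.writerow(["Customer", "Product", "Quantity", "Price"])
    w.writerow(["Acme Signs", "Posterboard", "1000", "5"])
    w.writerow(["Baltimore Sun", "Paper", "65000", "1.15"])

def enter_order(customer, product, quantity, price):
    # Placeholder for the recorded "create order" steps; a real script
    # would drive the SAP GUI here and return the observed order total.
    return quantity * price

# Variables are read from the file, looping until all records are completed.
totals = []
with open("orders.csv", newline="") as f:
    for row in csv.DictReader(f):
        totals.append(enter_order(row["Customer"], row["Product"],
                                  int(row["Quantity"]), float(row["Price"])))
```

Adding a test case is now just a matter of appending a row to the data file; the script itself is unchanged.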
EXHIBIT 5.2 Example of Recorded Script

Frameworks

While data-driven techniques succeeded in reducing code volume attributable to hard-coded data, they did not directly address the inefficiencies of not sharing common code to handle common tasks across test cases. They also limited the analyst’s ability to design test flows consisting of multiple scenarios and data.
In response, frameworks evolved as a way to provide an infrastructure to handle common tasks and allow business and quality analysts to write test flows by calling reusable code components.
Typical elements of a framework include:
■ A layer that allows test flows to be constructed as data in a spreadsheet or database.
■ Reusable or generated code components that execute testing tasks against SAP.
■ An infrastructure that handles test packaging, timing synchronization, error handling, context recovery, result and error logging, and other common tasks.
Frameworks require two roles and skills: the test engineer, a programmer or scripter who develops the framework and reusable code components, and the test designer, a business or quality analyst who constructs processes by ordering these components, together with the test data values they require, into a spreadsheet or database.
EXHIBIT 5.3 Example of Data-Driven Script and Data File
A framework offers several advantages. Nontechnical testers can design automated test flows and provide data in a standard format, and test engineers can optimize their coding and maintenance effort by developing reusable components. The framework also takes care of managing and monitoring execution to provide reliable results.
There are three basic types of frameworks: key/action word, screen/window, and class library. Each type can be implemented using text files (spreadsheets) or databases. Spreadsheets are more economical, as most users already have access to them and are familiar with their use, but they are more challenging to manage and maintain because they are not centrally stored or controlled. It is also easier to make typographical or spelling errors in a spreadsheet.
A database, however, requires more cost and effort to implement but is easier to manage. By providing a graphical user interface (GUI) front end, users can select from drop-down lists and otherwise be protected from making input errors. Relational databases also enable more rapid maintenance, as mass changes can be introduced using Structured Query Language (SQL) statements and similar functions.
Key/Action Word Framework A key or action word framework comprises business functions that perform tasks against SAP, such as entering an order or verifying an invoice. Each key or action word has a set of data values associated with it for either input or verification. Exhibit 5.4 illustrates a typical key/action word implementation using spreadsheets.
Key/action word frameworks can be developed internally or acquired from commercial vendors. Some of the commercial tools generate the scripts for common components, then allow test engineers to add additional code to handle errors and decision making at runtime as well as other application-specific logic or functionality.
The maintenance of key/action word frameworks is divided between the code and the data. The code may have to be regenerated or modified when the application behavior changes, and the spreadsheet or database may have to be updated as functionality is enhanced or changed.
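The division between keyword code and spreadsheet data might look like the following sketch. It is a deliberately simplified illustration, not code from the book: the keyword names mirror Exhibit 5.4, the in-memory `ORDERS` dictionary stands in for SAP, and tax is ignored to keep it short.

```python
# Reusable keyword implementations (the "code" half of the framework).
ORDERS = {}

def add_order(customer, product, quantity, price):
    # Stand-in for the business function that enters an order in SAP.
    ORDERS[(customer, product)] = quantity * price

def verify_order(customer, product, expected_total):
    # Stand-in for the business function that verifies the order total.
    return ORDERS.get((customer, product)) == expected_total

KEYWORDS = {"Add Order": add_order, "Verify Order": verify_order}

# The "data" half: each spreadsheet row pairs a keyword with its values.
rows = [
    ("Add Order", ("Acme Signs", "Posterboard", 1000, 5)),
    ("Verify Order", ("Acme Signs", "Posterboard", 5000)),
]
outcomes = [KEYWORDS[keyword](*data) for keyword, data in rows]
```

New test cases are added as rows; the keyword code is only touched when SAP behavior changes.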
Screen/Window This type of framework is a variation of key/action word in that there are reusable code components that perform specific tasks, but in this case they are organized around actions such as data input or verification against each SAP screen. Exhibit 5.5 shows a screen/window implementation using a database and GUI interface.
When a screen changes, the related screen action code components must be modified or regenerated, as well as the related test case spreadsheet or database.
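A screen/window organization might be sketched as one input and one verify routine per screen. Everything here is an assumption for illustration: the screen name (VA01), the field names, and the in-memory `FORM` dictionary that stands in for the screen state.

```python
# One pair of reusable actions per SAP screen (screen name hypothetical).
FORM = {}  # stands in for the VA01 order-entry screen state

def va01_input(values):
    # Type the given field values into the VA01 screen.
    FORM.update(values)

def va01_verify(expected):
    # Check that the screen shows the expected field values.
    return all(FORM.get(field) == value for field, value in expected.items())

# The test case itself is data: a sequence of (screen action, values).
steps = [
    (va01_input, {"Customer": "Acme Signs", "Quantity": 1000}),
    (va01_verify, {"Customer": "Acme Signs"}),
]
outcomes = [action(values) for action, values in steps]
```

When the VA01 screen changes, only its two action routines need regenerating; the test cases remain rows of data.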
Class Library A class library framework is built around code components that are mapped to SAP objects instead of tasks or screens. Each object class has an associated set of actions that can be performed against it—for example, input to a text box or pressing a button. These class/action code components may be developed or generated, with code added for decision-making logic based on the results during execution. Exhibit 5.6 is an example of a spreadsheet implementation for a class library framework.
As with other framework types, these can be organized into test processes in spreadsheets or databases. In this case, the data is provided for each single action.
Since the SAP class library rarely changes, the only code that requires maintenance for functional changes is any decision-making or other custom code that has been added. The rest of the maintenance occurs in the spreadsheets or database.
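The class/action idea can be sketched as follows. This is an illustrative sketch under stated assumptions: the `TextBox` and `Button` classes are toy stand-ins for the SAP GUI object classes, and the object names in the screen map are invented.

```python
# Reusable actions per GUI object class, with the test process stored
# purely as data rows of (object name, action, value).
class TextBox:
    def __init__(self):
        self.value = ""
    def input(self, value):
        self.value = value
    def verify(self, expected):
        return self.value == expected

class Button:
    def __init__(self):
        self.pressed = False
    def press(self, _value=None):
        self.pressed = True

# Hypothetical screen map: object name -> object instance.
screen = {"Customer": TextBox(), "Save": Button()}

steps = [
    ("Customer", "input", "Acme Signs"),
    ("Save", "press", None),
    ("Customer", "verify", "Acme Signs"),
]
results = [getattr(screen[obj], action)(value) for obj, action, value in steps]
```

Because the class code maps to stable GUI object types rather than individual screens, most maintenance lands in the step data.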
EXHIBIT 5.4 Key/Action Word Implementation Using Spreadsheets

Test Name: Add Order
Description: Create orders and verify total and tax

Testcase      Customer         Product      Quantity  Price
Add Order     Acme Signs       Posterboard  1000      5
Add Order     Baltimore Sun    Paper        65000     1.15
Add Order     Crosstown, Inc.  Confetti     1250000   0.05

Testcase      Customer         Product      Tax       Total
Verify Order  Acme Signs       Posterboard  400       5400
Verify Order  Baltimore Sun    Paper        5980      80730
Verify Order  Crosstown, Inc.  Confetti     0         1000000
EXHIBIT 5.5 Screen/Window Implementation Using a Database and GUI Interface
EXHIBIT 5.6 Spreadsheet Implementation for a Class Library Framework
Build versus Buy Any of these framework types can be internally developed or licensed from a commercial vendor. While building your own framework may sometimes appear to be less costly and provide the most flexibility and control, it requires an investment in the development and ongoing support and maintenance of the framework. Since robust frameworks consist of tens of thousands of lines of code, the resource costs and time to create and support this code may be substantial.
Further, if the original framework developers leave, it is common for the replacement engineer to rewrite or restructure the framework code according to their own style or preferences. This adds to the ongoing cost of ownership.
Of course, buying a framework incurs a licensing fee, but this cost may be offset by reducing the continued support and maintenance costs to a fixed-price annual fee. The decision as to which option is more economical should also take into consideration how much custom code is needed in either scenario. If the commercial framework still needs significant code development to support the desired test work flows, it may not offer enough of a cost advantage to offset the license costs.
Code-Free Automation
A new type of test automation solution has recently emerged that does not require any code to be developed at all. This approach includes vendor-supported reusable code components that are mapped to the SAP class library and allows test analysts to construct processes using point and click within a GUI interface. The tester selects the SAP screen, the object, and the action to be performed from a series of drop-down lists, then provides the test data or variable name for any required values.
The difference between the code-free approach and the previous frameworks is that no code is written or generated in order to automate the tests. All test processes are stored as data within a database. Even decision making is supported through a GUI without requiring the development of any additional code.
In this approach, the application screens and fields are defined either by learning the SAP screens or by extracting the screen information directly from the SAP metadata. This information is stored as a map within the database, and it is used to populate the drop-down lists as tests are defined. Test analysts can further select from predefined options for making decisions at runtime to control the process workflow. Exhibit 5.7 depicts an example GUI process editor for a code-free automation solution.
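The “everything stored as data” idea might look like the following sketch. The schema is entirely hypothetical, invented for illustration (no vendor product works exactly this way): a `map` table feeds the editor’s drop-down lists, and each test step is a plain data row.

```python
import sqlite3

# Hypothetical storage for a code-free solution: the screen map feeds
# the editor's drop-down lists, and every test step is a data row.
# No script code is generated or maintained.
db = sqlite3.connect(":memory:")
db.executescript("""
CREATE TABLE map(screen TEXT, object TEXT, action TEXT);
CREATE TABLE steps(seq INTEGER, screen TEXT, object TEXT,
                   action TEXT, value TEXT);
INSERT INTO map VALUES ('VA01', 'Customer', 'input'),
                       ('VA01', 'Save', 'press');
INSERT INTO steps VALUES (1, 'VA01', 'Customer', 'input', 'Acme Signs'),
                         (2, 'VA01', 'Save', 'press', NULL);
""")

# The solution's runtime would interpret each stored row against the
# live GUI; here we simply read the process back in execution order.
process = db.execute(
    "SELECT object, action, value FROM steps ORDER BY seq").fetchall()
```

Because test assets are rows rather than code, impact analysis and mass changes reduce to queries and updates.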
Aside from removing the need for test engineers to develop and maintain custom code, code-free solutions enable automated maintenance. As the application changes, the map components are compared and all differences documented, including the affected test assets. Global changes can be made automatically as well to modify or remove test steps related to changes or deletions. Since all test assets are stored as data, this can be more easily accomplished than finding impact and making changes to code.
Even a code-free solution, however, should support extensibility options in the event that your implementation of SAP contains interfaces to non-SAP applications to fill gaps.

EXHIBIT 5.7 Example GUI Process Editor for a Code-Free Automation Solution

TEST LIBRARY MAINTENANCE

The primary benefit of automating your SAP test processes is for future reuse as configuration changes are made or new patches or versions installed. By automatically reexecuting all of your test processes after changes, you can ensure that there have been no unintended effects. This level of test coverage can prevent costly production errors, failures, and downtime.
In order to enjoy this benefit you must be able to maintain your test processes as changes are made to SAP or your configuration. If you do not update the tests each time you make a change, they will become obsolete. In the same vein, you must add new test processes or test data to verify new functionality as it is added, so that your test coverage continues to expand as your usage of SAP does.
One way to limit maintenance time and overhead is to adopt a framework or code-free approach so that script code maintenance is limited or eliminated entirely and most changes occur in data instead.
Because maintenance is an ongoing requirement, it is critical that it be efficient. Extensive manual changes to custom-coded components may be too time-consuming or difficult, resulting in a reduced useful life for your automated tests. This means you must design your tests to be easily maintained by following development standards and naming conventions, and by enforcing change management and version control on all test assets.
Maintenance Events
There are three primary events that can trigger maintenance of your test assets. The first arises when your SAP configuration changes, whether to modify screens or the business process workflow. Depending on your automation approach, this will require that your test components—whether stored in code, spreadsheets, or a database—be modified to accommodate the differences.
The second maintenance event is a change to a business process due to new or different rules. The SAP screens themselves may not be modified, but the rules that govern the workflow may be changed. In a script code–based framework, this may necessitate scripting or regeneration of code; a code-free solution will need only changes to the test processes.
Changes to data can cause the third type of maintenance event. This change may arise from different data in the test environment itself, or new data may be needed to exercise new processes or rules. Unless you are using record and play, your test data should be located in a text file, spreadsheet, or database.
In each of these cases it is important that your naming conventions or coding standards permit you to easily identify which test assets are affected by changes without individually inspecting every test process or data file. Depending on the automation technique and framework type you select, the impact of a change may be analyzed automatically or manually. Generally, assets stored as code make it more difficult to locate and make changes than assets stored as data. Similarly, data housed in a database is easier to manage and maintain than data stored in text files or spreadsheets.
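One way a naming convention pays off can be sketched directly. The convention here is an assumption invented for illustration: asset file names embed the SAP transaction code, e.g. `VA01_create_order.xls`, so impact can be scoped by name alone.

```python
import tempfile
from pathlib import Path

root = Path(tempfile.mkdtemp())

# Stand-in test assets named under the assumed convention
# <transaction>_<description>.xls
for name in ("VA01_create_order.xls", "VA01_change_order.xls",
             "VF01_create_invoice.xls"):
    (root / name).touch()

# The assets affected by a change to transaction VA01 can be listed
# without opening every test asset.
affected = sorted(p.name for p in root.glob("VA01_*.xls"))
```

Without such a convention, the same question requires inspecting every asset individually.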
Version Control
Because maintenance events result in changes to test assets, it is necessary to institute version control. Prior versions of a test should be kept in case the functionality has to be rolled back, or for audit trail purposes to comply with regulatory requirements.
If your tests are stored as script code, you may use a software source control system that supports check in/check out for code modules and allows you to identify differences and perform merges between versions.
If your tests are stored as data in text files or spreadsheets, you may also use most software source control systems. For test assets stored in a database, make sure the database schema permits multiple versions to be maintained and compared, and that a test asset being modified is protected from being overwritten by someone else.
MANAGING TEST AUTOMATION
Your test automation team will require a mix of skills, depending on the approach you have selected. Estimating the time and effort will also depend on the techniques and tools you have adopted.
Team Roles
As described in previous sections, the code-based approaches and frameworks require a minimum of two roles: test engineers, who develop and maintain the script code components, and test analysts or designers, who construct and execute the test processes and data.
Test designers should be SMEs or BAs who have domain expertise in the business processes to be tested. Test engineers need to have programming skills and either training or experience with the scripting tool of choice. Test analysts need SAP domain expertise and business process experience. If you have adopted a database repository, you will also need database administration skills.
Whether your test framework is internally developed or commercially licensed, you will need to plan for training the test team on how to design and develop reusable, maintainable test processes.
It is important not to skimp on training team members on naming standards and coding conventions. These are essential skills for implementing a test library that can be managed, maintained, and transferred over the long term.
Estimating
Estimating the timeline for your test automation effort requires you to consider the following factors: the automation approach you adopt, the number of business processes to be executed, and the number of business rules to be verified.
For example, if you select the key/action word framework approach you will need to define the inventory of key or action words that are needed, together with any custom decision-making code. Generally, if it takes one hour to record a process, it will take another five to modify the script to add variables, timing, logic, error handling, and so forth, plus another five to integrate it into the framework, test, and debug it. So a one-hour manual process will take about 10 hours to reduce to a script code component. From there, additional rules can be tested by adding rows of test data, which may be rapid if the data is already defined and slower if not.
If you are developing the framework internally, add time to develop and test the infrastructure as well. A typical custom framework infrastructure for a single application is about 50,000 lines of code. Plan for time to design the library, develop, test, and document it. At 25 to 50 tested lines of code (LOC) per developer per day, this translates into about four to eight person-years of development.
Likewise, the screen/window approach can be estimated by counting the number of SAP screens you need to traverse, then multiplying by the number of actions you intend to support for each (e.g., input, verify, and store). Finally, automate one screen of average complexity and use it as a baseline to project the remaining effort.
The class library implementation can be estimated by identifying the number of classes and related actions plus the infrastructure. There are about 12 different GUI object classes in the SAP GUI; if you provide an average of five actions for each, at approximately 50 LOC each, you will have about 3,000 LOC for the classes and actions plus any custom code needed.
After that, estimate the number of test processes and test data values needed; developing each test workflow may take from half an hour to an hour, including writing, testing, and debugging. Adding test data to a workflow to exercise different rules may take only a few minutes by adding rows to a data file.
Code-free approaches require estimates for the number of business processes and rules to be verified. Processes can typically be constructed in 15 minutes to half an hour depending on complexity, and test data can be added in minutes as another row in a table. This does not include any extensions for non-SAP applications.
In all approaches, however, be certain to plan for gathering and analyzing the business process flows and the business rules and related data. Ideally, these were documented during the initial business process engineering phase in the form of the “to-be” processes. If not, plan time to interview application subject matter experts to extract this information. Exhibit 5.8 summarizes the estimating factors for each approach.
OUTSOURCING SCRIPTING TASKS
If you adopt one of the techniques that requires test engineers—and especially if you elect to build instead of buy your framework—your organization will need skilled script coders. If you do not already have these resources available, you have three options: Hire new employees, retain contractors, or outsource.
Outsourcing may offer the benefits of reduced costs and access to resources already skilled in the test tool at hand. However, realize that the test designer role requires domain expertise and cannot be outsourced.
The biggest challenge of outsourcing is facilitating efficient communication and project management between the designers and engineers, especially if the engineers are offshore. Be sure to include extra time for detailed, explicit test case documentation to support remote engineering. Insist on industry best coding practices such as naming standards, coding conventions, version control, and documentation, as discussed previously in this chapter: All are essential to assure the long-term viability of your automated tests.
EXHIBIT 5.8 Effort Estimation Factors by Approach

Approach          Framework             Code Components          Data Components
Key/action word   50,000 LOC at         # business tasks ×       # processes × test case
                  25–50 LOC/day,        200 LOC each             variations × 1 minute
                  or licensed                                    per row
Screen/window     50,000 LOC at         # screens × 4 tasks      # processes × test case
                  25–50 LOC/day,        × 100 LOC each           variations × 1 minute
                  or licensed                                    per row
Class library     50,000 LOC at         10 classes × 5 actions   # processes × number of
                  25–50 LOC/day,        × 50 LOC each,           steps × 30 seconds per
                  or licensed           or licensed              step, plus # test case
                                                                 variations per process
                                                                 × 1 minute per row
Code-free         Licensed              Licensed                 # processes × number of
                                                                 steps × 30 seconds per
                                                                 step, plus # test case
                                                                 variations per process
                                                                 × 1 minute per row
Finally, plan for the results to be reviewed and analyzed by the designers, since they are the owners of the processes and ultimately accountable for their accuracy.
SUMMARY
Test automation is a strategic solution to assuring that your SAP implementation is accurate and reliable both the first time it goes live and after every other time that configuration or software changes are made. Thorough, automated test coverage can save millions in production errors, downtime, and loss of user productivity by detecting issues before they impact the business.
So take the time to select the right tool and technique for your needs, invest the proper resources, and follow best practices so that your test automation library can serve as a long-term asset.
CHAPTER 6

Test Tool Review and Usage
Test tools are composed of test management tools for test planning, test design, and test execution and test tools for capturing manual keystrokes that can be played back with multiple sets of data. Some of the key questions to address before introducing test tools to a project are as follows:

1. Who will support and maintain the test tools and test management tools?
2. Who will provide end-user training for the test tools and test management tools?
3. Which business processes are good candidates for test automation?
4. Will test automation take place in-house or be outsourced to a third-party vendor?
5. How much documentation for test cases, test scripts, business process procedures (BPPs), and flow process diagrams exists within the project to support test automation?
6. Does the project have a dedicated SAP test environment and instance to support test automation?
7. How will the test management tools be customized?
8. When (realistically) can automation efforts be initiated given the project's deadlines, constraints, and resource bandwidth?
9. How will automated test cases be approved, signed off, and maintained?
Commercial SAP test tools bring benefits that exceed mere capturing and playback to support test execution. Various vendors have produced test tools that are compatible with SAP or supplement SAP's eCATT. The tools vary in price, scripting language, and how recorded objects are maintained. Test tools supplement the testing effort but do not replace it. Activities such as test case creation, identification of test data, drafting requirements, and mapping test cases to requirements need to be completed manually before automation is attempted. It is possible that a company may need to acquire one or more test tools from different vendors in order to support all the automation for an SAP project. Companies acquiring test tools and test management tools will need to establish a framework for deciding which processes are suitable for test automation and also for identifying the necessary resources for maintaining the test tools and test management tools.
TEST TOOL REVIEW
Exhibits 6.2 through 6.8 are completed surveys from vendors that have automated test solutions that are compatible with SAP R/3™. Although several vendors were asked to fill out surveys for their test tools, not all vendors responded to the survey. In choosing a test tool, it is important to remember that the test tools facilitate the testing effort but do not drive the testing effort. The methodology and approaches for testing are more important than the chosen test tool.
It is unlikely that a single vendor would offer a test tool that meets all the automation challenges of an SAP project, and consequently the SAP project will have to apply an automation framework that includes one or more test tool vendors in addition to manual testing. SAP also provides its own native testing solution from its test workbench, which is eCATT. Some commercial vendors of test tools have integrated their tools with eCATT to expand, augment, or supplement its capabilities.
EXHIBIT 6.1 Test Tool Evaluation Matrix Template

Test Tool Evaluation Matrix

Tool(s) Name:
Tool Evaluator:
Vendor Name:
Vendor Website:
Date of Evaluation:
Tool Type: (i.e., SAP record/playback scripting test tool for regression, string, integration, smoke testing)
(Please fill in your own. This is an example.)

Criteria, with comments on each:

I. Transaction Capture and Playback

Automated global changes for object changes and deletions: This refers to the ability for the tool to locate all test references to an application object and automatically update them when an object is removed or renamed. Otherwise, the user has to make all the changes manually.

Automated impact analysis for application changes: This refers to the ability for the tool to inform the user of all tests that are affected by a change to an application object. Otherwise, the user must search all tests and manually locate all references.

Tests can be developed concurrently with software development: This refers to the ability to create an automated test without having the software development application available to record against. This is a necessary feature for agile development approaches that require tests to be developed before the code.

No script coding required: Tool allows automation without the need to write, generate, or maintain programming code.

Tool supports recording of non-SAP applications

Common scripting language (i.e., VB): Which is the underlying programming language for scripting and recording (i.e., Visual Basic)?

Allows RFCs to be called: Can recording tool invoke remote function calls?

Produces automatic optional steps: Does the test tool offer the user the ability to automatically convert a captured test step into an optional step without having to add if/else scripting logic? (i.e., in SAP a given screen may or may not appear based on entered data.)

Has analog and object recording capabilities: Analog recording refers to recording that requires knowledge of an object's coordinates (on the x/y axis) within the monitor screen (i.e., Citrix, DOS, bitmaps, etc.). Analog recording is useful when you need to track every movement of the mouse as you drag the mouse around a screen or window. Recognizing objects independent of location is digital recording. Object recording enables you to record on any object in your application, whether or not the test tool recognizes the specific object or the specific operation.

Has repository for managing the properties of recorded objects: Has a repository where attributes and properties of captured or recorded objects are stored and maintained.

Think times can be added without programming or code changes: After the script is recorded the tester can add think times and delays to each test step without having to insert/change existing programming code.

Test tool allows for creation of user-defined functions: Tools allow creation of a user-defined function in order to make tests or components easier to maintain. User-defined functions can be accessed from any test or component.

Test tool offers keyword-driven tests: A keyword-driven test enables you to create and view the steps of your test or component in a modular, table format. Each step in the script is a row in the Keyword View, comprised of individual parts that one can modify.

Has interactive captured screen of captured/recorded process: The tool shows and displays the screen that was captured during the initial recording.

If tool offers captured/recorded screen, user can modify script logic through the captured screen: Example: If an SAP screen has 10 fields when it was recorded and then a new field is added, the tester can modify the script to include the 11th field through the captured screen without having to modify the script's programming language.

Allows renaming of labels for captured fields: The labels that the tool captures for the recorded SAP fields can be modified.

Allows adding of start and stop watches

Vendor offers library of pre-recorded SAP scripts/processes with tool: The vendor offers a library of generically or plain vanilla recorded scripts containing SAP t-codes (i.e., MIGO, CJN20N, etc.).

II. SAP Supported Versions, Applications

Compatible with SAP bolt-ons (i.e., BW, SRM, APO, C-folders, CRM, etc.): List the actual SAP bolt-ons that your tool supports.

Supports SAP GUI, Citrix, and Portals: List the actual SAP versions and SAP front ends that your tool supports.

III. Tool Maintenance

Allows toolbar customizations: The test tool allows the end user to display/suppress/add fields and buttons, relocate buttons, etc.

IV. Tool Installation

Tool installation is Web-based or requires desktop GUI installation (fat or thin client installation)?

Vendor offers floating licenses

V. Tool Integration

Stores test assets in MSDE, SQL Server, or Oracle: This refers to the fact that tests are stored as data and not as script code.

Integrates with Solution Manager: Does the recording tool or test management tool integrate with SAP's Solution Manager and, if so, which one?

If tool integrates with Solution Manager, capabilities exist to execute recorded scripts from Solution Manager

Integrates with test management tool: Recorded scripts can be stored within a test management tool (i.e., TestDirector, QACenter). If so, which test management tool?

If test management tool exists, does it offer version-control capabilities? Or integrate with third-party tool for version control?: Does the test management tool allow one to keep track of multiple versions of the same script, have check-in/out features, offer date/time stamp, script status, etc.?

Integrates with eCATT: The recording tool has API integration to SAP's eCATT. Or the scripting tool can be enhanced with eCATT. Can test data be shared across from eCATT to recording tool?

Integrates with test tools other than eCATT

Open API to integrate with other tools, languages: This refers to the ability of the tool to integrate with any other technology needed to execute an end-to-end test.

VI. Tool Execution

Decision-making options for each test step on pass or fail: This refers to the tool's ability to let users make decisions based on the result of each step at runtime and control the test workflow as a result, without writing any code.

Execution control allows single-step, breakpoints, screen captures, variable and data monitoring: This means the tool allows the user complete control over the execution process, as well as visibility into the value of data at any point, without having to view or interact with programming code.

Capabilities to run unattended and skip failed iterations: If a script has multiple iterations and one of them fails, the script can be instructed to skip the failed iteration and proceed to the next one.

Runs scripts in background and foreground mode

Has scheduling capabilities: Does the test tool offer scheduling capabilities? Can recorded scripts be executed from a scheduler (i.e., run script X every Friday at 10:00 A.M.)?

Scheduling tool offers execution with dependencies: Can scripts from the test tool be executed from the scheduler in a given sequence (i.e., run script Y before scripts X and Z every Friday at 11:00 A.M., but if script Y fails do not execute script Z)?

Contains debugger: The tool has capabilities to set up "watch variables," has a compiler, allows tracing.

Allows for automatic synchronization between client and server: This means that the script playback time automatically adjusts to the SAP response times so that the script does not get ahead of SAP during execution (playback) in the event that the SAP server is experiencing delays/lags. Tool offers automatic timing synchronization management.

Built-in error handling capability: The tool framework has an automated way of responding to application or test errors without requiring programming code.

Built-in context recovery capability: The tool framework has an automated way of repositioning the application to a known state after a previous test failure so the test session can continue with the next test.

Automatic timing for each step, process, and suite: The tool automatically records timing intervals at every level of detail without requiring the use of stopwatches or other coding techniques.

VII. Tool Data

User-defined data filtering on all views: The tool allows users to sort and select based on the value of any test element to quickly locate and view desired tests from a large inventory.

All test assets stored as data in relational database: Test data is not stored in spreadsheets or other desktop tools, but instead is stored in a central database that is easily shared and controlled.

Database verification and data acquisition: Tests can either retrieve data from a database during execution or verify the value in a database without writing any code.

Provides Excel-based functions (i.e., TRIM, MID, etc.) to clean up captured text: Captured text can be formatted and manipulated within the test tool's datasheets (i.e., the status bar message "Sales Order 001" can be formatted to extract only the value "001" through Excel-based formulas).

Data-driven tests (i.e., pulls data from spreadsheets, external sources, etc.): The recording tool contains datasheets for manipulating or capturing script data. The tool can work with data residing in external data files (i.e., tab delimited, .txt, etc.).

Allows for verification points (objects, database values, text): Can objects and text be verified during script playback? (For example, check that an enter button is disabled; check that quantity on hand is at least 10 before carrying out the sales order.)

Tool offers regular expressions (i.e., text character matching): Matches characters from captured or evaluated text based on a pattern (i.e., verify all SAP sales order numbers from the status bar message starting with a "5," such as Sales Order Number 5*).

Capabilities for creating external data files: Data captured or generated during the script playback (execution) can be sent to an external data file.

Allows data seeding and data correlation: This means that data can be passed from one recorded SAP t-code to the next within the same script (i.e., a single script is recorded to pass data from the Sales Order t-code (VA01) to the Delivery t-code (VL01)). This is the stringing together of SAP transactions within the same script or automated test case.

Allows variable declaration: Variables of type INT, FLOAT, CHAR, etc. can be declared.

Captures screen text (i.e., status bar messages): Text from SAP can be captured and stored on a spreadsheet (i.e., status bar messages, informational screen text, text within an SAP grid, text for a field, text from a drop-down list, etc.).

Provides playback with multiple data access methods (i.e., random): Data access method can be sequential, random, etc.

VIII. Tool Security

User and group security and permissions for each test asset component: The test repository assets can be managed according to user and group security and permissions to control access based on user roles.

Allows SAP roles-based testing

IX. Vendor Support

Vendor offers web-based patches, downloads to upgrade tool: Is there a vendor website from which patches can be downloaded?

SAP Corporation has formally certified the tool

X. Training

Vendor offers test tools training: Does the vendor offer an official training program with instructors, books, class exercises?

Vendor offers certification examination in test tool: Is there an examination that testers can take to demonstrate proficiency and skill level in the test tool?

XI. Test Reporting and Review

Result logs store screen captures: Shows screens captured during initial script recording.

Results log shows status for each row of data (iteration): Shows the status for each iteration (i.e., out of 10 iterations, 9 passed).

Results log includes date and time stamp

Results log can be saved in different formats (i.e., HTML, .doc)

Creates automatic test results files (test logs): Results logs show both actual and expected results. Results logs show whether a verification point passed.

User-defined query and reporting or charting capability: Users can easily develop their own database queries and present the results in charts or reports.

English-narrative documentation produced automatically from test processes: As the test is developed, English-like documentation is automatically produced and maintained. There is no need for a separate documentation step.

Export to text capability for all test assets: All test assets can be exported from the database into a text file or other format for integration with other tools.

User-extensible classes, actions, and functions: The user can extend the capability of the tool to automate custom and third-party controls.

User can extend interface with unlimited new attribute fields: Refers to the test management capability and to the ability to add data elements as text, combo-box, checkbox, etc. This allows the user to customize the information that is captured and reported so that it conforms to the user's internal processes and terminology.

Allows many-to-one and one-to-many requirements traceability: Single or multiple requirements can be traced to a single test or many tests.

Supports full indirection for all test processes and data file names: It means you can pass the names of these components as variables to allow the test data to control the flow and content of test execution.

Is the tool language and platform independent?
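Two of the data criteria above, Excel-style text cleanup and regular expressions, amount to simple pattern matching on captured status-bar text. A minimal sketch of that kind of check, assuming the message formats used as examples in the matrix (the function names are illustrative, not any vendor's API):

```python
import re

def extract_order_number(status_bar_text):
    """Pull the trailing number out of a message like 'Sales Order 001'."""
    match = re.search(r"(\d+)\s*$", status_bar_text)
    return match.group(1) if match else None

def order_number_starts_with(status_bar_text, prefix):
    """Verification point: does the captured order number start with prefix?"""
    number = extract_order_number(status_bar_text)
    return number is not None and number.startswith(prefix)

extract_order_number("Sales Order 001")                   # returns "001"
order_number_starts_with("Sales Order Number 5012", "5")  # returns True
```

A code-free tool performs the same extraction and comparison internally; the value of the regular-expression criterion is that the pattern (digits at the end of the message, a required leading "5") can be expressed once and reused across every iteration of a data-driven test.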
EXHIBIT 6.2 Test Tool Evaluation Matrix (Vendor: Arsin Corporation)

Tool Evaluator: Bob Koche
Vendor Name: Arsin Corporation
Vendor Website: www.arsin.com
Date of Evaluation: 02/09/06
Tool Offerings: SAP Test Automation Library, Validation Engine and Automated Test Management & Creation Toolset—works in conjunction with test automation tools from Mercury and SAP

I. Transaction Capture and Playback

Automated global changes for object changes and deletions: Yes—test assets are object-based.

Tests can be developed concurrently with software development: No.

No script coding required: Yes, allows complex automatic validation without any coding. Removes the need for coding in a GUI test tool.

Tool supports recording of non-SAP applications: N/A. Not a recording test tool.

Common scripting language (i.e., VB): Visual Basic scripting and ABAP development language for Validation Engine.

Allows RFCs to be called: Yes.

Produces automatic optional steps: Yes, through the Atlas Library Wrapper.

Has analog and object recording capabilities: N/A.

Has repository for managing the properties of recorded objects: Yes, through the front end GUI test tool like Quick Test Pro.*

Think times can be added without changing programming code: Yes, through the front end GUI test tool like Quick Test Pro.*

Test tool allows for creation of user-defined functions: Yes, Validation Engine allows for creation of user-defined functions.

Test tool offers keyword-driven tests: No.

Has interactive captured screen of captured/recorded process: Yes, through the front end GUI test tool Quick Test Pro.*

If tool offers captured/recorded screen, user can modify script logic through the captured screen: Yes.

Allows renaming of labels for captured fields: Yes.

Allows adding of start and stop watches: No.

Vendor offers library of prerecorded SAP scripts/processes with tool: Yes—Extensive Component Library of Transactions and Processes.

II. SAP Supported Versions, Applications

Compatible with SAP bolt-ons (i.e., BW, SRM, APO, C-folders, CRM, etc.): Yes, all SAP ERP and SAP New Dimension Products such as APO, CRM, BW, etc.

Supports different versions of SAP (i.e., SAP GUI, Citrix, Netweaver, and Portals): No answer provided.

III. Tool Maintenance

Allows toolbar customizations: No.

IV. Tool Installation

Tool is web-based or requires desktop GUI installation (fat or thin client installation)?: Web-based application.

Vendor offers floating licenses: Yes.

V. Tool Integration

Stores test assets in MSDE, SQL Server, or Oracle: Yes—any DBMS.

Integrates with Solution Manager: Not at present time. Planned for future release.

If tool integrates with Solution Manager, capabilities exist to execute recorded scripts from Solution Manager: N/A.

Integrates with test management tool: Yes. Integrates with Mercury Interactive's TestDirector** (Quality Center).

If integration with test management tool exists, does it offer version-control capabilities? Or integrate with third-party tool for version control?: Yes, depending on the test management tool that Atlas integrates with.

Integrates with eCATT: Yes.

Integrates with test tools other than eCATT: Yes, with Mercury's Quick Test Pro* and Business Process Testing.***

Open API to integrate with other tools, languages: Yes.

VI. Tool Execution

Decision-making options for each test step on pass or fail: Not under the current release.

Execution control allows single-step, breakpoints, screen captures, variable and data monitoring: No.

Automated impact analysis for application changes: Not under the current release.

Capabilities to run unattended and skip failed iterations: Yes, can run unattended and skip failed iterations.

Runs scripts in background and foreground mode: Yes.

Has scheduling capabilities: Yes.

Scheduling tool offers execution with dependencies: The current version supports simple group scheduling. A future version will support conditional scheduling.

Contains debugger: N/A.

Allows for automatic synchronization between client and server: Yes, through the front end GUI test tool Quick Test Pro.*

Built-in error handling capability: Yes.

Built-in context recovery capability: Not under current release.

Automatic timing for each step, process, and suite: Timing is tracked for each step.

VII. Tool Data

User-defined data filtering on all views: No.

All test assets stored as data in relational database: Yes—all objects created using Atlas™ are stored in a DBMS.

Database verification and data acquisition: Yes.

Provides Excel-based functions (i.e., TRIM, MID, etc.) to clean up captured text: Not Excel-based. A complete library of functions is available for validation purposes.

Data-driven tests (i.e., pulls data from spreadsheets, external sources, etc.): Yes. Pulls data from external files, databases, and/or spreadsheets.

Allows for verification points (objects, database values, text): Yes. The Validation Engine provides great flexibility and coverage of all standard and custom SAP objects.

Tool offers regular expressions (i.e., text character matching): Yes, this is a feature of the Effecta™ Validation Engine.

Capabilities for creating external data files: Yes.

Allows data seeding and data correlation: Yes.

Allows variable declaration: Yes, through front end GUI test tools like Quick Test Pro.*

Captures screen text (i.e., status bar messages): Yes.

Provides playback with multiple data access methods (i.e., random): No.

VIII. Tool Security

User and group security and permissions for each test asset component: Yes, for validation objects.

Allows SAP roles-based testing: Yes.

IX. Vendor Support

Vendor offers web-based patches, downloads to upgrade tool: Yes, patches available via the Web.

SAP Corporation has formally certified the tool: Not at this moment. This is in process.

X. Training

Vendor offers test tools training: Yes.

Vendor offers certification examination in test tool: Yes.

XI. Test Reporting and Review

Results logs store screen captures: Yes, through front end GUI test tools like Mercury Interactive's Quick Test Pro.*

Results log shows status for each row of data (iteration): Yes, results show the status of each validation step in the test script.

Results log includes date and time stamp: Yes.

Results log can be saved in different formats (i.e., HTML, .doc): Yes, as a MS Word file (.doc).

Creates automatic test results files (test logs): Yes—reports on test results are provided. Results can be fed into Mercury Interactive's test management application (TestDirector** or Quality Center).

User-defined query and reporting or charting capability: User-defined querying is not available under the current release. However, canned reporting is available.

English-narrative documentation produced automatically from test processes: Not under the current release.

Export to text capability for all test assets: Yes.

User-extensible classes, actions, and functions: Yes.

User can extend interface with unlimited new attribute fields: No.

Allows many-to-one and one-to-many requirements traceability: Yes.

Supports full indirection for all test processes and data file names: No.

Language and platform independent: Not language independent. Yes, it is platform independent.

* Assumes Quick Test Pro has been acquired.
** Assumes TestDirector (Quality Center) has been acquired.
*** Assumes BPT has been acquired.
Reprinted with permission from Arsin Corporation.
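The "execution with dependencies" criterion in these matrices describes conditional scheduling: run script Y before X and Z, and skip Z if Y fails. The logic itself is simple; a minimal sketch, where the script names and the runner callable are illustrative assumptions rather than any vendor's API:

```python
# Conditional scheduling as described by the "execution with dependencies"
# criterion: a script whose dependency failed is skipped, not run.

def run_with_dependencies(scripts, depends_on, run):
    """Run scripts in order, skipping any whose dependency did not pass.

    scripts:    ordered list of script names
    depends_on: dict mapping a script to the script it requires
    run:        callable(name) -> bool (True means the script passed)
    """
    results = {}
    for name in scripts:
        dep = depends_on.get(name)
        if dep is not None and not results.get(dep, False):
            results[name] = False          # dependency failed or never ran: skip
            continue
        results[name] = run(name)
    return results

# Example: Z depends on Y; simulate Y failing, so Z is skipped.
outcome = run_with_dependencies(
    ["Y", "X", "Z"],
    {"Z": "Y"},
    run=lambda name: name != "Y",
)
# outcome == {"Y": False, "X": True, "Z": False}
```

Whether a tool exposes this as a scheduler feature or leaves it to the test engineer is exactly what the survey question probes; Arsin's answer above (group scheduling now, conditional scheduling later) shows the distinction matters in practice.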
107
(Con
tinu
es)
EXHI
BIT
6.3
Test
Too
l Eva
luat
ion
Mat
rix
(Ven
dor:
Aut
otes
ter,
Inc.
)
Tes
t T
ool E
valu
atio
n M
atri
x
Too
l(s)
Nam
e:A
utoT
este
r O
NE
Spe
cial
Edi
tion
for
SA
PT
ool E
valu
ator
:Mun
dy P
eale
and
Mic
hael
Vils
Ven
dor
Nam
e:A
utot
este
r, In
c.V
endo
r W
ebsi
te:w
ww
.aut
otes
ter.c
omD
ate
of E
valu
atio
n:02
/02/
2006
Too
l Off
erin
gs:S
AP
reco
rd/p
layb
ack
scri
ptin
g te
st t
ool f
or r
egre
ssio
n an
d in
tegr
atio
n te
stin
g
Cri
teri
aC
omm
ents
/Res
pons
es
I. T
rans
acti
on C
aptu
re a
nd P
layb
ack
Aut
omat
ed g
loba
l cha
nges
for
obj
ect
chan
ges
and
dele
tion
sU
ser-
defi
ned
thro
ugh
vari
able
s. I
f a
glob
al c
hang
e oc
curs
, onl
y th
eva
riab
le n
eeds
to
be m
odif
ied.
Test
s ca
n be
dev
elop
ed c
oncu
rren
tly
wit
h so
ftw
are
deve
lopm
ent
Yes
.
No
scri
pt c
odin
g re
quir
edA
uto
Com
man
d fa
cilit
y al
low
s us
er t
o cr
eate
scr
ipts
wit
hout
codi
ng.
Tool
sup
port
s re
cord
ing
of n
on-S
AP
appl
icat
ions
Yes
. Als
o te
sts
GU
I, H
ost/
Leg
acy,
and
Web
app
licat
ions
.
Com
mon
scr
ipti
ng la
ngua
ge (
i.e.,
VB
)T
he u
nder
lyin
g pr
ogra
mm
ing
lang
uage
is p
ropr
ieta
ry E
nglis
h-lik
ela
ngua
ge d
esig
ned
for
nonp
rogr
amm
ers
or b
usin
ess
user
s.
Allo
ws
RFC
s to
be
calle
dA
utoT
este
r ca
n m
ake
calls
to
user
-def
ined
DL
Ls.
Prod
uces
aut
omat
ic o
ptio
nal s
teps
Log
ical
ope
rato
rs f
or s
cree
n id
enti
fica
tion
can
be
adde
d at
will
.T
here
is n
o co
mpi
lati
on o
f th
e sc
ript
. A c
aptu
red
test
ing
step
can
be e
dite
d to
ref
lect
the
pos
sibi
lity
that
it m
ay n
ot b
e pr
esen
t an
dst
eps
adde
d to
com
pens
ate
for
this
.
06_4782 2/5/07 10:43 AM Page 107
108
EXHI
BIT
6.3
(Con
tinu
ed)
Cri
teri
aC
omm
ents
/Res
pons
es
Has analog and object recording capabilities: AutoTester One supports both analog and object recording.
Has repository for managing the properties of recorded objects: Doesn't have a repository—attributes and properties of captured or recorded objects are stored in the script itself.
Think times can be added without changing programming code: After the script is recorded, the tester can add think times and delays to each test step without having to insert/change existing programming code.
Test tool allows for creation of user-defined functions: Tools allow creation of user-defined functions (subroutines or called modules) in order to make tests or components easier to maintain. User-defined functions can be accessed from any test.
Test tool offers keyword-driven tests: Test scripts can be viewed in icon-based views with simplified descriptive text of testing steps, or viewed with the full underlying code. Scripts in the icon-based view can group testing steps as well. All steps can be edited at will by the user.
Has interactive captured screen of captured/recorded process: Full screens are not captured. Individual component and user interactions are.
If tool offers captured/recorded screen, user can modify script logic through the captured screen: If an SAP screen had 10 fields when it was recorded and then a new field is added, the tester can modify the script to include the 11th field through simple editing within the script.
Allows renaming of labels for captured fields: Yes.
Allows adding of start and stop watches: Time stamping and transaction timings can be added.
Vendor offers library of prerecorded SAP scripts/processes with tool: N/A.
II. SAP Supported Versions, Applications
Compatible with SAP bolt-ons (i.e., BW, SRM, APO, C-folders, CRM, etc.): Compatible with any GUI or web-based application. Does not have to be SAP based.
Supports different versions of SAP (i.e., SAP GUI, Citrix, Netweaver, and Portals): SAP® R/3® client software Version 4.0B, 4.5a, 4.5b, 4.6b, 4.6c, 4.6d, 6.10, or 6.20.
III. Tool Maintenance
Allows toolbar customizations: The test tool does not allow the end user to display/suppress/add fields and buttons, relocate buttons, etc.
IV. Tool Installation
Tool is web-based or requires desktop GUI installation (fat or thin client installation)? Resides as a GUI tool on the Windows desktop (fat client installation).
Vendor offers floating licenses: Yes.
V. Tool Integration
Stores test assets in MSDE, SQL Server, or Oracle: N/A.
Integrates with Solution Manager: The test management tool (Test Organizer) does not integrate with SAP's Solution Manager.
If tool integrates with Solution Manager, capabilities exist to execute recorded scripts from Solution Manager: N/A.
Integrates with test management tool: Recorded scripts can be stored within a test management tool called Test Organizer, which is integrated with AutoTester One.
If integration with test management tool exists, does it offer version-control capabilities? Or integrate with third-party tool for version control? Test Organizer does provide version-control capabilities. Test Organizer has a check-in/out feature, script status, and management coverage reporting of testing results.
Integrates with eCATT: AutoTester One does not integrate with eCATT; it integrates with the Scripting Facility.
Integrates with test tools other than eCATT: N/A.
Open API to integrate with other tools, languages: Yes (proprietary).
VI. Tool Execution
Decision-making options for each test step on pass or fail: User defined.
Execution control allows single-step, breakpoints, screen captures, variable and data monitoring: Yes.
Automated impact analysis for application changes: N/A.
Capabilities to run unattended and skip failed iterations: If a script has multiple iterations and one of them fails, the script can be instructed to skip the failed iteration and proceed to the next one.
Runs scripts in background and foreground mode: Foreground only.
Has scheduling capabilities: AutoTester One has a scheduling module. Scripts can be set to run at specific times or in a countdown mode.
Scheduling tool offers execution with dependencies: Scripts can be executed from the scheduler in a given sequence. Dependencies are scripted.
Contains debugger: Not a component of AutoTester One.
Allows for automatic synchronization between client and server: Script playback time automatically adjusts to the SAP response times so that the scripts do not get ahead of SAP during execution (playback) in the event that the SAP server is experiencing delays/lags.
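The synchronization behavior described in this row can be pictured as a polling loop: before each scripted step, playback waits until the (simulated) server is no longer busy. This is an illustrative Python sketch of the general technique, not AutoTester One's actual mechanism; the busy-until clock value is invented for the example.

```python
import time

def wait_for_server(server_busy_until, poll=0.01, timeout=1.0):
    # Poll until the simulated server lag has elapsed, so a playback
    # script would never issue its next step while SAP is still busy.
    deadline = time.monotonic() + timeout
    while time.monotonic() < server_busy_until:
        if time.monotonic() > deadline:
            raise TimeoutError("server still lagging")
        time.sleep(poll)

start = time.monotonic()
wait_for_server(start + 0.05)   # simulate a 50 ms server lag
elapsed = time.monotonic() - start
print(f"proceeded after {elapsed:.2f}s")
```

The same pattern generalizes to any readiness signal (a screen field appearing, a status message changing) in place of the clock comparison used here.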
Built-in error handling capability: User defined and scripted—not automatic.
Built-in context recovery capability: User defined and scripted—not automatic.
Automatic timing for each step, process, and suite: Relative execution times are notated automatically in the results file.
VII. Tool Data
User-defined data filtering on all views: Test Organizer provides filtering capabilities on all reporting views.
All test assets stored as data in relational database: Test assets can be stored in the Test Organizer module of AutoTester One, which uses a relational database.
Database verification and data acquisition: N/A.
Provides Excel-based function (i.e., TRIM, MID, etc.) to clean up captured text: Captured text can be formatted and manipulated within the variable that stores the data (proprietary). Data read from external files can also be formatted.
Data-driven tests (i.e., pulls data from spreadsheets, external sources, etc.): The tool can work with data residing in external data files (i.e., comma delimited, .txt files, and Excel spreadsheets).
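The data-driven pattern in this row, one external data row per test iteration, can be sketched in plain Python (the tool itself uses its own proprietary language; the file contents and field names below are invented for illustration):

```python
import csv
import io

# Hypothetical comma-delimited data file: one row per test iteration.
DATA = "material,quantity\nM-01,5\nM-02,12\n"

def run_iteration(material, quantity):
    # Stand-in for playing back the recorded transaction with this row's
    # values typed into the SAP fields.
    return f"created order for {quantity} x {material}"

results = [run_iteration(r["material"], int(r["quantity"]))
           for r in csv.DictReader(io.StringIO(DATA))]
print(results)
```

Swapping `io.StringIO(DATA)` for an open file handle would drive the same loop from a real .txt or .csv export.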
Allows for verification points (objects, database values, text): Objects and text can be verified during script playback (i.e., check that an enter button is disabled; check that quantity on hand is at least 10 before carrying out the sales order, etc.).
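The two example verification points in this row can be sketched as simple predicates over captured screen state. This is an illustrative Python sketch; the screen dictionary and its keys are hypothetical stand-ins for what a playback engine would capture.

```python
def verify(screen):
    # Evaluate both verification points; return the list of failures,
    # so an empty list means the playback step passed.
    failures = []
    if screen["enter_button_enabled"]:
        failures.append("enter button should be disabled")
    if screen["qty_on_hand"] < 10:
        failures.append("quantity on hand below 10; skip sales order")
    return failures

screen = {"enter_button_enabled": False, "qty_on_hand": 25}
print(verify(screen))  # empty list: both verification points passed
```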
Tool offers regular expressions (i.e., text character matching): Matches characters from captured or evaluated text based on a pattern (i.e., verify that all SAP sales order numbers from the status bar message start with a "5", such as Sales Order Number 5*).
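The "Sales Order Number 5*" check above maps directly onto a regular expression: capture the number from the status bar message, then test its first digit. A minimal Python sketch (the message text is illustrative):

```python
import re

STATUS_RE = re.compile(r"Sales Order Number (\d+)")

def order_starts_with_5(message):
    # Capture the order number from the status bar text and verify
    # it begins with "5", mirroring the 5* wildcard in the example.
    m = STATUS_RE.search(message)
    return bool(m) and m.group(1).startswith("5")

print(order_starts_with_5("Sales Order Number 500123 has been saved"))  # True
print(order_starts_with_5("Sales Order Number 400987 has been saved"))  # False
```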
Capabilities for creating external data files: Not offered within tool. Excel data file format and text files are supported.
Allows data seeding and data correlation: Data can be passed from one recorded SAP t-code to the next within the same script (i.e., a single script is recorded to pass data from the Sales Order t-code (VA01) to the Delivery t-code (VL01)). Multiple SAP transactions can be included within the same script.
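The VA01-to-VL01 correlation in this row amounts to capturing an output of one transaction and feeding it into the next. The Python sketch below is purely illustrative; the two functions, the message text, and the order number are invented stand-ins for recorded transaction playback.

```python
def run_va01(material):
    # Pretend VA01 finishes with a status bar message carrying the
    # newly created sales order number.
    return "Sales Order Number 500123 created for " + material

def run_vl01(order_number):
    # Pretend VL01 consumes the seeded order number.
    return f"Delivery created for order {order_number}"

status = run_va01("M-01")
order_no = status.split()[3]        # capture "500123" from the message
print(run_vl01(order_no))
```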
Allows variable declaration: Variables can be text, numeric, object, bitmap, macro, or date, and do not have to be declared.
Captures screen text (i.e., status bar messages): Text from SAP can be captured and stored on a spreadsheet (i.e., status bar messages, informational screen text, text within an SAP grid, text from a field, text from a drop-down list, etc.).
Provides playback with multiple data access methods (i.e., random): The data access method can be sequential, random, etc.
VIII. Tool Security
User and group security and permissions for each test asset component: The Test Organizer has permission levels for access to components stored within the project.
Allows SAP roles-based testing: Yes.
IX. Vendor Support
Vendor offers web-based patches, downloads to upgrade tool: A website exists from which patches and product updates can be downloaded.
SAP Corporation has formally certified the tool: Certified using the GUILIB testing interface with SAP versions up to 4.5b. Scripting Facility support does not have a formal certification from SAP. SAP 4.7 testing certification is only available for eCATT integration.
X. Training
Vendor offers test tools training: Online and on-site training are offered.
Vendor offers certification examination in test tool: There is a basic training class certificate available after completion of training.
XI. Test Reporting and Review
Results logs store screen captures: Shows screen/data captured during script recording (benchmark) and screen/data captured during playback.
Results logs show status for each row of data (iteration): Shows the status for each iteration (i.e., out of 10 iterations, 9 passed). All iterations are user defined.
Results logs include date and time stamp: Date and time stamps are included.
Results logs can be saved in different formats (i.e., HTML, .doc): Results can be exported to text formats.
Creates automatic test results files (test logs): Result logs show both actual and expected results. Results logs show whether a verification point passed.
User-defined query and reporting or charting capability: Test Organizer provides several reporting capabilities.
English-narrative documentation produced automatically from test processes: Some descriptive text is included in the script during the recording process. Specific additional text can be added at will to create a documented test case within the script. The text-based script becomes the document.
Export to text capability for all test assets: Scripts are in text format.
User-extensible classes, actions, and functions: N/A.
User can extend interface with unlimited new attribute fields: N/A.
Allows many-to-one and one-to-many requirements traceability: Requirements traceability is available in the Test Organizer module of AutoTester One.
Supports full indirection for all test processes and data file names: Variable names and file names have indirection capability.
Language and platform independent: AutoTester One works with SAP interfaces in other languages. It only works on Windows-based platforms.
Reprinted with permission from Autotester, Inc.
EXHIBIT 6.4 Test Tool Evaluation Matrix (Vendor: Compuware Corporation)
Test Tool Evaluation Matrix
Tool(s) Name: TestPartner (Part of the QACenter TestSuite)
Tool Evaluator: Brian Hurst
Vendor Name: Compuware Corporation
Vendor Website: www.compuware.com
Date of Evaluation: 02/02/06
Tool Offerings: Functional Testing for SAP
Criteria: Comments/Responses
es
I. T
rans
acti
on C
aptu
re a
nd P
layb
ack
Aut
omat
ed g
loba
l cha
nges
for
obj
ect
chan
ges
and
dele
tion
sO
bjec
t M
appi
ng is
100
% c
entr
aliz
ed.
Test
s ca
n be
dev
elop
ed c
oncu
rren
tly
wit
h so
ftw
are
deve
lopm
ent
No.
No
scri
pt c
odin
g re
quir
edN
/A. S
crip
ting
is d
one
wit
hin
Mic
roso
ft V
BA
.
Tool
sup
port
s re
cord
ing
of n
on-S
AP
appl
icat
ions
SAP
is o
ne e
nvir
onm
ent
supp
orte
d. O
ther
s in
clud
e M
icro
soft
,Ja
va, W
eb, O
racl
e E
RP.
No
purc
hase
of
“plu
g-in
s” r
equi
red.
Com
mon
scr
ipti
ng la
ngua
ge (
i.e.,
VB
)M
icro
soft
VB
A 6
.2 (
Edi
tor
and
Lan
guag
e)
Allo
ws
RFC
s to
be
calle
dC
an c
all B
API
s vi
a A
ctiv
eX in
terf
ace.
Prod
uces
aut
omat
ic o
ptio
nal s
teps
No.
Has
ana
log
and
obje
ct r
ecor
ding
cap
abili
ties
Yes
, ana
log
(pos
itio
nal m
ouse
clic
ks o
nly)
can
be
capt
ured
whe
nob
ject
s ar
e no
t av
aila
ble.
Too
l can
als
o pe
rfor
m B
itM
apSe
lect
s,m
eani
ng c
aptu
red
Bit
map
s ca
n be
sto
red
and
auto
mat
ed li
keob
ject
s.
Obj
ect-
leve
l rec
ordi
ng f
or m
ulti
ple
tech
nolo
gies
is s
tand
ard.
Has repository for managing the properties of recorded objects: TestPartner stores objects in the Object Map, also located in the Repository.
Think times can be added without changing programming code: Think times can be captured during recording.
Test tool allows for creation of user-defined functions: Functions can be created and made available to all scripts in all projects. Function capabilities follow Microsoft VBA.
Test tool offers keyword-driven tests: Keyword testing is provided through a strategic integrated partner, ScriptTech.
Has interactive captured screen of captured/recorded process: No. This is targeted functionality for a future release of TestPartner.
If tool offers captured/recorded screen, user can modify script logic through the captured screen: No. This is targeted functionality for a future release of TestPartner.
Allows renaming of labels for captured fields: Yes, via Object Mapping.
Allows adding of start and stop watches: Yes, clock functions are available.
Vendor offers library of prerecorded SAP scripts/processes with tool: No. Compuware services can be employed to perform/assist with rapid script creation.
II. SAP Supported Versions, Applications
Compatible with SAP bolt-ons (i.e., BW, SRM, APO, C-folders, CRM, etc.): BW, APO, SRM.
Supports different versions of SAP (i.e., SAP GUI, Citrix, Netweaver, and Portals): SAP WinGui 6.20 & 6.40; the SAP Portal is tolerated. Apps deployed on Citrix can be tested at the Citrix Server Layer. Load testing can be performed on Native SAP or Citrix protocols.
III. Tool Maintenance
Allows toolbar customizations: Toolbars can be configured and/or moved. External tools can be added to menus.
IV. Tool Installation
Tool is Web-based or requires desktop GUI installation (fat or thin client installation)? Desktop install (fat client).
Vendor offers floating licenses: Yes, all license sales are concurrent in nature.
V. Tool Integration
Stores test assets in MSDE, SQL Server, or Oracle: Yes, all assets are stored in Access, SQL Server, or Oracle.
Integrates with Solution Manager: Test requirements can be automatically built from the Process Model in Solution Manager.
If tool integrates with Solution Manager, capabilities exist to execute recorded scripts from Solution Manager: eCATT can launch and store TestPartner scripts via SAP. Certified integration.
Integrates with test management tool: TestPartner stores all test scripts in a database by default. Test Management (QACenter) controls the execution and reporting. (i.e., Scripts are not physically moved for Test Management purposes.)
If integration with test management tool exists, does it offer version-control capabilities? Or integrate with third-party tool for version control? Test assets can be exported to files for import into a version-control package.
Integrates with eCATT: Certified eCATT integration by SAP.
Integrates with test tools other than eCATT: No. But Compuware's QACenter for test management integrates with multiple requirements management tools.
Open API to integrate with other tools, languages: References can be added to external applications/libraries via the VBA 'Add Reference' functionality. QACenter Test Management integrates with third-party requirements tools CaliberRM, RequisitePro, DOORS, and Compuware SteelTrace.
VI. Tool Execution
Decision-making options for each test step on pass or fail: No.
Execution control allows single-step, breakpoints, screen captures, variable and data monitoring: The scripting environment is Microsoft VBA and includes all debugging capabilities of Visual Basic.
Automated impact analysis for application changes: No. However, SAP Transports can be analyzed to detect t-codes that are affected through a system change.
Capabilities to run unattended and skip failed iterations: This can be accomplished via scripting.
Runs scripts in background and foreground mode: Foreground only for functional tests. Background for load tests.
Has scheduling capabilities: QACenter Portal includes scheduling of execution (time/date) and repetitive execution (i.e., daily, weekly, monthly).
Scheduling tool offers execution with dependencies: Remaining tests can be executed if a failure occurred, or an abort-all-tests-on-first-failure option may be used.
Contains debugger: TestPartner uses the Microsoft VBA scripting environment and language.
Allows for automatic synchronization between client and server: Synchronization is handled automatically, and timeout values can be set globally.
Built-in error handling capability: Yes.
Built-in context recovery capability: Not built in; however, logic can be built and embedded through scripting.
Automatic timing for each step, process, and suite: Timings are captured at Suite and Script level.
VII. Tool Data
User-defined data filtering on all views: Yes.
All test assets stored as data in relational database: Yes, all assets are stored in Access, SQL Server, or Oracle.
Database verification and data acquisition: Yes, via Compuware's FileAid.
Provides Excel-based function (i.e., TRIM, MID, etc.) to clean up captured text: Not Excel-based. However, all VBA string commands are available.
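The VBA string commands this row refers to (Trim, Mid, and the like) have direct equivalents in most languages. The sketch below shows the same cleanup in Python; the captured message and the character positions are invented for illustration.

```python
# Hypothetical captured status bar text with stray whitespace.
captured = "   Document 4500001234 saved   "

trimmed = captured.strip()   # equivalent of VBA Trim()
doc_no = trimmed[9:19]       # equivalent of VBA Mid(trimmed, 10, 10)
print(trimmed, doc_no)
```

Note VBA's Mid is 1-based while Python slices are 0-based, hence the offset of one in the slice bounds.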
Data-driven tests (i.e., pulls data from spreadsheets, external sources, etc.): The ActiveData wizard provides TestPartner capabilities to create variables within scripts that are compatible with these formats: .xls, .txt, or .csv as source.
Allows for verification points (objects, database values, text): Standard checkpoints include: Text, Bitmap, Property, Content, and Clock. Custom checks can be scripted (UserChecks).
Tool offers regular expressions (i.e., text character matching): Text comparison options such as Any Valid Number or Number within Range, or match-of-characters (any alpha, numeric) patterns, can be built.
Capabilities for creating external data files: Yes. The ActiveData wizard allows the parameterization of recorded scripts, which creates the Excel file with row 1 of data equal to recorded data. Furthermore, data can be written to the data file at runtime.
Allows data seeding and data correlation: The optimized approach for sharing data between scripts is read/write to Microsoft Excel columns. Other options exist.
Allows variable declaration: Yes. Variable types and declaration rules are provided by Microsoft VBA.
Captures screen text (i.e., status bar messages): Text from SAP can be captured and stored on a spreadsheet (i.e., status bar messages, informational screen text, text within an SAP grid, text from a field, text from a drop-down list, etc.).
Provides playback with multiple data access methods (i.e., random): Data can be read in sequential or random order. Furthermore, specific data rows (e.g., only rows 5–12 out of 20 rows of data) can be specifically utilized.
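The three data access modes in this row (sequential order, random order, and a specific row range such as rows 5 through 12 of 20) can be sketched in a few lines of Python; the row contents are invented for illustration.

```python
import random

rows = [f"row-{i}" for i in range(1, 21)]            # 20 data rows

sequential = list(rows)                              # play back in file order
shuffled = random.Random(42).sample(rows, len(rows)) # reproducible random order
row_range = rows[4:12]                               # rows 5..12 (1-based)

print(row_range[0], row_range[-1], len(row_range))
```

A seeded `random.Random` keeps the "random" order repeatable between runs, which matters when a failed playback must be reproduced.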
VIII. Tool Security
User and group security and permissions for each test asset component: Yes. Role-based security at the project level.
Allows SAP roles-based testing: Yes.
IX. Vendor Support
Vendor offers web-based patches, downloads to upgrade tool: Yes, through Compuware Frontline.
SAP Corporation has formally certified the tool: Yes.
X. Training
Vendor offers test tools training: Yes.
Vendor offers certification examination in test tool: Certification standards are under development.
XI. Test Reporting and Review
Results logs store screen captures: No screen captures in current release. Targeted future functionality.
Results log shows status for each row of data (iteration): All iterations are logged; results are not grouped by data row iteration.
Results log includes date and time stamp: Test start and test end are included in the Web result summary.
Results log can be saved in different formats (i.e., HTML, .doc): Results can be exported to these formats: HTML, TXT, CSV, XML.
Creates automatic test results files (test logs): Logs show both actual and expected results. Results logs show whether a verification point passed.
User-defined query and reporting or charting capability: Over 35 customizable reports are standard in QACenter. Data can be exported to XML and text for third-party reporting.
English-narrative documentation produced automatically from test processes: No.
Export to text capability for all test assets: Tests (and related assets) are exportable to XML.
User-extensible classes, actions, and functions: Yes. Shared and class modules.
User can extend interface with unlimited new attribute fields: N/A.
Allows many-to-one and one-to-many requirements traceability: Yes, QACenter works on a requirements-driven test strategy.
Supports full indirection for all test processes and data file names: Yes.
Language and platform independent: TestPartner is used globally and has been localized for double-byte characters and Unicode.
Reprinted with permission from Compuware Corporation.
EXHIBIT 6.5 Test Tool Evaluation Matrix (Vendor: Sucid Corporation)
Test Tool Evaluation Matrix
Tool(s) Name: Sucid Process Modeler, Sucid Function, Sucid Load, Sucid Security, Sucid Reports
Tool Evaluator: Giles Samoun
Vendor Name: Sucid Corporation
Vendor Website: www.sucid.com
Date of Evaluation: 02/01/06
Tool Offerings: SAP Test Automation Tool for functional testing (unit, integration, regression), load testing (load, stress, performance), and security testing (positive, negative)
Criteria: Comments/Responses
I. Transaction Capture and Playback
Automated global changes for object changes and deletions: No.
Tests can be developed concurrently with software development: Allows for test automation to take place concurrently with development. And if any transaction must be recaptured due to major changes late in the development cycle, the transaction can simply be recaptured without scripting, and all the transactions around that transaction in the modeled business process automatically update their links and dependencies to/with that replaced transaction without the need for scripting.
No script coding required: User runs transactions using the SAP GUI as they normally would, and transactions are automatically captured and automated without requiring any scripting.
Tool supports recording of non-SAP applications: Sucid products support only SAP test automation. Our product line has been built and architected from the ground up specifically for SAP.
Common scripting language (i.e., VB): N/A. The product is not a script-driven tool, so this is not applicable.
Allows RFCs to be called: Supports capture, modeling, and test execution of machine-generated transactions, such as those submitted through RFC or BAPI interfaces, with the same functionality as that offered for testing user-generated transactions.
Produces automatic optional steps: The product does not currently support this functionality, but it is planned for a future release.
Has analog and object recording capabilities: Analog recording capabilities are not available. Object recording capabilities are offered without the need to write or maintain any scripts.
Has repository for managing the properties of recorded objects: Stores the automated test cases' associated properties within a database.
Think times can be added without changing programming code: Think times can be changed without any scripting/coding.
Test tool allows for creation of user-defined functions: N/A. This question is not applicable to Sucid products, which are not scripting-based in the first place.
Test tool offers keyword-driven tests: No. A tabular interface for making variables out of SAP fields is used for automated test cases, which seems at least similar to this.
Has interactive captured screen of captured/recorded process: Displays each screen captured as the transaction is automated.
If tool offers captured/recorded screen, user can modify script logic through the captured screen: Supports this capability, but again does so without scripting, as scripting is not required.
Allows renaming of labels for captured fields: The product treats captured SAP data fields by their native SAP names.
Allows adding of start and stop watches: Enables the user to check screen response times as a functional or load test condition without requiring any scripting.
Vendor offers library of prerecorded SAP scripts/processes with tool: Does not include a library of such scripts.
II. SAP Supported Versions, Applications
Compatible with SAP bolt-ons (i.e., BW, SRM, APO, C-folders, CRM, etc.): Beyond the traditional SAP ERP core (including all its modules, such as FI, SD, MM, etc.), the product is compatible with other SAP modules such as CRM, BW, APO, and MDM/ALE.
Supports different versions of SAP (i.e., SAP GUI, Citrix, Netweaver, and Portals): Sucid Function supports SAP GUI. Support for the SAP Web interface is currently not available.
III. Tool Maintenance
Allows toolbar customizations: Toolbar customizations are not offered.
IV. Tool Installation
Tool is Web-based or requires desktop GUI installation (fat or thin client installation)? The product uses a thin client approach. The user interface runs as part of SAP itself, so users access Sucid features from SAP as they would with any other SAP feature.
Vendor offers floating licenses: Vendor offers a simple monthly subscription pricing model. The pricing model is based on the number of automated transactions as opposed to per-seat style licenses.
V. Tool Integration
Stores test assets in MSDE, SQL Server, or Oracle: Stores test assets in a variety of relational databases: Oracle, SQL Server, and others.
Integrates with Solution Manager: No integration with Solution Manager.
If tool integrates with Solution Manager, capabilities exist to execute recorded scripts from Solution Manager: N/A. Not yet integrated.
Integrates with test management tool: Automated tests are stored within Sucid products—not in a separate test management tool. The Sucid product line includes its own built-in test management tool.
If integration with test management tool exists, does it offer version control capabilities? Or integrate with third-party tool for version control? The built-in test management tool does not include versioning features.
Integrates with eCATT: Sucid Function integrates with eCATT interfaces and leverages eCATT capabilities seamlessly.
Integrates with test tools other than eCATT: Does not ship with prebuilt integration into any test tool other than eCATT.
Open API to integrate with other tools, languages: The product exposes interfaces through RFC and Java APIs.
VI. Tool Execution
Decision-making options for each test step on pass or fail: No.
Execution control allows single-step, breakpoints, screen captures, variable and data monitoring: Allows the user to visually step through any automated test one screen at a time to see exactly what is happening at each step on each screen, enabling visual review and verification of screens, SAP data field values, SAP messages, etc.
Automated impact analysis for application changes: Does not supply automated impact analysis.
Capabilities to run unattended and skip failed iterations: The product may be run in unattended mode. The product will execute later iterations if an earlier iteration fails.
Runs scripts in background and foreground mode: Runs tests in the foreground on any user's PC, with each step/screen displayed visually on the user's display for visual verification of test execution and results; alternatively, tests can be run in the background on a test server.
Has scheduling capabilities: Does not offer a built-in GUI-based scheduler, but it does offer interfaces enabling simple scripts to invoke test runs at scheduled times.
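A "simple script invoking test runs at scheduled times" of the kind this row describes can be sketched with Python's standard sched module. This is an illustrative stand-in, not the vendor's interface; run_test and the suite names are hypothetical.

```python
import sched
import time

executed = []

def run_test(name):
    # Stand-in for calling the tool's test-run interface.
    executed.append(name)

s = sched.scheduler(time.monotonic, time.sleep)
# Short delays here stand in for real wall-clock schedule times.
s.enter(0.02, 1, run_test, argument=("regression_suite",))
s.enter(0.05, 1, run_test, argument=("load_suite",))
s.run()   # blocks until both scheduled runs have fired
print(executed)
```

In practice the same effect is usually achieved with cron or the Windows Task Scheduler invoking the tool's command-line or API entry point.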
Scheduling tool offers execution with dependencies: The dependency logic would need to be built into the scheduling script by the user.
Contains debugger: Sucid Function enables the user to visually step through any automated test to see exactly what is happening at each step on each screen, to visually debug and diagnose problems that may develop either in the automated tests themselves or in the SAP applications or data.
Allows for automatic synchronization between client and server: The product's orchestration server maintains a real-time database with the state of all tests and automatically ensures that transactions are executed in the correct order with the proper think times. Each transaction must complete before the next transaction begins, or before the next think time delay begins if there is a specified think time, ensuring continuous synchronization and accurate simulation of real-world usage patterns.
Built-in error handling capability: Handles errors in automated test execution. In the event of an error, the functional test cases will likely fail and be reported as such, but the rest of the automated test transactions will be executed.
Built-in context recovery capability: Maintains context through its orchestration server, which will maintain context in the event of errors or failures in automated test cases or in SAP itself.
Automatic timing for each step, process, and suite: The product's orchestration server automates coordination and timing for each step (transaction), process (business process), and suite.
EXHI
BIT
6.5
(Con
tinu
ed)
Cri
teri
aC
omm
ents
/Res
pons
es
06_4782 2/5/07 10:43 AM Page 126
127
VII
. Too
l Dat
a
Use
r-de
fine
d da
ta f
ilter
ing
on a
ll vi
ews
Use
r ca
n qu
ery
test
res
ults
acc
ordi
ng t
o a
vari
ety
of p
aram
eter
s, s
uch
as t
heda
te/t
ime,
the
per
son
who
ran
the
tes
t, t
he n
ame
of t
he t
est
run,
the
ass
ocia
ted
SAP
tran
spor
t, t
he a
ssoc
iate
d re
quir
emen
t(s)
, etc
.
Sum
mar
y re
port
s ar
e in
clud
ed w
ith
the
prod
uct.
All
test
ass
ets
stor
ed a
s da
ta in
rel
atio
nal d
atab
ase
Stor
es a
ll te
st a
sset
s as
dat
a in
a r
elat
iona
l dat
abas
e.
Dat
abas
e ve
rifi
cati
on a
nd d
ata
acqu
isit
ion
Ret
riev
es a
nd v
erif
ies
data
val
ues
in t
he S
AP
data
base
as
part
of
auto
mat
ed t
est
case
s.
Prov
ides
Exc
el-b
ased
fun
ctio
n (i
.e.,
TR
IM,
Incl
udes
aut
omat
ed f
unct
ions
to
proc
ess
the
cont
ents
of
SAP
data
fie
lds
and
MID
, etc
.) t
o cl
ean
up c
aptu
red
text
mes
sage
s fo
r us
e in
fun
ctio
nal t
est
auto
mat
ion.
Use
rs d
o no
t ha
ve t
o us
e E
xcel
or p
erfo
rm s
crip
ting
to
do t
his.
Dat
a-dr
iven
tes
ts (
i.e.,
pulls
dat
a fr
om
Suci
d fu
ncti
on s
uppo
rts
data
-dri
ven
test
ing
such
as
loop
ing
or it
erat
ive
spre
adsh
eets
, ext
erna
l sou
rces
, etc
.)ex
ecut
ion
of a
n au
tom
ated
tra
nsac
tion
usi
ng d
iffe
rent
dat
a va
lues
pul
led
from
an S
AP
tabl
e in
eac
h it
erat
ion
wit
hout
scr
ipti
ng (
for
exam
ple,
to
test
a s
et o
fdi
ffer
ent
mat
eria
l typ
es u
sing
a s
ingl
e tr
ansa
ctio
n). S
ucid
fun
ctio
n al
so p
rovi
des
the
abili
ty t
o pa
ram
eter
ize
fiel
ds in
an
auto
mat
ed t
rans
acti
on w
itho
ut s
crip
ting
(for
exa
mpl
e, t
o pr
oduc
e a
valid
, uni
que
new
PO
num
ber
each
tim
e a
new
orde
r is
add
ed),
the
reby
red
ucin
g th
e ne
ed t
o cr
eate
and
man
age
exte
rnal
dat
afi
les
in t
he f
irst
pla
ce.
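The data-driven looping and field parameterization described in this row can be sketched in plain code. Everything here is illustrative: `run_transaction` is a hypothetical stand-in for one recorded SAP transaction, and the PO number sequence is invented for the example.

```python
import itertools

def run_transaction(material_type: str, po_number: str) -> str:
    # Hypothetical stand-in for a recorded SAP transaction; returns the
    # confirmation message the screen would show.
    return f"Order {po_number} created for material type {material_type}"

# Parameterized field: each call yields a unique PO number, so no
# external data file is needed to supply order numbers.
_po_numbers = itertools.count(4500000001)

def next_po_number() -> str:
    return str(next(_po_numbers))

def data_driven_run(material_types):
    """Execute the same transaction once per data row (iteration)."""
    return [run_transaction(m, next_po_number()) for m in material_types]
```

A testing tool does the equivalent of this loop internally; the point of the "without scripting" claim is that the user never writes it.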
Allows for verification points (objects, database values, text) | Supports the ability to verify text fields, database values, and various objects in SAP.

Tool offers regular expressions (i.e., text character matching) | Supports regular expressions.

Capabilities for creating external data files | The data generated or used during automated test execution is stored together with all other test outputs in a relational database.

Allows data seeding and data correlation | Automates discovery during transaction capture and variable chaining during transaction execution between transactions that comprise an SAP business process. Handles variable chaining during test execution without requiring any scripting or other intervention from the user; for example, if a new order is added in one step of a business process, the product will automatically pass the new PO number created for the new order into the next transaction in the business process that executes to update the order that has just been created.
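The PO-number hand-off just described can be sketched as two chained steps. Both step functions are hypothetical stand-ins for captured SAP transactions, and the fixed PO number is illustrative only.

```python
def create_order(material: str) -> dict:
    # Step 1 of the business process: the capture discovers the new
    # PO number on the result screen and exposes it as a variable.
    return {"po_number": "4500000042", "material": material}

def update_order(po_number: str, quantity: int) -> str:
    # Step 2: consumes the PO number chained in from step 1.
    return f"Order {po_number} updated to quantity {quantity}"

def business_process(material: str, quantity: int) -> str:
    """Variable chaining: pass the PO number from create to update."""
    created = create_order(material)
    return update_order(created["po_number"], quantity)
```

The tool's "data correlation" is exactly this wiring, performed automatically instead of by a user-written script.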
Allows variable declaration | SAP data fields and messages incorporated into automated tests already carry variable types.

Captures screen text (i.e., status bar messages) | Captures all screen text such as SAP error or confirmation messages, and this text can have functional test cases attached to check the content of this screen text without requiring any scripting.

Provides playback with multiple data access methods (i.e., random) | Supports a variety of data access methods for data-driven automated tests, such as sequential, random, etc.

VIII. Tool Security

User and group security and permissions for each test asset component | Offers security but not down to the individual test asset component.

Allows SAP roles-based testing | Offers SAP security testing either by one or more specific user IDs, or by security profiles; when testing by security profiles, all users that are associated with a security profile are tested. Can automatically test to see if users who are supposed to be able to run designated transactions are, in fact, able to run them (positive testing); and can also automatically test to see if users who are not supposed to be able to run designated transactions are, in fact, not able to run them (negative testing).
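The positive/negative role-testing pattern above reduces to comparing expected and actual access per user. A minimal sketch, with an invented authorization map (real SAP authorization checks go through profiles and authorization objects, not a dictionary):

```python
def can_execute(user: str, transaction: str, authorizations: dict) -> bool:
    """True if the user's profile permits the transaction code."""
    return transaction in authorizations.get(user, set())

def roles_based_test(expectations: dict, transaction: str, authorizations: dict):
    """Positive and negative testing in one pass.

    expectations maps user -> True if the user should be able to run the
    transaction; returns the users whose actual access differs."""
    return [user for user, should in expectations.items()
            if can_execute(user, transaction, authorizations) != should]
```

An empty result means every permitted user could run the transaction (positive) and every forbidden user could not (negative).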
IX. Vendor Support

Vendor offers Web-based patches, downloads to upgrade tool | Vendor provides updates via website.

SAP Corporation has formally certified the tool | No.

X. Training

Vendor offers test tools training | Vendor provides onsite training services to customer's users.

Vendor offers certification examination in test tool | Vendor does not offer a certification program.

XI. Test Reporting and Review

Results logs store screen captures | Yes.

Results logs show status for each row of data (iteration) | Sucid reports log test results per iteration.

Results logs include date and time stamp | Yes. Automatically generates archivable electronic documents that include all data related to each executed test (date/time stamp, test results, screenshots, etc.); these documents can be saved and viewed to prove exactly what has been tested and what the results of those tests were.

Results logs can be saved in different formats (i.e., HTML, .doc) | Sucid reports automatically produce viewable and archived results reports in HTML format. These reports can be saved in other formats as well.

Creates automatic test results files (test logs) | Sucid reports automatically generate and store test results from all automated test executions, including date/time stamp, identifying information on test(s) executed, identity of person executing the test, review/authorization status of test results, screenshots of executed screens from test run, response times, and SAP messages returned per screen, etc.

User-defined query and reporting or charting capability | User can query test results according to a variety of parameters, such as the date/time, the person who ran the test, the name of the test run, the associated SAP transport, the associated requirement(s), etc. Summary reports are included with the product.

English-narrative documentation produced automatically from test processes | Does not output or produce text in narrative form. It does communicate in clear English the actual vs. expected results for functional, load, and security testing. But the text is interspersed with screenshots and is not presented in narrative form.

Export to text capability for all test assets | Does not offer this capability.

User-extensible classes, actions, and functions | Does not require classes, actions, and functions.

User can extend interface with unlimited new attribute fields | Does not support this capability.

Allows many-to-one and one-to-many requirements traceability | Supports association of test cases with requirements and with transports for traceability.

Supports full indirection for all test processes and data file names | No.

Language and platform independent | Language and platform independent. Scripting is almost entirely eliminated, so language is not an issue. Product's architecture makes it platform independent.

Reprinted with permission from Sucid Corporation.
EXHIBIT 6.6 Test Tool Evaluation Matrix (Vendor: Worksoft, Inc.)

Test Tool Evaluation Matrix

Tool(s) Name: Certify
Tool Evaluator: Linda Hayes
Vendor Name: Worksoft, Inc.
Vendor Website: www.worksoft.com
Date of Evaluation: 02/01/06
Tool Offerings: Test management, automation, and reporting solution for SAP and other platforms.

Criteria | Comments/Responses

I. Transaction Capture and Playback

Automated global changes for object changes and deletions | Any identified changes can be made globally to all affected test assets.

Tests can be developed concurrently with software development | Users may define the application map directly from the specification or prototype and develop their tests before the software is delivered.

No script coding required | No code is written or generated for a test, and during execution no code is exposed or debugged.

Tool supports recording of non-SAP applications | Certify has native support for Web, mainframe, .Net, Java, VB, and XML. It also supports other platforms through its API, including Siebel.

Common scripting language (i.e., VB) | No coding or scripting is required. Custom extensions may be added using tool or language of choice.

Allows RFCs to be called | Yes, using Open API.

Produces automatic optional steps | Yes, using the step ignore option.

Has analog and object recording capabilities | Certify supports object- and analog-level actions; however, the user interface is always presented at the object level.

Has repository for managing the properties of recorded objects | All Certify test assets are stored in a database repository, including all objects, test processes, test data, and test results.

Think times can be added without changing programming code | No coding is used but user can set global or local timeouts.

Test tool allows for creation of user-defined functions | Certify can be extended by adding classes and actions for user-defined functions or custom controls. User-defined functions can be accessed from any test or component.

Test tool offers keyword-driven tests | Certify offers both class action and keyword functionality.

Has interactive captured screen of captured/recorded process | Certify allows screens to be captured either on demand, by default, or based on results.

If tool offers captured/recorded screen, user can modify script logic through the captured screen | Certify does not use scripts and captured screens need not be maintained. All screens and tests are described and maintained in a database and can be automatically updated globally when changes are made.

Allows renaming of labels for captured fields | Yes.

Allows adding of start and stop watches | Stopwatches may be added but are not needed because Certify automatically times every step, process, and session.

Vendor offers library of prerecorded SAP scripts/processes with tool | No.

II. SAP Supported Versions, Applications

Compatible with SAP bolt-ons (i.e., BW, SRM, APO, C-folders, CRM, etc.) | Open support for any SAP add-on or third-party bolt-ons.

Supports different versions of SAP (i.e., SAP GUI, Citrix, Netweaver, and Portals) | SAP Versions V4.X, 5.X.

III. Tool Maintenance

Allows toolbar customizations | Certify allows an unlimited number of user-defined fields to be added using multiple object types such as text fields, checkboxes, combo boxes, etc. These fields may be required or optional and are available for custom filters, queries, and reports.

IV. Tool Installation

Tool is Web-based or requires desktop GUI installation (fat or thin client installation)? | Thick client but managed from central server for automation updates.

Vendor offers floating licenses | Yes.

V. Tool Integration

Stores test assets in MSDE, SQL Server, or Oracle | Certify supports repositories using MSDE, SQL Server, or Oracle.

Integrates with Solution Manager | No.

If tool integrates with Solution Manager, capabilities exist to execute recorded scripts from Solution Manager | No.
Integrates with test management tool | Certify is an integrated management and automation solution.

If integration with test management tool exists, does it offer version-control capabilities? Or integrate with third-party tool for version control? | Certify manages multiple versions of applications and test assets and provides impact analysis and automated updates for changes between versions.

Integrates with eCATT | Integrates with SAP GUI Scripting.

Integrates with test tools other than eCATT | Yes.

Open API to integrate with other tools, languages | The Certify Open API allows functions to be written using the tool or language of choice.

VI. Tool Execution

Decision-making options for each test step on pass or fail | Every step in Certify offers on pass/on fail options to control the test flow based on execution results and specify the correct log response.

Execution control allows single-step, breakpoints, screen captures, variable and data monitoring | The Certify execution dashboard allows users to perform single-step execution, skip steps, set breakpoints, monitor variables or recordsets, capture the screen, or abort the session.

Automated impact analysis for application changes | Any application change is automatically mapped to all affected test assets for instant impact analysis.

Capabilities to run unattended and skip failed iterations | Certify provides a rich set of error handling and recovery options to control execution workflow based on results.

Runs scripts in background and foreground mode | No.

Has scheduling capabilities | Certify integrates with Windows Task Scheduler.

Scheduling tool offers execution with dependencies | User can define execution dependencies based on runtime results.

Contains Debugger | Execution dashboard provides step-level execution, breakpoints, variable and recordset watch windows. Also allows screen captures on demand as well as skipping steps.

Allows for automatic synchronization between client and server | Every Certify step is automatically synchronized with the application playback speed.

Built-in error handling capability | Certify provides a rich set of error logging and handling options to assure unattended execution can continue after errors.

Built-in context recovery capability | Certify provides an automated recovery system that can restore context to a known state and continue execution after a loss of context.

Automatic timing for each step, process, and suite | No stopwatches are required; timing is automatically measured at every level of execution.
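The per-step and end-to-end timing just described can be sketched with a simple wrapper. This is an illustration of the idea, not the tool's mechanism; the step names are invented.

```python
import time

def timed_step(fn):
    """Run one step and return (result, elapsed_seconds)."""
    start = time.perf_counter()
    result = fn()
    return result, time.perf_counter() - start

def run_process(steps):
    """Time every step plus the whole process, with no manual stopwatches.

    steps is a list of (name, zero-argument callable) pairs."""
    process_start = time.perf_counter()
    timings = {}
    for name, fn in steps:
        _, timings[name] = timed_step(fn)
    timings["process_total"] = time.perf_counter() - process_start
    return timings
```

Because the wrapper surrounds every step, the process total always covers at least the sum of the step timings.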
VII. Tool Data

User-defined data filtering on all views | Every test asset view can be customized as to columns, column order, sort order, and filtering based on standard or custom criteria.

All test assets stored as data in relational database | All Certify test assets are stored as data. There are no script files for test cases. Only user-defined classes or actions require coding, and only once per class and action.

Database verification and data acquisition | Certify enables direct verification or acquisition of data stored within a database.

Provides Excel-based function (i.e., TRIM, MID, etc.) to clean up captured text | Test data may be created or modified within Excel as desired, then stored in Certify repository.

Data-driven tests (i.e., pulls data from spreadsheets, external sources, etc.) | Certify allows recordsets to be defined and stored within the database, or imported from Excel or CSV files. Users simply link recordsets to test processes and all file handling and looping is automatically provided.

Allows for verification points (objects, database values, text) | Any step can verify any object.

Tool offers regular expressions (i.e., text character matching) | Yes. Certify also provides a rich set of predefined verification criteria such as starts with, contains, does not contain, etc.

Capabilities for creating external data files | Certify can capture data during runtime and write it to the repository. Any repository data can be exported as needed to external text or spreadsheets.

Allows data seeding and data correlation | All Certify data variables are stored in the repository and are available from any test for input, verification, or output.

Allows variable declaration | Text, number, date.

Captures screen text (i.e., status bar messages) | Text or bitmap screen captures.

Provides playback with multiple data access methods (i.e., random) | Data access is sequential.

VIII. Tool Security

User and group security and permissions for each test asset | Access to every test component can be controlled by project, user, component, and group as to read/write/execute permission.

Allows SAP roles-based testing | Yes.

IX. Vendor Support

Vendor offers web-based patches, downloads to upgrade tool | Yes.

SAP Corporation has formally certified the tool | No. This is in process.

X. Training

Vendor offers test tools training | Yes.

Vendor offers certification examination in test tool | Yes.

XI. Test Reporting and Review

Results logs store screen captures | Screen captures are optional and may be defined and stored at any step. Screens are stored in the database and may be saved to an external file.

Results log show status for each row of data (iteration) | Execution logs show each step for each data row.

Results log include date and time stamp | Date and time are available for every step, process, and session.

Results log can be saved in different formats (i.e., HTML, .doc) | Reporting output is available in user-defined formats.

Creates automatic test results files (test logs) | Execution logs show step-level detail with both actual and expected results as well as screen capture if desired and elapsed time end to end at each level.

User-defined query and reporting or charting capability | Every test data item can be filtered, sorted, or queried, and all test assets can be included for reporting purposes.

English-narrative documentation produced automatically from test processes | Each Certify step is automatically expressed as a narrative description that can be modified within the database and produced in a documentation-style report or exported to an external file.

Export to text capability for all test assets | Every Certify test asset can be exported to an external file.

User-extensible classes, actions, and functions | The Certify Open API allows user-extensible classes, actions, and functions.

User can extend interface with unlimited new attribute fields | Certify allows an unlimited number of user-defined fields to be added to the repository.

Allows many-to-one and one-to-many requirements traceability | Certify requirements can be linked to multiple test processes and vice versa.

Supports full indirection for all test processes and data file names | All Certify test processes and data files can be called using variable names to allow full indirection.

Language and platform independent | Certify has native support for Web, .NET, Java, VB, mainframe, and XML. Other platforms can be added using the Certify Open API. A single Certify test process can seamlessly span multiple applications and platforms within a single execution session.

Reprinted with permission from Worksoft, Inc.
EXHIBIT 6.7 Test Tool Evaluation Matrix (Vendor: iTKO Inc.)

Test Tool Evaluation Matrix

Tool(s) Name: iTKO LISA Complete SOA Test Platform
Tool Evaluator: Jason English, iTKO Inc.
Vendor Name: iTKO, Inc.
Vendor Website: www.itko.com
Date of Evaluation: 08/15/06
Tool Type: SOA testing tool for no-code functional, regression, integration, load and performance testing of Web Services (WSDL/SOAP), Messaging Layers (JMS/MQ) and web interfaces, as well as Java/J2EE, databases and other middle tier components. Ideal for testing SAP NetWeaver portals and distributed services-based architectures and components.

Criteria | Comments

I. Transaction Capture and Playback (and Overview of LISA)

Note: LISA does NOT test a Windows UI or Thick Client portion of an SAP application (such as the UI of an applet or VB UI, etc.). Nor do we test an internal workflow of SAP, such as testing ABAP calls within R/3, etc.

LISA does directly test an entire business workflow: the Browser-based and/or portal integration aspects of SAP applications, especially as the company's apps move to a more distributed, SOA (Services-Oriented Architecture) environment with NetWeaver and multiple J2EE servers (WebLogic, others) housing Web Services, JDBC/SQL databases, RMI, Messaging/JMS/MQ, File systems, and integration points with other major enterprise apps.

LISA's support of all the above technologies covers unit, functional, regression, load, integration and performance testing in a single solution, with test cases usable across that entire lifecycle.
Automated global changes for object changes and deletions | Have LISA point to a service, and by reflection the software will dynamically incorporate all the available references and data within the test object. If the object changes, LISA can update those references dynamically. The only case when you will need to make a change is if a method signature type has changed. Since LISA's model is declarative and not strict-UI based like many acceptance testing tools, changes in the user interface will not automatically break the test case.

Automated impact analysis for application changes | This refers to the ability for the tool to inform the user of all tests that are affected by a change to an application object. Otherwise the user must search all tests and manually locate all references. We don't do this out of the box. But if we were part of a total ALM Test Management solution, we would be able to give that impact analysis.

Tests can be developed concurrently with software development | This refers to the ability to create an automated test without having the application available to record against. This is a necessary feature for agile development approaches that require tests to be developed before the code. Yes, this is our encouraged way for test development. With this process, developers can help jump-start QA by supplying their unit tests, and QA then can build proper business scenarios, and test a deployed application that has no user interface.

No script coding required | Tool allows point-and-click test creation and staging automation without the need to write, generate, or maintain programming code. LISA's declarative test case model produces XML files that can be stored in any development environment tool, and managed through LISA's interface. No need to rewrite or transfer test code when moving through unit, functional, regression, load, and performance test phases. We do offer a kit to extend LISA to test new technologies and LISA would be able to support no-code authoring to exercise those extensions to the new technologies.

Tool supports recording of non-SAP applications | iTKO LISA is an ideal tool when you are testing an integrated environment that includes SAP in the larger context of an enterprise architecture. LISA supports most of the standards of Web Services/SOAP and JEE, which in turn means LISA can uniquely support testing of multiple non-SAP applications that might be leveraged under SAP NetWeaver, or within J2EE Application Servers (WebLogic, JBoss, Websphere, etc.).

Common scripting language (i.e., VB) | LISA is a Java application, and it talks to most enterprise components natively. However, you can extend testability of applications with the LISA Extension API, with Java, or if you really need to script, through beanshell (java script).

Allows RFCs to be called | Remote calls for most components such as RMI, EJB are enabled right out of the box.

Produces automatic optional steps | LISA will automatically "normal" pass/fail conditions for each test step to the next step in the workflow or a Failure node. From there, additional options can be applied based on assertions the tester makes. Additionally, developers can quickly define any test workflow as a "Custom Process" that would pre-load with any optional steps desired depending on the business need.

Has analog and object recording capabilities | LISA does not perform analog testing, except for Swing (Java) apps that require some analog recording to be captured. Capture of Web-based test cases is done via browser simulation and complete HTTP, XML, JMS and other data stream and object interception by LISA, so Web apps and portal components are abstracted into testable business properties. For robot (Analog) type recording, we support Swing clients only. For all other services, the objects are fully simulated (Digital) and abstracted into dynamic data properties.

Has repository for managing the properties of recorded objects | LISA offers a server-side repository for team management and storage of test cases and test suites that may be scheduled or run as functional, regression, and load tests. LISA tests are all stored as XML files, locally on the client computer, or attached to any existing groupware or team collaboration software (we do this on purpose so teams can use their development process tools of choice).

Think times can be added without programming or code changes | After the script is recorded the tester can add think times and delays to each test step without having to insert/change existing programming code. We add them automatically (user defined), and think times can be changed to different values on a per-node basis with point-and-click ease, or imported dynamically from a load profiler.
142
Test
too
l allo
ws
for
crea
tion
of
user
- U
ser-
defi
ned
func
tion
s ca
n be
acc
esse
d fr
om a
ny t
est
or c
ompo
nent
, and
you
can
als
ode
fine
d fu
ncti
ons
capt
ure
brow
ser
test
s th
roug
h th
is m
etho
d.
Test
too
l off
ers
keyw
ord-
driv
en t
ests
LIS
A r
ende
rs “
keyw
ord-
driv
en”
test
ing
obso
lete
—yo
u ca
n di
rect
ly c
reat
e re
al w
ork-
flow
s in
a p
oint
-and
-clic
k m
anne
r in
LIS
A a
nd t
hese
are
act
ual t
ests
tha
t te
ams
can
labe
l how
ever
the
y w
ant.
The
re is
not
a s
epar
ate
proc
ess
of a
bstr
acti
ng g
ranu
lar
test
codi
ng in
to k
eyw
ord
libra
ries
nee
ded.
Tes
t pr
oced
ures
can
als
o be
def
ined
as
“bus
ines
spr
oces
ses”
in L
ISA
, whi
ch le
ts Q
A/B
usin
ess
team
s te
st in
ter
ms
of p
roce
sses
inst
ead
ofro
ot-l
evel
tec
hnol
ogie
s.
Has
inte
ract
ive
capt
ured
scr
een
ofL
ISA
has
a p
ower
ful I
nter
acti
ve T
est
Run
(IT
R)
func
tion
tha
t le
ts y
ou s
tep
capt
ured
/rec
orde
d pr
oces
sth
roug
h ea
ch W
eb U
I or
mid
dle-
tier
com
pone
nt in
the
tes
t an
d se
e bo
th w
hat
was
see
nif
it is
a U
I st
ep, o
r th
e ex
act
data
tha
t w
as r
elay
ed f
rom
sev
eral
dif
fere
nt v
iew
sac
cord
ing
to t
he u
ser’
s te
chni
cal r
equi
rem
ents
. Thi
s is
ext
rem
ely
pow
erfu
l for
pro
blem
solv
ing,
allo
win
g de
velo
pers
to
see
both
the
end
res
ult
and
the
unde
rlyi
ng c
ause
of
each
step
in a
tes
t ca
se.
If t
ool o
ffer
s ca
ptur
ed/r
ecor
ded
scre
en,
Rem
embe
r, w
e do
n’t
dire
ctly
tes
t th
ick
clie
nt U
Is..
. For
our
exi
stin
g W
eb c
lient
you
us
er c
an m
odif
y sc
ript
logi
c th
roug
h co
uld
take
the
sam
e te
st a
nd r
ecap
ture
the
pag
e w
ith
the
repl
ay u
tilit
y, w
hich
can
cat
chth
e ca
ptur
ed s
cree
nth
e ne
w f
ield
.
Allo
ws
rena
min
g of
labe
ls f
or c
aptu
red
You
can
sw
ap a
ny c
aptu
red
valu
e or
val
ue n
ame
in a
tes
t st
ep b
y ty
ping
in a
noth
er
fiel
dsva
lue
or a
ttac
hing
a d
ynam
ic v
alue
to
it.
Allo
ws
addi
ng o
f st
art
and
stop
wat
ches
I do
n’t
unde
rsta
nd—
you
can
pace
or
star
t an
d en
d a
LIS
A t
est
case
at
any
sche
dule
dti
me
or in
terv
al.
Ven
dor
offe
rs li
brar
y of
pre
reco
rded
Not
out
of
the
box.
We
wou
ld o
ffer
LIS
A e
xten
sion
s to
rap
idly
bui
ld “
solu
tion
nod
es”
SAP
scri
pts/
proc
esse
s w
ith
tool
spec
ific
ally
for
SA
P fu
ncti
ons
that
cus
tom
ers
wou
ld h
ave
a go
od u
se f
or.
EXHIBIT 6.7 (Continued)
Criteria — Comments
06_4782 2/5/07 10:43 AM Page 142
II. SAP Supported Versions, Applications
Compatible with SAP bolt-ons (i.e., BW, SRM, APO, C-folders, CRM, etc.): N/A to core SAP modules.
Supports SAP GUI, Citrix, and Portals: For GUI testing, LISA only supports browser-based UIs leveraging SAP and SAP NetWeaver.
III. Tool Maintenance
Allows toolbar customizations: N/A
IV. Tool Installation
Tool installation is web-based or required desktop GUI installation (fat or thin client installation)?: Downloadable, double-click install of the LISA client application. No further "implementation" required to test multiple component types. LISA User Client: fat (Java Swing-based) client; runs on Windows/Linux/Unix/Mac platforms. LISA Server Application: clientless app that LISA test cases can leverage; it provides the test scheduling and virtual user load generation for the company's testers using the LISA User Client or embedding command-line calls to LISA Server within their code.
Vendor offers floating licenses: Yes, LISA user licenses can be purchased per seat or in volume as fixed or floating licenses. Virtual Users for Load Testing are always floating and "pooled" to the company on the LISA Server. Bear in mind that there are no licenses required on the test target server; if LISA can reach it via remote invocation (In-Container testing), LAN/WAN, or the Internet, it can test it.
V. Tool Integration
Stores test assets in MSDE, SQL Server, or Oracle: LISA tests are stored as XML and can be easily attached to any development portal or ALM/SCM/RM/issue-tracking or groupware tools and file systems.
Integrates with Solution Manager: Not direct integration, though you can simply store an XML file in SM, or create an automation process with LISA.
If tool integrates with Solution Manager, capabilities exist to execute recorded scripts from Solution Manager: We can. You can call any running LISA Server to run and report off a test "headlessly" via command line, in a build or other test script, or by a Java method.
Integrates with test management tool: Yes, currently we tightly integrate with MKS for test management (where LISA calls are recreated in MKS and vice versa). However, for any QA Center–style app, it is much easier to just attach data-rich LISA test cases and test runs as XML files than attaching manual Word/Excel docs or manually coded test cases.
If test management tool exists, does it offer version control capabilities? Or integrate with third-party tool for version control?: Depends on the test management solution used. (LISA is not a test management tool in this sense.)
Integrates with eCATT: Unknown. If eCATT is Java or can generate XML, we can extend LISA to use eCATT.
Integrates with test tools other than eCATT: Very strong JUnit, NUnit, Ant build integration (run LISA from those scripts or run the scripts from LISA and affect pass/fail results). Other test tools depending upon technology.
Open API to integrate with other tools, languages: Yes, LISA offers a highly extensible test framework for every test phase and target technology, and we publish our LISA Extension API with our integration kit.
VI. Tool Execution
Decision-making options for each test step on pass or fail: Yes, we have point-and-click assertions and filters where you can direct LISA's workflow in either a binary (pass/fail) or conditional way (Boolean).
Execution control allows single-step, breakpoints, screen captures, variable and data monitoring: Yes; when working with a component you are directly executing that component in a point-and-click way with LISA's call panel. Via our Interactive Test Runner, you can step through tests, view data at any level, and access all test information without looking at or working with code.
Capabilities to run unattended and skip failed iterations: A "fail/continue" step method is available.
Runs scripts in background and foreground mode: Foreground = using the ITR to step through a case. Background = staged tests that run at timed intervals/quantities.
Has scheduling capabilities: Yes, through our test scheduler, which is part of the LISA Server.
Scheduling tool offers execution with dependencies: Yes, you would use our scheduler, and set the tests up in a test suite to define order.
Contains debugger: Yes, we can record variables and states with our reporting. No compiler is needed.
Allows for automatic synchronization between client and server: LISA offers automatic timing synchronization management, and can also be set to accept asynchronous inputs or a timing profile, but does not robot SAP application UIs.
Built-in error handling capability: Yes.
Built-in context recovery capability: LISA is very strong at reproducing test context and configurations (for SOA, Java, Web applications, not client UIs); you can define a process for failure to execute that brings the application back to a start configuration state.
Automatic timing for each step, process, and suite: Yes, we record those timings for response, which is part of our reporting and suite framework.
VII. Tool Data
User-defined data filtering on all views: All tests are stored as XML in a file system. We don't currently have an internal process to find values against a series of files.
All test assets stored as data in relational database: Test data can be stored in spreadsheets, XML, or central databases (which would enable database data filtering).
Database verification and data acquisition: You can view and acquire data by directly entering the value wanted or by writing SQL statements against the database, but you won't need to write code to compare values.
Provides Excel-based function (i.e., TRIM, MID, etc.) to clean up captured text: Yes.
Data-driven tests (i.e., pulls data from spreadsheets, external sources, etc.): Yes, very much.
Allows for verification points (objects, database values, text): Yes.
Tool offers regular expressions (i.e., text character matching): Yes, can be entered as part of any assertion.
Capabilities for creating external data files: Yes.
Allows data seeding and data correlation: Yes for multiple SOA components, Web Services, databases, etc. No for SAP apps/modules, which would require extensions.
Allows variable declaration: Yes.
Captures screen text (i.e., status bar messages): LISA can capture values from Web UIs only, and save them out to files.
Provides playback with multiple data access methods (i.e., random): Yes.
VIII. Tool Security
User and group security and permissions for each test asset component: We don't have a repository—uses existing development tools/groupware and permissions.
Allows SAP roles-based testing: No.
IX. Vendor Support
Vendor offers web-based patches, downloads to upgrade tool: Yes.
SAP Corporation has formally certified the tool: In process.
X. Training
Vendor offers test tools training: Yes.
Vendor offers certification examination in test tool: Not currently—iTKO directly refers service practitioners.
XI. Test Reporting and Review
Results logs store screen captures: Yes (Web UI views).
Results logs show status for each row of data (iteration): Yes.
Results logs include date and time stamp: Yes.
Results logs can be saved in different formats (i.e., HTML, .doc): Yes; you can store values in a database, too.
Creates automatic test results files (test logs): Yes, at a very high level of detail (you can see the state of the result at every layer).
User-defined query and reporting or charting capability: Yes.
English-narrative documentation produced automatically from test processes: No need for a "narrative," as you can see it as a self-explanatory graphical workflow.
BPEL integration and output: Coming October 2006.
Export to text capability for all test assets: Data and reporting information can be (but you wouldn't want to leave LISA).
User-extensible classes, actions, and functions: Yes.
User can extend interface with unlimited new attribute fields: Not available (LISA's easy to use, but it is the toolkit you use).
Allows many-to-one and one-to-many requirements traceability: No; not a test requirements manager (use RM of choice).
Supports full indirection for all test processes and data file names: Not automatically. You could apply this as an approach to building a workflow.
Is the tool language and platform independent?: Yes. LISA will run on almost any desktop client or Java 1.4+–compliant server (Linux/Unix, Windows, OSX, Solaris, HPUX, etc.).
Reprinted with permission from iTKO Inc.
149
EXHI
BIT
6.8
Test
Too
l Eva
luat
ion
Mat
rix
(Ven
dor:
IB
M)
Tes
t T
ool E
valu
atio
n M
atri
x
Too
l(s)
Nam
e:IB
M R
atio
nal F
unct
iona
l Tes
ter
Too
l Eva
luat
or(s
):Sw
athi
Rao
and
Shi
noj Z
acha
rias
Ven
dor
Nam
e:IB
MV
endo
r W
ebsi
te:w
ww
.ibm
.com
/rat
iona
lD
ate
of E
valu
atio
n:09
/11/
06T
ool T
ype:
(i.e
., SA
P R
ecor
d/Pl
ayba
ck S
crip
ting
Tes
t To
ol f
or r
egre
ssio
n, s
trin
g, in
tegr
atio
n, s
mok
e te
stin
g) (
Plea
se f
ill y
our
own.
Thi
sis
an
exam
ple.
)
Rat
iona
l Fun
ctio
nal T
este
r’s
exte
nsio
n fo
r SA
P en
able
s th
e SA
P us
ers
to p
erfo
rm a
utom
ated
fun
ctio
nal a
nd r
egre
ssio
n te
stin
g of
the
irSA
P (6
.2/6
.4 G
UI
Clie
nt a
nd 4
.6/4
.7 S
erve
r) a
pplic
atio
ns p
rovi
ding
the
m w
ith
all c
apab
iliti
es t
hat
RFT
has
to
offe
r su
ch a
sV
erif
icat
ion
Poin
ts, D
ata
Dri
ven
Test
ing,
Dyn
amic
Dat
a V
alid
atio
n, O
bjec
t M
aps,
Scr
ipt
Ass
ure
and
mor
e. I
t m
akes
SA
P te
stin
gsi
mpl
e, e
asy
and
flex
ible
by
crea
ting
scr
ipts
tha
t ar
e ro
bust
and
res
ilien
t to
app
licat
ion
chan
ges.
Criteria — Comments
I. Transaction Capture and Playback
Automated global changes for object maintainability: The object map is specifically designed to address the pain of script changes and deletions. The object map is automatically populated when a script is recorded, or you can manually add objects to the map. The map provides the testing team a single source to update when objects in the AUT are changed. By changing the map, all scripts that reference that object will use the updated object information. Furthermore, RFT has a feature called the "Object Map Find and Modify" utility, which enables you to find all objects that match criteria such as property names, property values, or various custom filters. Actions can then be taken on the matching objects to Add Property, Remove Property, Change Value, and/or Change Weight. Modifications can be applied to objects one at a time or globally.
When the object is renamed in the Object Map, it will immediately reflect in all scripts that refer to that object. However, if objects are deleted from the script's Object Map, RFT does not automatically delete all the references to that object in the script. Instead, it will show compilation errors at the instances where the deleted object has been used in the script.
Automated impact analysis for application changes: The map provides the testing team a single source to update when objects in the AUT are changed. By changing the map, all scripts that reference that object will use the updated object information. RFT uses object-oriented technology to identify objects by their internal object properties and not by screen coordinates or object names. So if the location or text or any other specific property of the object changes from build to build, RFT can still find it and proceed with playback without breaking our scripts. "ScriptAssure" is the technology that makes the test scripts immune to object property changes between software builds.
Tests can be developed concurrently with software development: RFT does provide the ability to code the entire script without having any objects in the map. What this means is that we can hand-code the entire script without using the recorder. So, as long as the tester/developer knows the hierarchy and properties of the object in the application, he/she can proceed with creating test scripts. If the user is looking to perform keyword-driven testing with RFT, then RFT integrates with an open-source keyword-driven testing framework called "SAFS" (Software Automation Framework Support), which does just that.
No script coding required: Yes, no scripting is required. RFT is used to record manual interactions with the application under test. Recording is, however, transparent to the user, in the sense that, while the user performs the test, RFT will on the fly make a script of all the user activities. You can also add Verification Points, Data-Driven tests, Comments, Log Messages, Timers, etc., while recording. Once the scripts are recorded, the test scripts can be used for unattended execution.
Tool supports recording of non-SAP applications: Yes. RFT is an automated functional and regression testing tool for testing Java, .NET, web-based, Siebel, SAP, and terminal-based applications.
Common scripting language (i.e., VB): Rational Functional Tester is available in two scripting languages—Java and VB.NET. The Java language is supported using the Eclipse (open-source) IDE, while the VB.NET language is supported using the Visual Studio .NET IDE.
Allows RFCs to be called: RFT provides developers and more advanced testers the ability to work the script code directly using the choice of VB.NET or Java language based on their skill set. The users can leverage the power of the language themselves to perform complex tasks.
Produces automatic optional steps: No; currently this feature is not provided by RFT.
Has analog and object recording capabilities: RFT does not support analog recording capability. However, using scripting (hand-coding), the same task can be achieved. RFT does provide the ability to record on all SAP objects, with the ability to recognize these objects independent of the object location.
Has repository for managing the properties of recorded objects: The Functional Tester test object map lists the test objects in the application-under-test (AUT). The object map is a static, hierarchical representation of the objects in the application where attributes and properties of captured or recorded objects are stored and maintained.
Think times can be added without programming or code changes: The think times can be added while recording using the Script Support functions provided on the recording toolbar. Even after the recording is completed, the user can insert steps, such as think times, by clicking on the "Insert Recording" option in the Script menu, which brings up the recorder, thus providing the Script Support functions.
Test tool allows for creation of user-defined functions: Yes. RFT is supported in two industry-standard languages, Java and VB.NET, which provides users with the ability to create their own functions that can be accessed from within and across test scripts.
Test tool offers keyword-driven tests: No. Currently this feature is not supported.
Has interactive captured screen of captured/recorded process: No. Currently this feature is not supported.
If tool offers captured/recorded screen, user can modify script logic through the captured screen: No. Currently this feature is not supported.
Allows renaming of labels for captured fields: Yes. The object names can be changed from the script explorer or test object map.
Allows adding of start and stop watches: Yes. Timers can be included in the script while recording by using the Script Support Function from the recording toolbar.
Vendor offers library of prerecorded SAP scripts/processes with tool: No.
II. SAP Supported Versions, Applications
Compatible with SAP bolt-ons (i.e., BW, SRM, APO, C-folders, CRM, etc.): Yes. As long as these packages can be accessed via the SAP GUI, we support testing them.
Supports SAP GUI, Citrix, and Portals: SAP Server 4.6 and 4.7 and GUI Client 6.2 and 6.4.
III. Tool Maintenance
Allows toolbar customizations: Yes.
IV. Tool Installation
Tool installation is web-based or required desktop GUI installation (fat or thin client installation)?: The tool installation is not web-based. However, the tool uses the IBM Installation Manager, which can point to a local/remote location for installing the product.
Vendor offers floating licenses: Yes.
V. Tool Integration
Stores test assets in MSDE, SQL Server, or Oracle: Test assets are stored as flat files.
Integrates with Solution Manager: No.
If tool integrates with Solution Manager, capabilities exist to execute recorded scripts from Solution Manager: N/A.
Integrates with test management tool: Yes. RFT integrates with two of Rational's test management solutions (i.e., with the new-generation tool CQTM and with the legacy tool TestManager).
If test management tool exists, does it offer version-control capabilities? Or integrate with third-party tool for version control?: RFT can be directly integrated with ClearCase, the Configuration Management solution from Rational. The test management solutions that RFT integrates with (CQTM/TM) also integrate with ClearCase.
Integrates with eCATT: No.
Integrates with test tools other than eCATT: No.
Open API to integrate with other tools, languages: No.
VI. Tool Execution
Decision-making options for each test step on pass or fail: No.
Execution control allows single-step, breakpoints, screen captures, variable and data monitoring: Yes. RFT uses the IDE's debugger to debug every step recorded, providing visibility into the value of data at any point. Screen snapshots are automatically generated only for fatal errors. For capturing screen snapshots otherwise, RFT exposes APIs that can be included in the script.
Capabilities to run unattended and skip failed iterations: RFT does provide the ability to run the scripts unattended. However, it will stop at the first failure encountered.
Runs scripts in background and foreground mode: Scripts can only be run in the foreground mode.
Has scheduling capabilities: No.
Scheduling tool offers execution with dependencies: The test management solution RFT integrates with does provide the capability to add dependencies to test execution. However, no scheduling capabilities are provided.
Contains debugger: RFT uses the IDE's debugger.
Allows for automatic synchronization between client and server: Yes. RFT provides the capability to wait for the existence of an object (with a timeout specified) that will enable the user to provide timing synchronization.
Built-in error handling capability: Yes. Meaningful exceptions are thrown at failures.
Built-in context recovery capability: No.
Automatic timing for each step, process, and suite: No.
VII
. Too
l Dat
a
Use
r-de
fine
d da
ta f
ilter
ing
on a
ll vi
ews
No.
All
test
ass
ets
stor
ed a
s da
ta in
rel
atio
nal
No.
data
base
Dat
abas
e ve
rifi
cati
on a
nd d
ata
acqu
isit
ion
No.
Thi
s fe
atur
e is
com
ing
shor
tly.
Prov
ides
Exc
el-b
ased
fun
ctio
n N
o.(i
.e.,
TR
IM, M
ID, e
tc.)
to c
lean
up
capt
ured
tex
t
Dat
a-dr
iven
tes
ts (
i.e.,
pulls
dat
a fr
om
Yes
. RFT
allo
ws
data
-dri
ven
test
ing
by u
sing
dat
a fr
om a
n ex
tern
al f
ile, a
spre
adsh
eets
, ext
erna
l sou
rces
, etc
.)da
tapo
ol (
a .c
sv f
ile, a
tab
del
imit
ed o
r co
mm
a se
para
ted
file
), a
s in
put
to a
tes
t.
Allows for verification points (objects, database values, text): Yes. The tool provides two types of VPs—Object Data Verification Point and Object Properties Verification Point—to capture an object's state and data during a test.
Tool offers regular expressions (i.e., text character matching): Yes.
Capabilities for creating external data files: Using the power of scripting languages (Java or VB.NET), this capability can be achieved.
Allows data seeding and data correlation: Yes. SAP transactions can be strung together within the same script or automated test case.
Allows variable declaration: Yes.
Captures screen text (i.e., status bar messages): Yes.
Provides playback with multiple data access methods (i.e., random): Yes, supports both random and sequential access.
VIII. Tool Security
User and group security and permissions for each test asset component: The test management solution RFT integrates with provides this capability.
Allows SAP roles-based testing: No. If RFT is integrated with TestManager, then TestManager allows limited role-based testing capabilities. But RFT as a tool itself does not allow role-based testing.
IX. Vendor Support
Vendor offers Web-based patches, downloads to upgrade tool: Yes.
SAP Corporation has formally certified the tool: No.
X. Training
Vendor offers test tools training: Yes.
Vendor offers certification examination in test tool: Yes.
XI. Test Reporting and Review
Result logs store screen captures: Screen snapshots are automatically generated only for fatal errors. For capturing screen snapshots otherwise, RFT exposes APIs that can be used to include the snapshots in the log files.
Results log shows status for each row of data (iteration): Yes. RFT logs show every significant action performed on each iteration.
Results log includes date and time stamp: Yes.
Results log can be saved in different formats (i.e., HTML, .doc): Yes. It can be stored as Text files, HTML files, TPTP Log files, and TestManager Log files.
Creates automatic test results files (test logs): RFT provides an option to automatically generate logs. However, if a Verification Point has passed, it does not show both actual and expected result. The Verification Point Comparator shows actual and expected values only when the VP has failed. For passed VPs, one can view only the expected result.
User-defined query and reporting or charting capability: No.
English-narrative documentation produced automatically from test processes: Yes. For high-level actions, the documentation (comments) is generated automatically.
Export to text capability for all test assets: Yes. The test assets such as the Test Project with its Scripts and the related assets, as well as the Datapools, can be exported. However, they cannot be reused by any "other" testing tool.
User-extensible classes, actions, and functions: No.
User can extend interface with unlimited new attribute fields: No.
Allows many-to-one and one-to-many requirements traceability: This is possible when RFT is integrated with its test management tool (CQTM or TM).
Supports full indirection for all test processes and data file names: Yes.
Is the tool language and platform independent?: No. It generates either Java or .NET scripts, which cannot be used interchangeably. However, RFT supports cross-platform capabilities. This means scripts generated on a Windows platform can be executed on a Linux platform and vice versa.
Reprinted with permission from IBM.
The tools were evaluated based on several factors for the following categories:
■ Transaction capture and playback
■ SAP-supported versions
■ Applications
■ Tool maintenance
■ Tool installation
■ Tool integration
■ Tool execution
■ Tool data
■ Tool security
■ Vendor support
■ Training
■ Test reporting and review
The criteria used for evaluating the vendors assist in the acquisition of SAP-specific test tools. The vendors in these exhibits provided self-evaluations for the capabilities of their test tools. Exhibit 6.1 provides descriptions for the criteria factors used to evaluate the vendors. Exhibits 6.2 through 6.8 are the actual evaluations of all vendors, listed in alphabetical order.
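The exhibits are, in effect, criteria-by-vendor matrices. As a purely illustrative sketch (the vendor names and answers below are placeholders, not the evaluations reprinted in the exhibits), such a matrix can be captured as data and tallied programmatically:

```python
# Illustrative sketch only: representing a test tool evaluation matrix
# as data and tallying "Yes" answers per vendor. Vendor names and
# answers here are placeholders, not the evaluations from the exhibits.

criteria = [
    "Integrates with Solution Manager",
    "Has scheduling capabilities",
    "Allows SAP roles-based testing",
]

# answers[vendor][criterion] -> comment string from the matrix
answers = {
    "Vendor A": {c: "Yes." for c in criteria},
    "Vendor B": {
        "Integrates with Solution Manager": "No.",
        "Has scheduling capabilities": "No.",
        "Allows SAP roles-based testing": "Yes.",
    },
}

def yes_count(vendor):
    """Count criteria whose comment starts with 'Yes'."""
    return sum(1 for c in answers[vendor].values() if c.startswith("Yes"))

for vendor in answers:
    print(vendor, yes_count(vendor), "of", len(criteria))
```

A real evaluation would also weight criteria by their importance to the project rather than counting every "Yes" equally.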
SOURCES OF AUTOMATION
In an SAP environment, various sources provide information for automating processes to support string, integration, regression, or performance testing. Automation within an SAP environment includes automation of processes within R/3, SAP bolt-ons (i.e., SRM, SAP CRM Sales Internet System, Employee Self-Service—ESS, Cross-Application Time Sheets—CATS, Advanced Planning Optimization—APO), and other external applications interfacing directly with R/3.
Test Tool Review and Usage

The business process master list (BPML) provides a listing of SAP transaction codes, either custom or out-of-the-box, that are in scope for particular SAP releases. From the BPML, standalone SAP transaction codes can be identified for potential SAP automation. Automated test cases for individual SAP transaction codes can be strung together to form larger end-to-end scenarios. For instance, the BPML may show that these transactions are in scope: VA01 and VL01, which are used for sales order creation and deliveries; these transaction codes can be automated independently of each other as standalone automated test cases and then strung together to deliver a newly created sales order. Additional individual transaction codes from the BPML can be automated until the entire order fulfillment process is automated, including shipping, invoicing, credit checks, and so on.
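The stringing together of standalone test cases can be sketched in code. This is a hypothetical illustration only: the run_va01/run_vl01 functions and the status bar text are stand-ins for recorded playback scripts, not any real tool's API.

```python
import re

# Hypothetical sketch: stringing standalone automated transaction test
# cases (VA01 = create sales order, VL01 = create delivery) into one
# end-to-end scenario. The run_* functions stand in for recorded
# playback scripts; names and return values are illustrative only.

def run_va01(material, quantity):
    """Simulate playback of an automated VA01 test case.

    Returns the sales order number parsed from the status bar message.
    """
    status_bar = "Standard Order 12345 has been saved"  # emulated output
    match = re.search(r"Order (\d+) has been saved", status_bar)
    assert match, "VA01 verification point failed"
    return match.group(1)

def run_vl01(sales_order):
    """Simulate playback of VL01, consuming the order created by VA01."""
    return {"delivery_for": sales_order, "status": "created"}

def order_fulfillment_scenario():
    # Each standalone test case runs in sequence; the output of one
    # (the order number) becomes the input of the next.
    order = run_va01(material="M-01", quantity=10)
    delivery = run_vl01(order)
    return delivery

result = order_fulfillment_scenario()
print(result)  # {'delivery_for': '12345', 'status': 'created'}
```

Because each transaction's test case stands alone, new steps (shipping, invoicing, credit checks) can be appended to the chain without rewriting the earlier ones.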
Other sources of information that assist in the automation of test cases include business process procedures (BPPs), which contain training information for transaction codes; flow process diagrams, which can demonstrate how a process flows end-to-end, including multiple transaction codes; and SAP roles. Functional and technical requirements and specifications also provide further information for automating test cases containing transaction codes, reports, and workflow.
Documented test cases with expected results can be drafted from the functional and technical requirements. The test team can leverage the documented test cases to construct automated test cases that include SAP verification points. Verification points are system outputs that can be inspected; for instance, when an SAP sales order is created, SAP generates a status bar message indicating that a sales order has been created, which may appear generically as "Sales Order Number XX has been generated." Other verification points include checking financial figures on reports, ensuring that workflow messages are routed to the correct user, and checking quantities within a table.
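A verification point of this kind can be sketched as a simple pattern match on the captured output. This is an illustrative example only; the message text and pattern are hypothetical, not the exact strings SAP or any particular test tool produces.

```python
import re

# Hypothetical sketch of an automated verification point: inspect a
# captured SAP status bar message and extract the generated document
# number. The message and pattern below are illustrative placeholders.

def verify_sales_order_message(status_bar_text):
    """Return the order number if the message matches, else None."""
    pattern = r"Sales Order Number (\w+) has been generated"
    match = re.search(pattern, status_bar_text)
    return match.group(1) if match else None

captured = "Sales Order Number 4711 has been generated"
order_number = verify_sales_order_message(captured)
assert order_number == "4711"  # verification point passes
assert verify_sales_order_message("Update terminated") is None  # fails
```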
TYPE OF TESTS SUITABLE AND CRITERIA FOR TEST AUTOMATION
Test tools can support playback and execution of test cases for the following testing efforts:
■ Smoke (testing of vital components for each official build)
■ Regression
■ Scenario
■ Integration
■ Performance
Test cases can be automated when the environment is stable and not subject to frequent system changes. Automation efforts on a test environment that experiences frequent system changes cause much rework, since automated test cases would have to be changed and amended to meet a new system baseline or configuration changes. As a rule of thumb, automation on an object should not be attempted until the object has been demonstrated to first execute successfully manually. Furthermore, it is not practical to attempt to automate all test cases.
Many SAP projects attempt limited test case automation for initial system releases, since the system is unstable and undergoing many changes due to identified defects, requirement changes, or requirement misinterpretations. Automation efforts are typically enhanced and expanded for production support when system changes are introduced due to OSS (Online Service System) notes, new functionality being requested, defects, system patches, system upgrades, and so on. For regression testing, sunny- and rainy-day scenarios are automated and played back to ensure that the introduction of new system changes does not affect previously working system functionality.
Scenario testing is the equivalent of string testing, since processes are tested within a single enterprise area (or module). Scenario testing is the precursor to integration testing and is usually the follow-on test for unit testing. Automation attempts during the scenario-testing phase may be hindered by the poor resolution of defects during unit testing, since that would cause the system to become unstable for scenario testing. Limited automation can be attempted toward the end of the scenario test, when the system has demonstrated that it can successfully execute processes manually.
Integration testing for an initial SAP implementation may consist of three or more iterations, whereby the system is tested during iteration one for the most important processes and during iteration two for all processes; iteration three is used to address any or all defects that remain outstanding from iterations one and two. Automated test cases can assist and expedite the integration-testing cycle, since many processes may have to be retested when defects are discovered and resolved.
Performance, load, volume, and stress testing in general require multiple end users to execute simultaneous keystrokes to generate system traffic and identify the application's bottlenecks, degradation points, and so on. Depending on the size and scope of the SAP implementation, a performance test may require that hundreds or thousands of end users execute processes simultaneously while system response times are manually collected. Organizing and scheduling a performance test consisting of synchronized end users spread out in multiple locations may prove difficult, if not impossible, in particular when a performance test has to be repeated multiple times over a short time window. Test case automation is highly suggested and appropriate for a performance test, since it reduces many of the problems associated with coordinating, scheduling, and training multiple end users. Test case automation allows end users to be emulated as virtual users, which reduces the headcount needed for a manual performance test; it also instantly collects results and produces graphs and charts at the conclusion of a performance test.
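The virtual-user idea can be sketched in a few lines of Python. The `run_virtual_users` helper and the dummy transaction below are invented for illustration; a commercial performance test tool would replay recorded SAP transactions instead of the stand-in function, but the principle — concurrent emulated users with response times collected automatically — is the same:

```python
import threading
import time
import statistics

def run_virtual_users(transaction, num_users=50):
    """Emulate concurrent end users and collect per-user response times.

    `transaction` stands in for one scripted business process; a real
    tool would replay a recorded SAP transaction here.
    """
    results = []
    lock = threading.Lock()

    def virtual_user(user_id):
        start = time.perf_counter()
        transaction(user_id)                 # execute the scripted process
        elapsed = time.perf_counter() - start
        with lock:                           # results collected instantly, no manual timing
            results.append(elapsed)

    threads = [threading.Thread(target=virtual_user, args=(i,))
               for i in range(num_users)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()

    return {
        "users": num_users,
        "avg_response": statistics.mean(results),
        "max_response": max(results),
    }

# A dummy transaction standing in for an SAP order-entry script.
report = run_virtual_users(lambda uid: time.sleep(0.01), num_users=20)
print(report["users"], round(report["avg_response"], 3))
```

One tester can launch such a run unattended, which is the headcount reduction the text describes.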
Exhibit 6.9 provides criteria that help test engineers and test managers decide which business processes are suitable for test automation.
EXHIBIT 6.9 Criteria to Determine Whether Automation Is Necessary

Criteria pertaining to Automation:
■ Test script requires the verification and validation of multiple attributes, objects, and components.
■ Test script needs to be executed with external data that resides in a database system or spreadsheet.
■ Business process to be automated has a finite number of input values and fields and was constructed with an orthogonal array to provide coverage for all its permutations.
■ Test script will be used for security testing.
■ Test script will be used for one or more of these: stress, volume, load, soak, performance testing (add a point for each type of test that it meets).
■ Test script will be used for regression or functional testing.
■ Test script will be used to validate values, calculations, etc., displayed on a customized report or online report.
■ Test script will be repeated many times or is highly repetitive (i.e., test script will need to be executed multiple times on different software releases or builds).
■ Test script will be played back with multiple sets of data (i.e., data-driven scripts, parameterized scripts).
■ The application under test has a stable environment with repeatable conditions.
■ Test script will be used for integration testing and requires correlation across multiple business processes.

Criteria pertaining to Manual testing:
■ Test script will be used to kick off or launch programs that already have batch scheduled jobs (i.e., interface programs, conversion programs).
■ Test script will be used for negative testing.
■ Test script will be used for analog testing or bitmap testing.
■ The application under test where the test script will be recorded is constantly changing.
■ Test script has test steps to display objects, figures, GUIs, etc., that the automation tool does not recognize.
■ Test script will be used for intuitive testing or lateral testing (i.e., recording of test script is predicated on intuitive knowledge of the application or depends on error guessing).
■ Test script will be used to test how objects are displayed or captured on a screen and is not testing the application's functionality (i.e., testing to see how objects are displayed via an emulated desktop session, which varies from desktop to desktop based on screen coordinates, size of screen, pixels, etc.).
■ Test script will be used for usability testing.
■ Test script and business process require recording of business processes or test steps with more than two distinct recording tools.
■ Testing process has physical requirements such as the use of hardware equipment or mechanical devices (i.e., scanning serial numbers with a bar-coding machine).
■ Test script will be executed only once, or will be used only on a single release or build of the application and NOT during subsequent releases or builds of the software.
■ Test script will be used to automate a business process that does not yield predictable or static results.
■ Test scripts need to be executed immediately (i.e., the test scripts need to be executed within 30 hours or less).
■ Test script is for a process that is highly complex on an application that has many custom controls, and once automated the test script will be extremely difficult to maintain, modify, or reuse.
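The two columns of Exhibit 6.9 lend themselves to a simple tally: score a candidate business process against each list and let the larger count suggest the approach. A minimal sketch of that idea follows; the criterion strings and the tie-breaking rule are illustrative assumptions, not part of the book's checklist:

```python
# Hypothetical scorer for an Exhibit 6.9-style checklist: each matched
# criterion counts one point toward "automate" or "manual".
AUTOMATION_CRITERIA = [
    "regression or functional testing",
    "data-driven playback with multiple data sets",
    "stable environment with repeatable conditions",
    "highly repetitive across releases or builds",
]
MANUAL_CRITERIA = [
    "negative testing",
    "usability testing",
    "executed only once",
    "constantly changing application",
]

def recommend(matched):
    """Return 'automate', 'manual', or 'undecided' for a set of matched criteria."""
    auto = sum(1 for c in AUTOMATION_CRITERIA if c in matched)
    manual = sum(1 for c in MANUAL_CRITERIA if c in matched)
    if auto == manual:
        return "undecided"
    return "automate" if auto > manual else "manual"

print(recommend({"regression or functional testing",
                 "stable environment with repeatable conditions",
                 "executed only once"}))  # → automate
```

In practice the "undecided" case is where the ROI factors discussed next come in.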
In addition to the criteria provided above, it is highly beneficial to automate test cases that maximize the return on investment (ROI) for the efforts spent on automating a process. The following factors must be evaluated and considered before automation of a test scenario is attempted:
■ Frequency. Number of times that a given test scenario is expected to be executed manually within a 12-month period.
■ Execution duration. Based on historical evidence (or resource expertise), how long does it take to execute the test scenario manually, including recording results for the test run?
■ Preparation. How much time does it take to plan or rehearse a test scenario that will be executed manually?
■ Stability. Based on historical data or functional specs, how many times has the underlying process been modified or reconfigured?
■ Number of assigned testers. How many individuals or resources are dedicated to manually executing a test scenario that cuts across multiple SAP modules?
With these criteria and factors, a hypothetical scenario can be created to objectively determine whether it is practical to automate a test scenario:
Hypothetical test scenario: Order-to-cash scenario.
Total time to execute manually, including recording test results: 25 hours.
Frequency: Executed five times during the year to support major system releases.
Stability: Process is subject to few minor modifications per year (two minor modifications). Fairly static.
Preparation: On average, 15 hours are spent manually rehearsing the test scenario before it is fully executed.
Number of assigned testers: Three testers (having expertise in the project systems [PS], finance [FI], and sales and distribution [SD] modules).
Given these metrics, it is possible to estimate with some margin of error that between preparation and execution of the manual test case (including manually recording test results) approximately 200 man-hours per year are spent executing the order-to-cash scenario. This does not include the time needed to manually modify the documentation for the test case when the order-to-cash scenario is subject to configuration changes, or the time needed to coordinate the multiple resources that are necessary for executing the test scenario.
With an automated framework in place, one can review the following statistics and metrics needed to automate the order-to-cash scenario and determine whether doing so is cost effective:
Total hours needed to automate test case (including functional support): 80 hours.
Time needed to execute process with automated test tools (including automatic test results [logs] generated by the test tools): 2 hours.
Number of resources needed to execute automated test case: One at most, since the automated test case can be scheduled to run unattended.
Preparation time needed to execute automated test case: 5 hours.
Under the hypothetical scenario of automating from scratch and executing the automated test case for order-to-cash, it is estimated that for the first year it would take 115 man-hours to execute the automated test case. For subsequent years it would take 35 man-hours, since the automated test case has already been constructed, whereas executing the process manually is a fixed number of man-hours: 200 man-hours per year, subject to the availability of the testing resources and their level of expertise. This analysis points objectively, based on certain assumptions, to a case in favor of automation. With a similar analysis, projects can employ an objective approach for deciding which scenarios to automate.
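The arithmetic behind these figures can be reproduced in a few lines. The numbers are the chapter's hypothetical order-to-cash values, and the comparison is a simple man-hour tally, not a formal ROI model:

```python
def yearly_hours_manual(runs, exec_hours, prep_hours):
    # Manual cost recurs in full every year.
    return runs * (exec_hours + prep_hours)

def yearly_hours_automated(runs, exec_hours, prep_hours, build_hours=0):
    # build_hours is the one-time scripting effort (first year only).
    return build_hours + runs * (exec_hours + prep_hours)

# Order-to-cash figures from the hypothetical scenario above.
manual = yearly_hours_manual(runs=5, exec_hours=25, prep_hours=15)
auto_year1 = yearly_hours_automated(5, 2, 5, build_hours=80)
auto_later = yearly_hours_automated(5, 2, 5)

print(manual, auto_year1, auto_later)  # → 200 115 35
```

Automation pays for itself within the first year here because the 80-hour scripting cost is recovered before the fifth run; the break-even point shifts if the process changes often enough to force rescripting.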
AUTOMATION TRANSCENDS TESTING
Test tools are helpful for testing, but they also serve other purposes that are not related to validation of system, functional, and technical requirements. Test tools offer benefits that can expedite and facilitate SAP activities. Test tools in an SAP implementation can bring the following benefits, which help to increase the ROI:
■ SAP ad-hoc data loads. Test tools can automate processes and SAP transaction codes for infrequent or one-time data loads for events such as training, environment setup, and so on. For instance, an automated test case can be created to load thousands of SAP pay scales within hours. Other examples of data loads include loading materials, vendors, and wage types.
■ Prototypes/demonstrations for end users. As part of an informal user acceptance test or client demonstration of the SAP system, processes can be automated and played back repeatedly to show expected system functionality in demos or prototypes.
■ Verification of objects. Security settings, SAP variants, and other system settings can be inspected and verified with test tools. An example includes automating processes that retrieve the data values of the variants for all advanced business application programming (ABAP) programs and extract the information into a spreadsheet for inspection, as opposed to verifying each ABAP program variant manually, one at a time.
■ Repetitive nontesting tasks. In SAP, there are many repetitive tasks for setting up the system and initializing processes that are not related to a specific testing effort. For instance, the security team has to create, update, and maintain users. These roles and tasks can be performed with automated test tools.
METHODS OF AUTOMATION
In SAP projects, the most common method employed for automation is to provide the resource knowledgeable on the test tools with documented test cases from which to create and design automated test cases. Frequently, the functional team leaders will not release resources to support automation efforts because the functional resources are time constrained. Automation of processes requires support and knowledge from different project members, since documented test cases are usually not kept up to date with the system configuration, which diminishes their value. Exhibit 6.10 offers techniques and methods for automating test cases.
SIGNS OF TEST AUTOMATION FAILURE
SAP projects that have purchased commercial test tools and are suffering from the conditions below will probably need to either scrap all test automation efforts, outsource the automation efforts, or dedicate more project resources (i.e., subject matter experts [SMEs], functional analysts, configuration team members, and test engineers) to the automation activities:
EXHIBIT 6.10 Automation Approaches

Videotaped business process
Description: A functional expert (usually remote) captures a business process in the form of video (with voice) and sends e-mail to the automation expert.
Tools: Lotus Cam
Advantages: Solves time differences; can be played back against the SAP system without interacting.
Disadvantages: Speed during playback; no interactive forum for Q&A.

Shared session
Description: A functional expert shows the automation expert, through an emulated session, how to record a business process.
Tools: Webex, Citrix, Netmeeting, PC Anywhere
Advantages: Can be interactive with the functional expert over the phone; allows remote interaction.
Disadvantages: Connection speeds (lags); software installation; security (?).

Sitting side by side
Description: A functional expert sits next to the test tool expert and provides instructions for how to navigate a process within SAP.
Advantages: Correct SAP navigation is ensured, including work-arounds; overcomes outdated artifacts.
Disadvantages: Takes the functional expert away from primary tasks; additional expertise may be needed to refine the script.

Detailed documentation
Description: The test tool expert follows detailed written documentation to document a process.
Tools: BPPs, test cases, test scripts
Advantages: Allows the test tool expert to work independently; allows the test tool expert to map failures to test scripts.
Disadvantages: Time consuming to produce detailed documentation; documentation becomes obsolete if not managed through version control.
■ Test engineers are automating processes without any established guidelines and criteria.
■ Test managers or senior managers are unaware or confused as to what was previously automated and how it pertains to the project's future testing cycles.
■ Test engineers or experts with the automated test tools are hired or contracted but do not have access to SMEs, business analysts (BAs), or configuration team members to construct automated test cases.
■ Project has woefully inadequate documentation for test cases, BPPs, or flow process diagrams, which hinders the ability of the test engineer to understand the project's business processes and business rules; thus he or she cannot develop suitable test cases or verify testable conditions.
■ Test engineers construct and design automated test cases in an environment that is subject to frequent changes, and the configuration or development changes are not clearly communicated within the project, which causes automated test cases to fail during playback and causes much rework.
■ The project does not have dedicated or expert resources for test case automation and instead uses "fillers" or individuals from other teams who have different primary job responsibilities.
■ The project implements an automated test case strategy only with individuals who have recently come out of a 1- to 4-day training class for test case automation, and the project does not have a test case automation mentor or expert for the recently trained resources.
■ Test tools have outdated versions and have not been upgraded, and there is no dedicated project resource for maintaining the test tools.
■ Project members lose faith in the test tool because it takes too much time to automate a test case when automation is initially attempted, and complain that executing a manual task over a 10-hour time window is much quicker than attempting to automate the same process over five business days. Initial automation efforts are extremely time consuming, and ROI for automated test tools is not realized until the automated test cases are executed frequently in future testing cycles.
The preceding list represents manifestations that the test case automation approach is on a collision course, which usually causes many projects to abandon automation attempts in favor of manual testing. When these signs appear, the test manager or project manager may have to reevaluate the need for automating test cases, or consider whether a third-party provider can deliver automation much more efficiently.
TEST MANAGEMENT TOOLS
Test management tools are used for test planning, test repository management, test design, test execution, defect reporting and tracking, reporting, and audit trails. Some test management tools can be integrated with commercial tools from third-party companies for purposes such as version control and requirements management.
Within a test management tool, a testable requirement can be linked or associated to a test case, and after the test case is executed the requirement is automatically updated with either a pass or fail status. Test management tools can also be used to store and execute both automated and manual test cases. Furthermore, test results, test logs, and defects can be stored within a test management tool with date/time stamps and audit trails. Some test management tools also offer e-mail workflow capabilities for reported and closed defects.
Arguably, the biggest advantage of test management tools is that they offer a single repository where all test artifacts can be safely and securely stored, as opposed to storing information on test cases in disparate and disconnected spreadsheets, shared drives, or e-mails. Because test management tools are a single repository, data can be collected from a single source, which increases the transparency of the testing effort and provides greater visibility into the testing progress. For example, test management tool permissions and authorizations can be granted to project members to generate real-time graphs, charts, and reports to analyze how many test cases have been designed and executed, how many defects with a priority of "1" remain outstanding, or how many requirements have been covered. The ability to generate real-time testing metrics from a test management tool
allows senior management and test managers to make informed decisions for supporting a go/no-go decision or an exit-criteria decision based on objective information.
Another benefit of test management tools is that they offer a history for planning and estimating the costs of future testing cycles. Many SAP projects struggle with questions such as how much of the budget should be allocated to testing activities, how many billable hours should be spent on testing, and how to create a project schedule with accurate dates and activity durations for the planning and execution of test cases. With a test management tool, data such as how long it actually took to design, plan, and execute a test case or resolve a defect can be easily extracted and used as a baseline for planning and estimating the costs and number of resources needed for a future testing cycle. This information cannot be easily extracted from disparate spreadsheets or e-mails, particularly for projects that experience employee, consultant, or subcontractor turnover after a major testing cycle is completed.
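The real-time metrics described above amount to simple aggregation over a single repository. A sketch of the idea follows; the record layout is invented for illustration, since real test management tools expose this through their own reporting modules:

```python
# Hypothetical test-repository records; a real test management tool
# would supply these from its single central store.
test_cases = [
    {"id": "TC-01", "status": "pass"},
    {"id": "TC-02", "status": "fail"},
    {"id": "TC-03", "status": "pass"},
    {"id": "TC-04", "status": "not run"},
]
defects = [
    {"id": "D-01", "priority": 1, "open": True},
    {"id": "D-02", "priority": 2, "open": True},
    {"id": "D-03", "priority": 1, "open": False},
]

# Aggregate the figures a go/no-go review would ask for.
executed = [t for t in test_cases if t["status"] != "not run"]
pass_rate = 100 * sum(t["status"] == "pass" for t in executed) / len(executed)
open_p1 = sum(1 for d in defects if d["priority"] == 1 and d["open"])

print(f"executed {len(executed)}/{len(test_cases)}, "
      f"pass rate {pass_rate:.0f}%, open priority-1 defects: {open_p1}")
```

The point is not the code but the precondition it illustrates: these counts are only trustworthy when every test case and defect lives in one repository rather than scattered spreadsheets.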
CHAPTER 7

Quality Assurance Standards
The concept of quality assurance (QA) in an SAP environment conjures the perception that QA is the equivalent of configuring the quality management (QM) module. While QM configuration helps end users perform tasks such as lot and source inspections, the act of configuring QM does not introduce QA to the design, configuration, or development of the SAP solution. Projects that adhere to recognized methodologies and philosophies such as Six Sigma, the Capability Maturity Model (CMM), and Institute of Electrical and Electronics Engineers (IEEE) standards are most likely to successfully enforce QA standards.
The QA team can perform the following activities within an SAP implementation:
■ Ensure that documentation for deliverables such as business process procedures (BPPs), flow process diagrams, and technical and functional specifications is in accordance with documented standards.
■ Ensure that all mandatory information is documented and approvals are granted before an object is transported into the production environment.
■ Ensure that project members assigned to testing tasks and execution of test cases adhere to the procedures and methodology outlined in the test plan and test strategy (i.e., using appropriate templates, version control for objects).
■ Spot-inspect project deliverables and document scorecards for measuring and evaluating the compliance of project teams with documented QA standards.
■ Document lessons learned.
■ Train and mentor project members on QA standards.
■ Ensure that the project's test cases and requirements are aligned with the project's scope.
■ Enforce QA standards and audit the deliverables and work products subject to QA standards.
■ Provide quality gates to substantiate the exit criteria for a testing cycle.
The QA team can help to create, define, and enforce quality standards. Standards govern how deliverables and work products are created, peer reviewed, and accepted. QA standards are needed to design test cases, plan the testing efforts, define the testing criteria, report test results, and resolve reported defects. Depending on a project's budget, the line blurs between the QA team and the test team, and individuals assigned to the test team must also fulfill the expected role(s) of QA team members. In theory, the concept of QA relates to preventing defects, whereas testing relates to detecting defects. The assumption is hereby made that test team members fulfill both testing and QA roles. Defined and implemented QA standards must be fit for purpose, get buy-in from project managers and/or team leaders, and be supported by the QA charter; otherwise, they risk becoming obsolete and difficult to enforce. QA representatives must document training materials and provide training for project members who are expected to adhere to QA standards.
TEST PLAN AND STRATEGY
The test plan and test strategy are the documentation that addresses the details, procedures, and approach for testing. The test plan and test strategy explain the "how" and "what" of testing. The test manager defines the test strategy in the project preparation phase and finalizes the documentation for the test strategy in the blueprint phase. The test manager should obtain buy-in from various project stakeholders, such as the project manager, the configuration manager, the development manager, and so on, to document the test strategy, as the test strategy needs to be realistic for the project given its deadlines, available resources, and budget allocation. The SAP ASAP methodology within SAP's Solution Manager platform offers an accelerator white paper for documenting the test strategy. IBM's Ascendant methodology for implementing SAP also provides a template for a test strategy.
The white paper test strategy from Solution Manager defines the expected roles for the various testing efforts and the recommended test system for each testing effort, and also provides definitions for each SAP testing phase from unit testing through system testing, which helps the project members standardize their testing terms and nomenclature. (Note: The following website provides a glossary of testing terms: www.erp.navy.mil/util/browse.asp?glossaryletter=p.) The generic test strategies from either Solution Manager or Ascendant can be customized to meet a project's specific testing needs. For instance, a test strategy may need to be modified to include a framework for test case automation or outsourcing of test cases for offshore execution. The test strategy can also address what documents should be retained at the end of each testing phase.
Specific test strategies can be developed to provide more granularity for other testing efforts such as stress, security, and user acceptance testing. SAP's Solution Manager also provides test strategy templates for planning a stress test.
The test plan addresses specifically the "how" of testing. Elements of a test plan include:
■ Available resources (i.e., machines, test lab, equipment, test tools, etc.)
■ Test schedule (timeline of activities)
■ Test calendar (expected sequence and execution of test cases)
■ Test criteria (entrance/exit)
■ How test results will be reported
■ Criteria for automating processes (if test tools are in place)
■ Resolution of testing defects
■ Issues, risks, and assumptions for testing
■ Specific roles and responsibilities for each tester and descriptions for each testing role
■ Objectives of the test
■ Scope of the test
■ An organizational chart for individuals participating in the testing efforts
■ Test case templates
■ Test readiness review criteria
■ How the test environment will be constructed and baselined and how data will be populated in the test environment
■ How interfaced data will be verified through the legacy systems
IBM’s Ascendant SAP implementation guide offers a sample test plan. Typically, test plans are applied and adhered to for integration testing consisting of multiple iterations, depending on the project scope.
Both test strategies and test plans need to be signed off and approved by the project’s stakeholders. It is recommended that the configuration, development, integration, and Basis team leaders and the project manager sign off the test plan and test strategy. The test plan and strategy need to be stored in a version-controlled repository that is accessible to all testing participants. Prior to the start of the testing cycles, the test manager should present the contents of the test strategy and test plan in a kickoff presentation to the individuals participating in and responsible for conducting testing tasks.
The test plan and test strategy are living documents. However, they should be amended only under a controlled process that includes approvals. The QA team helps ensure that the project members adhere to the guidelines and procedures documented within the test plan and test strategy.
TEST CRITERIA
The software testing criteria identify critical conditions and measures necessary to start, exit from, or suspend testing for a designated testing effort. Testing criteria are defined and documented within the test plan and are constructed with input from several project stakeholders. Testing criteria can be customized for each project, and they should be aligned with the company’s policies and goals. For instance, a company that follows Six Sigma (6σ) as part of its statistical process control may impose that at least 90 percent of all test cases be successfully executed and show a “pass” status for the first iteration of the integration test. In contrast, another company that does not have a strong culture of quality or total quality management (TQM) may be satisfied with a success rate of 70 percent for all test cases for the first iteration of the integration test. A corporation will need to establish testing criteria that are fit for purpose and suitable to the constraints, budget, and resources available to the project team implementing SAP. Formal testing efforts that are subject to signoffs, peer reviews, and audits are typically likely to include testing criteria.
The main types of testing criteria are entrance and exit criteria. According to the Certified Software Tester Common Body of Knowledge, the following definitions are provided for entrance and exit criteria:
Entrance Criteria/Exit Criteria—the criteria that must be met prior to moving to the next level of testing, or into production, and how to realistically enforce this, or minimally how to reduce risk to the testing organization when external pressure (from other organizations) causes you to move to the next level without meeting exit/entrance criteria.
The following are examples of test criteria for scenario testing:
Entrance Criteria
■ All developed code must be unit tested. Unit testing must be completed and signed off by the configuration team.
■ At least 85 percent of all unit test cases must have passed.
■ No defects with a priority level of 1 from unit testing remain unresolved.
■ Any outstanding defects from unit testing are documented and have workarounds.
Exit Criteria
■ All high-priority defects of level 1 or 2 must be fixed and tested.
■ At least 80 percent of all testable requirements must have been verified. All requirements with a high level of importance to the business must have been shown to work successfully.
■ At least 90 percent of all test cases must have passed successfully.
■ A trend of decreasing defects on a weekly basis.
■ Outstanding defects of low or medium priority must have documented workarounds and be signed off as acceptable risks by the subject matter experts.
In addition to entrance and exit testing criteria, other criteria include:
■ Release—as part of the decision to move something into the production environment.
■ Suspension—to terminate/halt testing.
■ Success—the level of achievement or passes needed to consider a testing effort successful.
■ Pass/fail.
■ Resumption—criteria to continue testing after it has been suspended. Testing will not recommence until the software reaches these criteria.
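Exit criteria of the kind listed above are mechanical enough to be checked automatically from test-run data. The following sketch uses the example thresholds from this section (90 percent pass rate, 80 percent requirements coverage, no open level-1/2 defects); a real project would pull these figures from its test management tool:

```python
# Thresholds taken from this section's example exit criteria.
REQUIRED_PASS_RATE = 90       # percent of test cases passed
REQUIRED_REQ_COVERAGE = 80    # percent of testable requirements verified

def exit_criteria_met(passed, executed_total, reqs_verified, reqs_total,
                      open_high_priority):
    """Evaluate the example exit criteria against summary test-run numbers."""
    pass_rate = 100 * passed / executed_total
    coverage = 100 * reqs_verified / reqs_total
    return (pass_rate >= REQUIRED_PASS_RATE
            and coverage >= REQUIRED_REQ_COVERAGE
            and open_high_priority == 0)      # no open level-1/2 defects

print(exit_criteria_met(passed=95, executed_total=100,
                        reqs_verified=42, reqs_total=50,
                        open_high_priority=0))   # → True
print(exit_criteria_met(passed=95, executed_total=100,
                        reqs_verified=42, reqs_total=50,
                        open_high_priority=2))   # → False
```

Softer criteria, such as a decreasing weekly defect trend or business sign-off on workarounds, still require human judgment and are deliberately left out of the check.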
TEST READINESS REVIEW
The test readiness review (TRR) is a formal review gate to verify that the system is ready for formal testing and approval. A TRR consists of a checklist whereby criteria factors are evaluated to assess how well prepared the system and the project members are to initiate a testing effort. As an analogy, a TRR is similar to the checklist that a pilot must complete before departing from the gate to ensure that certain conditions are met before the plane departs. In similar fashion, the project needs to ensure that certain conditions are met before starting a testing effort.
The contents and criteria for TRRs are evaluated and addressed in the presence of witnesses from the configuration team, test team, configuration management team, development team, Basis teams, the project manager, and the project management office (PMO). A TRR is recommended for major testing efforts such as integration testing, but in theory can be held prior to any testing effort (i.e., scenario testing, user acceptance testing, etc.). The criteria from a TRR should be assigned in advance to project members, who must be ready to state at the TRR meeting whether the criteria have been fulfilled or, if they have not been fulfilled, why this is the case and the consequences to the project schedule and the testing effort. A TRR should be held at least 72 hours prior to the start of a testing effort to allow project members to respond to or address criteria factors that have not been met.
Exhibit 7.1 shows a sample TRR to be evaluated prior to the start of an integration test. The TRR can be amended or customized to meet project-specific needs. Note that each condition within Exhibit 7.1 can have a different importance factor. Depending on the condition factor not met from the TRR, the project may decide that it needs to delay or postpone the start of the integration test. The TRR
EXHIBIT 7.1 Suggested TRR Checklist

Test Readiness Review Checklist:
Date:    Witnesses:    Approvals:

Each condition is marked Yes—Met, Not Met, or N/A, with space for comments.

1. Has the integration test plan been approved?
2. Have all testing procedures been documented, including procedures for reporting test results and resolving and closing defects?
3. Have a testing schedule and a testing calendar been approved and posted?
4. Have all testing facilities (i.e., rooms, test environment, desktops, test lab)
5. Has all required data been created and is it ready for integration testing? Has all common data for integration testing been defined (i.e., chart of accounts, materials, etc.)? Has all necessary test data been loaded into the test environment (i.e., data loaded from CATT scripts, client copy, interfaces, etc.)?
6. Are there any defects outstanding from a prior testing effort (i.e., scenario testing)?
7. Have all integration test cases been completed? Approved?
8. Have all necessary SAP IDs/passwords and roles been established and verified? Have testers been given access to shared drives, LANs, portals, test tools, test management repositories, etc.?
9. Is there a contact list for all testing participants? Have all testing participants been confirmed, including participants from offsites? Have the contact names and telephone numbers of support personnel been provided to the test team?
10. Have all interfaces with other systems (i.e., legacy systems) been identified for integration testing?
11. Have sample sets of representative data been identified to test interfaces?
12. Have all batch scheduled jobs for ABAP programs been identified?
13. Have all testable requirements been fully traced to the correct test cases?
14. Is the target test environment ready for testing? Are hardware components ready? Have all patches and OSS notes been applied? Does it have connectivity to the legacy systems? Does it have the latest and greatest configuration changes?
15. Have all procedures been established for transporting objects into the test environment? Have individuals responsible for signing off transports been identified?
16. Is there a change control board in place to evaluate proposed system changes or scope changes arising from test results or test defects?
17. Have all workflow roles/profiles been set up?
18. Has the test team held a kick-off presentation to review all testing procedures (including roles for testing participants)?
19. Have daily or weekly meetings been established to review and debrief test results?
20. Have the exit and suspension criteria been defined for integration testing?
21. Do testing issues have workarounds? If so, have the workarounds been documented?
22. Has the system been baselined?
23. Have arrangements been made with the appropriate support personnel (i.e., Basis) to provide extended support beyond normal hours of operation?
24. Have all test tools been installed?
should be signed off by the witnesses attending the review when the conditions have been met.1
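Because each condition in Exhibit 7.1 can carry a different importance factor, the go/no-go decision is mechanical once the checklist has been filled in. The following Python sketch illustrates one way to encode that logic; the condition texts, the numeric importance scale, and the threshold are invented for the example and are not part of the TRR itself:

```python
from dataclasses import dataclass

@dataclass
class Condition:
    """One TRR checklist condition (in the spirit of Exhibit 7.1)."""
    text: str
    importance: int  # invented weight: 1 = minor, 3 = critical
    met: bool        # N/A conditions are simply omitted from the list

def trr_decision(conditions, critical=3):
    """Return 'proceed' only when every critical condition is met,
    together with the open items that still need to be addressed."""
    open_items = [c for c in conditions if not c.met]
    blocked = any(c.importance >= critical for c in open_items)
    return ("postpone" if blocked else "proceed"), open_items

checklist = [
    Condition("Integration test plan approved?", 3, True),
    Condition("Testing schedule and calendar posted?", 2, False),
    Condition("All required test data loaded?", 3, True),
]
decision, open_items = trr_decision(checklist)  # "proceed", one open item
```

A real review would of course weigh the witnesses' judgment rather than a script; the sketch only shows that a single unmet critical condition is enough to postpone the test, while minor open items can be carried as action items.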
TEST CASE TEMPLATE
Testing requires test case templates that indicate the process that needs to be executed (commonly referred to as test steps) and the expected system behavior or system response after the test steps are executed (commonly referred to as expected test results). For SAP testing, test cases should contain, at a minimum, the following information and fields:
■ SAP roles needed to execute a process(es) (i.e., warehouse clerk).
■ Data value(s) or data variants needed to execute the test (i.e., enter data value "1000" for company code, test process with multiple "wage types" such as straight time, holiday time, overtime, vacation time, etc.).
■ Requirement met or fulfilled from executing the test case.
■ Any preconditions that are needed for executing a test case (i.e., a requisition is needed before generating a purchase order).
■ Description of the process to be tested.
■ Approval fields (i.e., for signoffs).
■ Test steps to be performed.
■ Expected test results.
■ Actual test results (i.e., pass or failure).
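To make the field list concrete, the minimum test case content above can be sketched as a small data structure. This is an illustration only, not an SAP or ASAP artifact; the class names, the sample requirement ID, and the transaction shown are invented or illustrative values:

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class TestStep:
    action: str            # test step to be performed
    expected_result: str   # expected system behavior after the step

@dataclass
class TestCase:
    """Minimum fields for an SAP test case, mirroring the list above."""
    description: str                     # process to be tested
    sap_role: str                        # role needed, e.g., warehouse clerk
    requirement_id: str                  # requirement fulfilled by this case
    data_values: dict                    # e.g., {"company_code": "1000"}
    preconditions: List[str] = field(default_factory=list)
    steps: List[TestStep] = field(default_factory=list)
    approved_by: Optional[str] = None    # approval field for signoffs
    actual_result: Optional[str] = None  # "pass" or "fail" after execution

# Example instance; the transaction code and values are illustrative.
po_case = TestCase(
    description="Generate purchase order from requisition",
    sap_role="Buyer",
    requirement_id="REQ-017",
    data_values={"company_code": "1000"},
    preconditions=["A requisition exists"],
    steps=[TestStep("Create PO via ME21N", "PO number is issued")],
)
```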
The SAP ASAP methodology offers test case templates and instructions for creating and documenting integration test scenarios, as seen in Exhibit 7.2. The unit testing template from the SAP ASAP methodology, embedded as an accelerator, consists of the template for BPPs that can be enhanced with test conditions. The template and its fields shown in Exhibit 7.2 can be customized within a test management tool or reproduced in spreadsheets or text editors. When test cases are customized within test management tools, they can offer a
Quality Assurance Standards 181
1The Defense Contract Management Agency (DCMA) offers a detailed TRR online at the Web site http://guidebook.dcma.mil/226/2245%20TRR.pdf for those readers interested in seeing another sample TRR.
EXHIBIT 7.2 Test Case Template from SAP ASAP Methodology

Integration Test Scenario No. _____

SCENARIO OWNER:
BUSINESS CASE:
STATUS:
DESCRIPTION:
RUN NO:
EXPECTED RESULTS:
RUN DATE:

SETUP DATA
DATA OBJECT | VALUE/CODE | DESCRIPTION | COMMENTS AND NOTES

TRANSACTIONAL STEPS
No. | BUSINESS PROCESS STEPS/TRANS./BPP NUMBER | INPUT DATA/SPECIAL CODE | OUTPUT INFORMATION/DATA/RESULT | TESTER/TEAM | OK/ERROR

(The No. column is prefilled with rows 0 through 7 and 99.)

Comments:

Approval: ______________________    Date: ____ / ____ / ____
series of benefits that cannot be easily obtained from disconnected spreadsheets or text editors, such as automatic reporting, metrics, traceability to requirements, audit trails, and transparency for up-to-date progress on test execution. Individual test cases should be produced for each unique process that needs to be tested.
APPLICABILITY AND LIMITATIONS OF QA STANDARDS
Implemented or proposed QA standards must align with the project's scope, budget, and deadlines. QA standards must first be applied to critical-to-quality processes or deliverables that can compromise or undermine the project's success when their quality degrades. The work and deliverables for the QA team must be governed by an approved charter, and project members must receive sufficient training to adhere to QA standards. The QA team holds the promise of offering consistency for project deliverables and work products, but it also has limitations.
QA standards can enforce a defined process and methodology and contribute to project consistency. It is possible that QA standards ensure consistency across project deliverables; however, the project may be consistently producing the wrong deliverables. For example, a QA standard may enforce that a particular mandatory field for a template be populated, but the QA team member may not know whether the contents of the mandatory field are populated correctly and can enforce only that the field is not left empty. QA representatives can enforce a process or methodology for producing deliverables (i.e., no field is left empty, all executed test steps must be accompanied by screenshot printouts, forms need to be filled out before an object is transported into production, etc.) but may not have sufficient subject matter expertise or knowledge to discern whether the deliverables are suitable for the project needs or whether the contents of a deliverable are fit for purpose.
In many projects the role of the QA team members is obscure and poorly understood, which causes many project members, in particular from the configuration and development teams, to question the need for QA representatives. Other project members view the audits and spot inspections from QA representatives as a nuisance, since QA members may not have sufficient knowledge of SAP processes or SAP
navigation. In these situations, the role and/or existence of the QA team becomes tenuous and difficult to justify.
Some projects may also implement QA standards that are difficult to enforce or standards that, if enforced, can have unintended effects. For instance, a QA standard stating that the execution of all test cases must have a success ratio of 95 percent or higher for first-time execution appears highly desirable on the surface, but it can also cause test team members to hide or "sweep under the carpet" potential defects, because testers are apprehensive about reporting defects that could compromise the 95 percent success-ratio quality metric.
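For concreteness, the metric in question is simple arithmetic, which is exactly why it is so easy to game. The sketch below (with hypothetical test-case IDs) computes a first-time success ratio and compares it against the 95 percent bar; it says nothing about whether defects were honestly reported, which is precisely the perverse incentive described above:

```python
def first_time_pass_ratio(executions):
    """executions: list of (test_case_id, passed_on_first_run) pairs."""
    if not executions:
        return 0.0
    passed = sum(1 for _, ok in executions if ok)
    return passed / len(executions)

# Hypothetical first-run results for four test cases.
runs = [("TC-01", True), ("TC-02", True), ("TC-03", False), ("TC-04", True)]
ratio = first_time_pass_ratio(runs)  # 0.75, below the 95 percent target
meets_standard = ratio >= 0.95
```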
QA managers must develop QA initiatives to support the project's scope statement and must have the approval of a project committee. In order to roll out QA standards to project members, it is necessary to identify how training will be provided for the standards, how standards will be enforced, how noncompliance with QA standards or deficiencies will be reported, and how corrective actions are applied to deliverables not conforming to QA standards.
CHAPTER 8

Assembling the QA/Test Team
Any SAP project—whether it is a brand-new implementation, is in production support, includes an upgrade, or requires adding new modules and/or bolt-ons or extending a rollout to a new division or company site—will need testing and quality assurance (QA) resources to conduct the required testing. Some of the most often asked questions related to SAP QA and testing resources are:

■ When are they needed?
■ For how long are they needed?
■ Where will they come from (i.e., internal transfers or external hires)?
■ What will they do? What are their roles and responsibilities?
■ What skill sets do they need to have?
■ How do we measure their success (i.e., how will testers' performance be evaluated)? What are the criteria for doing so?
■ Whose authority are they under (i.e., whom do they report to)?
■ How will they interact with the other project teams (i.e., Basis, Development, Functional, etc.)?
■ How much will they cost?
Answers to these questions are not always (if at all) readily available in any SAP implementation methodology. Nevertheless, the inability to answer these questions often plagues SAP projects, which can lead to an unstable system in production.
Many SAP projects assemble test and QA teams from resources taken away from their primary job responsibilities because at the time they might be underutilized in their current jobs, without giving much thought to the expertise, experience, and skill required to do the job right. The misconception still exists that anybody can do testing and that not much special skill is required to accomplish effective SAP testing.

Understanding the skills needed and identifying and screening the "right" testing and QA resources can be time consuming. Yet it is often more expensive to introduce the "wrong" resources to the project.

The test manager and project manager need to recruit testing resources based on defined roles and skill sets that are suitable for the tasks at hand (i.e., understanding SAP implementations, strategizing and planning for effective testing, and planning for different types of testing efforts with manual and automated test case execution).
The SAP project manager needs to ensure that the project teams are constantly following and adhering to QA standards that enforce the principle of preventing errors while building a product with quality. Furthermore, the project will need dedicated testing resources that enforce the principle of error detection, verifying that the system meets documented business requirements and service-level agreements (SLAs). Assembling a team that has a mixture of QA and testing skills is necessary for deploying, upgrading, implementing, and maintaining an SAP system.
QA AND TEST TEAM DIFFERENCES
Although both QA and testing contribute to system quality, the terms are distinct and their philosophies may also vary. An SAP project may need both a QA team and a testing team, or a team that combines the skill sets from both QA and testing. QA is a role that helps define and enforce the standards that will be used to configure and develop the system. In contrast, testing is the activity that ensures that the system meets the documented system requirements through detection,
which consists primarily of execution of test cases. Through system testing, errors are identified, retested, resolved, and closed. The decision to form both a QA and a test team depends largely on project scope, budget, and schedule.
Exhibit 8.1 provides examples of ways to differentiate the activities and roles of the QA and test teams.
Projects in heavily regulated environments or subject to audits may need both QA and test team resources. However, projects with limited budget but with a consistent approach across all project teams for producing deliverables and work products may need only the presence of a testing team. Alternatively, a project may form a single integrated QA/test team whereby the team manager can implement QA standards and also manage the execution of test cases. In an agile
Assembling the QA/Test Team 189
EXHIBIT 8.1 Sample Activities for a Quality Assurance and Test Team

QA Activities:
• Create standards for drafting test cases, requirements, developing diagrams, collecting information from workshops, naming conventions, etc.
• Enforce standards
• Develop functional test plan and test strategy
• Develop test case template
• Participate in requirements gathering workshop
• Inspect technical and functional specifications
• Inspect process flow diagrams
• Inspect requirements
• Raise risks
• Participate in Change Control Board (CCB)
• Approve transports
• Develop forms for peer reviews
• Create exit/entrance testing criteria
• Develop defect resolution procedures
• Develop unit, scenario, integration-testing procedures

Test Activities:
• Develop automated test cases
• Execute test cases (manual/automated)
• Support/Maintain test tools
• Customize test tools
• Support/Maintain test lab
• Inspect requirements
• Create/Maintain testing schedule
• Create requirements traceability matrix (RTM)
• Develop testing schedule
• Conduct system integration testing
• Conduct security testing
• Conduct performance testing
• Report defects
• Review documented test cases
• Help identify test data
• Participate in Change Control Board (CCB)
• Generate test reports, charts, graphs
• Produce testing metrics
• Develop performance test plan
• Document lessons learned
• Hold testing kickoff meetings
or low-budget SAP development environment, it is perfectly fine and even recommended to treat the QA and testing responsibilities interchangeably, for example, making the testing team also responsible for QA activities such as verifying adherence to defined standards, and others as defined in Exhibit 8.1.
Appendix A provides a list of activities for QA and test teammembers based on the SAP ASAP Implementation methodology.
WHEN SHOULD QA AND TEST TEAM MEMBERS BE BROUGHT ONTO THE PROJECT?
SAP testing and QA resources are needed as soon as the project preparation phase is commenced for initial SAP implementations. For instance, based on SAP's own ASAP methodology built within the Solution Manager platform, the following activities are identified for the project preparation phase:
■ Determine quality standards
■ Define testing strategies
For initial SAP implementations, the ASAP methodology indicates that a test strategy needs to be written and documented well in advance of holding workshops to identify business processes and business requirements. An experienced QA manager can document an SAP test strategy that meets the project's objectives and testing expectations. During later phases of the SAP project, such as blueprint and realization, the need arises to bring in more QA and testing resources to complete activities such as inspecting requirements, procuring and installing automated test tools, developing standards for creating business process flow diagrams and functional and technical specifications, creating automated test cases, and creating a requirements traceability matrix (RTM).
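An RTM is, at bottom, a mapping from each requirement to the test cases that cover it, with any uncovered requirement flagged. The sketch below shows the idea; the requirement and test-case IDs are invented, and real projects would typically maintain this matrix in a test management tool rather than in code:

```python
def build_rtm(requirements, test_cases):
    """Build a requirements traceability matrix.
    test_cases: dict of test-case ID -> list of requirement IDs covered.
    Returns (rtm, untraced): requirement -> covering cases, plus
    the requirements with no test coverage at all."""
    rtm = {req: [] for req in requirements}
    for tc, covered in test_cases.items():
        for req in covered:
            if req in rtm:
                rtm[req].append(tc)
    untraced = [req for req, tcs in rtm.items() if not tcs]
    return rtm, untraced

reqs = ["REQ-001", "REQ-002", "REQ-003"]
cases = {"TC-10": ["REQ-001"], "TC-11": ["REQ-001", "REQ-003"]}
rtm, untraced = build_rtm(reqs, cases)  # REQ-002 has no coverage
```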
For other projects that have had a system in production, the time to bring in QA and testing resources may be long overdue, if they do not exist already. SAP production support teams have to constantly apply SAP system changes due to releases of patches, OSS notes, and enhancements from the vendor, which require testing. Furthermore, the production support team needs to ensure that consistent methods are
practiced for applying, tracking, controlling, and verifying system changes. The project manager may form an integrated QA/testing team to verify system transports, automate and maintain test cases, execute test cases, and conduct regression/performance testing. Without the formation of an integrated QA/test team, the project may experience risks in the production environment, since the functional and development teams may not have the methodology, bandwidth, expertise, or time to conduct regression testing.
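One way to think about regression scoping for a transport: if each test case records which SAP transactions or programs it touches, the cases to rerun are those that intersect the changed objects. The sketch below is illustrative only; the coverage map and transaction codes are sample values, not a method prescribed by the text:

```python
def select_regression_cases(changed_objects, coverage):
    """coverage: test-case ID -> set of SAP objects (transactions,
    programs) the case exercises. Return the cases affected by a
    transport that changed the given objects."""
    changed = set(changed_objects)
    return sorted(tc for tc, objs in coverage.items() if objs & changed)

# Hypothetical coverage map; transaction codes shown for illustration.
coverage = {
    "TC-PO-01": {"ME21N"},          # purchase order creation
    "TC-GR-02": {"MIGO"},           # goods receipt
    "TC-INV-03": {"MIGO", "MIRO"},  # invoice verification
}
to_rerun = select_regression_cases(["MIGO"], coverage)
```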
Other SAP projects that merit the immediate formation of either separate or integrated QA and test teams are projects experiencing upgrades, adding new modules, or including a new division or corporate unit. These project teams often need to conduct gap analysis to determine new system functionality; draft new requirements or modify existing requirements for security, performance, and system functionality; identify new system touch points; or include logic to take into account business rules and policies. The QA and test teams can review and inspect new requirements when new modules or bolt-ons are added to the existing implementation, conduct system testing, and ensure that the configuration and development teams observe documented standards.
SKILL SETS FOR QA AND TEST TEAM
The role of the SAP tester is to test and verify that the system meets system requirements from all aspects of security, performance, functionality, reporting, workflow, and programming. SAP testers need proficiency in various areas, including test planning, requirements evaluation, industry regulations, automated test tools, test management tools, test execution, test reporting, and functional navigation within SAP R/3 and SAP bolt-ons. Placing nonskilled testers or fillers in the test team can produce unreliable test results. Testers who do not understand the concept of module and data integration within an SAP system need to be trained and educated on those concepts.
SAP testers need to have at a minimum the following background:

■ Formal testing skills and methodologies.
■ Knowledge of SAP processes (modules and bolt-ons).
■ Expertise in navigating SAP transactions.
■ Knowledge of ABAP programming.
■ Ability to interpret and understand test cases.
■ Knowledge of the various types of SAP data.
■ Industry-specific SAP solutions.
■ Expertise with automated test tools.
■ Ability to write Structured Query Language (SQL) statements.
■ Programming skills such as Visual Basic.
■ Ability to inspect requirements for completeness, quality, and testability and construct an RTM.
■ Subject matter expertise.
Specific testing roles and skill sets for various testing positions are outlined in Exhibit 8.2.
EXHIBIT 8.2 Test Program Roles

Test Manager

Responsibilities:
• Liaison for interdepartmental interactions: representative of the testing team
• Recruiting, staff supervision, and staff mentoring and training
• Test program budgeting and scheduling, i.e., test-staffing needs and effort estimations
• Customer support interaction
• Customer interaction, as applicable
• Test planning, including development of testing goals and strategy
• Vendor interaction
• Defines technologies to help improve testing efforts, i.e., test-tool selection and introduction or in-house developed testing tools, etc.
• Cohesive integration of test and development activities
• Acquisition of hardware and software for test environment
• Test environment and test product configuration management
• Test-process definition, training, and continual improvement
• Use of metrics to support continual test-process improvement
• Test-program oversight and progress tracking
• Coordinating pre- and post-test meetings

Skills:
• Understands company's interviewing methodology, i.e., has taken the "hiring certificate." (If none exists, insist a repeatable interviewing process is created)
• Understands all HR policies related to management
• Understands SAP testing process or methodology
• Understands test-program concerns including test strategies, test environment and data management, trouble reporting and resolution, and test design and development
• Understands manual testing techniques and automated testing best practices
• Understands application business area, application requirements
• Understands different types of technologies available to increase testing efficiency
• Skilled at developing test goals, objectives, and strategy
• Familiar with different test tools, defect-tracking tools, and other test-support COTS tools which can enhance testing efforts (Ghost, VMWare, etc.) and their use
• Good at all planning aspects, including people, facilities, and schedule

Test Lead

Responsibilities:
• Technical leadership for the test program, including test approach
• Support for customer interface, recruiting, test-tool introduction, test planning, staff supervision, and cost and progress status reporting
• Verifying the quality of the requirements, including testability; requirement definition; test design; test-script and test-data development; test automation; test-environment configuration; test-script configuration management; and test execution
• Interaction with test-tool vendor to identify best ways to leverage test tool on the project
• Staying current on latest test approaches and tools, and transferring this knowledge to test team
• Conduct test-design and test-procedure walk-throughs and inspections
• Implementing test-process improvements resulting from lessons learned and benefits surveys
• Test traceability matrix (tracing the test procedures to the test requirements)
• Test-process implementation
• Ensuring that test-product documentation is complete

Skills:
• Understands application business area and application requirements
• Familiar with test-program concerns including test-data management, trouble reporting and resolution, test design, and test development
• Expertise in a variety of technical skills including programming languages, database technologies, and computer operating systems
• Familiar with different test tools, defect-tracking tools, and other COTS tools supporting the testing life cycle, and their use

Usability Test Engineer

Responsibilities:
• Designs and develops usability testing scenarios
• Administers usability testing process
• Defines criteria for performing usability testing, analyzes results of testing sessions, presents results to development team
• Develops test-product documentation
• Defines usability requirements, and interacts with customer to refine them
• Participates in test-procedure walk-throughs

Skills:
• Proficient in designing test suites
• Understanding usability issues
• Skilled in test facilitation
• Excellent interpersonal skills
• Proficient in GUI design standards

Manual Test Engineer

Responsibilities:
• Designs and develops test procedures and cases, with associated test data
• Manually executes the test procedures
• Attends test-procedure walk-throughs
• Conducts tests and prepares reports on test progress and regression
• Follows test standards

Skills:
• Has good understanding of SAP modules and design
• Proficient in software testing
• Proficient in designing test suites
• Proficient in the business area of application under test
• Proficient in testing techniques
• Understands various testing phases
• Proficient in GUI design standards

Automated Test Engineer (Automator/Developer)

Responsibilities:
• Designs and develops test procedures and cases based upon requirements
• Designs, develops, and executes reusable and maintainable automated scripts
• Uses capture/playback tools for GUI automation and/or develops test harnesses using a development or scripting language
• Follows test-design standards
• Conducts/attends test-procedure walk-throughs
• Executes tests and prepares reports on test progress and regression
• Attends test-tool user groups and related activities to remain abreast of test-tool capabilities

Skills:
• Good understanding of SAP modules under test
• Proficient in software testing
• Proficient in designing test suites
• Proficient in working with test tools
• Programming skills
• Proficient in GUI design standards

Network Test Engineer

Responsibilities:
• Performs network, database, and middleware testing
• Researches network, database, and middleware performance monitoring tools
• Develops load and stress test designs, cases, and procedures
• Supports walk-throughs or inspections of load and stress test procedures
• Implements performance monitoring tools on ongoing basis
• Conducts load and stress test procedures

Skills:
• Network, database, and system administration skills
• Expertise in a variety of technical skills, including programming languages, database technologies, and computer operating systems
• Product evaluation and integration skills
• Familiarity with network sniffers, and available tools for load and stress testing

Test Environment Specialist

Responsibilities:
• Responsible for installing test tool and establishing test-tool environment
• Responsible for creating and controlling test environment by using environment setup scripts
• Creates and maintains test database (adds/restores/deletes, etc.)
• Maintains requirements hierarchy within test-tool environment

Skills:
• Network, database, and system administration skills
• Expertise in a variety of technical skills, including programming and scripting languages, database technologies, and computer operating systems
• Test-tool and database experience
• Product evaluation and integration skills

Security Test Engineer

Responsibilities:
• Responsible for security testing of the application
• Responsible for supporting the secure software development lifecycle

Skills:
• Understands security testing techniques
• Background in security
• Security test tool experience
• Understands security modeling techniques

Build Support, Test Library and Configuration Specialist

Responsibilities:
• Test-script change management
• Test-script version control
• Maintaining test-script reuse library
• Creating the various test builds, in some cases
• Verify that Development Build and Version Control standards are being met

Skills:
• Network, database, and system administration skills
• Expertise in a variety of technical skills including programming languages, database technologies, and computer operating systems
• Configuration-management tool expertise
• Test-tool experience
For projects that have significant investments in requirements management tools, automated test tools, version control software, defect tracking, and test management tools, testers who have demonstrated experience and/or vendor certification with these tools are recommended as members of the test team. Ideal test team members are individuals who understand the SAP "to-be" system, the company's policies and business rules, integration points, and legacy systems; can document and execute test cases; and have proficiency with automated test tools. SAP testers document, design, and peer review test cases with expected results and execute the test cases either manually or with automated test tools. Finding test team members who have functional SAP experience and technical experience with automated test tools may turn out to be a challenge. It is recommended that the test team consist of a mixture of individuals with both functional knowledge of SAP and knowledge of automated test tools in order to develop and maintain automated test cases.
QA team members need to successfully implement, communicate, and enforce standards and document the test plans. QA team members do not necessarily need SAP functional knowledge or knowledge of automated test tools, but they do need to provide practices for building a system with quality and preventing defects. QA members need to define standards for conducting workshops, gathering requirements, inspecting requirements, documenting functional and technical specifications, and creating business process procedures (BPPs) and flow process diagrams. The QA team members must ensure that the project members from the configuration, test, and development teams understand and can follow the QA standards to
avoid producing inconsistent work products. The QA team also plays a vital role in documenting the project's lessons learned after each testing phase and monitoring the entrance and exit criteria for each testing effort.
DISTRIBUTION OF SKILLS ACROSS THE TEST TEAM
An effective testing team consists of team members with a mixture of expertise, such as subject matter, technology, and testing techniques, plus a mixture of experience levels that includes junior, mid-level, and expert testers. Subject matter experts (SMEs) who understand the details of the application's functionality play an important role in the testing team.
The following list describes these concepts in more detail.
■ Subject matter expertise. A technical tester might think it is feasible to learn the subject matter in depth, but this is usually not the case when the domain is complex. Some problem domains, such as tax law, labor contracts, military procurement, and regulatory compliance from the Food and Drug Administration (FDA) and Federal Energy Regulatory Commission (FERC), may take years to fully understand. It could be argued that detailed and specific requirements should include all the complexities possible, so that the developer can properly design the system and the tester can properly plan the testing. Realistically, however, budget and time constraints often lead to requirements that are insufficiently detailed, leaving the content open to interpretation. Even detailed requirements often contain internal inconsistencies that must be identified and resolved.
For these reasons, each SME must work closely with the developer and other SMEs on the program (e.g., tax-law experts, financial experts, human resources law experts) to parse out the intricacies of the requirements. Where there are two SMEs, they must be in agreement. If two SMEs cannot agree, a third SME's input is required. A testing SME will put the final stamp of approval on the implementation after appropriate testing.
■ Technical expertise. While it is true that a thorough grasp of the applications to be tested is a valuable and desirable trait for a tester, the tester’s effectiveness will be diminished without some level of understanding of software (including test) engineering. The most effective SME testers are those who are also interested and experienced in the technology; that is, those who have taken one or more programming courses or have some related technical experience. Subject matter knowledge must be complemented with technical knowledge, including an understanding of the science of software testing.
Technical testers, however, require a deeper knowledge of the technical platforms and architectural makeup of a system in order to test successfully. An SAP installation consists of the core R/3 system, SAP bolt-ons, and the systems interfacing with SAP. A technical tester should know how to write automated scripts with automated test tools; know how to write a test harness; and understand such technical issues as compatibility, performance, and installation, in order to be best prepared to test for compliance. While it is beneficial for SMEs to possess some of this knowledge, it is acceptable, of course, for them to possess a lower level of technical expertise than do technical testers.
■ Experience level. A testing team is rarely made up exclusively of expert testers with years of expertise, nor would that necessarily be desirable. As with all efforts, there is room for apprentices who can be trained and mentored by more senior personnel. To identify potential areas for training and advancement, the test manager must review the difference between the skill requirements and an individual’s actual skills.
A junior tester could be tasked with testing the lower-risk functionality or cosmetic features such as the graphical user interface (GUI) controls (if this area is considered low risk). If a junior tester is tasked with testing higher-risk functionality, the junior tester should be paired with a more senior tester who can serve as a mentor.
Although technical and subject matter testers contribute to the testing effort in different ways, collaboration between the two types of testers should be encouraged. Just as it would take a technical tester a long time to get up to speed on all the details of the subject matter, it would take a domain or subject matter expert a long time to become proficient with technical issues such as test case automation. Cross-training should be provided to make the technical tester acquainted with the subject matter and the SME familiar with the technical issues.
NUMBER OF RESOURCES
The number of test and QA resources needed for a project largely depends on the expected number of modules, business processes, interfaces, security roles, conversions, reports, and enhancements to be included for the release, based on the project scope and the expertise of the resources.
Frequently, SAP projects struggle to identify the expected number of test cases since no formal approach exists for documenting and managing requirements, which makes it difficult to anticipate the necessary level of effort for testing.
Testing activities include test planning, test design for both manual and automated test cases, test scheduling, test execution, test reporting, and retesting as needed. A given SAP implementation may have as many as 1,000 in-scope out-of-the-box and custom SAP transactions, in addition to multiple security roles and advanced business application programming (ABAP) programs, for a system release. An SAP upgrade may include changes to the system functionality, GUI, and database, and the inclusion of SAP hot packs, which require thousands of testing man-hours. Furthermore, projects that implement SAP processes based on end-to-end processes such as order-to-cash and build-to-order may need to test the processes and their corresponding variations, which can create hundreds of test cases.
Manual testing is time consuming, prone to error, difficult to coordinate, and resource intensive. The project team may need to combine manual testing efforts with automated testing efforts to create a test team capable of providing coverage for all test cases and subsequent retesting when defects are reported.
As a result, it is recommended that projects pair up at least one testing resource per module or end-to-end process, depending on the structure of the functional SAP teams. The testing team also will need at least one resource to support testing of SAP bolt-ons (e.g., APO, SRM) and one to two resources for development (ABAP) objects. Additional testing resources will be needed to maintain and support test tools and for performing load/stress testing. The QA team will need at least two to three resources to set up and enforce standards.
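The staffing rules of thumb above can be collected into a rough back-of-the-envelope calculator. The sketch below is illustrative only; the function and parameter names are invented, and the default counts simply restate this section’s guidelines (one to two ABAP testers, two to three QA resources, and so on).

```python
# Illustrative staffing sketch based on the rules of thumb above.
# All names are invented; the defaults restate this section's guidelines.

def estimate_test_team(end_to_end_processes, bolt_ons,
                       abap_testers=2, tool_support=1,
                       performance_testers=2, qa_members=3):
    """One tester per end-to-end process, one per bolt-on (e.g., APO, SRM),
    plus ABAP, tool-support, performance, and QA resources."""
    functional = end_to_end_processes + bolt_ons + abap_testers
    return {
        "functional testers": functional,
        "tool support": tool_support,
        "performance testers": performance_testers,
        "QA team": qa_members,
        "total": functional + tool_support + performance_testers + qa_members,
    }

plan = estimate_test_team(end_to_end_processes=4, bolt_ons=2)
print(plan["total"])  # 14
```

A project with four end-to-end processes and two bolt-ons would thus start from roughly fourteen permanent test and QA resources; the real number should be adjusted for scope and resource expertise as described above.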
The following estimates are provided for test planning, test design, test execution, and test reporting. When the number of test cases is identified, the following guidelines may help to estimate the expected number of man-hours needed to complete a testing phase.
Unit
■ Two to three hours to design each unit test case (also include negative testing conditions for error handling and exception messages).
■ Two to three hours to execute each test case, which includes at least two manual runs and recording of test results.
String (Scenario)
■ One to two days to design each string test case (combines multiple SAP transaction codes; includes SAP roles).
■ Four to six hours to execute each test case, which includes at least two manual runs and recording of test results.
Integration Testing (Scenario)
■ Two to three days to design each integration test case (combines multiple SAP transaction codes with data from external systems; includes SAP roles, preconditions, and postconditions).
■ Four to six hours to execute each test case, which includes at least two manual runs and recording of test results.
■ Six to eight days to automate an integrated test case for each end-to-end process (assuming the system is stable).
■ Up to 30 minutes to execute an automated test case assuming multiple sets of data.
Performance Testing (Scenario)
■ Four to six days to automate an integrated test case (assuming the system is stable).
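As one way to apply the guidelines above, the hypothetical sketch below converts per-test-case estimates into a low/high man-hour range for a phase. All names are invented, and day-based ranges are converted at an assumed eight hours per day.

```python
# Hypothetical effort estimator using the per-test-case ranges above.
# Day-based ranges are converted at 8 hours per day (an assumption).

HOURS = {
    "unit_design": (2, 3),
    "unit_execute": (2, 3),
    "string_design": (8, 16),         # one to two days
    "string_execute": (4, 6),
    "integration_design": (16, 24),   # two to three days
    "integration_execute": (4, 6),
}

def phase_effort(counts):
    """Sum the low and high man-hour bounds for {activity: test-case count}."""
    low = sum(HOURS[activity][0] * n for activity, n in counts.items())
    high = sum(HOURS[activity][1] * n for activity, n in counts.items())
    return low, high

print(phase_effort({"unit_design": 100, "unit_execute": 100}))  # (400, 600)
```

For example, a unit testing phase with 100 test cases to design and execute would land between 400 and 600 man-hours under these assumptions.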
TEAM COMPOSITION
The QA team can consist of two to three permanent members. It is primarily concerned with defining, communicating, and enforcing standards for preventing system errors and documenting test plans. The number of dedicated QA resources is fairly static throughout the life cycle of an SAP project. A typical SAP QA team can consist of the following permanent members:
■ One QA manager, who is primarily responsible for drafting and documenting test plans, documenting risks, and setting and defining standards.
■ One to two QA members responsible for enforcing testing standards and generating reports and metrics to support company audits.
Unlike the QA team, the test team may grow exponentially in size, depending on the testing effort at hand. For instance, during the project preparation phase, the organization may not have any formal testing roles or any test team members. However, during the integration testing effort, the test team may expand from its permanent members to include members from the entire project structure. Members of other groups or teams may become “temporary” testers who are “borrowed” by the test team during string, integration, and stress testing and perform testing activities such as documenting test cases, sequencing test cases, executing test cases, and resolving defects. Some “borrowed” testing members from other groups may not support testing activities directly but may perform dependent testing activities such as data loads, establishing the test environment, and creating user test roles that are critical to the testing efforts. At some point during the life cycle of the SAP implementation, upgrade, or production support, the number of permanent test team members will be augmented with “borrowed” test team members from other groups.
The permanent test team consists of the test manager and the individuals who support and maintain the test tools, develop automated test cases, are part of an outsourcing agreement, support the test lab, and execute test cases. The permanent test team members are experienced in testing methodologies, automated test tools, and linking requirements to test cases; understand SAP functionality and processes; and can execute documented test cases independently. A heuristic to determine the necessary number of permanent test team members would be one resource per identified end-to-end process (i.e., one test team member to provide coverage for the order-to-cash team). A typical SAP test team can consist of the following permanent and dedicated testing members:
■ One test team manager.
■ One or two test team members for testing development objects that include reports, interfaces, conversions, enhancements, workflow, and forms.
■ One test team member per bolt-on (i.e., APO, CRM, BW, etc.).
■ One test team member for each end-to-end process (i.e., purchase-to-pay, order-to-cash, hire-to-retire, etc.). Typical responsibilities include test tool functional automation, test case execution, and reporting of defects.
■ Two test team members for stress/performance/load testing. These members create and maintain automated test cases for performance testing, help the project stay compliant with established service-level agreements (SLAs), and ensure that the project’s response times remain optimal. Performance testing is an ongoing activity for an SAP implementation and should be performed when system changes are implemented, which include OSS notes, system patches, enhancements, and hot packs; new divisions are added; the GUI is upgraded or reconfigured; modules or bolt-ons are added; new interfaces and conversion programs are added; and so on.
■ One or two test team members for support, maintenance, and customization of test tools and the test lab.
■ One leader to manage all automation efforts or serve as a liaison for outsourcing test automation efforts.
The assumption is made that the test team members described above are at the experienced level and not necessarily entry-level staff. The number of test team members may also be increased through the use of outside contractors, particularly for SAP projects that will automate test cases. Contractors that specialize in automated test tools may perform functions for which there is no in-house expertise on the project’s test team. The following situations are examples in which outside contractors may augment the existing testing resources:
■ Stress testing and performance tuning for the SAP CRM Sales Internet application prior to a go-live.
■ Building a library of automated test cases for a new system module.
■ Training for a new automated test tool.
■ Mentoring of testers recently trained on test tools.
■ Execution of test cases due to an increase in project scope.
In addition to the permanent testing members, a test team can consist of borrowed resources from other teams. The activities for the borrowed testing resources include identification of test data, providing SAP navigational support, refinement of documented test cases, execution of manual test cases, coordination of ABAP objects with legacy system owners, identification of SAP roles, and resolution of reported testing defects.
Test team members can come from other groups, including the change management team, SMEs, configuration team, legacy owners, development team, end users, Basis team, and the outsourcing partner. For initial testing efforts such as unit testing, the test team members may primarily consist of borrowed testers from the configuration, development, and security teams that plan, design, and execute test cases, while the permanent testing resources provide maintenance support for the underlying test tools used for unit testing. When other testing efforts such as string (scenario), integration, performance, and user acceptance are initiated, the test team consists of permanent and borrowed testing resources.
For instance, the test team can increase in size by as much as 50 percent with borrowed resources from unit testing to user acceptance testing (UAT) for the following reasons:
■ Resources from the change management team would be needed to select end-user participants for the UAT.
■ Members from the configuration and development teams would need to assist the UAT participants and resolve their defects.
■ Security team members would need to identify the testing roles for the UAT participants.
■ QA team members would have to enforce standards.
■ The Basis team would have to erect an SAP test client for UAT testing.
■ The dedicated test team members would have to maintain test tools and coordinate defect resolution.
■ The UAT participants, who are primarily system end users, would execute the UAT test cases.
Exhibit 8.3 shows multiple types of testing efforts along with the project teams supporting the testing effort. This exhibit reveals that the integration-testing phase requires assistance from more project teams than any other testing effort.
EVALUATING TESTERS1
Maintaining an effective test program requires that the implementation of its functions, such as the test strategy, the test environment, and the test team makeup, are continuously evaluated and improved as needed. Test managers are responsible for ensuring that the testing program is being implemented as planned and that specific tasks are being executed as expected. To accomplish this, the implementation of the test program has to be tracked, monitored, and evaluated, so it can be modified as needed.
At the core of the test program execution are the test engineers. The ability of a tester to properly design, document, and execute effective tests, accurately interpret the results, document any defects, and track them to closure is critical to the effectiveness of the testing effort. The test manager could plan the perfect testing process and implement the best strategy, but if the testing team members do not effectively execute the testing process (such as participating effectively in requirements inspections or design walkthroughs) or omit other strategic testing tasks as assigned (such as executing a specific test procedure), important defects could be discovered too late in the life cycle, resulting in increased costs. At worst, defects will be completely overlooked and make their way into production software.
A tester’s effectiveness can also make a big difference in the interrelationships with other groups. A tester who always finds bogus errors or reports “user” errors, meaning the application works as
1. Adapted from Elfriede Dustin, Effective Software Testing, Reading, MA: Addison-Wesley, 2002.
EXHIBIT 8.3 Teams Involved with Testing Activities Depending on the Type of Test Effort

■ Unit: Development, Configuration, QA, Security, Test
■ String: Development, Configuration, Change management, QA, Security, Test
■ Regression: Development, Configuration, Change management, QA, Security, Production, Test
■ Integration: Development, Configuration, Change management, QA, Security, Test, End users, SMEs, Basis, Data, Legacy system owners
■ UAT: Development, Configuration, Change management, QA, Security, Test, End users, SMEs
■ Performance: Development, Configuration, Test, Basis, Data, Network, Database, Integration
expected, but the tester misunderstood the requirement, or worse, a tester who often overlooks critical defects, will lose credibility with other team members and groups and can tarnish the reputation of an entire test program.
Evaluating the tester’s effectiveness is a difficult and often subjective task. Besides the typical evaluations of any employee’s attendance, attentiveness, attitude, and motivation, there are specific, testing-related measures against which a tester can be evaluated. The evaluation process starts with the recruitment efforts. Hire a tester with the skills required for the roles and responsibilities assigned to the position. Based on the defined skills you are requiring for a specific hire, you can base your expectations and task assignments on those skills. All testers need to be detail-oriented and possess analytical skills, independent of whether they are technical experts, subject matter experts, security testers, or usability testers. Once you have hired the right tester for the job, you have a good basis for evaluation. Where a testing team is “inherited,” these evaluation criteria cannot be applied up front. In this case, it is necessary to come up to speed on the various testers’ backgrounds, so the team can be tasked and evaluated based on their experience, expertise, and background, and be reassigned to other roles as needed.
A test engineer’s performance cannot be evaluated unless there are roles and responsibilities, tasks, schedules, and specific standards to follow. First and foremost, the test manager must make sure to state clearly what is expected from the test engineer and by when it is expected.
Expectations, or the “what” and “by when,” need to be clearly communicated, and tasks must be meticulously outlined and documented. The following list describes a typical set of expectations that must be communicated to testers.
■ Standards and procedures. The test engineer needs to be aware of which standards and procedures must be followed, and these processes must be communicated.
■ Schedules. Testers must be aware of the test schedule, which should indicate when test plans, test designs, test procedures, scripts, and other testing products must be delivered. In addition, the delivery schedule of software components to testing should be known by all testers.
■ Set goals and assign tasks. Document and communicate the tasks and schedule deadlines, specific to each tester. Both the test manager and the test engineer have to agree on the assigned tasks.
■ Budgets. In the case of a tester evaluating a testing tool or other form of purchased technology, the available budget must be known so that the tester can work within that range and avoid wasting time evaluating products that are too expensive.
Expectations and assignments differ, depending on the task at hand and the skill set of the tester. For example, different types of tests, test approaches, techniques, and outcomes are expected.
Once the expectations are set, the test manager can start comparing the production of the test team against the preset goals, tasks, and schedules, measuring the effectiveness and implementation. The following is a list of items to consider when evaluating testers’ effectiveness.
■ Subject matter expert versus technical expert. The expertise expected from a subject matter expert is related to the domain of the application, while a technical tester will be concerned with the technical issues of the application.
In the case of a technical tester functioning as an automator, automated test procedures should be evaluated based on defined standards, which must be followed by the test engineers. For example, did the engineer create maintainable, modular, reusable automated scripts, or do the scripts have to be modified with each new system build? In an automation effort, did the tester follow best practices? For example, did the test engineer make sure that the test database was baselined and could be restored for the automated scripts to be rerun? If the tester is developing custom test scripts or a test harness, the tester will be evaluated on the same criteria as a developer, namely readability and reliability of the code.
A tester who specializes in the use of automated tools, yet does not understand the intricacies of the application’s functionality and underlying concepts, will usually be ineffective. Automated scripts generated based only on high-level knowledge of the application will often find less important defects. It is important that the automator understands the application’s functionality in order to be an effective member of the testing team.
Another area of evaluation is technical ability and adaptability. Is the test engineer capable of picking up new tools and becoming familiar with their capabilities? Testers should be trained on tool capabilities, if they are not aware of all of them.
■ Experienced versus novice tester. As mentioned, the skill level of the tester also needs to be taken into account. For example, novice testers may overlook some errors, or not realize they exist or are defects; therefore, it is important to assign lower-risk testing areas to the novice tester.
Experienced testers may ignore some classes of defects, based on past experience (“the product has always done that”) or the presence of workarounds. While this may or may not be appropriate, testers will acclimate or assimilate this knowledge into their testing and not report defects that seem unimportant to them, yet which end users might find unacceptable.
■ Type of tests, functional versus nonfunctional, such as performance and security. Evaluate a tester’s understanding of the various testing techniques available and knowledge of which technique is the most effective to apply for the task at hand. If the tester doesn’t understand the various techniques and applies a technique ineffectively, test designs, test cases, and test procedures will be adversely affected.
The evaluation of functional tests can additionally be based on a review of the test procedures. Typically, testers are assigned to write test procedures for a specific area of functionality based on assigned requirements. As part of an effective test process, test procedure walkthroughs and inspections should be conducted that include the requirements, testing, and development teams. During the walkthrough, verify that all teams agree on the behavior of the application.
Consider the following during an evaluation of functional test procedures:
■ The completeness of the mapping of the test procedure steps to the requirement steps. Is the traceability complete?
■ The correctness of the test input, steps, and output (expected result).
■ Are major testing steps omitted in the functional flow of the test procedure?
■ Has an analytical thought process been applied to come up with effective test scenarios?
■ Have the test procedure creation standards been followed?
■ How many revisions are required to consider the test procedures effective and complete, related to misunderstanding or miscommunication?
■ Have effective testing techniques been applied to derive the appropriate set of test cases?
During a test procedure walkthrough, in addition to verifying the mapping to requirements, also evaluate the “depth” or effectiveness of the test procedure. This is somewhat related to the depth of the requirement steps. What does the test procedure test? Does it test the functionality at a high level, or does it really dig deep down into the underlying functionality of the application?
For example, a functional requirement might state, “The system should allow for adding records of type A.” A high-level test procedure will test that the record can be added through the GUI. A more effective test procedure will have additional steps in place that test the areas of the application that are affected when this record is added. Those additional steps could include a SQL statement that verifies that the record appears correctly in the database tables, plus additional steps verifying the record type.
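A minimal sketch of such a deeper verification step, using an in-memory SQLite table as a stand-in for the application’s database (the table and column names are invented, not SAP’s):

```python
# Invented example: after the GUI adds a record of type A, verify it
# directly in the database rather than trusting the GUI alone.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE records (id INTEGER PRIMARY KEY, rec_type TEXT)")
conn.execute("INSERT INTO records (rec_type) VALUES ('A')")  # stand-in for the GUI add

row = conn.execute("SELECT rec_type FROM records WHERE id = 1").fetchone()
assert row is not None and row[0] == "A"  # record exists with the expected type
print("record verified:", row[0])
```

The point is not the specific query but the habit: the deep test procedure checks the persisted state behind the screen, not only what the screen reports.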
If test procedures are at a very high level, evaluate first that the requirements are at the appropriate level and pertinent details are not missing. However, if the detail is in the requirement but is missing in the test procedure, the test engineer might need additional coaching on how to write effective test procedures.
Different criteria are required for evaluating the effectiveness of the different types of test procedures, functional versus nonfunctional. Nonfunctional tests will have to be designed and documented in a different manner than functional test procedures, for example.
■ Testing phase (i.e., alpha test, beta test, system test, acceptance test, etc.). Different tasks are expected from the tester depending on the testing phase, and they have to be considered.
During system testing, the tester will be responsible for all testing tasks, including the development and execution of test procedures, as well as tracking defects to closure, and so on. During alpha testing, by contrast, a tester might simply be tasked with recreating and documenting defects that the “alpha testing” team is reporting; alpha testing is usually done by a company’s independent testing team (independent verification and validation [IV&V] team).
During beta testing, the tester might actually be tasked with documenting beta test procedures for the beta tester to execute, in addition to recreating and documenting defects found by the “beta” testers (customers are often recruited to become beta testers).
■ The phase of the development life cycle. As mentioned throughout this book, testers should be involved from the beginning of the life cycle. For example, during the requirements phase, the tester can be evaluated based on defect prevention efforts, such as discovery of testability issues or pointing out requirement inconsistencies.
A tester’s evaluation can be subjective, and many variables have to be considered, without jumping to the first seemingly obvious conclusion. For example, when evaluating the test engineer’s testing of requirements during the requirements phase, it is also important to evaluate the requirements themselves. If the requirements are poorly written, even an average tester can find many defects. However, if the requirements are well laid out and above average, only a really good tester can find the most involved defects.
■ Follows instructions and attention to detail. It is also important to consider how well a test engineer follows instructions and pays attention to detail. Unreliability is a bad test engineer trait, and follow-through has to be monitored. If test procedures need to be updated and executed to ensure a quality product, the test manager must be confident that the test engineers will carry out this task. If tests have to be automated, the test manager should be confident that progress is being made. Weekly (or daily in the final stages of a testing phase) status meetings where engineers report on their progress are useful to track and measure progress.
■ Types of defects, defect ratio, and defect documentation. The type of defects found by the engineer must also be considered during the evaluation. However, there are many caveats to consider when using this metric to evaluate a tester’s effectiveness. Keep in mind the skill level of the tester, the type of tests being performed, and the testing phase being conducted, and consider such factors as the complexity and the maturity of the application under test. The defects found depend not only on the skill of the tester but also on the skill of the developer who wrote, debugged, and unit tested the code, and on the walkthrough/inspection teams who reviewed the requirements, design, and code, hopefully removing most of the defects before formal testing.
Additionally, evaluate in this context whether the test engineer finds errors that are complex and domain related or only cosmetic. For example, cosmetic defects such as missing window text or control placement are relatively easy to detect and become high priority during usability testing, whereas more complicated problems relating to data or cause-and-effect relationships among elements in the application are more difficult to detect, require a better understanding of the application, and become high priority during functional testing. On the other hand, cosmetic defect fixes, since they are most visible, make the customers happy.
What about the tester who is responsible for testing a specific area where most defects are discovered in production? First, the test manager will need to evaluate the area for which the tester is responsible. Is it the most complex, error-prone area? If it is a very complex area and the product was released in a hurry, the defect can be more understandable.
Further evaluate what types of defects were discovered in production. Could those defects have been discovered if a basic test procedure had been executed, as part of the existing test procedure suite, and there was plenty of time to execute the test procedure? If so, this would be a major oversight on the part of the tester responsible for this area. However, before passing judgment, there are some additional points to consider:
■ Was the test procedure supposed to be executed manually? Is the manual tester tired of executing the same test procedures over and over again, and now, the fiftieth time, he felt it should be safe not to execute the tests, because that part of the application has always worked? In this case, automated regression tests should be strongly considered.
■ Was the software delivered under time pressures, preventing a full test cycle, yet the deadline couldn’t be moved? Releases shouldn’t go out without having met the release criteria.
■ Was this test automated? Are details missing in the automated script, so that the script missed testing that one step? In this case, the automated scripts have to be reevaluated.
■ Is it a defect that was discovered using some combination of functional steps that is rarely executed? This type of defect is more understandable.
Additionally, it may also be important to go back and look at the test goals, risks of the project, and assumptions made when the test effort started. If it was decided not to conduct a specific type of test due to time constraints or low risk, then the tester should not be held responsible afterward for not finding defects this test could have uncovered. The fallout would be a cost of the assumed risk, and the risk was assumed at the beginning of the project with full knowledge of the possibility of problems.
Effectiveness can also be evaluated by examining “how” a defect is documented. Is there enough detail in the documented defect for a developer to be able to recreate the problem? Are developers always having a difficult time recreating one specific tester’s defects? Make sure there are standards in place that document exactly what information is required in a documented defect, and that the defect tracking life cycle is well communicated and understood. The testers need to follow these instructions.
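Such a standard can even be checked mechanically before a defect is routed to development. The sketch below is hypothetical; the required field names are assumptions for illustration, not from the book:

```python
# Invented example: flag documented defects that are missing the
# information a developer needs to recreate the problem.

REQUIRED_FIELDS = {"summary", "steps_to_reproduce", "expected", "actual", "build"}

def missing_fields(defect):
    """Return required fields that are absent or empty in a defect record."""
    present = {field for field, value in defect.items() if value}
    return sorted(REQUIRED_FIELDS - present)

defect = {
    "summary": "Order save fails",
    "steps_to_reproduce": "",          # empty: the developer cannot recreate it
    "expected": "Order is saved",
    "actual": "Short dump on save",
    "build": "4.7-patch12",
}
print(missing_fields(defect))  # ['steps_to_reproduce']
```

A defect tracking tool’s mandatory-field configuration usually serves the same purpose; the point is that the standard is enforced, not merely documented.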
The outcome of the evaluation could point to various issues. In the case of a test procedure lacking detail, which should be discovered during the test procedure inspection phase, it could be possible that the tester didn’t quite understand the requirement, among many other plausible causes.
Determine the cause of the issue and try to come up with a solution. Each issue needs to be evaluated as it arises, before a judgment call regarding the tester’s capability is made. After careful evaluation of the entire situation, and after additional coaching has been provided, it will be possible to get an idea of how detail-oriented, analytical, or effective this tester is. When it is determined that the tester lacks attention to detail or analytical skills, or that there are communication issues, that tester’s performance might have to be specifically monitored and reviewed, requiring additional instruction, training, and improvement.
Testers’ effectiveness must be evaluated on an ongoing basis, not only to determine training needs, but most importantly to ensure the success of the testing program.
TEST ENGINEER SELF-EVALUATION
Test engineers themselves should assume some responsibility for evaluating their own effectiveness. The following list can be used as a starting point for test engineer self-evaluation:
■ Consider the types of defects you are discovering. Are they important defects, or are they mostly “cosmetic,” low-priority defects? For example, if you consistently uncover only low-priority defects during a functional testing phase, such as hot keys not working or typos, reassess the effectiveness of your test procedures. (Note: Keep in mind that during usability testing, for example, the priority of the types of defects described here changes.)
● Are test procedures detailed enough, covering the depth, combinations, and variations of data and functional paths necessary to catch the higher-priority defects? Do tests include “invalid data” as well as “valid data”?
● Did you receive and incorporate test procedure feedback from requirements, testing, and development staff? If not, ask for test procedure reviews, inspections, and walkthroughs involving those teams.
● Are you aware of the testing techniques available, such as boundary value testing, equivalence partitioning, and orthogonal arrays, in order to be able to derive the most effective test procedures?
● Do you understand the intricacies of the application’s functionality and domain well? If not, ask for an overview or additional training session. If you’re a technical tester, ask for help from an SME.

214 TESTING SAP R/3: A MANAGER’S STEP-BY-STEP GUIDE
■ Are you discovering the major defects too late in the testing life cycle? If you are consistently discovering major defects very late in the life cycle, consider the following:
● Does your initial testing focus on the low-priority requirements? Make sure to focus your initial testing on the high-priority, highest-risk requirements.
● Does your initial testing focus on regression testing of already existing functionality that was working previously and hardly ever broke in the past? Make sure to focus your initial testing on code changes, defect fixes, and new functionality. Focus on regression testing later. Ideally, the regression testing efforts are automated, so you can focus your testing efforts on the newer areas.
■ Are any areas you are testing exhibiting suspiciously low defect counts? If so, these areas should be reevaluated:
● Determine if the test coverage is robust enough.
● Determine whether the types of tests you are executing are the most effective. Are important steps missing?
● Analyze the complexity of this area of the application under test. Evaluate whether this functionality has low complexity.
● Consider whether the functionality was implemented by your most senior developer(s) and has been unit and integration tested so well that no major defects are to be found.
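The testing techniques named above (boundary value testing, equivalence partitioning) can be illustrated with a small sketch for deriving test data. This is an assumption-laden example, not from the book: the numeric field range below is hypothetical, and real SAP field checks would be exercised through the transaction screens.

```python
# Hedged sketch: deriving test data with boundary value analysis and
# equivalence partitioning for a hypothetical numeric input field.
# The valid range (1..9999) is an illustrative assumption.

def boundary_values(low, high):
    """Boundary value analysis: values at, just inside, and just
    outside each boundary of the valid range."""
    return [low - 1, low, low + 1, high - 1, high, high + 1]

def equivalence_classes(low, high):
    """One representative value per partition: below the range
    (invalid), inside it (valid), and above it (invalid)."""
    return {
        "invalid_low": low - 1,
        "valid": (low + high) // 2,
        "invalid_high": high + 1,
    }

if __name__ == "__main__":
    print(boundary_values(1, 9999))      # [0, 1, 2, 9998, 9999, 10000]
    print(equivalence_classes(1, 9999))
```

Each derived value would become one row of test data in a test procedure, so that both “invalid data” and “valid data” paths are exercised.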
Consider the defect work flow:
■ Are you documenting defects in a timely manner? (That is, as soon as a defect is discovered, after determining it really is a defect, it should be documented. Don’t hold off documenting the defect.)
■ Make sure to document defects following the defect documentation standards. If there aren’t any documented defect documentation standards, request them from your manager (i.e., a list of all information that needs to be in a defect in order for the developer to be able to reproduce it).
■ If a new build is received, focus your initial testing on the defects in “retest” status. It is important that fixed defects are retested as soon as possible, so the developers know whether their defects are really fixed.
■ Make sure to continuously check your submitted defects for comments from development, in case they require additional information or additional testing steps. Be eager to track defects to closure.
■ Examine the comments added to your defects to determine how developers or other testers receive them. If many defects are often marked as “works as expected” or “cannot be reproduced,” this may actually be the case; however, it could also signal a few other things:
● Your understanding of the application is inadequate. In this case, more training is required. Request help from domain experts (SMEs).
● The requirements may be ambiguous and need to be corrected (this, however, is often discovered during the requirements and/or test procedure walkthroughs/inspections).
● Your defect documentation skills are not as effective as they could be. This may lead to a misunderstanding of the defect, and the developer will need additional steps to reproduce it.
● The developer is working from an incorrect interpretation of the requirement.
● The developer might lack the patience to follow the detailed documented defect steps to reproduce the defect.
■ Monitor whether defects are discovered in your test area after the application has gone to production. Evaluate each such defect. Why was it missed?
● Did you not execute a specific test procedure that contained the steps that would have caught this defect? If yes, why was it not executed? Are your regression tests automated?
● Did no test procedure exist that would have caught this defect? Why was this procedure not developed? Was this area considered low risk? Evaluate your test procedure creation strategy. Add the test procedure to your regression test suite.
● Evaluate how you could create more effective test procedures; evaluate the test design, test strategy, and test technique; discuss with your peers or manager.
● Was there not enough time available to execute an existing test procedure? Let management know before the application goes live or is shipped, not after the fact. This sort of situation should also be discussed in a posttest/preinstallation meeting, and be documented in the test report.
■ Do other testers, during the course of their work, discover defects that were your responsibility? Evaluate the reasons and make adjustments accordingly.
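The defect workflow questions above can be made concrete with a sketch of a defect tracking life cycle. The states and transitions below are illustrative assumptions consistent with the statuses the text mentions (“retest,” “works as expected,” “cannot be reproduced”); real defect tracking tools define their own workflows.

```python
# Hedged sketch of a defect tracking life cycle. States and transitions
# are illustrative, not a vendor tool's actual workflow.

ALLOWED_TRANSITIONS = {
    "new": {"open", "works as expected", "cannot be reproduced"},
    "open": {"fixed"},
    "fixed": {"retest"},               # a new build puts the fix up for retest
    "retest": {"closed", "reopened"},  # the tester verifies the fix
    "reopened": {"fixed"},
    "works as expected": set(),        # terminal: judged not a defect
    "cannot be reproduced": set(),     # terminal: needs better documentation
    "closed": set(),
}

def transition(defect, new_status):
    """Move a defect to a new status only if the workflow allows it."""
    if new_status not in ALLOWED_TRANSITIONS[defect["status"]]:
        raise ValueError(f"cannot go from {defect['status']} to {new_status}")
    defect["status"] = new_status
    return defect

d = {"id": 101, "summary": "posting period not validated", "status": "new"}
for step in ("open", "fixed", "retest", "closed"):
    transition(d, step)
print(d["status"])  # closed
```

Enforcing the allowed transitions is what makes statuses such as “retest” meaningful: a fixed defect cannot be closed until a tester has verified it.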
There are many more questions a tester can ask him- or herself related to testing effectiveness, depending on the testing phase and testing task at hand, type of expertise (technical versus domain), and experience level.
An automator might want to make sure that he or she is familiar with the automation standards and best automation practices. A performance tester might request additional training in the performance-testing tool used. Self-assessment of the tester’s capabilities, and the associated improvement steps, are an important part of an effective testing program.
CHAPTER 9
Establishing and Maintaining Testing Resources
SAP testing requires multiple resources for planning and executing test cases, reporting test results, and resolving defects.
Establishing, identifying, and scheduling all the necessary resources to complete the SAP testing effort depends on the size of the organization, the scope of the test, the type of test, and the regulations that govern how the organization performs its business processes.
Resources for an SAP test include:
■ Individuals to develop the test scenarios and resolve defects.
■ Individuals to develop automated test cases.
■ File cabinets or scanners for storing test results with handwritten signatures in regulated environments.
■ Establishment of the SAP instance needed to execute the test cases.
■ Test tools to develop, store, and execute automated test cases.
■ Software tools for managing requirements, configuration management, creating business process procedures (BPPs), developing flow process diagrams, and tracking SAP transports.
■ Machines where test cases will be executed.
■ A room (test lab) where the test machines for executing the test cases are stored.
Identified resources may come from different parts of an organization. For instance, for a global SAP implementation rollout, individuals acting as super-users and executing test cases for the user acceptance test (UAT) may come from an international company plant, whereas for string testing the individuals needed to execute the test cases may originate from the configuration and test teams. Once resources are identified for a test effort, they will need to be scheduled and confirmed prior to the test to avoid unexpected testing delays or conflicts with other teams performing tests on non-SAP applications. Furthermore, resources such as hardware components and software packages will require dedicated maintenance and potential upgrades to avoid becoming obsolete or losing their usefulness.
TEST LAB
A test lab is a physical facility dedicated to testing that serves as a war room and allows the SAP test team and other project teams to congregate for executing test cases and resolving defects. A test lab will have the necessary machines (i.e., workstations such as desktops and/or laptops) and software installations to conduct an SAP test. The machines in the test lab often have more disk space, more memory, and a higher-grade processor than the test team members’ individual machines, which makes it possible to install multiple software packages, including automated test tools, different operating systems, and drivers. Furthermore, machines dedicated to a test lab can be reimaged and formatted for different testing efforts without causing downtime on an individual’s personal machine.
An example use of a test lab would be a stress test, where emulated end users performing SAP processes in large quantities are deployed onto multiple machines that need large storage and memory. With a test lab, trial runs for a stress test can be simulated across multiple machines without affecting individuals’ personal machines. Another suitable example that illustrates the use of a test lab would be an SAP UAT with end users from remote company locations congregating to execute test cases prior to the system go-live date. A test lab allows end users for UAT to execute test cases on the test lab machines without having to request the services of the help desk team to reimage or rebuild the individual end users’ machines. Without a test lab, UAT participants would have to meet in a makeshift facility lacking the proper hardware and software components to execute UAT test cases.
A test lab will also contain other hardware that facilitates the testing effort, including a dedicated local area network (LAN), printers, fax machines, and a phone line. The test team is appointed the responsibility of maintaining the test lab and all hardware and software resources contained within it. The test team schedules the test lab for various testing efforts, conducts training for the test team in the test lab, and holds demonstrations in the test lab.
SOFTWARE RESOURCES
Third-party software tools, SAP’s internal solutions, or in-house created applications facilitate and expedite the various testing phases of an SAP implementation in the areas of test design, test execution, test reporting, and training. The software tools are resources that are in place to support and assist the testing efforts. The following tools are frequently used at SAP installations to enhance the capabilities of the test team:
■ Test tool software. Test tools can be used for test management and for test case automation. Test management tools allow planning and creation of manual and automated test cases, test case execution, defect tracking, and change impact analysis for SAP transports. Test tools are used for developing automated test cases for both functional and capacity testing. Test tools can be purchased from third-party vendors, or SAP’s native test scripting tool, the Extended Computer Aided Test Tool (eCATT), can be used. Some vendors of test tools offer solutions that integrate with eCATT to enhance and extend its capabilities.
■ BPP authoring tools. BPPs are the cornerstone of SAP training at the SAP transactional level but can be enhanced and augmented to include test conditions that consequently lead to the creation of unit test cases. BPPs can be created manually in a text editor such as Microsoft Word or with third-party tools.
■ Requirements management tools. As the name implies, requirements management tools assist in the gathering and maintenance of requirements. An example of a functionality requirement would be “the system will allow creation of fixed-cost contracts”; a performance requirement would be “the system’s response times will not exceed three seconds per screen when the maximum expected number of users are logged on to the application through a browser and the corporate LAN.” A requirement may undergo multiple changes that require management throughout the life cycle of an SAP implementation or upgrade. Requirements management tools help you manage all changes to a requirement, produce audit trails, provide a single and secured repository for managing the requirement, and provide coverage for a requirement through a requirement traceability matrix (RTM). Requirements management tools can be created in-house with a database such as Microsoft Access or purchased from third-party vendors.
■ Monitoring tools. Capacity tests such as load, volume, and stress tests require monitoring of multiple application components. SAP R/3 has the internal monitoring tool CCMS, but monitoring other components, such as the network, database, and servers, will require monitoring tools from third-party vendors.
■ Transport management tools. Transporting SAP objects must occur in the right sequence, and when objects are moved into a live production environment the transports require signatures and approvals from multiple stakeholders. In-house forms, manuals, e-mails, spreadsheets, and templates can be created to manage the transport of objects into different SAP instances, or commercial solutions can be acquired from third-party vendors for managing the transport of all objects after testing activities have been successfully completed and all associated documentation for a transport has been completed.
■ Flow process diagramming tools. Requirements, functional scenarios, and business processes are frequently diagrammed. Diagramming techniques vary in formality from Unified Modeling Language (UML) diagrams to flow process diagrams. Many projects diagram their processes with Visio, but diagrams that include integration points and have dependencies on other processes require other types of diagramming tools. All diagrams should be accompanied by narratives that provide, at a minimum: (1) a description of the diagram, (2) the underlying assumptions made in creating the diagram, (3) the expected roles of the end users for the diagrammed process, (4) the dependencies of the diagram, and (5) the frequency with which the diagrammed process is executed.
■ Version control tools. Version management consists of locking down objects, checking objects in and out, promoting objects from one environment to the next, maintaining multiple versions of the same object with audit trails, and allowing access to objects based on defined security roles. Version control must be performed on all testing deliverables, such as test cases, test execution logs, automated test cases, and manual test cases. Version control tools are available from third-party vendors, or version control functionality can be included within the project’s Internet portal (if one is in place).
■ Data loading tools. A dedicated test environment will be needed to execute scenarios and test cases. Loading and refreshing data to construct the test environment manually is time consuming and tedious. SAP offers the Test Data Migration Server software to help construct the test environment and migrate all necessary data from other environments.
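As a small illustration of the requirement traceability matrix (RTM) mentioned above under requirements management tools, the sketch below maps requirements to the test cases that cover them and flags gaps. The IDs are hypothetical, and a requirements management tool would maintain this matrix automatically.

```python
# Hedged sketch of a requirement traceability matrix (RTM).
# Requirement texts echo the examples in this chapter; the IDs and
# test cases are hypothetical.

requirements = {
    "REQ-001": "The system will allow creation of fixed-cost contracts",
    "REQ-002": "Screen response time will not exceed three seconds",
}

test_cases = [
    {"id": "TC-010", "covers": ["REQ-001"]},
    {"id": "TC-011", "covers": ["REQ-001"]},
]

def build_rtm(requirements, test_cases):
    """Map each requirement ID to the test cases that trace to it."""
    rtm = {req_id: [] for req_id in requirements}
    for tc in test_cases:
        for req_id in tc["covers"]:
            rtm[req_id].append(tc["id"])
    return rtm

rtm = build_rtm(requirements, test_cases)
uncovered = [req_id for req_id, tcs in rtm.items() if not tcs]
print(uncovered)  # REQ-002 has no covering test case: a gap to resolve
```

A requirement that appears in the uncovered list either needs a test case or, as discussed in Chapter 10, calls the requirement itself into question.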
Software tools that facilitate testing will need upgrades, maintenance, and customization during their lifetime. Many of these tools, whether created in-house or purchased from a software vendor, are not part of the standard company software image and thus receive little or no support from the local help desk team. The project teams that procured the software tools will need to assign administrators to install the tools, apply patches, back up the files created with the tools, customize the tools, and provide in-house training for the tools.
The tool administrators evaluate and assess the impact of a change to a tool. For example, for a commercial solution, the vendor may provide patches to resolve a tool’s defect or enhance the tool’s capabilities. The administrator will need to assess how a vendor patch will affect the tool’s customizations. For a vendor patch, the tool’s administrator may also have to update the training materials for any changes that the patch causes to the tool’s customization, fields, screen layouts, or functionality.
In-house created tools do not get vendor patches and need to be maintained internally. They require thorough design documentation in the event of turnover among the individuals who created and designed the in-house tool. In-house tools typically do not come with vendor-delivered training materials, vendor online or phone support, maintenance agreements, or context-sensitive online help, which increases the risk of tool failure or complaints from the tool’s end users. However, the benefits of an in-house created tool are that it can meet a project’s specific needs that no commercially available tool can meet without extreme modifications to the source code, and that it is not subject to maintenance license fees.
The project will need to determine the risks and rewards of developing an in-house tool versus procuring a commercially available solution. Depending on the project’s budget and deadlines, a needed tool can be replaced with a mixture of manual processes, e-mails, and spreadsheets that are highly inefficient in the long run but solve a short-term need for the project. In either case, the procurement or creation of a tool underscores the need to support and maintain it.
HARDWARE RESOURCES
Hardware resources consist of machines such as workstations, printers, and phones needed for the test lab and a data center to conduct functional, technical, and system testing. Most projects building a test lab will either “borrow” hardware resources from other departments and place them within the test lab or purchase the hardware resources from various vendors.
In order to install test tools, the workstation machines will need to meet or exceed the memory, operating system, and processor requirements from the test tool vendors. Depending on its scope, a project may allocate one to two workstations per functional team or a fixed number of workstations for the entire project based on the total number of present and future requirements. Projects emulating hundreds or thousands of end users for a stress test will need to ensure that the existing workstations have sufficient memory or that more workstations can be borrowed from other departments or the data center. A stress test can consume as much as five megabytes of random access memory (RAM) per emulated end user, in addition to the memory requirements of the actual stress-testing tool.
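The memory figure above can be turned into a rough sizing calculation. The five megabytes per emulated end user comes from the text; the user counts and the tool overhead below are illustrative assumptions.

```python
# Rough sizing sketch for stress-test workstations. The 5 MB per emulated
# end user figure is from the text; the tool overhead and user counts are
# illustrative assumptions.

MB_PER_VIRTUAL_USER = 5
TOOL_OVERHEAD_MB = 512  # assumed footprint of the stress-testing tool itself

def ram_needed_mb(virtual_users):
    """Approximate RAM needed on a workstation driving the given
    number of emulated end users."""
    return virtual_users * MB_PER_VIRTUAL_USER + TOOL_OVERHEAD_MB

for users in (100, 500, 1000):
    print(users, "users ->", ram_needed_mb(users), "MB")
```

At 1,000 emulated users the total exceeds 5 GB, which is why such a load is typically spread across several test lab workstations.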
Once the workstations are identified, the project will need to assign a team to maintain them. The workstations will either have the company’s standard image, supported by the local help desk and including periodic software updates, or an ad-hoc image that is nonstandard and thus receives no support from the local help desk. Typically, the local help desk does not support problems, issues, bugs, or environmental conflicts with automated test tools on workstations assigned for testing; this support falls under the responsibility of the assigned test tool administrator.
In addition to workstation machines, the test team or project manager will need to ensure that testers are equipped with printers to print screenshots documenting system defects, as well as test cases, test logs, and test reports.
ENVIRONMENT RESOURCES
Environment resources consist of the SAP client landscape where the solution is designed, tested, and subsequently transported into production. The dedicated test team owns and controls the test environment, usually designated with the letters “TST,” in the SAP client landscape. The test team has the ability to change fiscal years and posting periods, refresh the system, and make client copies as needed to simulate test conditions within the test environment. The test environment is a critical resource for the test team to design, debug, and play back automated test cases.
The test team must not allow configuration and development changes to occur in the TST environment. All development, security, and configuration changes must first occur in the development environment, usually designated with the letters “DEV,” and then be transported into TST under a controlled process and with approval from the test team.
INDIVIDUAL RESOURCES
The number of resources needed to support an SAP testing effort is largely a function of the type of test being performed. Integration testing for initial SAP implementations and regression testing for major system upgrades are likely to consume the largest number of resources. Testing resources can come from the following sources: the configuration team, end users, the test team, subject matter experts (SMEs), the production team, and an outsourcing organization.
The identification of resources for various SAP tests is highlighted in Exhibit 9.1. In this exhibit, individuals with the role of activity owner (AO) comprise the bulk of the resources for a testing effort.
EXHIBIT 9.1 Roles of Individual Resources to Support Various SAP Testing Efforts

[Matrix of tasks and roles; the individual cell values are not legible in this transcript. Rows (project teams): Configuration, Development, Test, Basis, End Users, SMEs, Production. Columns (test efforts): Unit, Scenario, Development, Integration, Regression, Performance, Technical, UAT.]

Levels of participation: AO—Activity Owner, R—Reviews, S—Signs off and approves, A—Active (i.e., creates test script), F—Facilitates, O—Observes
Unit Testing
Unit testing is the first type of testing that occurs for initial SAP implementations at the SAP transaction level, or when system changes are introduced for the first time in the production environment. Members of the SAP configuration team will plan, design, and execute unit test cases. Alternatively, members of the production support team will plan and conduct unit testing for system changes that the production team initiated and implemented in production as a result of a help desk ticket or the application of OSS (Online Service System) notes. For SAP transactions that require testing with multiple sets of data (e.g., a test for multiple wage types), members of the test team can execute testing with automated test cases after the configuration and production team members have successfully validated the unit test case conditions and no defects remain outstanding.
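The data-driven testing described above (one unit test case executed against multiple sets of data, such as multiple wage types) can be sketched as follows. The wage types and the validation rule are hypothetical stand-ins for the SAP transaction step under test.

```python
# Hedged sketch of data-driven unit testing: one test case run against
# multiple data sets. The wage types, amounts, and validation rule are
# hypothetical examples, not actual SAP payroll logic.

wage_type_data = [
    {"wage_type": "1001", "amount": 2500.00, "expect_valid": True},
    {"wage_type": "1002", "amount": -50.00,  "expect_valid": False},
    {"wage_type": "1003", "amount": 0.00,    "expect_valid": False},
]

def validate_wage_entry(entry):
    """Stand-in for the transaction step under test: here, only
    positive amounts are accepted."""
    return entry["amount"] > 0

def run_data_driven_test(data):
    """Execute the same test logic once per data set and compare the
    actual outcome against the expected outcome."""
    results = []
    for entry in data:
        actual = validate_wage_entry(entry)
        results.append((entry["wage_type"], actual == entry["expect_valid"]))
    return results

for wage_type, passed in run_data_driven_test(wage_type_data):
    print(wage_type, "PASS" if passed else "FAIL")
```

This is the pattern automated test tools such as eCATT support: the script is recorded once, and the data table supplies each variation.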
Scenario Testing
Scenario testing builds on the unit-testing effort and includes testing of two or more SAP transactions comprising a process within a single module. Members of the configuration, production, and test teams can plan, design, and execute the scenario tests to test the system functionality and security roles for the scenario. SMEs can approve and sign off test results. Members of the development team can test interfaces constructed with Batch Data Communication (BDC) sessions.
Development Testing
Development testing includes testing the following development objects: reports, interfaces, conversions, enhancements (user exits), forms, work flow, and batch scheduled jobs. Development team members’ background is typically in Advanced Business Application Programming (ABAP, SAP’s native and proprietary programming language). Members of the development team plan and execute development tests, whereas members of the configuration team can approve the results of the development test. Test team members can audit test results. Legacy system owners also assist the development team in identifying data sets for interfaces and conversions for the development test.
Integration Testing
Integration testing builds on the development and scenario test cases to form end-to-end scenarios. This level of testing requires involvement and support from most of the project members. The test team members can perform execution of integrated test cases either manually or with automated test tools. Configuration and development team members resolve defects and plan, design, and execute the test cases. SMEs can sign off and certify test results. Legacy system owners also assist the development team in identifying data sets for interfaces and conversions for the integration test. End users can observe the execution of integration test cases.
Performance Testing
Performance testing ensures that the system has optimal response times and does not experience degradation points or bottlenecks. The test team is responsible for planning, designing, and executing the automated test cases for a performance test, and for interpreting and reporting the results of the performance test. The SAP basis and other technical teams are responsible for monitoring the system during a performance test and resolving performance-based problems. The development and configuration team members play support roles in a performance test, launching manual jobs, identifying test data, and documenting test cases.
User Acceptance Testing
UAT allows end users to test the application from the point of view of how they will interact with the application in the production environment. UAT can consist of reexecuting previously executed integration test cases with end users, demonstrations of the application, or the creation of new test cases. End users plan, design, and execute UAT test cases. Configuration and development team members resolve defects that arise from UAT.
Technical Tests
Members of the archiving, database, and basis teams perform technical tests such as backup and recovery, connectivity tests, and reliability tests.
Regression Testing
Regression testing is conducted to ensure that previously working system functionality is not adversely affected by the introduction of new system changes. Production support teams can plan, design, and execute regression test cases. Test team members, internal to the project or from an outsourced agreement, can augment regression testing with the automation of test cases. End users can approve and sign off results from a regression test.
CHAPTER 10
Planning and Construction of Test Cases
The construction and execution of test cases verify and validate the captured in-scope requirements or requests for new system changes. Test cases contain test conditions, test data, and expected results to verify the design, functionality, and development of SAP Advanced Business Application Programming (ABAP) objects, security settings, segregation of duties, work flow, data archiving, business warehouse (BW) reports, and service-level agreements (SLAs) for system performance. A combination of test cases forms a test scenario, whereas a test script is the sequence of actions that executes a test case.
Test cases can be designed in spreadsheets, in text editors, or within test management tools. Project deliverables such as business process procedures (BPPs), flow process diagrams, and functional and technical specifications are excellent sources of information for designing test cases. The most effective test cases are those that can be executed independently by individuals who are not SAP functional experts, without the assistance of business analysts or SAP consultants. Templates and guidelines are provided here to build robust test cases.
Well-written test cases exhibit the following characteristics and attributes:
■ Can be reused for user acceptance testing (UAT) with minimal or no modifications.
■ Trace back to testable requirements or other existing documentation (BPPs, flow process diagrams, technical or functional specifications).
■ Include preconditions.
■ Have been peer reviewed and signed off.
■ Include the SAP role(s) to be used for verifying the test conditions.
■ Include a narrative or description of the test conditions to be verified.
■ Contain valid test data (i.e., master data).
■ Have been rehearsed prior to approval.
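A test case record carrying the attributes listed above can be sketched as follows. The field names and sample values are illustrative assumptions, not a standard SAP template.

```python
# Hedged sketch of a test case record reflecting the characteristics
# listed above (traceability, preconditions, SAP role, test data,
# expected results). Field names and sample values are illustrative.

from dataclasses import dataclass, field

@dataclass
class TestCase:
    case_id: str
    description: str        # narrative of the conditions to be verified
    requirement_ids: list   # traceability back to testable requirements
    preconditions: str
    sap_role: str           # SAP role used to verify the test conditions
    test_data: dict         # valid test data (i.e., master data)
    expected_result: str
    peer_reviewed: bool = False
    signed_off: bool = False

tc = TestCase(
    case_id="TC-042",
    description="Create a fixed-cost contract and verify posting",
    requirement_ids=["REQ-001"],
    preconditions="Vendor master data exists in client TST",
    sap_role="Z_CONTRACT_CREATOR",   # hypothetical role name
    test_data={"company_code": "1000", "contract_type": "fixed-cost"},
    expected_result="Contract saved; document number returned",
)
print(tc.case_id, tc.requirement_ids)
```

Whether the template lives in a spreadsheet, a text editor, or a test management tool, capturing these fields explicitly is what makes the test case reusable and automatable.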
Test cases that meet the aforementioned characteristics fulfill two major expectations in an SAP implementation: (1) reuse or repeatability for future testing cycles, and (2) comprehensive detail for test case automation with automated test tools. Test cases to be reused in a future testing cycle will need evaluation and maintenance for possible modification. Projects that have version control and test management tools can maintain audit trails and history logs for the actions and modifications performed on test cases. A test case written for a single SAP transaction code can be combined with test cases for other transaction codes to form test cases for end-to-end test scenarios. Test scenarios vary in complexity, since they may involve the staging and preparation of data, user exits, and execution through legacy systems.
BUILDING A TEST CASE
In order to build a test case it is necessary to know what requirement or test conditions are fulfilled with the execution of the test case. The execution of a test case will need to trace to at least one requirement; otherwise, it calls into question the validity or merits of the test case. In turn, a requirement for which a test case cannot be constructed calls into question the validity of the requirement. After the underlying requirement(s) that trace to the execution of the test case are identified, it is necessary to construct and design the test case template. A test case template can be used for different testing efforts such as functional, development, performance, security, and regression testing.
SAP implementation methodologies such as IBM's Ascendant or SAP's ASAP Roadmap methodology offer test case templates that can be recycled "as-is" or modified for testing SAP. In the absence of an SAP methodology that offers a test case template, it will be necessary for the test team to create test case templates in-house. At a minimum, the construction of a test case should include a data dictionary that clearly defines each field to be populated, the team or individual responsible for populating the fields on the test case template, and the attributes of the fields for the test case template. Exhibit 10.1 shows a customized test case template from the Ascendant and ASAP Roadmap methodologies. Exhibit 10.2 shows a partial data dictionary and instructions for populating the test case shown in Exhibit 10.1. A test case template can be constructed in spreadsheets, text editors, or directly within a test management tool. Depending on the test management tool being used, it is possible to transfer the contents of a test case designed in a spreadsheet or text editor into a test management tool with macros or other utilities.
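Where the template lives in a spreadsheet before being transferred into a test management tool, a small utility can enforce the data dictionary's mandatory fields during the transfer. The sketch below is illustrative only: the field names are modeled loosely on the header data of Exhibit 10.1, and the loader is hypothetical rather than part of any SAP or test management tool API.

```python
import csv
import io

# Hypothetical mandatory fields; an actual project would take these
# from its own data dictionary (see Exhibit 10.2).
REQUIRED_FIELDS = ["Test Script Title", "Test Case Author(s)", "Priority"]

def load_test_cases(csv_text):
    """Parse test cases exported from a spreadsheet and flag rows
    whose mandatory fields were left blank."""
    rows = list(csv.DictReader(io.StringIO(csv_text)))
    problems = []
    for i, row in enumerate(rows, start=1):
        missing = [f for f in REQUIRED_FIELDS if not (row.get(f) or "").strip()]
        if missing:
            problems.append((i, missing))
    return rows, problems

sample = """Test Script Title,Test Case Author(s),Priority
Create sales order VA01,J. Smith,High
Post goods issue,,Medium
"""
cases, issues = load_test_cases(sample)
print(len(cases), issues)  # row 2 is missing its author
```

A check of this kind supports the QA guideline, discussed later in the chapter, that no fields on a peer-reviewed test case are left blank.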
For initial SAP implementations the configuration, functional, and business analysts, and subject matter experts play a role in documenting and peer reviewing the fields on the test case templates. For existing SAP implementations, the test team, assigned testers, and business analysts perform the activities of updating and modifying the contents of the test case templates. Documented test cases can be approved and signed off prior to test execution. Testing cycles that include test readiness review (TRR) criteria can be included to ensure that all test cases have been peer reviewed and signed off prior to test execution. The quality assurance (QA) team (if one is present) also can play a role in ensuring that the documented process and methodology for designing and completing a test case is enforced and adhered to.
The authors of test cases must assume that individuals other than the original authors of the test cases will execute the test cases to support future tests such as user acceptance testing (UAT) or regression testing. Hence, it is important that all test cases have expected results, preconditions, valid test data, and conditions to be verified during testing. The constructed test cases must be written with sufficient details to allow non-SAP-knowledgeable individuals to execute them.
LEVERAGING INFORMATION
SAP testers can leverage the following sources of information or work products for constructing test cases and populating the fields shown in Exhibits 10.1 and 10.2:
■ Business process procedures (BPPs)
■ Flow process diagrams
EXHIBIT 10.1 Hybrid Test Case Created from the ASAP Roadmap and SAP Ascendant Methodologies

Section 1: Header Data
Test Script Title:
Test Script Design Status:
Test Script Method:
Test Script Execution Status:
Priority:
Reviewed by:
Test Case Author(s):
Date Reviewed:
Testing Level/Phase:
Reference Documents (i.e., BPPs, Specs, etc.):
Description of Test Conditions:
Inter Test Case Dependencies:
Test/Data Preparation:
Execution Date:
Tester(s) Executing:
Approver(s) for test results:

Section 2: Scenario Description for Test Cases
Run # | Detailed Scenario Description | Mapped Requirements (CRs) | Pre-Condition(s) | Post-Condition(s)
1 | | | |
2 | | | |
3 | | | |

Section 3: Test Script Documentation
TRANSACTIONAL STEPS—CASE: 1
No. | Role | Business Process Step, with Data | T-Code | Expected Result | SAP Output Data/Actual Result | Pass/Fail | Defect #
■ Technical and functional specifications
■ Process design documents
■ Requirements
■ Customer input (CI) templates
The primary purpose for a BPP is for training; however, the SAP ASAP Roadmap methodology offers an accelerator for the BPP template that can be enhanced to include test conditions that make it suitable for the execution of unit test cases. BPPs with test conditions used for unit testing can be strung together to create larger test cases for scenario and end-to-end testing. Larger SAP scenarios can include test conditions for SAP work flow, migrated data from legacy systems, user exits, interfaces, converted data, and security roles.

EXHIBIT 10.2 Partial Data Dictionary and Instructions for Populating a Test Case Template

Field Name | Characteristic | Content
Description of test conditions | Mandatory | A high-level explanation or summary of the process(es)/scenario(s)/business case(s) to be tested and what prompted the testing of this process (i.e., process impacted by change request, verifies a change request, etc.).
Test method | Optional | Manual or automated.
Reviewed by | Optional | Provide the name(s) of the stakeholders reviewing and approving the test case design.
Test data preparation | Optional | Any data that requires staging or preparation in order to execute the entire test script, which can consist of multiple test cases.
Inter–test case dependencies | Optional | List any test cases that must be executed or completed successfully prior to the execution of the test script.
Reference documents | Optional | List any documentation such as technical or functional specifications, BPPs, flow process diagrams, and process design documents that map to the test case (scenario/process) being tested.
Flow process diagrams based on Unified Modeling Language (UML) notation such as use cases are effective techniques for constructing test cases, since they reveal the expected interaction and tasks of end users with the system. Diagrams representing user tasks or larger end-to-end processes must include notations that describe the assumptions, constraints, high-level description, user roles, preconditions, postconditions, and business priority for the described process. UML use-case diagrams are suitable for providing such notation. Furthermore, it is necessary to maintain all relationships and diagrams, since it is possible that a diagram for a single process forms part of a larger end-to-end scenario. Flow process diagrams provide a high-level description of the process to be tested, which can help the resource assigned to creating test cases document the test case template.
Functional and technical specifications provide the logic for configuring the system and creating report, interface, conversion, enhancement, work flow, and form (RICEWF) objects. Furthermore, SAP configuration settings and changes made to the production system such as new business logic, validation rules, and radio buttons also need to be documented when maintaining the SAP production system in order to allow proper testing coverage and documentation of test conditions for future testing cycles. As an example of the value of technical specifications, assigned testing resources can leverage technical specifications to create test conditions and document test cases for validating interfaces for the following parameters: business rules for converting all data and field mappings, number of records to be extracted, how the interfaces are batch scheduled, and data reconciliation between source and target systems.
Requirements are critical because they tell "what the system will do," and thus the test team must develop test cases to ensure that all requirements are validated and verified. Test cases and requirements are tightly intertwined since a test case can demonstrate that a captured requirement is not suitable for testing or implementation, and requirements provide the basis for constructing a requirements traceability matrix (RTM). Assigned testing resources must develop and design test cases that have test steps that successfully validate the design of the implemented requirement.
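The traceability idea can be sketched as a simple mapping from requirements to the test cases that exercise them; a requirement left with no tracing test case is exactly the questionable case described above. The requirement IDs and test case names below are invented for illustration, and a real RTM would live in a requirements or test management tool.

```python
# Invented requirement and test case identifiers.
requirements = {"REQ-001": "Create standard sales order",
                "REQ-002": "Post goods issue for delivery",
                "REQ-003": "Generate billing document"}

# Each test case lists the requirement(s) its execution traces to.
test_cases = {"TC-10": ["REQ-001"],
              "TC-11": ["REQ-001", "REQ-002"]}

def build_rtm(requirements, test_cases):
    """Invert the test-case-to-requirement links into an RTM."""
    rtm = {req: [] for req in requirements}
    for tc, reqs in test_cases.items():
        for req in reqs:
            rtm.setdefault(req, []).append(tc)
    return rtm

rtm = build_rtm(requirements, test_cases)
untraced = [req for req, tcs in rtm.items() if not tcs]
print(untraced)  # requirements with no test case, flagged for review
```

A gap surfaced this way either needs a new test case or prompts a review of whether the requirement itself is valid and testable.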
In addition to leveraging the aforementioned sources of information, assigned testing resources must review "as is" documentation from the legacy systems to be replaced by SAP, company business rules, the project's scope statement, key performance parameters, and industry standards to ensure that the documented test cases contain sufficient information for validating all expected end-to-end business scenarios and business requirements. Peer reviewing of test cases is an effective technique for eliminating or modifying inadequate and ambiguous test cases prior to the start of test execution.
PEER REVIEWING TEST CASES
Test cases can be written with multiple authors or in the absence of any clearly identified quality assurance (QA) guidelines, which often results in test cases whose contents are suspect or questionable since they may not offer valid testing conditions. A peer review consists of the following activities:
■ Members from the QA team ensure that the test case follows established documentation guidelines and standards (i.e., no fields are left blank).
■ Business analysts (BAs), subject matter experts (SMEs), and testers review the test case to ensure that the test case when executed fulfills the intent of the covered testable requirement.
■ Independent or third-party testers review the documented preconditions and test steps of the test case and execute the test case to determine how clear the test case documentation is and provide feedback on any areas of the test case that are missing details or are ambiguous to execute.
■ Designated project members or the client requesting SAP services from the SAP system integrator review and approve all test cases prior to test case execution.
■ Different members from the same business area will peer-review each other's test cases (i.e., a member from the sales and distribution [S&D] module creates a test case and a different team member from the S&D team will peer review the case and provide feedback).
■ End users participating in user acceptance tests (UATs) who reexecute previously executed test cases as part of UAT can provide feedback on the construction and contents of the test cases.
■ Technical resources with knowledge of test tools that are expected to automate business processes from documented test cases can provide feedback on the test cases if test steps are missing or the testable conditions are unclear.
The test manager must allocate time for peer-reviewing test cases as part of the project schedule and the test case planning effort. Furthermore, the test manager must document procedures and guidelines for determining the number of test cases to be reviewed (sample testing), who conducts the peer reviews, what forms are used for providing peer-review feedback, which test cases must be peer-reviewed (i.e., peer-review test cases with a business criticality of high), and how disputes over the feedback for a peer review will be addressed. After peer reviews are applied to test cases, it is good practice to record lessons learned from the process and identify any deficiencies or strengths associated with the peer-review effort.
METRICS FOR PLANNING TEST CASES
Peer reviews assist in improving the quality of the contents of the test cases, but in the course of peer reviewing and constructing test cases it is necessary to collect metrics. Building and constructing test cases imply that at defined intervals progress is measured for the number of completed test cases. But counting the number of constructed test cases is only one of many metrics to be gathered in the planning of a test case. The SAP project and test manager also have a vested interest in collecting the following (or similar) metrics when planning to create a test case:
■ Number of approved test cases.
■ Number of test cases that have been created, completed, started, in-progress.
■ Number of hours spent in constructing a test case.
■ Number of requirements to which a test case traces.
■ Average number of transaction codes per test case.
■ Number of test steps per test case.
■ Which teams have the most test cases.
■ Which individuals have been assigned to construct the most test cases.
■ Which test cases with a complexity level of "high" have been completed.
■ Average time to document and construct a test case.
■ Trend of completed test cases over a five-week period.
■ Total number of hours allocated to the construction of test cases.
■ Number of hours on average spent in modifying a test script.
Gathering and collecting these metrics is straightforward in test management tools that provide reporting capabilities. In contrast, collecting these or similar metrics from disconnected spreadsheets in multiple shared drives proves to be a daunting and time-consuming task. These metrics assist in estimating the costs for future testing cycles and identifying areas of deficiencies associated with the creation of test cases.
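Most of these metrics reduce to counts and averages over test case records. The sketch below shows that reduction over a few invented records; the field names are hypothetical stand-ins for whatever a project's test management tool or spreadsheet actually exports.

```python
from statistics import mean

# Invented sample records; a test management tool would supply these fields.
test_cases = [
    {"team": "SD", "status": "completed", "hours": 6, "steps": 12, "t_codes": 3},
    {"team": "SD", "status": "in-progress", "hours": 4, "steps": 8, "t_codes": 2},
    {"team": "MM", "status": "completed", "hours": 9, "steps": 20, "t_codes": 5},
]

# Number of completed test cases.
completed = sum(1 for tc in test_cases if tc["status"] == "completed")
# Average hours spent constructing a test case.
avg_hours = mean(tc["hours"] for tc in test_cases)
# Average number of transaction codes per test case.
avg_t_codes = mean(tc["t_codes"] for tc in test_cases)
# Which teams have the most test cases.
by_team = {}
for tc in test_cases:
    by_team[tc["team"]] = by_team.get(tc["team"], 0) + 1

print(completed, round(avg_hours, 2), round(avg_t_codes, 2), by_team)
```

The point is less the arithmetic than the prerequisite it exposes: the metrics are only cheap to gather when every test case record carries the same fields in one repository.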
MAINTAINING TEST CASES
SAP projects are constantly undergoing changes and evolutions, from upgrades to adding new modules or system enhancements, which can cause the underlying system design and implementation to change from what was originally captured as system requirements during blueprint phases or previous gap analysis. All these system changes can cause previously designed test cases to become obsolete or unfit for test execution purposes. Furthermore, the level of detail for test cases can vary widely if the test cases are not peer reviewed or if the QA team did not enforce standards.
SAP projects may find that test cases need to be updated for the following reasons:
■ Test cases do not contain enough information for validating the system requirements.
■ Test cases do not have sufficient details for validating system functionality at the back-end level.
■ The test data in the test cases has become obsolete and thus does not reflect the system's baseline or configuration settings.
■ Requirements have evolved. The requirements have changed but were not properly documented, and thus the contents of the test cases do not reflect the scope of the requirements.
■ The test cases are written at a high level and thus do not lend themselves well to test case automation (i.e., a test case will say "Create a sales order and a delivery" but not state how these two objects are created in SAP or what conditions need to be verified).
■ The test case has multiple authors who did not adhere to the same standards for documenting the test case, and the test case verifies functionality for a large end-to-end process involving multiple SAP transaction codes, security authorizations, and work flow.
■ The test case was never approved or peer reviewed prior to test case execution.
■ Resource turnover: A consultant knowledgeable in a business area or SAP module who constructed and designed a test case may exit the project without a formal transition or knowledge sharing for the next person who will "inherit" ownership of the test case.
■ Ownership of a test case for maintenance and modifications becomes fuzzy when there is a test case that expands across multiple business areas and there are multiple individuals who contributed to the creation of the test case. Without established ownership for a test case it is unclear which individuals are responsible for updating and maintaining the test case.
Since test cases are subject to frequent modifications, it is imperative to store the test cases in a single secured repository with audit trails and version management. In the absence of a single repository for storing and managing test cases, a given SAP implementation increases its likelihood of having the contents of the test cases become obsolete and irrelevant, since the test cases may get stored in multiple shared drives or different folders and thus there is no central location for managing all the test cases. For these reasons, it is strongly recommended that test cases be maintained and updated in a test management tool whenever requirements change and when new system changes are proposed and accepted for the production environment.
CHAPTER 11
Capacity Testing
Recent industry data suggests that large companies implementing SAP may experience losses of millions of dollars per hour or minute for each occurrence of downtime in the SAP production system. Fortunately, well-planned and -executed capacity tests are a proven method to help minimize unexpected SAP downtime.
Capacity testing is a broad name for tests that ensure that the SAP system meets established response times and can support the expected total number of concurrent end users with optimal system response times. The main types of capacity tests include performance, volume, stress, and load testing. Each type of test plays a critical role in fine-tuning the SAP system, discovering degradation points, and eliminating system bottlenecks. Furthermore, appropriate execution of each capacity test can help the SAP system achieve the following objectives:
■ Verify that service-level agreements (SLAs) are maintained.
■ Ensure optimal software configuration settings.
■ Avoid overspending on hardware equipment.
■ Ensure that the system does not crash or fail given surges in volume from seasonality.
■ Avoid financial losses from system failure in a production environment.
■ Reduce the number of end-user complaints reported to the help desk production support team.
Capacity testing is an ongoing activity throughout the life cycle of an SAP implementation. Many SAP projects conduct a capacity test once a year, which exposes the business to the risk of failure in the production system. Given the constant changes that existing SAP implementations undergo on a daily basis from the introduction of hot packs, configuration changes, new development interfaces, and addition of modules, it is necessary to routinely monitor the system performance and conduct capacity tests to ensure that the SAP changes do not adversely affect system response time. Every SAP production change introduces a risk to the established SLAs and can cause system downtime. New SAP implementations also require the execution of capacity tests before a go-live. The SAP Roadmap methodology within the SAP platform Solution Manager has defined activities and accelerator templates for conducting volume and stress tests during the final preparation phase.
Capacity tests are subject to much planning, which must be completed well in advance of a system go-live or cutover. The most practical method for conducting capacity tests is with automated test tools, as these provide repeatability and results in the form of charts, graphs, and reports to help interpret and identify the causes of system degradation.
TEST PLANNING
Capacity tests are needed for initial SAP implementations, planned upgrades, and as part of production support. A capacity test for a major SAP upgrade or initial SAP implementation should be planned six months in advance of the actual expected date for the execution of the test. Identifying which event triggers the need to perform a capacity test is the initial activity that helps to determine the objectives, the design, and the level of support needed to conduct a capacity test. In Exhibit 11.1 a spoked wheel shows the various events within a typical SAP project that may prompt a capacity test. For instance, an existing SAP implementation that is in the midst of an SAP GUI upgrade may need to conduct a performance benchmark test to verify that the established response times for both custom and out-of-the-box SAP transactions do not deteriorate or degrade after the GUI is upgraded.
Discerning the event that prompts a capacity test is the initial step in planning a capacity test. It is necessary to note that multiple simultaneous events can trigger distinct capacity tests. For example, a project may simultaneously upgrade the SAP graphical user interface (GUI), add a new SAP module, include new advanced business application programming (ABAP) interfaces, and add new SAP end users from a different corporate division, which can cause the SAP project team to conduct performance, volume, load, soak, and stress testing on the SAP system. Once the SAP project recognizes that at least one form of a capacity test is necessary, a decision must be made to determine which specific type of capacity test must be executed.
Understanding the events that trigger the capacity test and the specific types of capacity tests that need to be performed helps determine the scope, objectives, and goals of the capacity test. Exhibit 11.2 provides examples and the objectives for various capacity tests. It shows that a stress test can help a project establish optimal hardware settings for an initial SAP implementation and also address sporadic surges and spikes in system traffic attributed to seasonality, marketing programs that drive up company sales, or random events that cause SAP system traffic to increase unpredictably. However, a project may perform a load test to verify established SLAs.
EXHIBIT 11.1 Events that Trigger a Capacity Test in an SAP Environment

■ Initial implementation
■ GUI reconfigured
■ New GUI upgrade
■ Complaints from end users
■ Changes to underlying databases
■ Hardware changes (i.e., servers)
■ New division/unit added
■ Configuration changes
■ New SAP module/bolt-on added
■ New ABAP interfaces are added
■ Upgraded LAN/WAN
Identifying the need for a specific type of capacity test prompts the SAP project team to assign an owner for the planning, design, execution, and analysis of the test results. Typically, the SAP project or its system integrator will initially assign ownership for such a test to the SAP Basis group or the test team if one exists. The owner(s) of the test develops a capacity test plan that specifies at a minimum what type of test will be conducted, risks and a mitigation strategy, a schedule with planned activities, processes to be tested, monitoring support for the test, available resources for the test, and major objectives to be accomplished. For SAP clients utilizing the Solution Manager platform, accelerators in the form of templates are offered for planning stress and volume tests.

EXHIBIT 11.2 Different Types of Capacity Tests to Meet Different Testing Objectives

Type of Test | Example | Purpose
Performance | What is the response time for time-critical transactions and processes under a low load? | Benchmarking.
Stress | What is the maximum number of users that the system can hold? | Minimize downtime during seasonality.
Load | Under a load of 50 users in addition to batch jobs running in the background, the maximum response time for any transaction will not exceed 1 minute per screen 95% of the time. | Confirm an SLA.
Volume | What is the CPU utilization and disk activity for a peak system load for all batch background processes? | Validate expected system throughput.
Network | How do established response times for critical transactions degrade when 50% of available bandwidth is consumed? | Assess any latency problems for remote locations. Assess bandwidth. Assess network traffic bottlenecks.
Soak | What happens to SAP buffers after the system has gone live for four months? | Assess memory leaks. Assess database problems that create system stalling. Throttle the system.
In parallel with the writing of the capacity test plan, the requirements for the test are identified. Capacity requirements are identified based on the expected SAP business throughput, which generates system traffic, and the total population of production end users. Initial SAP planning questionnaires such as the one in Exhibit 11.3(a) provide assistance in identifying system information and potential sources of traffic for an SAP implementation. SAP throughput is captured with sizing documents or SAP user community profiles. Exhibit 11.3(b) is a sample template for capturing SAP throughput. Once SAP throughput is captured, requirements are defined and subsequently turned into SLAs.
For existing SAP implementations, SAP transaction code ST03 provides historical usage data. For new implementations, SAP throughput can come from system traffic from the legacy systems that SAP will replace. SAP throughput is captured with the input and feedback from several project stakeholders. Stakeholders in a capacity test include database administrators (DBAs), infrastructure engineers, middleware engineers, subject matter experts (SMEs), developers, business owners, configuration experts, and so on.
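Once historical workload data has been exported, even a trivial script can identify the peak hour to model in the capacity test. The hourly dialog-step counts below are invented; ST03 reports workload statistics inside SAP itself, so treat this as a sketch of what one might do with an exported extract.

```python
# Hypothetical hourly dialog-step counts, standing in for the historical
# workload data that transaction ST03 reports for an existing system.
hourly_steps = {9: 14200, 10: 18900, 11: 17500, 12: 9800,
                13: 11200, 14: 16400, 15: 15100, 16: 8700}

# The busiest hour is the one a load or stress scenario should model.
peak_hour = max(hourly_steps, key=hourly_steps.get)
print(peak_hour, hourly_steps[peak_hour])
```

Basing the simulated load on a measured peak, rather than a guess, keeps the capacity requirements tied to actual production behavior.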
The following suggestions are offered as criteria to help project teams capture expected or existing SAP throughput:
■ Follow the 80/20 rule. Italian economist Vilfredo Pareto discovered that, in Italy, 80 percent of the country's wealth was held by 20 percent of the country's population, hence he developed the 80/20 rule. Moreover, Pareto's observations have led to the creation of Pareto charts and have played a pivotal role in statistical process control (SPC). The Pareto analysis in SPC operates in this fashion: 80 percent of problems usually stem from 20 percent of the causes, which leads to the famous phrase "the vital few and the trivial many."
Pareto's principle is also applicable to all types of capacity testing: stress, performance, volume, load, soak, and so on. When applying the 80/20 rule to capacity testing, one can focus on the 20 percent of the business processes that are likely to cause 80 percent of the entire system traffic. Pareto's principle allows test
managers to select and identify the business processes that are most likely to cause bottlenecks, performance degradation, and traffic across a software application.
■ Focus on business processes that are subject to fluctuations in demand and seasonality due to causes such as holidays and marketing programs.
■ Select 5 to 10 business processes per SAP application module with a high throughput for dialog steps.
■ Select processes that are executed frequently with a large population of end users.

EXHIBIT 11.3(A) Questionnaire for Gathering Initial Information for an SAP Implementation

Project Info
Attach a project schedule and/or GANTT chart including testing milestones.
Attach a system architecture diagram.
Attach the SAP client landscape.

Background
Application Under Test: SAP version for the TEST instance? Type of SAP installation (GUI, HTML, Portals, Netweaver, etc.)?
System Reliability: What are the availability requirements for the test instance?
Facility: Is there a test facility? If so, is it a shared facility or dedicated to the testing of SAP?
OS Description: List the operating system including type and version for the test instance.
Additional Sources of Traffic: List all SAP add-ons.
System Configuration: Are there any firewalls? Load balancers?
Tools: Describe the available automated test tools.
ABAP (RICEF) List: How many interfaces are in scope? Conversions? ABAP Reports? User exits?
In-Scope Functionality: List all in-scope modules.
Industry Solution: Are there any SAP industry-specific solutions?

EXHIBIT 11.3(B) Template to Capture SAP Throughput

Author(s):    Team Name:

Application | Business Process, Transaction Code, or Object | Throughput (number of dialog steps per process) | Hours of Operation (i.e., Shifts) | Total Expected Concurrent Users/hr | Online/Background Process | Dependencies | Target Response
SRM-EBP | Shopping cart for orders | five shopping carts created per hour per user | 9:00 A.M.–5:00 P.M. | 100 | Online | - | <=5 seconds per screen 80% of the time
SD Module | Create Sales Order | thirty consignment orders per day per user | 7:00 A.M.–6:00 P.M. | 60 | Online | - | 3 seconds per screen under a peak system load 95% of the time
SD Module | Create Sales Order | fifteen repair orders per day per user | 8:00 A.M.–5:00 P.M. | 30 | Online | - | 5 seconds per screen under a peak system load 95% of the time
CRM Sales Internet Order System | Create Sales Order containing on average 5–10 items per order | one sales order per day per user | 12:01–24:00 | 1000 | Online | - | <=5 seconds per screen, <=3 seconds to place order with submit button, <5 seconds to receive confirmation number 100% of the time
ABAP programs | MRP run | one batch job per day | 7:00 A.M.–8:30 A.M. | N/A | Background | - | Program must complete within 1 hour
CATS Web-based | Time entry (union hours), one entry per employee per day during the specified time window | one time entry per day | 1:00 P.M.–4:00 P.M. | 300 for entire time window | Online | - | <6 seconds to log on, <=5 seconds per screen, <5 seconds to submit 90% of the time
CATS Web-based | Time entry (exempt and contractor hours), one entry per user per day during the specified time window | one time entry per day | 4:00 P.M.–6:00 P.M. | 700 for entire time window | Online | - | <6 seconds to log on, <=10 seconds per screen, <10 seconds to submit 95% of the time
PS Module | CJ20N | eight projects per day per user | 8:00 A.M.–5:00 P.M. | 20 | Online | system backup 4:00 A.M.–6:00 A.M. | -
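The 80/20 selection described above can be sketched mechanically: rank business processes by their share of total system traffic and keep the smallest set that covers roughly 80 percent. The process names and dialog-step counts below are invented for illustration.

```python
# Invented traffic figures (e.g., dialog steps per day) per business process.
traffic = {"Create sales order": 52000, "Timesheet entry": 31000,
           "Create delivery": 9000, "Approve invoice": 5000,
           "Hire employee": 2000, "MRP run": 1000}

def vital_few(traffic, cutoff=0.8):
    """Return the highest-traffic processes that together account for
    at least `cutoff` of total traffic (Pareto's 'vital few')."""
    total = sum(traffic.values())
    chosen, running = [], 0
    for process, load in sorted(traffic.items(), key=lambda kv: kv[1], reverse=True):
        chosen.append(process)
        running += load
        if running / total >= cutoff:
            break
    return chosen

print(vital_few(traffic))  # the short list to script in the capacity test
```

In this invented sample, two of the six processes already carry over 80 percent of the traffic, which is exactly the concentration the 80/20 rule predicts and the reason capacity test scripts can stay focused on a handful of processes.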
In SAP projects system traffic or throughput can originate from interface programs, Intermediate Documents (IDOCs), bolt-on systems such as Customer Relationship Management (CRM), Supplier Relationship Management (SRM), and Business Warehouse (BW), and end users interacting with the core R/3 SAP transactions. The amount of traffic each SAP source generates varies based on complexity of the process, frequency with which the process is performed, and expected number of end users for the process. For instance, activities such as timesheet entry via SAP Cross Application Time Sheets (CATS) can affect a large number of end users, whereas processes involving approving invoices, hiring employees, and creating network activities for a project may affect only a handful of end users within the organization. In contrast, activities such as month-end closing, payroll runs, and material requirements planning (MRP) runs may be performed at infrequent time intervals with few end users but must be thoroughly tested because they are complex, high in database input/output processing, resource-intensive on the system, and must be completed within a certain time frame to avoid impacting dependent activities. The project team will need to develop requirements for critical, resource-intensive processes affecting multiple individuals that can be validated and measured through testing.
Developing well-written requirements will require support and assistance from multiple stakeholders from the basis, configuration, and development teams. Avoid requirements that can be interpreted differently by different individuals, such as "make the system as fast as possible." Requirements like these are ambiguous and cannot be tested.
The following are examples of a poorly written requirement and a well-written requirement:

Example 1: Poorly Written Requirement
"The system shall handle on average 3,500 sales orders per day with an average of 20 lines per sales order."
Problems with this requirement:
■ Unknown number of expected users creating sales orders.
■ The desired end-user response times for this process are not stated. Potentially, it may take an end user an hour to create a sales order, which is an unacceptable response time for the business.
■ This requirement also fails to state the medium under which the sales orders are created (i.e., dial-up connections, mobile engines, DSL connections, etc.).
■ Finally, this requirement does not address hours of operations or acceptable success factors for validating the requirement. The client may have a business need to have on average response times of three seconds per screen (with a maximum number of concurrent end users logged on) to create a sales order between peak working hours 9 A.M.–12 P.M., but this business need is not manifested at all as written in this requirement. As written, this requirement can be interpreted differently, and cannot be measured, tested, or validated.
Example 2: Well-Written Requirement with Precise Target Goals Specified for Validating the Requirement

“For the SAP CRM sales Internet system under a 500-hourly user load, sales orders static pages will display in under x seconds, dynamic pages in under y seconds, and search/order pages in under z seconds, 95 percent of the time during working hours (7 A.M.–7 P.M.) with no errors when accessed via corporate local area network (LAN) via Browser X and with each sales order containing between 20 and 30 line items.”
Reasons why this is a well-written requirement:
■ Gives specific usage of the application under test for the intended production end users that can be measured and tested.
■ Provides the expected number of concurrent end users during the peak hours of operation for a given workday.
■ Provides the expected size of a CRM SAP sales order in the target production environment (20 to 30 line items).
■ States the type of speed for connecting to the Internet and the specific Internet browser.
■ Has a success factor for validating the requirement (95 percent).
■ Gives specific threshold values that need to be validated for various Internet pages for the SAP CRM application.
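The precision of the second requirement is what makes it testable: each clause maps to a threshold a test harness can check mechanically. As a minimal sketch (the requirement itself leaves x, y, and z unspecified, so every threshold value below is an illustrative placeholder, not a figure from the book):

```python
# Hypothetical, machine-checkable form of a capacity requirement.
# All names and threshold values here are illustrative placeholders.
SLA = {
    "concurrent_users": 500,       # hourly user load from the requirement
    "max_static_page_s": 3.0,      # stands in for "x seconds"
    "max_dynamic_page_s": 5.0,     # stands in for "y seconds"
    "max_search_page_s": 8.0,      # stands in for "z seconds"
    "required_pass_rate": 0.95,    # "95 percent of the time"
}

def sla_met(samples, threshold_s, required_pass_rate):
    """Return True if enough response-time samples fall under the threshold."""
    if not samples:
        return False
    passing = sum(1 for s in samples if s <= threshold_s)
    return passing / len(samples) >= required_pass_rate
```

For example, static-page samples of 2.1, 2.8, and 3.4 seconds against a 3.0-second threshold yield a 2/3 pass rate, which fails a 95 percent requirement; the ambiguity of Example 1 admits no such check.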
Well-written requirements are consequently converted into SLAs. SLAs establish goals and objectives that the system must conform to after the system is deployed into the production environment. SLAs need to be tested and confirmed in conjunction with the other sources of system traffic prior to system deployment. They also need to be monitored constantly in the live production environment. SAP’s tools and services such as EarlyWatch and CCMS help monitor the SAP system. Non-SAP tools for monitoring system performance include SiteScope, Luminate, and BMC Patrol.
SLAs are initially verified prior to go-live through the execution of test cases in a production-sized environment. Therefore, it is critical that the design of the test cases represents the behavior of the production end users. The test environment for a capacity test should closely resemble the intended target production system in size, database connections, database size, hardware, configuration settings, and interfaces to external systems. Attempts to extrapolate capacity-testing results from an SAP instance that is not production-sized or representative of a production environment are usually fruitless, since performance is not linear across environments.
TEST DESIGN AND CONSTRUCTION
The construction of SAP test cases for a capacity test can be achieved with an automated test tool such as Mercury Interactive’s LoadRunner or manually with spreadsheets or text editors. Given time constraints and project deadlines, automated test tools are recommended as the de facto solution for designing, executing, and analyzing the results of a capacity test.
The rationale for utilizing an automated testing approach over a strictly manual approach is provided in Exhibit 11.4. However, the underlying reason for conducting an automated capacity test is that
EXHIBIT 11.4 Benefits of Automated Testing over Manual Testing for a Capacity Test

Automated
Concept: Executing processes with test scripts that require little or no human intervention.
Benefits:
■ Repeatability, consistency
■ Allows data creation for scripts that require data seeding
■ Can be used for nontesting activities such as data loading
■ Collects automatic statistics, reports, metrics
■ Allows hundreds of end users to be deployed across a single machine
■ Allows testing on “remote” sites
■ Can be “scheduled” to run without human intervention
■ Allows control over execution of test scripts
■ Allows for flexible data access method; allocation allows an “apples to apples” test
■ Includes SAP monitors for troubleshooting
Drawbacks:
■ Expense
■ Training
■ Test tool may not recognize objects/applications within your infrastructure
■ Requires installation, validation against SAP system for keystrokes

Manual
Concept: Executing test scripts with keystrokes from human beings and collecting results with watches.
Benefits:
■ Little or no training involved
■ No installation of test tools
■ Makes test tool expert irrelevant for training
Drawbacks:
■ Not easily repeatable
■ Requires a lot of coordination
■ Requires many PCs to deploy to end users
■ Difficult to synchronize end users
■ Reports and graphs not produced
■ Expensive (man-hours)
■ Takes resources away from “primary” job functions
most SAP projects find it an intractable challenge to coordinate hundreds or even thousands of end users in a single room to execute manual test cases over an extended period of time to optimize SAP system performance. While an automated approach is recommended, it is important to note that manual intervention during a capacity test will be needed to launch manual batch jobs, interfaces, ABAP reports, and other non-SAP applications that send traffic into the system through remote function calls (RFCs), intermediate documents (IDOCs), and so on. Hence, a mixture of automated and manual testing will provide greater flexibility for projects conducting a capacity test.
Independent of the approach (automated or manual) for executing a capacity test, test cases representing the system throughput need to be documented. Test cases can be recycled from previous testing efforts but must include test steps that are representative of how the process will be executed in a production environment. Test cases from previous efforts may need modification to include larger sets of data for processes that need to be executed multiple times or processes that consume data. For instance, a process that creates SAP shipments may exhaust the existing number of sales orders in the system, or the creation of outbound deliveries may deplete the inventory from a warehouse. Since a capacity test for a stress or load test may need to be repeated or iterated multiple times during a given time frame, it is important to have documented test cases that identify all the necessary data sets to execute the business process or contain the necessary test steps to build self-feeding test cases that replenish consumed data after every iteration.
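A self-feeding test case can be pictured as a data pool that hands out a unique record per iteration and rebuilds itself once the records are consumed. The sketch below is illustrative only: the class name and the sales-order numbering are invented, and the replenish callable stands in for whatever real transaction recreates the data (for example, a script that creates fresh sales orders before shipments consume them).

```python
from collections import deque

class SelfFeedingPool:
    """Hands out unique test data per iteration; refills when exhausted."""

    def __init__(self, records, replenish):
        self._pool = deque(records)
        self._replenish = replenish   # callable that creates fresh records

    def next_record(self):
        if not self._pool:
            # Earlier iterations consumed the data: rebuild before continuing.
            self._pool.extend(self._replenish())
        return self._pool.popleft()

# Illustrative use: sales-order numbers consumed by a shipment-creation script.
batch = [0]
def make_orders():
    batch[0] += 1
    return [f"SO-{batch[0]}-{i}" for i in range(3)]

pool = SelfFeedingPool(["SO-0-1", "SO-0-2"], make_orders)
```

Each call to `next_record` consumes one order; once the seed data is gone, the pool replenishes itself, so the capacity test can iterate without manual data setup between runs.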
Documented test cases that represent the production throughput need to represent stable system functionality. The underlying assumption for a capacity test is that the system’s functionality has been previously tested, verified, and is stable and frozen. For example, the objective for a capacity test may be to determine system response times under a peak system load of concurrent users for creating SAP deliveries as opposed to determining whether the SAP deliveries process functions correctly.
Documented test cases are the cornerstone for designing automated test scripts for projects that elect automated test tools for executing a capacity test. With an automated test tool, a process can be recorded, played back with multiple sets of data, and executed with
N number of emulated end users. For instance, an automated test tool may record SAP’s time entry process for CATS and allow playback of time entry with hundreds or thousands of emulated end users.
Automated test tools allow for repeatability when processes need to be executed multiple times in order to troubleshoot a system or identify bottlenecks or system degradation points, which cannot be easily accomplished with a large number of end users pressing keystrokes at the same time. Exhibit 11.5 shows how an automated test tool sends traffic to different machines with emulated end users to create traffic in an SAP environment and how results are generated. The primary benefits associated with automated load testing tools include:
■ Emulate, from a single point, any number of users and their impact on the system.
■ Perform large-scale tests with minimal hardware resources.
■ Adjust volume levels through dial-up/dial-down virtual users.
■ Create repeatable test scripts.
■ Find and correct scalability problems early in the development process.
EXHIBIT 11.5 Automated Test Tool Simulating System Load in an SAP Environment (diagram: the test tool directs automated test cases to machines emulating system traffic, which send traffic to the receiving SAP system; results logs are produced)
■ Correlate poor response times with virtual user levels.
■ View or print test run history for each test or group of tests executed.
■ Measure system performance under differing conditions.
■ Replace human beings with emulated end users.
■ Allow system monitoring during a test execution.
■ Allow interpreting of system response data with automatically generated results logs.
When an automated test tool is selected for a capacity test, test cases are recorded to reflect end-user behavior and rate of throughput. In order to increase the accuracy of automated test cases, it is recommended that test cases include “think-times” so that test cases play back at a rate that is equivalent to the speed at which an end user executes a transaction. For instance, a recorded process for creation of work orders can take an end user on average five minutes to complete a single work order, whereas the playback of the automated test case for work order creation may take on average 40 seconds to complete for a single work order. In order to compensate for the playback speed of the automated test case, artificial or random think-times may need to be introduced in the automated test case to closely resemble the expected system throughput. Also, for purposes of diagnosing and troubleshooting an automated test case, it is recommended that the automated test cases contain verification points that inspect expected system messages, windows, or attributes. For example, when a user logs on to the system, a welcome message is expected. This message serves as a visual cue that can be verified with an automated test case. Other visual cues applicable for verification include status bar messages, screen titles, information screens, and field attributes. Designing automated test cases that embed logic to recognize visual cues helps to troubleshoot the system when performance degrades, as visual cues facilitate the process of pinpointing at what point the system experiences a choke point.
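The think-time adjustment can be reasoned about numerically: if a real user needs five minutes per work order and raw playback takes 40 seconds, roughly 260 seconds of pause time must be distributed across the script’s steps. A sketch of that arithmetic follows; the jitter bound and step count are illustrative assumptions, and real tools express think time in their own scripting layer rather than like this.

```python
import random

def think_time_budget(human_duration_s, playback_duration_s):
    """Total pause time to add so playback matches real end-user pacing."""
    return max(0.0, human_duration_s - playback_duration_s)

def randomized_pauses(budget_s, steps, jitter=0.2):
    """Split the budget across script steps with +/-20% random jitter."""
    base = budget_s / steps
    pauses = [base * random.uniform(1 - jitter, 1 + jitter) for _ in range(steps)]
    # Rescale so the jittered pauses still sum to the full budget.
    scale = budget_s / sum(pauses)
    return [p * scale for p in pauses]

# The chapter's example: a 5-minute manual process, 40-second raw playback.
budget = think_time_budget(300, 40)   # 260 seconds to distribute
```

Randomizing rather than fixing the pauses keeps emulated users from hitting the server in lockstep, which would itself distort the measured throughput.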
The creators of automated test cases for an SAP capacity test also need to implement logic in the design of the test cases for the following conditions:

■ Optional screens that pop up based on different conditions such as data entered or security privileges.
■ Data captured from an SAP screen with an automated test tool to be used for a subsequent recorded process may need to be converted from one format to another. For instance, a captured digit from an SAP screen may look like a number, but it may be captured as a character as opposed to an integer, which prevents the automated test case from playing back correctly.
■ A script that logs on to an SAP application such as Employee Self-Service (ESS) or E-hiring may need to randomly substitute the hard-coded value for a machine name with a parameter value to properly test the load balancing application.
■ In Web-based applications for SAP’s CATS, Enterprise Buyer Program (EBP), ESS, and CRM Sales Internet system, it is necessary that emulated end users maintain their Web session ID throughout the navigation of different Web pages before completing a given scenario and exiting the application.
■ Ensuring that emulated end users log off completely from the application, particularly for Web-based applications.
■ Self-feeding test cases that consume data for each executed test case iteration and require unique data records for subsequent iterations.
■ Scrolling functionality when populating data into an SAP table or matrix where records need to be inserted in the next available open field independent of its location on the screen.
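Two of these conditions, converting a captured screen value and dismissing an optional pop-up, can be sketched as plain script logic. The window titles and the captured-value format below are hypothetical; real tools such as LoadRunner express this kind of logic in their own scripting layer.

```python
def to_order_quantity(captured):
    """A digit captured from an SAP screen often arrives as text,
    sometimes with thousands separators; convert before reusing it
    in a subsequent recorded process."""
    return int(captured.strip().replace(",", ""))

def handle_optional_screens(open_windows, dismiss):
    """Dismiss pop-ups that appear only under certain data or security
    conditions, so an unexpected screen does not derail playback."""
    for title in ("Information", "Delivery Date Proposal"):  # hypothetical titles
        if title in open_windows:
            dismiss(title)

qty = to_order_quantity(" 1,250 ")   # "1,250" captured as characters -> 1250
```

Without the conversion step, the captured value plays back as a string and the test case fails on the next screen; without the optional-screen check, a pop-up seen by only some emulated users stalls those sessions mid-test.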
Automating test cases for a capacity test is a dedicated development effort that requires support and assistance from members of the SAP configuration, basis, and development teams. Additionally, support is expected from SMEs and the project manager to develop automated test cases. The project members play different roles in identifying system throughput, documenting test cases, developing test cases, approving automated test cases, monitoring trial runs, setting up the environment, and identifying data values.

After the roles and activities for the project members have been identified to design the automated test cases, it will be necessary to develop and debug automated test cases and conduct trial runs. Trial runs representing 20 to 50 percent of all expected system traffic should be executed prior to the actual start date of the formal capacity test for all automated test cases, which can help with the following:
■ Flushing out errors with the automated test cases such as data conflicts.
■ Verifying that the automated test tool has been properly installed on all machines.
■ Verifying log-on IDs and passwords.
■ Ensuring that the system monitors have been properly installed.
Trial runs should be communicated in advance to the rest of the project team to avoid impacting other users logging on to the test environment. Results from trial runs may help to diagnose initial response problems encountered with the application. For instance, initial trial runs may show deficiencies with the load balancing application, the need to create a group log-on, and the need to allocate more disk space to table spaces.
After the trial runs are successfully executed, the project team schedules the formal execution of the capacity test. Scheduling and planning the execution of the actual test includes confirming hardware resources and facilities (i.e., a war room), monitoring resources for the test, reserving the test environment, communicating the test dates to the rest of the project team members, and reviewing the risk plan and risk mitigation strategies in the event that the system crashes or fails during the middle of the test. Many projects schedule capacity tests during off hours, when the tests will not impact the test environment during working hours, and back up the system prior to the start of the testing cycles as a means of reducing the risks associated with a capacity test. Furthermore, to avoid skewing test results, some companies deactivate or disable SAP log-on user IDs that are not part of the capacity test to prevent users from logging on to the system during testing hours.
TEST EXECUTION
Execution of a capacity test is an iterative process that may require multiple iterations until the system reaches an architecture that is optimal and consistent with the established SLAs. Testing with automated test tools allows test cases to be played back indefinitely as long as the necessary data sets are identified and the test environment has
a frozen design. In contrast, manual capacity testing with end users offers limited repeatability in the event that test cases have to be executed multiple times. Given the impracticalities of manually executing a capacity test, the remainder of this chapter focuses on testing SAP with automated test tools.
For initial SAP implementations and major system upgrades, the project may need to allocate four weeks, which may include weekend time, for executing the test cases, troubleshooting the system, and gathering and interpreting test results. This four-week time estimate is different from the time allocated to the design of the automated test cases.
Prior to the execution of the test, the project manager and/or basis team may decide to back up the system. On the first day of execution, all monitoring groups need to have their monitoring equipment turned on and ready to capture data or system pulses at least 45 minutes prior to the actual start time of the capacity test to capture initial data from the system. (Exhibit 11.6 displays some sample parameters, and the tools to monitor them, for an SAP implementation with an Oracle database.) Initial system pulses may indicate that the system is experiencing delays, degradation points, or very little activity (underutilization) prior to the start of the test. If initial system pulses display high system usage or high utilization, it is recommended that the monitoring members investigate the causes behind the high system utilization and eliminate them. For example, high database activity or server utilization for the SAP box could signal to the Basis administrator that individuals not associated with the test are logged on to SAP or that batch jobs are running in background mode that need to be terminated.
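The pre-test pulse check amounts to comparing baseline utilization samples against a quiet-system threshold before any emulated users are launched. A minimal sketch follows; the 20 percent threshold is an illustrative assumption, not a figure from the book, and the samples would come from whatever monitor the team uses.

```python
def system_quiet(cpu_samples_pct, threshold_pct=20.0):
    """True if baseline CPU samples suggest no stray load before the test.

    cpu_samples_pct: utilization readings (0-100) taken during the warm-up
    window, e.g. once a minute for the 45 minutes before the start time.
    The 20% default is an assumed quiet-system ceiling.
    """
    return bool(cpu_samples_pct) and max(cpu_samples_pct) < threshold_pct
```

A failed check would prompt the Basis team to hunt for stray logged-on users or background batch jobs before the capacity test is allowed to start.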
Monitoring resources for an SAP capacity test include members from the network (infrastructure) group, DBAs, Basis, server administrators, and development teams. However, participants for the entire capacity test may include members from the monitoring groups in addition to the architects for the solution under test, and development and configuration team members to help troubleshoot and diagnose system problems. Since it is possible that participants for a capacity test are offsite or away from the war room, it is recommended that a dial-in number be provided to all participants to allow them to provide feedback when issues and problems arise during the
capacity test. The person in charge of the capacity test needs to ensure that all participants are present or available at least one hour prior to the start of the test. Typically, the person in charge of a capacity test is the person who launches or executes the automated test cases from the war room and generates traffic with emulated end users on different machines. If critical participants are absent for unexpected reasons, the capacity test may be suspended.
When all participants have been confirmed, the monitoring tools have been engaged, and the initial system pulses have been successfully gauged, the designer of the automated test cases may start to kick off the automated test cases based on order of execution. Automated test cases should be kicked off in gradual fashion to avoid crashing the servers during the log-on process. It is important to note that the capacity test will consist of more than just automated test cases; it may include manual processes such as interfaces, month-end activities, payroll runs, and batch jobs, which will require other users to manually log on to the SAP system as automated test cases are being launched. A well-planned capacity test should offer a calendar or schedule with expected times for starting or launching processes in SAP and the resource responsible for doing so.
On the first day of testing, the scenarios for the capacity test should increase the system load in increments of 5 to 10 percent of the total load. For instance, if the total expected user load is 1,000 concurrent users excluding manual processes and batch jobs, the person
EXHIBIT 11.6 Sample Parameters to Monitor for an SAP Capacity Test

Area        Parameter                                   Measuring Tool
Database    CPU used by this session                    STATSPACK
Database    Consistent gets                             STATSPACK
Database    DB block gets                               STATSPACK
Database    Physical reads                              STATSPACK
Database    Top wait events                             STATSPACK
Database    Physical reads (lob)                        STATSPACK
Database    Physical reads (direct)                     STATSPACK
Database    Total sorts (memory sorts + disk sorts)     STATSPACK
Database    Disk sorts/memory sorts                     STATSPACK
responsible for kicking off the automated test cases may launch the first 100 emulated end users for the first 10 minutes of the test and take a snapshot or pulse of the system response times. Then, assuming the system response time is adequate, kick off a batch of another 100 emulated users for a total of 200 emulated users, and so on every 10 minutes until all 1,000 concurrent emulated end users are logged on. The actual testing calendar would show the expected ramp-up of emulated end users in addition to the execution of manual processes. Using the previous example of 1,000 concurrent users, the bar-coding team members might launch a job sending IDOCs into SAP after the first 300 concurrent emulated users are logged on, whereas the supply chain team members may launch an MRP run after the first 400 emulated end users are logged on, in addition to other background jobs.
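The ramp-up described above is easy to express as a schedule: batches of 100 users every 10 minutes until 1,000 are logged on, with manual jobs pinned to particular user counts. The figures below follow the chapter's example; the trigger mapping is an illustrative sketch of the testing calendar, not a real tool configuration.

```python
def ramp_up_schedule(total_users, batch_size, interval_min):
    """Return (minute, cumulative_users) checkpoints for a gradual ramp-up."""
    schedule = []
    users, minute = 0, 0
    while users < total_users:
        users = min(users + batch_size, total_users)
        schedule.append((minute, users))
        minute += interval_min
    return schedule

# 1,000 concurrent users, 100 at a time, every 10 minutes.
steps = ramp_up_schedule(1000, 100, 10)

# Manual processes keyed to the ramp, per the chapter's example:
# the bar-coding IDOC job at 300 users, the MRP run at 400 users.
triggers = {300: "bar-coding IDOC job", 400: "MRP run"}
```

At each checkpoint the team takes a pulse of response times before releasing the next batch, so a degradation point can be tied to a specific user level.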
The first day of execution may demonstrate that the production-sized or actual production environment cannot adequately handle a moderate or even light load of emulated users when the capacity test includes batch jobs, SAP CATT (Computer Aided Test Tool) scripts, data from external components, and IDOCs, which may cause the test to be suspended. Under these circumstances, where the system locks up, more emulated end users cannot successfully log on, or the performance degrades beyond an acceptable threshold level, it is critical to annotate any reported errors or capture errors with screen shot printouts. Screen shots are particularly useful for troubleshooting Web-based systems, which may produce cryptic runtime or Java errors when the performance degrades.
System errors during the test runs should be reported immediately when first observed to all monitoring parties on the conference call or in the war room to help pinpoint the cause of the problem. Additionally, the person who first observes the error can send screen shot printouts of all observed system problems to all monitoring groups. Often, the initial responses from monitoring resources to a system error are that the automated test tool has a problem or that the automated test cases were recorded incorrectly; but when confronted with evidence of the system problem through screen shot printouts, the monitoring resources are more likely to recognize the merits of the system error and the system limitations. Another good technique to overcome skepticism of a system error from a monitoring group during a capacity test is to have the monitoring group log on to the
application, which may produce an error in and of itself or may show that the system response times are unacceptably slow when the user attempts to execute a process.
Depending on the type of problem observed during the execution of the capacity test, the monitoring groups, configuration experts, or development team members can make adjustments to the system during the same day of the test to allow the testing to continue. If the problem observed in the system has a simple and easy solution to implement, the system may need to be refreshed or reset to allow the test to resume. However, if the problem proves to be more severe, such as inefficient Structured Query Language (SQL) statements in an ABAP program that require code to be rewritten, or if the system needs to be reconfigured or more hardware needs to be procured to improve the system response times, it may be necessary to suspend the capacity test indefinitely. The complexity of the observed system problem and its expected resolution will dictate how long it takes to resume the capacity test.
For problems that allow the test to proceed even though the established SLAs are violated, the test engineer may execute capacity tests on consecutive days to allow the SAP experts to tweak the system on a gradual basis until SLAs are verified. Even the best-planned capacity tests are an inexact science, requiring tweaking and modification of various parameters until a permanent solution is identified. To compensate for flaws in the design of the capacity test or the identification of system throughput, it is recommended for initial SAP implementations and major system upgrades that a load 20 to 25 percent greater than the expected peak system load be emulated in the system, which helps increase the confidence level that the system can successfully handle the expected production traffic or even seasonality.
At the conclusion of each day when the capacity tests are conducted, all results for the day should be gathered and stored in a central location for safekeeping, analysis, and interpretation at a future date. The test engineer, in addition to other project stakeholders, may create success or exit criteria for concluding a capacity test at the end of each day or for the entire testing effort. For example, a system that is expected to handle a peak load of 1,000 concurrent end users and is proven to successfully handle a load of 1,300 end users while meeting all SLAs after a period of three hours may provide substantial evidence for successfully concluding the load and volume test. However,
for a stress test, a system that is expected to handle a concurrent peak load of 10,000 end users may find that after 105,000 concurrent users logged on for 20 minutes, the system has memory leaks, experiences unacceptable response times, locks up, or crashes, which indicates the stress test has met its objective of determining the system’s breaking point. The objective of the capacity test, along with its defined success criteria, can provide a stopping point for the execution of the test.
TEST ANALYSIS
Results for a capacity test can be captured and reported at various times during a test. A capacity test may have severe problems after the first day of testing, which would require that all graphs, charts, and reports from the test be evaluated and reviewed before the test continues any further. In contrast, capacity tests that reveal minimal problems or system deficiencies may have reports, graphs, and charts that are accumulated over a period of several days and are subject to interpretation after the execution phase has completed.
Automated test tools provide evidence that a problem exists with the application, but not necessarily what the exact problems are, which causes the test engineer, along with the other monitoring groups, to compare and review several graphs to detect inflection points and system spikes that cause system degradation or failure to conform to SLAs. Test tools produce a phalanx of graphs and charts that in isolation offer little or no help in diagnosing system problems and may need to be superimposed with other graphs from the automated test tool or from the other monitoring groups. Conversely, the automated tests and graphs from other monitoring groups may provide evidence that the system consistently produces results that are within the specified thresholds for the project.
Graphs and charts that reveal a performance problem may prompt the project manager and the person responsible for resolving the problem to discuss the impact of the problem, its severity, and the course of action. The problem could reveal that certain processes miss SLAs by a slight percentage but that resolving the problem would delay the expected SAP go-live date at a large cost, which may prove impractical for the project to attempt to resolve. Exhibit 11.7 shows
some of the common resolutions for SAP performance-based problems. However, problems reported through the graphs and charts may demonstrate that the system is not ready to go live and that if it does go live despite the reported performance problems, the company risks experiencing significant downtime in production and large financial losses. In situations where the performance problem presents a high risk of occurrence with deleterious consequences, the Change Control Board (CCB), the project sponsors, steering committee, and project manager may need to assess the magnitude of the problem and its impact on the project’s schedule.
The lead tester or the owner of the capacity test is expected to produce a final write-up or project postmortem for the results from a capacity test. The various monitoring entities are expected to provide individual write-ups for the areas that they monitored and the areas that are expected to have performance problems. The documentation for the final analysis write-up includes lessons learned, areas of system deficiencies, verified SLAs, issues and problems encountered during the test, and resolutions for all reported problems. Problems reported for the test should be categorized, ranked, and stored in a defect management tool.
The analysis and final phase for a capacity test concludes when results are reported to and accepted by senior management and all defects have a resolution or expected resolution date with an assigned
EXHIBIT 11.7 Common Resolutions for Performance Problems in an SAP Implementation

Create a group log-on.
Create a data archiving strategy to avoid increasing the size of the database to an unmanageable size.
Rewrite and tune SQL (ABAP) code.
Increase the number of application servers and dialog work processes.
Redo database log files.
Respecify buffers (SAP, database).
Provide more memory to support more work processes, and more processors to support those additional processes.
Reconfigure SAP modules.
Redistribute load balancing.
Allocate more disk space for tables.
Create an index on a table to avoid full-table scans.
Reinstall ITS servers to decrease utilization.
owner. Although the initial SAP capacity tests can prove that response times are acceptable based on SLAs and senior management has accepted the results prior to system go-live, it is important to note that SAP production-based systems are subject to constant changes that may adversely affect system performance or compromise SLAs, which would call for more capacity tests or system monitoring. For entities that are concerned about SAP system response times after the system has been deployed, SAP offers monitoring and optimization services to ensure that SLAs are maintained.
CHAPTER 12

Test Execution
Test execution is the phase held after the test strategies, test planning, test script documentation, and test procedures have been designed and developed. A testable software build is now ready to be deployed into the quality assurance (QA) environment. The main objective of test execution is to demonstrate that the actual test results for each test step match the expected test results. Alternatively, test execution may identify that the system configuration does not meet requirements, which causes the test team to log defects as discussed in Chapter 13.
Test execution is conducted after the test cases have been designed, documented, peer-reviewed, and approved, and the entrance criteria and/or test readiness review (TRR) have been evaluated. Test execution is conducted with either manual or automated test cases. For initial SAP implementations, test execution is conducted during the realization and final preparation phases. For existing SAP implementations, test execution is routinely held for regression testing that addresses system upgrades, addition of new SAP modules, SAP patches, OSS (Online Service System) notes, SAP hot packs, and production transports.
Constructing a test schedule prior to the start of test execution is highly recommended to identify the sequence of dependencies in which the test cases need to be executed. A test schedule also helps to estimate the total time needed to execute all test cases associated with a testing effort and to assign the test cases to the available resources for test execution. A test schedule needs to be developed early in the software development life cycle and needs to be incorporated into the overall SAP implementation schedule. To develop a robust and sound testing schedule, appropriate methods and techniques must be used to estimate the execution duration of each test case while considering the level of expertise of each test team member assigned to execute the test cases.
After the test schedule is developed and the test cases are executed, testing metrics are collected to monitor the testing progress and to help evaluate the exit and entrance criteria for the next testing cycle.
Test execution also includes the storing of test results and test logs. Depending on the industry or contractual obligation under which the SAP system is to be tested, it may be necessary to archive test results and test logs in order to support testing audits and industry regulations.
TEST SCHEDULE
Because SAP's modules are integrated, SAP processes must be executed in a given sequence to account for dependencies. For example, one may have to schedule the execution of year-end testing at the end of the testing cycle, because executing year-end runs at the beginning or middle of the testing cycle may close all profit-and-loss accounts to the balance sheet, which could prevent the testing team from executing any other test cases.
Testing schedules should meet the following objectives:
■ Clearly identify all the resources needed to manually execute the test cases.
■ Identify the expected or planned duration times to execute each test case.
■ Facilitate the creation of a testing calendar.
■ Identify all testing dependencies and the appropriate sequence for executing each test case.
■ List all available manual and automated test cases.
■ Identify the status of each test case (i.e., completed, in progress, etc.).
■ Determine whether the testing execution efforts are behind, ahead of, or on schedule.
■ Schedule the test runs and the resources required for them.
Early planning allows for software development and testing schedules and budgets to be estimated, approved, and incorporated into the overall software development plan. Estimates must be continually monitored and compared to actuals, so they can be revised and expectations can be managed as required.
Often, however, estimates have to be adjusted and test strategies have to be modified to meet a nonmovable deadline.
Before a test schedule is developed, it is important to define the testing tasks. If a testing schedule is dictated or imposed on the testing team, it should be made clear what tasks can actually be completed within the dictated time frame so that expectations can be managed accordingly.
In contrast, when a testing schedule is not dictated, the test manager is given more time to evaluate the following considerations:
■ Before designing an appropriate testing schedule, the test program tasks have to be clearly defined and scheduled.
■ Once the tasks are understood, it is important to know how to estimate schedules and duration times accordingly, and some estimation techniques described here need to be implemented.
■ On a more granular level, test case execution needs to be estimated in addition to all the test program tasks listed (see the section entitled Test Calendar for more detail).
To create a testing schedule, it is critical to establish the testing program tasks that need to be monitored and evaluated to prevent testing tasks from falling behind schedule. The next section helps define and explain the test program tasks, each of which must be clearly delineated and laid out in detail to allow for effective test schedule development and a comprehensive test program.
DEFINE AND UNDERSTAND THE TEST PROGRAM TASKS¹
Exhibit 12.1, Test Program Work Breakdown Structure, reflects examples of the different types of test execution tasks that can be performed on an SAP project to supplement the testing activities outlined and described in the SAP ASAP Roadmap methodology. The tasks in this exhibit are suitable for companies that have SAP in a production
Test Execution 269
¹Derived from Elfriede Dustin, 1999, Automated Software Testing, Reading, MA: Addison-Wesley.
EXHIBIT 12.1 Test Execution Work Breakdown Structure

No. Work Breakdown Structure (WBS) Element

8 ..........
9 Test Execution
9.1 Environment Setup. Develop environment setup scripts. Ensure all necessary data for testing has been loaded into the test environment and identified within the test scripts.
9.2 Test Bed Environment. Construct, debug, and troubleshoot automated test scripts.
9.3 Test Phase Execution. Execute the various test phases. Execution of automated test cases and manual test cases.
9.4 Test Reporting. Prepare test reports.
9.5 Issue Resolution. Resolve daily issues regarding automated test tool problems. If necessary, contact the test tool vendor for support and maintenance of test tools.
9.6 Test Repository Maintenance. Perform test tool database backup/repair.
10 Test Execution and Management Support
10.1 Process Reviews. Perform a test process review to ensure and enforce that standards and defined test processes are adhered to. Generate deficiency reports for noncompliance with established and approved standards.
10.2 Test Bed Configuration Management (CM). Maintain the entire test bed/repository (i.e., test data, test procedures and scripts, software problem reports, etc.) within a configuration management tool. Define the test script CM process and ensure that test personnel work closely with the CM group to assure test process reusability.
10.3 Test Program Status Reporting. Identify mechanisms for tracking test program progress. Develop periodic reports on test program progress. Reports should reflect estimates to complete tasks in progress (Earned Value Measurements).
10.4 Defect Management. Define the defect tracking workflow. Perform defect tracking and reporting. Attend defect review meetings.
10.5 Metrics Collection & Analysis. Collect and review all metrics to determine whether changes in process are required and whether the product is ready to be shipped.
11 ...........................
environment and want to improve their current testing approach or methodology, or for companies undergoing initial SAP implementations. The structure represents a work breakdown structure (WBS) that can be used in conjunction with timekeeping activities to develop a historical record of the test execution effort expended to perform the various activities on projects. The information in a detailed WBS can be used to calculate the Earned Value² metric that will allow for progress tracking.
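As a rough illustration of the Earned Value idea, the sketch below computes earned hours and a schedule performance index from WBS elements with timekeeping data. The WBS names, hour figures, and field names are invented for the example, not taken from the book:

```python
# Sketch: computing Earned Value from a WBS with timekeeping data.
# All WBS elements, budgets, and hours here are hypothetical examples.

def earned_value(wbs):
    """Sum budgeted hours weighted by percent complete (earned hours)."""
    return sum(e["budget_hours"] * e["pct_complete"] for e in wbs)

def schedule_performance_index(wbs):
    """SPI = earned value / planned value to date; below 1.0 means behind schedule."""
    ev = earned_value(wbs)
    pv = sum(e["planned_hours"] for e in wbs)  # hours planned to be done by now
    return ev / pv

wbs = [
    {"id": "9.1", "name": "Environment Setup",    "budget_hours": 80,  "planned_hours": 80,  "pct_complete": 1.00},
    {"id": "9.3", "name": "Test Phase Execution", "budget_hours": 400, "planned_hours": 200, "pct_complete": 0.40},
    {"id": "9.4", "name": "Test Reporting",       "budget_hours": 40,  "planned_hours": 10,  "pct_complete": 0.25},
]

ev = earned_value(wbs)                  # 80 + 160 + 10 = 250 earned hours
spi = schedule_performance_index(wbs)   # 250 / 290, slightly behind schedule
```

An SPI tracked daily against the WBS gives the test manager an early, quantitative warning that execution is slipping.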
Test teams may wish to further break down elements in Exhibit 12.1 to delineate all test program activities, such as project startup, early project support, decision to automate testing, test tool selection, test strategy development, test procedure/script development, and other such activities according to the various types of tests. For the purpose of this chapter, Exhibit 12.1 provides the WBS sample for test execution only. WBS 9.3 can be broken down further to delineate WBS activities for the various types of tests, which may include functional testing, server performance testing, archiving testing, development testing for report, interface, conversion, enhancement, workflow, and form (RICEWF) objects, scenario testing, integration testing, regression testing, boundary testing (positive and negative), security testing, memory leak testing, and response-time performance testing.
Once the test execution tasks are understood, the test manager can start estimating how long it will take to execute the tests. The test manager can also decide which tasks have to be removed, given the limited budgets and schedules, and evaluate the risks associated with dropping any of those tasks. When facing a tight schedule, the test manager might also want to evaluate whether putting more people on the project along the schedule's critical path, which is known as project crashing, will speed things up, while keeping in mind that adding new people to a project that is in jeopardy rarely ends in success and can in fact increase the project's costs.
²Earned Value is a management technique that relates the WBS elements to schedules and to technical cost and schedule requirements.
FACTORS AFFECTING THE TEST EXECUTION SCHEDULE
The following list provides the factors that should be evaluated before estimating duration times for the testing schedule:
■ Organization. Culture or test maturity of the organization. An organization that strives to meet CMM (Capability Maturity Model) level 5 requirements will have different expectations of the detailed tasks included in a testing schedule than, for example, a startup company that is not following any specific processes (the latter is not recommended). A process-centric organization generally understands that schedules cannot be dictated, but have to allow for appropriate estimation and measurements as to what can feasibly be implemented in a specified time frame.
■ Scope of test requirements. Tests that need to be performed can include functional requirement testing, server performance testing, user interface testing, program module performance testing, program module complexity analysis, program code coverage testing, system load performance testing, boundary testing, security testing, memory leak testing, response-time performance testing, and usability testing.
■ Test engineer skill level. This refers to the technical skill level of the individuals performing the test. As defined in Chapter 8, often a mix of testing skills is required to make up an efficient SAP testing team. If a team consists entirely of junior-level inexperienced people, the test execution schedule can drag out much longer than anticipated, because the learning curve can be much too steep. It is recommended to require a mix of testing skills and SAP experience.
■ Test tool proficiency. The use of automated testing introduces a new level of complexity that a project's test team may not have previously experienced. Test script programming is a required expertise that may be new to the test team, and possibly few on the test team have had coding experience. Even if the test team has had experience with one kind of automated test tool, the tool required on the new project may be different.
■ Business knowledge. Test team personnel familiarity with the application business area is important. If there is a lack of business knowledge, again the schedule might have to be moved out longer than anticipated.
■ Scope of test program. An effective automated test program amounts to a development effort complete with strategy and goal planning, test requirement definition, analysis, design, and coding.
■ Start of test effort. Test activity and test planning should be initiated early in the project. This means that test engineers need to be involved in analysis and design review activities. These reviews can be used as effective testing techniques, which are essential in preventing analysis/design errors. This involvement allows the test team to more completely understand requirements and design, architect the most appropriate test environment, and generate a more thorough test design. Early involvement not only supports effective test design, which is a critically important activity when utilizing an automated test tool, but also provides early detection of errors and prevents migration of errors from requirement specification to design, and from design into code.
■ Number of incremental software builds planned. Many industry software professionals have a perception that the use of automated test tools makes the software test effort less significant in terms of man-hours, or less complex in terms of planning and execution. Savings accrued from the use of automated test tools will take time to generate. In fact, at the first use of a particular automated test tool by a test team, very little savings may be realized. Savings are realized in subsequent builds of a software application.
■ Process definition. Test team utilization of defined (documented) processes improves efficiency in test engineering operations. Lack of defined processes has the opposite effect and translates to a longer learning curve for junior test engineers.
■ Mission-critical applications. The scope and breadth of testing on software applications, where a software failure poses a risk to human life or where software failure is mission critical to an organization, are greater than for software applications that do not pose a high risk. For example, the performance of software controlling a heart monitor in a hospital setting is more critical than that of game software that entertains people.
■ Test development/execution schedule. Short time frames to perform test development and execution may interject inefficiency in test engineering operations and require that additional test engineering effort be applied.
TEST EXECUTION CALENDAR
In addition to a test schedule, a test execution calendar can be constructed. A test execution calendar provides an alternative view or format to the testing schedule. The testing calendar is not as robust as a formal testing schedule, since it does not contain critical paths, a baseline, schedule variances, or clearly identified testing dependencies, but it does present at a high and simple level all test cases scheduled for test execution. The test calendar may also be easier and less time consuming to maintain than a full-blown test schedule built with scheduling software.
A test execution calendar can be constructed as an input to building a formal test execution schedule, or it can become the by-product of the test schedule. Exhibit 12.2 shows a test execution calendar for an SAP implementation that has to execute multiple test cases over a five-day time window as part of the regression testing cycle.
To construct a testing calendar, one needs to determine all test cases that must be executed as part of a testing cycle and the expected execution duration for each test case. In Chapter 8, techniques and ballpark estimates were provided to design, construct, create, and execute SAP test cases for different testing efforts such as unit, scenario, integration, and so on.
In addition to the historical information technique for estimating duration times based on prior testing efforts, another effective technique is the expert method, whereby individuals knowledgeable of the test case to be executed provide an estimate, based on previous work experience, of how long it would take to execute a given process. For instance, the SAP FICO (Finance and Controlling) expert may estimate, based on familiarity with the project's requirements, that the test case for month-end testing would take eight hours to execute and verify manually.
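Turning such expert duration estimates into a day-by-day calendar can be sketched as a simple packing pass. The test case names and hour figures below are hypothetical, and an eight-hour test day is assumed; a real calendar would also honor dependencies, not just durations:

```python
# Sketch: packing expert duration estimates into an 8-hour-per-day
# execution calendar. Test case names and estimates are illustrative.

HOURS_PER_DAY = 8

def build_calendar(test_cases):
    """Greedily assign (name, hours) pairs to days, in the given sequence."""
    days, current, used = [], [], 0
    for name, hours in test_cases:
        if used + hours > HOURS_PER_DAY and current:
            days.append(current)       # close out the current day
            current, used = [], 0
        current.append(name)
        used += hours
    if current:
        days.append(current)
    return days

estimates = [
    ("General Ledger", 4), ("Asset Management", 4),
    ("Month-End Closing", 8),          # expert estimate: a full day
    ("Payroll Run", 6), ("Wage Types", 2),
]

calendar = build_calendar(estimates)   # three days of execution
```

Test cases longer than one day would need to be split into continuation entries, as the calendar in Exhibit 12.2 does.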
The test manager can set up meetings with different members of the configuration and development teams to capture duration times for all test cases to be executed and the necessary sequence in which the test cases will be executed. Once the test manager captures this information for all in-scope test cases, a testing calendar can be constructed, as seen in Exhibit 12.2, and subjected to peer reviews. As a rule of thumb, test cases that are most critical to the business, that span multiple SAP modules, that contain high customizations, that require verification of data with legacy systems, or that are known to be unstable, high risk, or prone to defects should be executed as early as possible, or as soon as dependencies allow, to permit sufficient time to resolve defects.
When estimating the time needed to execute manual test cases, it is important to recall that the execution time for a manual test case includes the time spent on manual keystrokes to run the test steps and test conditions and also the time needed to manually record all test
EXHIBIT 12.2 Sample SAP Testing Calendar Depicting Execution of Test Cases over a Five-Day Period

[The exhibit shows a five-day grid (Monday 07/10 through Friday 07/14, July '06) for an SAP upgrade regression test. Each day opens at 8:00 A.M. with a test results meeting, breaks from 12:00 to 1:00 P.M. for lunch, and closes at 6:00 P.M. with a team review to discuss defects, continuing as needed. Test cases scheduled across the week include period prep activities, General Ledger, Asset Management, Payroll Run, Stock Transfer Orders, MRP Run, Goods Movement, Wage Types, Invoicing, C-folders, Outline Agreements, Stock Overview, Personnel Administration, SRM, Shipments, Benefits, EBP Shopping Cart/Catalogs, Network Activities, CATS, Purchase Orders, Interfaces, Billing, Deliveries, BW InfoCubes, Bin-to-Bin Movements, Goods Issue, Scrapping, Requisitions, WBS Elements, Financial Reconciliations, and Month-End Closings.]
results. When test cases are executed with automated test tools, the test tool typically generates an execution log that shows all test results, which test steps were completed successfully, and what processes/objects were verified.
In SAP, some manual test executions for a given end-to-end process or SAP transaction code need to be repeated multiple times to account for process variations and data variations. For instance, the SAP transaction code VA01 for creation of a sales order may need to be executed multiple times in order to account for different order types, such as repair, stock transfer, and domestic and international orders. The test schedule and/or test calendar may need to be adjusted to include all potential variations of a given test case.
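Repeating one transaction across data variations is essentially data-driven testing. In the sketch below, the order types come from the text, but `execute_va01` is a hypothetical stand-in for whatever manual procedure or automated script actually drives the SAP transaction:

```python
# Sketch: data-driven repetition of one test case (VA01, create sales order)
# across order-type variations. execute_va01 is a hypothetical placeholder
# for the real test driver.

ORDER_TYPES = ["repair", "stock transfer", "domestic", "international"]

def execute_va01(order_type):
    """Placeholder for the real driver; records one planned run."""
    return {"transaction": "VA01", "variant": order_type, "status": "planned"}

def schedule_variants(order_types):
    """Expand one logical test case into one run per data variation."""
    return [execute_va01(ot) for ot in order_types]

runs = schedule_variants(ORDER_TYPES)  # four runs of the same test case
```

Enumerating the variations explicitly, rather than folding them into one test case, keeps the calendar's duration estimates honest.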
The test calendar can be created for multiweek or even multimonth test cycles. The test calendar can include the name of the tester assigned to execute each test case and show the duration of each test case with a time grid, as shown in the vertical bar of the test calendar within Exhibit 12.2. Test cases whose execution extends over multiple days can be shown as continuation test cases on the following days. Another feature of the test calendar is that test processes or modules to be tested can be shown in different shades, as seen in Exhibit 12.2. When developing the test calendar, create a timeline for all the different tests under each testing phase (i.e., unit and other development testing, scenario testing, integration testing, performance testing, user acceptance testing, etc.).
The test calendar can be highlighted in different colors or altered daily in the event that the execution of test cases takes longer than planned; this information can then be recycled and serve as historical information for future system releases and testing cycles. Similarly, test cases that finish ahead of schedule or earlier than expected should also be highlighted and the information reused for future testing cycles. The test manager can update both the test calendar and the test schedule with the briefings provided at the daily testing meetings. The test calendar in Exhibit 12.2 shows that time is allocated every day prior to the start of testing to review test results from the previous day.
TEST DEPENDENCIES
When setting up the testing calendar, it is important that, within the specific testing context, testing dependencies are planned for or at least reviewed prior to the execution of the first test case. Chapter 7 discussed techniques and methods, such as the test readiness review (TRR), which can be used as checklists for evaluating and assessing the readiness to initiate the first test case execution. The following are some dependencies that should be examined prior to the start of test case execution. Furthermore, the test execution calendar and schedule themselves have dependencies, since it is possible that a given test case cannot be started until another dependent test case has either completed or partially completed.
■ The test environment has to be ready. It is important that the testing environment is separate from the development environment in order to maintain an uncompromised test environment. Also consider the lead times to procure any hardware or software required for this independent type of testing effort.
■ Test data. Test data has to be prepared for the testing efforts. For example, in order to test data transfer rules between interfaces, the test data has to be considered and needs to be prepared accordingly. Data from external systems may need to be preselected for testing interfaces. Test data may also need to be loaded into the SAP test environment with SAP CATT scripts, automated test tools, or other mechanisms prior to the start of the test execution. Special attention needs to be given to test cases that consume SAP data and require unique data records, or test cases that can cause data conflicts because they attempt to process the same data record.
■ Test case dependencies. In order to complete a test, a setup might be needed that requires the run of another test case. Test case dependencies have to be considered. For example, in order to run a specific financial report and analyze its output, numerous test cases might have to be run in order to populate the required report data.
■ End-to-end processes. Other testing dependencies require the test manager to consider testing of chains of SAP transactions that make up an end-to-end process that cuts across multiple modules, such as order-to-cash, purchase-to-pay, and hire-to-retire, with external data and converted data.
■ Roles/profiles. In order to test roles and profiles, appropriate access controls have to be set up and prepared to allow for this type of testing.
Again, these dependencies demonstrate that multiple variables and parameters need to be evaluated prior to the start of the test execution cycle and that omitting a single variable may cause the testing cycle to come to an abrupt halt.
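The test case dependencies above amount to ordering execution so that no case runs before its prerequisites, which is a topological sort. The sketch below uses Python's standard library; the test case names and dependency edges are illustrative, echoing the financial-report and year-end examples from the text:

```python
# Sketch: deriving an execution order from test case dependencies via
# topological sort. Test cases and dependency edges are illustrative.
from graphlib import TopologicalSorter

# Each key depends on (must run after) the cases listed in its value.
dependencies = {
    "Financial Report":     {"Post Journal Entries", "Load Master Data"},
    "Post Journal Entries": {"Load Master Data"},
    "Load Master Data":     set(),
    "Year-End Run":         {"Financial Report"},   # deliberately last
}

order = list(TopologicalSorter(dependencies).static_order())
# Data loading comes first; the year-end run cannot slip earlier.
```

A cycle in the dependency map would raise an exception here, which is itself a useful review check: circular test dependencies mean the calendar cannot be executed as planned.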
TEST METRICS
After the test cases have been scheduled with dependencies and executed, the test team collects testing metrics.
It is important to track the schedule and test completeness via testing metrics. The execution of test cases is an activity that in and of itself generates multiple testing metrics that provide management with transparency into the testing progress and the resolution of defects. Testing metrics provide information such as how many defects have been resolved, how many test cases have been executed, the average time to resolve a defect, the average time to execute a test case, how many test cases a particular tester has executed, the number of requirements met as a result of executed test cases, and so on. Testing metrics also provide answers that assist in evaluating the exit, entrance, and release criteria. Most test management tools automatically provide a comprehensive set of real-time reports, graphs, and charts that facilitate and expedite the gathering of testing metrics. In the absence of a test management tool, spreadsheets and much manual effort can serve as the test completeness–tracking tool. In Exhibit 12.3, testing metrics are collected and gathered manually with a spreadsheet. A summary worksheet is developed that tracks the test execution percentage completion daily.
Testing metrics are important because they help to address the question "Is testing completed?" Everyone³ wants to know when testing is completed; therefore, it is imperative that test execution is tracked effectively. This is accomplished by collecting data or metrics that help identify the test progress, so that corrective action can be taken to assure success. Additionally, using these metrics, the test team can predict the release date for the application. In the case of a release date being dictated, these metrics can be used to measure coverage. Progress metrics are collected iteratively during the various stages of the test execution cycle. Some sample progress metrics are outlined below. Testing metrics will vary widely across SAP installations; the system integrator providing SAP services, the client paying for SAP services, and the auditors for the SAP project can define and establish the testing metrics to be collected, disseminated, distributed, and published based on the project's needs, charter, and contractual obligations:

³Adapted from Elfriede Dustin, 1999, Automated Software Testing, Reading, MA: Addison-Wesley.
Test Procedure Execution Status (%) = Executed number of TP / Total number of TP
This execution status measurement divides the number of test procedures already executed by the total number of test procedures planned. By reviewing this metric value, the test team can ascertain the number of remaining test procedures that need to be executed. This metric, by itself, does not provide an indication of the quality of the application. It only provides information about the depth and
EXHIBIT 12.3 Testing Metrics Collected with a Spreadsheet
progress of the test effort without any indication of the success of the effort itself.
It is important to measure the test procedure steps executed, not just the entire test procedure executed. For example, one test procedure might contain 25 steps. The tester successfully executes steps 1 through 23 and then encounters a showstopper at step 24. In this case, it is not beneficial to fail the entire test procedure; a more precise measurement of the progress, or number of steps executed, is more useful. Measuring test procedure execution status at the step level results in a highly granular progress metric.
The best way to track test procedure execution is by developing a matrix that contains the identifier of the build under test, a list of all test procedure names, the tester assigned to each test procedure, and the percentage complete, updated daily and measured by the total number of test procedure steps versus test procedure steps executed successfully. Many test management or requirements management tools help automate this process.
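The step-level matrix described above can be sketched as follows. The matrix fields mirror the text (build identifier, test procedure, tester, steps), while the build name, tester names, and step counts are invented:

```python
# Sketch: step-level test procedure execution status, per the matrix
# described in the text. Build, testers, and step counts are invented.

def execution_status(matrix):
    """Percent of all test procedure steps executed successfully."""
    done = sum(tp["steps_passed"] for tp in matrix)
    total = sum(tp["steps_total"] for tp in matrix)
    return 100.0 * done / total

matrix = [
    # TP-001 hit a showstopper at step 24 of 25, as in the text's example.
    {"build": "B12", "tp": "TP-001", "tester": "Tester A", "steps_total": 25, "steps_passed": 23},
    {"build": "B12", "tp": "TP-002", "tester": "Tester B", "steps_total": 10, "steps_passed": 10},
]

pct = execution_status(matrix)  # 33 of 35 steps, rather than 1 of 2 procedures
```

Counted at the procedure level the same data would read 50% complete; the step-level figure is the more honest progress measure.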
Defect Aging = Date defect was opened versus date defect was closed
Another important metric in determining progress status might be the turnaround time for a defect to be corrected, also called defect aging. Using defect aging data, the test team can conduct trend analysis. For example, 100 defects may be recorded on a project. When documented past experience indicates that the development team can fix as many as 20 defects per day, the turnaround time for these problem reports may be only one workweek. The defect-aging statistic, in this case, would reflect an average of five days. When the defect-aging measure equals 10 to 15 days, the slower response time by the developers to make corrections may impact the ability of the test team to meet scheduled deadlines. Note that the defect-aging measurement is not always appropriate and needs to be modified depending on the complexity of the specific fix being implemented, among other criteria. Defect aging is a high-level metric that verifies defects are being addressed in a timely manner.
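A minimal sketch of the defect-aging calculation, with invented defect records; still-open defects age against the reporting date:

```python
# Sketch: average defect aging in days. The defect records are invented.
from datetime import date

def average_aging(defects, today):
    """Mean open-to-close age in days; open defects age up to 'today'."""
    ages = [((d["closed"] or today) - d["opened"]).days for d in defects]
    return sum(ages) / len(ages)

defects = [
    {"id": 1, "opened": date(2006, 7, 3), "closed": date(2006, 7, 7)},   # 4 days
    {"id": 2, "opened": date(2006, 7, 4), "closed": date(2006, 7, 10)},  # 6 days
    {"id": 3, "opened": date(2006, 7, 5), "closed": None},               # still open
]

avg = average_aging(defects, today=date(2006, 7, 10))  # (4 + 6 + 5) / 3 days
```

Tracked daily, a rising average is the early signal the text describes: developer turnaround is slipping toward the 10-to-15-day danger zone.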
If developers don’t fix the defects in time, this can have a rippleeffect. Testers will run into related defects in another area, now du-plicating defects. One defect fix could prevent all subsequent defectsfrom occurring. In addition, the older a defect becomes, the more dif-
280 TESTING SAP R/3: A MANAGER’S STEP-BY-STEP GUIDE
12_4782 2/5/07 11:19 AM Page 280
ficult it may be to correct it, since additional code may be built on topof it. Correcting the defect at this point may have much larger impli-cations on the software than when it was originally discovered.
Defect Fix Retest = Date defect was fixed/released in new build subtracted from date defect was entered or retested and failed.
The defect fix retest metric provides a measure of whether the test team is retesting corrections at an adequate rate. If defects that have been fixed are not retested adequately and efficiently, this can hold up progress, since developers cannot be assured that their fixes have not introduced new defects or that the defects have been properly corrected. This last point is especially important: code that is being developed on the assumption that the previous code has been fixed will otherwise have to be reworked. If defects are not being retested quickly enough, the testing team has to be reminded of the importance of retesting fixes, so developers can move forward knowing their fix has passed the test.
Defect Trend Analysis = Number of total defects found versus testing life cycle
Defect trend analysis can help determine the trend of defects found. Is the trend improving as the system-testing phase is nearing completion, or is the trend worsening? This metric is closely related to the newly opened defects discussed below. The number of newly opened defects should decline as the system testing phase nears the end; otherwise, it might be an indicator of a severely flawed system.
If the number of defects found is increasing with each subsequent test release, assuming no new functionality is being delivered and the same code is being tested, only with code fixes, it could be indicative of numerous problems, such as:
■ Improper code fixes from development for previous defects.
■ Incomplete testing coverage for each build; new testing coverage discovers new defects.
■ Tests could not be executed until some of the defects were fixed, and then new defects are found once the previous defects are resolved and the tests can proceed to that point in the code.
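The trend check described above can be sketched as a per-build count of newly opened defects; the build labels and counts are invented:

```python
# Sketch: defect trend analysis, newly opened defects per test release.
# Build labels and counts are invented for illustration.

def trend(new_defects_per_build):
    """'improving' if newly opened defects decline build over build."""
    counts = list(new_defects_per_build.values())
    if all(later <= earlier for earlier, later in zip(counts, counts[1:])):
        return "improving"
    return "worsening"

builds = {"Build 1": 42, "Build 2": 30, "Build 3": 18}  # declining, healthy
healthy = trend(builds)

builds["Build 4"] = 25                                  # rising near the end
late_spike = trend(builds)
```

A worsening trend late in system testing is exactly the signal the text warns about: a possibly severely flawed system, or fixes that keep breaking things.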
Quality of Fixes = Previously working functionality and number of new errors introduced
The value obtained from this calculation provides a measure of the quality of the software corrections implemented in response to software problem reports.
This metric aids the test team in determining the degree to which other, previously working, functionality has been adversely affected by software corrections. When this value is poor, the test team needs to make the developers aware of the problem. This metric is also referred to as the recurrence ratio. It measures the percentage of fixes that fail to correct the defect or, more specifically, that introduce a new error or break previously working functionality. This ratio can be useful for measuring the success of the unit and integration testing efforts.
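The recurrence ratio might be computed as below; the fix outcomes are invented for the example:

```python
# Sketch: recurrence ratio, the percentage of fixes that failed retest
# (reopened, or broke previously working functionality). Data invented.

def recurrence_ratio(fixes):
    """Percent of fixes that did not hold up under retest."""
    failed = sum(1 for f in fixes if f["reopened"] or f["broke_other"])
    return 100.0 * failed / len(fixes)

fixes = [
    {"defect": "D-101", "reopened": False, "broke_other": False},
    {"defect": "D-102", "reopened": True,  "broke_other": False},  # fix didn't hold
    {"defect": "D-103", "reopened": False, "broke_other": True},   # regression
    {"defect": "D-104", "reopened": False, "broke_other": False},
]

ratio = recurrence_ratio(fixes)  # 2 of 4 fixes failed
```

A rising ratio points back at the unit and integration testing of the fixes themselves, which is the use the text suggests for this metric.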
Defect Density = Total number of defects found / Executed number of TP per requirement
The defect density metric is an average calculated by dividing the total number of defects found in a specific functional area or requirement by the number of test procedures executed against it. For example, if there is a high defect density in a specific functional area, it is important to conduct a causal analysis using the following types of questions:
■ Is this functionality very complex, and is a high defect density therefore to be expected?
■ Is there a problem with the design/implementation of the functionality?
■ Were inadequate resources assigned to the functionality, becausean inaccurate risk had been assigned to it? It also could beinferred that the developer responsible for this specific function-ality needs more training.
Additionally, when evaluating defect density, the priority of the defect will need to be considered. For example, one application requirement may have as many as 50 low-priority defects while the acceptance criteria have been satisfied. Still, another requirement might have one open high-priority defect that prevents the acceptance criteria from being satisfied.
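The defect density calculation above can be expressed as a short script. This is a minimal sketch; the requirement IDs, priorities, and executed-TP counts are illustrative, not from the book.

```python
from collections import defaultdict

# Hypothetical defect records as (requirement_id, priority) pairs, plus the
# number of test procedures (TP) executed per requirement.
defects = [
    ("REQ-01", "low"), ("REQ-01", "low"), ("REQ-01", "high"),
    ("REQ-02", "high"),
]
executed_tps = {"REQ-01": 6, "REQ-02": 2}

def defect_density(defects, executed_tps):
    """Defect Density = total defects found / executed TP, per requirement."""
    counts = defaultdict(int)
    for req, _priority in defects:
        counts[req] += 1
    return {req: counts[req] / tps for req, tps in executed_tps.items()}

print(defect_density(defects, executed_tps))  # {'REQ-01': 0.5, 'REQ-02': 0.5}
```

A causal analysis would then focus on the requirements with the highest ratios, weighted by defect priority as discussed above.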
These are just a few of the metrics that need to be gathered to measure test program execution; many more are available. These form the core set of metrics to be tracked in order to allow for corrective activity, if necessary, to point out risk areas, and to allow successful execution of the test program. Again, testing metrics need to be defined that are suitable to the project's needs prior to the start of test execution.
TEST LOGS AND RESULTS
In addition to testing metrics, test execution brings about test results that can show that the system either works as intended or fails to do so. Depending on the SAP environment and industry regulations, test results may need to be stored in a secured repository that offers version control and audit trail capabilities.
Test results can be stored either manually as hard copies or electronically, as in scanned images. The project's policy will dictate the length of time test results must be stored, naming standards for test results, who signs the test results, how test results will be archived, and who has the ability and permission to modify and/or review the test results.
Test results can include the following information: the test status for each test step, the screenshots used to show that the test step was executed successfully, the comments (if any) annotated after a test script was executed, log files or forms produced after a test case was executed, signatures provided to approve the test results, and automatically generated test logs from automated test tools. Most test management tools will store test results from manual and automated test case execution, including screenshot printouts. Test results can also be stored in spreadsheets, assuming that the spreadsheets are placed in a common repository such as a shared drive or Solution Manager.
CHAPTER 13
Management of Test Results and Defects
After test cases are executed, test results are reported and testing defects are resolved. Test results consist of test logs generated automatically from the execution of automated test cases and results reported from manually executing each test step. A test result takes on the value of "pass" when the actual test results match the expected test results. Otherwise, the test result takes on the value of "fail." A successful test result with a value of pass indicates that the execution of the test case has successfully fulfilled the requirement. Test results with a value of fail will require analysis and can become defects. Depending on its priority or category, a defect has the potential to delay or postpone a system release date.
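The pass/fail rule just described can be sketched as a simple check: a test case passes only when every step's actual result matches the expected result, and any mismatch flags a candidate defect. The step names and results below are illustrative.

```python
# Minimal sketch of the pass/fail rule: a test case passes only if every
# step's actual result matches its expected result.
def evaluate_test_case(steps):
    """steps: list of (name, expected, actual). Return ('pass'|'fail', failed names)."""
    failed = [name for name, expected, actual in steps if expected != actual]
    return ("pass", []) if not failed else ("fail", failed)

steps = [
    ("Create sales order", "Order saved", "Order saved"),
    ("Check status bar", "Sales Order created", "Error: credit block"),
]
status, failed = evaluate_test_case(steps)
print(status, failed)  # fail ['Check status bar']
```

Each failed step would then be analyzed and, if warranted, submitted as a defect in the tracking tool.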
Defects are stored and managed in a secured system with database, reporting, and query capabilities. The Change Control Board (CCB) helps to manage, prioritize, assign, and categorize defects and subsequently determine how the defect will be resolved. The life cycle of a defect may include multiple states and handoffs among project members for initial testing, final validation, and approval before it is closed.
Reports are generated for all stored defects. Defect reports are useful for supporting the exit criteria for each testing phase. Reports for defects can show the trend for defects for a given time segment, the number of outstanding defects, and the closure rate for the defects.
NEED FOR DOCUMENTING TEST RESULTS
The formality of the reporting and management of test results is largely a function of the testing effort, project discipline, industry regulations, and available resources dedicated to the testing effort.
Entities implementing SAP (whether they are in the privately held, publicly traded, federal government, public, or not-for-profit sector) will have a variety of reasons for documenting SAP test results. The reasons for documenting test results may be imposed on the entity implementing SAP due to regulations such as Sarbanes-Oxley (SOX) compliance, may be part of a corporate approach and standard for implementing information technology such as the discipline offered by the Capability Maturity Model (CMM) from the Software Engineering Institute (SEI), or may serve as a means of measuring and verifying the system integrator's completion rates for the execution of test cases.
The following are some reasons that companies and entities in different fields will need to document SAP test results:
■ A publicly traded company implementing SAP will need to establish a procedure for storing and managing test results to meet compliance with Section 404 of SOX. Section 404 contains five phases, including management testing, key controls, and audits.
■ A pharmaceutical company implementing SAP will need to comply with good manufacturing practice (GMP) requirements set forth in the Quality System (QS) regulation promulgated under Section 520 of the Food, Drug, and Cosmetic (FD&C) Act. In short, according to the general principles of software validation of the Food and Drug Administration (FDA):
The validation [of requirements] must be conducted in accordance with a documented protocol, and the validation results must also be documented. (See 21 CFR §820.70(i).) The test cases should be executed and the results should be recorded and evaluated to determine whether the results support a conclusion that the software is validated for its intended use.1
■ An entity requesting SAP services from a systems integrator may ask that all test results from the integration test be documented as proof that the test cases were executed. Test results may show that a test case was executed successfully or that it failed and thus required the successful resolution of a defect to ensure that the requirement was implemented correctly.
1. www.fda.gov/cdrh/comp/guidance/938.html#_Toc517237968.
■ A corporation or government agency adhering to the CMM at level 2 or higher as part of its documentation standards may document all test results.
■ The Department of Defense, as part of its DoD 5000 series acquisition regulations, may require that test results be captured, documented, and stored during the independent testing phases, as well as during operational assessments for the implementation of an enterprise resource planning (ERP) system such as SAP.
Independent of the reason for documenting test results (whether it is mandated or optional), the most important reason to document test results is to trace system, functional, and technical requirements to the successful execution of test cases. Documented actual test results demonstrate either that a test case was successfully implemented, since its execution obtained a pass status, or that it failed because the actual results did not match the expected results documented in the test case. Test cases reach a successful state of "completed" when all test steps have been completed with a pass status (since a test case may have multiple test steps) and when the appropriate stakeholders have approved and signed off the test results. Test cases that have a status of failure after test case execution demonstrate that the requirements were not implemented correctly, or that some other condition prevented the tester from proving that the requirement was implemented correctly.
STORING TEST RESULTS AND METRICS GATHERING
Test results provide a chronological record with audit trails of relevant details about the execution of tests. Test results are generated in two formats: (1) manual, whereby testers report test results for each individual test step; or (2) automated, whereby after an automated test case is executed it automatically produces a test log with all the test results. An automated test log will contain information such as start and end times for the test, results for verification points (i.e., verify that SAP produces a status bar message at a designated point in a process), and all the automated test steps that failed and passed.
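A test log of the kind described above can be summarized programmatically. The log structure below is a hypothetical sketch, not the format of any particular test tool.

```python
# Sketch of summarizing an automated test log: start/end times,
# verification points, and per-step pass/fail. Structure is illustrative.
log = {
    "test_case": "VA01 create sales order",
    "start": "2007-02-05 11:19:00",
    "end": "2007-02-05 11:21:30",
    "steps": [
        {"name": "Enter order data", "status": "pass"},
        {"name": "Verify status bar message", "status": "pass"},
        {"name": "Check document flow", "status": "fail"},
    ],
}

def summarize(log):
    """Roll a step-level log up to a case-level result."""
    failed = [s["name"] for s in log["steps"] if s["status"] == "fail"]
    return {"case": log["test_case"],
            "result": "fail" if failed else "pass",
            "failed_steps": failed}

print(summarize(log))
```

A reviewer would still open the full log to see the specifics of each verification point, but a summary like this is what feeds the test-progress metrics discussed earlier.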
Management and storage of test results can take place either in a test management tool with query and reporting capabilities for both manual and automated test cases or within spreadsheets/text editors that are not tied together for manually executed test cases. The decision to utilize a test management tool to store and manage test results depends largely on the project's budget, the learning curve for the project's members, and the project's audits and compliance regulations. A test management tool for collecting and storing test results and documenting defects includes the following benefits:
■ Audit trails.
■ Integration with automated test tools.
■ Slicing/dicing of data.
■ Security log-on features.
■ Scheduled reports.
■ Version control.
■ E-mail workflow based on defined business rules.
■ Customizations.
■ A single repository for all data.
■ Greater transparency for audits and managerial reports, since it offers integrated data from a single location and both custom and canned reports.
■ Historical data (i.e., how long on average it takes to resolve defects reported against functionality for a particular SAP module, how long on average it takes developer "X" to resolve a defect, or which requirements are most likely to cause defects when executed, etc.).
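The historical-data benefit in the last bullet can be sketched as a small calculation over closed-defect records, here the average resolution time per developer. The records are illustrative.

```python
from datetime import date

# Hypothetical closed-defect records with open/close dates per developer.
closed = [
    {"developer": "X", "opened": date(2007, 1, 2), "closed": date(2007, 1, 5)},
    {"developer": "X", "opened": date(2007, 1, 10), "closed": date(2007, 1, 11)},
    {"developer": "Y", "opened": date(2007, 1, 3), "closed": date(2007, 1, 9)},
]

def avg_resolution_days(records):
    """Average calendar days from defect open to close, per developer."""
    totals = {}
    for r in records:
        days = (r["closed"] - r["opened"]).days
        t, n = totals.get(r["developer"], (0, 0))
        totals[r["developer"]] = (t + days, n + 1)
    return {dev: t / n for dev, (t, n) in totals.items()}

print(avg_resolution_days(closed))  # {'X': 2.0, 'Y': 6.0}
```

A test management tool computes exactly this kind of aggregate from its single repository; with disconnected spreadsheets the same data would have to be collated by hand.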
However, collecting test results in spreadsheets, text editors, and notepads offers the following benefits:
■ Inexpensive, since spreadsheets and text editors are normally included with the standard software image.
■ Reduced learning curve for project members.
Companies with reduced budgets and limited functional scope can test SAP manually with spreadsheets, where test cases and test results are documented, for the short run. Over the long run, however, as the project's SAP functionality increases, more SAP modules and bolt-ons (i.e., Supplier Relationship Management) are added, and test results are subjected to more third-party audits, the likelihood of successfully managing SAP test results and defects with disconnected spreadsheets decreases. When the number of test cases and scenarios increases as the project adds new requirements, functionality, enhancements, and patches that are subject to multiple rounds of integration and regression testing, the need for a test management tool is magnified. Many large projects that attempt to collect test metrics and test results to meet audits cannot do so effectively, or at all, with a series of disconnected spreadsheets.
Test results need to include information that shows that a test step was actually successfully executed or that it failed to execute. The test results for manually executed test steps are documented in the field known as "actual results." Automated test cases may automatically update all test steps with the status of pass for a successfully executed automated test case but may not actually update the actual results field, which would require the reviewer of an executed test case to review the automatically generated test log in order to verify the individual results for each test step. For instance, a test case that was automated may include the creation of an SAP project (transaction code CJ20N), and the project may have been successfully created in SAP, but to learn the actual result for this automated test case it will be necessary to open the automatically created test log and review which project number the system actually created.
Screenshots attached to test results can show that a particular process was successfully executed, or document a failure in the event that a defect is needed. Screenshot printouts can be used in SAP to show that a particular test step produced an expected outcome, such as a status bar message (i.e., Sales Order XXX was created for transaction VA01), that a workflow object was triggered and routed correctly, that a financial report produced correct calculations, or that custom fields properly displayed and processed information on an SAP screen. Screenshot printouts can be attached to test results to demonstrate that the system functions correctly at both the front and back end. Screenshot printouts are also useful when a defect is created, to facilitate the development or configuration team members' understanding of the error after the tester identifies a system error from the execution of a test step.
Both test results and screenshot printouts can be archived and either stored electronically or printed and stored in a file cabinet after they have been completed and approved. Electronic test results can be obtained by scanning the test results and storing them in a secured electronic medium. For printed results stored in file cabinets, a designated person will need to administer the hard-copy results and control access to the file cabinet to prevent test results from being tampered with.
After the test results have been accepted and approved, the test team can collect data and use it as input for metrics that assist management in determining test progress. Again, test management tools facilitate the collection of test data and the generation of reports showing which test cases have been executed, which test cases have been assigned, the total number of test cases that have been reexecuted in the event of changes or defects, which test cases are behind or on schedule, and so on. Metrics for test results can be compiled and provided to the appropriate stakeholders, including senior management for large testing efforts. Collecting test data from disconnected spreadsheets and text editors may create logistical problems or prove impossible for the test team.
REPORTING TEST DEFECTS
Whenever the actual test results differ from the expected test results documented within a test case, a defect is reported. The tester who discovers the defect while executing the test case submits the defect within the defect-tracking tool. After the defect is submitted, it is reviewed, assigned, and subsequently closed when it has been successfully resolved. For example, Exhibit 13.1 shows a typical process for resolving a defect, whereby a defect is identified, assigned, reviewed by the CCB, resolved and retested, and closed when the original submitter of the defect is satisfied with the successful resolution of the defect. The CCB is involved with the resolution of a defect during testing because SAP is an integrated solution, and even the most benign system change made as a result of a defect resolution can have cascading effects on other system components, affect the project's scope, and consequently increase the project's costs. The CCB reviews the validity and merits of the submitted defects based on the project's requirements. The CCB may decide that resolving the defect is out of scope for the project's existing SAP release, that the defect will be assigned to the appropriate team members for resolution, or that the defect will be deferred to a future system release.
EXHIBIT 13.1 Typical Approach for Resolving a Defect

[Flowchart: life cycle of a defect. The originator submits the defect; the team lead evaluates and reviews it, rejecting duplicates or defects lacking supporting test results, and determines its impact; the impact analysis is routed to the CCB, which closes defects that are out of scope; the team lead (TL) assigns in-scope defects to a team member, who fixes the problem, implements the solution, verifies the fix, and flags it as ready for test; the tester verifies the resolution, rejecting the fix if it fails and approving it if it passes; the defect is then closed.]
The rigor and discipline applied to reporting and tracking defects vary across testing efforts. For instance, during the unit-testing phase, defects may not be formally reported or documented, and defects may be resolved on the fly without aid and assistance from the CCB. However, during integration testing, all defects may be documented, reviewed, assigned, and closed with proper documentation, including screenshot printouts and signatures from all affected stakeholders. Furthermore, resolutions to defects may not be transported to the next environment until all corresponding documentation for the defect has been validated.
The main components and attributes of a defect include its severity, state, category, description, impact, and time and resource estimates for resolving it.
Exhibit 13.2 shows the potential categories for assigning SAP defects, in addition to proposed resolutions for each category. It is important to assign categories to SAP defects, since that can lead to the correct assignment of the defect from the get-go, as opposed to assigning the defect to individuals who are not responsible for resolving it.
Severity of a defect describes how important the defect is to the business, whether it is a "showstopper" or a minor defect that can be addressed at a future date. Ranking defects by priority or severity is important because it helps to assess whether the system is ready for go-live or can move from one testing effort to the next. For instance, a defect identified during integration testing that shows that an MRP (Material Requirements Planning) run cannot be carried out, that sales orders cannot be created consistently, or that month-end closing activities cannot be accomplished may delay the go-live for an SAP implementation and may show that the system cannot exit integration testing until these defects are resolved. In contrast, a defect that shows that time entry cannot be entered through the online timesheet may not fall under a high priority, since it has a workaround: employees can enter time on manual paper forms. Typically, defects that are critical to the business and operations and have no documented solution or workaround are the defects that deserve and warrant the most attention from the development and configuration teams, and whose resolution is critical to continuing testing activities.
Exhibit 13.3 displays severity levels for SAP defects and their impact on testing activities. When a defect is submitted, a defect severity is chosen and then refined later when the CCB and/or team leader reviews the defect. A tester may perceive that a defect is of the highest severity, only to find out that a configuration team leader has a workaround for the defect, and thus its severity level is reduced. Assigning the severity level for a defect may involve multiple individuals from different teams.

The severity of a defect determines how critical the defect is to the business and also how quick the turnaround time should be for resolving it. A system integrator implementing SAP for a customer may be told that defects with a severity level of one must be solved within the same business day that the defect was identified. Conversely, a client may expect the system integrator to resolve minor defects within a workweek. The client and the system integrator must reach a reasonable accord as to what the turnaround times will be for resolving defects and also document all assumptions made in estimating those turnaround times. Exhibit 13.4 provides suggested turnaround times for resolving defects based on assigned severity levels.

EXHIBIT 13.2 Categories and Recommended Resolution Methods for Reported SAP Defects

Data Defect: The test case was documented and executed with invalid test data. The process or object to be tested cannot be tested until valid data can be identified.
Resolution—Identify and remove the erroneous data value from the test case, replace it with a valid value, and reexecute the test.

RICEWF (ABAP) Coding Defect: The ABAP program, user-exit, or object is defective in that it does not meet the technical specifications or requirements documented and managed in the requirements tools. This type of defect is also triggered when the program fails to function or compile, or produces short dumps. The types of RICEWF objects include Reports, Interfaces, Conversions, Enhancements, Workflow, and Forms. Defects with batch-scheduled jobs can also be reported under this category of defects.
Resolution—Identify the programming or development defect if any RICEWF object is not meeting documented requirements, or technical or functional specifications. Assign the defect to the development team for resolution and initial testing. The functional team reexecutes the test case for validation and approval.

Configuration Defect: The SAP configuration or configuration settings do not meet the functional requirements captured during the blueprint phase or stored in the requirements management tool.
Resolution—Report the defect and assign it to the Change Control Board for further review. The CCB can assign the defect (if it is in scope) to the appropriate configuration team for resolution. The configuration team resolves and tests the defect, then assigns the defect to the test team or subject matter experts for final validation.

User Role Defect: The SAP test case cannot be executed because the role assigned for the execution of the test case does not have all the necessary authorizations based on documented security requirements and/or segregation of duties.
Resolution—Report the defect to the security team and identify the correct role, or modify the existing role to allow test case execution to resume.

Vendor Software Defect: The SAP solution as delivered out of the box has deficiencies, errors, bugs, or defects not associated with the project's custom or unique system settings.
Resolution—Contact the in-house SAP representative or SAP support (if a software maintenance agreement is in place) to troubleshoot and resolve the problem. Defect resolution may require application of OSS notes, patches, hot packs, or a system upgrade.

End User Error: This is a bogus defect. This category occurs when there is nothing wrong with the SAP system and/or documented requirements, and the tester incorrectly logged the defect because he does not have sufficient training in SAP, did not understand the test case, or executed the test case incorrectly.
Resolution—Provide further training for the tester executing the test case, or update the test case documentation so that test steps are clearly understood by a wider audience of test participants with different levels of knowledge in SAP.

Requirements Defect: This defect occurs when the application meets the documented requirement but there is a problem with the documented requirement. The requirement may have been ambiguous, incomplete, inconsistent with the company's rules and policies, etc.
Resolution—Turn the requirement over to the Change Control Board (CCB) for further evaluation. The requirement may be deferred, waived, scrapped, or reworded under a controlled process for implementation. Update test cases and test data to provide coverage for the deficient or erroneous requirement and retest the system to verify the requirement.

EXHIBIT 13.3 Defect Severity Levels

Severity 1: SAP crashes or locks up. (a) Critical SAP scenarios, features, or requirements cannot be performed. (b) A requirement of the highest level of importance cannot be performed. Testing impact: testing cannot proceed until the defect is resolved.

Severity 2: System is operable but has technical problems (major) and no workarounds exist. (a) Required system capability or functionality cannot be performed and no workaround solution is known/available. (b) The project's cost or schedule is at risk and no workaround solution is known/available. Testing impact: continuing to test the system under this severity may jeopardize the testing schedule.

Severity 3: Development or configuration problem (medium/major) where an acceptable workaround has been identified. (a) Required system capability or functionality cannot be carried out but a workaround solution has been identified. (b) The project's cost or schedule is at risk but a workaround solution is known/available. Testing impact: little or no impact to the testing schedule.

Severity 4: Development or configuration problem (minor). (a) Results in end-user inconvenience but does not affect a requirement, system capability, service-level agreement, or functionality. (b) Does not affect other testing tasks or execution of other test cases. Testing impact: no impact to testing activities/schedules.

Severity 5: Minor. Problems that do not require system changes or are "nice to have" but are not necessary to continue operations or meet documented requirements; problems related to documentation. Testing impact: no impact to testing activities/schedules.
During its life cycle, a defect may enter several states that allow the project team to track and monitor its status. A defect management tool that includes e-mail workflow functionality is recommended for tracking and monitoring the status of a defect. For instance, when the status of a defect is changed from "assigned" to "ready for retest," an automatic e-mail is sent to the test team indicating that the defect can be retested in the development or test environment before it is ready for transport. Exhibit 13.5 provides several defect states and definitions for each state. The states of a defect can be customized or changed based on the project's preferences and standards. Also, a project may decide who is responsible for changing the status of a defect. For example, a project may specify that only a client representative can close out a defect that the system integrator resolved.
EXHIBIT 13.4 Proposed Turnaround Times to Resolve a Defect

Suggested Defect Resolution Turnaround Times (Severity / Resolved within)

1: (a) As soon as possible (system-down emergency); (b) within 1 business day.
2: (a) As soon as possible; or (b) within 1–3 business days.
3: Within 4 business days.
4: Within 1–2 work weeks.
5: If within scope, resolve at any time within the existing release or future releases.
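The turnaround times in Exhibit 13.4 can be encoded as a simple lookup table, for example to drive overdue-defect reports. The dictionary encoding below is our sketch; only the day ranges come from the exhibit.

```python
# Suggested resolution windows from Exhibit 13.4, as (min, max) business
# days per severity level. Severity 5 has no fixed window (release scoped).
TURNAROUND_BUSINESS_DAYS = {
    1: (0, 1),    # as soon as possible / within 1 business day
    2: (1, 3),    # within 1-3 business days
    3: (4, 4),    # within 4 business days
    4: (5, 10),   # within 1-2 work weeks
    5: None,      # resolve within the existing or a future release, if in scope
}

def max_days_to_resolve(severity):
    """Upper bound of the resolution window, or None if release scoped."""
    window = TURNAROUND_BUSINESS_DAYS[severity]
    return None if window is None else window[1]

print(max_days_to_resolve(1))  # 1
```

A project would substitute its own negotiated windows; the point is that once the client and integrator agree on turnaround times, the agreement is mechanical to enforce.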
EXHIBIT 13.5 Possible States of a Defect

Submitted: The tester executed a test case and reported the defect for resolution, and it is awaiting team lead review.

Review-Team Lead: The defects in this state are being evaluated by the team lead for assignment into one of the following states: Open-Unassigned, Closed-Duplicate, Closed-Invalid, Open-Assigned.

Open-Unassigned: The defects in this state are recognized as valid; however, the team lead has not assigned the defect to a team member for resolution, or the team lead cannot propose a current resolution for the defect.

Open-Impact: The defects in this state are recognized as valid; however, the team lead has not assigned the defect to an assessment (CCB) team member for resolution because the impact to the system from the resolution of the defect is being analyzed by the change control board.

Open-Assigned: The defects in this state have been assigned to a team member for resolution, and the team member is currently working on a solution for the problem.

Closed-Duplicate: The defects in this state have been reviewed by the team lead but closed because there is already another reported identical defect.

Closed-Invalid: The defects in this state have been reviewed by the team lead but closed because either the tester made a mistake in executing the test case or the resolution for the defect is out of scope.

Retested (Passed): The defects in this state have had a successfully implemented resolution (fix).

Retested (Fail): The defects in this state encountered failures that indicate the fix was not successfully implemented.

Ready for retesting: The defect resides in this state to indicate that a fix for the defect is ready to be retested.

Ready for transport: The defects in this state are ready for promotion to the final SAP client (i.e., Production, QA).

Complete (Closed): The defects in this state have been successfully tested, and all corresponding defect documentation and fields have been updated. Only the defect submitter can complete the defect.
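The defect states in Exhibit 13.5 can be sketched as a small state machine. The allowed transitions below are our reading of the state descriptions, not an authoritative workflow definition.

```python
# Sketch of the defect life cycle as a state machine. States mirror
# Exhibit 13.5; the transition map is an illustrative interpretation.
TRANSITIONS = {
    "Submitted": {"Review-Team Lead"},
    "Review-Team Lead": {"Open-Unassigned", "Open-Assigned", "Open-Impact",
                         "Closed-Duplicate", "Closed-Invalid"},
    "Open-Unassigned": {"Open-Assigned"},
    "Open-Impact": {"Open-Assigned", "Closed-Invalid"},
    "Open-Assigned": {"Ready for retesting"},
    "Ready for retesting": {"Retested (Passed)", "Retested (Fail)"},
    "Retested (Fail)": {"Open-Assigned"},
    "Retested (Passed)": {"Ready for transport"},
    "Ready for transport": {"Complete (Closed)"},
}

def advance(state, new_state):
    """Move a defect to new_state, rejecting transitions the map forbids."""
    if new_state not in TRANSITIONS.get(state, set()):
        raise ValueError(f"illegal transition: {state} -> {new_state}")
    return new_state

state = advance("Submitted", "Review-Team Lead")
state = advance(state, "Open-Assigned")
print(state)  # Open-Assigned
```

Encoding transitions this way is what lets a defect management tool trigger e-mail workflow on state changes and prevent, for example, a defect jumping straight from Submitted to Complete (Closed).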
Defects with different statuses, severity levels, and categories can be extracted into reports for presentation to the project's senior managers. Test management tools offer capabilities for generating reports, graphs, and charts for reported defects, and they offer a single repository for reporting defects. Reports include defect trends, such as whether the number of defects is increasing or decreasing on a weekly basis. Other reports include how long, on average, it takes to resolve defects with severity level one, two, and so on; how many defects remain open; and how long they have been open. The ability to query a database of defects and generate reports allows test and project managers alike to make informed decisions for supporting the exit criteria for a testing effort or assessing the readiness for a go-live. For instance, the exit criteria for integration testing can impose that no defects with severity levels of one or two remain open, in addition to showing a decreasing trend of defects on a weekly basis, in order to exit integration testing.
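An exit-criteria check of the kind just described, no open severity one or two defects, can be sketched as a query over defect records. Field names and data are illustrative.

```python
# Hypothetical defect records for an exit-criteria query: integration
# testing may not exit while any severity 1 or 2 defect remains open.
defects = [
    {"id": 101, "severity": 2, "state": "Open-Assigned"},
    {"id": 102, "severity": 4, "state": "Open-Assigned"},
    {"id": 103, "severity": 1, "state": "Complete (Closed)"},
]

OPEN_STATES = {"Submitted", "Open-Unassigned", "Open-Impact", "Open-Assigned",
               "Ready for retesting", "Retested (Fail)"}

def blocking_defects(defects, max_severity=2):
    """IDs of open defects severe enough to block the exit criteria."""
    return [d["id"] for d in defects
            if d["severity"] <= max_severity and d["state"] in OPEN_STATES]

print(blocking_defects(defects))  # [101]
```

This is the kind of query a test management tool answers in one report; with disconnected spreadsheets it must be assembled manually for every status meeting.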
The recommended fields to be populated before a defect is completed and its associated object (which triggered the creation of the defect) is transported into production or another target environment are included below:
■ Results from impact analysis (i.e., include description for affected processes, areas).
■ Level of effort (hours needed to resolve the defect).
■ Originator's name.
■ Description, including observed output (i.e., messages, system responses, test results, dumps, etc.).
■ Screen captures.
■ Category.
■ Priority.
■ Corresponding test case.
■ Affected SAP area or module.
■ Workarounds (if any).
■ Release notes (if any).
After the suggested fields above are populated, a defect can be closed or completed.
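The closure checklist above can be sketched as a completeness check run before a defect is marked complete. The field names are paraphrased from the bullet list, not a tool's actual schema.

```python
# Required fields (paraphrased from the list above) that must be populated
# before a defect can be closed. Optional fields (workarounds, release
# notes) are deliberately omitted.
REQUIRED_FIELDS = {
    "impact_analysis", "effort_hours", "originator", "description",
    "screen_captures", "category", "priority", "test_case", "sap_area",
}

def can_close(defect):
    """Return (ok, sorted list of missing or empty required fields)."""
    missing = REQUIRED_FIELDS - {k for k, v in defect.items() if v}
    return (not missing, sorted(missing))

defect = {"originator": "J. Doe", "category": "Configuration Defect",
          "priority": 2, "description": "MRP run fails", "test_case": "TC-17",
          "sap_area": "PP", "impact_analysis": "affects MRP only",
          "effort_hours": 8, "screen_captures": ["mrp_error.png"]}
ok, missing = can_close(defect)
print(ok, missing)  # True []
```

A defect management tool would typically enforce such a rule as a precondition on the transition to the Complete (Closed) state.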
CHAPTER 14
Testing in an SAP Production Environment
The axiom of a production SAP system is upgrades and system changes. In the human body, a medication that cures some symptoms may cause unintended side effects. In the software world, particularly in the world of SAP with its promise of data and process integration among modules, a single change to one area of the system may cause "side effects" in previously working system functionality. Many factors trigger changes and upgrades in a live SAP system (i.e., applying OSS [Online Service System] notes, graphical user interface [GUI] upgrades, etc.). These system changes must be thoroughly tested to avoid adverse cascading effects within the SAP landscape. The ability to introduce vital system changes to a live SAP system is compromised when a project relies on manual testing and does not have an automated testing framework.
Commercial automated tools provide a bona fide solution to support and facilitate upgrades and changes to an SAP production environment. Automated test tools facilitate and expedite the execution of test cases for regression testing. Despite the promise of implementing an effective automation framework with commercial test tools as a means to support testing of SAP production changes, many SAP projects struggle to do so. Regression and performance testing with automated test tools can lead to the creation of a library of SAP test scripts that will help maximize the investment in test tools while increasing the dependability of mission-critical business processes running in the SAP production system.
PRODUCTION SUPPORT BACKGROUND
In an SAP production environment, system changes can be planned, emergency (ad hoc), or part of a major upgrade (i.e., a GUI upgrade).
Planned system changes for improving the system, such as enhancements, routine maintenance, and patches, tend to drive the bulk of all system changes. Planned system changes are transported into production as part of a scheduled release. Improvement changes can be either optional or mandatory and are generally customer driven. Planned changes are considered customer driven because they often deal with a system improvement or enhancement that the SAP production support team implemented.
Emergency changes that are considered "showstoppers" can be transported into the production environment on an ad-hoc basis, or as soon as a resolution is identified and thoroughly tested. Generally, emergency changes come from help desk tickets that end users report when they are unable to carry out a critical business process. Examples of emergency changes include the inability to run payroll, an inoperable system that users cannot log on to, segregation-of-duties violations where bogus vendors can be set up, and incorrect logic in quota arrangements that causes the company to incur financial losses. Emergency changes are normally considered mandatory since they have no workarounds and can disrupt the business if a successful resolution is not identified and transported. Emergency changes can be customer driven or vendor driven. They are customer driven when the customer finds an in-house solution to the problem through the production support team. Changes are vendor driven when the solution may come only from the vendor (i.e., an OSS note).
Major system releases and system upgrades, such as upgrading the GUI, implementing a new industry-specific solution, applying vendor-released hot packs, or adding a new SAP module, may require feasibility studies, gap analyses, workshops, and comprehensive testing before their implementation is considered. System upgrades of this nature are primarily vendor driven and may become necessary or even mandatory in order to prevent the system from becoming obsolete or to keep the existing vendor maintenance agreement current.
Exhibit 14.1 illustrates the categories for production changes.
The one constant and immutable heuristic associated with system changes is that they are subject to testing. The testing can be at any one of the following levels: unit, string, integration, regression, smoke, security, and performance. The amount of testing necessary to verify a system change depends on the event or trigger that causes the change.
Production SAP systems are susceptible to changes and upgrades from the following events:
■ Addition of SAP modules and/or SAP bolt-ons.
■ Application of kernel upgrades or ABAP hot packs.
■ SAP upgrades affecting custom configuration, out-of-box functionality, and report, interface, conversion, and enhancement (RICE) objects.
■ End user requests enhancements.
■ End user reports problems to the SAP help desk or production support team.
■ Gap analysis reveals needed functionality for a future release.
■ Scope from one release is deferred to a future release.
■ New division/unit within the same company requests SAP.
EXHIBIT 14.1 Categories of SAP System Changes
[Diagram summary:
■ Planned changes, customer driven (optional): end user calls help desk; patches; gap analysis; deferred scope.
■ Ad-hoc changes, vendor driven (mandatory/optional): hot packs; new industry solution; prior version discontinued; OSS notes.
■ Emergency changes, customer driven (mandatory): production system is down; design violates company policy; company merger; system design impacts bottom line.]
■ Exceptions and waivers from prior SAP releases roll over to the production team.
■ Application of OSS notes and patches.
■ A company with an existing SAP environment buys another company that needs to have SAP implemented.
■ Support for older versions is discontinued, which causes the project to upgrade.
■ Hardware, database, or network upgrades.
It is inevitable that even the most static, generic, or out-of-the-box SAP production environment will undergo at least one of the aforementioned events during its lifetime. Exhibit 14.2 highlights a typical SAP production change (assuming the Change Control Board [CCB] has accepted the change) whereby an end user reports a problem to the help desk, the SAP production team resolves the issue, and the test team executes automated test scripts to verify the resolution.
Production changes and upgrades vary in degree of complexity. Some changes are as simple as adding a new value to a drop-down list. In contrast, other changes affect system functionality across multiple SAP modules, which can cause far-reaching consequences for the company's bottom line. From a testing perspective, it is the latter production changes that can consume the most time and resources for projects when testing is conducted manually. When production changes that cause cascading effects are not thoroughly tested, the business is susceptible to a higher risk of failure.
Fortunately, the risks that system changes pose to the production environment for most SAP projects can be mitigated with robust regression and performance testing that is supplemented with automated test tools. The first lines of defense against expected production changes are preparation and planning, followed by system development, implementation, and testing.
CHALLENGES TO PRODUCTION SUPPORT TESTING
Some of the main challenges to production testing include:

■ Complexity and frequency of system changes
■ Having dedicated resources to test system changes
EXHIBIT 14.2 Testing a Hypothetical SAP Production Change

[Flowchart summary: The end user calls the help desk to report problems in production. The production team reviews the ticket and assigns it to a team lead, applies a fix, tests in Dev, resolves issues, promotes the fix into the QA environment, retests in QA, and reports findings; once the issue is resolved, the production team communicates the impact to the test team. The test team automates the current "fixed" process, selects scripts to run from the regression library, and executes "sunny-day" scenarios; when the implemented "fix" didn't break anything, the change is ready for "prod."]
■ Heavy reliance on manual testing
■ Rigorous testing causing schedules to slip
■ Labor cost of production testing
The sheer magnitude, frequency, and volume of SAP changes in a production environment can create a series of logistical issues for the production team. The production, integration, and test teams need to address the issues of who will do the testing and how system changes will be coordinated, documented, analyzed, reviewed, and tested. System changes vary in degree of complexity, but even a minor change to a single SAP transaction can have a rippling effect on the system's integrated processes.
Testing in a production environment includes identifying the affected processes that need to be tested as a result of the system change. For example, the application of an enhancement to the SAP transaction CJ20N can have cascading effects on an integrated SAP process containing touch points within an end-to-end process such as order-to-close. A process such as order-to-close can contain multiple strung-together SAP transactions, data values, and process variations that can take several individuals hours or days to test, and to document results, once all test scenarios arising from the system change are identified. Software vendor Compuware offers the SAP Assessor Tool to identify the impact of a system change within an SAP environment. Furthermore, SAP transaction code SE51 provides a "where-used" function to identify where a program is used within SAP transactions.
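The impact-identification step can be pictured as a "where-used" lookup over a map from transactions to the end-to-end processes that touch them. The process and transaction assignments below are illustrative, not an extract from any tool:

```python
# A toy "where-used" lookup: map each end-to-end process to the SAP
# transactions it strings together, then find the processes affected by
# a change to one transaction. The assignments are illustrative.
PROCESS_MAP = {
    "order-to-close":  ["VA01", "CJ20N", "VF01"],
    "purchase-to-pay": ["ME21N", "MIGO", "MIRO"],
    "hire-to-retire":  ["PA40", "PA30"],
}

def affected_processes(changed_tcode: str):
    """Return every process whose transaction chain contains the t-code."""
    return sorted(p for p, tcodes in PROCESS_MAP.items()
                  if changed_tcode in tcodes)

print(affected_processes("CJ20N"))  # ['order-to-close']
```

Any process returned by the lookup becomes a candidate for regression testing before the change is transported.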
Projects without automated test tools or a robust automation strategy will need to rely heavily on manual testing for regression testing of the system, along with manual documentation or recording of the test results. Manual testing is not easily repeatable, takes functional resources away from their primary job responsibilities, is tedious to document, and is time consuming. Manual testing for complex end-to-end processes such as purchase-to-pay, hire-to-retire, and forecast-to-order requires the coordination of multiple individuals, where each individual may be familiar with only a portion of the end-to-end process. Testing of end-to-end processes can be time consuming and thus cause schedule slippages. Given these constraints and limitations of manual testing, many projects suspend or delay their plans to apply a system change.
Manual testing also proves to be expensive when the same processes need to be frequently retested by multiple individuals.
Test automation allows projects to easily retest the same processes with multiple sets of data while verifying system attributes and configuration settings. Automation can also run unattended, or without human intervention, which frees up functional resources. Furthermore, test tools are capable of producing test results and test logs with audit trails that facilitate information technology (IT) audits. Depending on the industry where SAP is implemented, regulations and company policies may dictate that screenshots be produced to verify that system changes were implemented correctly, and test tools facilitate the process of capturing and storing screenshots.
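An audit-friendly test log of the kind described can be sketched as a list of timestamped step records, each tied to a tester, an outcome, and an optional screenshot path. The record shape is an assumption for illustration, not the log format of any specific tool:

```python
import json
from datetime import datetime, timezone

# Sketch of an audit-trail test log: each executed step is appended with a
# UTC timestamp, tester id, outcome, and optional screenshot path, then
# serialized so it can be archived for IT audits. The shape is illustrative.
def log_step(log, tester, step, outcome, screenshot=None):
    log.append({
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "tester": tester,
        "step": step,
        "outcome": outcome,
        "screenshot": screenshot,   # path to a captured image, if required
    })

log = []
log_step(log, "auto-runner", "VA01 create sales order", "pass",
         screenshot="va01_confirm.png")
archived = json.dumps(log, indent=2)   # what would be stored for the audit
print(archived[:40])
```

Because every entry carries its own timestamp and tester, the serialized log doubles as the audit trail regulators may ask for.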
AUTOMATION TESTING THROUGH SUNNY-DAY SCENARIOS
Although it is possible to maintain and support SAP in the production environment in the absence of third-party tools, experience shows that manual SAP testing is prone to error, strains the resources of the configuration team, requires much coordination, and is expensive and time consuming for end-to-end processes that have many variations. Manual production testing becomes impractical, requiring an army of manual testers, for processes that are subject to SAP variant configurations with hundreds of possible combinations for creating a finished product. For instance, in the automotive industry it is possible to build or configure a vehicle for purchase over a website with hundreds of combinations, and after a production change all combinations for configuring a car must work correctly. In this example, the only feasible or cost-effective way to test all vehicle combinations would be with automated test tools.
In Appendix B, techniques predicated on Taguchi's design of experiments are described under the software-testing principle of orthogonal arrays. Orthogonal arrays can reduce the number of test cases while still providing maximum system coverage. Orthogonal arrays are suitable for projects with SAP variant configuration.
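A small concrete example shows the reduction. The classic L4(2³) orthogonal array covers every pairwise combination of three two-level factors in 4 runs instead of the 8 exhaustive combinations; the vehicle-option names are hypothetical:

```python
from itertools import combinations

# The L4(2^3) orthogonal array: 4 runs cover every pairwise combination
# of three two-level factors, versus 8 runs for exhaustive testing.
L4 = [(0, 0, 0), (0, 1, 1), (1, 0, 1), (1, 1, 0)]

def covers_all_pairs(array, levels=2):
    """Check that every pair of columns exhibits all level combinations."""
    width = len(array[0])
    for i, j in combinations(range(width), 2):
        seen = {(row[i], row[j]) for row in array}
        if len(seen) != levels * levels:
            return False
    return True

# Map factor levels to hypothetical vehicle-configuration options.
factors = [("engine", ["petrol", "diesel"]),
           ("trim", ["base", "sport"]),
           ("gearbox", ["manual", "auto"])]
for row in L4:
    print([opts[level] for (_, opts), level in zip(factors, row)])
print(covers_all_pairs(L4))  # True
```

With more factors the savings grow quickly, which is why the technique suits variant configuration with hundreds of combinations.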
Test tools, as described in Chapter 5, offer a viable alternative for supporting the production system. Test tools allow for the parallel execution of several end-to-end scenarios spanning multiple SAP transactions that would otherwise prove to be unwieldy or too resource intensive through manual testing efforts. End-to-end scenarios are processes that include the stringing together of several SAP
transactions that can contain verification for system functionality, SAP roles, work flow, interfaces, reports, and performance.
Companies that own test tools can build a library of automated test scripts for sunny-day scenarios that can be executed and repeated on a regular basis to ensure that production transports have not adversely affected previously working system functionality. Sunny-day scenarios are a representation of a business process with error-free and failure-free system behavior. They are primarily designed to verify frequently executed SAP processes within a single module, containing touch points and critical system functionality. Testing of sunny-day scenarios includes testing of process variations, reversals, adjustments, and cancellations. Exhibit 14.3 shows an example of the end-to-end scenario requirement-to-invoice, consisting of five variations that can be automated and scheduled to run at a predefined interval before changes are promoted into the production environment. Rainy-day scenarios contrast with sunny-day scenarios in that rainy-day scenarios take into account possible system exception and error cases.
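The library-as-a-gate idea can be sketched as follows; the script functions stand in for recorded automated test scripts, and the names are illustrative:

```python
# Sketch of a sunny-day regression gate: every script in the library must
# pass before a transport is promoted. The callables are stand-ins for
# recorded automated test scripts; names are illustrative.
def requirement_to_invoice_service_po():
    return True   # placeholder for a recorded end-to-end script

def requirement_to_invoice_stock_po():
    return True

LIBRARY = [requirement_to_invoice_service_po,
           requirement_to_invoice_stock_po]

def promotion_gate(library):
    """Run every script and report whether the transport may be promoted."""
    results = {script.__name__: script() for script in library}
    return all(results.values()), results

ok, results = promotion_gate(LIBRARY)
print(ok)  # True only when every sunny-day variation passes
```

Scheduling this gate at a predefined interval, as the text suggests, turns the script library into a standing guard against regressions.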
Companies that do not have the resources or in-house expertise to build a library of test scripts can acquire from third-party vendors a library of prebuilt SAP test scripts that can be customized to meet internal business processes. Appendix A delves into the concept of commercially available SAP test libraries and SAP accelerators.
A starting point for automating sunny-day scenarios would be the SAP implementation tool Solution Manager, which under the activity for Define Baseline Test Cases offers an accelerator containing a list of predefined test scenarios in addition to the scenarios that can be
EXHIBIT 14.3 End-to-End SAP Scenario Containing Multiple Variations

Business Scenario: Requirement to Invoice. Variations:
■ Requirement to Invoice Service PO
■ Requirement to Invoice Nonstock PO
■ Requirement to Invoice Service OA
■ Requirement to Invoice Material OA
■ Requirement to Invoice Consignment OA
■ Requirement to Invoice Stock PO
launched from the SAP support portal. Alternatively, companies that have built functional teams around production processes such as order-to-cash, purchase-to-pay, work-to-pay, hire-to-retire, and so on can build automated test scripts around their established functional teams. For companies that built functional teams around standalone SAP modules (e.g., Human Resources, Project Systems, Materials Management, etc.) without the Solution Manager implementation tool, sunny-day scenarios can be identified through workshops and seminars with an audience of stakeholders for the entire end-to-end process.
After initial sunny-day scenarios have been identified based on predefined criteria and proven to work successfully when run manually in a test environment, the test team can either develop the necessary modular and reusable test scripts from scratch or tweak the generic test scripts if purchased from a commercial test tool vendor. The processes should be proven to work manually in order to avoid wasted automation efforts on an unstable SAP environment. The project will need to rely on dedicated, experienced in-house or outsourced resources to develop and maintain the necessary automated test scripts. Chapter 5 discusses the suggested skill sets for a test tool automator and rules of thumb (heuristics) for designing test scripts.
Initially, sunny-day test scenarios can be placed in a repository or test management tool and subjected to version control. The first step in building a scenario is to record standalone SAP transaction codes, which are the building blocks of the scenario, and then string together various SAP test transactions (building blocks) to form a much larger SAP process. The standalone SAP test transactions can be recycled or tweaked to form other end-to-end scenarios. For example, SAP t-code VA01, which is for creating a sales order, can be tested as part of the order-to-cash scenario, but through modifications the recording of VA01 can be reused in the order-to-close scenario.
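The building-block approach can be sketched as composing reusable transaction steps into scenarios; VA01 is recorded once and reused, slightly modified, in a second scenario. The step contents are illustrative stand-ins for recorded scripts:

```python
# Building-block reuse: record standalone transactions once, then string
# them into larger scenarios. VA01 appears in both order-to-cash and,
# with a tweak, order-to-close, mirroring the reuse described above.
def va01(order_type="standard"):
    return f"VA01 sales order created ({order_type})"

def vl01n():
    return "VL01N delivery created"

def vf01():
    return "VF01 invoice created"

SCENARIOS = {
    "order-to-cash":  [lambda: va01("standard"), vl01n, vf01],
    "order-to-close": [lambda: va01("project"), vl01n, vf01],
}

def run(scenario):
    """Execute each building block of the scenario in order."""
    return [step() for step in SCENARIOS[scenario]]

print(run("order-to-close")[0])  # VA01 sales order created (project)
```

Because each block is a self-contained callable, new end-to-end scenarios are assembled from the existing recordings rather than re-recorded from scratch.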
After the initial sunny-day scenarios are scripted, follow-on scripting can include the variations for the end-to-end scenarios, assuming that the scenario variations are stable and proven to work manually. New processes to be automated should be documented with test cases and tested manually in both the DEV and TEST environments. The recording of end-to-end scenarios and corresponding variations can lead to the creation of a comprehensive library of test scripts representing business-critical functionality.
The following is an example of a potential application of sunny-day scenarios. The SAP production team implements and manually tests a configuration change. The configuration change will be included as part of a scheduled production transport. When the CCB analyzes the configuration change, a determination is made that the change would impact other SAP processes within end-to-end scenarios. The test team is informed of the configuration change and proceeds to execute automated end-to-end processes to ensure that the change does not adversely affect business-critical processes before the change is promoted into production. The test team ensures that touch points, business rules, functional requirements, segregation of duties, and work flow are not impacted by the new system change. The test team tests end-to-end processes with variations and multiple sets of data, which otherwise would have been too expensive or time consuming with manual testing.
APPROVALS FOR CHANGES
Approving production changes requires a series of handoffs and approvals from various business stakeholders, including the end user. Many projects rely on a series of disconnected spreadsheets, phone calls, and e-mail messages to track the actions performed on a production change and the person signing off on the change. Visibility and transparency are hindered when companies do not track approvals and actions for changes within a single version-controlled repository.
For companies that are subject to IT audits, Sarbanes-Oxley, or industry regulations (such as pharmaceuticals), it is essential to thoroughly document all actions performed on system changes in a single repository with audit trails and reporting capabilities.
Companies such as Mercury Interactive offer tools for handling the necessary handoffs to transport objects into production after system changes. One such tool from Mercury Interactive that can reduce the time needed to transport SAP objects is Mercury Deployment Management Extension™ for SAP® Solutions.
Expected stakeholder teams for approving a production change include the test, integration, change management, development, Basis, and functional teams, and end users (client). The test team verifies the
functionality of the implemented change and ensures that the change does not impact existing functionality through the execution of sunny-day scenarios. The test team can also document the test results and capture screenshots from the system to ensure that the change was implemented, designed, and configured properly. The change management team may update the training materials, business process procedures (BPPs), and release notes depending on the complexity and type of system change. The functional and development teams implement the change, test the change manually, and help draft or refine test cases for the system change. Furthermore, for proposed system changes that have been successfully tested, the configuration and development teams will need to update the corresponding documentation, such as flow process diagrams and technical and functional specifications associated with the changed objects. The Basis team transports the change after the proposed change and its testing have been approved by all necessary stakeholders. The integration team ensures that all approvals have been granted for the system change, and that the change is transported under one of these situations: planned, emergency, or ad hoc.
End users are critical stakeholders in certifying the test results since the system enhancement or change is implemented as a means of helping them accomplish their everyday tasks. End users should verify that changes originating from help desk tickets they reported are in fact resolved successfully. Furthermore, end users may need training for system changes that alter the layout and appearance of screens, business logic, roles, integration touch points, and work flow. Within some organizations, only the end user is permitted to close tickets reported through the production help desk.
TYPES OF PRODUCTION TESTS
Production changes require much regression testing for impact. However, which aspects of regression testing should be considered is often misunderstood.
For instance, custom objects and embedded security can be adversely impacted by the implementation of a hot pack,
OSS notes, or a configuration change. Likewise, a system upgrade, new interfaces, new batch jobs, or the addition of a new module can impact system performance and cause unnecessary bottlenecks and degradation points that can render the system inoperable.
Another misunderstood concept in production testing is that projects will test system changes only at the SAP GUI level and overlook system behavior at the back end. For instance, if new fields are added to a screen from the SAP bolt-on Supplier Relationship Management (SRM), it may be necessary to test both that the application correctly displays and populates the fields at the GUI level and that the fields are correctly populated and inserted in the system database.
Typically, companies will develop automated sunny-day scenarios that verify the attributes, properties, and characteristics at the GUI level but not that the system is behaving correctly at the back end, which introduces a risk to the business. Test scripts can be enhanced to address this deficiency and include programming logic to verify the database.
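The two-layer check can be sketched as comparing what the GUI shows with what the database stores. Both lookups below are stubs with hypothetical field names; a real script would read the screen field through the test tool and query the underlying table:

```python
# Sketch of a front-end/back-end check: the value displayed at the GUI
# must match the value stored in the database. Both sources are stubbed
# dictionaries; field names are hypothetical.
gui_screen = {"supplier_id": "V-1001"}       # what the GUI displays
database_row = {"supplier_id": "V-1001"}     # what the table stores

def verify_front_and_back(field):
    """Pass only when the field exists and both layers agree."""
    gui_value = gui_screen.get(field)
    db_value = database_row.get(field)
    return gui_value is not None and gui_value == db_value

print(verify_front_and_back("supplier_id"))  # True
```

Embedding this comparison in the automated script is what closes the gap between GUI-only verification and true end-to-end verification.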
The teams conducting production testing should verify, at a minimum, that the system security, performance, functionality, workflow, business logic, and enhancements (user exits) are not compromised when a system change is introduced, from both the front end and the back end. In addition to verifying system functionality, automated test scripts should include log-ons for test users based on their roles when testing the system security. The scripts should also have logic for sending and verifying system notifications.
Depending on the system change, the project may need to develop the same sunny-day scenarios in both the functional testing tool and the load testing tool to ensure that service-level agreements (SLAs) and optimal system performance are maintained when system changes are introduced. The changes and their rippling effects are tested with the functional test tool. Exhibit 14.4 depicts the various types of tests that may be conducted depending on the type of SAP system change that is implemented. After the functionality has been verified, the system performance is tested. The rationale behind this is that the system functionality must be stable before a performance test is attempted.
Systematic and robust regression testing for security, workflow, performance, and functionality at the back end and front end will reduce the risk of system failure as a result of a system change.
EXHIBIT 14.4 Types of Tests Conducted Based on the Type of SAP System Change

[Matrix summary, one column per change type:
■ Patches: functional; security; workflow (optional); front/back ends; performance (optional); development (optional).
■ OSS notes: functional; security; workflow (optional); front/back ends; performance (optional); development (optional).
■ GUI upgrade: functional; security; workflow (optional); front/back ends (optional); performance; development.
■ Hot packs: functional; security; workflow (optional); front/back ends (optional); performance (optional); development (optional).
■ New module: functional; security; workflow (optional); front/back ends; performance.
■ Config changes: functional; security; workflow (optional); front/back ends; performance.
■ Added RICE: functional; security; workflow (optional); front/back ends; performance.
■ Deferred scope: functional; security; workflow (optional); front/back ends; performance.]
SUPPORTING THE TESTING EFFORT
Production testing requires support, assistance, and coordination from multiple entities. For example, the following outputs may result from a system change:
■ Updating of BPPs
■ End-user training
■ Documentation of new test cases
■ Development of automated test scripts and new system settings
■ Development of new RICE objects
■ Updates to functional specifications, flow process diagrams, and requirements
Managing all these outputs requires resources from the configuration, development, change management, test, and integration teams. For companies without dedicated test teams, the resources from the functional and development teams have to play dual roles, which prevents them from focusing on their primary job responsibilities. Without a dedicated test team, maintaining and developing test scripts can become an intractable challenge. Exhibit 14.5 identifies the support team roles anticipated to introduce, test, and sign off a system change.
Having a test team does not mean that the test team can effectively provide testing for production changes in isolation. A test team will need to interface with stakeholders from various teams in order to test the application, retest the application in the event that the change was implemented incorrectly, and document test results.
The interaction between the test team and the configuration team increases when the testing activities (the test automation activities in particular) have been outsourced. In an outsourced agreement, the test team typically focuses on test script automation, where the expertise resides with test tools and not necessarily with the SAP business processes or logic. The outsourced test team may not have the necessary domain expertise or SAP knowledge to test system changes without fully documented test cases from the configuration team. The outsourced team may, however, automate test scripts successfully through well-documented test cases that contain expected results, and with ongoing assistance from the functional SAP expert.
EXHIBIT 14.5 Supporting Roles to Review, Implement, Close, and Approve a System Change

[Role summary by team:
■ Test (in-house/outsource): requirements traceability; participate in CCB; automate processes; execute sunny-day scenarios; develop performance scripts; report results; retest; maintain library of scripts; verify SLAs; manage test tools.
■ Functional: manually test initial change; update/create test cases; fix problems; implement change; help evaluate automation criteria; update specs; update diagrams.
■ Change management: update/create BPPs; create release notes; provide training.
■ Integration: chair CCB; enforce standards; schedule releases; requirements traceability.
■ Basis: transport change; refine/add roles; monitor system for SLAs.
■ Development: fix problems; implement change; manually test initial change; update specs.
■ End users: test change; sign off change.]
Projects with dedicated test teams (whether in-house or outsourced) can also follow consistent testing practices, which include documenting test cases on approved testing templates, reporting test results with screenshots, and supporting test tools. Test teams can also participate in CCB meetings and provide a verification of the system independent of the person who implemented the change.
In Food and Drug Administration (FDA) and other regulated environments, test teams can focus on documenting test results with screenshot printouts for the successful implementation of the system change and subsequently generate reports containing test metrics for the executed test cases.
TECHNIQUES FOR EXECUTING AUTOMATED SCRIPTS
Automated test tools increase testing coverage and reduce the turnaround time needed to introduce a change or a fix into the production system. After an automated test case has been constructed and designed in an automated test tool, the time needed to execute it may be only a fraction of the time needed to execute the test case manually, which expedites the test execution phase for changes introduced into a production environment. Automated test cases also provide greater testing coverage since they can be executed unattended or for multiple variations of a given scenario (i.e., the hire-to-retire scenario). Automated test cases also create automatic test logs and test results after they have been executed, which saves time over manually recording test results.
Automated test cases cannot replace all manual testing for regression testing. New changes or system fixes introduced into a production environment first need to be manually unit- and string-tested in a development environment. The newly introduced change or system fix is further tested manually as part of an end-to-end process or larger scenario to ensure that the system change or fix behaves as expected and conforms to documented requirements. However, before the system change is promoted into the production environment, it is necessary to test that the change does not adversely affect other system functionality. Automated test cases are an effective technique for verifying the potentially affected system functionality. Attempting to
test manually all affected or impacted components may prove infeasible for projects that cannot devote resources to full-blown regression testing and recording of test results.
Automated test cases increase the confidence that vital or business-critical system functionality still works as expected after a proposed system change is introduced into the development and test clients, and prior to its transport into the production environment. Exhibit 14.6 shows the various levels leading to a regression test. The newly proposed system change is first tested manually at the unit, string, and integration levels and is subsequently regression tested, with the impact of the proposed change verified using automated test tools. In the absence of automated test tools, the production team or SAP consultants would have to test all potential impact scenarios manually, which may not be possible given project constraints or availability of resources. From Exhibit 14.6 one can see that automated test cases increase testing coverage and increase the likelihood of verifying the impact of the proposed system changes on various business-critical scenarios.
Production teams should consider the following techniques before executing automated test scripts when evaluating implemented production changes:

■ Sequence for executing test scripts
■ Identifying which test scripts need to be executed
■ Maintaining test script data and data seeding
■ Prioritizing test scripts
Testing in an SAP Production Environment 315
[Exhibit 14.6 diagram: T-codes 1–3 pass through manual unit testing and manual string/integration testing; Scenarios 1–3 are then covered by automated regression testing.]
EXHIBIT 14.6 Hierarchy of Regression Testing for Changes Introduced to the Production Environment
■ Workstations (hardware) to execute test scripts
■ Allocating a dedicated SAP environment for test script playback
■ Assigning tasks for test script execution or running test scripts unattended
■ Announcements (communicating) to the project execution schedule
■ Holding scheduled sessions to report results from executed test scripts
■ Capturing and storing test results (including screenshot printouts)
■ Resolving test script errors
■ Signoffs
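Several of these items, notably sequencing, selection, and prioritization, lend themselves to simple tooling. The sketch below is illustrative only: the script names, dependency model, and priority scheme are invented for the example and not tied to any particular SAP test tool. It orders a set of automated scripts so that dependencies always run first and higher-priority scripts run earlier among those that are ready:

```python
from dataclasses import dataclass, field

# Hypothetical model of an automated regression run: each script declares a
# priority and the scripts it depends on; the runner returns an execution
# order honoring both.
@dataclass
class TestScript:
    name: str
    priority: int = 0                      # higher runs earlier when ready
    depends_on: list = field(default_factory=list)

def execution_order(scripts):
    """Topological order over dependencies, ties broken by priority."""
    by_name = {s.name: s for s in scripts}
    done, order = set(), []
    remaining = set(by_name)
    while remaining:
        ready = [n for n in remaining
                 if all(d in done for d in by_name[n].depends_on)]
        if not ready:
            raise ValueError("circular dependency among: "
                             + ", ".join(sorted(remaining)))
        ready.sort(key=lambda n: -by_name[n].priority)
        for n in ready:
            order.append(n)
            done.add(n)
            remaining.discard(n)
    return order

# Illustrative SAP-flavored script names (invented for the example):
scripts = [
    TestScript("verify_invoice", depends_on=["create_order"]),
    TestScript("create_order", priority=5),
    TestScript("post_goods_issue", priority=3, depends_on=["create_order"]),
]
print(execution_order(scripts))
# ['create_order', 'post_goods_issue', 'verify_invoice']
```

Commercial test management tools provide comparable scheduling; the point of the sketch is that execution sequence and priority should be explicit data rather than tribal knowledge.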
Automated test scripts require maintenance, coordination, and analysis. The promise of libraries of test scripts with unattended (without human intervention) playback is difficult to attain without appropriate support and automation standards.
Ideally, entire libraries of regression test scripts can verify system functionality, response times, and SLAs with little or no intervention from human beings. However, this goal is hampered when the following occurs:
■ Test scripts are not scheduled to execute in the right sequence.
■ Test data conflicts exist.
■ There is confusion over roles and responsibilities.
■ Test results are not captured or saved.
■ Signoffs and approvals are ignored.
■ Test scripts are not selected to verify cascading effects from the implemented change.
When the CCB meets to review future production changes, a decision is made whether the future change should be adopted, rejected, or put on hold. According to Karl Wiegers, author of books on software requirements, evaluation of the considerations below can help address the impact of a system change:1
■ Identify the other system components you’ll likely have to change. These might include other requirements, design descriptions, code, tests, user publications, help screens, system documentation, project plans, shared libraries, hardware, and even other subsystems or applications.

1Karl E. Wiegers, “Requirements When the Field Isn’t Green,” STQE, May/June 2001.
■ Judge whether the change is technically feasible and can be accomplished at acceptable cost and risk. Will it conflict with other functions or overtax system resources such as processor capacity, memory, or communications bandwidth?
■ Evaluate the possible impact on the system’s performance, response time, efficiency, reliability, usability, integrity, and other critical quality attributes.
■ Estimate the amount of work effort involved.
Furthermore, before a production change is accepted it should be evaluated for cascading system effects and priority. The test team, in collaboration with the configuration team, analyzes the impact of the change on the system and determines which automated processes need to be executed in a predefined sequence in order to verify existing system functionality. The test team assigns the tasks associated with executing and collecting test results to individuals with sufficient technical expertise in the automated test tools.
The assigned test team members verifying the system change should ensure that the test scripts play back successfully, that they have valid and sufficient test data, and that other system users do not interfere with the execution of the test scripts.
After test scripts are executed, some of the standards that can help facilitate compliance with IT audits include capturing test logs, test results, and screenshot printouts. The test results are subject to peer review and signoffs, which serve as part of the criteria and approval process for transporting the system change into production.
With the aforementioned guidelines and standards, the likelihood of successful script playback and repeatability increases, which helps meet the challenge of timely promotion of SAP objects in an SAP environment.
CHAPTER 15*
Outsourcing the SAP Testing Effort
OUTSOURCING DEFINED
Outsourcing is a term that describes the practice of seeking resources outside of an organization to provide a service. The goal of outsourcing is usually to save money and/or to leverage a service provider that can do the job more efficiently or effectively than the internal staff. A common example of outsourcing in the information technology (IT) world is application development. However, complete business functions such as human resources, customer service, software testing, software development, and call centers may also be outsourced to another party.
As the practice of contracting service providers outside of North America has become prevalent, many people confuse the terms outsourcing and offshoring. In truth, offshoring, or contracting with a service provider overseas, is but one of various means to contract outsourced services.
WHY OUTSOURCE SAP TESTING?
Dynamic organizations encounter many demands and shifting priorities of their internal teams. The need for outsourcing of any type can change as business circumstances change. It may not be a resource gap or a deficiency in the internal team that leads to outsourcing. Rather, in a mature organization, it is likely a business decision, driven by business value, that leads to an outsourcing solution.
*This chapter was authored by Lorrie Collins, National Solutions Director for Spherion Corporation.
Limited Resources
The project team that drives an enterprise initiative like an SAP implementation typically begins to morph into an enterprise of its own, consuming all subject matter experts (SMEs) from the business and IT communities for this mission-critical job. Often, this leaves an equally critical skeleton crew to maintain the day-to-day business. The internal SME assets are assigned to the highest-value activities to drive present and future success. Precious few resources remain to conduct testing.
Cost Savings
Most third-party system integrators (SIs) include testing in the scope of the implementation from a process perspective, as well as a resource perspective. This approach violates several quality assurance (QA) tenets if not handled appropriately (see Independent Testing below) but is quite profitable business for the SIs. A savvy IT executive will recognize that outsourcing the testing to another party can shave significant dollars from the budget.
Reduction of Risk through Independent Testing
A test is “independent” when a person who has not been involved in the development or implementation of the software conducts it. This independence enables the tester to be more objective and readily carry out the fundamental goal of any testing activity: finding defects. Independent testing also provides the following benefits:
■ An “egoless” approach. People who have invested much time and effort in the build process are not the best people to test it. Independent testers do not have a vested interest in testing outcomes as do the developers and are, therefore, not as biased.
■ Detection of more diverse types of errors. Independent testers are unlikely to test the software the same way as the build team, providing a greater likelihood of finding errors that the build team missed.
■ More controlled and disciplined management of the testing process. Formally trained and experienced independent testers establish a formal relationship between the build team and the test team.
■ A fresh perspective on the requirements. A “second set of eyes” may reveal if the build team misunderstood or misinterpreted the requirements. Two groups will not likely have the same misconceptions.
Lack of “Know-How”
Many organizations know or suspect that Quality Assurance process maturity is an area of need. “Knowing that you don’t know” is a positive step in the direction of reducing the risk that a mission-critical implementation goes awry. Leveraging a knowledgeable, skilled, and experienced testing partner is a form of insurance to protect the sizeable investment of the new system.
Shortage of Physical Space
The consumption of all resources by an SAP project may extend to that of physical space. It is not atypical for contractors supporting the project to fill every cubicle, office, conference room, and hallway. Finding the space for a testing team to plan, collaborate, and execute may be a logistical obstacle that can be addressed through outsourcing.
OUTSOURCING FACTORS
Options for outsourcing the testing effort are numerous, each with its own set of benefits and drawbacks. Layered on top of the complexities of the overall system implementation, the successful outsourcing decision considers the following factors:
■ Cost. Focus not only on the hard dollars, but also on the return on investment. How quickly can success be realized within each option? How does each option achieve reusability for continued value?
■ Risk. Consider how the sourcing strategy could positively or negatively impact the critical success factors of the project, as well as the critical success factors of the new system’s ability to support the business.
■ Organizational readiness. How will the sourcing option impact the organization’s capacity to maximize the value of the testing service? Organizational readiness is a compilation of work ethics, relationships, management style, leadership, process maturity, and old baggage (successes and failures of the past). Will outside resources be accepted or rejected?
■ Team dynamics. How will individual stakeholders perceive the various options as a personal win or loss situation? How do roles and responsibilities, reporting relationships, promotions, and hiring and firing factor into the sourcing decision?
■ Technical factors. How do technical architecture, technical standards, industry knowledge, and regulatory requirements knowledge factor into the sourcing decision?
■ Testing capabilities. The decision maker should weigh the skills, knowledge, experience, and track record of the sourcing vendors. Bottom line: Can the job be done successfully?
■ Business value. In summary, consider how the testing sourcing strategy will benefit or detract from business goals and project objectives. What constraints (budget, time, scope, and resources) must be balanced with potential benefits (return on investment [ROI], time-to-market, and quality)? In other words, “What makes the most sense for the current environment conditions?” and “Where do I get the biggest bang for my buck?”
OPTIONS FOR OUTSOURCING TESTING
It is imperative that the organization aligns the outsourcing decision with the business value and impact of the factors outlined above. Once this is accomplished, the risk of making the wrong sourcing decision can be greatly reduced. The following are some options for outsourcing testing.
Deliverables-Based Project
In this option, the sourcing party (supplier) agrees to solve a business problem (in this case, to complete testing of the system) in exchange for a fee. This is typically known as a “solution.” The buyer is exchanging a fee for the promise of a predefined outcome. The supplier takes on significant accountability and ownership of the risk in this arrangement in order to meet the buyer’s expectations. In order to meet these expectations, the supplier drives the approach, selects the team resources, manages and directs the resources on a day-to-day basis, and maintains the test environments as required. The general approach, deliverables definition, processes, communication, logistics, and other details are agreed to in advance by the buyer and supplier.
As this arrangement is defined as a project, there is a definitive start and end, making this solution appropriate for an initial implementation or major release. This option is also viable for a buyer who is expecting “expert” services where the organization is lacking, or where the buyer wishes to shift ownership, management, and direction to a third party in order to free the internal staff to focus on other objectives.
Managed Service
This option has the same characteristics as a deliverables-based project, but is delivered on a time-based arrangement, generally one to three years. The two approaches differ in that a deliverables-based project spans the life cycle of a development or maintenance cycle, while the managed service spans a calendar period. The effectiveness of a managed service is usually measured through service-level agreements (SLAs). Since testing is dependent, to a great degree, on the application itself, and the progress of the build (development and configuration) team, SLAs are sometimes difficult to measure. Ample thought should be given to the approach and effort for gathering data points and measuring SLAs against expectations. Many organizations zealously set too many expectations or set expectations that cannot be easily isolated or measured. Organizations should target a maximum of two to three SLAs for a managed service.
Some example SLAs include:
■ On-time test management reporting
■ Test case development request response
■ Customer satisfaction (measured by survey)
A managed service is ideally aligned with ongoing maintenance of the system. All things being equal, the buyer should look for the supplier to increase efficiencies over time, potentially reducing the cost of the service. As in the deliverables-based project, the managed service places ownership, management, and control with the supplier.
Staff Augmentation
In this approach, the supplier provides a skilled and experienced resource who matches the buyer’s requirements. The buyer provides day-to-day direction for the resource and owns the testing approach, as well as the outcome of the testing process. Here, the buyer retains control of the resources. The buyer gains a resource that does not require training in the technology, but will need to learn corporate processes and adapt to the organization’s culture. Careful consideration must be given to the trade-off between these two knowledge areas.
Managed Staffing
This arrangement is a multistaff augmentation approach with the benefit of administrative supervision. One or more of the staff augmentation team members are given supervisor responsibility. The staffing supervisor offloads administrative duties from the client and pushes down the directives of the client to the team. As in managed services, this type of agreement is time based, often spanning one to three years with possible contract renewal.
A managed staffing arrangement is ideal for an organization that is in need of skilled resources, but wants to retain direction and control of the testing function. There aren’t any SLAs involved in this approach since the process and outcome are client driven.
ONSITE VERSUS OFFSITE/OFFSHORE
The outsourcing approaches listed above can be delivered in the following ways:
■ Onsite in the buyer’s environment
■ Offsite at a supplier’s test lab
■ Offshore in a supplier’s environment
The key to achieving and maintaining business value in offsite/offshoring lies in establishing the management structures enabling all parties to work together effectively. Gartner research shows that effective, integrated relationships are a key factor in delivering long-term success. Research also shows that:
■ Good services integration increases flexibility and improves delivery.
■ Poorly integrated relationships are expensive to manage and, in most cases, fail to deliver what the business needs.
Many organizations mistakenly believe that services will operate the same offshore as they would onshore, but at a much lower cost. In truth, the risk of an offsite or offshore testing engagement rises significantly for a multitude of reasons.
Many companies also assume that using an offshore vendor that has been assessed at Capability Maturity Model (CMM) Level 5 will allow the business to reach goals of cost savings while maintaining and even improving the quality of their IT products. After all, if the offshore vendor is CMM Level 5, they must do everything the right way. Why, then, have so many offshore initiatives failed to achieve the anticipated quality goals and cost savings?
Offshore services must be delivered with innovative practices that bridge the chasm between a buyer who is probably not operating at CMM Level 5 and the offshore vendor that can operate at CMM Level 5. Without innovative processes driven by an Onsite Integrated Relationship team, the two parties are speaking two widely divergent languages and cannot be successful.
The Integrated Relationship team is a critical component that bridges the local and remote team members. This team, which sits
onsite, focuses on people, processes, and skill sets. Members of the onsite team must have the right balance of competencies:
■ Behavioral. Personal attributes and characteristics: “know why”
■ Business. Business knowledge and awareness: “know what”
■ Technical. Technology skills: “know how”
This onsite team, sourced from the supplier, is a critical element in the structure of any offsite or offshore testing arrangement and greatly enhances the chance for success.
The approach to effectively integrate offsite or offshore resources into a cohesive project team involves several basic repeatable principles: accountability, verification, communication, repeatability, and continuous improvement.
■ Accountability. Roles and responsibilities must be clearly defined if a project combining near and offshore resources is going to be delivered on time and within budget. Each resource must clearly understand their duties and how their respective actions influence overall project success. A project liaison role is essential to help facilitate this understanding by aggressively communicating and auditing all quality gates established by the project team.
■ Verification. As is the case with any project, authenticating completed tasks is a critical project success factor. By verifying the accuracy and comprehensiveness of all completed tasks, the team is able to identify problem and high-risk areas early in the project life cycle and improve the odds of sustained success.
■ Communication. When utilizing an offshore provider, communication and cultural issues will likely surface. There is an abundance of research available detailing failed projects caused by poor communication. A communication strategy and plan must be developed and the work effort managed against the plan to assure that all team members receive information in a timely manner and understand content.
■ Repeatability. This is an ageless key to long-lasting success. Processes governing all offshore work must be universally understood and practiced consistently. By implementing repeatable
processes, cycle time is improved, reducing costs, improving quality, and enhancing communication.
■ Continuous improvement. Offshore service delivery is an ever-changing effort. Processes, standards, and policies governing one client may not work for another. The service provider must continually evaluate process performance, identify improvements, and integrate these improvements back into the process.
INCREMENTAL TRANSITION: THE BOT MODEL
Mature offsite/offshore vendors utilize a build–operate–transfer (BOT) model to incrementally transition the testing service from the client’s location to the offsite/offshore test lab. This is a crucial part of a successful testing solution and should not be cut short. The BOT model will include activities such as infrastructure planning and implementation, process development, resource training, piloting, monitoring, and reporting.
In summary, best practices in offsite/offshore testing include:
■ Effective process integration
■ Structured communications
■ Process monitoring
■ Evaluation and feedback
■ Incremental transition
■ Continuous improvement
STRUCTURING THE TERMS OF THE TESTING SERVICE
Service providers typically gravitate to a standard and consistent way to arrive at terms for the testing service that will allow them to deliver successfully, achieve customer satisfaction and referenceability, minimize risk, and make a reasonable profit. The level of complexity of the terms is most often directly related to the amount of risk and ownership assigned to the service provider.
Staff Augmentation
As discussed earlier, staff augmentation bears the least amount of risk for the service provider and assigns sole ownership of the service outcome to the buyer. In this scenario, the supplier assists the buyer in identifying the knowledge, experience, skill sets, and traits of the resource and identifies a candidate for the client. This process offloads the enormous task of searching, qualifying, and screening candidates. Often, the supplier has many ways through which to source candidates that the buyer does not have at his/her disposal. Terms are typically limited to pay rate and duration of the assignment.
Solutions
In other arrangements, the supplier is accountable for delivering an outcome for the engagement. This is referred to as a “solution.” Deliverables-based projects, managed services, and managed staffing fall into the “solution” category. Since the supplier is held accountable to deliver an outcome, much analysis and planning is required. Steps to arrive at terms include:
Step 1 Confirming the scope.
Step 2 Fully understanding and validating the client’s requirements.
Step 3 Architecting a solution that achieves the desired outcome, including definition of deliverables (test strategies and plans, test cases and scripts, test management reports, etc.).
Step 4 Aligning testing tasks to the overall project schedule.
Step 5 Sizing the team accordingly.
Once approach and sizing are achieved, the supplier can provide a fixed cost or estimate. Pricing is typically structured as an hourly rate. Pricing by deliverables (i.e., test strategy, test scripts, test results reporting) is an option, but is rarely used, due to complexities in estimating. Testing has such a great dependency on many aspects of the software development life cycle that it does not lend itself well to pricing by deliverable.
Fixed Cost versus Estimate
Service providers will generally allow the buyer to select fixed cost or time-and-materials payment terms. Some consideration of each is provided in Exhibit 15.1.
Fixed or Variable Resource Pool
As in traditional project planning, the size of the resource pool is derived from the project duration, work effort, and other factors. If
EXHIBIT 15.1 Fixed Cost versus Time-and-Materials
Definition
  Fixed cost: Costs are fully estimated in detail during the contract stage, and the buyer is given a fixed or flat fee that can be paid on a variety of payment schedules.
  Time and materials: Costs are estimated in advance and communicated to the client. The client pays for the services rendered, as they are received.

Relative cost
  Fixed cost: Typically more expensive. The supplier may add a premium or contingency into the price to accommodate unexpected delays or problems. If the vendor brings the work in under schedule, the client still pays the predetermined price.
  Time and materials: Typically less expensive. No contingency is needed since each hour worked is billed.

Relative predictability of cost
  Fixed cost: Higher. Since a fixed price is provided, budgeting can be more accurate.
  Time and materials: Lower. Costs could come in higher or lower than budgeted.

Relative flexibility
  Fixed cost: Less flexible. Scope and statement of work are followed rigidly so that the supplier can deliver under the cost constraint. This can be frustrating for the buyer, who may not have fully planned for all conditions that might surface. Change orders can be implemented to address scope and SOW problems.
  Time and materials: More flexible. The vendor should be managing the work closely (as if the estimate were a fixed cost) and advising the buyer whenever the budget is exceeded.
the number of resources can be predicted with reasonable assurance, a fixed number of resources is likely to be the best approach to staffing the testing team.
Sometimes, the resources needed may vary, driven by unanticipated events such as a sudden decline in system performance or urgent business process changes. Conversely, a variable resource pool may be needed to address predictable peaks in work, such as planned system releases or upcoming projects. A variable resource pool may be the best approach to meet predictable or unpredictable demands that require different levels of sourcing.
The On-Demand Resource Pool (Unpredictable)
The supplier may accommodate this need by establishing a core team to handle the steady, predictable flow of testing needs, and complement this team with a set of resources on reserve. The resources on reserve are priced at a discount rate when not actively working. This rate is essentially a retainer fee. When demand calls for the need to utilize the reserve resources, a higher rate is invoked for the period of time utilized. This arrangement provides consistency in the resource assignment (reducing training time and startup), keeps the buyer’s costs lower, and affords great flexibility.
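As a rough illustration of this pricing structure (all rates and hour figures below are invented for the example, not quoted from any supplier's terms), the blended monthly cost can be modeled as a core-team charge, plus a retainer on idle reserves, plus a surge charge for reserve hours actually worked:

```python
# Illustrative cost model for the on-demand (unpredictable) resource pool;
# every rate and hour figure here is hypothetical.
def monthly_cost(core_staff, reserve_staff, active_reserve_hours,
                 core_rate=90.0, retainer_rate=15.0, active_rate=120.0,
                 hours_per_month=160):
    core = core_staff * core_rate * hours_per_month             # steady core team
    retainer = reserve_staff * retainer_rate * hours_per_month  # idle reserves
    # Reserve hours actually worked bill at the active rate instead of the
    # retainer rate, so add only the difference on top of the retainer.
    surge = active_reserve_hours * (active_rate - retainer_rate)
    return core + retainer + surge

# Quiet month: reserves sit on retainer only.
print(monthly_cost(core_staff=4, reserve_staff=2, active_reserve_hours=0))    # 62400.0
# Release month: 120 reserve hours are called up at the active rate.
print(monthly_cost(core_staff=4, reserve_staff=2, active_reserve_hours=120))  # 75000.0
```

The model makes the trade-off visible: the retainer buys continuity of trained resources, while the surge premium is paid only for hours actually consumed.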
The On-Demand Resource Pool (Predictable)
When peak demand is predictable and planned for, the supplier may invoke an approach to increase staff to address the demand, provided that ample advance notice is provided. The buyer will pay a predictable fee for the on-demand resources, which can assist with budget planning. Depending on factors like the supplier’s engagement portfolio, resources, and timing, the specific testers brought into the project may not have been trained and oriented to the client’s environment, requiring additional startup time.
Expenses
Other costs that should be anticipated in an outsourced testing engagement include:
■ Lab fees. When the testing service is delivered offsite, the buyer should anticipate a test lab fee that covers office space, hardware, software, connectivity, office equipment, and other facility costs.
■ Travel and expense. Resources may travel to deliver services on premises or, in the case of offsite/offshore, travel on occasion between the client’s location and the test location. These costs may be passed through to the buyer or factored into the fees.
Quality Management of Solution Engagements
Solution engagements of all types are, by nature, more complex and should include some level of QA processes to ensure that the vendor is delivering as agreed. Mature service providers will include this service within the scope of the engagement.
LESSONS LEARNED FROM OUTSOURCING SAP TESTING
■ Consider engaging in a testing outsourcing strategy early, prior to contracting with the system integrator. Identify any overlap or conflicts in contractual terms and statements of work. Build in processes that allow each vendor to work effectively without negatively impacting the others. Unresolved issues are certain to delay the project, drive up costs, and require renegotiation of terms.
■ Utilize a structured request for proposals (RFP) process to obtain, evaluate, and compare vendors. Ensure that a vendor conference is included in the process so that vendors have an opportunity to thoroughly understand the testing requirements and desired approach. Requiring a presentation from the top two or three vendors can help ensure that expectations are aligned between all parties.
■ Request examples of past experience from both the supplier company, as well as the lead resources that are relevant for the project. Ask for references and follow up with those contacts.
■ Evaluate expertise, experience, communication, flexibility, cost, and overall business value.
■ In offsite and offshore assignments, do not shortcut the onsite relationship team functions. Follow a BOT process for incremental transition of the testing process.
■ Ensure that your organization is ready to take on and maximize the investment of outsourcing.
■ Allow business value to drive the decisions involved in selecting the outsourcing solution.
APPENDIX A
Advanced Testing Concepts
This book so far has covered numerous testing strategies that can be implemented to allow for a successful SAP implementation and testing effort. This appendix will provide an overview of some of the more advanced testing concepts that can additionally be implemented to enhance the testing efforts. It will cover the Orthogonal Arrays Testing System (OATS), which describes a statistical approach to narrowing down test input data, plus a very effective way to automate (i.e., the keyword-driven automation approach). It will also touch on usability testing, including Section 508. Finally, this appendix will cover a test harness architecture.
ORTHOGONAL ARRAYS1
It is generally not feasible or cost effective to test a system using all the possible variations and combinations of test parameter inputs. SAP implementations that have SAP variant configuration are prime examples of software implementations that have to test multiple combinations for creating a product based on different parameter inputs and data dependencies among the parameters. An example of SAP variant configuration would be an Internet user interested in purchasing an automobile over the Internet, which can cause the user to have hundreds, if not thousands, of combinations to build a car through a manufacturer’s website. Another example of a system requiring multiple combinations and test parameter inputs is a tax management system that required testing and contained a calculation engine that computed the depreciation of fixed assets (e.g., computers, airplanes, and office furniture) based on user-supplied information
1Modified from Elfriede Dustin, “Orthogonally Speaking,” www.stickyminds.com.
about the assets of the company. These types of computations are complex and sensitive to different combinations of a large number of possible input parameters. Some of these parameters are the “placed in service date” of the fixed asset, methods of depreciation, life of the asset, business use percentages, fixed asset costs, and calendar years, to name only a few of many possible parameters. Each parameter could have numerous data values. As a result, there are tens of thousands of potential input variations, each producing different results and making exhaustive testing of the calculation engine’s computations nearly impossible.
There is no efficient or quick way to test any calculation engine source code changes since too many of the variables depend on each other. An approach to derive a suitable set of test cases when it is not feasible to use all the possible combinations and variations of test parameters is the test technique called the Orthogonal Array Testing System (OATS). This technique is very useful for finding a small set of tests (from a large number of possibilities) that exercises key combinations.
THE OATS SOLUTION
OATS is derived from manufacturing techniques developed as part of the industrial engineering discipline. Orthogonal arrays are used as a mathematical tool in the Robust Design methodology described in Madhav Phadke’s Quality Engineering Using Robust Design and other books. The Robust Design methodology and design of experiments, created by Professor Genichi Taguchi, is in use in many modern areas of engineering.
The OATS technique supports the system test effort by enabling test cases to be determined efficiently and uniformly. With this test technique, testers are able to select the combinations of test parameters that will provide maximum coverage from test procedures, while using a minimum number of test cases. The assumption here is that tests that maximize the interactions between parameters will find more faults.
The technique works. In the calculation engine testing, for example, OATS made it possible for the tax management developers to
334 TESTING SAP R/3: A MANAGER’S STEP-BY-STEP GUIDE
change their application’s calculation engine with more confidence by using automatically generated OATS test parameters that were fed into a test harness, which in turn exercised the calculation engine. The engine’s outputs were captured and became the baseline for any future changes to the calculation engine. This test harness has proven to be very valuable, as it has uncovered many calculation differences caused by calculation engine source code changes. Moreover, the OATS procedure has given us an objective measure of testing completeness.
What Is an Orthogonal Array?
This section introduces the idea of orthogonal arrays with an example. Suppose there are three parameters (A, B, and C), each of which has one of three possible values (1, 2, or 3). The effort to test all possible combinations involving the three parameters would require 27 test cases.
Are all twenty-seven of those tests needed? Yes, if there’s a fault that depends on the precise values of all three parameters (a fault, for example, that would occur only for the case A = 1, B = 1, C = 1). But, because of the way programming works, it’s probably more likely that a fault will depend on the values of only two of the parameters. In that case, the fault might occur for each of these three test cases: A = 1, B = 1, C = 1; A = 1, B = 1, C = 2; and A = 1, B = 1, C = 3. Since the value of C in this example seems to be irrelevant to the occurrence of this particular fault, any one of the three tests will suffice.
Given that assumption, the array in Table A.1 shows the nine test cases required to catch all such faults, in the most economical arrangement that will show all possible pairs within all three variables. The array is orthogonal because, for each pair of parameters, all combinations of their values occur once. That is, all possible pairwise combinations between parameters A and B, B and C, and C and A are shown. In terms of pairs, this array has a strength of 2. It doesn’t have a strength of 3 because not all three-way combinations occur; A = 1, B = 2, C = 3, for example, doesn’t appear. But it covers the pairwise possibilities, which is what pairwise testing is concerned with.
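To make the “strength” idea concrete, the property can be checked mechanically. The short sketch below (in Python, purely for illustration; the array values are transcribed from Table A.1) confirms that the nine rows cover every pairwise combination for each pair of columns, while some three-way combinations are absent:

```python
from itertools import combinations, product

# The nine rows of Table A.1: an L9 orthogonal array for three
# parameters A, B, C, each taking a value from 1..3.
L9 = [
    (1, 1, 3), (1, 2, 2), (1, 3, 1),
    (2, 1, 2), (2, 2, 1), (2, 3, 3),
    (3, 1, 1), (3, 2, 3), (3, 3, 2),
]

def has_strength_2(rows, levels=(1, 2, 3)):
    """True if, for every pair of columns, all value pairs occur."""
    n_cols = len(rows[0])
    for c1, c2 in combinations(range(n_cols), 2):
        covered = {(r[c1], r[c2]) for r in rows}
        if covered != set(product(levels, levels)):
            return False
    return True

print(has_strength_2(L9))   # True: all 9 value pairs appear per column pair
print((1, 2, 3) in L9)      # False: this three-way combination is absent
```

Running the check shows why nine rows suffice for pairwise coverage even though they fall well short of the 27 rows needed for exhaustive coverage.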
Advanced Testing Concepts 335
Applying the Technique
Before implementing orthogonal array testing, the test engineer needs to determine the size of the array required for the specific system test effort. The size of the orthogonal array is based upon the maximum number of values for all possible parameters. For demonstration purposes, let’s look at a simplified example of how we might use OATS to test our favorite tech bookstore’s applications.
Table A.2 shows example parameters we believe might interact. They are: Type of Credit Card, Credit Card Number, Credit Card Expiration Date, Product Type Purchased, and Quantity Purchased.
TABLE A.1 Sample Array
       A   B   C
  1    1   1   3
  2    1   2   2
  3    1   3   1
  4    2   1   2
  5    2   2   1
  6    2   3   3
  7    3   1   1
  8    3   2   3
  9    3   3   2
TABLE A.2 Bookstore Purchase Parameters and Values
Type of        Credit Card        Credit Card Expiration    Product Type              Quantity
Credit Card    Number             Date (Years from Today)   Purchased                 Purchased

Amex           Correct            50                        Book                      1
Discover       Incorrect Length   Invalid Year              Video                     0
Visa           Invalid Digits     Today                     Software                  –1
MasterCard                        Yesterday                 Book, Software, Videos    10
                                  Invalid Character         Book, Software            1
Each parameter has its own possible values that need to be tested in combination with the values of other parameters. The possible values pertaining to the Type of Credit Card used by a customer might include American Express, Discover, Visa, and MasterCard. The Credit Card Number entries could be correct or incorrect. All correct numbers are assumed to interact in the same way; that is, if one correct number reveals a fault when combined with a Discover card, all correct numbers will reveal the same fault when combined with a Discover card. However, incorrect numbers are assumed to interact differently, depending on whether they have an incorrect length or some invalid digits.
Once the parameters and the values have been derived, we have to decide how parameters are likely to interact. If only pairwise interactions are likely, the array should have a strength of 2. In this case, it seems reasonable to say that pairwise testing is sufficient (“good enough”) for this type of application testing, so three-way testing doesn’t seem necessary. (Note that with a higher-risk application, one might want to consider selecting an array that allows for three-way or n-way input parameter testing.)
An orthogonal array tool can be used to produce an orthogonal array such as the one in Table A.3. Each resulting row in the orthogonal array specifies one specific test case (without expected results). For example, in row number 0, a test case will be executed using American Express as the credit card, with a credit card number value of 402901517, with an expiration date of 2/13/2001, involving the purchase of one (1) book. [Note: The credit card numbers used here and in the accompanying table are truncated fictional numbers, so as to avoid any similarity to actual accounts.] Collectively, 25 test cases exercise all pairwise combinations. Exhaustive testing would have required 5⁵, or 3,125, test cases.
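The book’s example relies on a commercial orthogonal-array tool. As an illustration only, the kind of result such a tool produces can be approximated with a greedy “all-pairs” heuristic; the sketch below (hypothetical, not the tool the authors used, and not guaranteed to produce a minimal array) repeatedly picks the candidate combination that covers the most still-uncovered value pairs:

```python
from itertools import combinations, product

def pairwise_cases(params):
    """Greedy pairwise ("all-pairs") test case generation.

    params: dict of parameter name -> list of values.
    Returns a list of dicts (one per test case) such that every pair
    of values from any two parameters appears in at least one case.
    """
    names = list(params)
    # All (column-pair, value-pair) combinations still to be covered.
    uncovered = {
        ((i, j), (v1, v2))
        for i, j in combinations(range(len(names)), 2)
        for v1 in params[names[i]]
        for v2 in params[names[j]]
    }
    cases = []
    while uncovered:
        # Brute-force scan of all full combinations for the one that
        # covers the most uncovered pairs (fine for small inputs).
        best, best_gain = None, -1
        for combo in product(*(params[n] for n in names)):
            gain = sum(
                ((i, j), (combo[i], combo[j])) in uncovered
                for i, j in combinations(range(len(names)), 2)
            )
            if gain > best_gain:
                best, best_gain = combo, gain
        for i, j in combinations(range(len(names)), 2):
            uncovered.discard(((i, j), (best[i], best[j])))
        cases.append(dict(zip(names, best)))
    return cases

# Three parameters, three values each: 27 exhaustive cases,
# but pairwise coverage needs far fewer.
cases = pairwise_cases({"A": [1, 2, 3], "B": [1, 2, 3], "C": [1, 2, 3]})
print(len(cases))  # well under the 27 exhaustive combinations
```

Dedicated tools use more sophisticated algorithms, but the principle (cover all pairs with as few full combinations as possible) is the same.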
The test cases contain specific values, rather than markers like “incorrect length” or “invalid year.” To construct this example, use a script that replaces the respective values of the orthogonal array with the actual parameters and values needed for the project.
While OATS is a useful tool, consider using additional testing techniques to derive your data elements when you’re determining actual values. Techniques such as boundary value analysis (e.g., selecting the maximum, minimum, one more than maximum, one less than minimum, or zero data) in combination with OATS can be a powerful
technique. (It is common for errors to congregate around the boundary values.) In this example, one could pick the Quantity 100,000, because that’s the maximum number of any item that can be purchased at one time in this example, then try 99,999 (one less than maximum), 100,001 (one more than maximum), and so on.
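Combining the two techniques can be sketched with a small helper (hypothetical, not from the book’s toolset) that derives the classic boundary picks for a numeric field such as Quantity; these values would then be substituted into the OATS-generated rows:

```python
def boundary_values(minimum, maximum):
    """Classic boundary-value picks for a numeric range:
    min-1, min, min+1, max-1, max, max+1, plus zero."""
    values = {minimum - 1, minimum, minimum + 1,
              maximum - 1, maximum, maximum + 1, 0}
    return sorted(values)

# Quantity field with a stated maximum purchase of 100,000 items:
print(boundary_values(1, 100_000))
# -> [0, 1, 2, 99999, 100000, 100001]
```

Feeding these boundary picks into the Quantity column of the orthogonal array concentrates test cases where faults tend to cluster.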
TABLE A.3 Test Case Definitions
ID   Credit Card   Credit Card Number   Expiration Date   Product                   Quantity

0    Amex          402901517            2/13/2001         Books                     1
1    Amex          123456789            2/15/2001         Software                  0
2    Amex          11111111%            5/15/2001         Videos                    –1
3    Amex          WER11212p            5/28/2050         Books, Software, Videos   10
4    Amex          542212345            3/16/2001         Books, Software           1
5    Discover      402901517            2/15/2001         Videos                    10
6    Discover      123456789            5/15/2001         Books, Software, Videos   1
7    Discover      11111111%            5/28/2050         Books, Software           1
8    Discover      WER11212p            3/10/2001         Book                      0
9    Discover      542212345            2/13/2001         Software                  –1
10   Visa          402901517            5/15/2001         Books, Software           0
11   Visa          123456789            5/28/2050         Books                     –1
12   Visa          11111111%            2/22/2001         Software                  10
13   Visa          WER11212p            2/13/2001         Videos                    1
14   Visa          542212345            2/15/2001         Books, Software, Videos   1
15   MasterCard    402901517            5/28/2050         Software                  1
16   MasterCard    123456789            3/08/2001         Videos                    1
17   MasterCard    11111111%            2/13/2001         Books, Software, Videos   0
18   MasterCard    WER11212p            2/15/2001         Books, Software           –1
19   MasterCard    542212345            5/15/2001         Books                     10
20   Visa          402901517            3/26/2001         Books, Software, Videos   –1
21   Visa          123456789            2/13/2001         Books, Software           10
22   Visa          11111111%            2/15/2001         Books                     1
23   Visa          WER11212p            5/15/2001         Software                  1
24   Visa          542212345            5/28/2050         Videos                    0
This example also illustrates an issue that often arises: unspecified values. Because there are fewer Type of Credit Card and Credit Card Number values than there are Expiration Date values, not all rows in the orthogonal array are required to exercise all the pairwise combinations involving them. For some of the rows, the value of Type of Credit Card or Credit Card Number is left to the discretion of the test engineer. The values can be chosen based on risk, highest usage, or highest problem area. Table A.4 provides an example. American Express might have been chosen because it has been the card the bookstore’s application traditionally has had the most trouble with, or is used most often. A correct Credit Card Number might have been chosen because incorrect number input might not have been an issue in the past.
In some cases combinations of test parameters and values can be invalid, and an invalid test case combination is generated (depending on business logic). For example, with three parameters (A, B, and C), it might be invalid for both A and C to have a value of 1, but the OATS tool would still generate that combination. In that case it is up to the test engineer to make a decision: execute the test case as is (garbage in, garbage out) and determine how the system handles invalid combinations of input, or decide not to use the invalid test case combinations, either to shorten the test case evaluation cycle or because the back-end system doesn’t allow for invalid input combinations. But choose carefully: if you throw out the invalid cases, you might also be throwing out other combinations. For example, the A = 1, C = 1 case might be the only row containing A = 1, B = 3 (a valid combination that won’t be tested if you throw out the row). And if you throw out the invalid cases, you won’t know how the system behaves given these invalid combinations.
In the case study of the calculation engine of the asset management system, input of invalid test combinations was allowed. The
TABLE A.4 Sample Combination
Type of        Credit Card   Credit Card Expiration    Product Type       Quantity
Credit Card    Number        Date (Years from Today)   Purchased          Purchased

Amex           Correct       Invalid Character         Books, Software    1
calculation engine was expected to produce consistent results among the various builds, whether the input was valid or invalid.
Please note that Table A.4 is a simplified excerpt, one that is small and readable, of a sample test case combination. In addition to the parameters illustrated here, the bookstore might also wish to track the number of books that remain in inventory following the purchase, or be able to query the purchase status for a particular customer. Test professionals should review the resulting test cases and add additional test cases based on known risk areas.
In our test program for the asset management calculation engine, we executed a test harness that generated over 17,000 test cases using OATS. A software program was then developed that incorporated these test cases and applied them to the back end of the Web application one by one. (Such a software program might be tailored to create a particular load on the system, or to simply verify baseline functionality.)
KEYWORD-DRIVEN AUTOMATION APPROACH2
A keyword-driven automation approach to testing is similar to the use of a data template in that it makes use of a data input file that consists of keywords. Not only is data input from a file, but so are the associated controls, commands, and expected results. As a result, test script code is separated from data, which minimizes script modification and maintenance effort. When using this approach, it is important to separate the action of determining “what requirements to test” (simple user commands) from the effort of determining “how to test the requirements” (actually implementing the code to perform the command). The functionality of the application selected for testing is documented within a table that includes the step-by-step instructions for each test. See Table A.5, Keyword-Driven Automation.
Once the table has been created, a simple parser can be created that reads the steps from the table, while the keyword determines how to execute each of the steps by calling the specific function and performs error checking based on the error codes returned, among
2Adaptation of the data-driven approach discussed in Elfriede Dustin, 1999, Auto-mated Software Testing, Reading, MA: Addison-Wesley.
other features. The parser extracts information from the table for the purpose of developing one single (large) test procedure. The resulting script is depicted below.
Script That Makes Use of Keyword-Driven Automation
Window SetContext, "VBName=StartScreen;VisualText=XYZ Savings Bank", ""
PushButton Click, "VBName=PrequalifyButton;VisualText=Prequalifying"
Window SetContext, "VBName=frmMain;VisualText=Mortgage Prequalifier", ""
MenuSelect "File->New Customer"
ComboListBox Click, "ObjectIndex=" & TestCustomer.Title, "Text=Mr. "
InputKeys TestCustomer.FirstName & "{TAB}" & TestCustomer.LastName &
    "{TAB}" & TestCustomer.Address & "{TAB}" & TestCustomer.City
InputKeys "{TAB}" & TestCustomer.State & "{TAB}" & TestCustomer.Zip
PushButton Click, "VBName=UpdateButton;VisualText=Update"
.
.
.
'End of recorded code
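The parser that drives such a table is tool-specific, but its dispatch logic can be sketched in a general-purpose language. A minimal, hypothetical version (the handler functions and the zero-means-success error-code convention are illustrative, not taken from any particular tool) reads each table row and invokes the function registered for its keyword:

```python
# Hypothetical parser for a keyword table like Table A.5. Each row
# names a window, a control, an action keyword, and arguments; the
# parser looks the keyword up in a dispatch table, calls the matching
# function, and checks the error code it returns.

def set_context(window, _control, _args):
    print(f"SetContext on {window}")
    return 0  # 0 = success in this sketch

def click(window, control, _args):
    print(f"Click {control} in {window}")
    return 0

def menu_select(window, _control, args):
    print(f"MenuSelect {args} in {window}")
    return 0

KEYWORDS = {"SetContext": set_context, "Click": click,
            "MenuSelect": menu_select}

def run_table(rows):
    """rows: (window, control, action, arguments) tuples."""
    for window, control, action, arguments in rows:
        handler = KEYWORDS.get(action)
        if handler is None:
            raise ValueError(f"Unknown keyword: {action}")
        rc = handler(window, control, arguments)
        if rc != 0:
            raise RuntimeError(f"{action} failed with error code {rc}")

run_table([
    ("StartScreen", None, "SetContext", None),
    ("StartScreen", "PrequalifyButton", "Click", None),
    ("frmMain", None, "SetContext", None),
    ("frmMain", "File", "MenuSelect", "New Customer"),
])
```

New keywords are supported by adding an entry to the dispatch table, which is what keeps the table of test steps independent of the script code.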
The test team could create a GUI Map containing entries for every type of GUI control that the testing would need to address. Controls would include every push button, pull-down menu, drop-down box, and scroll button. Each entry in the GUI Map would contain information on the type of control, the control item’s parent window, and the size and location of the control in the window. Each entry would contain a unique identifier similar in concept to control IDs. The test engineer uses these unique identifiers within test scripts in much the same way that object recognition strings are used.
TABLE A.5 Keyword-Driven Automation
Table Used to Generate an Automated Testing Script

Window (VB Name)   Window (Visual Test)    Control      Action       Arguments

StartScreen        XYZ Savings Bank        —            SetContext   —
PrequalifyButton   Prequalifying           PushButton   Click        —
frmMain            Mortgage Prequalifier   —            SetContext   —
frmMain            File                    —            MenuSelect   New Customer
The GUI Map serves as an index to the various objects within the GUI and the corresponding test scripts that are available to perform tests on the objects. The GUI Map can be implemented in several ways, including the use of constants or global variables. Every GUI object is replaced with a constant or global variable. The GUI Map can also be supported through the use of a data file, such as a spreadsheet. The map information can then be read into a global array. By placing the information into a global array, the map information is available to every test script in the system and can be reused and called repeatedly.
In addition to reading GUI control data from a file, expected result data can also be placed into a file and retrieved. This way, the automated test tool can make an automatic comparison of the actual result produced by the test to the expected result maintained within a file.
When developing this keyword-driven approach, it is important that the test team keep in mind the size of the application, the size of the test budget, and the return on investment that can be expected from applying a data-driven approach. Consider the example of the test engineer named Bill, who demonstrated a keyword table-driven approach at a test tool user group meeting. Bill had developed a significant number of scripts to support a keyword-driven approach, when the application that he was trying to test in an automated fashion was quite simple. The application that he was testing performed simple record add, delete, and update functions. It would have been much more efficient to simply use a data file to enter the various records, which amounted to a test development effort of no more than a half hour. The resulting script, in turn, could be reused as often as necessary. The keyword-driven approach had taken Bill two weeks to develop.
More on Keyword-Driven Testing3
Keyword-driven testing is actually a concept known by many names. It is sometimes called “table-driven testing,” “action-based testing,” and even “data-driven testing” in some contexts or in the user’s manuals of many commercial vendors of automated test tools. In the world of software test automation, two of the most important things to know about keyword-driven testing are that it provides for the separation of certain roles in the testing process, and that it allows us to separate our key test assets from the tools that will execute them.

3Contributed by Carl Nagle.
When talking about the separation of roles we are talking about the ability to retain nonprogrammer testers for the role of Test Designer, while allowing for a separate role of Test Automator, if desired. At the highest level, the Test Designer, a nonprogrammer in test tools (i.e., the equivalent of the business analyst, SAP configuration expert, or subject matter expert), is able to express executable keyword-driven tests in the vocabulary most suited to the application. There is no need to learn a specific tool’s programming language because the keyword-driven tests are not written for any specific tool to execute. In fact, high-level keyword-driven tests such as those shown below are even suitable for manual execution.
Here is a simple example of high-level keyword-driven test instructions:
Keywords Parameters
LoginAsUser “admin” “adminPassword”
AddEmployee “John” “Smith”
VerifyEmpID “12345”
As we can see, the high-level keyword-driven tests are easy to develop and easy to interpret. They can be written using familiar text editors, spreadsheet tools, or table editors. A manual tester or test auditor should have no problem understanding what the test is intended to do.
Some keyword-driven automation frameworks go only this far. The role of the Test Automator (i.e., the person who brings expertise in the automated test tools) is then to create the execution engine that can interpret the above tests and call the appropriate automation tool functions to accomplish each task. In this scenario, an execution engine must be written specifically for the application being tested, in the language of the tool that is going to test it. When it is time to test another application, a new execution engine must be written to support it.
Extended keyword-driven frameworks go a step further and implement execution engines that are not at all tied to the application
being tested. They provide an additional low-level layer of keyword support that allows even the Test Automator to develop test assets that are independent of the tool that will execute them. This means the execution engine need only be created once, and it can be used to test any number of applications. In addition, the execution engine can be written for different automation tools and the tests themselves do not have to change. This effort is always much smaller than rewriting all the automated tests for all the tested applications using other test automation techniques.
As we will see in the next example, the most basic of test automation steps are available as low-level keywords in an extended keyword-driven framework. A Test Designer’s “LoginAsUser” test would be implemented by the Test Automator using the low-level keyword instructions provided by the extended framework:
Keywords Parameters
SetText LoginWindow UserName “admin”
SetText LoginWindow Password “adminPassword”
Click LoginWindow Submit
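The layering described above can be sketched in a few lines. In this hedged illustration (the function names and the in-memory log standing in for the tool-specific engine are assumptions, not part of any real framework), the high-level “LoginAsUser” keyword is expressed entirely in terms of generic low-level keywords:

```python
# Sketch of an extended keyword-driven layer: the Test Automator
# defines the high-level "LoginAsUser" keyword using generic
# low-level keywords (SetText, Click) that an existing execution
# engine already knows how to drive against any application.

LOW_LEVEL_LOG = []  # stands in for the tool-specific engine

def set_text(window, control, value):
    LOW_LEVEL_LOG.append(("SetText", window, control, value))

def click(window, control):
    LOW_LEVEL_LOG.append(("Click", window, control))

def login_as_user(username, password):
    """High-level keyword expressed as low-level keyword calls."""
    set_text("LoginWindow", "UserName", username)
    set_text("LoginWindow", "Password", password)
    click("LoginWindow", "Submit")

login_as_user("admin", "adminPassword")
print(len(LOW_LEVEL_LOG))  # 3 low-level steps were dispatched
```

Because only `set_text` and `click` touch the underlying tool, migrating to a new tool means reimplementing those two primitives, not the high-level tests.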
As should be evident, the role of the Test Automator in this scenario is not to write a new execution engine but, instead, to exploit the extended execution engine that already exists. These low-level assets are still in a simple text format and are not tied to any particular test automation tool. We leverage an execution engine that knows how to interpret these simple commands. If the need to migrate to a new testing tool ever occurs, we need only write a comparable execution engine, and all of the tests for all of our applications are still valid.
Pros and Cons of Keyword-Driven Testing
While this all sounds great, there are some trade-offs involved in going the keyword-driven route:
■ No automated scripting. This is not a record-and-playback scenario. Tests must be planned and developed in the keyword-driven format.
■ Longer to develop tests. Because there is no record option for this type of testing, it does tend to take longer to put the test in place than it would for a simple recorded script.
■ Framework learning curve for new Test Automators. In addition to learning the nuances of the automation testing tool itself, the Test Automator must learn about the additional keyword-driven layer sitting on top of it.
■ Framework from scratch can be cost prohibitive. If the project cannot leverage an existing framework, then one must be created from scratch. This can take weeks to complete, and often this has not been factored into existing project schedules.
Keyword-driven testing, however, does provide tremendous advantages over more traditional automation techniques when used in the proper context. Some of these benefits include:
■ Tests easy to read/enhance. We do not need to be programmers to create or interpret the tests. Thus, manual testers and test auditors can review the tests and actually understand and even execute them.
■ Separates test development from automation tools. Because this is not a record-and-playback technique, test development by nonprogrammers can actually begin long before the application has been delivered for testing. And since the keyword-driven tests are not coded in the programming language of any specific testing tool, the tests can migrate from tool to tool over time whenever necessary. (It is even possible to use multiple automation tools, each handling the part of the test it is most capable of handling.)
■ Testers can migrate across projects more readily. Where keyword-driven testing is used throughout an organization, testers can readily migrate from project to project regardless of the test automation tools that might be deployed in different areas. A tester writing keyword-driven tests executed by Vendor A can just as easily write keyword-driven tests executed by Vendor B, because the keyword-driven tests are not tied to either tool and the editors used to write the tests are the same for all execution engines.
■ Framework is application independent. The automation tool libraries used to execute keyword-driven tests can be made
entirely generic and independent of the tested applications. This is an extraordinary level of code reuse, robustness, and maturity, providing significantly reduced long-term maintenance costs.
For more detailed information on keyword-driven testing, read the whitepaper on Test Automation Frameworks at http://safsdev.sourceforge.net.
USABILITY TESTING4
Usability testing evaluates the human factor, or usability problems. Evaluating for usability helps to measure whether usability goals are met.
One of the early references to a usability engineering methodology was offered by Gould and Lewis (1985). Gould and Lewis describe a very general approach to usability engineering involving three global strategies:
1. Early focus on users and tasks. This strategy involves applying such tasks as user profiling, task analysis, prototyping, and user walkthroughs.
2. Empirical measurement. Here such tasks and techniques as questionnaire administration, laboratory and field usability studies, and usage studies represent some of those available for collecting objective, quantitative performance and satisfaction data.
3. Iterative design. Systems built using a User Interface Management System (UIMS) allow radical changes to the interface (as opposed to the application code itself) to be made quickly and easily in response to empirical data. This makes iterative testing and redesign feasible.
Usability tests5 are performed in order to help verify that the system is easy to use and that the user interface appearance is appealing. Usability tests consider the human element in system operation. The
4Adapted and modified from Elfriede Dustin, 1999, Automated Software Testing,Reading, MA: Addison-Wesley.5Modified from Elfriede Dustin, 2002, “Usability Testing,” in Effective SoftwareTesting, Addison Wesley, 2002.
test engineer needs to evaluate the application from the perspective of the end user.
Test development considerations for usability tests include approaches where the user executes a prototype of the actual application, even though the real functionality has not been built. By running a capture/playback tool in capture mode while the users are executing the prototype, recorded mouse movements and keystrokes can track where the users move and how the users would execute the system. Reading these captured scripts can help the designers understand the usability of the application design.
Inadequate attention to the usability aspects of an application can cause an application to have a poor acceptance rate among end users, based on the perception that it is not easy to use, or doesn’t perform the necessary functions. This can lead to increased technical support calls, and can negatively affect application sales or user acceptance.
Usability testing is a difficult but necessary part of delivering an application that satisfies the needs of its users. The primary goal of usability testing is to verify that the intended user base of the application is able to interact properly with the application, with a positive and convenient experience. This will require an examination of the layout of the application’s interface, including navigation paths, dialog controls, text, and other elements as necessary, such as localization and accessibility testing requirements. In addition, supporting components such as the installation program, documentation, and help system must also be investigated.
In order to properly develop and test an application for good usability, it is necessary to gain an understanding of the target audience of the software and their needs. This information should be prominently featured in the application’s business case and other high-level documents. There are several ways to determine the needs of the target audience from a usability perspective:
■ Hire subject matter experts. Having staff members who are also experts in the domain area is a necessity in the development of a complex application. These staff members will be able to counsel the requirements, development, and testing teams on a continual basis, which can be a great asset to the effort. It is usually necessary to have multiple subject matter experts on hand, since opinions on certain domain-related rules and processes could differ.
■ Focus groups. An excellent way to get end-user input on a proposed user interface is to hold focus group meetings with potential customers to get their feedback on what they would like to see in an interface. Prototypes or screenshots are useful tools in a focus group discussion. It is important to make sure that the members of the focus groups are representative of all of the actual end users of the product, so that adequate coverage is achieved.
■ Surveys. Although not as effective as the above approaches, surveys can yield useful information about how potential customers would use a software product to accomplish their tasks.
■ Similar products. Investigating similar products can provide information on how the problem has been solved by other groups, in other problem domains as well as the same problem domain. Although user interfaces should not be blatantly copied from another product, it is useful to see how other groups or competitors have chosen to approach the user interface of the application.
■ Observation. Monitoring a user’s interaction with an application’s user interface can provide a wealth of information about its usability. This can be accomplished by simply taking notes while the user works with the application, or by videotaping the session for later analysis. This will enable the usability tester to see where the users stumbled with the user interface, and where they found it intuitive.
As with most nonfunctional requirements, early attention to usability issues can produce much better results than attempting to retrofit the application at a later time. Some application designs and architectures may not be suitable for the required user interface, and therefore would be difficult to change later if the application is regarded as poor from a usability perspective. In addition, a large amount of time and effort is expended to craft the application’s user interface, so it is wise to specify the correct interface as early as possible in the process.
An effective tool in the development of a usable application is the user interface prototype. Developing this kind of prototype allows interaction between potential users, requirements personnel, and developers to determine the best approach to the application’s interface. Although this can be done on paper, prototypes are the best approach since they are interactive and give a “preview” of what the application will look like. Prototypes, in conjunction with requirements
documents, can also provide an early basis for developing test procedures. During the prototyping phase, usability changes can be implemented without much impact to the development schedule.
Later in the development cycle, end-user representatives or subject matter experts should participate in the usability tests. If the application is targeted at multiple types of end users, then at least one representative from each group should take part in the tests. Participation can take place at the site of the software development organization, or it could be done using a prerelease version of the software sent to the end user’s site, accompanied by usability evaluation instructions. Each end user will note areas where they don’t find the interface usable or didn’t understand parts of it, and/or provide feedback on how it could be improved. Remember that at this stage in the development life cycle, large-scale changes to the application’s user interface are typically not practical, so only refinements should be targeted here.
A similar approach can be taken for an application that is already in production. Feedback and survey forms are useful tools in determining what usability improvements should be made for the next version of the application. This type of feedback can be extremely valuable, since it is coming from paying customers who have a vested interest in seeing the application improved to meet their needs.
Another aspect of usability is Section 508,6 which refers specifically to Section 508 of the Rehabilitation Act of 1973, as amended by the Workforce Investment Act of 1998 (to learn more about Section 508, please visit www.section508.gov/). SAP implementations in the U.S. federal and Department of Defense sectors are subject to Section 508 compliance. The law requires federal agencies to purchase electronic and information technology that is accessible to employees with disabilities, and, to the extent that those agencies provide information technology to the public, it too shall be accessible by persons with disabilities.
Section 508 was actually included in an amendment to the Rehabilitation Act in 1986, with the requirement that the federal government provide accessible technology to employees and to the public. But the 1986 version provided no guidance for determining
Advanced Testing Concepts 349
6. www.access-board.gov/sec508/standards.htm.
16_AppA_4782 2/5/07 11:30 AM Page 349
350 TESTING SAP R/3: A MANAGER’S STEP-BY-STEP GUIDE
accessibility of information technology, and there were no enforcement procedures. The 1998 amendment addressed both of these issues.
If an application has to be Section 508 (accessibility) compliant, there are numerous tools on the market that allow for Section 508 compliance testing, such as Bobby, described at http://www.jimthatcher.com/testing4.htm.
TEST HARNESS
Some components of a system can be tested only by developing a test harness. For example, consider the tester who is designing tests for a calculation engine that allows for hundreds of thousands of input combinations, as described in the section on Orthogonal Arrays. This type of testing will require a different test design from user interface or black box testing. Since the combinations and variations of inputs to the calculation engine are far too numerous to test through the user interface, due to speed and other issues, it may be necessary to develop a test harness to test the calculation engine directly. The harness will supply a large set of input values and verify the output.
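The idea can be sketched in a few lines of Python. Everything here is a hypothetical stand-in for illustration: `price_engine` plays the role of the calculation engine under test, `price_oracle` is an independent reference computation, and the pricing rule itself is invented. The harness drives the engine directly with a grid of input combinations, bypassing any user interface.

```python
import itertools

def price_engine(qty, unit_cents):
    # Hypothetical calculation engine under test: a 10% discount
    # applies to the portion of an order above 100 units.
    base = qty * unit_cents
    discountable = max(qty - 100, 0) * unit_cents
    return base - discountable // 10

def price_oracle(qty, unit_cents):
    # Independent reference computation: price each unit individually.
    total = 0
    for unit in range(qty):
        line = unit_cents
        if unit >= 100:
            line -= unit_cents // 10
        total += line
    return total

def run_harness():
    # Drive the engine directly with every input combination and
    # collect any mismatches against the oracle.
    quantities = [0, 1, 99, 100, 101, 1000]
    unit_prices = [50, 100, 990]  # cents; multiples of 10 keep the math exact
    failures = []
    for qty, price in itertools.product(quantities, unit_prices):
        expected = price_oracle(qty, price)
        actual = price_engine(qty, price)
        if actual != expected:
            failures.append((qty, price, expected, actual))
    return failures
```

Replacing the two stand-in functions with calls into the real engine's API is the system-specific part of the work; the enumerate-and-verify loop stays the same.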
A test harness is a tool that performs automated testing of a core component of a program or system. It could be developed to allow deeper testing of core components. Usually written in a more robust programming language, such as a standalone Java, C++, or VBA program, a custom-built test harness will typically be faster and more flexible than an automated test tool script, which may be constrained by the test tool's specific environment.
A test harness could also be used to compare a new component against a legacy component or system. Often, two systems do not use the same data storage format and have different user interfaces built on different technologies. Therefore, any automated test tool would need a special mechanism, or require a duplicate automated test script development effort, in order to run identical test cases on both systems and generate identical (or at least comparable) results. In the worst case, duplicate test scripts would have to be developed using two different sets of automated testing tools, if one tool is not compatible with both systems. Instead, a custom-built, automated test harness could be written that encapsulates the differences between the two systems into separate modules, and allows targeted testing to be performed against both systems. Typically, the test harness will interact with each system below the user interface, to achieve optimum performance and stability. An automated test harness could take the baseline of the test results generated by a legacy system and automatically verify the results generated by the new system by comparing the two result sets and outputting any differences. One way to implement this is to use a test harness adapter pattern.
A test harness adapter is a module that "adapts" each system under test to be compatible with the test harness, which executes predefined test cases against systems, through the adapters, and stores the results in a standard format so that results can be automatically compared from one run to the next. For each system to be tested, a specific adapter must be developed that is capable of interacting with the system (directly against its DLLs or COM objects, for example) and executing the test cases against it. Note that to test two systems with a test harness, two different test adapters and two separate invocations of the test harness are required, one for each system. The first invocation would produce a test result, which would be saved and then compared against the test result from the second invocation. Exhibit A.1 depicts a test harness that is capable of executing test cases against a legacy system and a new system.
EXHIBIT A.1 Test Harness Basic Architecture
[Figure: test cases feed the test harness, which drives the legacy system and the new system through their respective test harness adapters and produces a test result.]
Identical test cases can be run against multiple systems using a test harness adapter for each system. The adapter for a legacy system can be used to establish a base set of test results against which the results for the new system can be compared.
The test harness adapter works by taking a set of test cases and executing them in sequence directly against the application logic of each system under test, bypassing the user interface. This allows for maximum throughput of the test cases. Results from each test case are stored in one or more results files, in a format, such as XML, that is the same regardless of the systems under test. Result files can be retained for later comparison to the results files generated in subsequent test runs. To compare the results of the tests, a custom-built results comparison tool knows how to read and evaluate the result files, and output any errors or differences found. It is also possible to format the results so they can be compared with a standard "file diff" tool.
As with any type of tests, test harness test cases may be quite complex, especially if the component tested by the harness is of a mathematical or scientific nature. Since there are sometimes millions of possible combinations of the various parameters involved in calculations, there are also potentially millions of possible test cases. Given time and budget constraints, it is unlikely that all possible test cases will actually be expressed and tested; however, it's likely that many thousands of test cases will be developed and executed using the test harness.
With thousands of different test cases to be created and executed, test case management becomes a significant effort. Detailed below is a general strategy for developing and managing test cases for use with the test harness, which is also applicable to other parts of the testing effort.
■ Creating test cases. Test cases for a test harness are developed in the same fashion as test cases for manual testing, using various test techniques. A test technique is a formalized approach to choosing the test conditions that give a high probability of finding defects. Instead of guessing at which test cases to choose, test techniques help testers derive test conditions in a rigorous and systematic way. A number of books on testing describe testing techniques such as equivalence partitioning, boundary value analysis, cause–effect graphing, and others,7 and a brief overview is provided here:
● Equivalence partitioning identifies the ranges of inputs and initial conditions that are expected to produce the same results. Equivalence relies on the commonality and variances among the different situations in which a system is expected to work.
● Boundary value testing is used mostly for testing input edit logic. Boundary conditions should always be part of your test scenarios, since it has been proven that many defects occur on the boundaries. Boundaries define three sets or classes of data: good, bad, and on the border (in-bound, out-of-bound, and on-bound). Boundary testing uses values that lie in or on the boundary, such as endpoints, maximum/minimum values, or field lengths.
● Cause–effect graphing8 is a technique that provides a concise representation of logical conditions and corresponding actions, represented in a graph with the causes on the left and the effects on the right.
● Orthogonal array testing enables the selection of the combinations of test parameters that provide maximum coverage from testing procedures, using a minimum number of test cases. Test cases using orthogonal array testing can be generated in an automated fashion. (See the section on Orthogonal Array testing.)
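As an illustration of the first two techniques, here is a short sketch (the field ranges and dictionary keys are invented for illustration) that derives equivalence-class and boundary test values for an integer input field accepting values from `lo` to `hi` inclusive:

```python
def boundary_values(lo, hi):
    # Derive on-bound, in-bound, and out-of-bound test values for an
    # integer field that accepts values in the closed range [lo, hi].
    return {
        "on_bound": [lo, hi],
        "in_bound": [lo + 1, (lo + hi) // 2, hi - 1],
        "out_of_bound": [lo - 1, hi + 1],
    }

def partitions(lo, hi):
    # Equivalence classes: one representative value per expected behavior.
    return {"valid": (lo + hi) // 2, "too_low": lo - 1, "too_high": hi + 1}
```

For a quantity field accepting 1 to 999, for example, `boundary_values(1, 999)` yields the endpoints 1 and 999, values just inside them, and the out-of-bound values 0 and 1000; each value then becomes one harness test case.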
■ Establishing a common starting point. All test cases must establish a well-defined starting point that is the same every time the test case is executed. Setting up a template with record types and record fields, and then creating a new set of records using this template before running a series of test cases, can provide this common starting point. When the modular test components are reused, they will be able to hand off the application in the same state they found it, for the next test component to run. Otherwise, the second test component will always fail, since the assumed starting point is incorrect.
7. See Boris Beizer, 1990, Software Testing Techniques, International Thomson Computer Press; also see G. J. Myers, 1979, The Art of Software Testing, New York: John Wiley & Sons.
8. G. J. Myers, 1979, The Art of Software Testing, New York: John Wiley & Sons.
■ Manage test results. Test scripts produce a test result for every transaction set they execute. The test results are generally written to a file. A single test script can write results to as many files as desired, though in most cases a single file should be sufficient. After running a series of test cases, a number of files containing test results are created. Since running any given test case should produce the same results every time it is executed, the test results files can be compared directly via a simple file diff, or by using a custom-developed test results comparison tool. The differences that this comparison produces need to be evaluated, and defects need to be determined, documented, and tracked to closure.
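The result-management step above can be sketched as follows. The file layout and key names are assumptions for illustration (JSON keyed by test case ID rather than the XML mentioned earlier); the point is that a stable, sorted format makes two runs comparable either with a standard file diff or with a small comparison tool.

```python
import json
from pathlib import Path

def write_results(path, results):
    # Store one run's results keyed by test case ID; sorted keys and a
    # fixed layout keep the file diff-friendly from run to run.
    Path(path).write_text(json.dumps(results, sort_keys=True, indent=2))

def diff_results(path_a, path_b):
    # Return (case_id, old_value, new_value) for every differing case,
    # including cases present in only one of the two runs.
    run_a = json.loads(Path(path_a).read_text())
    run_b = json.loads(Path(path_b).read_text())
    case_ids = sorted(set(run_a) | set(run_b))
    return [(cid, run_a.get(cid), run_b.get(cid))
            for cid in case_ids if run_a.get(cid) != run_b.get(cid)]
```

Each tuple the comparison produces is then a candidate defect to evaluate, document, and track to closure.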
A custom-built test harness can provide a level of testing above and beyond that of automated test tool scripts. Although a test harness can be time consuming to create, it will allow deeper coverage of sensitive application areas, and also allow two applications to be compared.
APPENDIX B*
Case Study: Accelerating SAP Testing
SAP testing accelerators are a new trend from software testing vendors to introduce or facilitate test automation efforts. SAP test accelerators are a prebuilt library of previously automated test cases representing SAP test transactions that can be customized or modified to meet a project's specific and unique configuration settings. SAP test accelerators hold the promise of reducing the cycle time to automate SAP end-to-end processes (i.e., hire-to-retire, request-to-pay, etc.) while empowering the SAP project's nontechnical members to assemble and execute automated test cases.
Although SAP testing accelerators ostensibly offer superior benefits over traditional SAP automation efforts, whereby SAP transactions are recorded from scratch, they also have potential drawbacks that are often obscure and can hamper automation progress. Many of the drawbacks of SAP testing accelerators are overcome with what is known as a "next-generation accelerator."
BACKGROUND
Test accelerators refer to prebuilt and generically recorded test cases that can be used to test packaged enterprise business applications. Accelerators typically provide most, if not all, of the elements necessary to test an entire end-to-end business process such as order-to-cash. The original thinking was that if a single application was deployed at many locations, a single out-of-the-box library of prerecorded test cases that could be modified as needed would accelerate the implementation of test automation by providing prebuilt content in a proven framework to the SAP user community.
Initial test accelerators' assets focused on screen logic, screen elements, and test scripts. This made it possible for companies to reconfigure and edit these preexisting assets to reflect the unique configuration settings established at each SAP installation. This further allowed test developers to greatly reduce the effort associated with building a test asset development framework and automated test cases from scratch.
This reusability of test assets was a tremendous benefit to the test script developer. By reusing the fundamental screen elements it was possible to quickly put together many different test scripts in a short period of time.
CHALLENGES
While test accelerators were an improvement over traditional automation efforts of developing automated test cases from scratch, they still suffered from the following four main problems:
1. Limited system validation
2. Increased maintenance
3. High costs
4. Complex data management
Traditional SAP test accelerators do not embed sufficient programming logic for validating business processes or validating processes at the back end of the application.
An effective test acceleration solution must incorporate test asset maintenance in its thinking. When we say maintenance, we are referring to change management and control. Changes are a natural part of any business process, and these changes percolate down to the test assets as well. For a test accelerator to be effective, it must contemplate this reality and provide a solution to easily manage, modify, and evolve with the changing SAP business processes.
The current model most graphical user interface (GUI) test tool providers use for managing data for a test script is a spreadsheet. For each SAP transaction in a test script, they will associate a spreadsheet to input the data and another spreadsheet for validating the results of that transaction. Exhibit B.1 is a diagram of an order-to-cash (OTC)
EXHIBIT B.1 Decomposition of Order-to-Cash Scenario
[Figure: an order-to-cash flow of SAP transactions (Sales Order, Outbound Delivery, Goods Issue, Billing, Incoming Payment) with accompanying validation steps such as Display Sales Order, Display Outbound Delivery, Stock Overview, Display Material Document, Customer Account Balance Display, and Display Accounting Document; a legend distinguishes execution steps from validation steps.]
end-to-end scenario encompassing multiple SAP transactions strung together. The SAP test accelerator will offer a series of automated test cases for each transaction linked together to form a single test script for the complete business process. Each transaction requires a spreadsheet to drive the execution. It is likely that the end-to-end process in Exhibit B.1 for OTC will have over 20 spreadsheets associated with it. Considering that an organization may have 20 different OTC scenarios that must be tested, it is possible to have hundreds of spreadsheets containing the test data for just OTC.
Existing SAP test accelerators have prebuilt libraries that are generic, and therefore any economies of scale are limited. For example, every time a test script writer constructs an automated test case for entering an order through an SAP transaction such as VA01 (for sales order creation), it is largely a unique activity subject to the specific SAP configuration settings under which the process was automated. When one multiplies this effort across all the transactions that are part of a typical SAP end-to-end scenario, it becomes obvious that there is a lot of labor involved in constructing and modifying automated test cases from the SAP test accelerators. Current tool vendors do not want to point this out because they want to sell you their tools and SAP test accelerators. Service vendors do not want to point this out because they would rather maximize their profits from billable hours associated with supporting and maintaining test cases derived from SAP test accelerators.
AN ENHANCED APPROACH
Now that we have identified some of the issues with the current paradigm of SAP test automation and first-generation test accelerators, let us look at how one assembles a better solution through next-generation SAP test accelerators. We will look at new test script creation methods, new methods for managing changes to test assets, new concepts for managing test data, more efficient techniques for performing lights-out testing, and a different cost model that makes test automation generate a respectable return on investment (ROI), all of this done within the context of a new SAP-centric test acceleration paradigm. Furthermore, the new paradigm includes the concept of a
labor cost model that consists of one-time test case automation that is distributed everywhere.
One-time test case automation that is delivered everywhere implies there is an inherent leverage in every test case that is automated. In next-generation test accelerators, this is accomplished through the use of test components. Test components can be thought of as automated test cases that have the functionality to test all of the configuration permutations of the SAP transaction that they test. For example, the same test component for SAP transaction VA01 can be used to test SAP transaction VA01 at various SAP implementations regardless of the SAP configuration settings for transaction VA01. This test component could then be reconfigured to mirror the specific configuration of VA01 at each SAP installation. The labor associated with the construction of the test component is then distributed across the various SAP implementations that have SAP transaction VA01 as part of their functional scope. Test components are capable of testing all the different configuration settings of an SAP transaction.
Each test component corresponds to an SAP transaction code, so a succession of components can be quickly strung together to test the end-to-end business process. To customize this sequence to your specific SAP configuration, the test developer selects from a table the screens used in a given transaction and the fields used for each screen. In other words, they configure the component to match the configuration of the transaction code.
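The forms-based configuration described above might be modeled roughly as follows. This is a hypothetical sketch: the class, the screen and field names, and the step format are all invented for illustration and do not reflect any vendor's actual product.

```python
class TestComponent:
    # A transaction-level test component: a screens/fields table plays
    # the role of the forms-based configuration, so adapting the
    # component to an installation means editing data, not code.
    def __init__(self, tcode, screen_config):
        self.tcode = tcode                  # e.g. "VA01"
        self.screen_config = screen_config  # {screen: [field, ...]}

    def build_steps(self, data):
        # Expand the configuration plus one data record into the
        # concrete input steps a GUI driver would replay.
        steps = []
        for screen, fields in self.screen_config.items():
            for field in fields:
                if field in data:
                    steps.append((self.tcode, screen, field, data[field]))
        return steps
```

A component configured with, say, an "initial" screen holding `order_type` and `sales_org` and an "overview" screen holding `material` and `quantity` would expand one data record into the replayable steps for that installation; reconfiguring for a different installation means changing the table, never authoring a script.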
There are many benefits to an automated test library for SAP that is constructed of transaction-level components. The first is naturally the cost, as the leverage of automate-once and distribute-everywhere is intuitive. Another, equally important benefit is the implied framework inherent in its structure. The configuration and implementation raises the discussion from a technical, GUI test tool level up to a business-process level more consistent with the tenets of SAP. A third benefit is that this implied structure allows for changes to test assets in a much more familiar and comfortable manner. Changes such as field additions or deletions and screen additions or deletions do not need to happen at a code level but through a forms-based selection process, thus eliminating test script authoring entirely from the process. This simple change reduces cycle time dramatically by reducing the
Case Study: Accelerating SAP Testing 359
mental processing time needed in authoring a test script. In fact, no test script authoring is necessary. The time savings, efficiency, and accuracy of configuring these test components versus constructing an automated test case are analogous to your schoolday preference for taking a true/false test versus an essay test. In short, it is just faster, cheaper, and better.
Next-generation accelerator pricing has driven the cost of SAP test automation down to the cost of approximately one person-year of effort to cover the majority of the core SAP critical business processes. Even for the smallest of installations, the gain in efficiency is hard to summarily dismiss as too expensive. Even the most skeptical are wise to take a closer look.
In a next-generation accelerator, test components not only accommodate test execution but perform validation as well. By building validation into the test component, a tester does not have to perform endless screen reads to retrieve the validation values for a field. A test component simplifies the process of field validation because it knows the location of the field within the SAP database and reads its value directly from the SAP table. It is also possible to identify additional validation elements in other transactions, which may be useful for validating (technically this is validation, not verification) the actual results against the expected results. Retrieval of this data can be specified quickly and easily if required, to augment the prebuilt validation associated with each transaction code. Most important, it institutionalizes the validation knowledge of the functional experts in the test component. This fact provides the greatest value both in time savings and domain expertise.
By building validation into each test component, there is a significant time savings when constructing an end-to-end test of a business process. Instead of working at a test script level, the construction is done at a business level by choosing the transaction that needs to be tested, without much effort focused on the validation of the data, since it is built into the test components.
This technique is possible only through the use of an SAP test accelerator, which leverages components to create the test scenarios, and a next-generation test accelerator, as it permits the use of validation that is built into each component. The means by which these components retrieve data from the SAP database is through the use of a validation engine. A validation engine is a software mechanism
that permits direct access to the SAP database to retrieve values. It works in conjunction with a test component library that makes a data retrieval request of the validation engine. As a test is being executed, a test component starts the execution of a test script inside the GUI test tool. That execution goes to a screen transaction displayed in the SAP GUI and inputs the execution data for the transaction. This causes the step-by-step execution of one or more SAP transactions depending on the complexity of the test script. After each transaction is completed, the test script makes a request of the validation engine, asking for the values necessary to validate the results of the transaction just completed. These values are returned to the GUI test tool, compared with the expected results, and a pass or fail value is assigned to that test step. (See Exhibit B.2.)
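The request-and-compare cycle can be sketched with an in-memory SQLite database standing in for the back end. The table and column names are invented for illustration; a real validation engine would read the actual SAP tables rather than this stand-in.

```python
import sqlite3

def make_validation_engine(db):
    # The engine answers data retrieval requests by reading values
    # directly from the application database, not by scraping screens.
    def fetch(table, key_col, key, value_col):
        # Note: the identifiers are trusted here for brevity; a real
        # engine would validate them instead of interpolating strings.
        row = db.execute(
            f"SELECT {value_col} FROM {table} WHERE {key_col} = ?",
            (key,)).fetchone()
        return row[0] if row else None
    return fetch

def validate_step(fetch, table, key_col, key, value_col, expected):
    # Compare the stored value with the expected result and assign
    # a pass/fail verdict to the test step.
    actual = fetch(table, key_col, key, value_col)
    return "PASS" if actual == expected else "FAIL"
```

After each transaction completes, the test step asks the engine for the relevant stored value (for example, the net value of the order it just created) and records the resulting verdict, exactly the cycle the paragraph above describes.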
A validation engine can also increase the efficiency of testing in- and outbound interfaces to SAP. Interface testing can be done through a GUI test tool, but it requires the skills of a test script writer, a Visual Basic programmer, and an advanced business application programming (ABAP) programmer in order to generate the code to access the internals of SAP through a test tool. Using a validation engine simplifies this process greatly by eliminating the need for an ABAP programmer as well as a Visual Basic programmer, and once a test component is in place it can be reused in other end-to-end test scenarios with little technical expertise. Validation engines greatly simplify the
EXHIBIT B.2 Validation of Processes through the GUI and Back End with Validation Engine
[Figure: the GUI test tool's execution engine drives test scripts for core business processes against the system under test through the SAP GUI, while the Effecta™ validation engine reads values directly from the SAP ERP database and its interfaces.]
effort required to build end-to-end test scenarios that cross over multiple platforms.
Through the use of validation engines for next-generation SAP test accelerators, building lights-out test automation is a much simpler proposition and realistically attainable through the framework and structure provided in a next-generation accelerator. By enabling this efficiency, the real ROI begins to appear to test automation naysayers, and the benefits of faster cycle times, deeper testing, and lower costs can be realized.
MAINTENANCE AND FEATURES OF ACCELERATORS
As previously mentioned, with the use of next-generation test accelerators it is possible to think of the automated test case for a particular transaction as a test module or test component. That single component is then used by larger automated test cases for end-to-end processes that consist of multiple SAP transactions, so that maintenance or changes to that individual component are propagated across all automated test cases that use that component. This simplifies the change process and reduces the labor associated with keeping test assets current. First-generation test accelerators solved the challenge of constant change with the use of this component architecture, but in so doing introduced a number of other challenges that were subsequently addressed by next-generation test accelerators.
The challenge with building test scripts from components is the management and coordination of these test components across a group of test developers. As with any development process, without a system for tracking, versioning, and distributing these test assets, the overall system effectiveness is greatly inhibited. Without a centralized tool or method for managing this resource, an organization can get into trouble very quickly.
One method for controlling these assets is to treat them as you would any software asset and use a source code control program to manage the distribution and control of these assets. Next-generation test accelerators are implementing central access to these assets through the authoring environment while still using a source code control program for overall management. This provides easy access, use, and reuse to the test developers, at the same time providing all of
the benefits of an asset that is governed by the rules in a source code control program. The source code control program provides a repository for the test component library as well as a copy of the database that contains the configuration data of the metadata, which are the elements and objects that make up the automated test cases. The overall impact is a disciplined test environment with the ability to retrace past tests, which assures consistent testing and manages the ongoing evolution of your test assets in a controlled environment.
With the management and control of these test assets defined, let us look at how changes are made. When working with a next-generation test accelerator, most process changes are taken in stride, as they no longer require intensive coding changes. Working with metadata through a forms selection process simplifies the testing process greatly. However, there are times when the change to your SAP system is not based on standard SAP transactions, but is based on a modification that your organization has made to SAP, or perhaps interfaces to another application altogether. In this case your test accelerator needs to be able to handle custom Z-transactions or inbound or outbound interfaces. If your test accelerator is designed properly, it is possible to build custom components to address the specific needs of these custom objects. It is very common to run into many custom objects that are part of core, critical business processes, so you should assume you will need to deal with these.
In SAP, Z-transactions exist because the necessary functionality in a given SAP transaction did not exactly meet the needs of the user; therefore it is unrealistic to think that a test accelerator will have the necessary functionality to exactly test that Z-transaction. Similarly, an interface to another system will not be included in any standard test accelerator, so a custom test component will be necessary.
To implement effective test automation in these environments, a test accelerator provider must either provide effective training in developing test components or provide an impeccable service to build them for you, or more likely both. If for no other reason than to have the freedom of choice, a vendor must supply a developer's course on the construction of test components. This course should provide enough detail of the internals of the component so that it can be included in the library of other test components and operate seamlessly within that environment. Additionally, the accelerator vendor should be capable of providing a service to build these components for you,
should you choose to outsource their creation. Eventually, we will see third-party service providers offering test component development.
Next-generation test accelerators that store execution, validation, and metadata in a relational database are a dramatic improvement over managing test data in spreadsheets. By centralizing data, the test platform simplifies test creation, manages access control, simplifies backup, recovery, and revision control, and enables simplified audit and compliance with government regulatory requirements under Sarbanes-Oxley, the Food and Drug Administration (FDA), and others.
Initial SAP test accelerators have been a promise for a number of years, but their value is hampered on account of the following five major challenges:
1. Test script creation slowed the adoption, as many were not ready to follow the paradigm of the test tool provider over SAP's business process model.
2. Most test organizations were in the dark about lights-out testing, so they were unable to see the ROI on test automation, even with accelerators.
3. Maintainability of test scripts was a significant hindrance for anyone who was writing test scripts manually and required an army of staff to keep test assets current.
4. Data management was unsecured, and this only exacerbated the maintainability of test assets.
5. Finally, the cost model for building test assets was based on a custom programming model instead of a build-once, distribute-everywhere model that distributes the cost across the entire SAP community.
Times have changed and forward-thinking test automation companies have solved these five key problems, so if one has been skeptical of SAP test accelerators in the past, it is a good time to take a closer look at the state of the industry.
Index
A
ABAP. See Advanced business application programming (ABAP)
Action-based testing, 342. See also Keyword-driven automation approach
Ad hoc system changes, 300, 309
Ad hoc tests, 72, 166
Advanced business application programming (ABAP), 2, 15, 16, 20, 21, 23, 52, 63, 167, 192, 200, 204, 227, 231, 245, 255, 263, 301
Alexander, Christopher, 12
Altova UModel, 47
Application under test (AUT), 11
Approval (sign-offs)
  automated test cases, 91
  production changes, 309, 310
  requirements, 44
  test case, 233, 239
  test plans, 176
  test results, 3
  test strategies, 176
ARIS, 47
Arsin Corporation, test tool evaluation form, 101–106
ASAP Roadmap Methodology, 2, 7, 28
  accelerator for BPP template, 236
  and early testing, 9
  feasibility check, 45
  integration test, 59
  requirements, developing, 45
  test case templates, 183–185, 232, 233
  test strategy, 174
  testing activities, 269
  workshops, conducting, 42
Ascendant, 2, 28
  and business scenarios, 41
  experience with and cost estimates, 67
  prioritizing requirements, 52, 53
  test case templates, 232, 233
  test plan sample, 176
  test strategy template, 174, 175
  use of in developing requirements, 39
Audits, 3, 4, 170, 171, 288, 305, 317
Automated testing
  capacity testing. See Capacity testing
  criteria for business process test automation, 163, 164
  failure signs, 167, 169, 170
  functional testing. See Functional testing
  and number of resources needed, 200
  processes, 32
  production-based SAP system. See Production-based SAP system
  regression testing. See Regression testing
    sources of automation, 160–161
    test results, 287, 288
    tools. See Test tools
    types of tests suitable for, 161–166
AutoSys, 15
Autotester, Inc., test tool evaluation form, 107–114
AutoTester ONE Special Edition for SAP, evaluation form, 107–114

B

Basis team
    and capacity testing, 258
    and performance testing, 17
    as source of individual resources, 226
    and system changes, 310, 312
    test readiness review, 178
Basis team leader
    approval of test plan and test strategy, 176
    as member of change control board, 55
Batch Data Communication (BDC), 227
Beta testing, 211
Black box testing, 1, 7, 15
BMC Patrol, 253
Bolt-ons, 16, 17, 160, 200, 251, 311
Boundary testing, 4
Budget, 4, 68, 171. See also Costs
Build-operate-transfer (BOT), 327, 332
Build versus buy analysis, 81, 82
Business analysts (BAs), 5, 40–42, 61, 63, 64, 71, 86, 169, 231, 233, 238
Business process master list (BPML), 49, 160, 161
Business process procedures (BPPs), 21, 161
    authoring tools, 221
    and production changes, 310, 313
    and quality assurance, 173, 197
    as source of information for test case, 231, 233, 236, 237
    template, 183, 236
Business processes
    changes to and maintenance of automated test components, 84
    criteria for test automation, 163, 164
    diagrams, software for designing, 47
    gathering and analyzing, 87
    and structure of functional teams, 24
    and test automation, 91
Business rules, 87
Business scenarios, 41
Business Warehouse (BW), 16, 231, 251

C

Calendar, test execution, 274–276
Caliber-RM, 50
Capability Maturity Model (CMM), 3, 26, 173, 272, 286, 325
Capacity testing
    analysis, 264–266
    automated, 253–264
    execution, 259–264
    importance of, 243–244
    manual, 253–255, 260
    monitoring, 260, 261
    need for, 243, 244
    planning, 244–253
    and production-based systems, 266
    Roadmap templates, 244
    test design and construction, 253–259
    trial runs, 258, 259
    triggers for, 244, 245
    types of, 243, 245, 246
Cascading effects of system changes, 299, 302, 304, 316
CATT, 262, 277
CCMS, 253
Certification processes, 58
Certify, test tool evaluation form, 131–139
Change control board (CCB), 29
    and capacity testing, 265
    defect management, 285
    defect resolution, 290, 292
    and help desk system requests, 49. See also Help desk
    members of, 55
    production changes, 316
    requirements management, 38, 50
    role and responsibilities of, 55, 56
    and system changes, 18
    waivers, evaluation of impact, 60
Change management team, 17, 29–33, 310
Checklist, test readiness review, 31, 32, 179–182
Class library framework, 79–82, 87, 88
Code-free automation approach, 76, 82–84, 87, 88
Coding practices, 88
Commitment from management, 72, 73
Computer Aided Testing Tool (CATT), 262, 277
Compuware Corporation, 304
    test tool evaluation form, 115–121
Configuration
    changes, 49, 75, 83, 84, 162
    and maintenance of test components, 84
Configuration team
    and capacity testing, 258
    leader approval of test plan and test strategy, 176
    and performance testing, 17
    and scenario testing, 16
    as source of individual resources, 225, 226
    structure of, 24
    and system changes, 310, 313, 314
    test readiness review, 178
    unit testing, 14
    and user acceptance testing, 17
Conflicts of interest and independent testing, 57
Consistency, requirements, 51
Consultants, 5, 6, 61, 64, 231, 241
Continuous process improvement
    lessons learned, documentation of. See Lessons learned
    and outsourcing, 327
    and tester evaluation, 205
    testing, 22
Control-M, 15
Corrective actions, 26
Costs
    automation, 73
    estimating, 4, 61–68
    licensing fees, 82
    and outsourcing, 24
Customer input (CI) templates, 39–49, 53, 236
Customer Relationship Management (CRM), 251
Customization, 31, 91, 160, 163, 203

D

Data
    changes to, 84, 85
    defects, 293
    dictionary, 232, 233, 236
    and functional test automation, 72, 74–76, 82–85, 87
    historical data, 62, 63, 68, 276
    loading, 202, 223
    master data, 40, 232
    migration testing, 18, 237
    test data, collection of, 290
    and test dependencies, 277, 288
    values, 183, 304
Data-driven approach to automation, 72, 76, 77, 342. See also Keyword-driven automation approach
Database administrator (DBA), 247, 260
Database team, 17, 229
Databases
    defects, 285, 288, 298
    and frameworks, 78, 79
    Microsoft Access, 222
    test data, 85
Defects, 3
    aging, 280, 281
    density, 282, 283
    fix retest, 281
    and implementation partners, 59
    management, 285, 288, 298
    newly opened, 281, 282
    prevention, 11
    reporting, 285, 290–298
    and role of quality assurance, 174. See also Quality assurance (QA) standards
    severity levels, 292, 294, 295
    and test engineer self-evaluation, 214–217
    and test management tools, 170, 171
    trend analysis, 281, 282
Deloitte Consulting, 28
Department of Defense (DoD), 28, 29, 287
Destructive testing, 72
Development objects, 49. See also Report, interface, conversion, enhancement, work flow and form (RICEWF) objects
Development team
    and capacity testing, 258
    and development testing, 15, 16, 227, 228
    and scenario testing, 16
    structure of, 24
    and system changes, 310, 312
    test readiness review, 178
Development team leader
    approval of test plan and test strategy, 176
    as member of change control board, 55
Development testing, 15, 16, 227, 228
Diagramming. See also Unified Modeling Language (UML)
    flow processes. See Flow process diagrams
    processes and requirements, 3
Documentation, 3
    approvals, 311
    automation, 71, 72, 91
    capacity testing, 255, 256
    inadequate, 169
    lessons learned, 21, 23, 26–28, 31, 173, 198, 265
    need for, 285, 286
    and outsourcing, 88
    requirements, 9, 11, 44
    retention of, 4
    and system changes, 313
    test results, 285–287
DOORS, 50
Dustin, Elfriede, 50

E

Early testing, importance of, 9–13
Early Watch, 253
eCATT, 92, 221
Eighty/twenty rule (Pareto's principle), 247, 251
End users. See also User acceptance test (UAT)
    and developing requirements, 38, 45–47
    and functional requirements, 37
    hands-on testing, 3
    help desk tickets. See Help desk
    and integration testing, 17
    and performance testing, 162, 163
    questionnaires, 48
    as source of individual resources, 225, 226
    surveys, 48
    and system changes, 310, 312
    and test cases, 238, 239
    training. See Training
    as workshop participants, 42
Entrance criteria, 3, 58, 176, 177
Estimates
    automation timeline, 86–88
    costs, 62, 63, 65–67
    and test execution calendar, 274
    test schedule, 268, 269
Evolutionary model, 28
Exit criteria, 3, 58, 59, 176, 177
Expected test results, 183
Expert judgment model, 67, 68, 274
Extended Computer Aided Test Tool (eCATT), 92, 221

F

Feasibility check, 45
Flow process diagrams, 4, 32, 222, 231, 233, 237, 313
Frameworks approach, 76–82, 86–88
Functional requirements
    documenting, 13
    and managing requirements, 49
    and prioritization, 52
    and requirements traceability matrix, 54. See also Requirements traceability matrix (RTM)
    as source of information for test case, 231
    and test cases, 161
Functional specifications, 236, 237, 313
Functional team, 24, 55, 310, 312
Functional testing, 3
    approaches, 76–83
    business case for, 69–71
    documentation, 71, 72
    management, 85–87
    negative testing, 72
    outsourcing scripting, 87–89
    pitfalls, 74–76
    positive testing, 72
    for regression testing, 71
    success factors, 72–74
    test library maintenance, 83–85
    testers, evaluating, 209, 210
    "to-be" processes, 72
    when to automate, 71, 72

G

Gap analysis, 38, 39, 42, 43, 45, 47–48, 193, 240, 301
Good manufacturing practices (GMPs), 65
Graphical user interface (GUI), 4, 49, 79–83, 311

H

Hands-on testing, 3
Hardware resources, 224, 225
Help desk
    and automated test tools, 224
    and cost estimates, 63
    and emergency changes, 300
    and production changes, 47, 310
    reduction of complaints as objective of SAP, 243
    and SAP implementation, 5, 6
    and scope creep, 49, 55
    as source of requirements, 37–39, 42, 47–49
    and system changes, 227, 301, 302, 310
    system defects, 20
    and traceability of requirements, 53
Historical information model, 67, 68, 274

I

IBM, 28
    Ascendant. See Ascendant
    test tool evaluation form, 150–159
IDS Scheer, 47
IEEE, 3, 7, 28, 56, 57, 173
Implementation methodologies, 2, 5–8, 28, 29, 62, 63, 65–67, 162
Independent verification and validation (IV&V), 56, 57. See also Verification
Industry regulations
    and CI templates, 43, 44
    requirements, 37
    as source of requirements, 38
Infrastructure team, 17
Institute of Electrical and Electronics Engineers (IEEE), 3, 7, 28, 56, 57, 173
Integrated Relationship team, 325–327
Integration manager, 55, 176
Integration team, 18, 310, 312
Integration testing, 16, 17, 160–162, 178–185, 201, 228, 301
Intellicorp, 47
Intermediate documents (IDOCs), 251, 255, 262
Interviews, 45–47
iTKO Inc., test tool evaluation form, 140–149

K

Key or action word framework, 79, 86, 88
Keyword-driven automation approach, 340–346

L

Legacy systems, 228
    and capacity testing, 247
    and challenges in SAP testing, 5
    and data migration, 23
    and data verification, 275
    and development testing, 15, 16, 228
    documentation, 39
    and quality assurance, 175
    as source of requirements, 38, 42, 45
    and test cases, 21, 232, 237, 238
    and test team members, 199, 204
Lessons learned
    capacity testing, 265
    and changes to testing, 31
    and cost estimates, 61, 63, 66
    need for capturing, 19
    from outsourcing, 331, 332
    and peer reviews, 239
    repository for, 28
    reviewing and documenting, 21, 23, 26–28, 31, 173, 198
LiveModel, 47
Load testing, 162, 243, 245, 247, 255–259, 261, 263, 264. See also Capacity testing
Loadrunner, 253
Logs, 283, 287. See also Test results
Luminate, 253

M

Maintenance
    automated test components, 84, 85
    test cases, 240, 241
Manual keystrokes, capturing, 91
Manual testing, 3, 202
    ad hoc tests, 72
    capacity testing, 253–255
    destructive testing, 72
    production-based SAP system, 299, 304–306
    random testing, 72
    and signs of automation failure, 169, 170
    and system changes, 298
    test results, 287, 288
Mercury Deployment Management Extension for SAP Solutions, 309
Mercury Interactive, 50, 253, 309
Metrics
    test case planning, 239, 240
    test execution, 278–283

N

Naming conventions, 75, 76, 85, 88
Narratives, 4, 32, 47, 222. See also Unified Modeling Language (UML)
Negative testing, 4, 14, 15, 72
Nonfunctional requirements, 13
Nonfunctional testing, 210, 211

O

Offshoring, 319, 325
Origins of SAP, 1, 2
Orthogonal arrays testing systems (OATS), 4, 306, 333–340
OSS (On-line Service System), 18, 49, 63, 66, 162, 190, 203, 227, 267, 299, 300, 302, 311
Outsourcing
    benefits of, 319–321
    build-operate-transfer (BOT) model, 327, 332
    costs, 24
    defined, 319
    deliverables-based project, 323
    documentation, 88
    factors to consider, 321, 322
    and Integrated Relationship team, 325–327
    lessons learned, 331, 332
    managed service, 323, 324
    managed staffing, 324
    offshore, 319, 325–327
    offsite, 325–327
    onsite, 325
    payment terms, 329
    scripting, 87–89
    as source of individual resources, 225
    staff augmentation, 324, 328
    terms of testing service, 327–331
    test automation, 91
    test teams, 22, 24
    and testing system changes, 313, 314

P

Pareto's principle (80/20 rule), 247, 251
Pass/fail criteria, 178
Patches, 49, 60, 66, 73, 83, 84, 162, 190, 203, 223, 267, 289, 300, 302
Peer reviews, 3, 31, 44, 233, 238, 239
Performance testing, 17, 160–162, 201, 228, 229, 301, 302
Pilot project, 73, 74
Positive testing, 4, 72
Prioritization, requirements, 36, 51–53
Production-based SAP system
    approvals for changes, 309, 310
    automated testing, 299, 305–309, 314–317
    and capacity testing, 266
    and cost estimates, 65–67
    rainy-day scenarios, 306
    requirements, sources of, 42, 47, 48
    sunny-day scenarios, 305–309, 311
    support for testing, 313, 314
    system changes, 300–303
    testing challenges, 302, 304, 305
    types of tests, 310–312
Production support, 3
Production team, 225, 226
Project Management Institute (PMI), 26
Project management operations (PMO), 29, 56, 57, 178
Project manager, 55, 176, 178, 258
Prototypes and demonstrations, 39, 45, 47, 53, 55, 58, 63, 166

Q

Quality assurance (QA) standards, 3, 4
    applicability of, 186, 187
    and cost estimates, 61
    limitations of, 186, 187
    quality defined, 35
    and quality management (QM) module, 173
    quality measures, 12
    test case template, 183–186
    test cases, 240
    test criteria, 176–178
    test plan and strategy, 174–176
    test readiness review, 178–183
Quality assurance (QA) team
    composition of team, 201, 202
    and cost estimates, 61, 65
    and diversion from primary job responsibilities, 187, 188
    evaluating testers, 205–214
    integrated with test team, 191
    number of resources needed, 200–201
    project preparation phase, 190
    responsibilities and skills sets required, 197, 198
    role and responsibilities, 173, 174
    skills, 188, 191–199
    and test cases, 233
    test team differences, 188–190
    when to add to project, 190, 191
Quality Center (TestDirector), 50
Quality management (QM) module, 173
Questionnaires
    capacity test planning, 247–250
    end users, 45–48
R

Rainy-day scenarios, 306
Random testing, 72
Rational Functional Tester, test tool evaluation form, 150–159
Rational Requisite Pro, 50
Rational Rose, 47
Rational Unified Process (RUP), 4
Record and play, 74–77, 85
Regression testing, 3, 4, 18, 32, 71, 160–162, 229, 233, 299, 301, 302, 304, 310, 311, 314
Regulatory compliance, 57, 285, 287, 309, 314
Relational databases, 79
Releases
    previous release as source of requirements, 37, 38, 47, 48
    and system changes, 300, 301
    testing criteria, 177, 213
Remote function calls (RFCs), 41
Repetitive nontesting tasks, 167
Report, interface, conversion, and enhancement (RICE) objects, 301, 313
Report, interface, conversion, enhancement, work flow and form (RICEWF) objects, 15, 16, 37, 49, 237, 271
Reports, 200, 201, 285. See also Test reporting
Repositories
    and cost estimates, 67
    database, 86
    documentation of business processes, 47
    lessons learned, 28
    requirements, 47, 50, 54, 55, 222
    and test management tools, 288, 298
    test plan and strategy, 176
    tests, 170, 171, 176, 241, 283, 308
    and tracking approvals and changes, 309
    vendors, 50
Request for proposal (RFP), 331
Requirements, 4
    ambiguous, 51, 53, 251
    approval process, 44
    and defect prevention, 11. See also Defects
    defined, 36
    development objects, 37, 49
    documentation, 9, 11, 44
    drafting, 38, 39
    early testing, 9, 11
    evaluating, 50–53
    examples of well-written and poorly-written, 251–253
    failure to meet, 7
    feasibility check, 45
    functional. See Functional requirements
    inspection, 44
    linking, 50
    management tools, 38, 49, 50, 170, 171, 221, 222
    methods for gathering, 39–49
    peer review, 44
    performance, 49
    prioritizing, 36, 51–53
    and quality, 12, 13, 35, 36
    repositories, 47, 50, 54, 55, 222
    security, 37, 49
    as source of information for test case, 236, 237
    sources of, 37, 38
    and system changes, 313
    system performance, 37
    terminology, 37, 38
    and test case, 232
    testing, 11, 12
    traceability matrix. See Requirements traceability matrix (RTM)
    types of, 37
    UML, use of. See Unified Modeling Language (UML)
    usability, 49
    user interviews, 45–47
    verification, 12, 13, 56–60
    work flow, 49
    workshops, use of, 38, 42–45
Requirements-based testing, 31, 35
Requirements traceability matrix (RTM), 3, 4, 35
    construction of, 58
    developing, 53, 54
    inadequate, 6, 7
    quality assurance team, 190
    and requirement management tools, 221, 222
    and requirements-based testing, 31
    and test cases, 237
    test team, 190
    and verification of requirements, 58
Requisite Pro, 50
Resources, 219, 220
    environment, 225
    hardware, 224, 225
    individual, 225–229
    quality assurance (QA) team, 200, 201
    software, 222–224
    test lab, 220, 221
    test team, 189, 190, 200, 201, 204, 205
Resumption criteria, 178
Return on investment (ROI)
    and test case automation, 164–166
    test tools, 7, 8, 166, 167, 169
Reverse engineering, 11
RICEWF. See Report, interface, conversion, enhancement, work flow and form (RICEWF) objects
Roadmap Methodology. See ASAP Roadmap Methodology
RTM. See Requirements traceability matrix (RTM)

S

SAP Assessor Tool, 304
SAP modules, 24, 173, 300, 302, 306, 308, 311
SAP objects, transporting, 222, 309
Sarbanes-Oxley (SOX), 4, 52, 57, 65, 286, 309
Scenario testing, 16, 161, 162, 164, 165, 177, 201, 227
Schedule, 4, 62, 65. See also Test schedule
Scope and purpose of book, 2–4
Scope creep, 49, 55
Scope of testing, 4, 6, 7, 65–67
Scope statement, 45, 46
Screen/window framework, 79–81, 87, 88
Screenshots, 186, 225, 262, 283, 288, 292, 305, 310, 314, 316, 317
Scripts
    capacity testing, 255, 256, 258, 262
    CATT, 277
    and cost estimates, 63
    documentation, 267
    eCATT, 221
    and functional testing, 71, 75–79, 84–89
    outsourcing, 87–89, 330
    script coding, 86, 87
    test script, 231, 240, 284, 299, 302, 306, 308, 309, 311, 313–317
    and test tools, 75, 91, 92, 163, 164, 199, 221, 272
    and testers, 208, 213
Security testing, 14, 175, 301
SEI. See Software Engineering Institute (SEI)
Serena-RTM, 50, 58
Service-level agreements (SLAs), 4
    and capacity testing, 259, 263, 265, 266
    and outsourcing, 323, 324
    and performance testing, 17
    and requirements, 253
    and system changes, 311
Site surveys as source of requirements, 38
SiteScope, 253
Six Sigma, 173, 176
Smart Draw, 47
Smoke testing, 7, 161, 301
Software development
    and early planning, 268
    life cycle, 9, 267, 328
    outsourcing, 319
    and system quality, 35
    testers, 12
Software Engineering Institute (SEI), 4, 7, 28, 286
    Capability Maturity Model. See Capability Maturity Model (CMM)
Software resources, 221–224
Solution Manager, 28, 42, 47. See also ASAP Roadmap Methodology
    automating sunny-day scenarios, 306–308
    CI templates, 39–49, 53, 236
    stress and volume tests, templates for planning, 175, 247
    use of in developing requirements, 39
    white paper for documenting test strategy, 174, 175
Spreadsheets
    capacity test design, 253
    and frameworks, 78–80
    test case templates, 183, 186, 231, 233
    test data, 85
    test results, storing, 283, 288
Standards
    coding, 85
    independent testing, 56–60
    naming conventions, 75, 76, 85, 88
    quality assurance. See Quality assurance (QA) standards
    test case conventions, 74–76
Statistical process control (SPC), 247
Stress testing, 162, 175, 220, 247
String tests, 16, 160, 201, 219, 301
Structured Query Language (SQL), 79, 192, 263
Subject matter experts (SMEs), 5
    and automated testing approaches, 87
    and capacity testing, 247, 258
    and estimates, 61, 64
    and exit criteria, 177
    and functional test automation, 71, 86
    and integration testing, 17, 228
    as members of change control board, 55
    as members of test team, 198, 199, 206
    and peer reviews of test cases, 238
    requirements, gathering information for, 39
    and scenario testing, 16, 227
    and signs of test automation failure, 167, 169
    as source of individual resources, 225, 226, 320
    and technical experts, 199, 208, 209
    test case review, 238
    user acceptance testing, 58
    as workshop participants, 42
Success testing criteria, 178
Sucid Corporation, test tool evaluation form, 122–130
Sunny-day scenarios, 306, 308, 309, 311
Supplier Relationship Management (SRM), 251, 311. See also Bolt-ons
Suspension criteria, 3, 177
System architect, 61, 64
System changes. See also Production-based SAP system
    documentation, 313
    emergency (ad hoc), 300, 301, 310
    enhancements, 49, 66
    impact of, assessing, 316, 317
    and maintaining test cases, 240, 241
    outputs, 313, 314
    patches. See Patches
    planned, 300, 301
    and regression testing, 18
    testing activities, 64
    upgrades. See Upgrades
System modules, addition of and need for new requirements, 38

T

Table-driven testing, 342. See also Keyword-driven automation approach
Taguchi, Genichi, 306, 334
Technical expertise, 198, 199, 208, 209
Technical specifications, 231, 236, 237
Technical testing, 18, 229
Templates
    Ascendant, 174, 175, 232, 233
    business process procedures, 183, 236
    capacity testing, 244
    customer input (CI), 39–49, 236
    and documenting lessons learned, 27
    evaluation matrix template, 93–100
    and outsourcing, 24
    and quality assurance, 173, 186
    Solution Manager, 175
    stress test planning, 175, 249
    test case, 174, 175, 183–186, 231–237, 254
    test strategy, 174
    testing SAP, 23, 29, 30
Test accelerators, 367–376
Test analysts, 86
Test approach
    changes, managing, 29–33
    implementation methodologies. See Implementation methodologies
    project components, 1
    review of existing practices, 19–22
    software methodologies, 28, 29
Test cases, 3
    automated, 4, 160, 161, 167, 232
    building, 232, 233
    characteristics of well-written, 232, 233
    customized template, 234, 235
    data dictionary example, 236
    design of, 231
    execution of. See Test execution
    maintaining, 240–241
    methods for automating, 168
    metrics, 239, 240
    and number of resources needed, 200, 201
    and orthogonal arrays (OATS), 4, 306, 333–340
    peer review, 238, 239
    production-based changes, 314, 315
    reuse of, 232
    sources of information for, 231, 233, 236–238
    templates, 174, 175, 183–186, 231–237, 254
    test scenarios, 308
    and test tools, 161–166
    and use of implementation partners, 58, 59
Test criteria, 176–178
Test Data Migration Server, 223
Test design
    and automated testing, 78, 86, 89
    and number of resources needed, 200, 201
    and test management tools, 170, 171
    tools for, 91
Test engineers
    and automated testing, 78, 82, 86, 163
    and benefits of code-free automation, 83
    evaluating, 205, 207, 211, 212
    responsibilities and skills required, 194–196
    and script coding, 87–89
    self-evaluation, 214–217
    and signs of test automation failure, 167, 169
    skill level, impact of on test execution schedule, 272
Test environment, 73, 74, 196, 225
Test execution, 233, 267, 268
    automated, 267
    calendar, 274–276
    capacity testing, 259–264
    logs and results, 283
    manual, 267
    metrics, 278–283
    and number of resources needed, 200, 201
    purpose of, 267
    test dependencies, 277, 278
    and test management tools, 170, 171
    test schedule, 267–274, 278–283
    tools for, 91
Test harness, 199, 208, 350–354
Test labs, 220, 221
Test lead, 193, 194
Test libraries, 71, 73, 76, 83–87, 89, 204, 299, 306, 309, 316
Test management tools, 3, 91, 92, 170, 171, 308
    test case templates, 231, 233
    test data collection, 290
    test results, storing, 283
    and testing metrics, 278
Test manager
    lessons learned, documenting, 27
    and managing changes, 29–33
    and peer review of test cases, 239
    responsibilities and skills required, 192, 193
    and test strategy, 174
    tester evaluation, 205–214
Test plan, 1, 4, 29–33, 91, 170, 171, 174–176, 200, 201
Test program tasks, 269–271
Test readiness review (TRR), 20, 31, 32, 178–183, 233, 277
Test reporting, 170, 171, 200, 201, 285
Test repository, 170, 171, 176, 241, 283, 308
Test results, 3, 4
    documentation, 285–287
    screenshots, 289, 290, 292
    storing, 283, 287, 290
Test schedule, 267, 274, 278–283
Test scripts. See Scripts
Test strategy, 4, 9, 29–33, 190
Test team
    borrowed resources, 204, 205, 225, 226
    centralized, 22, 23
    composition of, 202–205
    and cost estimates, 61, 64, 66
    decentralized, 22–24
    formation of, 3
    integrated with QA team, 191
    and integration testing, 17
    manager of as member of change control board, 55
    number of resources needed, 200, 201
    outsourced, 22, 24
    and performance testing, 17
    permanent team, 202–204
    project preparation phase, 190
    and quality assurance, 174, 188–190
    and regression test, 18
    resources, 187, 188
    and scenario testing, 16
    skill sets, 191–198
    structure, 22–26
    system changes, 310, 312, 314, 317
    test lab responsibility, 220, 221
    test readiness review, 178
    and user acceptance testing, 17
    when to add to project, 190, 191
Test tools
    Arsin Corporation, 101–106
    automation, 3, 4, 16, 17
    automation failure signs, 167, 169, 170
    Autotester, Inc., 107–114
    benefits of, 166, 167
    CATT, 262, 277
    commercial vendors, 92
    Compuware Corporation, 115–121
    eCATT, 92, 221
    evaluation criteria, 160
    evaluation matrix template, 93–100
    IBM, 150–159
    iTKO Inc., 140–149
    methods of automation, 167, 168
    production-based SAP system, 306
    readiness for, 91
    return on investment, 7, 8
    role of, 92
    software, 221
    Sucid Corporation, 122–130
    types of tests suitable for automation, 161–166
    use of, 91, 92
    vendor survey, 92
    Worksoft, Inc., 131–139
Testers. See also Test team
    early involvement, need for, 11, 12
    evaluating, 205–214
    expectations, 207, 208
    lack of skills and knowledge, 5, 6
Testing committee, 29, 30
Testing practices, basic principles, 2, 3
TestPartner, evaluation form, 115–121
Text editors
    capacity test design, 253
    and frameworks, 79
    test case templates, 183, 186, 231, 233
    test data, 85
    test results, storing, 288
"The system shall" statements, 11
Third-party organizations
    documenting lessons learned, 27
    independent verification of requirements, 56–58
    test case review, 238
    third-party verification, 3
ThreadManager, 28
"To-be" processes, 72, 87
Total quality management (TQM), 35, 176
Touch points, 41, 49, 51, 72, 193, 304, 306, 309, 310
Traceability, requirements, 51, 53. See also Requirements traceability matrix (RTM)
Training, 8, 72–74, 91, 310, 313
Transaction codes, 160, 161, 278, 304, 308
Transporting objects, 20, 26, 32, 222, 309
TRR. See Test readiness review (TRR)

U

UML. See Unified Modeling Language (UML)
Unified Modeling Language (UML), 3, 11, 32, 47, 222, 237
Unit testing, 3, 14, 15, 227, 301
Upgrades, 20, 27, 32, 49, 56, 201, 223, 300, 302, 311
Usability testing, 18, 37, 346–350
Use case, 11, 13, 47, 48
User acceptance test (UAT), 17, 53, 228, 229. See also End users
    Department of Defense requirements, 29
    resources, 219
    and role of change control board, 55
    and test cases, 233, 238, 239
    test lab, use of, 220
    and test strategies, 175
    verifications, 58, 59

V

V-shaped model, 9
Validation of system design, 3
Verification
    independent verification and validation (IV&V), 56, 57
    of objects, 167
    and outsourcing, 320, 326
    points, 161
    requirements, 12, 13, 56–60
    service-level agreements, 263
    of system design, 3
Versions
    control, 85, 88, 173, 176, 197, 222, 223, 241
    and test management tools, 170, 171, 241
Visio, 222
Volume testing, 162, 247

W

Waivers, 58–60
Waterfall model, 9, 28
Weigers, Karl, 38, 316
White box testing, 15
Work Breakdown Structure (WBS), 269–271
Workshops, 38, 42–45
Worksoft, Inc., test tool evaluation form, 131–139
Workstations, 224, 225