Agile Test Automation - xunitpatterns.com/~gerard/india2011-agile-automation.pdf (retrieved 2015-06-26)
TRANSCRIPT
Slide 1 - Copyright 2010 Gerard Meszaros, Agile Tour 2010

Agile Test Automation
Gerard Meszaros
[email protected]
Slide 2

My Background (timeline: 80's, 90's, 00's)
Gerard Meszaros, [email protected]
Sectors: Embedded, Telecom, I.T., Product & I.T.
• Software developer
• Development manager
• Project Manager
• Software architect
• OOA/OOD Mentor
• Requirements (Use Case) Mentor
• XP/TDD Mentor
• Agile PM Mentor
• Test Automation Consultant & Trainer
• Lean/Agile Coach/Consultant
Slide 3

Test Automation Pyramid (origin: Mike Cohn)
• Large numbers of very small unit tests (xUnit)
• Smaller number of functional tests for major components (Keyword or xUnit)
• Even fewer tests for the entire application & workflow (various automation tools, or manual)
[Pyramid, top to bottom: System Tests, Component Tests, Unit Tests]
Slide 4

Agenda
• Motivation
– The Agile Test Problem
– The Fragile Test Problem
• Approaches to Test Automation
– Test Preparation Approach
– Test Definition Language
– Test Interface
• Test Automation Strategy
– Selecting the right Approach(es)
– Maximizing Automation ROI
Slide 5

Product Owner Goal
• Goal: Maximize business value received
[Diagram: the Product Owner turns a Concept into Features; maximize Features delivered, minimize Money spent]
Quality is Assumed; Not Managed
Slide 6

Why Quality Often Sucks
• Iron Triangle of Software Engineering: Resources, Time, Functionality
• What about Quality?
You can fix any three; the fourth is the outcome.
Slide 7

Why Quality Often Sucks
• Iron Triangle of Software Engineering: Resources, Time, Functionality
• What about Quality? You can fix any three; the fourth (Quality) is the outcome.
In Agile, we "pin" quality using automated tests.
Slide 8

Speaking of Quality, would you ...
... ask your doctor to reduce the cost of the operation ...
... by skipping the sterile technique?
Test Automation is like hand washing: improved results, but an upfront cost.
Slide 9

Minimizing Cost of Product
Total cost includes:
• developing the software
• verifying the newly built functionality
• verifying old functionality still works
• fixing any bugs found
• verifying nothing was broken by fixes
Agile Test Automation can reduce the cost of all of these activities.
Slide 10

Incremental Development
• Type NF bugs: New Functionality is wrong
• Type RB bugs: New bugs in old functionality (Regression Bugs)
[Diagram: Concept becomes Version 1.0, which contains the initial NF bugs; the Evolved Concept becomes Version 1.x, which adds new NF bugs plus RB bugs in the Version 1.0 functionality]
Slide 11

The Agile Test Problem
[Diagram: one long Requirements, Development, Testing cycle]
Slide 12

The Agile Test Problem
• As development increments reduce in duration, testing needs to be reduced accordingly
[Diagram: two shorter Requirements, Development, Testing cycles]
Slide 13

The Agile Test Problem
... and traditional approaches to testing no longer work
[Diagram: four short Requirements, Development, Testing cycles]
Slide 14

Traditional Role of Testing
Thanks to Brian Marick and Mary Poppendieck
Shigeo Shingo, co-inventor of the Toyota Production System: "Inspection to find defects is Waste. Inspection to prevent defects is essential."
[Test quadrant diagram. Business Facing: Acceptance Tests & Regression Tests; Usability Tests & Exploratory Tests. Technology Facing: Unit Tests & Component Tests; Property Tests (Response Time, Security, Scalability). The "Critique Product" side produces a report card grading Functionality, Usability, Scalability, Response, and Availability.]
Slide 15

Changing the Role of Testing
Thanks to Brian Marick and Mary Poppendieck
[Same test quadrant diagram, with the roles changed: the tests that Define Product Requirements and guide Software Design should prevent anticipatable defects from happening; the "Critique Product" tests (report card: Functionality, Usability, Scalability, Response, Availability) should find non-anticipatable defects, ASAP!]
Slide 16

Changing the Role of Testing
Thanks to Brian Marick and Mary Poppendieck
[Test quadrant diagram as on the previous slides.]
For effective prevention:
1. Tests must be available before development
2. Developers must be able to run tests before check-in
Slide 17

Reducing the Cost to Fix Defects
Cost to understand and fix a defect goes up with the time it takes to discover it.
• Why? We can remember where we put the newly inserted defect because:
1. We know what code we were working on
2. The design of the code is still fresh in our minds
• We may have to change less code
– Because we wrote less code based on the defect
Slide 18

Continuous Acceptance Testing!
• Defines what "Done Looks Like"
– Several to many tests per User Story / Feature
[Timeline: Write Story Test, Build Code, Test Code, Test Story; repeated for each story]
• Tests executed as soon as the developer says "It's Ready"
– End-of-iteration: OK
– Mid-iteration: Better
Slide 19

Continuous Readiness Assessment!
• Defines what "Done Looks Like"
– Several to many tests per User Story / Feature
[Timeline: Write Story Test, Build Code, Test Code, Test Story; the Build Code / Test Code steps form the Readiness Assessment]
• Executed by developers during development
– To make sure all cases are implemented
– To make sure it works before showing to business
• Tests executed as soon as the developer says "It's Ready"
– End-of-iteration: OK
– Mid-iteration: Better
Slide 25

The Role of Tests in Agile
• Provide a definition of what "done" looks like
– Know before we start building what we want
» Tests help us think about how it should work
– Facilitate communication between stakeholders
» Provide concrete examples of requirements
Slide 26

The Role of Automation in Agile
• Provide rapid, repeatable execution of tests
– No human intervention required
– Flatten the cost of testing each increment
• Provide a Safety Net for Change & Innovation
– Provide rapid feedback to reduce the cost of fixing defects
» On demand (developer) and event driven (CI build)
– Rapid feedback enables experimentation
» Don't have to choose between Quick and Safe
• Support Manual Testing
– Remove the repetitive drudgery so testers can focus on high value activity by:
» automating entire tests, or
» automating the steps that can be automated
Slide 31

Agenda
• Motivation
– The Agile Test Problem
– The Fragile Test Problem
• Approaches to Test Automation
– Test Preparation Approach
– Test Definition Language
– Test Interface
• Test Automation Strategy
– Selecting the right Approach(es)
– Maximizing Automation ROI
Slide 32

Anatomy of an Automated Test
[Diagram: the Test drives Our System (Interface, Business Logic, Database, running on Container Services), which also talks to several Other Systems.]
Test Scenario Name
-----------------------
Preconditions
-----------------------
1. Do Something
2. Check Something   (steps 1 & 2 may be repeated)
-----------------------
Clean Up
Test Setup establishes the preconditions (state); Test Teardown cleans up afterwards; an Adapter maps the Given / When / Then steps onto the system.
Slide 33

The Fragile Test Problem
What, when changed, may break our tests accidentally:
– Behavior Sensitivity » business logic
– Interface Sensitivity » user or system interfaces
– Data Sensitivity » database contents
– Context Sensitivity » other system state
In Agile, these are all changing all the time!
Slide 34

Interface Sensitivity
• Tests must interact with the SUT through some interface
• Any changes to the interface may cause tests to fail.
– User Interfaces:
» Renamed/deleted windows or messages
» New/renamed/deleted fields
» New/renamed/deleted data values in lists
– Machine-Machine Interfaces:
» Renamed/deleted functions in API
» Renamed/deleted messages
» New/changed/deleted function parameters or message fields
E.g.: Move the tax field to a new popup window
Slide 35

Behavior Sensitivity
• Tests must verify the behavior of the system.
– Behavior is also involved in test set up & tear down
• Any changes to business logic may cause tests to fail.
– New/renamed/deleted states
– New/changed/removed business rules
– Changes to business algorithms
– Additional data requirements
E.g.: Change from GST+PST to HST
Slide 36

Data Sensitivity
• All tests depend on "test data", which:
– forms the preconditions of the test
– is often stored in databases
– may be in other systems
• Changing the contents of the database may cause tests to fail.
– Added/changed/deleted records
– Changed schema
E.g.: Change the customer's billing terms
Slide 37

Context Sensitivity
• Tests may depend on inputs from another system
– State stored outside the application being tested
– Logic which may change independently of our system
• Changing the state of the context may cause tests to fail.
– State of the container » e.g. time/date
– State of related systems » availability, data contents
[Diagram additions: a Security System (User: X, Permissions: none), Container Services providing Time and Date, and a related system holding Customer X]
E.g.: Run the test in a shorter/longer month
Slide 38

Agenda
• Motivation
– The Agile Test Problem
– The Fragile Test Problem
• Approaches to Test Automation
– Test Preparation Approach
– Test Definition Language
– Test Interface
• Test Automation Strategy
– Selecting the right Approach(es)
– Maximizing Automation ROI
Slide 39

How is Agile Test Automation Different?
• We automate the tests for a different reason
– Defect Prevention (instead of Detection)
– To communicate requirements (via Examples)
– To "Pin" the functionality once it's built (Regression)
• We automate the tests a different way
– Many different kinds of tests
» E.g. we don't rely solely on GUI-based automation
– Using tools that support collaboration & communication
» in addition to confirmation
• We plan the automation based on ROI
– The goal isn't 100% automation
– The goal is to maximize benefit while minimizing cost
Slide 40

How Effective is our Automation?
• Are the tests fully automated?
– Can they run unattended?
– Are they fully self-checking?
• Are the tests low maintenance?
– How often do we need to adjust them?
– How many tests are affected by a change in the SUT?
• Do the tests describe the requirements clearly?
– Can everyone understand them?
– Could we (re)build the system from them?
• Can anyone run them?
– Can developers run them before checking in code?
Slide 41

Common Approaches to Test Automation
Each approach is a combination of choices along four dimensions:
Test Preparation: Recorded / Refactored / Hand-written
Test Language: Code / Keyword / Data
Test Interface: Raw UI / Adapter / API
Test Data Setup: Global, Static / Per Suite Run / Per Test Run
[Diagram: the test scenario (Preconditions; 1. Do Something; 2. Check Something; Clean Up) reaches Our System (Interface, Business Logic, Database, on Container Services) either via the Raw UI, via an Adapter, or via an API, with a Fixture establishing the precondition state.]
Slide 42

Common Approaches to Test Automation
Slide 43

Keeping Tests Simple: Testing via API
• APIs need to be designed in
– Design for Testability (DfT)
• Requires collaboration with Development
– Agile fosters collaboration through co-located teams
What we want to write: intention-based keywords, sent to an API in front of the Core Business Logic rather than through the User Interface:

Test Invoice Generation - New Customer
-----------------------------
Logged in as Clerk
Item1, Item2 exist
-----------------------------
1. CreateCustomer "Acme"
2. CreateAccount NewCust
3. AddPurchase Item1
4. AddPurchase Item2
5. GenerateInvoice NewAcct
6. ...
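As a rough sketch of the idea, the intention-based steps above can map one-to-one onto calls on a test API that sits directly in front of the business logic. All class and method names below are illustrative stand-ins, not from the deck:

```java
// Hypothetical stand-in for the Core Business Logic behind the test API.
class SalesSystem {
    private final java.util.List<String> purchases = new java.util.ArrayList<>();
    private String customer;

    void createCustomer(String name) { this.customer = name; }
    void addPurchase(String item)    { purchases.add(item); }

    // Trivially renders an invoice so the test has something to check.
    String generateInvoice() {
        return "Invoice for " + customer + ": " + String.join(", ", purchases);
    }
}

public class InvoiceGenerationTest {
    // Each line of the keyword script becomes one intention-revealing API call;
    // no screens, fields, or buttons appear anywhere in the test.
    static String run() {
        SalesSystem sut = new SalesSystem();
        sut.createCustomer("Acme");
        sut.addPurchase("Item1");
        sut.addPurchase("Item2");
        return sut.generateInvoice();
    }

    public static void main(String[] args) {
        System.out.println(run());
    }
}
```

Because the test speaks only in domain terms, a later UI redesign cannot break it; only a genuine change in business behavior can.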
Slide 44

When There's No API Available
• Large gap between what we want to write and what can be executed
• Many tests to adjust when the UI changes, hence a high maintenance cost
Without a test API, the intention-based script we want to write becomes intention-obscuring code, with much duplication, written against the raw UI:

Test Invoice Generation - New Customer
-----------------------------
Goto Login Screen
Enter "Clerk" in UserName field
Enter "Pw123Secret" in Password field
Enter ...
-----------------------------
Goto Cust Screen
Click "New Customer"
Enter "Acme" in Name field
Enter "123 Main St." in Addr field
Enter ...
-----------------------------
GotoScreen( "Account" )
Find customer "Acme"
Click "Add Account"
Enter "Credit" in Type field
Enter ...
Slide 45

Keeping Tests Simple: Testing via Adapters
• Adapters can be tacked on
– A single place to adjust when the UI changes
– But may be complex and error prone
• Not the 1st choice (DfT is)
What we want to write: intention-based keywords; code in the Adapter simulates an API in front of the User Interface:

Test Invoice Generation - New Customer
-----------------------------
Logged in as Clerk
Item1, Item2 exist
-----------------------------
1. CreateCustomer "Acme"
2. CreateAccount NewCust
3. AddPurchase Item1
4. AddPurchase Item2
5. GenerateInvoice NewAcct
6. ...
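A minimal sketch of the adapter idea, assuming a hypothetical UI driver; every name here is illustrative. The adapter is the single place that knows how an intention like CreateCustomer maps onto screens and fields:

```java
import java.util.ArrayList;
import java.util.List;

public class AdapterSketch {
    // Stands in for a raw UI automation tool; logs the low-level steps it performs.
    static class UiDriver {
        final List<String> log = new ArrayList<>();
        void gotoScreen(String s)          { log.add("Goto " + s + " Screen"); }
        void click(String button)          { log.add("Click \"" + button + "\""); }
        void enter(String field, String v) { log.add("Enter \"" + v + "\" in " + field + " field"); }
    }

    // The adapter: one intention-based keyword expands into several UI steps.
    // When the UI changes, only this class needs to be adjusted.
    static class UiAdapter {
        private final UiDriver ui;
        UiAdapter(UiDriver ui) { this.ui = ui; }

        void createCustomer(String name) {
            ui.gotoScreen("Cust");
            ui.click("New Customer");
            ui.enter("Name", name);
        }
    }

    static List<String> createAcme() {
        UiDriver ui = new UiDriver();
        new UiAdapter(ui).createCustomer("Acme");
        return ui.log;
    }

    public static void main(String[] args) {
        System.out.println(createAcme());
    }
}
```

The tests keep the readable one-line keyword form; the fragility is concentrated in one adapter class instead of being smeared across every test.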
Slide 46

(C)OTS Record & Playback
• User executes tests manually; the tool records them as tests
• The tool replays the tests later without user intervention
[Diagram. Definition: a Test Recorder captures the user's Inputs and the SUT's outputs as Expected Outputs, storing them as Test Scripts in a Test Script Repository. Execution: a Test Runner replays each script's Inputs against the SUT via a Fixture, compares the actual Outputs to the Expected Outputs, and writes Test Results to a Test Result Repository.]
The tests are code/data interpreted by the test runner.
Slide 47

Sample Recorded Test (the "@@" comment lines were added manually):

@@ Login()
Browser("Inf").Page("Inf").WebButton("Login").Click
@@ GoToPage("MaintainTaxonomy")
Browser("Inf").Page("Inf_2").Check CheckPoint("Inf_2")
Browser("Inf").Page("Inf_2").Link("TAXONOMY LINKING").Click
Browser("Inf").Page("Inf_3").Check CheckPoint("Inf_3")
Browser("Inf").Page("Inf_3").Link("MAINTAIN TAXONOMY").Click
Browser("Inf").Page("Inf_4").Check CheckPoint("Inf_4")
@@ AddTerm("A", "Top Level", "Top Level Definition")
Browser("Inf").Page("Inf_4").Link("Add").Click
wait 4
Browser("Inf_2").Page("Inf").Check CheckPoint("Inf_5")
Browser("Inf_2").Page("Inf").WebEdit("childCodeSuffix").Set "A"
Browser("Inf_2").Page("Inf").WebEdit("taxonomyDto.descript").Set "Top Level"
Browser("Inf_2").Page("Inf").WebEdit("taxonomyDto.definiti").Set "Top Level Definition"
Browser("Inf_2").Page("Inf").WebButton("Save").Click
wait 4
Browser("Inf").Page("Inf_5").Check CheckPoint("Inf_5_2")
@@ SelectTerm("[A]-Top Level")
Browser("Inf").Page("Inf_5").WebList("selectedTaxonomyCode").Select "[A]-Top Level"
@@ AddTerm("B", "Second Top Level", "Second Top Level Definition")
Browser("Inf").Page("Inf_5").Link("Add").Click
wait 4
Browser("Inf_2").Page("Inf_2").Check CheckPoint("Inf_2_2")
Slide 48

Refactored Recorded Test

Login()
GoToPage("MaintainTaxonomy")
AddTerm("A", "Top Level", "Top Level Definition")
SelectTerm("[A]-Top Level")
Slide 49

Refactored Recorded Test
Now we hand-write additional tests using the resulting adapter (library):

Login()
GoToPage("MaintainTaxonomy")
AddTerm("A", "Top Level", "Top Level Definition")
SelectTerm("[A]-Top Level")
AddChildToCurrentTerm("A.1", "Definition of 1st Child Term of A")
AddChildToCurrentTerm("A.2", "Definition of 2nd Child Term of A")
Slide 50

Record, Refactor, Playback
• Use test recording as a way to capture tests
• Remove duplication by replacing it with calls to domain-specific Test Utility Methods
– using Extract Method refactorings
• Make the Test Utility Methods reusable
– Replace hard-coded literal values with variables/parameters
• Effectively turns recorded tests into programmed or keyword-driven test scripts
– But still through the UI Adapter & the original tool choice
Most appropriate with legacy systems, especially those with many interfaces.
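The refactoring above can be sketched as follows. This is a toy illustration, not the deck's tool: the "UI steps" are just logged strings, and the hypothetical addTerm method shows how repeated recorded fragments collapse, via Extract Method plus parameterization, into one reusable Test Utility Method:

```java
import java.util.ArrayList;
import java.util.List;

public class RecordedTestRefactoring {
    static final List<String> script = new ArrayList<>();

    // Stand-ins for the low-level steps a recorder would emit.
    static void set(String field, String value) { script.add("set " + field + "=" + value); }
    static void click(String button)            { script.add("click " + button); }

    // After Extract Method, recorded fragments like
    //   click Add; set childCodeSuffix=A; set taxonomyDto.descript=Top Level; ...
    // become one parameterized utility method:
    static void addTerm(String code, String description, String definition) {
        click("Add");
        set("childCodeSuffix", code);
        set("taxonomyDto.descript", description);
        set("taxonomyDto.definiti", definition);
        click("Save");
    }

    // Two keyword-level calls regenerate all ten low-level steps.
    static int run() {
        script.clear();
        addTerm("A", "Top Level", "Top Level Definition");
        addTerm("B", "Second Top Level", "Second Top Level Definition");
        return script.size();
    }

    public static void main(String[] args) {
        System.out.println(run() + " low-level UI steps generated by 2 keyword calls");
    }
}
```

Each hand-written test now reads at the keyword level, while the extracted methods remain the only place that knows the field names of the recorded UI.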
Slide 51

(C)OTS Record & Playback
[Approach grid: Test Preparation (Recorded / Refactored / Hand-written), Test Language (Code / Keyword* / Data), Test Interface (Raw UI# / Adapter / API), Test Data (Global, Static / Per Run / Per Test), with this approach's choices marked.]
Notes:
* Keywords, if used, tend to be very low level (GotoWindowNamed: name; SelectFieldNamed: name; EnterText: text) and are not the same as true Keyword-Driven testing.
# Most COTS tools operate at the UI or HTTP interface; many open-source tools do so as well.
[Ratings grid (Poor / OK / Good) rating the approach for Example-Driven work and for Workflow, System, Business Rules, Component, and Unit tests on Legacy vs. New systems.]
Slide 52

Record, Refactor, Playback
[Approach grid: Test Preparation (Recorded / Refactored / Hand-written), Test Language (Code / Keyword / Data), Test Interface (Raw UI / Adapter# / API), Test Data (Global, Static / Per Run / Per Test), with this approach's choices marked.]
Notes:
# The result of refactoring is an adapter between the test script and the SUT's UI.
[Ratings grid (Poor / OK / Good) rating the approach for Example-Driven work and for Workflow, System, Business Rules, Component, and Unit tests on Legacy vs. New systems.]
Slide 53

Built-In Record & Playback
• User executes tests manually; the SUT records them as tests
• The tool replays the tests later without user intervention
[Diagram. Definition: a Test Recorder built into the SUT captures the Inputs and Expected Outputs as Test Scripts in a Test Script Repository. Execution: a Test Runner, also in the SUT, replays the Inputs, compares the Actual Results to the Expected Outputs, and writes the outcome to a Test Result Repository.]
The tests are data interpreted by the test runner.
Slide 54

Built-in Record & Playback
[Approach grid: Test Preparation (Recorded / Refactored / Hand-written), Test Language (Code / Keyword / Data), Test Interface (Raw UI / Adapter / API), Test Data (Global, Static / Per Run / Per Test), with this approach's choices marked.]
Notes:
• Needs to be implemented within the SUT
• Can sometimes be retrofitted to legacy systems
Most appropriate with legacy systems when playing "automation catch-up".
[Ratings grid (Poor / OK / Good) rating the approach for Example-Driven work and for Workflow, System, Business Rules, Component, and Unit tests on Legacy vs. New systems.]
Slide 55

Sample Built-in R&PB Test Recording
[Screenshot: recording step "2. Supply Create"]
Slide 56

[Screenshot annotations: previously recorded user input; previously recorded choices; actual choices plus test results.]
Raw XML for the "Designation" Field. The <used-value> and <expected> elements were captured during recording; the <actual> elements, with their ok/surplus statuses, come from playback:

<field name="designation" type="selection">
  <used-value>DIRECTIONAL</used-value>
  <expected>
    <value>DIRECTIONAL</value>
    <value>WORK</value>
    <value>PSGR</value>
    <value>PLOW</value>
    <value>PLOW WORK</value>
    <value>ENG</value>
  </expected>
  <actual>
    <value status="ok">DIRECTIONAL</value>
    <value status="ok">WORK</value>
    <value status="ok">ENG</value>
    <value status="ok">PSGR</value>
    <value status="surplus">MIXED</value>
    <value status="ok">PLOW</value>
    <value status="ok">PLOW WORK</value>
  </actual>
</field>
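The comparison behind those status attributes can be sketched in a few lines. This is an assumed reconstruction, not the tool's actual code: each actual value is checked against the recorded expected list, unrecorded values are flagged "surplus", and recorded values that no longer appear would be flagged "missing":

```java
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

public class ChoiceListVerifier {
    // Returns a status ("ok", "surplus", or "missing") per choice-list value.
    static Map<String, String> verify(List<String> expected, List<String> actual) {
        Map<String, String> status = new LinkedHashMap<>();
        for (String v : actual)
            status.put(v, expected.contains(v) ? "ok" : "surplus");
        for (String v : expected)
            if (!actual.contains(v)) status.put(v, "missing");
        return status;
    }

    public static void main(String[] args) {
        // The designation lists from the XML on this slide.
        List<String> expected = List.of("DIRECTIONAL", "WORK", "PSGR", "PLOW", "PLOW WORK", "ENG");
        List<String> actual   = List.of("DIRECTIONAL", "WORK", "ENG", "PSGR", "MIXED", "PLOW", "PLOW WORK");
        System.out.println(verify(expected, actual));
    }
}
```

Run against the slide's data, every recorded value comes back "ok" and the new MIXED entry is flagged "surplus", matching the XML playback result.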
Slide 57

Sample R&PB Test Hooks

if (playback_is_on()) {
    choice = get_choice_for_playback(dialog_id, choices_list);
} else {
    choice = display_dialog(choices_list, row, col, title, key);
}
if (recording_is_on()) {
    record_choice(dialog_id, choice_list, choice, key);
}
Slide 58

Sample R&PB Test Hooks
Slide 59

Sample R&PB Test Hooks
Slide 60

Hand-Coded Tests (a.k.a. Scripted or Programmed Tests)
• The test automater writes test code to exercise the software
• Runs the test code whenever testing is required
The tests are a program that tests the program.
[Diagram. Definition: test scripts are written with a software development tool and stored in a Test Script Repository. Execution: the Test Runner runs each Test Script, which drives the SUT via a Fixture using the script's Inputs, compares the Outputs to the Expected Outputs, and writes Test Results to a Test Result Repository.]
Slide 61

Hand-Coded Tests
[Approach grid: Test Preparation (Recorded / Refactored / Hand-written), Test Language (Code / Keyword / Data), Test Interface (Raw UI / Adapter / API), Test Data (Global, Static / Per Run / Per Test), with this approach's choices marked.]
Notes:
• Hand-written code requires both software development skills and test automation skills.
• API preferred, but browser-based (UI) tests can also be scripted.
• Code can be primitive or abstract, therefore...
• Developers need training on writing clear tests!
[Ratings grid (Poor / OK / Good) rating the approach for Example-Driven work and for Workflow, System, Business Rules, Component, and Unit tests on Legacy vs. New systems.]
Slide 62

Hand-Coded Test - with Primitive Obsession

public void testAddItemQuantity_severalQuantity() {
    // Set up fixture
    final int QUANTITY = 5;
    Address billingAddress = new Address("1222 1st St SW", "Calgary",
            "Alberta", "T2N 2V2", "Canada");
    Address shippingAddress = new Address("1333 1st St SW", "Calgary",
            "Alberta", "T2N 2V2", "Canada");
    Customer customer = new Customer(99, "John", "Doe",
            new BigDecimal("30"), billingAddress, shippingAddress);
    Product product = new Product(88, "SomeWidget",
            new BigDecimal("19.99"));
    Invoice invoice = new Invoice(customer);
    // Exercise SUT
    invoice.addItemQuantity(product, QUANTITY);
    // Verify outcome
    List lineItems = invoice.getLineItems();
    if (lineItems.size() == 1) {
        LineItem actualLineItem = (LineItem) lineItems.get(0);
        assertEquals(invoice, actualLineItem.getInvoice());
        assertEquals(product, actualLineItem.getProduct());
        assertEquals(QUANTITY, actualLineItem.getQuantity());
        assertEquals(new BigDecimal("30"),
                actualLineItem.getPercentDiscount());
        assertEquals(new BigDecimal("19.99"),
                actualLineItem.getUnitPrice());
        assertEquals(new BigDecimal("69.96"),
                actualLineItem.getExtendedPrice());
    } else {
        assertTrue("Invoice should have exactly one line item", false);
    }
}
Slide 63

Hand-Coded Test - Appropriately Abstracted

public void testAddItemQuantity_severalQuantity() {
    // Set up fixture
    final int QUANTITY = 5;
    Product product = createAnonymousProduct();
    Invoice invoice = createAnonymousInvoice();
    // Exercise SUT
    invoice.addItemQuantity(product, QUANTITY);
    // Verify
    LineItem expectedLineItem = new LineItem(invoice,
            product, QUANTITY, product.getPrice() * QUANTITY);
    assertExactlyOneLineItem(invoice, expectedLineItem);
}

Test developers need training on writing clear tests!
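The abstracted test leans on Test Utility Methods such as createAnonymousProduct and assertExactlyOneLineItem. A simplified sketch of what they might contain follows; the Product and LineItem types here are cut-down stand-ins (the real LineItem above also carries the invoice and price), so treat the details as assumptions:

```java
import java.util.List;
import java.util.concurrent.atomic.AtomicInteger;

public class TestUtilities {
    private static final AtomicInteger ids = new AtomicInteger(88);

    // Creation Method: hides constructor details the test doesn't care about
    // and generates distinct values so tests never collide on shared data.
    static Product createAnonymousProduct() {
        int id = ids.incrementAndGet();
        return new Product(id, "Widget-" + id, 19.99);
    }

    // Custom Assertion: states the expected outcome once, in domain terms.
    static void assertExactlyOneLineItem(List<LineItem> items, LineItem expected) {
        if (items.size() != 1 || !items.get(0).equals(expected))
            throw new AssertionError("expected exactly one line item: " + expected);
    }

    public static void main(String[] args) {
        Product p = createAnonymousProduct();
        assertExactlyOneLineItem(List.of(new LineItem(p, 5)), new LineItem(p, 5));
        System.out.println("ok: " + p);
    }
}

// Minimal domain stand-ins; records supply value-based equals().
record Product(int id, String name, double price) {}
record LineItem(Product product, int quantity) {}
```

With the incidental detail pushed into these helpers, the test body reads as a one-paragraph statement of the requirement, which is what makes it usable as an example of the requirement.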
Slide 64

Keyword-Driven Tests
• The tests are expressed in a domain-specific vocabulary.
• The tests are read & executed by a test interpreter written by techies.
Prepared like Hand-Coded Tests, but with a much more limited vocabulary.
Slide 65

Keyword-Driven Tests
[Approach grid: Test Preparation (Recorded / Refactored / Hand-written), Test Language (Code / Keyword / Data), Test Interface (Raw UI / Adapter / API), Test Data (Global, Static / Per Run / Per Test), with this approach's choices marked.]
Notes:
• While the Keyword Interpreter may go against the Raw UI, it is better to delegate to an adapter if no API is available.
[Ratings grid (Poor / OK / Good) rating the approach for Example-Driven work and for Workflow, System, Business Rules, Component, and Unit tests on Legacy vs. New systems.]
Slide 66

Sample Keyword-Driven Test (e.g. Cucumber, JBehave, or Fit)
• The test script is defined using keywords
• The Keyword Interpreter invokes the underlying code
• Can go direct to the API or via an Adapter

Scenario Invoice Generation - New Customer
-----------------------------
Given: Logged in as Clerk
And: Item1, Item2 exist
-----------------------------
1. CreateCustomer "Acme"
2. CreateAccount NewCust
3. AddPurchase Item1
4. AddPurchase Item2
5. GenerateInvoice NewAcct
6. ...

[Diagram: the Cucumber Scenario Runner reads the scenario; keyword interpreters in the Cucumber Library (a CreateCustomer interpreter, an AddPurchase interpreter, and so on) drive the Component Under Test inside the SUT, and the result of each step is written to a Results Document.]
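A toy interpreter in the spirit of the diagram shows the mechanism: each script line names a keyword plus an argument, and the runner dispatches it to the code that drives the component under test. Cucumber, JBehave, and Fit do this with far more machinery; the names and the one-argument script format below are illustrative assumptions:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Map;
import java.util.function.Consumer;

public class KeywordRunner {
    // Interprets a script of "Keyword argument" lines and returns one result per step.
    static List<String> run(List<String> script) {
        List<String> results = new ArrayList<>();
        // The keyword vocabulary: each entry maps a keyword to the code behind it.
        Map<String, Consumer<String>> keywords = Map.of(
                "CreateCustomer",  arg -> results.add("created customer " + arg),
                "AddPurchase",     arg -> results.add("purchased " + arg),
                "GenerateInvoice", arg -> results.add("invoiced " + arg));
        for (String line : script) {
            String[] parts = line.split(" ", 2);   // keyword, then a single argument
            keywords.get(parts[0]).accept(parts[1]);
        }
        return results;
    }

    public static void main(String[] args) {
        System.out.println(run(List.of(
                "CreateCustomer Acme",
                "AddPurchase Item1",
                "GenerateInvoice NewAcct")));
    }
}
```

The business-facing script never mentions how a step is carried out, so the same scenario can run against the API today and through a UI adapter tomorrow by swapping the interpreter's bindings.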
Slide 67

Keyword-Driven vs. Coded DSL
Just two different ways to implement a Domain-Specific Language.
Per Martin Fowler's new book:
• Keyword-Driven is an External DSL
• Abstract Coded is an Internal DSL
Slide 68

Data-Driven Tests
• The tests are expressed as tabular data by users.
• The tests are read & executed by a test interpreter written by techies.
Runs the same test script many times; once per set of data.
Slide 69

Data-Driven Test
[Approach grid: Test Preparation (Recorded* / Refactored / Hand-written), Test Language (Code* / Keyword / Data), Test Interface (Raw UI / Adapter / API), Test Data (Global, Static# / Per Run / Per Test), with this approach's choices marked.]
Notes:
* The underlying script may be either hand-written or recorded and parameterized, but the data scenarios (input values and expected outputs) are almost always prepared by hand.
# The inputs/outputs are per test (per row), but there may be global or per-run data used as reference data by the underlying script.
[Ratings grid (Poor / OK / Good) rating the approach for Example-Driven work and for Workflow, System, Business Rules, Component, and Unit tests on Legacy vs. New systems.]
Slide 70

Sample Data-Driven Test in FIT
• The same script is run for each row of the table
• Avoids duplication of the test script
• Compact summary of input values & results
• Sometimes called a "Business Unit Test" or "Business Rule Test"

PayrollFixtures.WeeklyCompensation
Standard Hours | Holiday Hours | Hourly Wage | Pay()
      40       |       0       |     10      | $400
      40       |       0       |     20      | $800
      41       |       0       |     20      | $830
      40       |       1       |     20      | $840 expected / $800 actual
      41       |       1       |     20      | $870 expected / $830 actual
(The first three columns are inputs; Pay() is the output. The last two rows show failures.)

[Diagram: the Fit Test Runner's Table Interpreter drives the Component Under Test via the Fit Library and produces a marked-up table in the Results Document.]
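The data-driven loop behind the table can be sketched as below. The pay rule (time-and-a-half past 40 standard hours, double time for holiday hours) is an assumption chosen to reproduce the expected column; on the slide the SUT failed the last two rows, apparently ignoring holiday hours, whereas this sketch implements the expected rule so all rows pass:

```java
public class WeeklyCompensationTest {
    // Assumed pay rule: straight time to 40 hours, 1.5x beyond, 2x for holidays.
    static int pay(int standardHours, int holidayHours, int hourlyWage) {
        int regular  = Math.min(standardHours, 40) * hourlyWage;
        int overtime = Math.max(standardHours - 40, 0) * hourlyWage * 3 / 2;
        int holiday  = holidayHours * hourlyWage * 2;
        return regular + overtime + holiday;
    }

    public static void main(String[] args) {
        // One row per scenario: {standard hours, holiday hours, wage, expected pay}.
        int[][] rows = {
            {40, 0, 10, 400},
            {40, 0, 20, 800},
            {41, 0, 20, 830},
            {40, 1, 20, 840},
            {41, 1, 20, 870},
        };
        // The same "script" runs once per row, just as Fit's table interpreter does.
        for (int[] r : rows) {
            int actual = pay(r[0], r[1], r[2]);
            System.out.println((actual == r[3] ? "ok  " : "FAIL")
                    + " expected $" + r[3] + " actual $" + actual);
        }
    }
}
```

Adding a scenario means adding a row, not a test method, which is why this style suits business-rule and algorithm verification.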
Slide 71

Agenda
• Motivation
– The Agile Test Problem
– The Fragile Test Problem
• Approaches to Test Automation
– Test Preparation Approach
– Test Definition Language
– Test Interface
• Test Automation Strategy
– Selecting the right Approach(es)
– Maximizing Automation ROI
Slide 72

So What's the Point?
Why is the approach to test automation significant?
Because test automation is hard work, and the approach affects the nature of the benefits of the automation.
Slide 73

For Success, Focus on Intent
• Write the tests using the best language for expressing the requirement being validated.
– Not necessarily the language provided by the System Under Test's interface
– May require different approaches for different tests
• Close any gap using an adapter if necessary
Slide 74

Which Automation Approach? It depends heavily on context.
• Legacy Systems:
– Stabilize with Recorded Tests while you refactor to enable Component testing.
– Only do hand-written unit tests for new components.
• Greenfield Development:
– Keyword-Driven workflow and system tests
– Data-Driven tests for business rules
– TDD via hand-written Unit Tests
Slide 75

Which Automation Approach?
• Recorded tests:
– imply a "Test After" approach; won't help define the requirements
– typically result in tests with Primitive Obsession, i.e. Fragile Tests with a high test maintenance cost
– Best for: playing "catch-up" on Legacy Systems
• Hand-Written Tests:
– Amenable for use in Example-Driven Development
» But must use domain-specific terminology to be effective
– Can be written in code or keywords, depending on who's preparing the tests
– Best for: workflow tests (Keyword) and unit tests (code)
Slide 76

Which Automation Approach?
• Keyword-Driven Tests:
– Good separation between the business and technical work involved in automating tests
– Easy to prepare before development
– Best for expressing workflow or system tests
• Data-Driven Tests:
– Best for repeating the same test script with many combinations of inputs
– Best for: verifying Business Rules & Algorithms
» (A form of Component Testing)
Slide 77

Maximizing Test Automation ROI
• Treat automation as an investment
• Prioritize / triage which tests to automate
• At least 3 approaches to choose from:
– Traditional QA-Based Automation
– Collaborative Critical Path Automation
– Collaborative Selective Automation
Slide 78

Automation After Dev Complete (a.k.a. the Traditional Approach to Automation)
Summary:
– Done by the QA/SV Department (i.e. Testers)
– After the Product is Built
– Typically done using (C)OTS Record & Playback tools
Issues:
• Too Late for Defect Prevention
– Tests aren't available to the development team
• Too Late to Ensure Easy Automation
– System not Designed for Testability
• Tools Create Fragile Tests
– Unreadable due to Primitive Obsession and too much duplication
Slide 79

Typical Test Strategy #1 (typical when testing & automation is "QA's job": QA Automation)
• Large numbers of (possibly automated) "customer" tests
• Very few if any automated component or unit tests
[Pyramid: System Tests, manual or Record & Playback; Component Tests, typically manual; Unit Tests, typically manual]
Slide 80

Collaborative Automation on Critical Path (a.k.a. the Dogmatic (A)TDD Approach)
Summary:
– Goal: 100% automation
– Automate Tests Before Building Functionality
» A test automation task for each User Story
Issues:
• Some Tests are MUCH Harder to Automate
• May Increase Costs and Delay Benefits of Functionality
• May Cause EDD to be Abandoned
Slide 81

Typical Test Strategy #2 (Agile (A)TDD; origin: Mike Cohn)
• Large numbers of automated unit tests
• Some component tests
– Business Rules
– Technical Components
• Much fewer automated "customer" tests
[Pyramid: System Tests, manual or automated; Component Tests, hand-written; Unit Tests, hand-coded]
All must be runnable by developers.
Slide 82

Collaborative Automation based on ROI (a.k.a. the Pragmatic Approach)
Summary:
– Goal: Just Enough Automation
– Apply Agile Principles to the Implementation of the Automation
Issues:
– Won't Have Complete Test Coverage
– Can Lead to Automation Being Dropped in Favour of More Functionality
– Requires a Disciplined Product Owner, or
– A Fixed Budget for the Automation
Slide 83

What if Automation is Really Hard?
• Define the tests first
• Apply the 80/20 rule
– Automate the most valuable tests first
• Automation is optional
Slide 84

Closing Thoughts
• Are you automating to find defects or to prevent them?
• Are your automated tests good examples?
– Why not? What would you need to change?
• Are your tests low maintenance?
– Why not? What causes them to break?
– What could you change to make them break less often?
– ... to reduce the impact of breakage?
• Can anyone run the tests at any time?
– Can the developers run the tests on demand before they check their code in?
– What would you have to change to make that possible?
Slide 85

Thank You!
Gerard Meszaros
[email protected]
http://www.xunitpatterns.com
Slides: http://tour2010.xunitpatterns.com

Call me when you:
• Want to transition to Agile or Lean
• Want to do Agile or Lean better
• Want to teach developers how to test
• Need help with test automation strategy
• Want to improve your test automation

[Book covers: a Jolt Productivity Award winner (Technical Books), and a second book marked "Coming Soon"]

Upcoming Training: Testing for Developers
Oct 25-28 Waterloo
Nov 24-25 Calgary
Slide 86

References
• For Success, Build Record/Playback into Your Application - StarEast 2008 Class
– http://Tour2010.xunitpatterns.com