escaping automated test hell - one year later

Posted on 06-May-2015


DESCRIPTION

Slides from my talk at the 33rd Degree 2013 Conference in Warsaw. More than a year ago we faced the fact that we were hitting the wall with our large-scale automated testing in Atlassian JIRA. We analysed the problems and possible solutions and shared them with the community at 33rd Degree in 2012. Since then we have implemented a lot of our ideas, come up with new ones, learnt quite unexpected things and got rid of Selenium 1 completely. This session shows the learnings from our journey – escaping from Test Hell – back to normality. If you are interested to hear what problems you can (and probably will) face if you have thousands of automated tests on all levels of abstraction (functional, integration, unit, UI, performance) and what solutions can be applied to remedy them – this presentation is for you.

TRANSCRIPT


Escaping Automated Test Hell

Wojciech Seliga

One year later...

About me

• Coding for 30 years

• Agile Practices (inc. TDD) since 2003

• Dev Nerd, Tech Leader, Agile Coach, Speaker

• 5+ years with Atlassian (JIRA Development Team Lead)

• Spartez Co-founder

A year ago - recap

18 000 tests on all levels

Very slow and fragile feedback loop

Serious performance and reliability issues

Feedback Speed vs. Test Quality

Test Code is Not Trash

Design

Maintain

Refactor

Share

Review

Prune

Respect

Discuss

Restructure

Optimum Balance

• Isolation

• Speed

• Coverage

• Level

• Access

• Effort

Dangerous to tamper with

• Quality / Determinism

• Maintainability

Splitting the codebase is a key aspect of a short test feedback loop

Now

People - Motivation

Shades of Red

Pragmatic CI Health

Build Tiers and Policy

Tier A1 - green soon after all commits: unit tests and functional* tests

Tier A2 - green at the end of the day: WebDriver and bundled plugins tests

Tier A3 - green at the end of the iteration: supported platforms tests, compatibility tests

Wallboards: Constant Awareness

Training

• assertThat over assertTrue/False and assertEquals

• avoiding races - Atlassian Selenium with its TimedElement

• Unit tests over functional tests

• Brownbags, blogs, code reviews
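The assertThat preference above is about failure messages: a matcher reports both expected and actual values, where assertTrue reports nothing useful. A minimal sketch; the tiny assertThat/is pair below is hypothetical, standing in for Hamcrest's real matchers.

```java
// Sketch: why assertThat-style matchers beat assertTrue.
// This minimal assertThat/is pair is illustrative, standing in for Hamcrest.
public class AssertStyle {
    public interface Matcher<T> {
        boolean matches(T actual);
        String describe();
    }

    public static <T> Matcher<T> is(final T expected) {
        return new Matcher<T>() {
            public boolean matches(T actual) { return expected.equals(actual); }
            public String describe() { return "is <" + expected + ">"; }
        };
    }

    public static <T> void assertThat(T actual, Matcher<T> matcher) {
        if (!matcher.matches(actual)) {
            // Unlike assertTrue's bare "expected true", the failure names both sides:
            throw new AssertionError(
                "Expected: " + matcher.describe() + " but was: <" + actual + ">");
        }
    }

    public static void main(String[] args) {
        assertThat("OPEN", is("OPEN")); // passes silently
        try {
            assertThat("OPEN", is("CLOSED"));
        } catch (AssertionError e) {
            System.out.println(e.getMessage()); // prints: Expected: is <CLOSED> but was: <OPEN>
        }
    }
}
```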

Quality

Automatic Flakiness Detection and Quarantine

Re-run failed tests and see if they pass
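The re-run heuristic can be sketched as follows. This is an illustration of the idea, not Atlassian's actual quarantine code; all names here are hypothetical.

```java
import java.util.function.BooleanSupplier;

// Sketch of the re-run heuristic: a test that fails, then passes on retry,
// is flagged flaky (quarantine candidate) rather than reported broken.
public class FlakinessDetector {
    public enum Verdict { PASSED, FLAKY, BROKEN }

    public static Verdict classify(BooleanSupplier test, int retries) {
        if (test.getAsBoolean()) return Verdict.PASSED;
        for (int i = 0; i < retries; i++) {
            if (test.getAsBoolean()) return Verdict.FLAKY; // failed once, passed on re-run
        }
        return Verdict.BROKEN; // failed consistently
    }

    public static void main(String[] args) {
        int[] calls = {0};
        // Simulated flaky test: fails on the first invocation only.
        BooleanSupplier flaky = () -> calls[0]++ > 0;
        System.out.println(classify(flaky, 2));        // FLAKY
        System.out.println(classify(() -> false, 2));  // BROKEN
        System.out.println(classify(() -> true, 2));   // PASSED
    }
}
```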

Quarantine - Healing

SlowMo - expose races
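SlowMo's idea (slowing interactions down so that races between test code and page logic stop winning by luck) can be approximated with a delaying dynamic proxy. This sketch is an assumption about the technique, not the actual Atlassian implementation; the Page interface is invented for illustration.

```java
import java.lang.reflect.InvocationHandler;
import java.lang.reflect.Proxy;

// Sketch: wrap an interface in a proxy that sleeps before every call,
// so timing assumptions that normally hold by luck start failing visibly.
public class SlowMo {
    @SuppressWarnings("unchecked")
    public static <T> T slow(Class<T> iface, T target, long delayMillis) {
        InvocationHandler handler = (proxy, method, args) -> {
            Thread.sleep(delayMillis); // artificial latency exposes races
            return method.invoke(target, args);
        };
        return (T) Proxy.newProxyInstance(
            iface.getClassLoader(), new Class<?>[]{iface}, handler);
    }

    // Hypothetical page-object interface, for demonstration only.
    public interface Page { String title(); }

    public static void main(String[] args) {
        Page real = () -> "Dashboard";
        Page slowed = slow(Page.class, real, 50);
        long start = System.nanoTime();
        String title = slowed.title();
        long elapsedMs = (System.nanoTime() - start) / 1_000_000;
        System.out.println(title + " (took ~" + elapsedMs + "ms instead of ~0ms)");
    }
}
```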

Selenium 1

Ditching Selenium 1 - the sky did not fall in

Ditching - benefits

• Freed build agents - better system throughput

• Boosted morale

• Gazillion of developer hours saved

• Money saved on infrastructure

Ditching - due diligence

• conducting the audit - analysis of the coverage we lost

• determining which tests need to be rewritten (e.g. security related)

• rewriting the tests

Flaky Browser-based Tests

Races between test code and asynchronous page logic

Playing with a "loading" CSS class does not really help

Races Removal with Tracing

// in the browser:
function mySearchClickHandler() {
    doSomeXhr().always(function() {
        // This executes when the XHR has completed (either success or failure)
        JIRA.trace("search.completed");
    });
}
// In production code JIRA.trace is a no-op

// in my page object:
@Inject TraceContext traceContext;

public SearchResults doASearch() {
    Tracer snapshot = traceContext.checkpoint();
    getSearchButton().click(); // causes mySearchClickHandler to be invoked
    // This waits until the "search.completed" event has been
    // emitted *after* the previous snapshot
    traceContext.waitFor(snapshot, "search.completed");
    return pageBinder.bind(SearchResults.class);
}

Speed

Can we halve our build times?

Speed

Parallel Execution - Theory

(diagram: A1 test batches dispatched in parallel, running from start of build to end of build)

Parallel Execution - Reality Bites

(diagram: the same batches staggered in practice, stalled by agent availability)

Dynamic Test Execution Dispatch - Hallelujah

"You can't manage what you can't measure."

W. Edwards Deming

If you just believe in it, you are doomed.

You can't improve something if you can't measure it.

Profiler, Build statistics, Logs, statsd → Graphite

Anatomy of Build*

• Compilation

• Packaging

• Executing Tests

• Fetching Dependencies

• SCM Update

• Agent Availability/Setup

• Publishing Results

*Any resemblance to a maven build is entirely accidental

JIRA Unit Tests Build

• Compilation (7min)

• Packaging (0min)

• Executing Tests (7min)

• Fetching Dependencies (1.5min)

• SCM Update (2min)

• Agent Availability/Setup (mean 10min)

• Publishing Results (1min)

Decreasing Test Execution Time to ZERO alone would not let us achieve our goal!

Agent Availability/Setup

• starved builds due to busy agents building very long builds

• time synchronization issue - NTPD problem

SCM Update - Checkout time

• Proximity of SCM repo

• shallow git clones are not so fast and lightweight + generating extra git server CPU load

• git clone per agent/plan + git pull + git clone per build (hard links!)

• Stash was thankful (queue)

2 min → 5 seconds

Fetching Dependencies

• Fix Predator

• Sandboxing/isolation agent trade-off: changing
  rm -rf $HOME/.m2/repository/com/atlassian/*
  into
  find $HOME/.m2/repository/com/atlassian/ -name "*SNAPSHOT*" | xargs rm

• Network hardware failure found (dropping packets)

1.5 min → 10 seconds

Compilation

• Restructuring multi-pom maven project and dependencies

• Maven 3 parallel compilation FTW: -T 1.5C (*optimal factor thanks to scientific trial-and-error research)

7 min → 1 min

Unit Test Execution

• Splitting unit tests into 2 buckets: good and legacy (much longer)

• Maven 3 parallel test execution (-T 1.5C)

3000 poor tests (5min)

11000 good tests (1.5min)

7 min → 5 min
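One simple way to split a suite into two buckets like this is by historical per-test runtime, so the fast bucket gives quick feedback and the slow legacy bucket runs later. A sketch; the threshold, method names and test names below are illustrative, not the actual JIRA tooling.

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

// Sketch: partition tests into a fast "good" bucket and a slow "legacy" bucket
// based on historical per-test runtime.
public class TestBuckets {
    public static Map<String, List<String>> partition(Map<String, Long> runtimesMs,
                                                      long thresholdMs) {
        Map<String, List<String>> buckets = new HashMap<>();
        buckets.put("good", new ArrayList<>());
        buckets.put("legacy", new ArrayList<>());
        for (Map.Entry<String, Long> e : runtimesMs.entrySet()) {
            String bucket = e.getValue() <= thresholdMs ? "good" : "legacy";
            buckets.get(bucket).add(e.getKey());
        }
        return buckets;
    }

    public static void main(String[] args) {
        Map<String, Long> runtimes = new LinkedHashMap<>();
        runtimes.put("FastParserTest", 40L);   // hypothetical test names
        runtimes.put("LegacyDbTest", 900L);
        runtimes.put("IssueKeyTest", 15L);
        System.out.println(partition(runtimes, 100L));
    }
}
```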

Functional Tests

• Selenium 1 removal did help

• Faster reset/restore (avoid unnecessary stuff, intercepting SQL operations for debug purposes - building stacktraces is costly)

• Restoring via Backdoor REST API

• Using REST API for common setup/teardown operations


Publishing Results

• Server log allocation per test → now using Backdoor REST API (was Selenium)

• Bamboo DB performance degradation for rich build history - to be addressed

1 min → 40 s

Unexpected Problem

• Stability Issues with our CI server

• The bottleneck changed from I/O to CPU

• Too many agents per physical machine

JIRA Unit Tests Build Improved

• Compilation (1min)

• Packaging (0min)

• Executing Tests (5min)

• Fetching Dependencies (10sec)

• SCM Update (5sec)

• Agent Availability/Setup (3min)*

• Publishing Results (40sec)

Improvements Summary

Tests             Before    After    Improvement
Unit tests        29 min    17 min   41%
Functional tests  56 min    34 min   39%
WebDriver tests   39 min    21 min   46%
Overall           124 min   72 min   42%

* An additional ca. 5% improvement is expected once the new git clone strategy is consistently rolled out everywhere

The Quality Follows

But that's still bad

We want a CI feedback loop of a few minutes at most

Splitting The Codebase

Resistance against splitting

The last attempt: Magic Machine

Decide with high confidence (e.g. > 95%) which subset of tests to run based on the committed changes

Magic Machine

• Looking at Bamboo history (analysing correlation between changes and failures)

• Matching: package test/prod code and transitive imports

• Code instrumentation (Clover, Emma, AspectJ)

• Run most often failing first
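The package-matching heuristic above can be sketched like this: select only the tests whose package matches the package of a changed production file. Pure illustration under simplified assumptions (the real analysis also followed transitive imports and Bamboo failure history); all names and path conventions are hypothetical.

```java
import java.util.Collection;
import java.util.HashSet;
import java.util.List;
import java.util.Set;
import java.util.TreeSet;

// Sketch of the Magic Machine package-matching heuristic: map changed
// production files to the test classes living in the same package.
public class TestSelector {
    // e.g. "src/main/java/com/example/search/Query.java" -> "com.example.search"
    public static String packageOf(String path) {
        int java = path.indexOf("java/");
        int lastSlash = path.lastIndexOf('/');
        return path.substring(java + 5, lastSlash).replace('/', '.');
    }

    public static Set<String> selectTests(Collection<String> changedFiles,
                                          Collection<String> testFiles) {
        Set<String> changedPackages = new HashSet<>();
        for (String f : changedFiles) changedPackages.add(packageOf(f));
        Set<String> selected = new TreeSet<>();
        for (String t : testFiles) {
            if (changedPackages.contains(packageOf(t))) selected.add(t);
        }
        return selected;
    }

    public static void main(String[] args) {
        List<String> changed = List.of("src/main/java/com/example/search/Query.java");
        List<String> tests = List.of(
            "src/test/java/com/example/search/QueryTest.java",
            "src/test/java/com/example/issue/IssueTest.java");
        // Only the search-package test is selected.
        System.out.println(selectTests(changed, tests));
    }
}
```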

Inevitable Split - Fears

• Organizational concerns - understanding, managing, integrating, releasing

• Mindset change - if something worked for 10 years, why change it?

• We damned ourselves with big buckets for all tests - where do they belong?

Magic Machine strikes back

With heavy use of brain, common sense and expert judgement

Splitting the codebase

• Step 0 - JIRA Importers Plugin (3 years ago)

• Step 1 - New Issue View and Navigator (JIRA 6.0)

We are still escaping hell. Hell sucks in your soul.

Conclusions

• Visibility and problem awareness help

• Maintaining a huge testbed is difficult and costly

• Measure the problem

• No prejudice - no sacred cows

• Automated tests are not a one-off investment; they are a continuous journey

• Performance is a damn important feature

Do you want to help? We are hiring in Gdańsk

• Principal Java Developer

• Development Team Lead

• Java and Scala Developers

• UX Designer

• Front-End Developer

• QA Engineer

Visit us at the booth or apply at http://www.atlassian.com/company/careers

Images - Credits

• Turtle - by Jonathan Zander, CC-BY-SA-3.0

• Loading - by MatthewJ13, CC-SA-3.0

• Magic Potion - by Koolmann1, CC-BY-SA-2.0

• Merlin Tool - by L. Mahin, CC-BY-SA-3.0

• Choose Pills - by *rockysprings, CC-BY-SA-3.0

Thank You!
