Confidential
Rev PA1 2011-10-26
Lessons Learned in Implementing Test-on-Commit for Mobile Devices
Going from vision to reality is often not trivial
Introduction
Test-on-commit, also called integration testing, means testing each software upload before it is merged to the software main branch, and it is a key enabler of continuous integration
For mobile devices it is often more difficult than for web services or applications because of dependencies on hardware
A vision of having thousands of tests executed on each commit, for all different variants and configurations, is easy to set out in PowerPoint but very complex to realize in practice
This presentation covers some of the lessons learned when implementing test-on-commit for mobile devices
Integration Test Overview
[Diagram: the developer uploads a commit to the code repository; the upload triggers the build system, which in turn triggers the test system; the build and test systems report results back to the developer]
Lessons Learned Overview
Build Capacity
Test Execution Time
Test Case Robustness
Test Environment
Test Results Analysis
Test Case Repository & Ownership
Build Capacity
Building applications can go quite fast, but if there are hardware and software dependencies that require a rebuild of the entire system, the build time will increase drastically
On top of that, test-on-commit requires the build system to handle hundreds or thousands of commits each day, which puts heavy requirements on the capacity of the build cluster
Even worse, if each commit has to be built in several variants, this further increases the capacity requirements
Build Capacity Solution
One recommendation is to start small: don't build for all commits; select the most important areas and build only for commits to those areas
Perhaps select the highest-priority variants and test the rest after merge to the software main branch
Look at possibilities to increase build cluster capacity while starting out small with the integration test
Google has solved this by building in the cloud [2]
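The "build only for selected areas" recommendation can be sketched as a simple trigger filter that inspects which files a commit touches. A minimal sketch in Python; the area paths here are hypothetical, not taken from the source:

```python
# Decide whether a commit should trigger a pre-merge build, based on
# which source areas its changed files touch. The paths are hypothetical.
HIGH_PRIORITY_AREAS = ("telephony/", "connectivity/", "platform/kernel/")

def should_build_on_commit(changed_files):
    """Return True if any changed file falls inside a high-priority area."""
    return any(
        path.startswith(area)
        for path in changed_files
        for area in HIGH_PRIORITY_AREAS
    )

# A documentation-only commit is deferred to post-merge testing instead.
print(should_build_on_commit(["docs/readme.txt"]))         # False
print(should_build_on_commit(["telephony/call_setup.c"]))  # True
```

Commits filtered out here would still be covered by the post-merge test runs mentioned above, so nothing goes completely untested.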
Test Execution Time
It is possible to execute thousands of test cases in a very short time, but only certain kinds of test cases
Testing functionality and throughput over Wi-Fi will take much longer than a fraction of a second
Testing a file system's functionality can take a long time, depending on how much reading and writing to the file system is necessary
If the test execution time is too long this will put heavy requirements on the test system’s capacity
The test execution time should not be longer than the code review time, as the test results should feed into the merge decision, not delay the merge to the software main branch; if testing takes too long, it becomes an obstacle to continuous integration
Test Execution Time Solution
Design integration test cases with the time aspect in mind; don't just take existing test cases and try to squeeze them into the test scope
If possible, have a dynamic integration scope that selects only the test cases relevant for the specific commit: if the commit changes something in network signalling, run the test cases in that area and in areas with dependencies on network signalling
If this dynamic scope is not possible, make sure to at least touch each area, and then focus on adding more test cases to the high priority/high risk areas
Understand your scope – don’t try to test everything before merge to the software main branch
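The dynamic integration scope described above can be sketched as a mapping from source areas to test suites, with cross-area dependencies folded in. The area names, suite names, and dependency edges below are illustrative assumptions, not from the source:

```python
# Select only the test suites relevant for a commit: the suites for each
# changed area, plus suites for areas that depend on a changed area.
AREA_TESTS = {
    "network_signalling": ["test_signalling"],
    "wifi": ["test_wifi_throughput"],
    "filesystem": ["test_fs_basic"],
}
DEPENDS_ON = {
    "wifi": ["network_signalling"],  # Wi-Fi behaviour depends on signalling
}

def select_tests(changed_areas):
    """Return the suites for the changed areas and their dependent areas."""
    areas = set(changed_areas)
    for area, deps in DEPENDS_ON.items():
        if areas.intersection(deps):
            areas.add(area)  # a dependency changed, so this area is affected
    tests = []
    for area in sorted(areas):
        tests.extend(AREA_TESTS.get(area, []))
    return tests

# A signalling change also pulls in the dependent Wi-Fi suite.
print(select_tests(["network_signalling"]))
```

A real system would derive the dependency map from the build graph rather than maintain it by hand, but the selection logic stays the same.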
Test Case Robustness
If a test case is going to be executed hundreds of times each day, it is critical that the test case does not generate inaccurate results
Try to have as few dependencies on the external environment as possible, and try to limit the ways the test case can give false positives or negatives
Creating robust test cases that are complex and add value is one of the major challenges
If the test cases are not robust, no one will trust the results or act on them, making the tests practically useless
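One common mitigation, not described in the source but widely used, is to rerun a failure once and record which tests only pass on retry, so flaky test cases can be fixed or quarantined rather than silently trusted. A minimal sketch:

```python
# Rerun a failing test once and record whether it only passed on retry,
# so flaky test cases can be fixed or quarantined instead of trusted.
def run_with_flake_check(test_fn, retries=1):
    """test_fn is a zero-arg callable returning bool; returns (passed, flaky)."""
    if test_fn():
        return True, False       # clean pass
    for _ in range(retries):
        if test_fn():
            return True, True    # passed only on retry: flaky result
    return False, False          # consistent failure: likely a real problem

# Hypothetical flaky test: fails the first run, passes the second.
outcomes = iter([False, True])
print(run_with_flake_check(lambda: next(outcomes)))  # (True, True)
```

The flaky flag should feed a report that drives test-case fixes; retries that merely hide flakiness would erode exactly the trust this slide is about.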
Test Environment
Just as the test cases have to be robust, the test environment has to be robust: it cannot generate false positives or negatives, for the same reasons as the test cases
The test environment will be under a heavy load – make sure to invest in good equipment, as it must be able to handle a lot more than under normal circumstances
Take a Wi-Fi access point as an example: the cheaper ones will most likely fail to establish a connection perhaps 1 time out of 10 when accessed by several phones simultaneously; this is not good enough, so invest in a more expensive Wi-Fi access point to make the test environment more robust
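A quick environment sanity check before each test run helps separate equipment faults, like a failing access point, from genuine test failures. A minimal sketch; the probe names are hypothetical:

```python
# Probe the test environment before the real suite runs, so equipment
# faults surface as environment failures rather than false test failures.
def check_environment(probes):
    """probes maps resource name -> zero-arg callable returning bool.
    Returns the list of resources that failed their probe."""
    return [name for name, probe in probes.items() if not probe()]

# Hypothetical probes; a real setup might ping the Wi-Fi access point
# and verify that each phone under test is visible over USB.
probes = {"wifi_ap": lambda: True, "phone_usb": lambda: False}
print(check_environment(probes))  # ['phone_usb']
```

If any probe fails, the run can be aborted and the failure reported against the lab equipment instead of against the commit being tested.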
Test Results Analysis
Executing test cases is useless without proper analysis of the results
Remember to allocate resources for maintaining and monitoring the integration test after it has been implemented
Analysis must be quick to not delay integration, so there must be dedicated resources available to handle this
The more robust the test cases and the test environment are, the less time is spent on test results analysis
The results of the analysis must be communicated to the right stakeholders
Microsoft has implemented automatic test result analysis, which of course is an even better solution [3]
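Automatic result analysis can start as simple triage that classifies each failure and routes the report to the right stakeholders. A deliberately naive sketch; the categories, log markers, and owner addresses are hypothetical:

```python
# Triage a failure log into a category and route it to an owner.
# Categories, markers, and addresses are hypothetical examples.
FAILURE_OWNERS = {
    "environment": "lab-team@example.com",  # e.g. access point unreachable
    "product": "dev-team@example.com",      # likely regression in the commit
}
ENVIRONMENT_MARKERS = ("connection refused", "device not found",
                       "timeout waiting for ap")

def classify_failure(log):
    """Deliberately naive triage: environment fault vs. product failure."""
    text = log.lower()
    if any(marker in text for marker in ENVIRONMENT_MARKERS):
        return "environment"
    return "product"

def route_report(log):
    category = classify_failure(log)
    return category, FAILURE_OWNERS[category]

print(route_report("ERROR: timeout waiting for AP"))
# ('environment', 'lab-team@example.com')
```

Even this crude split reduces the manual analysis load, since environment failures go to the lab team without ever interrupting the developer whose commit was under test.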
Test Case Repository & Ownership
Unit tests and API tests are usually stored together with the code, but it is worth investigating whether storing integration tests separately adds value
There needs to be one clear owner of all the integration tests, as changes need to happen in a controlled and quick manner
This is often easier if the test cases are not stored together with code owned by someone else
Conclusion
Creating a test-on-commit system that runs an integration test on all uploads before merging to the software main branch is difficult
Google spent many years getting their system operational, but they have succeeded [1]
Doing the same for mobile devices is even harder, as it adds the complexity of the mobile device hardware and its dependencies
It is not easy, and it requires a lot of dedicated time and resources to implement, but it is a key enabler of continuous integration and a problem everyone will have to tackle eventually
References
[1] Tools for Continuous Integration at Google Scale, http://www.youtube.com/watch?v=b52aXZ2yi08
[2] Build in the Cloud: Distributing Build Outputs, http://google-engtools.blogspot.com/#!/2011/10/build-in-cloud-distributing-build.html
[3] Test Innovation, http://angryweasel.com/blog/?p=362