
2011 LogiGear Global Testing Survey

By: Michael Hackett, for LogiGear Corporation

www.logigear.com


TABLE OF CONTENTS

Survey Overview
Survey 1 – Automation Testing
Survey 2 – Test Process and SDLC
Survey 3 – Testing in Agile
Survey 4 – Outsourced and Offshore Testing
Survey 5 – Test Methods
Survey 6 – The Politics of Testing
Survey 7 – Metrics and Measurements
Survey 8 – Tools
Survey 9 – Managers
Survey 10 – Demographics


2011 LogiGear Global Survey Overview


2010 – 2011 LOGIGEAR GLOBAL TESTING SURVEY RESULTS – OVERVIEW

I am the Senior Vice President at LogiGear. My main work is consulting, training, and organizational leadership and optimization. I’ve always been interested in collecting data on software testing – it keeps us rooted in reality and not in some wonkish fantasy about someone’s purported best practice! Just as importantly, many software development teams can easily become myopic, viewing whatever they do as normal or “what everyone does.” Very often, clients ask me: “What do other companies do in a situation like ours?” I did this survey to keep my view of testing practices current.

In 2010 and 2011, I conducted a large survey called “What is the current state-of-the-practice of testing?” I opened the survey in the first part of 2010 and collected data for an entire year. Invitations were sent to testers around the world. Software testing is a global practice, with experts and engineers holding all sorts of ideas about how to do certain methods and practices differently, and I wanted to capture and understand that diverse cross-section of ideas.

Some of the data was pretty much what you’d expect, but for some of the sections, especially those around outsourcing, offshoring, automation, and Agile, the answers were quite surprising.

This overview is designed to give you an introduction to my approach and some preliminary findings. The remainder of this document goes into detail on the individual questions and responses for the entire survey.


The Goals

My goal in doing the survey was to move away from guesses about what is happening and what is common, and toward using actual data to provide better solutions to a wider variety of testing situations. First, we need to better understand the wide diversity of common testing practices already in use and how others are using these processes and techniques, for success or failure. In order to make positive changes and provide useful problem-solving methods in software development, and specifically in testing, we need to know what is actually happening at the ground level, not what a CTO might think is happening or wants to happen!

Also, when I write a white paper or article, I want it to reference and contrast real-world testing and software development. I hope this will help many teams check their practice against broader test/dev situations, as well as give them realistic ideas for improvement based on what other teams and companies are really doing!

The Questions

I wrote the survey on a wide variety of current topics, from testing on Agile projects, to opinions of offshore teams, to metrics. The survey also featured question sets on the size and nature of teams, training and skills, the understanding of quality, test artifacts, and the politics of testing.

This was a very large survey, with over 100 multiple-choice questions combined with several fill-in, essay-type responses. The survey was meant to be cafeteria style; that is, testers could choose the sections that applied to their work or area of expertise and skip those that did not apply to them, professionally or by interest. For example, there were sections for teams that automate, teams that do not automate, teams that self-describe as Agile, offshore teams, onshore teams, etc. No one was expected to complete the entire survey.

The questions originate from the software development team assessments I have executed over the years. A process assessment is an observation and questioning of how and what you and your team do, for the purpose of process improvement.


The Sample Set

We received responses from 14 countries!

Some Sample Responses

Here are some preliminary findings from my survey. Analyzing the entire survey will take more time, but I did want to put out a selection of findings to give you an idea of what type of information I will be sending out. I picked responses that were interesting because they confirmed ideas, or surprising because they touched on rarely discussed issues, poor planning, or old ideas. I’ve broken them down into four sections: “answers that I expected,” “conventional wisdom that seems validated,” “answers that did not appear uniform,” and some “surprising data” that was in some cases unexpected.



Answers that were along the lines I expected:

Question: Test cases are based primarily on:

A – Requirements Documents 62%

B – Subject Matter Expertise 12%

The overwhelming majority of teams still begin their work by referencing requirements documents. However good or bad, complete or too vague, most people start here. I did think the number of teams starting their test cases from workflows and user scenarios, using their subject matter expertise, would be higher. How a user completes some transaction or task is still, I guess, secondary to the requirement.

Conventional Wisdom that was validated:

Question: What is the name of your team/group?

A – QA 48.8%

B – Testing 20.5%

This is conventional wisdom, but it still surprised me. There is definitely a trend, at least in Silicon Valley, to move teams away from the outdated term “QA,” since the people who test rarely, almost never, really do QA. If you are a tester and you think you do QA, please return to 1985. It is interesting, though, that the number calling themselves QA has dropped below 50%; as time goes on this number will continue to drop.



Question: Educational Level (selected responses)

A – High School 3.0%

B – Bachelor of Arts/Sciences 40.0%

C – Some Graduate Work 19.0%

D – Master’s Degree 24.6%

E – PhD. 3.0%

It seems conventional wisdom that the vast majority of people who test have university degrees, but I am surprised at how many have done postgraduate work, hold a master’s degree, or have a PhD. It runs against the conventional wisdom that people who test are the least trained on the development team; perhaps they are the most educated!

Here is some more conventional wisdom, and a great point of interest when you are debating whether to write a test plan for each project: 60% of all respondents write test plans for each project.

Far from Uniform Answers:


34% of all respondents indicated that their regression testing was entirely manual

A very big surprise to me: the lack of automated regression! Wow. That is one of the biggest and most surprising results of the entire survey! Why do 1/3 of teams still do all of their regression manually? Bad idea, bad business objective.

52% do not test their application/system for memory leaks

The number of teams not doing some variety of memory, stress, DR (disaster recovery), buffer overflow (where applicable), load, scalability, etc. testing was another big surprise. We need to look further into this. Is it bad planning? Lack of tools, skill, or knowledge, or just keeping your fingers crossed? In many cases I bet this is bad business planning.

87% of respondents rank offshoring or outsourcing as “successful”

Such a very high number of people responding that offshoring and outsourcing was successful goes against the conventional wisdom that it is the managers who like outsourcing/offshoring, while production staff (the people who actually do the work) are not happy with it!

37% of teams say they do not currently automate tests, with 10% indicating they’ve never tried to automate

That over 1/3 of respondents currently do not automate tests is in line with what I see in my work at many companies, but it is contrary to popular belief and any sort of best practice. What I see out in the business world is that teams that automate think everyone automates and that they automate enough, while teams that do not automate see automation as uncommon, too difficult, or not something testers do. This number is way, way too high. Any team not automating has to seriously look at the service they are providing their organization, as well as the management support they are receiving from that organization!


The Results

The overriding result is that current testing practice is quite diverse! There is no single test practice, no one way to test, and no single preferred developer/tester ratio. Everyone’s situation was different, and even some similar situations produced very different ideas about product quality, work success, and job satisfaction!

My Future Plans

I plan to continue to commission surveys as part of my desire to take the pulse of what is really happening in the software development world with regard to testing, rather than relying on postulations from self-described experts. As technology changes, tools change, processes change, and practices grow, it is crucial for test teams, as a community, to share our ideas and learn from each other, and this starts with understanding where our industry is.


2011 LogiGear Global Survey 1 – Automation Testing


2010 – 2011 LOGIGEAR GLOBAL TESTING SURVEY RESULTS – AUTOMATION TESTING

The target audience of the survey was black box testers. Please note that to these respondents, test automation is mainly about UI-level automation, not unit, performance, or load testing.

These automation survey results contain two mutually exclusive sections: one set of questions for teams that currently automate tests and another set for teams that currently do not automate any tests.

I. Overview

Before delving into the respondents’ frame of mind with their answers to the Test Automation questions, I will highlight some results from the Politics of Testing, Training, Strategy, and Overview sections that set the stage for a better understanding of the issues these respondents face in test automation.

PT1 (Politics of Testing)- What phase or aspect of testing do you feel the management at your company does not understand? (You can select multiple answers.)

Projects are most often behind schedule because of shifting requirements, not test delays. 43%

How to measure testing 41%

How to adequately schedule testing 41%

Test automation is not easy 40%

Testing requires skill 35%

The impossibility of complete testing 32%

Choosing good coverage metrics 23%

None, my management team fully understands testing. 17%

Result analysis: The third-highest response, virtually tied with those above it: the area of testing that management does not understand is that test automation is not easy!


PT2- What challenges does your product team have regarding quality? (You can select multiple answers.)

Insufficient schedule time 63%

Lack of upstream quality assurance (requirements analysis and review, code inspection and review, unit testing, code-level coverage analysis) 47%

Feature creep 39%

Lack of effective or successful automation 36%

Poor project planning and management 33%

Project politics 31%

Poor project communication 27%

Inadequate test support tools 25%

No effective development process (SDLC) with enforced milestones (entrance/exit criteria) 25%

Missing team skills 23%

Low morale 16%

Poor development platform 8%

Result analysis: By far, the #1 answer here is insufficient schedule time. The #4 answer is a lack of successful automation. It is too easy to say that more investment in test automation will solve all your team’s problems, but it will definitely help! More, and more effective, test automation always helps projects. It is not the answer to all problems; as the second-highest choice clearly emphasizes, a lack of upstream quality practices cannot be solved by downstream test automation. But better test automation will go far in helping the manual test effort and, by doing so, at least relieve some tester stress.


T1 (Training)- What skills are missing in the group? (You may select multiple answers.)

Automation skill 52%

Coding/tool building 42%

Technical understanding of the code, system, platform, environment 35%

Subject matter/domain knowledge 33%

QA/testing skill 32%

Test tool (ALM, test case manager, bug tracking, source control, continuous integration tool) 22%

Results analysis: Interesting but not surprising, the answer chosen most often, by more than half the respondents, regarding what their teams lack is test automation skills! It is obvious and clear that acquiring test automation skills is the single most important point of job security.

S1 (Strategy)- How do you do regression testing?

Both 47%

Manual 34%

Automated 15%

We do not do regression testing 4%

Results analysis: A very big surprise to me: the lack of automated regression! Wow. That is one of the biggest and most surprising results of the entire survey! Why do 1/3 of teams still do all of their regression manually? Bad idea, bad business objective.
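For any team starting to dig out of all-manual regression, the core idea is small: pin known inputs to known-good outputs so that any behavior change fails fast. Here is a minimal sketch of that idea in Python; the apply_discount function and its expected values are hypothetical stand-ins for real application logic and manually verified results.

# Minimal regression-check sketch: known inputs are pinned to
# known-good outputs, so any behavior change fails immediately.
def apply_discount(price, percent):
    # Hypothetical unit under test.
    return round(price * (1 - percent / 100.0), 2)

REGRESSION_CASES = [
    # (price, percent, expected) -- expected values captured from a
    # release that was verified manually.
    (100.00, 10, 90.00),
    (19.99, 0, 19.99),
    (50.00, 100, 0.00),
]

def test_regression():
    for price, percent, expected in REGRESSION_CASES:
        actual = apply_discount(price, percent)
        assert actual == expected, f"{(price, percent)}: got {actual}, expected {expected}"

test_regression()

Run under any scheduler or CI job, a table like this grows with each release and replaces the most repetitive slice of a manual regression pass.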

O1 (Overview)- Do you currently automate testing?

Yes 63%

No, not currently 37%

If no, has your team ever automated testing?

Yes 90%

No 10%

Results analysis: Over 1/3 of respondents currently do not automate tests. These results are contrary to popular belief and any sort of best practice. What I see out in the business world are many teams that think everyone automates and that they themselves automate enough. I also see many teams where all testing is manual, and automation is seen as uncommon, too difficult, and not something testers do. This number is alarmingly high. Any team not automating has to seriously look at the service they are providing their organization, as well as the management support they are receiving from that organization!


II. For teams that currently automate testing

A1 (Automate)- Have you been trained in test automation?

Yes 70%

No 30%

Results analysis: For teams that currently automate, having 30% of their members untrained in test automation is deeply problematic. When this is the case, I often see the problem as too much technical information centralized in too few people. This is problematic for the career growth of staff, whether business analysts or subject matter experts, as it demonstrates that management does not invest in its staff. If your team automates and you have been left behind, it is a good idea to get a book, take a class, educate yourself, and insinuate yourself into test automation for your own career planning!

A2- Estimate what percentage of the test effort is automated?

Less than 25% 36%

50 – 74% 32%

Over 75% 14%

25 to 49 % 13%

Very little. 5%

All our testing is automated 0%

Results analysis: It is still amazing to see how little test groups automate. Over 40% of teams automate less than 25% of their tests! Even though 46% of teams automate over 50% of their test effort, these numbers reveal that very significant strides can still be made in reducing the dependence on manual testing.


A3- How would you describe your automated test effort?

It is a very effective part of our test effort. 33.30%

It is key to meeting schedule dates. Our test schedule would be much longer without automation. 19%

It is key to improving product quality. 14.30%

It is key to product stability. 9.50%

It is effective for just what we have automated and does not significantly affect the rest of the test effort. 9.50%

It frees me up from repetitive tasks to do more interesting, exploratory testing. 9.50%

It is somewhat useful, but has not lived up to expectation or cost. 4.80%

It is the most important part of our test effort. 0%

It is a waste of time and does not tell us much. 0%

Results analysis: What surprises me most about this set of answers is that no one thought the automated tests were the most important part of their test effort. What does this say? That respondents take their automation for granted? That it isn’t trusted, or isn’t good? Or, quite possibly, that it is important, but having a human manually interact with the system is the most important part of the test effort! Exactly 1/3 said it was a very effective part of the effort. Almost 20% said it is key to meeting the schedule. The fact that these numbers are not higher shows that test automation has not yet achieved its full potential in helping teams.

A4- How do you measure test automation effectiveness?

Time saved from manual execution. 81%

Percentage lines of code covered by automated tests. 14.30%

Number of bugs found. 4.80%

Product stability measured by number of support/help desk calls from users or hot fix/patch fixes. 0%

We do not measure test effectiveness for automation 0%

Results analysis: This is an overwhelming vote for measuring test automation by time saved from manual execution. An important observation: we do not measure test automation by bugs found. It is important for teams to understand this, and clearly they do.
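To make the dominant measure concrete, here is a small sketch of a time-saved calculation. The function name and all of the figures are illustrative assumptions, not survey data.

# Time-saved sketch: minutes of manual execution avoided in one cycle.
def automation_time_saved(manual_minutes, automated_minutes, runs_per_cycle):
    return (manual_minutes - automated_minutes) * runs_per_cycle

# e.g. a 300-minute manual regression pass that the automated suite
# covers in 45 minutes, run 8 times per release cycle:
saved = automation_time_saved(manual_minutes=300, automated_minutes=45, runs_per_cycle=8)
print(f"~{saved / 60:.0f} hours saved per cycle")  # ~34 hours

Even a rough figure like this, tracked per release, gives management the measurement the respondents say they rely on.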


A5- What strategy do you use to design your automated tests?

Direct scripting of manual test cases 33.30%

Data driven 19%

Action based/keyword 19%

Record and Playback 14.30%

No specific method 14.30%

Results analysis: These results are interesting for what they show about the level of sophistication of various teams’ efforts.
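As a rough illustration of the difference in sophistication, here is a minimal sketch of the action-based/keyword style from the list above: tests are data rows executed by a small engine, rather than manual test cases scripted one by one. The action names and the in-memory "app" dictionary are hypothetical stand-ins for a real keyword framework and application under test.

# Keyword/action-based sketch: the test is data, the engine is tiny.
def do_enter(app, field, value):
    app[field] = value          # stand-in for typing into a UI field

def do_check(app, field, expected):
    assert app[field] == expected, f"{field}: {app[field]!r} != {expected!r}"

ACTIONS = {"enter": do_enter, "check": do_check}

TEST_ROWS = [                   # rows a non-programmer could add to
    ("enter", "username", "mhackett"),
    ("enter", "role", "tester"),
    ("check", "username", "mhackett"),
]

def run_keyword_test(rows):
    app = {}                    # hypothetical application state
    for action, field, value in rows:
        ACTIONS[action](app, field, value)

run_keyword_test(TEST_ROWS)

The design choice is that new tests become new rows, not new scripts, which is what separates the keyword approach from direct scripting of each manual test case.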

A6- What is the main benefit of test automation?

More confidence in the released product 42.10%

Faster releases/meeting the schedule 36.80%

Higher product quality 10.50%

Waste of time 5.30%

None/no benefit 5.30%

Finding more bugs 0%

More interesting job/skill than manual testing 0%

Finding less bugs 0%

Less focus on bug finding 0%

Slower releases 0%

Additional useful comment from a respondent: “Some of the best uses of automation are to exercise the application, perform redundant or error-prone tasks, configure or set up test environments, and to execute regression tests.”

Results analysis: The results are encouraging for automation: there is a consensus that testers believe test automation provides more confidence in the quality of the product and increases the ability to meet schedules.


III. For teams that do not automate testing

A7- Have you tried to automate tests for your product?

Yes 56%

No 44%

Results analysis: A surprising number of teams have never tried automating! I think this is another dark secret of software testing. Drawing from my own speculation, it is my opinion that many companies have never invested in test automation, do not realize its benefits, are afraid to try something new, or realize they need a significant investment to make it work and are not willing to further fund testing. It could also be that teams tried automating and gave up, were not supported by the development organization, or had their test tool funding cut. These are situations that need to be addressed for test teams to provide long-term benefits to the organization.

A8- What would prevent you from automating tests now? (You may select multiple answers.)

Investment in automation program of training, staff, tool, and maintenance cost is too high. 43.80%

Tool cost too high. 37.50%

Management does not understand what it takes to successfully automate. 37.50%

Not enough technical skill to build successful automation. 37.50%

Code or UI is too dynamic to automate. 37.50%

Test case maintenance cost too high. 25%

Not enough knowledge about how to make automation effective. 25%

It will not help product ship with better quality or in shorter schedule. 25%

Bad project management will not be helped by automating tests. 12.50%

Results analysis: The great variety of reasons why teams do not automate is clear: cost, management’s misunderstanding of automation, and lack of knowledge are the great downfalls of test automation.


A9- Are you, or is your current team, technically skilled to the point where you can build and maintain test automation? (Remember, this response is only from teams currently not automating.)

No 56%

Yes 44%

Results analysis: If you are a software tester without much knowledge about automation, it would be best for your own career security to dive into test automation, read as much as you can, learn about the best methods, and see what various tools can actually do. Take responsibility to learn about the future of your career.

A10- Would product quality improve if you automated tests?

Yes 69%

No 31%

Results analysis: It is problematic that 31% of respondents do not see product quality improving with automation. Some teams may think the quality of their product is high and that they do not need to automate. For those teams not so optimistic, there are a few possible explanations: not understanding what information automated tests do and do not give you; not understanding which tasks can be automated to free up time for more, deeper manual testing; but also, some teams may be resigned to low-quality products, regardless of how their testing gets done.

IV. Participant Comments

The following statements are comments from respondents about their experience with test automation:

1. “Make sure the test cases that are designated for automation are of the right quality.”

2. “I have oh so many. I worked for a test tools company for over 8 years. How much time do you have? Common problems were inflated expectations of automation, lack of skills to implement effectively, lack of cooperation with the dev teams in enabling testability features, etc. But the worst stories I have are related to an over-reliance on automation where tools replaced people for testing and products were shipped with confidence that failed in the field horribly upon first use. These scenarios were VERY embarrassing and cause me to often throw the caution flag when I see a team driving toward ‘automating everything.’”

3. “Automation frees up resources to do more stuff. Instead of spending 5 days running a manual regression the automated regression runs in 1/2 day and the team can spend a day doing data validation and exploratory testing.”

4. “Test automation requires collaboration between developers, automation engineer, and functional test engineer. The more transparent the automation effort is, the more useful it will be.”

5. “Identify the appropriate tools for your project. Use it everywhere possible, when in the testing.”

6. “I’m not sure if this is experienced by testers worldwide, but I had several encounters of IT Project Managers having the misconception of test automation. They have the expectation that every functionality in the application should be fully automated. As a test automation engineer, I’ve [learned] that in an application, not all functionality could be automated due to limitations on the automated tool. Or the application function is too complex to automate, producing an ineffective test automation effort. My strategy to overcome this is to advise the manager to identify the critical functionalities that can be effectively automated, thus reducing manual testing effort significantly and reaping the most out of the automated tool.”


2011 LogiGear Global Survey 2 – Test Process and SDLC


2010 – 2011 LOGIGEAR GLOBAL TESTING SURVEY RESULTS – TEST PROCESS & SDLC

Process

In such a long survey, I wanted to keep process questions to a minimum, seek free-form comments and opinions on perceptions of process, and save more survey time for the assessment of actual practices, strategy, and execution rather than an individual’s plan for a process. The following section of questions deals directly with the development process itself. Results from this section have always been informative for managers, as they deal directly with how things ought to be and with perceptions of the process itself; in most cases there is a discrepancy between reality and the best-laid plans! Let’s take a look at the Process portion of the survey.

P-1. How would you describe the effort to release your team’s software product or application?

The process is occasionally difficult, but we reach fair compromises to fix problems 43.40%

The projects are smooth with occasional process or product problems 28.90%

It’s a repeatable, under-control process. 19.70%

It’s difficult; problems every time 6.60%

It’s a huge, stressful effort 1.30%

Result analysis: This is an encouraging response! The overwhelming majority, 92%, say their development process is either under control, smooth, or only occasionally difficult. Only eight percent describe the process as difficult or stressful.

I am surprised at the very positive response teams have for their development process. This is a very common area for teams to complain and express frustration, and current conventional wisdom has teams frustrated with traditional processes. More than half of the survey respondents self-describe as using a development process other than Agile.


P-2. Do you think your SDLC processes are followed effectively?

Yes 52.1%

No 47.9%

Now we see the big disconnect with the first question. This response is a virtual split: just over half think their SDLCs are followed effectively, and the remainder do not.

From my experience, what I find in development today is that there are “process docs” detailing how teams should effectively make software, yet these are commonly written outside of development, often by consultants. Teams regularly disregard these docs and are more comfortable with their own culture or practice, expressed or implied in tribal knowledge.

P-3. Have people been trained in the process?

Yes 74.3%

No 25.7%

Result analysis: These are encouraging results, matching what I expected for internal training. If a team has a process in place, it would be easy for the department to create either a PowerPoint slideshow or a flash video of the process. Such a tool would make training easy for all employees. The opposite would be a team that does not have a process and works based on either tribal knowledge or a “whatever works” ethic, a problematic approach for long-term goals. Making your SDLC a training standard is also a great opportunity to question why the team follows certain practices, to sharpen the process, and to connect reality to the “shoulds.”

P-4. During the testing phase of a project, what percentage of time do you spend executing tests in an average week? Example: 10 hours testing in a 40-hour week: 25%

50% 27.30%

74% – 51% 23.40%

Less than 25% of my time is spent on executing test cases 20.80%

49% – 26% 18.20%

More than 75% of my time is spent on executing test cases 10.40%


Result analysis: If there is one big, dark, hidden secret in testing I have chosen as my cause, it is to expose how much time testers actually spend testing, as opposed to documenting, sitting in meetings, building systems and data, planning, and doing maintenance: all those things testers need to do as well. The perception of most managers is that testers spend the overwhelming majority of their time executing tests. This is not the case.

Ten percent of respondents say they spend 75% or more of their time testing. Read another way, only 10% of respondents are testing at least 30 hours in a 40-hour week, with 10 hours a week spent on other efforts.

Just over 20% put themselves in the lowest category, admitting to spending less than 25% of their time testing. This means 10 hours/week or less is spent testing and, during a test phase, 30 hours/week is spent working on other initiatives.

Two-thirds (66.3%) of respondents spend half their time or less (20 hours/week or less) actually testing during a testing phase. This is common, and always very surprising to managers.

I do not conclude that any fault lies in this. I know what test teams have to do to get a good testing job planned, executed, and documented. Yet, in some situations, this is an indication of a problem. Perhaps the test team is forced into too much documentation, must attend too many meetings, or gets called into supporting customers and cannot make up the lost testing time.

What is most important here is that everyone concerned (managers and directors, project managers, leads, anyone estimating project size) knows that most testers spend half or less of their time testing.
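The arithmetic behind these bands is simple; here is a quick sketch assuming the 40-hour week used in the question’s example. The percentages are the ones quoted above, not new data.

# Convert "% of time spent executing tests" into hours per week.
def testing_hours(percent_of_time, week_hours=40):
    return week_hours * percent_of_time / 100

print(testing_hours(25))   # 10.0 hours/week actually testing
print(testing_hours(50))   # 20.0
print(testing_hours(75))   # 30.0

Seen in hours rather than percentages, “half their time or less” means 20 hours a week or less of actual test execution, which is the figure worth repeating to anyone sizing a project.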

P-5. How much time do you spend documenting (creating and maintaining) test cases during an average product cycle?

Less than 25% of my time is spent on documenting test cases 51.30%

49% – 26% 27.60%

50% 11.80%

74% – 51% 5.30%

More than 75% of my time is spent documenting test cases 2.60%

We do not document our test cases 1.30%


Result analysis: There are many useful pieces of information to glean from this. Few groups spend too much time documenting. This is a great improvement from just a few years ago, when many teams were under tremendous pressure to document every test case. Aside from being completely useless, that pressure leads to missed bugs! Teams spending so much time documenting were not testing enough, causing testers to overlook bugs.

Some teams were collapsing under the stress of regulatory pressure to prove requirements traceability to auditors, all while using naïve test case design methods or using MS Word for test cases.

The last few years have seen an explosion of better test case management tools, better test design methods like action-based testing, and Agile methods whose lean manufacturing ideas have teams agreeing that less is more when it comes to test case documentation.

Very few teams reported not documenting their tests. This is a significant improvement over just a few years ago: during the dot-com boom, a web application might get a rapid-fire test with no time for documenting any tests, no regression testing, and no repeatability. In the next release, you were expected to start from scratch. All business intelligence was left undocumented and kept with a few individuals. Hopefully, those days are gone.

Still, almost 20% of the groups report spending 50% or more of their time during a project documenting. That is pretty high. There has to be an excellent reason those groups are documenting so much; otherwise, this is a problem. If you are managing that time, you must ask yourself: do these testers have enough time to test? Is it a test project or a documentation project?

P-6. If your group receives requirements documents prior to the test planning and test case design process, how would you characterize the adequacy and accuracy of these documents?

Very useful 48.60%

Somewhat useful 48.60%

Useless 2.70%

Result analysis: It is a positive result that only very few respondents find their requirements useless. It is also encouraging that almost half of the respondents find their requirements very useful! This is habitually another area where test teams complain about the quality of the requirements docs.

An assumption from these results is that requirements docs may be getting better. Perhaps a balance has developed on many teams as to how much information business analysts or marketing teams need to give both developers and test teams for them to do their jobs effectively. That, or test teams have stopped complaining about requirements and make do in other ways.


P-7. What is your view on the quality of the code that is handed to you at the start of testing?

Usually stable/testable, no idea about unit testing, no accompanying documentation of the build 45.90%

Stable, unit tested 21.60%

Stable with build notes 13.50%

Often unstable with accompanying documentation of known problems 6.80%

Often unstable, little/no information on unexpected changes 6.80%

Very stable, unit tested, with build notes explaining bug fixes and changes 5.40%

Result analysis: To highlight: 40% of respondents appraise their builds as stable; 46% appraise their builds as usually stable; and 13% find the quality of code often unstable. This all seems pretty good, but one area in this data is particularly troubling.

Almost half of all respondents do not get information from development: testers have no idea about unit testing and no information about what changed in the build. There is no viable reason for this, and it hurts product quality.

Agile development, with TDD and daily scrums, is meant to prevent this problematic lack of information. The Continuous Integration practice, including automated re-running of the unit tests and a smoke or build acceptance test, is very effective in speeding up development and delivery.
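As a rough sketch of the build acceptance idea, the stand-alone check below is the kind of gate a Continuous Integration job could re-run on every build before handing it to testers. The modules and attributes in the list are placeholders; a real team would substitute its own critical imports and entry points.

# Build-acceptance ("smoke") gate sketch: fail fast if core pieces of
# the build don't even import or expose their entry points.
import importlib

SMOKE_CHECKS = [
    ("json", "loads"),        # placeholder: module and attribute the build must provide
    ("sqlite3", "connect"),   # placeholder
]

def smoke_test():
    for module_name, attr in SMOKE_CHECKS:
        module = importlib.import_module(module_name)
        assert hasattr(module, attr), f"{module_name}.{attr} missing"

if __name__ == "__main__":
    smoke_test()
    print("build acceptance passed")

A gate this small cannot judge quality, but it does give the test team the one thing the respondents above say they lack: a signal about the build before they invest a test cycle in it.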

The following statements are hand-written responses.

P-8. Please fill in the blank: My biggest problem with our development process today is:

• “Not all our BAs are trained in writing effective, clear requirements.”
• “Lack of Unit testing. Unit Testing is NOT automated, very few developers write test harnesses. It is all manual Unit Testing.”
• “We have no clue what they are doing.”
• “We test our changes, but do not test the overall product. Regression testing is our biggest problem.”
• “It’s a black hole!”
• “On most projects, there is a lack of collaboration and cooperation between test and development teams (these by and large are not Agile projects, of course!).”
• “No technical documentation of what had been built.”
• “They are rude with the testing team.”
• “We need earlier involvement.”
• “They don’t understand the business or the users well enough.”
• “Bad communication.”
• “Bad estimation.”
• “No timely escalation of probable risks on quality delivery.”
• “Too many processes are followed by rote.”
• “Bad scope and requirements management.”
• “They are laying off QA staff and I’m not sure how they are going to adequately test the product.”
• “Lots of documentation required that does not increase the quality of the product.”


P-9. If you could do anything to make your projects run smoother, what would that be?

• “Better communication.”
• “More communication with remote team.”
• “More testing by development.”
• “Unit testing to be mandatory & unit test report should be treated as an exit criteria to start the Testing.”
• “Send bad developers to training.”
• “Spend more time automating regression test cases.”
• “Automate our testing.”
• “More time allowed for test case preparation / documentation.”
• “Re-focus on planning and requirements gathering. Also we could stand to enforce creation of unit tests by developers. We rely too heavily on QA Automation to catch everything.”
• “Get buy-in from key people up-front and work to expose icebergs (blockers to success) as early as possible.”
• “Policy in handling customer requests on change requests. Project management and Sales Team have to follow process so as not to over commit on deliveries.”
• “Plan to reduce last-minute changes.”
• “Lighten the documentation.”
• “Stronger project managers that can lead.”
• “Better project management, better enforcement of standards for SW development, CM and Testing.”
• “Integrate ALM tools.”

P-10. If you have a lessons learned, success or failure story about your team’s development processes that is interesting or might be helpful to others, please write it below:

• “We have done a good job on creating a repeatable build process. We release once a week to Production. Where we fail is in integration and regression testing.”

• “The processes are well-defined. The team’s commitment and unity of the team are critical to the success of the project.”

• “Don’t develop in a vacuum. The less exposure a team has to the business, user needs, how software supports tasks, etc., the less likelihood of success. Get integrated – get informed – get exposed! At a previous company, I used to drag my developers out to customer sites with me and while they dreaded facing customers, they were ALWAYS extraordinarily energized at the end having learned so much and feeling much more connected and *responsible* for satisfying customers. This tactic was ALWAYS valuable for our team.”

• “Maintain high morale on the team. Motivate them to learn and develop technical and soft skills.”


2011 LogiGear Global Survey 3 – Testing in Agile


2010 – 2011 LOGIGEAR GLOBAL TESTING SURVEY RESULTS – TESTING IN AGILE

Michael Hackett, Senior Vice President, LogiGear Corporation

Question 1

Have you been trained in Agile Development?

Yes 47.8%

No 52.2%

Result analysis: The fact that more than half of the respondents answered “no” here is troubling in many ways; let’s just stick to the practices issue. It is clear some of these organizations are calling themselves “Agile” with no reality attached. Whether you want to call them “ScrumButs” or refer to them as Lincoln’s five-legged dog, calling yourself “Agile” without implementing the practices and training on what it is all about is just not Agile! Attempting to be Agile without training the whole team in the why and how of these practices will fail.

Question 2

Since your move to Agile Development, is your team doing:

More Unit Testing? 50%

Less Unit Testing? 6%

The Same Amount of Unit Testing? 28%

I have no idea? 16%

Result analysis: There are many ideas to take from this. That more unit testing is happening in 50% of the responding organizations is a good thing! That more unit testing is happening at only 50% of the organizations is a problem. More troubling to me is that 16% have no idea! This is un-Agile on so many levels: a lack of communication, no transparency, misguided test efforts, a lack of information on test strategy, test effort, and test results, and a lack of teamwork!

Question 3

Does your team have an enforced definition of done that supports an adequate test effort?

Yes 69.6%

No 30.4%

Result analysis: This is encouraging. Hopefully the 30% without a good definition of done are not “ScrumButs” and will be implementing a useful definition of done very soon!


Question 4

What percentage of code is being unit tested by developers before it gets released to the test group (approximately)?

100% 13.6%

80% 27.3%

50% 31.5%

20% 9.1%

0% 4.5%

No Idea 13.6%

Result analysis: I won’t comment again on the “No Idea” answer, as that was covered above, but it is important to know that most Agile purists recommend 100% unit testing, for good reason. If there are problems with releases, integration, missed bugs, or scheduling, look first to increasing the percentage of code that is unit tested!
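For readers on the black-box side who rarely see one, here is a minimal sketch of the kind of developer-written unit test being counted in this question, using Python’s built-in unittest module. The normalize_name function is a hypothetical unit under test, not something from the survey.

# A small developer-side unit test, the sort counted by this question.
import unittest

def normalize_name(raw):
    # Hypothetical unit under test: collapse whitespace, title-case.
    return " ".join(raw.split()).title()

class NormalizeNameTest(unittest.TestCase):
    def test_collapses_whitespace(self):
        self.assertEqual(normalize_name("  ada   lovelace "), "Ada Lovelace")

    def test_empty_input(self):
        self.assertEqual(normalize_name(""), "")

if __name__ == "__main__":
    unittest.main()

Tests like these run in seconds, which is why re-running all of them on every build is a realistic target and why “no idea” is such a troubling answer.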


2011 LogiGear Global Survey 4 – Outsourced and Offshore Testing


2010 – 2011 LOGIGEAR GLOBAL TESTING SURVEY RESULTS – OUTSOURCED AND OFFSHORE TESTING

Part 1 – The Home Team

HT1. Do you outsource testing (outside your company)? (Answers below show response percent, then response count.)

Yes 87.5% 7

No 12.5% 1

Result analysis: You can see from the varied results in this section that many people and organizations are conflicted about distributing work. At the same time, for most respondents the outsourcing/offshoring is effective and the teams are competent but not trusted, and they would not do it if they had the choice. The results are an indication that we have work to do!

HT2. Is your outsourcing/offshoring (any variety) successful/effective?

Yes 87.5% 7

No 12.5% 1

Result analysis: It is very good that so many of these organizations see their outsourcing/offshoring as successful. There is a lingering notion that some teams are forced into unsuccessful distributed teams based on business necessities. This is not the case.

HT3. What is the biggest impact of outsourcing/offshoring of testing?

Faster product release 12.5% 1

More test time 25% 2

More effective testing 0% 0

More technical testing 12.5% 1

More automation 12.5% 1

Slower releases 0% 0

Less effective testing 12.5% 1

More management oversight 25% 2

No difference in test effort 0% 0

Successful projects 0% 0

Failed projects 0% 0

Better project team morale 0% 0

Worse project team morale 0% 0

Result analysis: The truth about outsourcing and offshoring is that it leads to more management supervision. This has been found many times in surveys of all levels and varieties of outsourcing. The good news is you will get better at leading and managing.

The bad news is the increased time and effort needed to get the same work done. The range of other answers is positive, except for teams getting less effective testing from distributing work.

HT4. Is the outsourced/offshore team respected and trusted to the same level as the internal team?


Yes 37.5% 3

No 62.5% 5

Result analysis: The “No” answer being so high is problematic and common, yet it gets to the heart of all other problems with outsourcing and offshoring: the remote team is often not respected or trusted like the home team.

The reasons for this are many and spring from shortcomings on both sides, ranging from the home team’s unrealistic expectations of immediate ramp-up and smooth sailing, to genuinely incompetent teams. Regardless of why, the problem of mistrust must be resolved, or it is guaranteed to get worse. Angry teams and high staff turnover can be the next step in unresolved situations.

HT5. Do you view the outsource/offshore test team as competent?

Yes 75% 6

No 25% 2

HT6. How much time and effort is spent training the outsourced/offshore team?

None 12.5% 1

Little 25% 2

Enough 37.5% 3

A lot 25% 2

Result analysis: Training is the key to any successful work distribution. More important than any process or tool, training builds trust as well as skill. That many organizations do not train their distributed teams enough is a problem.


HT7. If you had the choice to outsource/offshore or not would you?

Yes 37.5% 3

No 62.5% 5

Result analysis: I am surprised by this answer: although distributing work needs more management oversight time (a negative), in most other responses these organizations seemed happy with their work distribution arrangements, yet most would still choose not to distribute work.

HT8. Please share a success or failure story about offshoring/outsourcing that would be interesting and informative for other test teams.

1. “Offshore team members have to spend time onsite to better understand domain knowledge and better understand team structure, roles and responsibilities. Leads need to have daily communication with offshore, to communicate sense of urgency, which is not perceived the same way by remote locations. When team gets large offshore, think about sending onsite folks on long assignments offshore.”

2. “I have built 5 QA ODCs (offshore development centers). The keys to success are standard process, good resources, effective knowledge transfer, and ongoing engagement. On- and offshore need to follow the same process; this enables resources to ramp up quickly. Good resources: we screen all our offshore candidates when we build out initially and then allow the vendor to choose junior-level shadow resources who are brought up to speed on the vendor’s time. Effective knowledge transfer: knowledge transfer is bi-directional and continuous. Some of our best process improvements, such as ‘video taping of complex defects,’ have come from offshore. Offshore resources are professionals; treat them as such. Ongoing engagement: an engaged resource is a productive resource. Rotate and cross-train to keep people interested and have backups.”

Part 2 – The Distributed Team

DT1. Is your team respected and trusted to the same level as internal team?

Yes 50% 8

No 50% 8

Result analysis: Half of the outsourced/offshored teams feel they are not respected or trusted, a high percentage that lines up with the home team’s similar response. It is a serious problem that so many teams feel they are not trusted.


DT2. Do you do an effective testing job?

Yes 86.7% 13

No 13.3% 2

DT3. Do you view the home/main corporate test team as competent?

Yes 64.3% 9

No 35.7% 5

Result analysis: This is a high and interesting number of groups that do not view the home team as competent. It is a direct comment on the relationship between distributed teams.

DT4. How much time and effort is spent training your team?

None 7.1% 1

Little 50% 7

Enough 42.9% 6

A lot 0% 0

Result analysis: As I said above, training is the key to every successful distributed project. More than half of the teams that responded feel they are not adequately prepared for their work.

DT5. Do you have tools to support effective communication and quick access to information?

Yes 86.7% 13

No 13.3% 2


DT6. If you could fix one thing about the home office test team, what would it be?

1. “Stop looking at offshore team as your competitor. Be ready to move up the value chain when you want to introduce offshoring (actually, demarcate key contribution areas from home & offshore teams). Understand the difference between managed offshore resources and unmanaged onsite consultants. Use offshore teams to complement onsite teams and get the best out of both worlds.”

2. “Make them more cooperative.”

3. “Provide more training to offshore.”

4. “Provide better documentation of tool/features being tested: test plan details, focus areas. Provide overall objectives of the team, so we don’t lose sight of the forest while going for the trees.”

5. “More clear requirements.”

DT7. Please share a success or failure story about offshoring/outsourcing that would be interesting and informative for other test teams.

1. “What I would like to quote here is my experience as a manager of an offshore team working for a U.S. based financial services client. The most difficult part for me was to make the client ‘QA Manager’ understand that the offshore team is a team of managed resources. They always thought of it like bodies being shopped to them and they have no management support or that they have to manage them individually. I had to work for over 6 months without being recognized as a manager by the client. I prepared the ground for 6 months, created a lot of data with respect to team members and projects, metrics, etc., identified improvement areas, training needs, etc., and once I visited the client and presented this to them, they were then able to appreciate that the team is managed and they don’t have to micro-manage.”

2. “We get the job done with very little help from the outsourcing location.”

3. “The ‘rotational’ model does not always work due to incompatibility between the rotated resources in client environment; requires higher degrees of management than originally anticipated.”


2011 LogiGear Global Survey 5 – Test Methods


2010 – 2011 LOGIGEAR GLOBAL TESTING SURVEY RESULTS – TEST METHODS

METHODS

M1. The test cases for your effort are based primarily on:

Requirements documents 61.3% 46
Discussions with users on expected use 2.7% 2
Discussions with product, business analysts, and marketing representatives 9.3% 7
Technical documents 4% 3
Discussions with developers 8% 6
My experience and subject or technical expertise 12% 9
No pre-writing of test cases, I test against how the application is built once I see it 2.7% 2
Guess work 0% 0

Result analysis: This confirms conventional wisdom. Over 60% of the respondents list requirements documents as the basis of their test cases. This brings up two important discussion points: 1) test cases can only be as good or as thorough as the requirements, and 2) past survey results show that most testers are hired for their subject matter expertise, yet that expertise is the primary basis for test case development for a far-distant 12%. Some test teams complain regularly about the requirements documents they receive, so I would have assumed that reliance on subject matter expertise would be a more common basis for test case development.

M2. Are you executing any application-level security and/or usability testing?

Yes for both 42.1% 32
Yes for usability 23.7% 18
No for both 23.7% 18
Yes for security 10.5% 8


M3. Are you conducting any performance and scalability testing?

Yes: 74% (54)
No: 26% (19)

M4. Does your group normally test for memory leaks?

No: 52.1% (38)
Yes: 47.9% (35)

M5. Does your group normally test for buffer overflows?

No: 56% (42)
Yes: 44% (33)

M6. Does your group normally test for multi-threading issues?

No: 50.7% (38)
Yes: 49.3% (37)


M7. Do you do any API testing?

Yes: 62.7% (47)
No: 37.3% (28)

Result analysis M2 – M7: As a test engineer, you need to know what types of testing need to be executed against your system, even if you are not the person or team executing those tests. For example, performance tests are often executed by a team separate from the one that executes functional tests; your knowledge of test design and data design will still help those executing them.

Knowledge of other test types can also help cut redundancy and shorten a test cycle. Most importantly, it becomes a serious problem if you are not executing these tests because you don’t know how, or because you hope someone else will take them over. Skipping memory tests because you think, or hope, the developers are running them is also a mistake.
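If your group is among those skipping memory tests, a useful first step need not require special tools. Here is a minimal sketch (my illustration, not something from the survey) of a leak smoke test using only Python’s standard library; the do_work function and the 100 KB threshold are hypothetical stand-ins for application code and a team-chosen limit.

```python
# Minimal leak smoke test using only the standard library (illustrative).
import tracemalloc

_cache = []  # hypothetical stand-in for application state that should not grow

def do_work():
    _cache.append(bytearray(1024))  # deliberate leak: 1 KB retained per call

tracemalloc.start()
baseline = tracemalloc.take_snapshot()
for _ in range(1_000):
    do_work()
current = tracemalloc.take_snapshot()

# Sum net allocation growth attributed to each source line since the baseline.
growth = sum(stat.size_diff for stat in current.compare_to(baseline, "lineno"))
print(f"net allocation growth: {growth / 1024:.0f} KB")
assert growth < 100 * 1024, "possible leak: heap grew past the chosen threshold"
```

Run repeatedly against a workload; steady growth across iterations, not a single spike, is the signal worth escalating to developers.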

API testing can find bugs earlier in the development cycle and makes defect isolation easier. If API testing is not being practiced on your team, you should have a good reason.
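To make the API point concrete, here is a minimal sketch of what an API-level test can look like, using pytest-style assertions and the requests library; the service URL, endpoint, and fields are hypothetical, not taken from any respondent.

```python
# Minimal API-level test sketch (hypothetical service, endpoint, and fields).
import requests

BASE_URL = "https://api.example.test"  # hypothetical system under test

def test_create_then_fetch_order():
    # Exercise the service directly, below any GUI layer.
    created = requests.post(
        f"{BASE_URL}/orders", json={"sku": "ABC-123", "qty": 2}, timeout=5
    )
    assert created.status_code == 201
    order_id = created.json()["id"]

    # A failure here isolates to the service itself, not to the user interface.
    fetched = requests.get(f"{BASE_URL}/orders/{order_id}", timeout=5)
    assert fetched.status_code == 200
    assert fetched.json()["qty"] == 2
```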

M8. What percentage of testing is exploratory/ad hoc and not documented?

10 – 33% (important part of our strategy): 46.7% (35)
Less than 10% (very little): 26.7% (20)
33 – 66% (very important, approximately half, more than half): 13.3% (10)
66% – 99% (our most important test method): 10.7% (8)
0% (none): 2.7% (2)
100% (all our testing is exploratory): 0% (0)

Result analysis: With almost half responding that exploratory testing, at 10 – 33%, plays an important part in their strategy, my biggest concern is whether the project team knows and understands your use of exploratory testing and the difficulty of measuring it.

The high percentage calling it important, combined with the difficulty of measuring exploratory testing, often leads to incorrectly sizing a test project, or to increased risk when the schedule time allotted for exploratory testing is cut.

I expected a bigger reliance on exploratory testing; only just over a quarter responded “less than 10%.” Most test teams say they find more of their bugs running exploratory tests than running scripted validation test cases. This may still be true, but many test teams may lack the time to do exploratory testing.


M9. How effective is your exploratory/ad hoc testing?

Somewhat effective, it is useful: 54.8% (40)
Very effective, it is our best bug finding method: 39.7% (29)
Not effective, it is a waste of time/money testing: 5.5% (4)

M10. How is exploratory/ad hoc testing viewed by project management?

Somewhat effective, it is useful: 58.6% (41)
Essential, necessary for a quality release: 20% (14)
Very effective, it is our best bug finding method: 11.4% (8)
Not effective, it is a waste of time/money testing: 10% (7)

Result analysis: Close to 60% of respondents say management views the strategy as somewhat effective; in the previous question, nearly the same percentage of testers saw it as useful. This surprises me. Very often testers value ad hoc testing more highly than management, who often see it as immeasurable and unmanageable.


M11. What is the goal of testing (test strategy) for your product?

Validate requirements: 34.2% (25)
Find bugs: 26% (19)
Improve customer satisfaction: 23.3% (17)
Cut customer support costs/help desk calls: 8.2% (6)
Maximize code level coverage: 5.5% (4)
Improve usability: 1.4% (1)
Improve performance: 1.4% (1)
Comply with regulations: 0% (0)
Test component interoperability: 0% (0)

Result analysis: The number one answer for the goal of testing is validating requirements. This is a surprise. Typically, testers see finding bugs as the goal, while management, business analysts, and marketing see validating requirements as the main goal and job of test teams.

Even so, about half the respondents see finding bugs or improving customer satisfaction as the goal of testing. We see several times in this survey a heavy reliance on requirements as the basis of testing. This is a problem with anything less than great requirements!


M12. Which test methods and test design techniques does your group use in developing test cases? (You can choose more than one answer.)

Requirements-based testing: 93.2% (69)
Regression testing: 78.4% (58)
Exploratory/ad hoc testing: 68.9% (51)
Scenario-based testing: 56.8% (42)
Data-driven testing: 40.5% (30)
Equivalence class partitioning and boundary value analysis: 27% (20)
Forced-error testing: 25.7% (19)
Keyword/action-based testing: 17.6% (13)
Path testing: 16.2% (12)
Cause-effect graphing: 12.2% (9)
Model-based testing: 10.8% (8)
Attack-based testing: 9.5% (7)

M13. How do you do regression testing?

Both: 47.3% (35)
Manual: 33.8% (25)
Automated: 14.9% (11)

Result analysis: For roughly a third of respondents, regression testing is purely manual. I see this commonly in development teams. There are so many good test automation tools on the market today, usable on more platforms than ever, that teams not automating their regression tests ought to re-examine test automation. For all testers, test automation is a core job skill.
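For teams starting to automate, regression tests need not be GUI-driven. Here is a minimal sketch (my illustration; the function under test and the bug numbers are hypothetical) of pinning past bugs as a suite that reruns on every build:

```python
# Minimal automated regression suite sketch using pytest (illustrative).
import pytest

def normalize_phone(raw: str) -> str:
    """Toy function standing in for application code under regression."""
    digits = "".join(ch for ch in raw if ch.isdigit())
    return f"+1{digits[-10:]}" if len(digits) >= 10 else digits

# Each past bug becomes a pinned case, so old defects cannot silently return.
@pytest.mark.parametrize("raw,expected", [
    ("(415) 555-0100", "+14155550100"),   # original happy path
    ("1-415-555-0100", "+14155550100"),   # hypothetical bug #212: leading country code
    ("555-0100", "5550100"),              # hypothetical bug #240: short input must not crash
])
def test_normalize_phone_regressions(raw, expected):
    assert normalize_phone(raw) == expected
```

Once such a suite exists, running it per build is a scheduling decision, not a staffing one.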


M14. Is your test environment maintained and controlled so that it contributes to your test effort?

Yes: 83.1% (49)
No: 16.9% (10)

M15. Are there testing or quality problems related to the environments with which you test?

Yes: 64.4% (47)
No: 35.6% (26)

M16. Are there testing or quality problems related to the data with which you test?

Yes: 63% (46)
No: 37% (27)

Result analysis M14 – M16: Controlling test environments and test data is essential. Environments and data leading to testing problems is very common and very problematic! Building and maintaining great test environments and test data takes time and investment.

These answers confirm what I see in companies regularly: problems in environments and data not getting resolved. With some time and perseverance, fixing these would greatly improve the effectiveness of testing and trust in the test effort.
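One low-cost way to start controlling test data is to generate it from a fixed seed, so every test run begins from identical, known rows. A minimal sketch (my illustration; the schema and seed value are arbitrary):

```python
# Minimal sketch: a repeatable test database built from a fixed seed,
# so every run of the suite starts from identical, known data (illustrative).
import random
import sqlite3

SEED = 20110101  # a fixed seed makes the generated data reproducible

def build_test_db(path: str = ":memory:") -> sqlite3.Connection:
    rng = random.Random(SEED)
    conn = sqlite3.connect(path)
    conn.execute(
        "CREATE TABLE customers (id INTEGER PRIMARY KEY, name TEXT, credit INTEGER)"
    )
    rows = [(i, f"customer_{i}", rng.randint(0, 10_000)) for i in range(1, 101)]
    conn.executemany("INSERT INTO customers VALUES (?, ?, ?)", rows)
    conn.commit()
    return conn

if __name__ == "__main__":
    conn = build_test_db()
    total, = conn.execute("SELECT COUNT(*) FROM customers").fetchone()
    print(f"seeded {total} customers")  # always the same 100 rows
```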


2011 LogiGear Global Survey 6 – Politics in Testing


2010 – 2011 LOGIGEAR GLOBAL TESTING SURVEY RESULTS – POLITICS IN TESTING

Few people like to admit that team dynamics and project politics interfere with the successful completion of a software development project. But the more projects you work on, the more you realize it is very rare for technology problems to get in the way. It is almost always the people, planning, respect, and communication issues that hurt development teams the most.

Historically, test teams bore the brunt of ill will on projects in traditional development models: they go last! Comments regarding test teams include, “They are slowing us down,” and “They are preventing us from hitting our dates.” And when the test team misses a seemingly obvious bug, their entire work is suspect. Even in Agile, issues around being “cross-functional,” inclusion in the estimation process, and extremely lean [read: none] documentation can perpetuate political problems for people who test.

It is no accident that the first value of the Agile manifesto is “Individuals and interactions over processes and tools.” People, and how we get along with each other, consistently have a larger impact on project success than any process or tool. Communication problems are the most commonly cited problems in software development.

I hope we have moved away from the worst work environments, since we now have so much information and experience showing that political and difficult work situations badly impact productivity and work quality. As I reviewed the results of this survey, it seems we still have far to go to fix team political problems. A better understanding and awareness of common political problems will help you recognize them sooner and give you an opportunity to remedy situations before they get worse.

Duplicated responses were deleted.

PART 1

1. What is the biggest pain area in your job?

• “Poorly written or insufficient requirements leading to scope creep.”
• “Delayed release to QA creates immense testing pressure in terms of meeting deadlines.”
• “Not enough resources for development or testing.”
• “Timetables to testing.”
• “Thinking alone with no help.”
• “Making teams understand the importance of QA participation early in the life cycle.”
• “Time pressures and lack of understanding in regards to testing by external parties.”
• “Estimating time to automate on clients’ perceptions.”
• “Educating people on the value of testing, tester skills, domain knowledge – for example, helping managers understand that quantifiable measures related to testing are generally weak and sometimes risky measures, especially when used independently (like number of bugs, or number of tests run/passed/failed, etc.). Demonstrating how to use qualitative measures more effectively and advising managers/stakeholders on trusting the intuitive assessments of senior testers to make decisions.”
• “Keeping track of schedules and keeping test cases up to date.”


• “Complete lack of any process at all – all projects done uniquely = chaos.”
• “Domain knowledge.”
• “Production and testing environments are not 100% synced.”
• “Dealing with some executives who do not understand a thing about testing.”
• “Having other team members value my opinion when it comes to process improvement, as well as scheduling releases correctly – giving unrealistic expectations and time frames.”
• “Explaining the gap between theory and practice.”
• “Testing rare scenarios.”
• “Developer says, ‘This is not reproducible on my machine’ and marks the status back to ‘Can’t Reproduce.’ I call him back and say, ‘It’s 100% reproducible on my machine. Come here, look, and fix the problem.’ Sometimes I have to reproduce the issue many times for better debugging, and this eats up a lot of time.”
• “Managing resources and data for testing, such as database servers, different environments, etc.”
• “Underfunding for new projects.”
• “People’s unwillingness to choose testing as a career path.”
• “No clear career path.”
• “I don’t have access to the code base.”
• “Creation of reports.”
• “Balancing on-the-job learning with providing results.”
• “Over-reliance on testing quality into a product, application or service.”
• “The need to repeatedly prove that independent testing adds value.”
• “Writing good test cases and having to retest the application due to late changes in requirements.”
• “Developers are not doing proper unit testing and give QA builds late for testing. This results in the QA team staying late.”
• “Not being trusted to know what I’m doing.”
• “Delivery dates; testing work is not measured or estimated.”
• “Delivering documentation from a regulatory and quality assurance perspective and not a tester perspective.”
• “Not giving much time to do regression testing and also not giving time to developers to fix the defects.”
• “HR issues with certain personnel.”
• “Finding a good pragmatic QA lead who understands solutions architecture, solutions design, and the value solutions bring to customers.”
• “Deadlines driving releases, not schedules or quality.”
• “Having to do the same things day in, day out.”
• “Getting the project organized before code is produced and sent for testing.”
• “Switching among multiple projects daily.”
• “Changing requirements; unavailability of business experts to inform the requirements gathering process.”
• “Designing effective test cases.”
• “Repeated releases due to very small changes in requirements.”
• “Receiving timely information.”

Result analysis: The universality of problems in software development continues to astound me. On one hand, most people think their team and company problems are unique; they rarely are. On the other, the problems are so universal and well known that you have to ask: why haven’t organizations moved to fix such common issues, when doing so would have a big impact on productivity and quality?

Also, testing and QA political problems are sometimes confined to testing, but much more often they are the result of a problematic corporate culture.


2. Does your development team value your work?

Yes: 57.9% (73)
Somewhat: 38.1% (48)
No: 4% (5)

Result analysis: Most teams are valued. Still, 57.9% is actually a low number here; it should be closer to 100%!

3. Who is ultimately accountable for the high quality or poor quality of the product (QA – quality assurance/guaranteeing quality)?

Everyone: 53.8% (64)
Project manager: 16% (19)
Testers: 14.3% (17)
Product team (marketing, business analyst): 7.6% (9)
Upper corporate management: 5% (6)
Developers: 3.4% (4)

Result analysis: The only correct answer here is “everyone.” I often ask this question of clients, and it is usually a much higher percentage. Everyone on the team makes unique and important contributions to the level of quality; any weak link will break the chain. We all share responsibility.


4. Who enforces unit testing?

Development manager: 37.8% (45)
Developers: 32.8% (39)
Testers: 16% (19)
Project manager: 10.9% (13)
Product management: 2.5% (3)

Result analysis: The over 70% responding that the development manager or the developers enforce unit testing gave the correct answer. The 16% responding “testers” is wrong. Only in the past few years, as Agile (and particularly XP practices such as TDD) has spread, has unit testing finally become more common.

5. What is the most important criterion used to evaluate whether the product is shippable?

Risk assessment: 43.7% (52)
Completion of test plan/sets of test cases: 22.7% (27)
Date: 17.6% (21)
Bug counts: 14.3% (17)
Code turmoil metrics: 1.7% (2)

Result analysis: It is interesting to see how greatly ship or release criteria differ from company to company.

6. What is your perception of the overall quality of the product you release?

Good: 58.1% (68)
Average: 21.4% (25)
Excellent: 17.1% (20)
Poor: 3.4% (4)
Bad: 0% (0)

7. What pain does your product team have regarding quality? (You can select multiple answers.)

Insufficient schedule time: 63.8% (74)
Lack of upstream quality assurance (requirements analysis and review, code inspection and review, unit testing, code-level coverage analysis): 47.4% (55)
Feature creep: 38.8% (45)
Lack of effective or successful automation: 36.2% (42)
Poor project planning and management: 32.8% (38)
Project politics: 31% (36)
Poor project communication: 26.7% (31)
Inadequate test support tools: 25.9% (30)
No effective development process (SDLC) with enforced milestones (entrance/exit criteria): 25.9% (30)
Missing team skills: 23.3% (27)
Low morale: 15.5% (18)
Poor development platform: 7.8% (9)

Result analysis: These responses speak for themselves. Insufficient schedule time for testing, lack of upstream quality practices, feature creep, lack of effective automation: anyone familiar with software development could have predicted this list. The larger problem is that we have had these same problems since 1985!

8. What is the most important value of quality?

Ease customer pain/deliver customer needs, customer satisfaction: 52.1% (61)
Better product (better usability, performance, fewer important bugs released, lower support, etc.): 40.2% (47)
Save money during development (cut development and testing time to release faster): 7.7% (9)


9. Who ultimately decides when there is enough testing?

Test team

Project manager

Upper Management

Product (marketing, product, customer, biz analysts)

Developers

Result analysis: A surprising answer here. Remember, this survey was completed primarily by testers; I would have expected product or project management to be the number one answer.

10. How often does your testing schedule not happen according to plan?

Some/few: 38.1% (45)
Often: 26.3% (31)
Most: 25.4% (30)
Every project: 6.8% (8)
Never: 3.4% (4)

Result analysis: Clearly more than half of projects do not happen according to plan: over 57% responded that “every,” “most,” or “often” projects do not go as planned. Have project planners not learned much in the past 20 years?

11. What are the main reasons test schedules do not happen according to plan?

• “Dev issues or dependencies on hardware.”
• “Scope creep.”
• “Requirements changes, greater than average defect count.”
• “Change of release content.”
• “Resource conflicts – our testing resources are not dedicated to testing; it is a function they perform along with their normal job functions.”
• “The deadlines for the delivery of the projects are short.”
• “Unstable or late deliverables, changing requirements or scope.”
• “Usually dev slips the date or there are too many problems (bugs).”
• “Emergency releases.”
• “Unexpected events, additional workload, new directives, rework…”
• “Requirement changes delayed development and subsequently delayed testing. Since there will be no change in the production release date, the days for QA are shortened.”
• “Inconsistency in the development process.”
• “Delay in releases from the development team; scope creep during the testing phase; rework of fixes due to lack of unit testing and poor quality of the component released to QC.”


• “Lack of quality development and so the repetitive dev cycle.”
• “New feature requests injected in the middle of a sprint.”
• “Major bugs that prevent testing of some areas.”
• “Lack of understanding of new product/enhancements.”
• “Other, equally high-priority deliverables (in products/features in different ship vehicles).”
• “Unrealistic and poorly vetted project plans, schedules and estimates.”
• “Lack of unit testing by the dev team, due to which, when the build is given to the testing team, the test team gets blockers, etc. This increases test cycles.”
• “Never actually included in scheduling. Developers decide what goes into the next release based on development effort. Testing is an afterthought.”
• “Unexpected user issues that come up; problems with software that our product integrates with.”
• “Late turnover.”
• “Environment stability.”
• “Analysis/code not complete on time for test to begin.”
• “Multiple developers putting in changes right before the deadline.”
• “Multiple projects at one time.”
• “Lack of funds.”
• “Issues with product installation, server problems, unrealistic testing schedules assigned by upper management.”

Result analysis: Scope creep is by far the number one answer (I removed duplicate answers). It comes by many names, all meaning the addition of features after schedules have been agreed upon. Hopefully, as Agile, and particularly Scrum practices such as the planning game, sprint planning, and a strong product owner, become more common, scope creep will one day be just a bad memory.

12. How do you decide what to test or drop (prioritize coverage) if your testing schedule/resource is cut in half?

• “The CEO.”
• “What feature is critical, and test positive cases, not negative.”
• “Project team decision.”
• “Focus on high-risk areas that would cause customer dissatisfaction.”
• “Products’ basic functionality.”
• “Severity of corrections.”
• “Areas affected by corrections.”
• “Importance to customers.”
• “Importance to sales and marketing.”
• “Ability to simulate in the test lab.”
• “We don’t drop testing; we delay deployment of features if they cannot be tested.”
• “What code has been changed the most.”


• “Negotiate with developers and project management.”
• “Core test case coverage.”
• “Based on the historic data and feature change: what has been tested in previous releases and what new features have been introduced/changed.”
• “Depends on the requirements prioritization and their impact on the software functionality.”
• “Speak with the business to prioritize the most important/critical items to receive testing, and/or personal experience of where the pain points are going to be (where do we often see the most issues with previous testing?).”
• “Project management and application owner.”
• “Knowledge of customer usage, therefore impact.”
• “Technology and business risk assessment. Mostly from the QA/test team. Not ideal, but the best out of a challenging situation.”
• “Combination of time required, business criticality and business priority.”
• “Discuss the functionalities with the business analyst/product manager and decide priorities.”
• “I think about how the customers are going to use the product. Anything which will affect all customers is the highest priority. Anything affecting individual customers is lower priority. Anything which requires a reset of the server (affects all customers) has a higher priority. Anything with a workaround is a lower priority. The more important the customer, the more important their user stories.”
• “Items that need to be delivered for launch will be tested.”
• “These decisions are made project by project. We use customer feedback and knowledge of which features are most used by customers to decide.”
• “Focus on main functionality; leave out additional features that we don’t have time to test.”
• “Safety critical vs. non-safety critical.”
• “Risk-based testing, risk categorization.”
• “Scrum planning.”
• “From bug history, new features, customer usage.”
• “ROI.”
• “Any test cases with dependencies on new features are prioritized.”
• “Drop multiple passes on relatively simple changes; drop regression testing for portions of the system that scheduled changes should not impact.”
• “Test main functionality and skip regression.”
• “Test cases are ranked H-M-L and we focus on the High. Usually the most desired feature has the most High ranks. Also, it may be decided to drop a feature and only focus on one feature and defer release of the other feature. The business ultimately makes that decision of focus.”
• “Test features that are crucial for client acceptance, and/or features which are complex and broad within the solution.”
• “Operational requirements.”
• “We usually don’t. We will throw more resources at the project.”


13. Is testing (the test effort, test strategy, test cases) understood by your project manager, product manager/marketing, and developers?

Yes: 75.7% (87)
No: 24.3% (28)

Result analysis: I often ask this question during my consulting work and it is always a very similar answer: testing is not well understood by a significant percentage of people in the development team.

PART 2

1. How would you characterize the relationship between your team and the development teams?

Good

Leading to better quality releases

Adversarial

Excellent

Poor

Hurts/reduces product quality

Result analysis: A positive response. Too bad over 16% of teams have poor, hurtful or adversarial relations with other groups. It is very good that over 83% have positive team relationships.

2. When and how are testing groups involved with the development team?

In adequate time to prepare a good test project: 56.4% (44)
Late: 28.2% (22)
Too late: 9% (7)
Too early: 6.4% (5)


3. Do you frequently get reviews and feedback of your work (e.g., test plans, test cases, test reports, bug reports, etc.) from the development team?

No: 53.7% (36)
Yes: 46.3% (31)

Result analysis: More than half of the responding teams do not get regular feedback from their teams. Not good. In Agile, an immediate and effective retrospective should give every team member feedback and suggestions from fellow team members on how everyone can do more productive work.

4. How is morale on the test team?

Good: 38.9% (42)
Up and down: 38% (41)
Happy, excellent: 11.1% (12)
Low: 5.6% (6)
Stressed: 4.6% (5)
Poor: 1.9% (2)

5. Does your group have adequate resource and time to do its job?

Yes: 44.9% (48)
No: 55.1% (59)

Result analysis: Another disappointingly high percentage, and yet another issue we have been wrestling with since 1985. So perhaps your team is adequately staffed, but you need to do a better job at communicating coverage and risk.

6. Who makes staffing decisions on the need for additional testing resources?

Product/project: 70.1% (75)
Test team: 29.9% (32)


7. How do you think other groups would characterize the competency of your group?

Highly competent: 46.8% (51)
Adequately qualified: 34.9% (38)
Not competent enough: 10.1% (11)
Excellently qualified: 8.3% (9)
Unqualified: 0% (0)

Result analysis: The results of this question are always interesting to me. Approximately 90% of respondents believe they are characterized as adequately, highly, or excellently qualified/competent. That is a great number and shows a maturation of our industry. In the past, test teams were often viewed as the least technically competent and least trained people on the product team. Over the past two decades, test and QA classes have become more common, and more people have been hired into the industry for their subject matter expertise, technical expertise, or QA and test expertise, leading to a much more qualified testing staff.

8. What would you like to change or improve concerning your team and team dynamics if you could (list as many as 3 items)?

• “More resources for the long term, not contract.”
• “Take the time to study the project.”
• “Slow projects down; we often jump into ‘small’ changes without sufficient review.”
• “Higher synchronization between test teams.”
• “Disciplined and planned approach to optimize testing and improve productivity.”
• “Increase our team testers.”
• “Increase the time for tests.”
• “Give more training time to testers.”
• “More local testers.”
• “Much earlier involvement in the design process; much less ‘test this now because it’s going out to the field next week.’”
• “Cross-functional and domain expertise.”
• “Add resources, training.”
• “Above answers would vary from project to project, but common issues I deal with: 1) Lack of understanding of what testing can provide – incorrect expectations; 2) Lack of accountability for quality across project teams – making QA/test teams some kind of inadequate gatekeeper; 3) Improper use of test metrics and data.”


• “1) Improve technical testing skills among the team; 2) Have project management and the dev team listen to the quality team in regards to quality; 3) Implement automation, performance and security testing in the team, in terms of skills and tools.”
• “Better peer reviews, more motivation.”
• “Involve testers sooner in regards to requirements; more collaboration in general.”
• “Be more proactive in finding defects upstream.”
• “More communication, care about personal development space, good environment.”
• “Team interaction, team awareness of the latest techniques, and an open culture.”
• “Access to better testing support tools.”
• “More resources for development to be able to provide better quality code. More time for test automation to be written and implemented.”
• “1. Make them think outside the box; 2. Come up with new ideas to improve efficiency; 3. Better communication with the developers.”
• “Follow Agile methodologies and work from the very first stage of the project.”
• “1. Stop changing requirements; 2. Stop injecting feature requests in the middle of a sprint.”
• “More freedom for testers.”
• “Proper communication about updates, and the latest soft skills training.”
• “There is more to testing a system than validating that the GUI works. We need to test data processes and web and messaging services. Different kinds of testing require different skills.”
• “1. Technical abilities; 2. Testing knowledge; 3. Professionalism (tester’s attitude).”
• “Appreciation of work.”
• “More actively collaborative work.”
• “Change the culture from ‘test quality in’ to ‘build quality in.’ Not just words, but real practices and disciplines designed to build it right the first time with lean practices.”
• “The attitude towards testing; knowledge of testing technologies and domain knowledge.”
• “Better cooperation with Operations (aka OPS Run).”
• “Better testing for MPI models.”
• “More involvement with contracted projects.”
• “Improve tester skills (test techniques, product knowledge).”
• “Higher motivation.”
• “Respect a person’s work and give space to the testing team.”
• “Have more employees in the role of QA leads, not consultants.”
• “One more engineer-level QA person and one more tech-level QA person.”
• “Centralize all QA under one umbrella organization; independent budget for the QA organization.”


• “Operate on a shared services model; partnership with tool vendors.”
• “More structured test cycles; ability to stop releases if medium to high priority defects are not fixed; more training.”
• “There could be two of me.”
• “Understand the customer better.”
• “Team member trainings, more coordinated work, and openness in culture.”
• “1 – Senior management support; 2 – Senior management support; 3 – Senior management support.”
• “Open communication, schedule breathing space, strengthen business relationships.”
• “The perception that QA doesn’t know what they’re doing because we find too many defects that delay a project; the perception that QA is a roadblock to be overcome or circumvented rather than a key element of any project; the perception that QA finding a defect is a NEGATIVE against a project.”
• “Our image to the company; more time to test and implement better testing strategies.”
• “Would like team members to actively educate themselves on the base product functionality and cross-train other members.”
• “QA needs to be a more integral role throughout the project lifecycle.”
• “More personnel on the testing team; more automation and testing tools; fully support the SDLC and testing of all applications.”
• “More employees, more time to test, better tools.”
• “Nearly everything.”
• “Need more resources.”
• “QA as a viable authority to allow for change or work in the queue (third leg of a stool with Dev and Business).”
• “Clearer roles and responsibilities; QA working more with the business for UAT; a configuration manager is needed.”
• “Better skillset; more time for learning new skills; increased focus on QA as a craft that can be learned and developed.”
• “Get marketing and management to guard against scope creep; make better schedules so that every trip to the restroom doesn’t affect them.”


8. Is the (office) working environment conducive to your productivity?

Yes: 84.8% (89)
No: 15.2% (16)

9. If you have other roles on the project, what are they? (You may select multiple answers.)

Test only: 44.1% (45)
Manage project: 42.2% (43)
Write requirements: 32.4% (33)
Design/code test tools and harnesses for internal use: 27.5% (28)
Specify design, UI or user workflow: 20.6% (21)
Develop code for customers: 6.9% (7)

Result analysis: These results say something very interesting about the field of testing and what a tester’s job now includes. On Agile projects, these people are already “cross-functional”!

10. What is the PRIMARY method of regular team communication?

Email: 68.4% (52)
Yelling around the office: 10.5% (8)
IM: 9.2% (7)
Wiki: 9.2% (7)
SharePoint: 2.6% (2)
Blog: 0% (0)

Result analysis: It is great that problematic IM is not the primary tool. However, email remaining the primary method for over two-thirds of respondents is problematic. Email has many, many problems for project communication and management. Wikis and “project pages” are much more effective.


11. What ALM/Team/Offshore communication tool do you use?

We built our own: 25.6% (20)
HP Quality Center: 23.1% (18)
Rational Suite (ReqPro-Rose-ClearCase…ClearQuest or combination): 5.1% (4)
Jazz-Team Concert: 2.6% (2)
SourceForge: 2.6% (2)
CodeGear: 1.3% (1)
ScrumWorks: 1.3% (1)
Rally: 1.3% (1)
CollabNet: 1.3% (1)
MKS: 0% (0)

Other (please specify) – 20 responses:
1. “SharePoint”
2. “Testlink”
3. “JIRA”
4. “Bugzilla”
5. “Email”
6. “Microsoft Communicator”
7. “Jabber”
8. “Perforce”
9. “Rally & Quality Center”
10. “Drupal”
11. “Skype”
12. “Jira”
13. “Digite”
14. “Combination SCRUM and ActiveCollab”

12. What is the biggest project driver?

Schedule

Cost/resources

Risk/quality

Result analysis: Schedule remains, by far, the main project driver. We know from the questions above that projects rarely follow the planned schedule. When test times get compressed, test teams must have great coverage, risk analysis and reporting mechanisms.


13. Do you believe that there are product quality problems?

Yes: 77.4% (82)
No: 22.6% (24)

Result analysis: This question is often a reality check on opinions of your product. In my consulting work, I have found that most teams believe they release products with quality problems.

14. What is the biggest single quality cost on your project?

Testing by test team: 24.1% (19)
Support (patches, bug fixes, documentation and phone/help desk): 17.7% (14)
Don’t know: 16.5% (13)
Building and maintaining effective test data: 10.1% (8)
Building and maintaining test environments: 10.1% (8)
Test automation (tool, writing and maintaining scripts): 10.1% (8)
Requirements review, design review, requirements analysis: 6.3% (5)
Code walkthrough, inspection, review: 3.8% (3)
Other: 1.3% (1)

Result analysis: Responses to this question are always interesting to me. It is a problem that 16% of test/quality respondents do not know or understand the cost of various quality activities.

I am very glad to see certain groups recognizing support as the biggest quality cost on their products. Quality cost analysis must include post-release quality cost, and the test strategy must consider reducing support costs.

Note: The next two questions concern regulatory compliance. I have read estimates that half of all software written is regulated. Regulatory compliance necessitates test strategies and documentation that can pass audits.

15. Does your team directly have regulatory compliance (are you subject to external audit)?

No: 65.8% (48)
Yes: 34.2% (25)


16. If the answer to question 15 is yes, for which type of compliance could you be externally audited?

SOX: 52.4% (11)
SEC: 19% (4)
FDA: 14.3% (3)
FDIC: 9.5% (2)
DOT: 4.8% (1)
DOD: 0% (0)

17. If you could do anything to release a higher quality product or service, what would that be?

• “More smart testing.”
• “Spend more upfront time gathering client input into requirements. Stop rushing projects into development without due diligence on features and impacts.”
• “Implement an ALM tool (like Rational tools).”
• “Improved project management.”
• “Improve the time to testing the product.”
• “Start testing earlier.”
• “Continuous improvement.”
• “More unit testing.”
• “Structured communication on new features (release notes would be nice!).”
• “Extract accurate quality criteria and measures from key project stakeholders and customers (if applicable).”
• “Hire a skillful team, buy testing tools, improve the SDLC.”
• “Solidify and document requirements prior to design.”
• “More time to plan more thorough testing.”
• “Improve process.”
• “More automation testing.”
• “Proper and solid test case design, and introduce a proper test process.”
• “Interact with the product management and clients more.”
• “Rebuild the software development process, add a measurement system and buy adequate tools for management and control of software.”
• “1. Enforce the defined processes; 2. Automate testing for regression.”
• “Better test strategy, including TDD.”
• “More data testing.”
• “Invest in training of testers.”
• “Make sure that all team members are aware of the main user scenarios that we’re trying to provide, and hire more testers.”
• “We use a waterfall SDLC; changing to Agile would seem to reduce many of our problems.”
• “Higher awareness among management of the costs of bad quality.”
• “Hire more testing resources.”
• “Improve metrics to be more accurate about the quality of the product; improve tester skills.”


• “Satisfying the customer requirements and giving more quality than what is expected.”
• “Hire better qualified QA leads.”
• “Have more test environment flexibility to build and simulate a multitude of conditions.”
• “Better scheduling of test time; tools to automate regression testing.”
• “Manage scope.”
• “Better planning.”
• “Have the entire team (PM, dev, BA, QA) all follow the SDLC methodology.”
• “Survey customers.”
• “Integrate product management and QA more tightly.”
• “Proper project funding/scheduling for QA.”
• “Have clear customer-facing quality goals and metrics that are used by all levels of management to drive for quality first, to deliver on time with quality, efficiency and predictability.”
• “Reduce changes in requirements, after some phase of testing at least.”
• “Test acceleration tools.”
• “Clearer requirements and functional specifications, and tester understanding of user workflows.”
• “Get our internal customers to understand that the end product is only as good as the requirements they provide – quality isn’t the responsibility of one group, but of everyone at every step at every level.”
• “Give the testing team more authority to adjust schedules.”
• “Better, consistent training on the product and its complexities.”
• “Establish with the client at the outset what they really want to use our product for, and then work this into our test plans. Conduct broader testing.”
• “More reliable computers/software.”
• “Have QA run through the project lifecycle.”
• “If we are developing new features, we should not mimic old bad behavior just because customers are accustomed to the bad behavior.”
• “Enforce testing in the early stages of the SDLC (test requirements through use cases, create automated tests for development to use during development).”
• “Expect more from the vendors.”
• “Test more.”
• “Extend the project schedule, add more qualified testers, more regression testing, build nightly and test.”
• “Take more breaks.”
• “Spend more time on requirements.”
• “Re-analyze and redesign the test plans/cases.”
• “Improve people on the applications/support side. They are overburdened, so they don’t learn the product, just memorize answers to commonly asked questions.”
• “Do the peer reviews effectively and meet the customer needs with a low number of priority issues.”


2011 LogiGear Global Survey 7 – Metrics and Measurements

2010 – 2011 LOGIGEAR GLOBAL TESTING SURVEY RESULTS – METRICS AND MEASUREMENTS

METRICS AND MEASUREMENTS

MM1. Do you have a metric or measurement dashboard built to report to your project team?

Yes: 69% (49)
No: 31% (22)

Result analysis: Anything worth doing is worth measuring. Why would almost a third of teams not measure? Is the work not important or respected? Does the team not care about the work you do? I am not measurement-obsessed, but when test groups do not report measurements back to the project team, it is very often the sign of a bigger problem.
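A reporting dashboard does not have to begin as a tool purchase. Here is a minimal sketch (my illustration; the statuses and record layout are arbitrary) of rolling raw test results up into the few numbers a project team actually reads:

```python
# Minimal sketch of a test-metrics roll-up for a project dashboard (illustrative).
from collections import Counter

test_results = [  # would normally come from your test case manager's export
    {"case": "TC-101", "status": "pass"},
    {"case": "TC-102", "status": "fail"},
    {"case": "TC-103", "status": "pass"},
    {"case": "TC-104", "status": "blocked"},
]

def summarize(results):
    counts = Counter(r["status"] for r in results)
    executed = counts["pass"] + counts["fail"]
    return {
        "progress": f"{executed}/{len(results)} executed",
        "pass_rate": f"{counts['pass'] / executed:.0%}" if executed else "n/a",
        "blocked": counts["blocked"],
    }

print(summarize(test_results))
# {'progress': '3/4 executed', 'pass_rate': '67%', 'blocked': 1}
```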

MM2. If yes, are these metrics and measurements used or ignored?

This information, along with schedule, decides product release: 43.4% (23)
Some attention is paid to them: 26.4% (14)
This information decides product release: 18.9% (10)
Minimal attention is paid to them: 5.7% (3)
They are ignored: 5.7% (3)

Result analysis: It is good to see that over 60% of teams reporting metrics use the information to decide release. Test team metrics should help decide product release.

As with the previous question, if the project team is not paying attention to test measurements, that is often the sign of a problem.


MM3. What testing specific metrics do you currently collect and communicate? (You may select multiple answers.)

Test case progress: 84.1% (53)
Bug/defect/issue metrics (total # open, # closed, # high priority, etc.): 82.5% (52)
Requirements coverage: 49.2% (31)
Defect density: 34.9% (22)
Defect aging: 25.4% (16)
Root cause analysis: 25.4% (16)
Code coverage: 22.2% (14)
Defect removal rates: 22.2% (14)
Requirements stability/requirements churn: 17.5% (11)
Hours tested per build: 11.1% (7)


MM4. What methods/metrics do you use to evaluate the project status? (Comments from respondents.)

• “Too many.”
• “Bug rate trends and test case coverage (feature and full regression).”
• “Test case progress, defect status.”
• “Earned & burned.”
• “Number and severity of remaining open bugs. ‘Finger in the air’ (tester intuition that some areas need more testing).”
• “Executed test cases.”
• “Defect counts and severity along with test case completion.”
• “Bug/defect/issue metrics (total # open, # closed, # high priority, etc.).”
• “Test case progress, defect density, and requirement coverage.”
• “Requirement coverage and tests completed.”
• “Bugs found vs. fixed.”
• “Test coverage and defect metrics.”
• “Defects open along with test case execution progress.”
• “None.”
• “Are all the features ‘tested.’”
• “Track change proposals and outcomes (P/F) for all changes by project, by application, by developer, and by week.”
• “Exit criteria are agreed upon upfront, then metrics report progress against those.”
• “Test case complete %, pass/fail %, # of high-severity bugs.”
• “Schedule deviation, defects detected at each stage of project.”

MM5. Do you collect any metrics to evaluate the focus of the test effort, that is, to audit if you are running the right tests?

Yes: 56.9% (37)
No: 43.1% (28)

Result analysis: It is a step higher in responsibility and ownership when test teams evaluate their own work for effectiveness. Good work!


MM6. Do the metrics you use most help you:

Release better product: 31.3% (20)
Improve the development and test process: 26.6% (17)
Do more effective/efficient testing: 23.4% (15)
They do not help: 18.8% (12)

Result analysis: That over 80% of respondents use metrics to improve is great! More teams could be using metrics to point out problems, improve risk reporting, and give greater visibility into testing. Conversely, if your team is not capturing any measurements, whatever the reason, it is safe to say your work is not respected, no one cares, or the team is purely schedule-driven regardless of what testing finds. I recommend you start a metrics program, if only to improve your own job skills!

MM7. Do you measure regression test effectiveness?

Yes: 59.1% (39)
No: 40.9% (27)

Result analysis: Regression test effectiveness is a growing issue in testing. Numerous teams have been doing large-scale test automation for many years now. Regression suites can become large, complex, or difficult to execute. Many of the regression tests may be old, out of date, or no longer effective; yet teams are often afraid to reduce the number of regression tests.

At the same time, running very large regression test suites can take up too much bandwidth and impede a project. If you are having problems with your regression tests, start investigating their effectiveness.
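There is no single agreed-upon formula for regression test effectiveness, but one simple starting point (an assumed definition for illustration, not something from the survey) is the fraction of regression defects the suite caught before release:

```python
# One possible way to quantify regression test effectiveness
# (an assumed definition for illustration; teams define this differently).
def regression_effectiveness(caught_by_suite: int, escaped_to_field: int) -> float:
    """Fraction of known regression bugs the suite caught before release."""
    total = caught_by_suite + escaped_to_field
    return caught_by_suite / total if total else 1.0

# Example: the suite caught 18 regressions; 6 more were reported from the field.
print(f"{regression_effectiveness(18, 6):.0%}")  # 75%
```

Tracking this per release, alongside suite run time, gives a defensible basis for pruning tests that no longer earn their execution cost.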


MM8. How much of an effort is made to trace requirements directly to your test cases and defects?

The test team enters requirements into a tool and traces test cases and bugs to them: 41.5% (27)
We write test cases and hope they cover requirements; there is no easy, effective way to measure requirements coverage: 18.5% (12)
The product/marketing team enters all requirements into a tool; test cases and bugs are traced back to the requirements; coverage is regularly reported: 15.4% (10)
We do not attempt to measure test coverage against anything: 15.4% (10)
We trace test cases in a methodical, measurable, effective way against code (components, modules or functions): 9.2% (6)

Result analysis: Tracing requirements to test cases does not guarantee a good product. Measuring requirements coverage has become an obsession for some teams.

The big issue for teams doing this is that a test case can only be as good as the requirements. Gaps in the requirements, or incomplete or outright bad requirements, will all but assure a problem product. Tracing test cases to problematic requirements will do no one any good.

The practice can be genuinely useful for measuring requirements churn, bug density, or root cause analysis, and tracing or mapping requirements can be a good method of assessing the relevance of your tests.
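For teams that do trace requirements, the mechanics can be lightweight. Here is a minimal sketch of tagging test cases with requirement IDs using pytest markers; the marker name and the REQ-* IDs are conventions I am assuming for illustration, not a built-in pytest feature.

```python
# Minimal sketch: tracing test cases to requirement IDs with pytest markers
# (the "requirement" marker and REQ-* IDs are illustrative conventions;
# register the marker in pytest.ini to silence unknown-marker warnings).
import pytest

def lock_account(failed_attempts: int) -> bool:
    """Toy stand-in for application code."""
    return failed_attempts >= 3

@pytest.mark.requirement("REQ-42")  # REQ-42: lock account after 3 failed logins
def test_account_locks_after_three_failures():
    assert lock_account(3)

@pytest.mark.requirement("REQ-43")  # REQ-43: two failures must not lock
def test_account_open_below_threshold():
    assert not lock_account(2)

# A simple conftest hook (or a reporting tool) can then list which REQ IDs
# have tests, making requirements coverage queryable instead of hoped-for.
```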


MM9. If code coverage tools are used on your product, what do they measure?

Code-level coverage (lines of code, statements, branches, methods, etc.): 55.3% (21)
Effectiveness of test cases (test cases mapped to chunks/blocks/lines of code): 44.7% (17)

MM10. How do you measure and report coverage to the team?

Test plan/test case coverage: 45.3% (29)
Requirements coverage: 25% (16)
We do not measure test coverage: 15.6% (10)
Code coverage: 7.8% (5)
Platform/environment coverage: 6.3% (4)
Data coverage: 0% (0)

Result analysis: Coverage, however you define it, is crucial to report to the team. It is the communication of where we are testing, and it is the crux of the discussion of what is enough testing.


2011 LogiGear Global Survey 8 – Tools


2010 – 2011 LOGIGEAR GLOBAL TESTING SURVEY RESULTS – TOOLS

T1. What testing-support tools do you use? (Please check all that apply.)

Bug tracking/issue tracking/defect tracking: 87.7% (64)
Source control: 54.8% (40)
Automation tool interface (to manage and run, not write, automated tests): 52.1% (38)
Test case manager: 50.7% (37)
Change request/change management/change control system: 47.9% (35)
A full ALM (application lifecycle management) suite: 19.2% (14)

Result analysis: About 12% do not use a bug tracking tool. This does not surprise me, but it surprises many others that so many test teams do not track their bugs!

About half of the respondents use a test case manager, and the same percentage uses a requirements manager or change control system. Half use an automation tool interface; these tools most commonly contain manual and automated test cases. Yet only about 20% use a full-lifecycle ALM tool. A few years ago this number would have been much smaller.

With each passing year, especially as more teams go Agile or offshore work, this number will increase dramatically.

T2. I would describe our bug (bug, issue, defects) tracking as:

Effective: 37.7% (26)
Very effective; has a positive impact on product quality: 34.8% (24)
Adequate: 20.3% (14)
A mess: 4.3% (3)
Poor: 2.9% (2)
Hurts product quality/has a negative impact on product quality: 0% (0)


T3. What type of bug tracking tool do you use?

Relational database tool (Bugzilla, TrackGear, TeamTrack, Team Test, ClearQuest, home-built web-based or client-server database): 68.1% (47)
ALM tool that includes defect tracking: 18.8% (13)
Excel: 8.7% (6)
Email: 2.9% (2)
We do not track bugs: 1.4% (1)

Result analysis: This is a very positive move in our profession. Just a few years ago, the number of teams using Excel to track issues was significantly higher.

Excel is not an adequate issue tracking tool: it is hard to sort, query, retrieve old issues from past releases, or control and manage access with it. With so many good commercial and open source tools available, there is no reason to be using such a naive system.

T4. How many bug tracking systems do you use during a regular test project?

1: 69.6% (48)
Combination of a tool and Excel and/or email: 14.5% (10)
2: 11.6% (8)
More than 2: 4.3% (3)

Result analysis: The problem of multiple bug tracking tools is common. In this survey, about 30% of teams use more than one bug tracking tool. Most problematic is the almost 15% who use a tool plus Excel and/or email. I see this often in my consulting work, and it always causes headaches.

One team will not use another team’s tool; developers have a work management tool and will not use the bug tracking tool, so the test team has to use two tools; a remote team may not be allowed access to the internal tool, so all their bugs get communicated in Excel and email. It is a management problem, but it also leads to a more devious problem: giving the impression that testing is disorganized.


T5. How do you communicate and manage test cases?

A relational database/repository focused on test case management (TCM, Silk Central, Rational Test Manager, TA, etc.): 41.5% (27)
Excel: 21.5% (14)
ALM tool that includes test case management: 20% (13)
Word: 15.4% (10)
We do not track, communicate or manage test cases: 1.5% (1)

Result analysis: The problem here is that almost 37% of teams are using MS Word or Excel; that is dead data. It is difficult to share, edit, maintain, sort, query, and measure with these programs.

There are so many good test case management tools, some of them open source, that make writing, editing/maintaining, sharing and measuring test cases so much easier. In my experience, there are very few good reasons for not migrating to an easier and more sophisticated tool set.

There are also easy solutions for linking test cases and bug tracking in the same tool. Test teams can graduate to a higher level of management, reporting and efficiency with these tool sets.

T6. How are the test cases used? (Choose the MOST appropriate.)

They are used only for testers to execute tests: 34.30% (24)
They are used to measure and assess test coverage: 30% (21)
They are used to assess proper test execution: 21.40% (15)
They are used to measure and manage project progress: 14.30% (10)
They are not used during the project: 0% (0)

Result analysis: For teams that use test cases only for execution, it may be useful to know that putting them to other uses, such as measuring coverage and tracking project progress, is very common.
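
As a toy illustration of such other uses, the sketch below (hypothetical Python data and field names) derives execution progress, pass rate, and a crude feature-coverage measure from the same test case records testers already maintain for execution:

    # Hypothetical test case records, as a tester might export them.
    test_cases = [
        {"name": "login_ok",       "feature": "auth",     "status": "passed"},
        {"name": "login_bad_pw",   "feature": "auth",     "status": "failed"},
        {"name": "checkout_guest", "feature": "checkout", "status": "passed"},
        {"name": "checkout_saved", "feature": "checkout", "status": "not run"},
    ]

    executed = [t for t in test_cases if t["status"] != "not run"]
    progress = len(executed) / len(test_cases)   # project progress measure
    pass_rate = sum(t["status"] == "passed" for t in executed) / len(executed)

    # Crude coverage measure: features with at least one executed test.
    features = {t["feature"] for t in test_cases}
    covered = {t["feature"] for t in executed}

    print(f"progress: {progress:.0%}, pass rate: {pass_rate:.0%}")
    print(f"feature coverage: {len(covered)}/{len(features)} features")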

T7. If you use a test case management tool, how is it used?

It is used to run our manual and automated tests: 57.40% (31)
It is used only for manual tests: 40.70% (22)
It is used to run only automated tests: 1.90% (1)

T8. If you have experience with success or failure regarding test tool use (ALM, bug tracking, test case management, automation tool interface, other) that is interesting or helpful to others, please write it below: (Comments from practitioners)

• “The best thing to do is manage the progress of the tests and see the bugs. You can measure the project’s health.”

• “I find Bugzilla reporting and commenting adequate communication most of the time. Its only problem is when immediate problems surface – at that point an email to appropriate parties telling them to look at Bugzilla usually works. So does walking over to the developer and showing them the issue.”

• “So far Jira was the best bug tracking tool.”

• “If you want people to use a TCM or bug management tool, make sure it has good performance and it’s simple.”

• “For a large project or program it is crucial to select a single method of tracking defects and what is considered a defect versus an ‘issue.’ This can lead to a great deal of confusion where defects identified as issues are not handled and addressed properly. I worked on a large project in which various efforts had four different ways of tracking defects and issues. The result was that it was hard to assess the overall quality of the product that was being implemented.”

• “Testing should be driven by proven testing methodologies; not by the tool itself.”

• “Generating quality reports can be difficult using bug tracking systems.”

• “Certain automation tools will not be suitable for certain types of projects.”

• “Test case management tools are not integrated with requirements management tools, which is why our test cases are sometimes tested against obsolete functionality.”

• “Rally is very useful.”

• “Process discipline matters more than any tool.”

• “The tool is difficult to use for non-technical team members.”

ABOUT LOGIGEAR CORPORATION

LogiGear Corporation provides testing expertise and resources to software development organizations. Our partners benefit from our seasoned testing staff and facilities, practical training programs, and test support products. We help development teams deliver high quality software, improve time-to-market, and optimize development productivity.

Founded in 1994 as SoftGear technology, LogiGear has built a reputation on partnering with software development organizations to help make the most of outsourcing and staff training solutions. We assist our clients in delivering the best possible quality products while juggling limited resources and schedule constraints.

2011 LogiGear Global Survey 9 – Managers

2010 – 2011 LOGIGEAR GLOBAL TESTING SURVEY RESULTS – MANAGERS

When I created this survey, I expected that the vast majority of respondents would be testers, QA analysts (staff who execute tests), and leads. It surprised me to find that over a quarter of the respondents were managers – QA managers, test managers, etc. Luckily, I had anticipated that enough managers would respond and wrote a few survey questions geared specifically to that group.

1. What phase or aspect of development do you feel the test team at your company does not understand? (you may select multiple answers)

Analysis: That the least understood point is “the test team is not uniquely responsible for assuring (guaranteeing) quality” is not surprising. I hear this very often in my consulting work. It reveals that many people incorrectly think the test team (often the least technically trained, the lowest paid, and involved at too late a stage to make changes) is responsible for quality. This is compounded by the fact that many people do not know the difference between testing, QC, V&V, and QA. There is a long way to go to get the message across that test teams are just one part of releasing a quality product.

It is also important to note risk analysis as the number 2 answer. Risk and risk analysis are being talked about more and more in software projects these days. What is not yet evident is whether this talk of risk translates into real project change.

Schedule necessities and pressure: 25% (6)
That the test team is not uniquely responsible for assuring (guaranteeing) quality: 58.3% (14)
How to measure their test effort: 41.7% (10)
How to communicate their test effort: 25% (6)
How to choose good coverage metrics: 25% (6)
How to analyze and assign risk and priority: 54.2% (13)

2. How do you define success in training?

Generally higher job performance: 47.8% (11)
Greater employee satisfaction: 0% (0)
Skill development, absorption of new state-of-the-practice: 34.8% (8)
More efficient/shorter test cycles: 17.4% (4)
New/more task execution for specific learned skill: 0% (0)

3. How do you and/or your staff normally get trained for the job?

The company provides classes with outside instructors: 28% (7)
The company reimburses for outside classes: 20% (5)
The company provides internal staff training: 40% (10)
The company provides for no training of testers: 12% (3)

4. Does your group have a test team training budget?

Yes: 64% (16)
No: 36% (9)

Analysis: It is remarkable that such a high number of companies have no test team training budget. Training, especially continuous training, is crucial to every aspect of work, from effective task execution to job satisfaction.

5. What types of training have you and/or your group taken within the last two years? (you may select multiple answers)

Programming language: 20.8% (5)
Test productivity/management tool: 41.7% (10)
Automation tool: 54.2% (13)
Test methodology: 79.2% (19)
Test or project process (Agile, RUP, IT governance, etc.): 41.7% (10)

Analysis: It’s encouraging to see the emphasis on methodology and not just on tool training.

6. Do you get all the training you need? If no, why not?

Yes: 32% (8)
No: 68% (17)

Representative sample of responses:

• “Time. We have a small test team that is always busy!”
• “Time to train means taking time from doing work.”
• “Budget”
• “Low budget”

Analysis: That close to 70% of teams do not get the training they need is surprising, since there are so many methods of training available!

Also, 15 of the 17 responses to the question “Why not?” cited a lack of time and/or budget. Management needs to realize that investing time and money in training increases efficiency and effectiveness, ultimately saving more time and money down the road.

7. Does your group share intelligence with other test groups on testing methods, techniques and the effective use of testing tools?

Yes: 75% (18)
No: 25% (6)

Analysis: One of the easiest forms of training is teams sharing methods, tools and practices across their own organization. This response ought to be 100% “Yes”.

8. Does your company have a documented career growth plan?

No, it’s up to the individual: 41.7% (10)
Yes, the company has a career growth plan for testers: 58.3% (14)

Analysis: That 41.7% of respondents answered “No” is troubling. The lack of a career plan hurts morale and retention.

2011 LogiGear Global Survey 10 – Survey Demographics

2010 – 2011 LOGIGEAR GLOBAL TESTING SURVEY RESULTS – SURVEY DEMOGRAPHICS

In this installment of the 2010-2011 Global Testing Survey, we analyze the demographics of the more than 100 respondents from 14 countries.

1. Job Title – for people who test

QA: 12.6% (13)
QC: 1.9% (2)
QE: 3.9% (4)
Tester: 4.9% (5)
Test Engineer: 8.7% (9)
Quality Analyst: 4.9% (5)
V&V Engineer: 1% (1)
Analyst: 1% (1)
Automation Engineer: 3.9% (4)
Test Architect: 1.9% (2)
Senior Tester/Sr. Test Engineer: 5.8% (6)
Developer (who tests their own or others’ code): 2.9% (3)
Contractor full time: 1.9% (2)
Contractor part time: 2.9% (3)
Consultant full time: 4.9% (5)
Consultant part time: 2.9% (3)
Test Lead: 15.5% (16)
QA Lead: 18.4% (19)

Analysis: The most popular job titles for people who test are QA and Test Engineer. 34% of respondents were Leads, split almost evenly between Test Leads and QA Leads. At least in the US, the QA title is disappearing, more commonly replaced by QE or Tester/Test Engineer.

2. Job Title – for Managers

Test or QA Manager: 58.5% (31)
Project Manager: 13.2% (7)
Development Manager: 0% (0)
Division Manager: 3.8% (2)
QA Director: 20.8% (11)
Director of Development: 3.8% (2)

Analysis: More than half of the managers who responded were Test or QA Managers. This makes many of the responses throughout the survey interesting: many of the strategy, team dynamics, and politics-of-testing questions were designed for staff QA/testers but were answered by Leads and Managers, who are very often on the front line of political battles and must defend strategy and time estimates. In total, two-thirds of the respondents to the survey were Leads, Managers, or Directors.

3. How many people are on your in-house onshore or offshore test team, not outsourced?

1 person team: 5.4% (5)
2 – 5: 40.2% (37)
5 – 10: 18.5% (17)
More than 10: 35.9% (33)

Analysis: The respondents to this survey were mainly split between smaller 2 – 5 person teams and teams of more than 10; together these made up 76% of all respondents. This gives the survey a nice balance of answers from large and small teams.

4. What is the main reason you were hired?

SME (subject matter expert/domain knowledge): 9.2% (12)
Technology knowledge: 13.8% (18)
QA/Testing knowledge: 56.2% (73)
General job skills: 20.8% (27)

Analysis: This answer definitely surprised me. Conventional wisdom has it that most testers in the US and Western Europe are hired for subject matter expertise: you hire bankers to test banking software! Conventional wisdom also has it that testers in the low-cost countries (LCCs) to which the US and Western Europe outsource/offshore are hired for technical knowledge. Wrong? That over 56% of respondents were hired for their testing knowledge/skill is great, and points to the value of testing knowledge, tester training, and understanding quality theory and practices. But it is a surprise. I expected subject matter expertise to be the number 1 answer. It is number 4!

5. Which do you feel is most important for testing?

Technical software testing expertise: 18.6% (24)
Domain/subject matter testing expertise, user-focused tests: 29.5% (38)
QA/Testing knowledge: 51.9% (67)

Analysis: This answer is consistent with the previous one.

6. What is the ratio of developers to testers on your product/service?

Analysis: In all my consulting work, the question most commonly asked by directors and non-technical managers is: “What is the average ratio of developers to testers?”

To answer the question, I quote a couple of published company ratios. Microsoft at one point published that they had a 1:1 ratio of developers to testers. But there is no industry standard. Just as banking software is not game software and operating system software is not an e-commerce site, there is no standard ratio. The needs are too different.

A general rule of thumb I use: if you develop internal software (no external users), the ratio of developers to testers will be much higher. If you are a stock brokerage and your software is for your internal staff – stock brokers or fund managers – the ratio of testers to developers is very low. This is referred to as eating your own dog food (pardon the famous expression); this staff does not have much choice. Yet in the same company, the teams that make, for example, the web portal where your customers access their brokerage account information and buy and sell equities will have a higher ratio of testers to developers. These customers, who can move to another stock brokerage, need testing for ease of use, testing to cut the cost of support calls, as well as risk assessment, security and performance assessment, and sometimes evaluation of the user experience against competitors. When you have more demanding customers and more competition requiring higher quality, your ratio will be more testers per developer.

Also affecting the ratio is the platform you support. Even with well-automated projects, when you have to support multiple OSs, browsers, languages, mobile devices, or devices with embedded systems, you will need greater numbers of testers. Consumer products generally have more testers than developers, with reputation, support costs, warranty, and PR – all potential risks with consumers – pushing the number of testers up.

Games have high numbers of testers per developer due to the intense competition in their markets and big revenues.

Among our respondents, 63% have 2, 3, 4, or 5 developers to 1 tester; about 9% have a 1:1 ratio; a total of about 9% have multiple testers to 1 developer; and 16.5% have more than 5 developers to 1 tester.

Looking through the list of companies responding, which will be kept confidential, there are consumer product companies, but the majority of companies make software that stays inside the company, or embedded software, hardware, ERP or B2B products. This majority of companies would have higher multiples of developers to testers.

7. How many years of experience in IT do you have?

Analysis: Considering that two-thirds of the respondents are Leads/Managers/Directors, this makes sense. This is a very experienced group of survey participants.

8. What is the total annual budget for software and application development products and services at your company? (or at the companies to whom you consult)

9. What is the total number of employees in your company?

10. How many employees are in your division?

Analysis: We have a good distribution of very large, big, mid-size, and small companies. I often hear from software development teams: “Maybe they do that at big companies, but small companies like ours can’t afford that many testers, can’t afford to send our test team to training, can’t afford an automation tool, can’t take the time to use a test case manager, we’re different…” In fact, most software development teams, regardless of team size, are very similar. Many of the questions in this survey point to the fact that regardless of company size, the problems and issues are the same.

11. What programming languages are you comfortable with or use in your testing? (you can select multiple answers)

Java: 38.3% (44)
C, C++: 34.8% (40)
C#: 29.6% (34)
PHP: 13% (15)
Visual Basic: 40.9% (47)
Perl: 14.8% (17)
Python: 8.7% (10)
JavaScript: 27% (31)
Ruby: 6.1% (7)
Delphi: 4.3% (5)
Unix Shell Scripts: 18.3% (21)
TCL: 7% (8)
Other: 21.7% (25)

Analysis: This should burst some bubbles of conventional wisdom: a large percentage of testers are comfortable programming, and many in more than one language. Like the conventional wisdom that test teams are hired for domain knowledge (wrong), the notion that testers are not technically proficient is mistaken; that testers seem to be growing more technically proficient by the year is an important move for our industry.

12. Do you feel prepared in your job skills to effectively test?

Not prepared: 3.8% (5)
Somewhat: 7.7% (10)
Partially: 36.2% (47)
Fully: 52.3% (68)

13. Is your job technically challenging?

Yes: 60.2% (77)
Somewhat: 34.4% (44)
No: 5.5% (7)

14. Are you happy in your job?

Yes: 60.6% (77)
Somewhat: 35.4% (45)
No: 3.9% (5)

Analysis: For the above answers, the overwhelming response is that people feel well prepared for their jobs and are technically challenged (two of the four components of knowledge-worker job satisfaction, according to Peter Drucker). Overall, this level of job satisfaction is quite high, especially during the years of global economic slowdown in which this survey was taken.

ABOUT MICHAEL HACKETT

MICHAEL HACKETT, Senior Vice President, has over 18 years of experience in software engineering and the testing of shrink-wrap and Internet-based applications. He has developed for Windows, Macintosh and UNIX operating systems. Michael has helped well-known companies including Palm Computing, Electronics for Imaging, Adobe Systems, CNET, The Learning Company, Power Up Software, Oracle, PC World, ADP, The GAP and The Well produce, test and release applications ranging from business productivity to educational multimedia titles in English as well as a multitude of other languages.

Michael is a founding partner of LogiGear Corporation. Prior to joining LogiGear, he served as Director of Quality Assurance at The Well, an online service that is renowned for its electronic conferencing system. Michael has developed professional training courses dealing with engineering, business communication and computer training. He has also written many instructional manuals used by professional trainers.

He is the co-author of Testing Applications on the Web published by Wiley.

Michael holds a Bachelor of Science in Engineering from Carnegie-Mellon University. [email protected]