
Automating the Software Test Process – Development of an Automatic AT Command Tester

Master’s Thesis

Martin Dahlberg

Christian Åkesson

Supervisors

Thomas Thelin, LTH

Fredrik Jönsson, Sony Ericsson

Department of Communication Systems

CODEN:LUTEDX(TETS-5510)/1-47/(2004) & local 10


Department of Communication Systems

Automating the Software Test Process – Development of an Automatic AT Command Tester –

A master’s thesis work performed at Sony Ericsson Mobile Communication in Lund, Sweden, during the spring of 2004. Authors: Martin Dahlberg (800909-3579) and Christian Åkesson (801002-3359), both students at the Lund Institute of Technology (LTH) at the time.


Abstract

Sony Ericsson Mobile Communication (SEMC) has lately noticed lower quality in their mobile phones with respect to attention (AT)1 commands. As testing in this area is intensified, SEMC wants to automate it, because automatic tests require fewer resources and are more reliable. The manual tests performed during the spring of 2004 occupied four testers for almost one day and covered 135 AT commands out of a total of 256 in that particular mobile phone.

The main objective of this master’s thesis is to meet SEMC’s demand that all AT commands be tested, by developing a new test program that improves their current testing. The focus is on automatic testing, usefulness, reliability and the possibility to extend the software for future needs, i.e. new phone models as well as new AT commands. The analysis/validation consists of an evaluation of the software’s performance compared with the manual tests performed at SEMC today. General conclusions about automating test processes for mobile phone software have also been drawn.

First, software engineering and testing theory were studied. We found it most suitable to use evolutionary development together with spiral testing, meaning that several prototypes of the program were developed and tested along the way. This made it possible for SEMC to refine their requirements gradually, since they did not know exactly what they wanted at the outset. It was also advantageous for us, since we had never developed a complete Windows application before and needed confirmation that we were on the right track. The requirements on the program (called SQAT) were, for example, that it should be written in C++, use AutoMMI as a communication interface, be able to test all available AT commands accurately, and be easy to use and understand. The last requirement made us study user interface design as well, which resulted in a logical structure and message boxes that, when needed, tell the user what happens during the test.

We have estimated the savings for SEMC to be SEK 262 500 per phone model, counting only the reduced workload of performing fifteen tests automatically instead of manually. The amount would be significantly higher if we also took the following into account: finding faults at an early stage, mobility costs, overall higher quality, easier communication and more accurate cost calculations. SQAT has improved the effectiveness of the test process, since the test case writing and the result interpretation are now fixed and defined by experienced AT command testers. The workload for adapting SQAT to a new phone model is estimated at one to five days for an experienced tester who only administrates the test cases; no recompilation is needed. According to the test managers at SEMC, the program will then be used three times a week during approximately six months per phone model. SQAT is also well prepared for future development, since it is written as a test organizer that uses AutoMMI as the interface between the phone and the computer. For example, test scripts that test other things than AT commands may, after small modifications, be written and administrated by SQAT.

1 AT (attention) commands are the foundation of software in mobile phones.


CONTENTS

ABSTRACT ................................................ 3

1. INTRODUCTION ......................................... 7
1.1 BACKGROUND .......................................... 7
1.2 OBJECTIVES .......................................... 7
1.3 METHOD .............................................. 7
1.4 OUTLINE ............................................. 8

2. SOFTWARE ENGINEERING ................................. 9
2.1 THE SOFTWARE PROCESS ................................ 9
2.1.1 The Waterfall Model ............................... 9
2.1.2 Evolutionary Development .......................... 10
2.2 REQUIREMENTS AND SPECIFICATIONS ..................... 10
2.2.1 Definition and Specifications ..................... 10
2.2.2 Requirements Validation ........................... 11
2.2.3 Software Prototyping .............................. 11
2.2.4 Requirements Evolution ............................ 11
2.3 SOFTWARE DESIGN ..................................... 12
2.3.1 The Design Process ................................ 12
2.3.2 Design Strategies ................................. 12
2.3.3 Design Quality .................................... 13
2.4 VERIFICATION AND VALIDATION ......................... 13
2.4.1 Creating Reliable Software ........................ 13
2.5 EVOLUTION ........................................... 14
2.5.1 Software Maintenance .............................. 14

3. SOFTWARE TESTING ..................................... 15
3.1 TEST OBJECTIVES ..................................... 15
3.2 PLANNING AND DOCUMENTATION .......................... 15
3.3 TEST CASE DESIGN .................................... 16
3.3.1 Black-box Testing ................................. 16
3.3.2 White-box Testing ................................. 16
3.3.3 Grey-box Testing .................................. 16
3.3.4 Static vs. Dynamic Testing ........................ 16
3.3.5 Regression Testing ................................ 17
3.4 LIFE CYCLE TESTING .................................. 17
3.4.1 Software Test Plan ................................ 17
3.4.2 Acceptance Testing ................................ 18
3.4.3 System Testing .................................... 18
3.4.4 Integration Testing ............................... 18
3.4.5 Unit Testing ...................................... 18
3.4.6 Coding Phase ...................................... 18
3.5 SPIRAL TESTING ...................................... 19

4. USER INTERFACE ....................................... 19
4.1 THREE BASIC MODELS .................................. 20
4.2 DEVELOPING A USER INTERFACE ......................... 20
4.2.1 Menu Systems ...................................... 20
4.2.2 Information Presentation .......................... 20
4.2.2.1 Text or Graphical Presentation .................. 21
4.2.2.2 Colour in the Presentation ...................... 21
4.3 USER GUIDANCE ....................................... 21
4.4 INTERFACE EVALUATION ................................ 22

5. SEMC’S TEST PROCESS .................................. 22
5.1 AT COMMANDS ......................................... 23
5.2 AUTOMMI ............................................. 24

6. SPECIFICATION OF SQAT ................................ 24
6.1 REQUIREMENTS ANALYSIS AND FEASIBILITY STUDY ......... 24
6.2 ABSTRACT DESIGN ..................................... 25
6.2.1 Select Test Cases ................................. 25
6.2.2 Test Sequence ..................................... 27
6.2.3 Summary of Test ................................... 27
6.2.4 Adding a Test Case ................................ 29
6.2.5 Deleting a Test Case .............................. 29
6.2.6 Log File .......................................... 30
6.3 ARCHITECTURAL DESIGN ................................ 31

7. VERIFICATION AND VALIDATION .......................... 32
7.1 TESTING SQAT ........................................ 33
7.2 REQUIREMENTS ........................................ 33
7.3 SQAT COMPARED TO MANUAL TESTS ....................... 36

8. CONCLUSIONS .......................................... 36
8.1 OUR METHOD .......................................... 36
8.1.1 Evolutionary Development .......................... 37
8.1.2 Software Prototyping .............................. 37
8.1.3 Spiral Testing .................................... 37
8.2 SAVINGS ............................................. 37
8.2.1 Efficiency ........................................ 37
8.2.2 Effectiveness ..................................... 38
8.3 PROGRAM EVOLUTION ................................... 39
8.4 GENERALIZATION ...................................... 39
8.4.1 Automatic Test Processes for Mobile Phones ........ 39
8.4.2 Use SQAT for Other Devices than Mobile Phones ..... 40

WORKS CITED ............................................. 41
BOOKS ................................................... 41
ARTICLES ................................................ 41
WEB SOURCES ............................................. 41
PERSONAL COMMUNICATION .................................. 41

APPENDIX 1 – PROGRAM MANUAL FOR SQAT 1.0 ................ 43
INDEX ................................................... 43
CONSTRAINTS ............................................. 43
File Structure .......................................... 43
Failure When Starting SQAT .............................. 43
Faults within AutoMMI ................................... 44
USING THE PROGRAM FOR DAILY TESTING ..................... 44
Select Test Cases ....................................... 44
Start Test Sequence ..................................... 44
Understand Test Result .................................. 44
Regression Test and Saving a Log File ................... 45
MAINTENANCE OF THE PROGRAM’S TEST STRUCTURES ............ 45
Add a New Phone from Scratch ............................ 45
Add a New Phone by Modifying an Existing Phone .......... 45
Add Test Cases .......................................... 45
Delete Test Cases ....................................... 45
WRITING NEW TEST CASES .................................. 46
Building New Script Files in AutoMMI .................... 46
Testing the Functionality of the AT Command ............. 47


1. Introduction

1.1 Background
When Sony Ericsson Mobile Communication (SEMC) was established in 2001 as a joint venture between Sony’s and Ericsson’s mobile divisions, most of the testing personnel for attention commands remained at Ericsson Mobile Platform (EMP)2, where the products are tested before delivery to SEMC and other companies. AT (attention) commands are the foundation of the software in mobile phones and can change parameters as well as execute actions; see section 5.1 for further information. SEMC came to trust these tests, and in the beginning they only performed tests for in-house developed AT commands. When SEMC adds additional AT commands, only the developers themselves test them. The testing has been partly automated before by EMP and Sony, but no program has been able to test all AT commands accurately. Lately EMP has decreased its testing intensity, and SEMC has consequently noticed lower quality. Before this report and the development of SQAT3, only 135 out of 256 AT commands were tested at SEMC. This test procedure occupied one person for two working days, and because of regression tests the work was repeated about fifteen times for every phone model, very often with different people involved each time. The reasons for automating the testing of AT commands are primarily related to resources: the test department has to start testing all attention commands but will not hire any extra personnel. When new mobile software is to be released it should be tested as much as possible, but the test personnel only have a few days to perform these tests. Automatic testing is much faster than manual testing, so more testing becomes possible. Another reason is that automatic tests always perform test cases and interpret the results in the same way, whereas manual tests can vary between testers and are therefore less reliable. (Jönsson, F. & Töörn, F. Personal communication)

1.2 Objectives
The overall objective is to meet SEMC’s demand that all AT commands be tested and to deliver software that SEMC can use to improve its current testing. The focus is on automatic testing, usefulness, reliability and the possibility to extend the software for future needs, i.e. new phone models as well as new AT commands. The analysis/validation will consist of an evaluation of the software’s performance compared with the manual tests SEMC uses today. We will also discuss the general conclusions that can be drawn about automating test processes for mobile phone software.

1.3 Method
To fully understand the process of software development in general, and the testing process in particular, we have studied books on these subjects; the most important material is presented in the first part of the thesis. An important issue when developing software is to create a good user interface. Since SEMC needs a test program that is user-friendly and easy to use, we have also studied how to develop a good user interface.

2 EMP is Sony Ericsson Mobile Communication’s supplier of the mobile platform. 3 SQAT is the program developed during this thesis work, i.e. a program that automatically tests all AT commands that exist in a mobile phone.


Development of the test program has been carried out in Visual C++4. Since we were not familiar with C++ before, we had to learn its possibilities and how to code in it, primarily by using MSDN5. The program has been built using evolutionary development and spiral testing6: we developed an initial version of the software to test all basic functions, and then expanded and refined it gradually until a final version was finished. Our requirements were enduring, i.e. they were relatively stable over time and derived from the core of the organization. Our design strategy was to design the program in a functional way, and when designing it in a user-friendly way we considered the aspects discussed in the user interface part. Since one of SEMC’s requirements concerned integration with their current test program, AutoMMI7, personnel at the company gave us an informal introduction to that program’s capabilities and shortcomings. Our program uses the interface between computer and mobile phone that already exists in AutoMMI. The validation, i.e. making sure we built the right program, has been performed throughout the development in close contact with our commissioner; every important stage has been validated by SEMC. During these validations we also sometimes received new requirements for possible features, which greatly increased the program’s future usefulness. The validation was carried out gradually during development by our supervisor Fredrik Jönsson and the test manager Fredrik Töörn. Verification has been performed together with SEMC employees at the test department, and we have tested the program with respect to user interface, code structure, flexibility and reliability. Most of these tests were qualitative, leading to long and useful discussions that resulted in improvements of the final program. In the reliability test, we compared SQAT’s results with manual tests performed in parallel on the same phone model that the program was developed for.

1.4 Outline
Introduction – Chapter 1. The introduction contains information about the background and objectives of this thesis work, as well as the methods we used when writing this report and developing SQAT.
Theory – Chapters 2, 3, 4, 5. In order to develop a useful program that fully satisfies SEMC’s needs, we have studied theory on software engineering, software testing, user interfaces and the software process at SEMC.
Specification of SQAT – Chapter 6. This chapter presents the requirements analysis for the developed program and the final design. It also discusses how the requirements changed over time.

4 Microsoft Visual C++ is a development environment for C++ applications. 5 MSDN is the Microsoft Developer Network, a guide for developers of Windows applications. 6 Evolutionary development and spiral testing mean that only a small part of the program is developed at a time; the requirements and functionality are then revised before further development, and this continues as a cycle. Please see the theory sections, chapters 2 and 3. 7 AutoMMI is a test program developed in-house at Sony Ericsson Mobile Communication, see section 5.2.


Verification and validation – Chapter 7. It is very important to be certain that the program is correct, especially when developing a program that will itself test software automatically. This chapter discusses how we performed the verification and validation of our program, SQAT.
Conclusions – Chapter 8. At the end of this report we present the results of our work. We also discuss the advantages of automatic test processes for mobile phone software in general and for AT commands in particular.
Appendix 1. Attached to this report is a program manual describing both the program’s possibilities and its constraints.

2. Software Engineering

2.1 The Software Process
Developing new software is a necessary part of today’s fast-developing world. User demands increase, and to keep a leading position among computer and mobile phone companies it is essential to keep improving the software in use. Similar companies may use different methods to keep their software up to date, but there are four basic activities that have to be carried out in any development process. First, a software specification has to be made; the specification describes the functionality of the software and the constraints on its operation. Once a specification exists, software development starts and it is time to produce the changes and new features described in the specification. Since the software is to be used by a customer, a software validation has to be done; the validation ensures that the software does what the commissioner wants. Finally, the software has to be developed in a way that makes it easy to evolve and to change its features to meet changing customer needs. This activity is often called software evolution. (Sommerville, 1996)

There are different approaches to managing the development process. Keeping the activities above separate, and not starting the next activity until the previous one is finished, is usually referred to as the waterfall model. The alternative is called evolutionary development, where the different activities are carried out simultaneously so that the specification is constantly updated and the changes made are validated. (Sommerville, 1996)

2.1.1 The Waterfall Model
The waterfall model was developed to increase the visibility of a development project, i.e. the structure and documentation of its activities. The stages vary between projects and partly overlap, but usually five major stages can be distinguished. The first step in a development process is a requirements analysis and definition: the objectives and limitations of the software are investigated and made clear to both users and development staff. Next, the software is designed in a way that fulfils its purposes and makes communication with its future users possible. (Sommerville, 1996)


If the system passes the tests and meets its requirements, it can be distributed to the customers. However, the work is not finished here: during installation and maintenance it is important to correct faults that were not discovered during the development stages. One major disadvantage of the waterfall model is that the development process becomes inflexible, since the software requirements are frozen before the design begins; sometimes the users do not know their requirements beforehand. Another disadvantage is that the waterfall model is document-driven, which sometimes makes it impractical, e.g. when developing interactive applications whose user interfaces need elaboration. (Sommerville, 1996; Khalifa & Verner, 2000)

2.1.2 Evolutionary Development
The basic idea of evolutionary development is to develop an initial version of the software, test it, and expand and refine it gradually. There are two subtypes of evolutionary development. Exploratory programming means that the objective of the work is to collaborate with the customers to explore their needs and points of view. Developers start by building the parts of the system for which the customers’ wishes are known; as more information about customer needs is received, the development can move forward. Throw-away prototyping means that the developers build a prototype of the final software without knowing exactly what the customers want; the goal of the prototype is to find out which parts of the customer requirements are poorly understood. (Sommerville, 1996) Evolutionary development is usually more effective, and results in more stable requirements, than the waterfall model when the customers’ needs are unknown. However, there are disadvantages. It is hard to get a clear structure of the work, so the process is not visible, i.e. well-structured documentation is difficult. This kind of development also often requires special skills to produce well-developed software with customer needs in focus. Evolutionary development can be used on small, simple software systems as well as on large, more complex ones. Sometimes evolutionary development is used within a waterfall model, in order to reduce risk in subprojects where the user requirements are weak. (Sommerville, 1996; Khalifa & Verner, 2000)

2.2 Requirements and Specifications

Before designing the software, all the requirements have to be decided and put into a specification. There are four stages through which the requirements are determined. First a feasibility study must be made. The customers' demands are investigated to decide whether new software is needed or not. If the investigation shows that there is no demand for new software, there is no need for the development to go on. Next an analysis is done. By observing existing software systems and asking future customers and users, developers get a picture of the requirements for the software. Definition of requirements is where the information from the analysis is documented. It is important that this document is written so that it can be understood by the end-users. Finally a detailed description of the system requirements, often called a requirements specification, is made. This description is supposed to act as a basis for a contract between developers and customers. While creating the specification, developers also start the design of the software. These four activities are not executed in sequence but are iterated. (Sommerville, 1996)

2.2.1 Definition and Specifications

The definition is an abstract description of the services which the system should provide. The specifications are more precise descriptions of the system's functionality and constraints. The


definition should only specify the external behaviour of the software system and not include the system design. System requirements may be either functional or non-functional. Functional requirements state the services that the system should be able to perform, what it should not do and how it should react in different situations. Non-functional requirements are constraints on the functions offered by the system, for example that the application should be written in a certain programming language. (Sommerville, 1996) The requirements specification adds further information to the definition and explains the services and functions of the system in a technical way. This information is used when coding the application and is very detailed. If the language is abstract, as in the definition, problems may arise when developing the system. Therefore it is important to write the specifications in a technical language which can be understood by the developers. (Sommerville, 1996)

2.2.2 Requirements Validation

It is important to ensure that the requirements fulfil the customers' needs and wishes. Therefore some aspects have to be considered. The validity has to be high, i.e. the software system has to be able to perform exactly what it is supposed to perform. Another important issue is that the requirements should be consistent, i.e. not conflict with each other, and complete, i.e. include all functions intended by the users. (Sommerville, 1996)

2.2.3 Software Prototyping

To be able to make a complete definition and specifications, a prototype is often needed. Making a software prototype is a very good idea if the demands and needs of the users are not completely known. The prototype also works as a validation of the initial requirements definition and specifications. The benefits of developing a prototype early in the software process are many. Misunderstandings between developers and users are identified when testing the functions of the system. Missing or confusing services are detected when users are able to try the software prototype. When developing the prototype, developers may find incomplete details in the requirements definition and specifications, which makes it easier to change the requirements and develop much better final software. (Sommerville, 1996) Besides being a good way of validating the requirements, the prototype may also be useful in other ways. Letting future users test the prototype software can be seen as user training as well as system testing. All these benefits mean that a prototype can be seen as a way of reducing risks in the software development process. However, the prototype process is often expensive and requires much effort. Still, a main reason for developing a prototype is to save money: late changes to a product are much more expensive than early changes, and faults are discovered early by developing a prototype. After deciding what services the prototype should have, the tasks of creating it and distributing it to future users come into focus. But to gain as much as possible from the prototype, the user tests have to be carefully evaluated. The disadvantages are that prototyping is costly and takes time, which does not always exist in a short software project. (Sommerville, 1996)

2.2.4 Requirements Evolution

It is important that the software requirements are made so that they may easily be changed over time. Demands and needs of users may change during the development of the software system and it is therefore important to keep the specification updated. A change of requirements may also be forced by programming constraints, user tests and feedback from


other phases in the development, e.g. testing. Requirements evolution is often seen as an issue to be solved and eliminated rather than as a natural feature. (Anderson et al., 2002) From an evolutionary perspective there are two sorts of requirements. Enduring requirements are stable over time and are derived from the core activity of the organisation. The opposite, volatile requirements, are likely to change during the system development. (Sommerville, 1996)

2.3 Software Design

One of the most important issues when developing software is to design it in a way that suits the users as well as possible. Any design must be handled in three stages. First, developers have to study and completely understand the problem. It is important to view the problem from different angles to gain different insights. Without this understanding, effective software design is very hard. Next, all possible solutions have to be identified. The choice of solution depends on the designer's experience, the resources available and how simple the solutions are. Finally, designers must describe each detail of the chosen solution, first writing an informal description to discover any faults in the design, and then a formal one. (Sommerville, 1996)

2.3.1 The Design Process

The design process involves developing several models of the system at different levels of abstraction. Early versions of the system are developed through the process, resulting in a final version ready to be implemented. There are several stages of the design process. These are not performed in sequence, but in parallel.

• Architectural design describes how all sub-applications are connected and how they communicate.

• Abstract specification is a non-detailed description of the application's functions.

• Interface design shows what the application will look like to the user.

• Component design describes which components are used.

• Data structure design is a description of all variables and procedures within the application.

• Algorithm design shows how the solutions are made from a programming perspective.

These stages are repeated for every sub-system until a complete system is designed. Designs are documented in a set of design documents that describe the design for programmers and other designers. There are three main types of notation used in design documents. Graphical notations are used to display the relationships between the components and to relate the design to the real-world system it is modelling. Graphical notations are good for giving an overall picture of the system. Program description languages are based on programming languages, but also allow explanatory text. These languages allow the intention of the designer to be expressed and not only the details of how the design is to be implemented. The last type of notation is informal text. Here the information that cannot be expressed formally can be written. (Sommerville, 1996)

2.3.2 Design Strategies

There are two main strategies when designing software.


• Functional design means that the system is designed from a functional viewpoint. Developers start with a high-level view and refine it into a more detailed design.

• Object-oriented design means that the system is viewed as a collection of objects rather than as functions.

Different development projects require different design strategies. However, it is important to understand that functional and object-oriented methods are complementary rather than incompatible. When developing large systems, different design approaches may be used on different subsystems within the system. (Sommerville, 1996)

2.3.3 Design Quality

The quality of the design is always crucial for success. There are no absolute measures of quality, but there are some characteristics which may be used to evaluate it. (Sommerville, 1996)

• Cohesion of a system is a measure of the closeness of the relationship between its components. All components should contribute to the implementation and play a role in the system.

• Coupling is an indication of the strength of the connections between components.

• Understandability is an important characteristic for a system design. Anyone who works with the system has to be able to understand the design specification.

• Adaptability of a design is important if there should be a possibility to change the design. Adaptability is high when the system's components are loosely coupled. To be able to change the design, the design process also has to be well documented and easy to understand.
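As an illustration of the coupling and adaptability characteristics above, consider the following minimal C++ sketch (the class names are our own and do not come from the thesis): a component that depends only on an abstract interface can have its collaborator replaced without being changed itself, which is exactly the loose coupling that keeps adaptability high.

```cpp
#include <string>

// Abstract output interface: the report generator below is coupled
// only to this abstraction, never to a concrete output medium.
class Output {
public:
    virtual ~Output() = default;
    virtual void write(const std::string& line) = 0;
};

// One concrete component; others (file, screen, network) could be
// added later without touching generateReport -- high adaptability.
class StringOutput : public Output {
public:
    std::string buffer;
    void write(const std::string& line) override { buffer += line + "\n"; }
};

// The generator never names a concrete output class (loose coupling).
void generateReport(Output& out) {
    out.write("Test run: 135 commands");
    out.write("Result: passed");
}
```

Because `generateReport` holds no reference to `StringOutput`, swapping the output medium changes one line at the call site and nothing in the generator itself.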

2.4 Verification and Validation

Before releasing software it has to be verified and validated to ensure that it meets its requirements and works properly. Verification can be explained as building the program right and validation as building the right program. Valid software meets the specifications and needs of the customers. The verification process checks the correctness of the structure and the code of the program. It is important to consider both verification and validation early in the development process, already in the design phase. Late changes made when discovering faults require more time and resources than early changes do. (Dorothy, 1993)

2.4.1 Creating Reliable Software

Dependable systems are systems which have critical non-functional requirements for reliability, safety and security. A dependable system is, in other words, a software system which can be trusted. Reliability is the most important characteristic, and it is usually defined as the probability of failure-free operation for a specified time in a specified environment for a specific purpose. Safety and security are important for certain software applications but will not be described further here. Designing reliable software with proof, however, does not guarantee reliability in practical use. One reason is that the specification may not reflect the real requirements of the system users. Another may be that the proof itself contains faults. (Sommerville, 1996)


The reliability of a system is often measured using information about how the system acts and reacts. There are at least three kinds of measurements which can be used when determining the reliability of a system.

• The number of failures given a number of inputs.

• The time between system failures.

• The time to restart or repair the system when a failure has occurred.
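The three measurements above are simple to compute from recorded failure data. The following C++ sketch is our own illustration (the function names are not from the thesis) of how each measure could be derived, assuming failures are logged with timestamps and repair durations:

```cpp
#include <vector>
#include <cstddef>

// Measure 1: number of failures per processed input (failure rate).
double failureRate(std::size_t failures, std::size_t inputs) {
    return inputs == 0 ? 0.0 : static_cast<double>(failures) / inputs;
}

// Measure 2: mean time between failures, from the timestamps
// (e.g. in hours) of consecutive failures.
double meanTimeBetweenFailures(const std::vector<double>& failureTimes) {
    if (failureTimes.size() < 2) return 0.0;
    double span = failureTimes.back() - failureTimes.front();
    return span / (failureTimes.size() - 1);
}

// Measure 3: mean time to restart or repair, from the durations of
// the individual repair actions.
double meanTimeToRepair(const std::vector<double>& repairDurations) {
    if (repairDurations.empty()) return 0.0;
    double sum = 0.0;
    for (double d : repairDurations) sum += d;
    return sum / repairDurations.size();
}
```

As the text notes, such figures capture only the probability of failure, not its consequences, so they are a starting point for quantitative reliability specifications rather than a complete picture.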

These three measures only consider the probability of a failure and not the consequences. (Sommerville, 1996) Reliability has to be considered already when creating the requirements specification. Often reliability requirements are expressed in an informal and qualitative way. To be able to test that the reliability specifications are fulfilled, the requirements should be quantitative. Therefore it is important to create specifications that are testable. One way of deciding these specifications is to use the three measurements above. It is also important to consider the consequences in order to state the most essential specifications. (Sommerville, 1996) Reliability in a software system can be achieved with three strategies. (Sommerville, 1996)

• Fault avoidance means that the implementation process should be organised with the objective of producing fault-free systems.

• Fault tolerance is a strategy that assumes that some faults always remain in the system. The system is created in a way that it can work even if some faults occur.

• Fault detection means that faults are detected before the software is put into operation. In the system validation methods are used to discover any faults which remain in the system after implementation.
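The fault tolerance strategy above can be sketched in a few lines of C++. This is a minimal illustration of our own (the helper name and the retry policy are assumptions, not from the thesis): the caller accepts that an operation may fail transiently and retries it a bounded number of times, so a remaining fault does not necessarily stop the system.

```cpp
#include <functional>
#include <stdexcept>

// Runs an operation that is assumed to fail occasionally. Transient
// faults are tolerated by retrying; only after maxAttempts failures
// is the fault propagated to the caller.
int runWithRetry(const std::function<int()>& operation, int maxAttempts) {
    for (int attempt = 1; ; ++attempt) {
        try {
            return operation();          // success: the fault was tolerated
        } catch (const std::exception&) {
            if (attempt >= maxAttempts)
                throw;                   // persistent fault: give up
        }
    }
}
```

A real fault-tolerant design would add logging and backoff, but the structure -- assume faults remain, keep working anyway -- is the essence of the strategy.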

2.5 Evolution

Every software system has to be changed. Whether the request for change is triggered by users, management or customers, the system has to evolve. New hardware also creates a demand for new software. Evolution may also be caused by faults, undiscovered during system tests, appearing when the system is used by the customers.

2.5.1 Software Maintenance

The process of changing a system after it has been delivered is called software maintenance. The changes range from simple coding faults to design faults or even requirement faults, which may require a new requirements analysis. Maintenance is crucial for the survival of the software system. There are three different types of software maintenance.

• Corrective maintenance means fixing reported faults in the software. Coding faults are relatively cheap to repair, design faults are more expensive, and faults in the requirements are most expensive to fix.

• Adaptive maintenance derives from a change in the environment, e.g. a new operating system or a new hardware platform. The software functionality does not change very much.


• Perfective maintenance means implementing new system requirements generated by the customers as their organizations change.

When developing software it is important to try to look into the future to predict future needs and errors. The cost of changing a software system after it has been finished and released is almost always greater than the cost of changing the system during the development process. (Sommerville, 1996)

3. Software Testing

Testing software is at least as difficult as developing software. Even when software has been fully tested, some faults still remain. Software testing is associated with major costs, whose magnitude depends on when a fault is detected and corrected. Faults detected after the program is operational can incur costs that are 10 to 90 times higher than those of faults detected during the design phase. Two-thirds of the faults introduced during the design phase are usually not detected until the software is in use, which makes it very important to have good testing routines throughout the development process. About 40 per cent of the total time and effort spent during software development concerns testing. (Van Vliet, 1993) Testing is a paradox: if all tests pass, people think the software has no faults, even though the tests may simply have failed to locate them. On the other hand, if several faults are found during the tests, no one will really trust the software even after the faults are corrected. When testing it is vital to construct accurate test cases that find faults. (Dorothy, 1993)

3.1 Test Objectives

Error, fault and failure denote different malfunctions. An error is a human activity that results in software containing a fault, i.e. a fault is a symptom of an error. A failure is the result of this error and fault, i.e. the sickness caused. Note that one failure may be caused by several faults. To decide whether the result of an action constitutes a failure or not, the tester needs a reference, which can be a specification, a user manual etc.; hence the test can never be better than the reference. This matter makes a distinction between verification and validation. Verification responds to the question: is the system correctly built, in accordance with the specifications? Validation, on the other hand, responds to whether it is the right system. (Van Vliet, 1993)

3.2 Planning and Documentation

It is essential to plan and document all activities that concern testing, for the same reasons as for the rest of the software development process. IEEE (Institute of Electrical and Electronics Engineers) suggests that Standard 1012 should be used, because it handles both verification and validation activities for a waterfall-like life cycle. The following phases are identified. (Institute of Electrical and Electronics Engineers, 2004-03-15)

• Concept phase structures the idea and puts it on paper.

• Requirements phase analyzes all demands and converts them into requirements.

• Design phase will result in a detailed description of the program.

• Implementation phase is when the application is coded.

• Test phase ensures that the validation and verification is accurately performed.

• Installation and checkout phase is when the user receives the application and starts to use it.


• Operation and maintenance phase is an ongoing activity as long as the application is in use.

Already in the concept and requirements phases the plan for verification and validation is made. This Test Plan describes the scope, approach, resources and schedule of the intended test activities. Four additional documents are also produced over time: Test Design, Test Cases, Test Procedures and Test Reporting. (Van Vliet, 1993)

3.3 Test Case Design

An important and often very useful test technique is ad-hoc testing, or fault guessing, but since it relies on inspiration, creative thinking and brainstorming it is not very formal. Instead, relevant test cases are based on detailed specifications including descriptions of the functionality, inputs and outputs. This way the test cases become a part of the overall system documentation. (Lewis, 2000)

3.3.1 Black-box Testing

Black-box testing tests the program's functionality against its specification; therefore it is also called functional testing. An example may be a driver testing a new car, not knowing anything about how it works other than how to use its functions. The advantage of this test is that it is natural and easy to perform; its major disadvantage is that it is not complete, because it is not feasible to try every possible input combination since there are too many. Another disadvantage is that since it does not concern the internal structure of the program, there may be undetected faults inside the program after testing. (Lewis, 2000)
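A black-box test in the thesis's domain might look like the C++ sketch below. The validator is a deliberately simplified stand-in of our own (real AT command syntax is richer than a plain "AT" prefix check); the point is that the test cases are derived only from the stated specification, using typical and boundary inputs, without reading the code.

```cpp
#include <string>

// Unit under test. Assumed specification (our simplification):
// a valid AT command begins with the prefix "AT".
bool isValidAtCommand(const std::string& cmd) {
    return cmd.size() >= 2 && cmd[0] == 'A' && cmd[1] == 'T';
}

// Black-box test cases: chosen from the specification alone.
bool blackBoxSuitePasses() {
    return isValidAtCommand("AT+CGMI")     // typical valid input
        && isValidAtCommand("AT")          // boundary: shortest valid input
        && !isValidAtCommand("A")          // boundary: one character short
        && !isValidAtCommand("")           // boundary: empty input
        && !isValidAtCommand("XT+CGMI");   // invalid prefix
}
```

Note how the suite samples the input space rather than exhausting it, which is precisely the incompleteness the text describes.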

3.3.2 White-box Testing

In white-box testing the tester examines the internal logical structure of the program. The advantage is that after the test the code will be logically correct, but the disadvantage is that it does not consider potential faults in the specification. An example of white-box testing is a mechanic who checks the inner workings of a car to see that everything works properly, but does not test it by driving. Worth noting is also that there may be faults in the code even though they will not be found by this test. For example, the code "for i < 10" is logical and hence not flagged as a fault, but if it is supposed to be "for i < 1" the program will not work properly. (Lewis, 2000)
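The loop-bound example in the paragraph above can be made concrete in C++ (the function is our own illustration). Structurally the loop is flawless, so a white-box inspection has nothing to object to; only a functional check against the specification's intended behaviour exposes the fault.

```cpp
// This function is logically and structurally sound: the loop
// initialises, tests and increments correctly, so a white-box
// examination of its control flow finds no fault. Yet if the
// specification said "repeat once", the bound should have been
// i < 1, not i < 10 -- a fault only visible against the spec.
int repeatCount() {
    int count = 0;
    for (int i = 0; i < 10; ++i)   // intended per spec: i < 1
        ++count;
    return count;
}
```

A black-box test asserting the specified result of 1 would fail here, which is why the text recommends combining both perspectives.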

3.3.3 Grey-box Testing

While black-box testing focuses on functionality and white-box testing focuses on internal structure, grey-box testing tries to combine them both. In this case the tester communicates with the developer and thereby makes sure that all the functionality is correct, at the same time as he gets an understanding of the program's code. For example, the car mechanic tests the car by driving it as well as checking the inner workings to make sure that everything has been developed right. (Lewis, 2000)

3.3.4 Static vs. Dynamic Testing

Static testing in general involves checking the code line by line looking for logical faults, while dynamic testing concerns tests performed while the program, or part of it, is running or simulated (Khalifa & Verner, 2000). Pure static testing hardly ever exists, because when checking the code line by line the tester usually simulates the result of the code in his head. (Sternberg & Grigorenko, 2001)


3.3.5 Regression Testing

After finding and fixing a fault, a test has to be performed to make sure the fix has solved the problem. In addition, all other test cases that test the overall performance of the program have to be run again to make sure the new fix does not disturb any other function. Both these kinds of testing are called regression tests. (Kaner, Falk & Nguyen, 1999)
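A minimal C++ sketch of the idea (harness and names are our own, not a real framework): after a fix, the suite contains both the test that exposed the fault and the pre-existing tests, and all of them are rerun together.

```cpp
#include <vector>
#include <functional>

// A named, executable test case.
struct TestCase {
    const char* name;
    std::function<bool()> run;
};

// Regression run: every test must still pass -- the new one that
// verifies the fix, and all old ones that guard existing behaviour.
bool runRegressionSuite(const std::vector<TestCase>& suite) {
    for (const TestCase& t : suite)
        if (!t.run())
            return false;   // the fix failed, or broke something else
    return true;
}
```

In practice a framework would report which test failed; the essential point is that the whole suite, not just the new test, is executed after every fix.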

3.4 Life Cycle Testing

To avoid programmers testing their own programs, a separate quality assurance organization is required. The organization uses all the program's documents in order to make test plans, test cases and test specifications. The reason is that it would be hard for a programmer to look at the program from a different perspective and to objectively trace faults. Another problem may occur if the programmer has misunderstood a requirement, and hence will not consider it a fault when testing the program. Life cycle testing focuses on the importance of continuous testing procedures throughout the development process. Already in the requirements phase a test plan should be made, which is an organization of the future testing work. During the remaining development phases before coding, the quality assurance organization makes the test plan more detailed and also creates test cases, test scripts and a list of expected results. Figure 3.1 shows how the verification corresponds to the development. As the development proceeds from user requirements to coding, testing personnel expand the test plan with the future test cases for acceptance, system, integration, and unit testing. When the coding has been done, test personnel use the test plan in order to perform the tests from unit testing to acceptance testing. (Lewis, 2000)

Fig. 3.1 Development Phases vs. Testing Types (Lewis, 2000). [The figure pairs each development phase, on the development side, with the test level that verifies it, on the test personnel side: user requirements with acceptance testing, logical design with system testing, physical design with integration testing, and program unit design with unit testing, with coding at the base.]

3.4.1 Software Test Plan

A software test plan frame should be made early, preferably already in the beginning of the development phase. This is because the plan is used as a service level agreement between the quality/testing and development functions, and improves the interaction of the analysis, design and coding. A test plan defines roles and responsibilities, objectives and timetable, as well as which tests are important and where and how they should be performed. It is very important that the plan is accurate, current and simple, has a logical flow, minimizes redundant



testing, demonstrates full functional coverage, clearly documents the test results and is accessible to appropriate persons who can give feedback and approval. Establishing a test plan involves certain steps: define the test objectives, develop the test approach, define the test environment, develop the test specifications, schedule the test, and review and approve the test plan. (Lewis, 2000)

3.4.2 Acceptance Testing

Acceptance testing verifies the requirements phase and confirms that the future software will meet the customers' expectations, and also that the requirements are not erroneous, redundant, too restrictive, contradictory or ambiguous. At the same time it is important to reconsider the performance, security and function definitions as well as the interface documentation. (Lewis, 2000) Incomplete requirements are a common reason for software project failure. (Farre, 2001)

3.4.3 System Testing

The system tests verify the data model, the process model and the linkage between them, where the data model consists of the information needed or the data object types required by the application. The process model is a breakdown representation of the activities into successively more detail. (Lewis, 2000) The tests have to determine that the design meets the requirements, is complete and feasible, and also that it covers fault handling. (Kaner, Falk & Nguyen, 1999)

3.4.4 Integration Testing

The integration tests turn the user's expectations into system specifications which can be used by programmers in this phase. The physical design phase involves making a high-level design, which is a basic structure of the future software, and this has to be tested. The structure must be firm and accomplish the intent of the documented requirements, which are checked without executing the program. (Lewis, 2000)

3.4.5 Unit Testing

Unit tests are performed to verify subunits of the program. The program unit design phase involves detailed design, including specific algorithms and data structures which can easily be translated into a programming language. A good structure only includes three different kinds of control constructs: sequence, selection (if, case etc.) and iteration (while, for), since this makes it possible to use many diverse programming languages. Both black-box and white-box testing are used to determine that the unit works properly. (Lewis, 2000)
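The three control constructs and their unit test can be shown in one small C++ sketch (the unit and its test are our own illustration). The unit is built from sequence, selection and iteration only; the test exercises both branch outcomes as well as the empty-loop case, which is the kind of check a combined black-box/white-box unit test performs.

```cpp
#include <vector>

// A small unit built only from the three control constructs:
// sequence, selection (if) and iteration (for).
// It counts how many values exceed a threshold.
int countAbove(const std::vector<int>& values, int threshold) {
    int count = 0;                  // sequence
    for (int v : values) {          // iteration
        if (v > threshold)          // selection
            ++count;
    }
    return count;
}

// Unit test: black-box in that it checks behaviour against the
// description above; white-box in that the chosen cases exercise
// both outcomes of the if and the never-entered loop body.
bool unitTestCountAbove() {
    return countAbove({1, 5, 7}, 4) == 2   // both branches taken
        && countAbove({}, 0) == 0;         // loop body never entered
}
```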

3.4.6 Coding Phase

By the end of the coding phase the test plan should be completed and it is time to execute the actual tests. Since all four test stages have been prepared in order to verify the four different stages of the development, all test cases are written and should be dynamically executed. Whenever a fault is found by the tests it needs to be well documented. These problem reports should include complete information in order to make it easy to trace the faults and quickly correct them. A report should at least include the following elements (Lewis, 2000):

• Problem identification
• Author
• Release
• Open date


• Close date
• Problem area
• Defect or enhancement
• Test environment
• Defect type
• Who detected
• Assigned to
• Priority
• Severity
• Status
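The report elements listed above map naturally onto a record type. The C++ sketch below is one possible layout of our own (field names and the open/closed convention are assumptions, not prescribed by Lewis):

```cpp
#include <string>

// One possible data layout for the problem report elements above.
struct ProblemReport {
    std::string problemId;       // problem identification
    std::string author;
    std::string release;
    std::string openDate;
    std::string closeDate;       // left empty while the report is open
    std::string problemArea;
    bool        isEnhancement;   // defect or enhancement
    std::string testEnvironment;
    std::string defectType;
    std::string detectedBy;      // who detected
    std::string assignedTo;
    int         priority;
    int         severity;
    std::string status;          // e.g. "open", "fixed", "closed"
};

// Convention assumed here: a report counts as open until a close
// date has been recorded.
bool isOpen(const ProblemReport& r) { return r.closeDate.empty(); }
```

Keeping the report as structured data rather than free text is what makes faults easy to trace, filter and assign, as the paragraph above requires.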

3.5 Spiral Testing

Using the waterfall model in reality is not as easy as it seems. Especially the testing procedures are hard to implement as a guarantee for a satisfied future user. One reason for this is that the users often do not know exactly which requirements they want, and testing against the requirements is the most important thing that will finally decide the quality of the program. In order to avoid these kinds of problems, spiral development and testing can be used, see figure 3.2, which involves the end users more frequently throughout the development of the software. Each step in the life cycle process is performed as usual, but the life cycle is much shorter and repeated several times. The end users first give some vague requirements, and a first prototype is made and tested, following the life cycle process. When the users have reviewed the program and possibly changed or added some requirements, a second life cycle process starts, and so on. (Lewis, 2000)

Fig. 3.2 Spiral testing process (Lewis, 2000). [The figure pairs each development activity in the spiral with a test activity: planning/analysis with test planning, design with test case design, coding with test development, and test/deliver with test execution and evaluation.]

4. User Interface

One of the most important characteristics of a software program is its user interface. Without a well-developed interface, the use of the software gets complicated and the program will not be successful. There are a few characteristics of graphical user interfaces.

• The information is displayed in a window. A program may use multiple windows to display different information at the same time.

• Icons represent different types of information.

• To have a good overview of the possible processes in the program, the use of a menu from which commands are selected is a good solution.

• To be able to choose from the menu or to mark different icons on the screen, the user has to be able to use some kind of pointing device, such as a mouse.



• The interface also includes the graphical elements. (Sommerville, 1996)

4.1 Three Basic Models

As mentioned above, the user interface is very important. Studies have shown that in many systems the user interface incorporates over 30 per cent of the program code. When working with user interfaces, three basic models describing the interaction between human and computer are often used. The user's mental model is a model of the machine that the user creates. The mental model is based on the education, knowledge and characteristics of the user, and is used in the interaction with the system to plan actions. The user's understanding of what the system contains and how it works creates the model. The mental model is also based on the system image, which consists of all the elements of the system that the user comes in contact with. The system image includes physical outlook, style of interaction and the content of the information exchange. Finally, the conceptual model is the model of the system created by the system designers for their purposes. The conceptual model reflects itself in the system's reactions to user actions. (Lewis, 2000)

4.2 Developing a User Interface

There are some main issues to consider when developing a good user interface, and it is important to have all aspects in mind during the development. Even if almost every aspect is considered and treated, the few that are not may impair the entire user interface.

• Use simple and natural dialog. Only relevant and needed information should be displayed in a logical and natural order.

• Speak the user’s language and use phrases and concepts that are familiar to the user.

• Minimise memory load, i.e. the user should not need to remember information from one part of the dialog to another.

• Consistency is important. The user has to be sure that one word or action means the same thing in different dialogs.

• The software must provide feedback at all times. The user should never have to wonder what is going on without being able to get information about it.

• There have to be shortcuts for different functions. This makes it easier for both new and advanced users.

• The error messages have to be clear and easy to understand. Any solutions to the errors should also be displayed. (Lewis, 2000)

4.2.1 Menu Systems

In a menu the user chooses one of several possible options. Menu-based systems have quite a few advantages. For example, the presence of a menu results in less typing for the user. The user does not have to know the command names, since they are displayed in a list. A menu also results in fewer errors caused by the user. Menus can either be pull-down, i.e. the menu name is displayed and by pressing it the entire menu is pulled down, or pop-up, i.e. the menu appears when selecting an option in a form. (Sommerville, 1996)

4.2.2 Information Presentation

Presenting the results and possible errors to the user is one of the most important issues when developing a good interface between user and system. The presentation must be easy to understand and should reflect the knowledge of the users. When deciding how to present information there are a few factors that the designers must take into account, for example whether the user is interested in precise information or just in the relationships between different data values, and whether the information should be presented textually or graphically. (Sommerville, 1996)

4.2.2.1 Text or Graphical Presentation

Information can be presented either as plain text or graphically. If precise numeric information is required and the information changes relatively slowly, the result should be presented as text. If the data changes quickly, or if the relationships between data are significant, a graphical presentation is the better choice. When the result of a system operation consists of very large amounts of information, a graphical presentation is also the best approach. (Sommerville, 1996)

4.2.2.2 Colour in the Presentation

Colour adds an extra dimension when presenting information. Different colours can be used to draw attention to the most important information. However, misuse of colour can be disturbing to the user, which means that designers have to be careful about which colours, and how many, they use. The information has to be understandable even for colour-blind persons, and for persons from different backgrounds who may interpret colours differently. There are some guidelines for using colour in a presentation of information.

• Limit the number of colours and be conservative in how they are used. Designers should not use more than five different colours in one window and not more than seven different colours in an entire system interface.

• Use colour change to show a change in the system status.

• Use colour coding to support the tasks which users are trying to perform, e.g. to highlight the most important figures.

• Be consistent and do not use one colour for different meanings. Designers also have to be aware that colours mean different things to different users.

• Some colour combinations may be disturbing, e.g. blue and red. (Sommerville, 1996)

4.3 User Guidance

It is important to develop good user guidance. When faults occur in the system, the user has to receive the information in an understandable form that can help him or her solve the problem. Error messages should always be polite, concise, consistent and constructive. When errors occur, the system should not present the information in a blaming way, i.e. blame the user for causing the error; instead, it should offer a way to correct the error. A software system should also have a help function that assists the user when a fault occurs. The help system can provide a number of entry points, allowing the user to enter at the top of the message hierarchy and browse for information, or to get an explanation of the error message of a particular command. When a software system is released, a user manual has to come with it. The user manual should include a functional description of the services, an introduction to how to use the software, and a system reference, installation and administrator's manual. (Sommerville, 1996)

4.4 Interface Evaluation

After developing a user interface, an evaluation has to be performed. The evaluation should include user tests and a check that all requirements are fulfilled. The work may involve cognitive scientists as well as graphic designers, and is often very expensive. For smaller companies that do not have the economic resources for a complete evaluation, there are simpler ways to test the user interface. It is possible to use questionnaires to collect information about what users thought of the interface. Observing users as they work with the software can give a picture of how good the interface is. Another way to evaluate the interface is to include code in the software that collects information about the most frequently used functions and the errors that occur. (Sommerville, 1996)

5. SEMC's Test Process

Our gained knowledge in software development and software testing was first used to better understand the testing process at SEMC. New software, used only internally for testing, is released approximately three times a week at SEMC. The test manager for the specific phone then decides whether the AT commands should be tested on the newly released software (figure 5.1). This decision is based on prior test results together with the test resources available at the time. In general, 75 software versions are released per phone model, but AT commands are tested on only 15 of them. The manual tests are time consuming and take 20 hours in total for the four to eight employees involved. The tests are carried out by whichever testers are available, so it is never the same testers every time. A decision has been made to test only the most important AT commands in order to improve the efficiency of the test process; as a result, only 135 of 256 AT commands are tested.

The alternative to the manual tests, developed in this master thesis work, is called SQAT8. SQAT is a dialog-based program that sends test scripts via AutoMMI to the mobile phone. The main advantages of using SQAT instead of manual tests are that the tests are less time consuming and that all 256 AT commands are tested. A more detailed comparison between automatic and manual testing of AT commands is made in the conclusions. Reporting to the development department is also made easier by using SQAT: the test results presented after an automatic test help the tester report specific reasons for faults to the development department.

8 SQAT is a quibble. Automatic Testing of AT makes ATAT, which technically written is AT^2 and coded in programming languages as SQ(AT).


Fig. 5.1. SEMC's Test Process. [Flowchart: a new internal software release arrives approximately three times a week; the test manager decides whether to test. If yes, either 4–8 employees test 135 of 256 AT commands manually with HyperTerminal, with results one day later, or SQAT runs in 1 h 15 min. Test results are reported to the development department: date, tester, release, test environment, severity, priority, defect or enhancement, general explanation.]

5.1 AT Commands

AT stands for attention and is used to tell a modem device that a command will follow; e.g. ATD means attention Dial. Dennis C. Hayes introduced the attention commands when developing the world's first PC modem in 1981, thereby solving the interface problem of allowing any computer with a standard serial port to control the modem functions in software. Hayes' smart modem became the standard by which modem compatibility was measured. (US Internet Industry Association, 2000-02-09)

The AT commands at SEMC reside at a low software level. Most of the commands are added to the platform by EMP; about ten per cent are developed at SEMC. AT commands can be used to configure the phone and the modem to connect via the infrared port or the system bus, but also to request information about the current configuration or operational status of the phone or the modem. In addition, SEMC uses AT commands to test the phone's or modem's access to the network and, when applicable, to request the range of valid parameters for an AT command. (SEMC, 2003-12)

When using AT commands, the built-in modem always terminates each response with a result code that can be either OK or ERROR. An OK means that the attention command and its parameters were valid and that the command has completed execution, while an ERROR means that something has gone wrong. There are several reasons why an error may occur, for example (SEMC, 2003-12):


• There is a fault in the command syntax.
• One or more parameters are outside the permitted range.
• The issued command is not implemented in the built-in modem.
• The command is not appropriate to the service.

But even when an OK is received there may be faults within the AT command; e.g. some commands are correctly executed, but the parameter changes are not correctly stored in the memory.
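The result-code convention above can be checked mechanically. As a minimal illustration (the type and function names here are ours, not SQAT's), a raw modem response can be classified by its final result code; as just noted, an OK is necessary but not sufficient, since the effect in the phone must still be verified separately:

```cpp
#include <string>

// Possible final result codes from the built-in modem.
enum class ResultCode { Ok, Error, Unknown };

// Classify a raw modem response by the result code that terminates it.
// Note that Ok only means the command and its parameters were valid and
// the command completed; parameter changes may still be stored wrongly.
ResultCode classify(const std::string& response) {
    if (response.find("ERROR") != std::string::npos) return ResultCode::Error;
    if (response.find("OK") != std::string::npos) return ResultCode::Ok;
    return ResultCode::Unknown;
}
```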

5.2 AutoMMI

The interface between the computer and the mobile phone at SEMC is handled by a program called AutoMMI. The program allows the user to create test cases with scripts consisting of AT commands and other commands that control and read values in the phone, send them to the phone, and receive a response indicating whether the script was executed in the phone or not. AutoMMI can also give an exit value before closing, which has been used to send information back to SQAT about the test performance. Some constraints are still present in AutoMMI concerning the testing of attention commands. Sometimes, when the execution of a command results in a direct FAULT, the program stops the script and a manual close-down is required. According to the AutoMMI specification it should also be possible to force the program not to create a log file, but the file is created every time despite our correct settings. These faults have been reported to the developers of AutoMMI. AutoMMI is an internal program at SEMC and may not be used outside the company, which is the reason for not discussing it in further detail in this report. (Sjöstrand, P. Personal Communication)

6. Specification of SQAT

A well-defined specification was required for the later verification and validation of SQAT. We decided to first perform a requirements analysis together with a feasibility study. This resulted in an abstract design, which was easy for SEMC to understand. Later we specified the program in more detail in an architectural design.

6.1 Requirements Analysis and Feasibility Study

The requirements in this work come from SEMC. The main focus of the work was the development of an automatic AT command tester, to be released by the end of June 2004. Our objectives have served as our definition9, and the detailed requirements and specifications for the program are described below.

1) The test program should be written in C++.
2) AutoMMI will primarily be used as a communication interface.
3) The program must be able to test all current AT commands accurately.
4) It should be possible to easily extend the program with future test cases and new AT commands.
5) Users should be able to trace faults when testing AT commands and to understand what the program actually has tested.
6) The program must be easy to use and understand.
7) The results of tests must be presented in a logical and user-friendly way.
8) A manual, helping users understand the possibilities and limitations of the program, should be included.
9) The user should be able to decide, before running a test, whether testing should stop at faults or not.
10) If possible, the test suite should be loaded from a list of existing AT commands, which makes the program more flexible concerning the addition of test cases.

9 Please see section 1.2 in this report for further information.

We decided to use AutoMMI even though it at first seemed to require a lot of extra work reading log files automatically. This was later solved by ordering a new implementation of AutoMMI, making it possible to send exit values that SQAT could receive after each test case. This new functionality allowed a more precise interpretation of the AT command tests. Requirement number ten, loading a test suite from a list of existing AT commands, has changed over time. In the beginning SEMC wanted SQAT to create this list from the existing AT test case script files. In the feasibility study this turned out to be very hard to implement; instead, SQAT uses a reference file, called the test structure, which is unique for every phone and controls the location of the script files.

6.2 Abstract Design

Before we started programming, we agreed on an abstract design layout together with our supervisor at SEMC, Fredrik Jönsson. The design described below is very much the same as the initial one, even though improvements have been made, especially to the user interface. Our architectural design is described in section 6.3 and a program manual is located in appendix 1. Concerning the user interface, the following issues were in focus. According to the theory there are several other issues to consider, but we chose to focus on the most important ones.

• The dialogs should minimise memory load, i.e. the user should not have to remember any information from one dialog to another.

• The information has to be consistent, e.g. the colour red always means that a fault has occurred.

• There has to be feedback at all times, i.e. the user has to be told what is happening and why it is happening. During the test sequence a dialog gives information about the test process.

• Faults should be presented in an easy and usable way. By presenting the results of a test, SQAT creates an opportunity to report the faults.

6.2.1 Select Test Cases

When SQAT is executed, a message box may appear; this only happens if the program is not located correctly on the computer. AutoMMI requires that file paths contain no spaces, and since all test cases' script files are located together with SQAT, it is important to check this already at start-up. After this the main window, named SQ(AT) > Select Test Cases, is shown, see figure 6.1.


Fig. 6.1. SQ(AT) > Select Test Cases.

Here the user may add a new phone and also change the test structure by adding or deleting test cases. He or she can then check or uncheck specific attention commands in the tree structure. A double click on a command shows a message box with more detailed information about that specific command, as seen in figure 6.2.

Fig. 6.2. Message box with information about a specific AT command.


Before the test can start correctly, AutoMMI settings have to be given, and once again message boxes appear if the user makes any mistakes. It is also recommended to read the Required Test Environment information, which is connected to each phone model and can be edited by a tester. It is also possible to load a log file for a more user-friendly presentation than reading the file in a text editor, or to view the last results for the preloaded phone. Clicking Run New Test starts a test sequence testing all checked AT commands.

6.2.2 Test Sequence

The test sequence, see figure 6.3, runs automatically, all the time showing the user what it is doing and the results. All test cases that need tester interaction, such as receiving an incoming call, are executed at the very beginning. The tester may pause and continue the test (the same button, but the label differs), as well as stop the test sequence by pressing Stop and Exit or Stop and Show Errors, which shows the errors that have appeared so far. If a script in AutoMMI causes the phone to stop responding, SQAT pauses the test sequence, making it possible for the tester to manually restart the phone and continue the test. When the test is finished or stopped by the user, a detailed summary of the test is presented.

Fig. 6.3. SQ(AT) > Running test…

6.2.3 Summary of Test

The summary of a test is accessible by performing a test, by loading a log file or by looking at the last test sequence's results, as seen in figure 6.4. It is also possible for users to see not only the faults but the results for all tested attention commands, by clicking Show Tested AT; this redraws the tree view to look like figure 6.5. A double click on an attention command displays more information about the result and gives the user detailed reasons why the test failed, unless the test was OK, see figure 6.6. To the right in figures 6.4 and 6.5 there are also general explanations of why a fault may occur. This is our way of passing on our experience of testing AT commands to the tester using SQAT.


Fig. 6.4. SQ(AT) > Summary of test showing only faults.

Fig. 6.5. Summary of test showing all tested AT commands.


Fig. 6.6. Detailed test report for a specific fault. This differs between the different kinds of faults, see appendix 1.

Before ending this dialog the user may save a log file, or test the commands that failed once more to make sure the faults are reproducible.

6.2.4 Adding a Test Case

The user may add test cases (figure 6.7) by entering the parameters needed by SQAT, i.e. the name of the AT command, a short title, the script file name, the test group it belongs to and whether or not the script requires tester interaction.

Fig. 6.7. SQ(AT) > Add Test Case for phoneX.
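The parameters above map naturally onto a small record per test case. The following sketch is illustrative only (the field names and example values are ours, not SQAT's internal representation):

```cpp
#include <string>

// The information SQAT needs for one test case, as entered in the
// Add Test Case dialog.
struct TestCase {
    std::string atCommand;     // name of the AT command, e.g. "ATD"
    std::string title;         // short descriptive title
    std::string scriptFile;    // AutoMMI script file name
    std::string testGroup;     // test group the case belongs to
    bool needsInteraction;     // true if the tester must act, e.g. answer a call
};
```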

6.2.5 Deleting a Test Case

The user may delete a test case that he or she finds unsuitable for the specific phone model (figure 6.8). This operation is very useful when adjusting a test structure to fit a new phone model with a slightly different AT command specification.


Fig. 6.8. SQ(AT) > Delete Test Case.

6.2.6 Log File

The log file has the same content as the test structure file (figure 6.9), and both are readable in a normal text editor such as NotePad10. In the log file each test case is separated by a line, and every case is described with respect to its settings and its performance during the last test. A log file may later be loaded into SQAT again, as in figure 6.4, where it is possible to perform a regression test on the reported faults.

10 NotePad is a simple text editor within Windows XP operating system.


Fig. 6.9. Log file for phone X May 11, 2004.

6.3 Architectural Design

First we designed the dialog structure and decided to have a test structure (figure 6.10) as the interface between all dialogs. This test structure is specific to every phone model and works as a memory, since it contains all information about the test cases and the last test that was performed (figure 6.11). We decided to make the test structure file as readable as possible, which made it possible to give the log file (figure 6.9) exactly the same composition. This gave us two major advantages: if the tester forgets to save the log file it is possible to copy the test structure file, and it is also possible to load a log file and perform regression tests, since it contains all the information that SQAT needs.


Fig. 6.10. Dialog Structure.

Fig. 6.11. Test Structure.

All interactions between SQAT and AutoMMI are handled by the test sequence object (figure 6.12). It sends a command prompt action command to AutoMMI, including all parameters needed, i.e. script file location, log file location, com port and bit rate. SQAT then waits for AutoMMI to finish by observing the processes running on the computer. When AutoMMI has finished, SQAT saves the exit value and interprets the result of the specific test case. After a two-second pause, the next test case is performed. The pause makes it possible for the tester to notice the result of the last test case and also to pause or stop the test sequence.

Fig. 6.12. Interaction between SQAT and AutoMMI
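The loop the test sequence object performs can be sketched as follows. The command-line option names below are invented for illustration (the real AutoMMI parameters are internal to SEMC); in SQAT, the resulting command is launched as a process, after which the program waits for the process to terminate and reads its exit value:

```cpp
#include <string>

// Build the command prompt action command for one test case. In SQAT
// this string is executed, the AutoMMI process is observed until it
// terminates, its exit value is saved and interpreted, and a two-second
// pause follows so the tester can react before the next case starts.
std::string buildAutoMMICommand(const std::string& scriptFile,
                                const std::string& logFile,
                                const std::string& comPort,
                                int bitRate) {
    return "AutoMMI.exe -script " + scriptFile +
           " -log " + logFile +
           " -port " + comPort +
           " -rate " + std::to_string(bitRate);
}
```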

7. Verification and Validation

Verification is building the product right and validation is building the right product. (Institute of Electrical and Electronics Engineers, 2004-03-15) Both verification and validation are very important when developing software, especially when the software is to be used for testing other software.

7.1 Testing SQAT

The testing of SQAT has been performed together with a few of SEMC's employees. Each new version of the program, approximately one per week, has been sent to testers, who have tested it. This means that we have used spiral testing to gradually develop the program. The testers were told to try to find errors and to use the program in every possible way, to really discover things that were missing or needed to be corrected. The program manual has been the only instruction and information about the program given to the testers; in other words, the tests have been black-box tests. By letting the manual provide all the information needed, the tests have also helped to improve the manual.

We also received feedback on the user interface. The testers often asked spontaneous questions about how to perform a certain action in the program, giving us a hint that the user interface needed improvement for the testers to fully understand the actions. Since the user interface was an important issue in the development process, we improved it gradually in parallel with adding new functions. As a result of the tests, several improvements have been made to the functionality of SQAT.

The validation was performed through discussions with the testers after the user tests. This gave us valuable confirmation that we were developing the right program in a good and logical way. It was not possible to perform more formal tests than these; all employees are very busy and do not have time to participate in larger user tests. Despite that, we feel that the program has been tested enough in real-life situations. We have also evaluated SQAT against the requirements and against the manual tests performed at SEMC before SQAT was developed. These evaluations are described below. Our supervisor Fredrik Jönsson has performed grey-box and regression tests during the whole development process.

7.2 Requirements

After validating SQAT against the requirements together with Fredrik Jönsson11 at SEMC, we ended up with the following information.

The Test Program Should be Written in C++

We have been using Visual C++ when developing SQAT, and hence it is compatible with all new Windows operating systems, e.g. Windows XP. By testing the program's functionality on several different computers with different configurations, we are confident that it is reliable.

AutoMMI will Primarily be Used as a Communication Interface

All test scripts have been written in AutoMMI. The tester does not have to be familiar with AutoMMI at all in order to use our program properly. SQAT only needs access to the program and its location to be able to send commands via AutoMMI to the phone. This can easily be seen in the program architecture, see section 6.3.

The Program Must be Able to Test All Current AT Commands Accurately

11 Fredrik Jönsson works with software testing and was our supervisor at SEMC during the complete project.


In parallel with developing SQAT, we have written test cases for all AT commands available in a phone model soon to be released. By keeping in close contact with the testers testing AT commands manually, we know that our program tests the commands accurately. The scripts have been carefully tested several times with grey-box testing to ensure that the testing is accurate and can be trusted. Less than ten per cent of the commands differ from model to model, so it will be easy to adjust SQAT to fit a new phone model in the future.

Possibility to Easily Extend the Program with Future Test Cases and New AT Commands

The functionality for adding test cases to the program, without the need for recompiling and restarting, fulfils this requirement. SQAT can also be extended with new phone models. Depending on which AT commands differ, we estimate the workload of adjusting SQAT to a new phone model to be from one to at most five days. We have already adjusted the program to fit four other phone models.

Ability for Users to Trace Errors When Testing AT Commands and the Ability to Understand What the Program Actually has Tested

Having written several hundred test scripts and gained experience of the test results, we decided to use four possible exit codes for AutoMMI. Fredrik Jönsson has agreed on these four exit codes, which lead to different result presentations.

• Process Error = something went wrong with AutoMMI. Perhaps the wrong settings for AutoMMI were made, or AutoMMI was terminated by the user.

• TestOK = no problems whatsoever were found.

• No AT recognition = the phone does not support this AT command accurately. Besides the AT command not being supported at all, there are two other possible reasons for this failure: the AT command may be supported but not have the correct range, or it may not be possible to read parameters in the phone with the AT command.

• AT function error = the AT command is supported but not operating correctly. Reasons for this could be:
> It is not possible to change parameters in the phone
> It is not possible to read parameters in the phone
> It is not possible to correctly relocate in the phone and read parameters via AutoMMI
> The phone is not responding correctly to an AT action command

This could have been made even more explicit with more exit codes, but since it is also possible to test AT commands manually if confusion arises, we found four to be enough. It is easy to report the faults found during a test, and the result presentation fulfils SEMC's needs. We have also used black-box testing to ensure that users with less competence in AT commands can understand the program's test results and its performance.
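The mapping from outcome category to result presentation can be sketched as follows. Only the four categories come from the list above; the type name and message texts are illustrative:

```cpp
#include <string>

// The four outcome categories derived from AutoMMI's exit value.
enum class Outcome { ProcessError, TestOK, NoATRecognition, ATFunctionError };

// Short description shown in the result presentation for each outcome.
std::string describe(Outcome o) {
    switch (o) {
        case Outcome::ProcessError:    return "Something went wrong with AutoMMI";
        case Outcome::TestOK:          return "No problems were found";
        case Outcome::NoATRecognition: return "The AT command is not accurately supported";
        case Outcome::ATFunctionError: return "The AT command is supported but not operating correctly";
    }
    return "Unknown exit value";
}
```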

The Program Must be Easy to Use and Understand


A lot of work has gone into creating a good user interface. Several extra information boxes have been added to make things simpler for the user. The dialog with the user is simple and natural and gives all the feedback needed. Our black-box user tests with different employees also show that there is no problem understanding what SQAT has done, and not done, when a button is pressed. The most common user mistakes have also been in focus when programming, and hence a lot of functionality has been added to the program that checks that the user has followed the program's constraints. We have also put extra information where suitable, to pass on the experience gained when creating all the test scripts to the tester using the program.

By performing black-box user tests during the development, the program has been gradually improved with regard to the user interface. This has resulted in a user interface that is consistent, easy to understand, supportive and with minimised memory load for the user. Consistent, because all actions taken, and the results from them, are presented in the same way with exactly the same words and phrases. Since we have used a natural and simple dialog that speaks the user's language, our testers have found the program easy to use and understand. Everywhere in the program it is possible to double click on items that may contain extra information and further explanations, which supports the user. Finally, we have designed the program in a way that minimises the memory load for the user by giving all information needed at all times.

The Results of Tests Must be Presented in a Logical and User-Friendly Way

To avoid misunderstandings, we have made it explicitly clear to the tester whether a test failed or passed. We have also used colour, i.e. red for failed, to stress faults. Since we have been working mainly among the employees who work daily with testing, we believe that we have constructed a logical sequence that matches the other test activities at SEMC; e.g. we have adjusted the program to the normal error reporting process. Our assumptions have also been tested in black-box user tests among the employees.

A Manual, to Help Users Understand the Possibilities and Limitations of the Program should be Included

The manual (appendix 1) has been tested by employees, and we have improved it gradually after receiving feedback from the users. It has been checked through user tests, it covers all the functions and possibilities of the program, and there is no problem using the program without any training beyond what is in the manual.

Option for User before Running Test to Decide Whether Testing Should Stop at Errors or Not

This requirement turned out to be less important, since SQAT reports during the test sequence and since it is possible to pause or stop at any time. Our supervisor agreed on this.

If Possible Load Test Suite from a List of Existing AT Commands, which Makes the Program More Flexible Concerning Adding Test Cases

This turned out to be very hard to implement; instead we use a reference file, called the test structure, which is unique for every phone model and controls where all the script files are located. According to Fredrik Jönsson, our solution satisfies SEMC's need for flexibility.


7.3 SQAT Compared to Manual Tests

Manual tests of AT commands have been performed at the same time as we have developed SQAT. Since these tests have been made on the same phone prototype as we have worked with, we have easily been able to compare SQAT's performance with the tests the employees have carried out. When testing AT commands manually at SEMC, 135 out of 256 commands are tested, and it is always the same 135 commands. These tests involve about four employees during one day; because they do not work solely with testing AT commands, they spend approximately 20 hours altogether specifically on testing AT commands. It is not the same four testers on every occasion, even though several people have participated in many AT tests. The testing is executed by manually typing commands via HyperTerminal12 and checking the result against an AT specification.

SQAT carries out a test in about one hour and fifteen minutes, testing all 256 AT commands, and only requires a tester for the first five minutes, because some test cases are impossible to automate with only one phone in the test environment; for example, one test case requires that the phone receives an incoming call. In order to compare manual tests with automatic testing with SQAT correctly, we made the comparison on a specific software version in the mobile phone prototype. We have been able to draw the conclusion that SQAT found about seven times more faults in the AT commands, faults that had not been discovered before.

8. Conclusions

There are several advantages to automating the test process for mobile phones. Time-to-market is becoming more and more important, as is high quality, mostly because of increased competition and more sophisticated demands from customers. This requires an efficient and effective test process. With this report and our developed program for testing attention commands, we have shown how automatic testing can meet these future challenges fast and accurately. This is further discussed in the generalization section, 8.4. SQAT will be used at SEMC every time new internal software is released, i.e. approximately three times a week. (Rexelius, U. Personal Communication)

8.1 Our Method

When we set up our objectives for this thesis work, SEMC had just started testing attention commands. These tests were completely manual, and since the workload was increasing all the time, our commissioner wanted to automate this test process. Since we had no experience of AT commands, we started by studying all the theory presented in this report. After that we decided to develop our program evolutionarily, with several prototypes and spiral testing. This section highlights the advantages and disadvantages we have encountered doing it this particular way.

[12] Hyper Terminal is a program in Windows XP for direct communication with external devices via a COM port.


8.1.1 Evolutionary Development

We chose evolutionary development to make things easier for both ourselves and our commissioner. For us, because we had never developed a complete Windows-based program before and had never programmed in C++ at all; evolutionary development made it easier to check that we were doing the right thing, simply by asking SEMC employees, who could explain their requirements once more if we had got them wrong. Since our commissioner did not really know what kind of program was needed at the time, besides that it was supposed to test all AT commands completely, this development model also made it easy for SEMC to redefine requirements throughout the process. The disadvantage of evolutionary development was that the requirements changed a lot over time, which made it impossible for us to know how much work was left at any given time.

8.1.2 Software Prototyping

During the development of SQAT we have made about seven prototypes, which have been used to validate the requirements. The prototypes have often resulted in new requirements that define the general requirements more specifically. This has increased our workload, since it has added extra functionality to SQAT, but we also believe that it has kept us on the right track, so that we have not developed unneeded functionality.

8.1.3 Spiral Testing

Spiral testing was chosen after we decided to use evolutionary development, and it makes it easy to test the program often, at many stages. All tests have been performed using black-box testing, because no one at the SEMC test department tests Windows programs in their daily work, and white- or grey-box testing would have taken too many resources. We found this satisfactory, even though we always had to interpret abstract user errors as technical functional faults ourselves. A key to success was our closeness to the test department and our commissioner, who could always answer our interpretation questions. The main disadvantage of our form of testing has been knowing how much to test at every stage. The first five prototypes were very unstable, which made it hard to separate errors from faults.

8.2 Savings

SQAT is more efficient and effective than manual testing of AT commands. All numbers used in this discussion are validated by employees at SEMC, so we find them to be quite tight and accurate estimates.

8.2.1 Efficiency

Today's manual tests, which involve four testers for five hours each, testing 135 attention commands, can be seen as quite inefficient. Partly because different people are involved, not all of whom have tested AT commands before, which makes them relatively slow; it is hard to study the AT specifications the first time, since more than 300 pages describe the attention commands in the phone that we have tested. But also because of the manual typing needed in Hyper Terminal, which is a time-consuming activity. We think that SEMC would make their testing more efficient in the near future, and therefore we estimate that it would take one tester 16 hours to test 135 AT commands. This test has


to be done about fifteen[13] times for every phone model during the development of its software, which makes a total of 240 hours of testing for one person. SEMC uses consultants on top of ordinary employees in order to easily meet changes in workload, so we have calculated the savings as removed consultancy hours. One consultant costs on average SEK 500[14] per hour at the company, which brings our savings to SEK 120 000 per phone model. If we on the other hand assume that SEMC would really test all 256 commands[15], we can estimate the workload to be almost one week for one experienced tester, about 35 hours. Performing the test 15 times with consultants at SEK 500 per hour makes the saving 35 × 15 × 500 = SEK 262 500 per phone model. The calculations above only take into account the savings from fewer employees and a lower workload. Of course the savings would be much greater if we had also counted the following:

• Finding faults at an early stage – It is much more costly to find faults late in the development process than to find them early. SQAT makes it possible to test all AT commands within one and a half hours whenever new software is released. This short testing time makes it easier to prioritize AT testing at all stages.

• Workplaces – Since SQAT takes over work that has previously been done manually by employees, fewer workplaces are needed.

• Mobility costs – SQAT makes it possible to easily test AT commands at production sites right before the release of a new phone model. The tests can be performed by employees with less training, and the log file can be sent to test personnel at SEMC, who evaluate the results.

• Overall higher quality - SEMC will increase their number of AT tests from 15 to 70 and also increase the number of tested commands on every phone model. This will result in higher quality and better products on the market.

• Easier communication – The testers at SEMC can describe faults more accurately when reporting to Ericsson Mobile Platform (EMP), where the great majority of all attention commands are developed.

• More accurate cost calculations – SEMC will have more accurate information about the quality of the goods they are buying from EMP. This will strengthen their position in negotiations.

8.2.2 Effectiveness

When testing, it is important that the tester fully understands what he or she is supposed to test and how to interpret the results. AT commands in SEMC's mobile phones have been tested manually by different employees, sometimes with several weeks between the tests. This has two obvious drawbacks:

[13] Fifteen times is the average for the phone models of today at SEMC, but it may differ somewhat depending on how many errors they find during the tests. (Rexelius, U. Personal Communication)
[14] SEK 500 per hour is an accurate average according to human resources at Sony Ericsson Mobile Communication. (HR HQ 2004-05-18)
[15] The testers at Sony Ericsson want to test all attention commands three times a week.


• The interpretation differs. An inexperienced tester has sometimes reported a fault that was not actually a fault, and also the other way around[16]. This makes the testing process unnecessarily uncertain.

• The test cases differ. The specification for AT commands is hard to understand, and it is hence difficult for new testers to know what to test. More than 300 pages describe the attention commands, and it is sometimes hard even for experienced testers to understand the specification.

SQAT removes both of these disadvantages, since we as experienced AT testers[17] have written the interpretation into the test cases. This guarantees a uniform testing process, and thus all software releases will be tested in exactly the same way.

8.3 Program Evolution

As with all developed software, it is important to realize that the requirements may change in the future. We consider that SQAT fulfils all the requirements SEMC has for testing AT commands in their mobile phones of today. Should this change in the future, we believe that minor changes to the program will keep SQAT useful for several years. The future needs depend much more on AutoMMI, which is used as the interface between SQAT and the mobile phone. SEMC still has developers assigned to AutoMMI, adjusting it to new phone models, which is why we see a relatively low need for future development of the present functionality within SQAT. On the other hand, it is possible to extend SQAT quite easily and increase its possibilities to test other things than AT commands. For example, it would be possible to write test cases for other parts of the mobile phone's software and use SQAT as the organizer, sending scripts to the phone and presenting the results after the test sequence. The workload for adjusting SQAT to a new phone model is estimated to be between one and five days for one tester. Preferably an experienced tester should perform this update, since their interpretation will be used every time the program is run. Notably, SQAT does not need to be recompiled when new phones are added; only new or changed AT commands' test script files need to be adjusted.

8.4 Generalization

The generalization of our work consists of two major parts: making test processes for new software in mobile phones more automatic, and testing AT commands in other types of devices with SQAT.

8.4.1 Automatic Test Processes for Mobile Phones

SQAT tests all available AT commands in the mobile phones with a high level of accuracy, and we think that similar programs could be developed for many other parts of the software in mobile phones. The time and cost savings are significant, as is the increased quality of the test process. Higher quality in phones, together with more and faster testing, seems set to become more and more important in the future. All this together makes it possible to use the theory and the discussions presented in this report for new development projects automating test processes for mobile phones.

[16] We have examined the manual tests that were performed during the spring of 2004.
[17] We wrote test cases for AT commands during several weeks in the spring of 2004 and have been in contact with specialists within SEMC whenever a question has appeared.


8.4.2 Using SQAT for Other Devices than Mobile Phones

Attention commands are used in all electronic devices containing a modem. In a future where more and more devices become mobile, and hence connected to the phone networks, we see potential opportunities for SQAT to test these devices as well. SQAT of course requires an interface such as AutoMMI, but apart from that our program needs only minor modifications to test, for example, gaming devices that use phone calls for internet gaming.


Works Cited

Books

Kaner, C., Falk, J. & Nguyen, H.Q. (1999). Testing Computer Software, 2. ed. USA: John Wiley & Sons Inc.
Lewis, W.E. (2000). Software Testing and Continuous Quality Improvement. Auerbach, USA: CRC Press LLC.
Sommerville, I. (1996). Software Engineering, 5. ed. Wokingham, UK: Addison-Wesley.
Van Vliet, H. (1993). Software Engineering: Principles and Practice. Chichester, UK: John Wiley & Sons, Ltd.

Articles

Anderson, S. and Felici, M. (2002). Quantitative Aspects of Requirements Evolution. In Computer Software and Applications Conference, 26th Annual International, pp. 27-32, Edinburgh, United Kingdom. IEEE Computer Society Press.
Dorothy, R. (1993). Testing, Verification and Validation. In Layman's Guide to Software Quality, pp. 7/1-7/2, London, United Kingdom. IEEE Computer Society Press.
Farre, T. (2001). Get It Right the First Time. In InformationWeek, Jan 8, pp. 91-98, ProQuest.
Khalifa, M. and Verner, J.M. (2000). Drivers for Software Development Method Usage. In IEEE Transactions on Engineering Management, vol. 47, pp. 360-369, Hong Kong, China and Philadelphia, USA. IEEE Computer Society Press.
Sony Ericsson. (December 2003). Developers Guidelines AT Commands (internal document).
Sternberg, R.J. and Grigorenko, E.L. (2001). All Testing Is Dynamic Testing. In Issues in Education, vol. 7, pp. 138-172, Greenwich, United Kingdom. Information Age Publishing.

Web Sources

Institute of Electrical and Electronics Engineers. (2004-03-15). www.ieee.org
US Internet Industry Association. (2000-02-09). http://www.house.gov/judiciary/hay30209.htm

Personal Communication

Jönsson, Fredrik. (Spring 2004). Test Engineer at SEMC.
Rexelius, Ulf. (2004-15-12). Test Manager at SEMC.
Sjöstrand, Peter. (2004-05-05). Responsible for AutoMMI at SEMC.
Töörn, Fredrik. (Spring 2004). Department Manager – Product Software Verification, SEMC.


Appendix 1 – Program Manual for SQAT 1.0

Index

Constraints
    File Structure
    Failure When Starting SQAT
    Faults within AutoMMI
Using the Program for Daily Testing
    Select Test Cases
    Start Test Sequence
    Understand Test Result
    Regression Test and Saving a Log File
Maintenance of the Program's Test Structures
    Add a New Phone from Scratch
    Add a New Phone by Modifying an Existing Phone
    Add Test Cases
    Delete Test Cases
Writing New Test Cases
    Building New Script Files in AutoMMI
    Testing the Functionality of the AT Command

Constraints

File Structure

The program has been developed with some constraints, and the file structure is one of these. In order for the program to properly find all its data, all test script files (*.tcf) must be located in a subdirectory next to the test structure (*.tst). The name of this subdirectory must be the same as the name of the test structure, without its file type ending. Settings.ini must be in the same directory as the main program file (SQAT.exe). Please see figure 1.

Failure When Starting SQAT

An unhandled error can occur when starting SQAT if COM port cables are connected to the computer but not to phones. The problem is solved by either connecting all cables to phones or disconnecting the cables from the computer.


Fig. 1. SQAT’s file structure.

Faults within AutoMMI

SQAT has been developed with the aim of requiring as little maintenance work as possible, and hence AutoMMI should not create any log files. Despite this, AutoMMI (ver. 4, released May 4, 2004) creates result files in the same directory as the script files. All these log files have to be removed manually.

Using the Program for Daily Testing

Select Test Cases

This section explains how the program operates and the functionality behind the different buttons. When the program file 'sqat.exe' is executed, a window appears with the possibility to load a specific phone (if you want to load a new phone, please see Maintenance of the Program's Test Structures in this manual). When the Load Phone button is pressed, all test cases for the selected phone are displayed in a tree structure. A double click on a test case brings up a message box displaying the title of the command, its description, whether tester interaction is required, the script file location and the last test result. To the right there is information about the required test environment, e.g. it is vital that the phone language is English in order for the test script to correctly read text labels in the phone.

Start Test Sequence

After you have selected which test cases to use, press Run Test. The test sequence starts, and all tests needing tester interaction are run first, followed by the rest of the tests. It is possible to pause and continue the test; while it is paused, it is also possible to stop the test sequence by pressing either Show Errors or Exit.

Understand Test Result

When all tests have been performed, a summary appears telling you which AT commands have resulted in errors and which have not. Double clicking on a test case presents more information in a message box. There are four possible outcomes from a test:

0. Process Error = something went wrong with AutoMMI. Perhaps the wrong settings for AutoMMI were made, or AutoMMI was terminated by the user.


1. TestOK = no problems whatsoever were found.

2. No AT recognition = the phone does not support this AT command correctly, i.e. it may be supported but without the correct range, or it may be supported but it is not possible to read parameters in the phone with the AT command.

3. AT function error = the AT command is supported but not operating correctly. Reasons for this could be:
> Not possible to change parameters in the phone
> Not possible to read parameters in the phone
> Not possible to correctly navigate in the phone and read parameters via AutoMMI
> The phone is not responding correctly to an AT action command

Regression Test and Saving a Log File

If you want to perform a regression test with the AT commands that failed, you may choose Test Errors Once More. Please note that several general explanations for errors are displayed to the right. There is also the option to save the log file. When doing so, add '.txt' as the file ending, since this makes it easier to open the file in the future. The log contains all available test information for the chosen phone, and it is also possible to load it into SQAT and view the test result in a more user-friendly way.

Maintenance of the Program's Test Structures

There are two ways of adding a new phone: from scratch or by modifying an existing phone.

Add a New Phone from Scratch

When starting from scratch, you simply press Add New Phone and write a file name for the phone, e.g. 'phoneX.tst'. The file ending '.tst' is very important, because it tells SQAT that the file is a test structure. After that you have to put all script files in a subdirectory named phoneX in the SQAT directory.

Add a New Phone by Modifying an Existing Phone

If you know there are only minor differences between your new phone's AT commands and an older phone's, you may save a lot of work. Go to the SQAT directory and copy both the test structure file ('*.tst') and the script file folder for the older phone. Then rename them to suit the new phone, e.g. copy 'phoneX.tst' and rename it to 'phoneY.tst', and copy the script folder 'phoneX' and rename it to 'phoneY'. After this, go to the SQAT program and choose Add New Phone. Open the renamed test structure file, i.e. 'phoneY.tst'. You will now have all the test cases loaded for phoneY just as they were for phoneX.

Add Test Cases

It is possible to add test cases for the loaded phone by choosing Add Test Case. A new dialog appears, and all you have to do is fill in the fields and choose Save.

Delete Test Cases

By choosing Delete Test Cases it is possible to remove test cases from the test structure. All test cases are listed in the order they were added to the test structure, and you may check unwanted scripts and delete them. Please note that the script file itself will not be deleted; only the reference to it in SQAT's test structure is removed.


Writing New Test Cases

Building New Script Files in AutoMMI

To be able to write new test scripts, it is important to have basic knowledge of AutoMMI and the different functions that can be used, e.g. TextReader. When creating new scripts in AutoMMI, it is essential to check whether the AT command is supported by placing an "if statement" first in the script (figure 2). If the command is supposed to be able to read settings, this function should also be verified in the first if statement (figure 2). The reason for this check is that the test process should not be aborted even if a command is not supported or cannot be read properly. Another reason is that the return value can vary, making SQAT able to identify different types of faults. If the AT command is not supported or its settings cannot be read, AutoMMI returns "2" to SQAT instead of breaking the test process.

Fig. 2. An example of an AT script in AutoMMI. The information about name, date, authors and comments at the beginning of the script is not necessary, but it gives the tester a little more information about the script currently running.


Testing the Functionality of the AT Command

There are a number of different actions that the AT commands are able to perform. The testing of the functionality should be performed carefully, and if possible the action should be checked with PressKey and TextReader. When using PressKey, some pauses in the script may be needed. If a setting has a default value, this value should be set back at the end of the script (figure 2). AT actions that cannot be checked in the phone should be tested by using the read function to check whether the setting has been made. If the AT command does not have a read function, CompareReturnOfAtCmd may be used to verify that an OK is received from the phone when performing the AT action. A log file is created when the test scripts run; if the tester wants to use it to search for faults, the LogEntry commands can be enabled by removing the // in front of them. If the command makes it through the first if statement (which means that the return value will not be "2"), there are two possible outcomes: if the check of the AT command's functionality is successful, the value "1" should be returned; if the AT command is unable to perform the desired action, the value "3" should be returned.