Thesis writing


TABLE OF CONTENTS

TABLE OF CONTENTS.............................................................................................................i

CHAPTER 1...............................................................................................................................6

INTRODUCTION......................................................................................................................6

1.1 Introduction..................................................................................................................6

1.2 Background of the study...................................................................................................6

1.2.1 The importance of quality audit performance.......................................................7

1.2.2 Impact of information technology on audit judgments performance....................8

1.2.3 Audit technology adoption by auditors.................................................................9

1.3 Research Problem.......................................................................................................10

1.4 Objective of the Study.....................................................................................................11

1.5 Rationale of the Study.....................................................................................................12

1.6 Contribution of the Study................................................................................................13

1.7 Definition of Terms Used................................................................................................13

1.8 Organization of the Thesis..............................................................................................14

CHAPTER 2.............................................................................................................................15

LITERATURE REVIEW.........................................................................................................15

2.1 Introduction.....................................................................................................................15

2.2 Technology Adoption......................................................................................................15

2.2.1 Standards and regulation of Technology Adoption in Auditing..............................16

2.2.2 Computer-assisted audit tools and techniques and technology adoption.................20

2.2.3. Audit technology adoption......................................................................................21

2.2.4 Audit Software Application in Audit Practices........................................................22


2.3 Individual Performance...................................................................................................24

2.3.1 Individual Performance Definition...........................................................................24

2.3.2 Relationship between Technology Adoption and Individual Performance..............25

2.4 Theoretical background...................................................................................................26

2.4.1 The History of Technology Acceptance Model.......................................................26

2.4.2 The Unified Theory of Acceptance and Use of Technology (UTAUT) Model.......27

CHAPTER 3.............................................................................................................................32

RESEARCH FRAMEWORK AND HYPOTHESES DEVELOPMENT................................32

3.1 Introduction.....................................................................................................................32

3.2 Research framework........................................................................................................32

3.3 Operationalization and measurement of variables..........................................................34

3.3.1 Audit software adoption...........................................................................................34

3.3.2 Determinant factors of audit software adoption......................................................34

3.3.3 Individual performance.............................................................................................34

3.4 Hypotheses Development................................................................................................35

3.4.1 Audit software application and audit performance...................................................35

3.4.2 Determinant factors of audit software use................................................................36

CHAPTER 4.............................................................................................................................44

RESEARCH METHODOLOGY..............................................................................................44

4.1 Introduction.....................................................................................................................44

4.2 Research design...............................................................................................................44

4.3 Study One: Determinants of user intention to use Audit Command Language (ACL) and

impact on audit performance..................................................................................................47

4.3.1 The participants........................................................................................................47

4.3.2 Data collection method.............................................................................................47


4.3.3 The questionnaire and variables development..........................................................47

4.3.4 Pre-test......................................................................................................................47

4.3.5 Validity and reliability..............................................................................................48

4.3.6 Operationalisation of Variables................................................................................49

4.3.7 Control Variables......................................................................................................51

4.3.8 Techniques for Analysing Quantitative Data...........................................................52

4.4 Study Two: Determinant factors and impact of audit software application on audit

performance...........................................................................................................................54

4.4.1 The participants........................................................................................................54

4.4.2 Data collection method.............................................................................................54

4.4.3 The questionnaire and variables development..........................................................54

4.4.4 Pre-test......................................................................................................................58

4.4.5 Validity and reliability..............................................................................................59

4.4.6 Operationalisation of Variables................................................................................60

4.4.7 Control Variables......................................................................................................62

4.4.8 Analysis of Structural Equation Modelling (SEM)..................................................62

4.5 Summary.........................................................................................................................70

CHAPTER 5.............................................................................................................................71

RESULTS AND DISCUSSIONS OF FINDINGS...................................................................71

STUDY ONE: DETERMINANTS OF USER INTENTION TO USE AUDIT COMMAND

LANGUAGE (ACL) AND IMPACT ON AUDIT PERFORMANCE.....................................71

5.1 Introduction.....................................................................................................................71

5.2 Preliminary analysis........................................................................................................71

5.2.1 Normality analysis....................................................................................................71

5.2.2 Reliability analysis...................................................................................................72

5.2.3 Factor analysis..........................................................................................................73


5.2.4 Descriptive Statistics of Participants........................................................................73

5.2.5 Correlation analysis..................................................................................................74

5.3 Hypotheses Testing.........................................................................................................75

5.4 Discussion of Findings....................................................................................................77

5.5 Summary.........................................................................................................................78

CHAPTER 6.............................................................................................................................79

RESULTS AND DISCUSSIONS OF FINDINGS...............................................................79

STUDY TWO: DETERMINANT FACTORS AND IMPACT OF AUDIT SOFTWARE

APPLICATION ON AUDIT PERFORMANCE..................................................................79

6.1 Chapter Overview...........................................................................................................79

6.2 Descriptive Statistics of Participants...............................................................................79

6.2.1 Response rate...........................................................................................................79

6.2.2 Demographic Profile.................................................................................................

6.2.3 Descriptive Statistics of Constructs...........................................................................

6.3 Measurement Model........................................................................................................85

6.3.1 Development of Measurement Model........................................................................

6.3.2 Congeneric Measurement Model..............................................................................87

6.4 Structural Model..............................................................................................................88

6.4.1 Assessment of the Structural Model.......................................................................106

6.5 Hypotheses Testing.......................................................................................................107

6.6 Discussions of Findings................................................................................................107

6.7 Summary.......................................................................................................................107

CHAPTER 7...........................................................................................................................109

CONCLUSIONS, IMPLICATIONS, LIMITATION AND FUTURE RESEARCH.............109


7.1 Chapter Overview.........................................................................................................109

7.2 Discussions of Findings................................................................................................109

7.3 Implication of the Findings...........................................................................................109

7.3.1 Theoretical Implications.........................................................................................109

7.3.2 Practical Implications.............................................................................................109

7.4 Limitations of the Study................................................................................................109

7.5 Suggestions for Future Research...................................................................................109

7.6 Summary.......................................................................................................................109

REFERENCES.......................................................................................................................110


CHAPTER 1

INTRODUCTION

1.1 Introduction

This thesis studies the level of audit software use in audit practice and its impact on individual auditor performance. It also examines the determinant factors of audit software adoption as tested in the Unified Theory of Acceptance and Use of Technology (UTAUT) model. This chapter provides an overview of the thesis and its structure. The first section presents the background to the research, followed by the research problem, research objectives and questions. The rationale for carrying out this study is then explained, together with a brief account of its theoretical and practical contributions. The chapter concludes with an outline of the organization of the thesis.

1.2 Background of the study

The emergence of information technology has had a tremendous impact on many areas of human activity, including engineering, medicine and education, as well as accounting and auditing practice. Information technology (IT), or electronic data processing, has changed the way many organizations conduct business. Indeed, IT is considered one of the major technological advances in business this decade. IT systems can perform many tasks, and IT providers continuously seek new ways to use computers to promote efficiency and aid decision making. Since many businesses now use computers to process their transactions, the auditing profession faces the need to provide audit services that can deal with the IT environment.

While the impact of information technology (IT) on business has grown exponentially, few studies examine the use and perceived importance of IT, particularly outside of the largest audit firms (Fischer 1996; Banker et al. 2002). This issue is important because IT has dramatically changed the audit process. Standards now encourage auditors and audit firms to adopt IT and use IT specialists when necessary (American Institute of Certified Public Accountants [AICPA] 2001, 2002b, 2005, 2006; Public Company Accounting Oversight Board [PCAOB] 2004b). However, auditing researchers and practitioners have little guidance on what IT has been, or should be, adopted (Janvrin, Bierstaker and Lowe, 2007).

Although studies have suggested that the adoption of IT in audit practice would increase auditors' productivity (Zhao et al. 2004), the adoption of audit technology by auditors is still low (Liang et al., 2001; Debreceny et al., 2005; Curtis and Payne, 2008). Apart from the perception that IT in audit practice, particularly audit software, is costly and complicated to learn and use, another possible reason for the lack of usage is unconvincing evidence of the merits of using audit technology to enhance audit performance (Ismail and Zainol Abidin, 2009). Usability alone is not sufficient: large potential gains in effectiveness and performance will not be realized if users are not willing to use information systems in general (Davis, 1993) and audit software in particular. Adoption is therefore crucial.

The usage of audit software can be increased provided auditors are convinced of its positive impact on audit performance. Based on attitude-behaviour theory, Doll and Torkzadeh (1998) describe a 'system to value chain' of system success constructs, running from beliefs, to attitudes, to behaviour, to the social and economic impacts of information technology. Torkzadeh and Doll (1999) argued that impact is a pivotal concept that embodies downstream effects; it is difficult to imagine how information technology can be assessed without evaluating the impact it may have on an individual's work. Thus, in audit practice, the impact of audit technology adoption can be assessed through its impact on the auditor's individual performance.

1.2.1 The importance of quality audit performance.

Many accounting firms all over the world have faced various forms of litigation. At the same time, the threat of litigation has forced audit firms to maintain and improve the quality of their audit work (Manson et al., 2001). There is evidence that the use of audit software can give rise to higher-quality audits, and its use in the audit process has greatly increased in the last few years. This is true in particular of large audit firms, which are motivated by the desire to improve their efficiency to compete for clients. Manson et al. (2001) pointed out that audit automation has been used in most areas of the audit process, more extensively by the Big Four audit firms than by others.

Therefore, arguably, accounting firms in Malaysia should also strive for better audit quality to be on a par with the global accounting giants. This is especially important given that the service sector may prove to be the main pillar of the Malaysian economy after its natural resources run out. For an audit firm to survive in this competitive era, the highest quality of audit judgment must be maintained. However, the audit quality of audit firms has come under severe criticism this decade owing to various financial crises and management frauds. The Enron scandal of 2001 further alarmed regulators and the public in many countries about audit quality, including various parties in Malaysia. Clearly, audit firms must make substantial efforts to restore public confidence in auditors' integrity and ability, and thereby uphold the reputation of the profession. One way to increase public confidence in auditors is to provide quality audit judgments consistently; speed and accuracy of audit judgment would certainly help build such confidence.

Although many audit firms are introducing audit technology into accounting processes, not many are actually using the software available, and even those who are using it are not using the higher-end software. There are many reasons for the reluctance to incorporate audit technology into audit processes, such as negative perceptions and unconvincing evidence of its benefits. Ismail and Zainol Abidin (2009) investigated the level of information technology knowledge and the perceived importance of information technology in the specific context of audit work among auditors in Malaysia. Their study suggested that information technology knowledge among auditors is still at a low level.

1.2.2 Impact of information technology on audit performance.

Within the information technology literature, many studies have examined the impact of information technology on firms' performance in different industries such as manufacturing (Barua et al. 1995), banking (Parson et al. 1993), insurance (Francalanci and Galal, 1998), healthcare (Menon et al. 2000), and retailing (Reardon et al. 1996). However, the impact of information technology on audit performance in accounting practice remains under-researched. To date, only one study has examined the impact of information technology on firms' productivity in producing quality audits (Banker, Chang and Kao, 2002). Other studies have examined the factors influencing the use of information technology (Janvrin, Bierstaker and Lowe, 2009; Curtis and Payne, 2008; Merhout, 2007) and perceptions and beliefs about using the technology (Bhattacherjee 2001, 2004; Venkatesh and Morris 2000; Davis et al. 1989).

Although there is a general perception that information technology investments by public accounting firms can improve productivity in terms of consistent audit quality (Lee and Arentzoff, 1991), the impact of information technology on auditors' performance is not directly observable. To date, there are still inadequate data available to allow an in-depth examination of the processes involved when auditors use audit technology to perform audit procedures (Zhang and Dhaliwal, 2009). Zhang and Dhaliwal pointed out that more data are needed to examine the influence of critical factors that may mediate or moderate the performance value gained by auditors when adopting audit technology.

1.2.3 Audit technology adoption by auditors

In audit situations where the use of technology is optional, the implementation decision is typically made through joint discussion between the audit manager and the in-charge auditor (Houston, 1999). Auditing technology studies have primarily examined how the use of technology affects cognitive processing and the resulting decisions auditors make.

Today, the extent to which auditors have adopted information technology, in particular audit software, in their audit process remains an empirical question (Arnold and Sutton 1998; Curtis and Payne 2008; Janvrin et al. 2009). Audit software, an essential component of audit technology, refers to computer tools that allow the extraction and analysis of data using computer applications (Braun and Davis 2003). It is a type of computer program that performs a wide range of audit management functions.

Although many studies have suggested that effective usage of audit software would permit auditors to increase their productivity in achieving quality audit judgments (Zhao et al. 2004), the incorporation of audit technology by auditors is still low (Liang et al., 2001; Debreceny et al., 2005; Shaikh, 2005; Curtis and Payne, 2008). Apart from the perception that audit software is costly and complicated to learn and use, another possible reason for the lack of usage is unconvincing evidence of the merits of using audit technology to enhance audit performance (Ismail and Zainol Abidin, 2009). However, the usage of audit software can be increased if auditors are convinced of its positive impact on audit judgment performance.

This study seeks to identify the relationship between the adoption of audit software and individual audit performance. In other words, it seeks to establish whether individual audit performance increases with the level of audit software use among auditors. The study also examines what influences auditors to adopt audit technology in their practice. The findings will hopefully clarify the factors that auditors normally consider before they become comfortable with audit technology.

1.3 Research Problem

The relationship between investment in information technology (IT) and its effect on organizational performance continues to interest academics and practitioners. Most research on audit technology success, or on its impact on business functions such as auditing, has focused on the firm level. There is still very limited empirical evidence investigating audit technology success from an individual-level dimension, such as user adoption of audit technology and its impact on audit performance. Such investigation is required because uncertainty, resistance and dissatisfaction can occur among auditors owing to the new working style or culture of an audit technology environment. Uncertainty, resistance and dissatisfaction would eventually lead to the failure of audit technology implementation in audit practice, and ultimately affect audit performance. Measuring audit technology adoption in terms of the level of use by auditors gives management more accurate feedback about users' acceptance of audit technology.

"Whether Information Technology (IT) use leads to better individual performance has always been an intriguing topic in the IS field. However, not many studies examined the Information Technology use/individual performance relationship given the significance of the topic. Researchers and practitioners simply assumed that more IT use leads to better individual performance. A review of the literature presented a different, rather conflicting, picture than the conventional wisdom. The current study thus aims at investigating the IT use/individual performance relationship by focusing on the measurement issue, i.e. how different richness levels of measurement of IT use and individual performance affect the use/individual performance relationship." (cf. Shen, 2009)

Venkatesh et al. (2003) stated that one of the most important directions for future research is to tie this mature stream of research into other established streams of work. They further stated that little to no research has addressed the link between user acceptance and individual or organizational usage outcomes. Thus, while it is often assumed that usage will result in positive outcomes, this remains to be tested (Venkatesh et al., 2003, p. 470). Straub (2009) pointed out that the TAM and

The extensive use of IT in the audit process, especially among big audit firms, has been motivated by the desire to improve efficiency to compete for clients (Manson et al., 2001). Audit firms have justified their large investments in audit automation by the need to improve the quality of audit work and reduce audit costs. In other words, audit automation can be viewed as simply another technology that audit firms employ to maintain their competitiveness and profitability. Most studies on technology adoption have revealed that much of what is termed audit automation consists merely of word-processing and spreadsheet applications. There is little evidence on the manner in which external auditors employ audit software in pursuit of their audit objectives. Although studies have been carried out on the use of Computer-Assisted Audit Tools and Techniques (CAATTs) and Generalised Audit Software (GAS), two terms often associated with audit software, their focus was not on the use of audit software by external auditors. For example, Wehner and Jessup (2005), Debreceny, Lee, Neo and Toh (2005) and Braun and Davis (2003) looked into the adoption of GAS among internal auditors. This study is therefore carried out to fill the gap that exists in the literature.

1.4 Objective of the Study

The main objective of this study is to empirically test the impact of audit software application in practice on audit performance. Audit software application is measured based on the auditor's normal practice: planning, testing and report writing. Audit performance is measured based on respondents' perceptions of the impact of audit software on the quality, speed, productivity and effectiveness of their work. This study also empirically tests the factors contributing to the application of audit software in practice among auditors in Malaysia. The factors that drive audit software application are classified under three characteristics: individual, organizational and external.
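Perception constructs of this kind are usually built from multi-item Likert scales whose internal consistency is checked (commonly with Cronbach's alpha) before the items are averaged into a composite score. The following is a minimal sketch of that computation; the item set and response values are hypothetical illustrations, not the thesis instrument or data:

```python
import statistics

# Hypothetical 5-point Likert responses: one row per respondent,
# one column per item of an "audit performance impact" construct
# (quality, speed, productivity, effectiveness). Toy data only.
responses = [
    [4, 5, 4, 4],
    [3, 3, 4, 3],
    [5, 5, 5, 4],
    [2, 3, 2, 2],
    [4, 4, 5, 4],
    [3, 2, 3, 3],
]

def cronbach_alpha(rows):
    """Internal-consistency reliability of a multi-item scale."""
    k = len(rows[0])
    item_vars = [statistics.pvariance(col) for col in zip(*rows)]
    total_var = statistics.pvariance([sum(r) for r in rows])
    return k / (k - 1) * (1 - sum(item_vars) / total_var)

alpha = cronbach_alpha(responses)
# Composite construct score per respondent (mean of the items):
scores = [statistics.fmean(r) for r in responses]
# Values above the conventional 0.7 threshold suggest the items
# can reasonably be averaged into one construct score.
print(f"Cronbach's alpha = {alpha:.2f}")
```

Each composite score would then serve as one observed variable in the subsequent analyses.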

More specifically, the research objectives of the present study are:

1. To examine the nature of the relationship between application of audit software and individual audit performance.

2. To investigate the extent to which individual factors (performance expectancy, effort expectancy), organizational factors (organizational support, facilitating conditions, technological and infrastructure support) and external factors (social influence and client's technology) contribute to the application of audit software in practice among auditors.

3. To determine whether training moderates the relationship between the individual factors (performance expectancy and effort expectancy) and audit performance.

4. To determine whether experience moderates the relationship between the individual factors (performance expectancy and effort expectancy) and audit performance.

5. To investigate whether performance expectancy, effort expectancy, social influence and facilitating conditions influence the behavioral intention to adopt audit software.

6. To determine whether specific knowledge and experience moderate the relationship of performance expectancy and effort expectancy with the behavioral intention to adopt audit software.

Research objectives 1 to 4 are addressed by Study Two, while objectives 5 and 6 are addressed by Study One.
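Objectives 3, 4 and 6 are moderation hypotheses. Before formal modelling (the thesis uses regression and structural equation modelling for this), a moderator can be probed by comparing simple slopes across subgroups of the moderator. The sketch below does this on simulated data; every variable name, coefficient and number is a hypothetical illustration, not the thesis data:

```python
import random
import statistics

random.seed(42)
n = 2000

# Simulated standardized scores (illustrative only):
pe = [random.gauss(0, 1) for _ in range(n)]   # performance expectancy
tr = [random.gauss(0, 1) for _ in range(n)]   # training (moderator)
# Assumed data-generating process in which training strengthens the
# performance-expectancy -> audit-performance link (0.3 interaction):
ap = [0.4 * x + 0.3 * x * t + random.gauss(0, 0.5)
      for x, t in zip(pe, tr)]

def ols_slope(xs, ys):
    """Ordinary least-squares slope of ys regressed on xs."""
    mx, my = statistics.fmean(xs), statistics.fmean(ys)
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    return num / sum((x - mx) ** 2 for x in xs)

# Split the sample at the moderator's median and compare slopes:
med = statistics.median(tr)
low = [(x, y) for x, y, t in zip(pe, ap, tr) if t < med]
high = [(x, y) for x, y, t in zip(pe, ap, tr) if t >= med]
s_low = ols_slope(*zip(*low))
s_high = ols_slope(*zip(*high))
print(f"slope, low-training group:  {s_low:.2f}")
print(f"slope, high-training group: {s_high:.2f}")
# A markedly steeper slope in the high-training group is consistent
# with moderation; a formal test adds an interaction term pe * tr
# to a single regression and examines its coefficient.
```

The subgroup comparison is only a diagnostic; the interaction-term regression (or a multi-group SEM) gives the hypothesis test itself.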

1.5 Rationale of the Study

Based on the development in audit practice and research, this study aims to promote audit

quality through the adoption of audit technology specifically audit software in audit practices.

The relationship between investment in information technology (IT) and its effect on

organizational performance continues to interest academics and practitioners. In many cases,

due to the nature of the research design employed, this stream of research has been unable to

identify the impact of individual technologies on organizational performance (Devaraj and

Kohli, 2003). As individual performance plays a great role in organizational performance, this

study aim to investigate the impact of the use of audit software in audit practices among

auditors to the individual audit performance.

Many audit tasks, including workpaper documentation and review, are increasingly performed in electronic environments (e.g., Croft 1992; Flagg, Glover, and Smith 1992; Knaster 1998; Rothman 1997; Vezina 1997a, 1997b). Although many believe that automation eliminates human calculation errors, saves time and money, minimizes paper documentation, and increases accuracy (Rothman 1997), there is little empirical data to support these claims. Indeed, it has been observed that "although the use of IT to strengthen the audit function is widespread, its impact on performance has never been determined" (Vezina 1997a, p. 37). Most disturbing is that performance may decline in electronic environments (Galletta, Hartzel, Johnson, Joseph, and Rustagi 1997; cf. Bible, Graham, and Rosman 2005).

While there is a developing literature on audit technology and the possible benefits of its use by auditors (e.g., Liang et al. 2001; Shaikh 2005), there is little research investigating the extent of usage among auditors in practice and the factors associated with its use (Curtis, Jenkin, Bedard and Deis 2009). There are also few studies presenting empirical tests of its efficiency and effectiveness or, more generally, its impact on audit performance. Among the few is the study by Janvrin et al. (2008), who explored audit IT use and its perceived importance across several audit applications and across a diverse group of audit firms. Their study reported that some applications are used extensively and some are not. It also reported that auditors vary in their opinions about the importance of several audit applications, even those not used extensively. However, their study did not address the impact of audit application use on individual audit performance. Thus, to fill the gap, this study is carried out to examine auditors' perceptions of the impact of audit technology use on their individual audit performance.

Previous studies have shown that the use of audit technology among auditors in general is somewhat low (Quoted?). Many reasons contribute to this scenario. Perhaps individual auditors are uncomfortable with certain computer-related procedures because of their own IT knowledge and experience limitations. It may also be caused by insufficient IT training and support from the firm.

1.6 Contribution of the Study

This research contributes distinctly to the fields of accounting and information systems by exploring the adoption of audit technology and its impact on individual performance. The evolution of information systems and the popularity of technology acceptance theories, particularly TAM and UTAUT, have made research in this area a target for many information systems as well as accounting researchers. Most research involving audit technology has focused on the factors that contribute to the adoption of technology.

1.7 Definition of Terms Used

Various terms are used in this study. For the ease of the reader's understanding, the following subsections define some terms of interest in this study. The defined terms are auditors, audit software, audit software application, and audit performance.


1.7.1 Auditors

For the purpose of this study, auditors refer to "external" auditors, also known as "financial statement" auditors. External auditors are individuals who work for an audit firm that is completely independent of the company they are auditing (Leong, Coram, Cosserat & Gill, 2001).

1.7.2 Audit software

1.8 Organization of the Thesis

The thesis is structured as follows. Chapter Two reviews relevant literature related to technology adoption in audit practices. Specifically, it presents a comprehensive critical review of the evolution of audit technology, the development of auditing standards pertaining to the adoption of audit technology in audit practices, and the impact of audit technology adoption on individual performance. The chapter then examines existing literature on technology adoption theories, particularly the history of the Technology Acceptance Model (TAM) and the Unified Theory of Acceptance and Use of Technology (UTAUT) model.

Chapter Three presents the research framework and hypotheses development. This chapter

discusses the components of the research variables, the operationalization as well as the

measurements of the variables, and lastly the proposed hypotheses.

Chapter Four highlights the research methodology adopted in this study. The chapter starts by discussing the rationale for adopting a quantitative survey as the method of the present study. The chapter proceeds with a discussion of the research design for Study One, followed by a discussion of Study Two. Basically, the discussion concentrates on the participants, the data collection method, questionnaire and variable development, pre-testing, and the operationalization of variables in the questionnaire. Validity and reliability measurement are also discussed as part of the instrument development procedures. The chapter finally discusses the techniques adopted for data analysis. The Analysis of Variance (ANOVA) technique using SPSS is used for data analysis in Study One, and Structural Equation Modelling (SEM) using AMOS is used for data analysis in Study Two.

Chapter Five details the data analysis and results of Study One. It consists of three main sections: the preliminary analysis, hypotheses testing and discussion of findings. The preliminary analysis reports results related to the descriptive statistics of the sample, normality and reliability analysis, and factor analysis. The correlation analysis of the independent and dependent variables is also reported in this section. The results of the hypotheses testing using multiple regression analysis are reported next.
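To illustrate the moderated regression approach used for such hypothesis tests, the sketch below estimates a model in which training moderates the performance expectancy and audit performance relationship via a product (interaction) term. This is a minimal illustration on simulated, hypothetical data (the study itself uses SPSS); the variable names and coefficients are invented.

```python
import numpy as np

# Minimal sketch of a moderated multiple regression on simulated data.
# Moderation is tested by including the product term PE * TRAIN:
#   performance = b0 + b1*PE + b2*TRAIN + b3*(PE*TRAIN) + error
rng = np.random.default_rng(42)
n = 500
pe = rng.normal(4.0, 1.0, n)                 # performance expectancy score
train = rng.integers(0, 2, n).astype(float)  # 1 = received software training
b_true = np.array([1.0, 0.5, 0.3, 0.4])      # coefficients used to simulate
y = (b_true[0] + b_true[1] * pe + b_true[2] * train
     + b_true[3] * pe * train + rng.normal(0.0, 0.5, n))

# Design matrix with intercept, main effects, and the interaction term,
# fitted by ordinary least squares
X = np.column_stack([np.ones(n), pe, train, pe * train])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
print(beta)  # a clearly non-zero estimate of b3 suggests moderation
```

A formal test would examine the significance of the interaction coefficient; the point here is only the structure of the design matrix with the product term.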

Chapter Six details the data analysis and results of Study Two. It consists of five main sections: descriptive statistics, measurement model, structural model, hypotheses testing and discussion of findings.

CHAPTER 2

LITERATURE REVIEW

2.1 Introduction

This section discusses literature pertaining to the scope of the present research. Section 2.2 discusses technology adoption and auditing; specifically, the definition, history and evolution of audit software, modules and audit applications. Next, Section 2.3 discusses individual performance as a consequence of audit technology adoption. Section 2.4 presents a review of the determinant factors of audit technology adoption as per the UTAUT model. This section also discusses the ….. and experience that play a role as moderator variables.

2.2 Technology Adoption

The technology adoption domain is a well-researched area in information systems. Research in this area has explored topics such as the adoption of mobile banking (Zhou et al., 2010), internet banking (Foon and Fah 2011; AbuShanab and Pearson 2007; Tan and Teo 2000), the use of websites (Schaik 2009), electronic commerce (Grandon and Mykytyn 2004), software applications (Davis et al. 1989; Mathieson 1991), e-mail usage (Szajna 1996), telemedicine applications (Chau and Hu 2001) and computer usage (Compeau and Higgins 1995).

In terms of definition, technology adoption is defined as the decision to accept, or invest in, a technology (Dasgupta, Granger and McGarry 2002). ……….

Technology adoption has been studied at two levels: the organizational level and the individual level. Oliveira and Martins (2011) reviewed theories for adoption models at the firm level used in the information systems (IS) literature and discussed two prominent models: diffusion of innovations (DOI) and the technology, organization and environment (TOE) framework. Their study was motivated by the fact that there are not many literature reviews comparing IT adoption models, especially at the firm level. Since most studies on IT adoption at the firm level are derived from these two theories (Chong et al. 2009), such a review of the literature on these models aims to fill the gap.

At the individual level, the emphasis of the analysis is on the acceptance of the technology. The Technology Acceptance Model (TAM) proposed by Davis (1989) has been used to explain the acceptance of information technology. TAM states that an individual's adoption of information technology is dependent on their perceived ease of use and perceived usefulness of the technology. This model has been used and tested, and at times modified, to study the adoption of a number of different technologies in the past decade.

2.2.1 Standards and regulation of Technology Adoption in Auditing

As many businesses at present use computers to process their transactions, the auditing profession has been faced with a need to provide increased guidance for audits conducted in an IT environment. Various authoritative bodies, such as the American Institute of Certified Public Accountants (AICPA), the International Federation of Accountants (IFAC) and the Information Systems Audit and Control Association (ISACA), have issued standards in this area to be observed by their members in performing an IT audit (Yang and Guan, 2004). The following subsections explain the development of the standards relevant to technology adoption in auditing.

2.2.1.1 American Institute of Certified Public Accountants (AICPA) Standards.

The Auditing Standards Board (ASB) is the senior technical body of the American Institute of Certified Public Accountants (AICPA) designated to issue pronouncements on auditing matters. The ASB was formed in October 1978 and is responsible for the development and promulgation of auditing standards and procedures known as Statements on Auditing Standards (SAS) to be observed by members of the AICPA. The AICPA code of professional conduct requires an AICPA member who performs an audit (the auditor) to comply with the standards promulgated by the ASB. Auditors are expected to have sufficient knowledge of the SASs to identify those that are applicable to them and should be prepared to justify departures from the SASs.


Over the years the AICPA has issued a number of SASs related directly to IT and audit, even before the ASB was formed. SAS No. 3, "The effects of EDP on the auditor's study and evaluation of internal control" (AICPA, 1974), was issued in conjunction with the need for a framework concerning auditing procedures in examining the financial statements of entities that use IT in accounting applications. This was the first bold step in defining an auditing standard for IT systems (Jancura and Lilly, 1977, as quoted in Yang and Guan, 2004). The statement provided guidance for audits conducted in IT environments and required auditors to evaluate the computer environment during their audits.

According to SAS No. 3, the objectives of accounting control are the same in both a manual

system and an IT system. However, the organization and procedures required to accomplish

these objectives may be influenced by the method of data processing used. Therefore, the

procedures used by an auditor in the evaluation of accounting control to determine the nature,

timing and extent of audit procedures to be applied in the examination of financial statements

may be affected.

SAS No. 3 has been superseded by SAS No. 48, “The effects of computer processing on the

examination of financial statements”. It was effective for the examination of financial

statements for periods beginning after 31 August 1984. It also amended SAS No. 22 on

“Planning and supervision” (AICPA, 1978a), SAS No. 23 on “Analytical review procedures”

(AICPA, 1978b), and SAS No. 31 on “Evidential matter” (AICPA, 1980) to include

additional guidance for audits of financial statements in IT environments.

The ASB was of the opinion that auditors should consider the method of data processing used

by the client, including the use of computers, in essentially the same way and at the same time

that they consider other significant factors that could affect the audit. The use of IT could

affect the nature, timing and extent of audit procedures, so the auditor should consider these

effects throughout the audit. Therefore, the ASB felt that the guidance concerning the effect

of computer processing on audit of financial statements should be integrated with existing

guidance rather than presented separately. This is the primary reason why SAS No. 48

amended so many other existing statements.

Before amendment, SAS No. 22 on “Planning and supervision” requires the work in an audit

engagement to be adequately planned, and assistants, if any, to be properly supervised. SAS

No. 22 also provides guidance for the auditor making an examination in accordance with

GAAS. The engagement must be adequately planned and supervised for the auditor to achieve


the objective of the examination, which is to gather the appropriate amount of sufficient

competent evidential matter to form the basis for an audit opinion on the financial statement.

SAS No. 48 came into place to amend SAS No. 22 by adding further planning considerations to

those already required. It requires the auditor to consider the methods (manual or

computerized) used by the client in processing significant accounting information.

SAS No. 23, which covers analytical review procedures, was superseded by SAS No. 56, "Analytical procedures" (AICPA, 1988b), issued in April 1988. SAS No. 56 provides

guidance on the use of analytical procedures and requires the use of analytical procedures in

planning and overall review of all audits. When the client has an IT system, the auditor must

consider a particular factor in determining the usefulness of such procedures. This factor

relates to the increased availability of data prepared for management’s use when computer

processing is used.

SAS No. 31 on “Evidential matters” states that once the auditor completes the study and

evaluation of internal control, substantive testing must be performed to obtain sufficient,

competent evidential matter on which the auditor can base his/her opinion. SAS No. 48

amended SAS No. 31 and states that audit evidence is not affected by computer processing,

but the methods used to gather audit evidence may be affected. In an IT environment, the

auditor may have to use computer-assisted audit techniques (CAAT) such as computer-aided

tracing and mapping, audit software, and embedded audit data collection to gather evidence.

The auditor will have to rely more heavily on CAAT methods for inspection and analytical

review procedures.

Later on, the AICPA issued a professional pronouncement on the implications of electronic evidence, SAS No. 80, Amendment to Statement on Auditing Standards No. 31, Evidential Matter. This amendment suggests that, in a system that predominantly consists of electronic evidence, it might not be practical or possible to reduce detection risk to an acceptable level by performing only substantive tests for one or more financial statement assertions (Helms and Fred, 2000). SAS No. 80 further notes that the auditor may find it difficult or impossible

to access certain information for inspection, inquiry, or confirmation without using IT. Hence

the auditor might use generalised audit software (GAS) or other computer-assisted audit

techniques to test system controls or access information.

SAS No. 94, "The effect of IT on the auditor's consideration of internal control in a financial statement audit" (AICPA 2001), was released and came into effect for audits of financial statements beginning on or after 1 June 2001. SAS No. 94 provides guidance to auditors about the effect of IT on internal control, and on auditors' understanding of internal control and assessment of control risk. It indicates that, in computer-intensive environments, auditors should assign one or more computer assurance specialists (CAS) to the engagement in order to appropriately determine the effect of IT on the audit, gain an understanding of controls, and design and perform tests of IT controls. SAS No. 94 also requires that an auditor planning to perform only substantive tests on an engagement must be satisfied that such an approach will be effective (Curtis, Jenkin, Bedard, & Deis, 2009).

The AICPA, in addition to issuing several standards for IT-related auditing, also publishes a Top 10 Technologies list annually to build member awareness of important and emerging technologies that will contribute to the profession. Auditor knowledge levels are clearly specified in International Standard on Auditing (ISA) 401, paragraph 4 (IFAC, 1999), which states that auditors should have sufficient knowledge of the computer information system (CIS) to plan, direct, supervise and review the work performed (Ismail and Abidin, 2009).

2.2.1.2 International Federation of Accountants (IFAC)

2.2.1.3 Information Systems Audit and Control Association (ISACA)

ISACA was formed in 1969 to meet the unique, diverse and high-technology needs of the burgeoning information technology field. In an industry in which progress is measured in nanoseconds, ISACA has moved with agility and speed to bridge the needs of the international business community and the information technology controls profession.

2.2.1.4 Public Company Accounting Oversight Board (PCAOB)

Public Company Accounting Oversight Board (PCAOB) ...see curtis, bedard for training.

The Public Oversight Board (2000) pointed out that auditors' professional capabilities in an accounting information system (AIS) and the evaluation ability of a computer assurance specialist (CAS) are the main factors in auditing quality (Lin and Wang, 2011). Brazel and Agoglia (2004) examined the impact of auditors' professional capability on CAS and AIS auditing systems. The findings suggested that auditors with high AIS professionalism would formulate higher standards in the risk assessment of computerized auditing environments, while auditors of high CAS capability would be able to provide more accurate auditing reports.


2.2.2 Computer-assisted audit tools and techniques and technology adoption

CAATTs are computer tools and techniques that an auditor uses as part of the audit procedures to process data of audit significance contained in an entity's information systems (Singleton, 2003). Lin and Wang (2011) further referred to CAATTs as software that helps auditors to conduct control and confirmation tests, analysis and verification of financial statement data, and continuous monitoring and auditing. They can be widely applied in the analysis of financial data and in error inspections to identify fraud or false statements. Braun and Davis (2003) defined CAATTs more broadly to include any use of technology to assist in the completion of an audit. This definition would include automated working papers and traditional word processing applications. More importantly, CAATTs are defined as computer-assisted tools that permit auditors to increase their productivity, as well as that of the audit function (Zhao, Yen and Cheng, 2004).

The advantage of CAATT systems is that they automate auditing procedures over the whole population rather than a sample. Thus, they can enable auditors to enhance the validity of the data and results, and also to expand the scope of the audit to higher-risk areas (Lin and Wang, 2011).

The failure of CAATTs to meet users' expectations could be due to several factors. First, GAS and CAATTs lack a common interface with IT systems, covering file formats, operating systems, and application programs (Shaikh, 2005). He started with Interactive Data Extraction and Analysis (IDEA), one of the most popular GAS packages, which is able to extract several file formats, such as ASCII, dBASE III, and others, with a common interface. He found that the problem is that auditors will have to design specialized audit software for each Electronic Data Processing (EDP) system if the EDP system uses proprietary file formats or different operating systems (Liang et al., 2001).

Second, other concurrent CAATTs often require special audit software modules to be embedded at the EDP system design stage (Pathak, 2003). Therefore, the early involvement of auditors at the time when the system is under development becomes necessary (Liang et al., 2001; Tongren, 1999). Furthermore, any changes in auditing policy may also require major modifications, not only to individual audit software modules, but also to entire EDP systems (Wells, 2001; Liang et al., 2001). Thus, in summary, applying these advanced CAATTs is usually very costly, even when it is possible.


Third, as auditees' EDP systems become more complex, it is essential for auditors to audit through computers. The paper stream into and out of computers disappears and is replaced by electronic data streams, which can only be analyzed in an automated fashion. Most CAATTs currently in use cannot directly access an auditee's live data. Auditors usually gather the historical data file from the auditee's personnel. This situation creates the possibility that auditors are given manipulated or even fraudulent data.

From another perspective, CAATTs can be portrayed as the tools and techniques used to examine directly the internal logic of an application, as well as the tools and techniques used to draw inferences indirectly about an application's logic by examining the data processed by the application (Hall, 2000). Of the five CAATTs that have been advanced in the popular audit literature, three (test data, integrated test facility, and parallel simulation) directly examine the internal logic of the application. The remaining two, the embedded audit module and generalized audit software, examine the application's logic indirectly.
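To make the parallel simulation technique concrete, the hypothetical sketch below has the auditor independently re-implement a fragment of the application's logic (an invented monthly service-charge rule) and compare the simulated results with the client system's output for the same input data; any disagreement is flagged as an exception. All record fields and the fee rule are invented for illustration.

```python
# Hypothetical parallel simulation sketch: re-compute a simple fee rule
# independently of the client's system and compare outputs record by record.

def auditor_fee(balance: float) -> float:
    """Auditor's independent re-computation: 1% fee, waived below 1,000."""
    return 0.0 if balance < 1_000 else round(balance * 0.01, 2)

client_output = [  # records as produced by the client's EDP system
    {"acct": "A-1", "balance": 500.0, "fee": 0.0},
    {"acct": "A-2", "balance": 2_000.0, "fee": 20.0},
    {"acct": "A-3", "balance": 3_000.0, "fee": 15.0},  # deviates from the rule
]

# Accounts whose recorded fee disagrees with the auditor's simulation
exceptions = [r["acct"] for r in client_output
              if auditor_fee(r["balance"]) != r["fee"]]
print(exceptions)  # -> ['A-3']
```

The exceptions list is then investigated further; the technique infers the correctness of the application's internal logic from agreement between the two computations.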

2.2.3. Audit technology adoption

Different authors have used different terms to refer to audit technology adoption in auditing practices. Dowling and Leech (2007) and Dowling (2009) used the terms audit support system and decision aid to reflect the adoption of audit technology in auditing. Audit support systems are the key technology applications deployed by audit firms to facilitate efficient and effective audits (Dowling and Leech, 2007). They consider audit support systems to include electronic workpapers, extensive help files, accounting and auditing standards, relevant legislation, and decision aids. Dowling (2009) revealed that audit support systems are the primary technology application audit firms deploy to control, facilitate, and support audit work. His study investigates how several auditor, audit team, and audit firm factors influence whether auditors use audit support systems the way audit firms intend them to be used.

Manson et al. (2001) used the term audit automation to reflect IT use in the audit process.

They claimed that the increased use of IT is part of strategies being adopted by the big audit

firms to cope with a more competitive environment. Earlier, a survey by Manson et al. (1997)

found that audit automation was used in most aspects of the audit process, more extensively

by the Big Five audit firms than others, although much of what is termed audit automation

consists merely of word-processing and spreadsheet applications.


Generalised Audit Software (GAS) is the most common of the computer-assisted audit tools and techniques (CAATTs) used in recent years (Braun and Davis, 2003; Singleton, 2006). GAS is a software package used by auditors to analyse and audit either live or extracted data from a wide range of applications (Debreceny et al., 2005). GAS allows auditors to undertake data extraction, querying, manipulation, summarization, and analytical tasks (Debreceny et al., 2005). The most popular GAS packages are Audit Command Language (ACL) and Interactive Data Extraction and Analysis (IDEA) (Braun and Davis, 2003), as well as Panaudit Plus (Debreceny et al., 2005). These packages contain general modules to read existing computer files and perform sophisticated manipulations of the data contained in the files to accomplish audit tasks. Other GAS products include CA's Easytrieve, Statistical Analysis System (SAS), and the Statistical Package for the Social Sciences (SPSS) (Singleton, 2006).

The use of GAS by internal auditors is increasing rapidly, and the audit staff involved require a background in data analytic technologies to perform their audit tasks (Bagranoff and Vendrzyk, 2000). Debreceny et al. (2005) also found that GAS is frequently used in special investigation audits at two large local banks in Singapore. The key reasons for the widespread use of GAS include its relative simplicity of use, requiring little specialized information systems knowledge, and its adaptability to a variety of environments and users (Braun and Davis, 2003).

While studies show that GAS is widely used by internal auditors, recent surveys show, however, that CPAs do not frequently and systematically use these CAATTs in practice (Kalaba, 2002). Other surveys (1998-2001) indicate that both ex-post and concurrent CAATTs are used primarily in internal audit settings through proprietary implementations. Several studies concentrate on the adoption of GAS (Wehner and Jessup 2005; Debreceny et al. 2005; Braun and Davis 2003), but only a few papers have analyzed its usage by external auditors.

2.2.4 Audit Software Application in Audit Practices

2.2.4.1 Client acceptance and audit planning

Technology is already having a major impact on audit planning. For example, computers are used to generate client-specific internal control templates to help identify strengths and weaknesses in a system. To generate a client-specific internal control template, auditors input data into a computer-based questionnaire developed by the audit firm. In response to queries from the software, the computer can then be used to analyze a client's business processes, determine controls that are present or missing (based on a comparison with industry benchmarks), assess inherent and control risk, and generate a detailed series of audit tests to be performed. As audit work continues, the results of audit testing can then be entered into the software to determine if the risks identified during planning have been appropriately addressed. This helps to ensure that all significant risks have been addressed during the audit (cf. Bierstaker et al. 2001).
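The benchmark comparison described above can be sketched as a simple set difference between an industry benchmark and the controls observed at the client; the control names and benchmark contents here are invented for illustration.

```python
# Hypothetical sketch: flag internal controls that an industry benchmark
# expects but that the client's questionnaire responses do not show.

INDUSTRY_BENCHMARK = {
    "segregation_of_duties",
    "authorization_of_transactions",
    "monthly_bank_reconciliation",
    "it_access_controls",
}

client_controls = {  # controls observed at the client (invented responses)
    "segregation_of_duties",
    "monthly_bank_reconciliation",
}

# Missing controls become candidates for additional audit tests and a
# higher control risk assessment.
missing = sorted(INDUSTRY_BENCHMARK - client_controls)
risk = "higher" if missing else "lower"
print(missing)  # -> ['authorization_of_transactions', 'it_access_controls']
print(risk)     # -> higher
```

Real audit software would, of course, weight controls and map each gap to specific test procedures; the sketch only shows the comparison step.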

Many firms have adopted a risk-based audit approach and have developed or purchased software to help the auditor gain an understanding of how external and internal risks affect the audit. These software packages can also be used to help sell risk identification and/or risk management services to existing and potential clients (cf. Bierstaker et al. 2001).

In terms of sampling, the new risk standards (SAS Nos. 104-111) suggest that auditors use computer-assisted auditing to select sample transactions to audit from key electronic files, sort transactions with specific characteristics, test an entire population instead of a sample, and obtain evidence about control effectiveness (AICPA 2006).
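The three tasks above can be sketched in a few lines; the transaction records, amounts, and approval flags below are invented for illustration, and real GAS packages work against full data files rather than in-memory lists.

```python
import random

# Hypothetical sketch of three computer-assisted sampling tasks:
# (1) random sample selection, (2) filtering transactions with a specific
# characteristic, (3) a full-population test instead of a sample.

transactions = [{"id": i, "amount": a, "approved": ap}
                for i, (a, ap) in enumerate([
                    (120.0, True), (9_500.0, True), (15_000.0, False),
                    (80.0, True), (22_000.0, True), (4_300.0, False)])]

# 1. Random sample of transactions for detailed vouching
random.seed(1)
sample = random.sample(transactions, k=3)

# 2. Filter transactions with a specific characteristic (large amounts)
large = [t for t in transactions if t["amount"] >= 10_000]

# 3. Full-population test: every transaction must carry an approval
unapproved = [t["id"] for t in transactions if not t["approved"]]

print(len(sample), [t["id"] for t in large], unapproved)
```

Testing the whole population (step 3) rather than a sample is exactly the capability that distinguishes computer-assisted procedures from manual sampling.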

2.2.4.2 Audit substantive testing

Every audit engagement involves testing management’s assertions (e.g. existence of

assets, liabilities and owner’s equity, quality of earnings, reliability of internal control,

compliance with applicable laws and regulations) by gathering sufficient and

competent evidence.

2.2.4.3 Audit completion and report writing

A major advantage of electronic working papers that enhances efficiency is that information can be shared among auditors at different locations through the use of e-mail or remote access software (Debreceny et al., 2005). As needed, working papers from prior years can easily be integrated into the current year's working papers.


2.3 Individual Performance

In organizational life and other human affairs, individual performance plays a great role in achieving the goals set. Different performance measurements are used in different situations. For example, students in classrooms at school or university are normally evaluated based on their participation, assignments or capability to work in a group. In an organizational context, workers may be evaluated based on their productivity, the quality of their output, commitment, skills, or integrity (Shen, 2009).

Due to the variety of contexts, individual performance has been defined differently, and its measurements also differ. In this section, definitions, operationalizations and measurements of individual performance, and its relationship with information technology, as relevant to the current study, will be reviewed.

2.3.1 Individual Performance Definition

In recent years there has been a large increase in research related to individual performance, particularly in psychology, education and learning, human resources, and general management. Researchers have defined individual performance differently, but consistently within their respective areas of study. In the information systems (IS) literature, however, researchers seem to assume that performance is self-explanatory. This explains why a clear definition of individual performance is still lacking in this research area. In addition, a review of the IS literature on individual performance found that the contexts, the constructs measured, and the theories relied upon are not consistent.

Can put the summary of previous literature on Ind Performance in IS (in table form)

Most studies in the IS literature developed their definitions of individual performance based on the "individual impact" definition from DeLone and McLean (1992). According to DeLone and McLean (1992), IT use leads to three types of outcomes: user satisfaction, individual impact, and organizational impact. Individual impact was defined as "the effect of information on the behavior of the recipient". Compared to individual performance, the term individual impact is used loosely; it transcends mere individual performance and includes all other outcomes under different contexts, for example, change in decision-making productivity, change in user activity, and the user's perception of the importance of the system (DeLone and McLean, 1992; cf. Shen, 2009).


In auditing

2.3.2 Relationship between Technology Adoption and Individual Performance

The relationship between IT use and individual performance has not been well addressed in previous studies (Sundarraj & Vuong, 2004). The general belief is that more use of IT will lead to better individual performance. This can be traced back to DeLone and McLean's work. In their study, the measurements of information systems success fall into six major categories: system quality, information quality, use, user satisfaction, organizational impact, and individual impact. Subsequently, several studies based their models on this work while overlooking testing of the link between IT use and individual performance (Almutairi & Subramaniam, 2005; Iivari, 2005; McGill, Hobbs, & Klobas, 2003). However, prior research has failed to reach a consensus on the nature and strength of the relationship between IT use and individual performance. Previous studies presented conflicting results: some found that IT use improves individual performance, and some found a negative relationship.

Different researchers have studied different kinds of IT use and examined their impact on individual performance. In fact, the linkage between information technology and individual performance has been an ongoing concern in IS research (Goodhue and Thompson, 1995). Most studies in organizational settings show a positive relationship between IT use and individual performance. For example, Goodhue (1988) reported that information systems have a

positive impact on performance only when there is correspondence between their

functionality and the task requirements of users. Devaraj & Kohli (2003) argued that the

driver of IT impact is not the investment in the technology, but the actual usage of

technology. Their study of technology use in hospitals found that technology usage was positively and significantly associated with measures of hospital revenue and quality.

Other studies have reported a negative relationship between IT use and performance. Bible, Graham, & Rosman (2005) examined the impact of electronic work environments on auditor performance, assessing whether audit technology affects decision making in a workpaper review task. The results of an experiment revealed that the electronic environment negatively impacted auditors’ performance: auditors in the electronic work environment were found to be less able to identify seeded errors and to use them properly in evaluating loan covenants, compared to auditors in the traditional paper environment.


Many studies have tested the association between “system use” and “individual impacts”, and the association was found to be significant in each of these studies (DeLone and McLean, 2003)… to explain one by one..the seven studies

2.4 Theoretical background

The previous sections discussed technology adoption and its impact on individual performance. For a technology to be of value, it must be accepted and used. An important model for studying technology adoption and usage is the Unified Theory of Acceptance and Use of Technology (UTAUT).

The following sub-sections discuss the history of the Technology Acceptance Model (TAM), from which most of the basic variables tested in the UTAUT model were partly adopted. This is followed by a discussion of the history of the UTAUT model, the model tested in this study. These sub-sections lead into the introduction of the variables adopted and tested in the current study.

2.4.1 The History of Technology Acceptance Model

The Technology Acceptance Model (TAM) was developed to measure system use, acceptance, and user satisfaction (Davis, Bagozzi, & Warshaw, 1989). The Davis model focuses specifically on information systems use and is based on the theory of reasoned action (TRA), originally introduced by Ajzen and Fishbein in the early 1980s (Ajzen & Fishbein, 1980) and further refined by Ajzen as the extended TRA in 1991 (Ajzen, 1991).

TRA is a general behavioral model that can be used to predict behavior in a wide variety of situations, not just the adoption of information systems technology. Ajzen states that an individual’s beliefs influence his or her attitude towards various situations. This attitude combines with subjective norms to shape each individual’s behavioral intentions. (Cp Moran 2006)
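The belief–attitude–intention structure described above is often summarised in the TRA literature as a weighted additive model. A standard textbook rendering (not a formula specific to this thesis) is:

```latex
% Behavioural intention (BI) in TRA: attitude toward the behaviour (A_B)
% combines with the subjective norm (SN), each carrying an
% empirically estimated weight.
BI = w_{1}\,A_{B} + w_{2}\,SN
```

where w1 and w2 are empirically estimated weights reflecting the relative importance of attitude and subjective norm for the behavior under study.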

This theory was further refined into the theory of planned behavior (TPB), also titled the extended theory of reasoned action. The TPB is a general behavior model that can be used to study broader acceptance situations than the TAM, but it has also been applied to information systems studies (Mathieson, 1991; Taylor & Todd, 2001). (Cp Moran 2006)

TPB includes many factors, or constructs, used to determine users’ acceptance of innovations. The three main considerations are behavioral beliefs, normative beliefs, and control beliefs. These are the users’ core beliefs about the consequences of the action, the expectations of others, and beliefs about how the user controls, or does not control, the end result of the behavior. Table 1 further describes the model parameters.

2.4.2 The Unified Theory of Acceptance and Use of Technology (UTAUT) Model.

The Unified Theory of Acceptance and Use of Technology (UTAUT) integrated the concepts of previous…. This synthesized model was created to present a more comprehensive picture of the acceptance process than any previous model was able to do. The model emerged from the combination of components from eight models previously established in the IS literature, all of which had their origins in psychology, sociology, and communications. These eight models are the Theory of Reasoned Action (TRA); the Technology Acceptance Model (TAM); the Motivational Model (MM); the Theory of Planned Behaviour (TPB); the combined TAM and TPB (C-TAM-TPB); the Model of PC Utilization (MPCU); Innovation Diffusion Theory (IDT); and Social Cognitive Theory (SCT). Each model attempts to predict and explain user behavior using a variety of independent variables.

Researchers have analysed and compared the competing technology acceptance theories and models noted above in order to identify the most promising ones with respect to their ability to predict and explain individual behaviour towards the acceptance and usage of technology. The UTAUT model was formulated after these eight models had been thoroughly reviewed and empirically compared. The UTAUT model explained about 70 percent of the variance in intention to use technology, vastly superior to the variance explained by any of the eight individual models (Rosen, 2005). Although the UTAUT model is relatively new, its suitability, validity, and reliability in technology adoption studies in different contexts and across countries have been demonstrated (AlAwadhi and Morris, 2008; Venkatesh and Zhang, 2010).
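The "percentage of variance explained" reported above is the R-squared of a regression predicting intention. As a purely illustrative sketch (simulated data and invented effect sizes, not the actual Venkatesh et al. analysis), the following shows how a model combining several constructs can explain more variance in intention than a model using one construct alone:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 400

# Invented standardized scores standing in for UTAUT constructs
pe = rng.normal(0, 1, n)   # performance expectancy
ee = rng.normal(0, 1, n)   # effort expectancy
si = rng.normal(0, 1, n)   # social influence
# Intention driven by all three constructs plus noise
bi = 0.6 * pe + 0.4 * ee + 0.3 * si + rng.normal(0, 0.5, n)

def r_squared(predictors, y):
    """Proportion of variance in y explained by a least-squares fit."""
    X = np.column_stack([np.ones(len(y)), *predictors])  # add intercept
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    return 1 - resid.var() / y.var()

r2_single = r_squared([pe], bi)        # one construct alone
r2_full = r_squared([pe, ee, si], bi)  # the fuller, UTAUT-style model
print(round(r2_single, 2), round(r2_full, 2))
```

The fuller model explains a larger share of the variance, mirroring (in toy form) how the synthesized UTAUT model outperformed the individual models it drew on.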

The UTAUT aims to explain users’ intentions to use an information system and their subsequent usage behavior. The main variables tested in the UTAUT model are performance expectancy, effort expectancy, social influence, and facilitating conditions. Venkatesh et al. (2003) describe performance expectancy as the degree to which a user expects that using a new system or technology will bring gains in job performance. This is an important variable in predicting user behavior. Considering that many people take in-service training courses in order to pursue career enhancement opportunities, it is logical to offer them something new that would contribute to their job


performance. Therefore, high performance expectancy can encourage potential users to adopt a new technology or system. Because of its importance, many theories have adopted this construct in different forms.

The second main variable is effort expectancy. This variable measures the degree of effort that a person must put forth when using a new technology or system. Research has shown that users are more likely to adopt or use new technologies if they require a relatively minimal amount of effort (Agarwal and Prasad, 1997; Konradt et al., 2006). Resistance can be expected from users if a new system requires them to work hard in order to learn it: many people do not resist an innovation itself, but resist having to learn something new that requires effort when a well-known system is available. Therefore, this variable is also important in predicting user behavior in terms of accepting or rejecting a new technology. Effort expectancy groups together several constructs from other theories and models.

The third variable in the UTAUT is social influence. Social influences are the external and internal factors that affect people when they make a decision or display a behavior. In other words, the degree to which people value significant others’ opinions constitutes social influence. Some people may feel pressure to comply with the proposed behavior, which in this case can be the use of a new technology, while others may not. Social influence is used as an independent variable in many models, such as the “subjective norm in TRA, TAM2, TPB/DTPB and C-TAM-TPB, social factors in MPCU, and image in IDT” (Venkatesh et al., 2003, p.451).

Finally, facilitating conditions is the last main variable in UTAUT. Venkatesh et al. (2003) describe facilitating conditions as the state of readiness of the technological environment with regard to its support for the user. Users may need support, such as technical help, in using a new system or technology. If the technological environment offers such support, users will be more likely to be in favor of using it; if it does not, it will be more difficult to encourage users to adopt the new system or technology. Like the previous variables, facilitating conditions is included in earlier models and theories, but in different forms. One example of this variable in a different form is the perceived behavioral control construct used in TPB.


The UTAUT model also incorporates certain variables as moderators of the relationships described above. In particular, the model posits that gender, age, experience, and voluntariness of use (the demographic and contextual characteristics) moderate the impact of the four key constructs on usage intention and behavior (Venkatesh et al., 2003).
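To make the idea of a moderator concrete, the sketch below simulates survey-style data in which the effect of performance expectancy on behavioral intention differs by gender, and tests the interaction with ordinary least squares. All variable names and effect sizes here are invented for illustration, and the pandas/statsmodels libraries are assumed to be available; published UTAUT studies typically use PLS or covariance-based SEM rather than plain OLS.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 500

perf_exp = rng.normal(0, 1, n)   # performance expectancy score
gender = rng.integers(0, 2, n)   # 0/1 dummy (coding is arbitrary)

# Built-in moderation: the slope of performance expectancy on
# intention is steeper for the group coded 1
intention = (0.4 * perf_exp + 0.3 * gender
             + 0.5 * perf_exp * gender + rng.normal(0, 1, n))

df = pd.DataFrame({"intention": intention,
                   "perf_exp": perf_exp,
                   "gender": gender})

# 'perf_exp * gender' expands to both main effects plus their product;
# a significant perf_exp:gender coefficient indicates moderation
fit = smf.ols("intention ~ perf_exp * gender", data=df).fit()
print(fit.params["perf_exp:gender"], fit.pvalues["perf_exp:gender"])
```

A moderated relationship shows up as a non-zero interaction coefficient; if only the main effects were significant, gender would shift intention overall but would not change the strength of the performance expectancy effect.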

Gender, which has received some recent attention, is one of the key moderating influences in technology adoption. Park, Yang, and Lehto (2007) examined the adoption of mobile technologies by consumers in China, surveying 221 Chinese users in order to understand their perceptions regarding mobile communication technologies. The results of their analysis revealed the role that gender plays in affecting user intentions: male users were more influenced by performance expectancy than female users. In other words, male users were more focused than female users on increasing their gains from mobile technologies. This finding is consistent with the study by Wang and Shih (2009). On the other hand, effort expectancy was a stronger influence for females than for males. Interestingly, experience did not significantly affect user intentions in this study.

Wang, Wu, and Wang (2009) investigated the acceptance of mobile learning technologies and focused on gender and age to see whether they make a difference in users’ perceptions. In contrast to previous studies, Wang et al. (2009) did not find age or gender to have a significant moderating effect on performance expectancy. On the other hand, both gender and age significantly moderated effort expectancy and social influence. Wang et al. (2009) reported that effort expectancy was more important for older users than for younger ones. While this finding was not unexpected, as older users tend to look for less complex systems to operate, the moderating effect of gender on social influence was unanticipated: male users’ social influence scores were higher than those of female users, that is, male users were more affected by the opinions of significant others than female users were.

Age....

Experience...


Many studies have explored the effects of moderating variables on user intentions. Koivumaki, Ristola, and Kesti (2008) studied user perceptions of mobile services, testing the University of Oulu’s SmartRotuaari program on 243 people. The results indicated that experience played a major role in determining user intentions: experience positively moderated performance expectancy and effort expectancy, while facilitating conditions was negatively moderated by experience. In particular, Koivumaki et al. (2008) noted that skilled users found the system useful and easy to use.

Wang and Shih’s (2009) study of 244 Taiwanese users of e-Government information kiosks also produced significant results in terms of moderating variables. According to the results, effort expectancy was stronger for older users than for younger ones. Moreover, gender was significant in determining user intentions: Wang and Shih found that performance expectancy was stronger for men than for women, while social influence was stronger for women than for men.

Voluntariness....

Figure 1 : UTAUT Model (Venkatesh et al., 2003). Performance expectancy, effort expectancy, social influence, and facilitating conditions determine behavioral intention and use behavior; gender, age, experience, and voluntariness of use moderate these relationships.


2.4.3 Computer self-efficacy

2.4.3.1 Self-efficacy defined

The concept of self-efficacy is derived from the work of Bandura and his Social Cognitive Theory (1986). Social Cognitive Theory (SCT) suggests that human behavior is reciprocally influenced by environmental as well as cognitive factors, which include outcome expectations and self-efficacy (Downey and McMurtrey, 2007). Self-efficacy is an individual’s confidence in his or her ability to successfully accomplish a given task or activity (Bandura, 1997). Self-efficacy beliefs therefore determine how individuals feel, think, motivate themselves, and behave, and they produce diverse effects through cognitive, motivational, affective, and selection processes (Reid, 2008). Bandura (1986, 1997) holds that self-efficacy is more than a belief in one’s ability level; it also orchestrates the motivation necessary to carry out the behavior. Self-efficacy helps determine which activities an individual engages in, the effort expended in pursuing them, and the persistence shown in the face of adversity (Downey and McMurtrey, 2007).

General and specific CSE..see Agarwal et al (2000) page 419..

Self-efficacy also applies to computing behavior. Several studies (Burkhardt and Brass, 1990; Gist et al., 1989) have examined the relationship between self-efficacy with respect to using computers and a variety of computer behaviors. Compeau and Higgins (1995b) define computer self-efficacy (CSE) as the judgment of one’s capability to use an information technology; they refer to CSE as a self-assessment of an individual’s ability to apply computer skills to complete specific tasks. They remark on the relative paucity of prior research examining the influence of self-efficacy in the context of computer training. Compeau and Higgins (1991, 1995a, 1995b, 1999) are pioneers in studying the impact of CSE on human interaction with computers.

This study extends current understanding of the concept of CSE in the context of the usage of audit software. See Agarwal et al (2000) pg 419, para 4..explain about CSE concept in audit software…and its role as moderating factor to training.


CHAPTER 3

RESEARCH FRAMEWORK AND HYPOTHESES DEVELOPMENT

3.1 Introduction

The previous chapter thoroughly reviewed the literature related to UTAUT and individual performance. This chapter presents a research framework to determine the relationships between the research variables. The research variables are: (1) the level of audit software application as the technology adoption construct; (2) performance expectancy, effort expectancy, social influence, facilitating conditions, organizational support, and training as antecedents of the technology adoption construct; (3) experience and computer self-efficacy as moderator variables; and (4) individual performance as the criterion variable. Then, the operationalization and specific measurements of these variables are discussed in detail. Finally, the chapter discusses the research hypotheses to be tested.

3.2 Research framework

The comprehensive review of the literature performed in the previous chapter found that most prior research on technology adoption has stopped at the behavioral intention to adopt information technology (…..quoted). The review of past studies revealed that only a few studies have examined the actual usage and impact of technology, and fewer still the adoption of audit software in audit practice. Several models have been proposed to predict technology adoption, such as the Technology Acceptance Model (Davis et al., 1989), the Theory of Planned Behavior (Ajzen, 1991), and the Unified Theory of Acceptance and Use of Technology (Venkatesh et al., 2003). However, these models focus on whether a system is used, not how it is used (Dowling, 2009). A number of studies have attempted to extend technology acceptance research further into the use of technology in audit practice. For example, Bierstaker, Burnaby, & Thibodeau (2001) assessed the current impact of technology on the audit process and the future implications of technological trends for the auditing profession.

Based on the review of several technology adoption models as well as mid-range theories related to technology adoption, the theoretical foundation of this study is premised on the UTAUT model tested by Venkatesh et al. (2003). Venkatesh et al. (2003) also highlighted that future research needs to be directed more to


the outcomes of technology adoption. To date, little research has addressed the link between user acceptance and individual performance outcomes. Thus, the assumption that technology usage will always result in positive outcomes remains untested. This study is therefore believed to fill the gap that exists in the area of audit technology.

Figure 2 : Research framework

Here, need to explain the adoption of audit software in auditing practices. Need to show that there has been no attempt to investigate the adoption, its impact on performance, or the determinant factors..

Performance expectancy, effort expectancy, social influence, facilitating conditions, client technology, and organizational support are modeled as determinants of audit software application, which in turn affects individual performance; training, experience, and computer self-efficacy act as moderators.


3.3 Operationalization and measurement of variables

The present study is based on the UTAUT model, which was developed and tested by Venkatesh et al. (2003). The focus of the study is to examine the factors that influence the application of audit software among auditors in audit firms that have adopted audit software in practice. The study also aims to examine the impact of audit software application on individual audit performance. As mentioned earlier (make sure this has been mentioned), the UTAUT model is in principle open to the inclusion of other predictors if these predictors can explain significant variance in technology adoption, in an attempt to provide an even richer understanding of technology adoption and usage behavior. Give example other studies that recommended other variables..

As stated previously, the UTAUT model is open to the inclusion (make sure this is mentioned previously) of other variables pertinent to the usage behavior of audit software. Therefore, besides the main variables tested in the UTAUT model, the conceptual framework of the present study also includes computer self-efficacy (Compeau and Higgins, 1995; Burkhardt and Brass, 1990; Gist et al., 1989) and training as additional elements influencing usage behavior.

The main variables introduced in the UTAUT model are performance expectancy, effort expectancy, social influence, and facilitating conditions. The present study introduces two new variables to the existing UTAUT model: client technology and computer self-efficacy. The study tests the moderating effect of experience on the relationships of performance expectancy and effort expectancy with audit software application, as tested in the UTAUT model. As a further contribution, a new moderating variable, training, is also tested to examine its interaction effect on the relationships of performance expectancy and effort expectancy with audit software application.

3.3.1 Audit software application

This study uses the term audit software application to describe software used to assist auditors in completing one or more tasks. A review of prior literature and discussions held with practitioners and academicians resulted in the identification of 15 audit software applications. These included applications examined in previous research, for example analytical procedures (Knechel, 1988) and identifying samples (Kachelmeier & Messier, 1990). They also included recent audit software applications in audit tasks, for example fraud review (Bell & Carcello, 2000) and testing online transactions (Wright, 2002). This study grouped the audit applications


3.3.2 Determinant factors of audit software adoption.

3.3.3 Individual performance

The choice of performance measures is one of the most critical challenges facing organizations. Performance measurement systems play a key role in developing strategic plans, evaluating the achievement of organizational objectives, and compensating managers (Ittner and Larcker, 1998). Individual performance is the ultimate dependent variable, or criterion variable, in the present research. Generally, individual performance captures the extent to which users believe that adopting technology in performing a task affects their job performance. Individual performance has also been examined in specific aspects, such as whether use of a new technology enriches the work …………………In the present research, individual performance is defined as the extent to which auditors believe that using audit software in performing their audit tasks can increase the performance of their audit work.

Individual performance has been measured in the audit technology context by several researchers. For example ……

3.4 Hypotheses Development

According to Vierra, Pollack and Golez (1998), researchers normally restate research questions as hypotheses because hypotheses can be subjected to empirical testing; that is, they can be tested using some form of research procedure such as observation or surveys. In this way, the investigation can confirm whether the prediction is empirically sound (Singh, Fook and Sidhu, 2006). The hypotheses of this study were developed on the basis of observations of the past literature, according to the richness of measurement. Where the literature shows mixed results, this study considers the majority of results in developing the hypothesis.

Generally, the relationships investigated in the present research and their related hypotheses can be classified as: (1) between audit software application and audit performance; (2) between the determinant factors and audit software application; and (3) between the determinant factors and audit software application with the interaction of moderator variables. Details of the above are discussed in the following subsections.


3.4.1 Audit software application and audit performance

Previous studies that have tested the relationship between IT use and individual performance have shown mixed results. McGill, Hobbs, & Klobas (2003) and Iivari (2005) did not find a significant relationship between IT use and individual performance; both used frequency as the measure of IT use. For the performance measurements, McGill et al. (2003) measured the subjective effectiveness, productivity, and performance of a user-developed application, while Iivari (2005) measured the perceived efficiency, productivity, and effectiveness of a financial accounting system.

3.4.2 Determinant factors of audit software use

Although UTAUT is quite a new theory, dating to 2003, the literature review revealed hundreds of research studies that use UTAUT as a theoretical background. This is an indicator of the high degree of acceptance of UTAUT by scholars from many disciplines. In the following sections, some of these studies are detailed, providing information regarding the variables used in the model and their level of significance in the respective studies. (cp odabasi, 2010)

3.4.2.1 Performance Expectancy

This variable is considered to be the most important one in the UTAUT model. The performance expectancy variable in the UTAUT predicts a positive relationship between intention to use technology and expected gains in job performance. Indeed, in most user acceptance studies, performance-related variables such as perceived usefulness attract the most attention. In order to determine whether this variable is indeed the most important, results from studies in different disciplines are reviewed below. Anderson et al. (2006) applied the UTAUT model to understanding the perceptions of university faculty toward tablet personal computer (PC) usage. They surveyed 50 faculty members using a web-based survey. Anderson et al. found that performance expectancy was the “strongest predictor” (Anderson et al., 2006, p.430). According to their study, performance expectancy positively affected usage of the tablet PC: faculty who believed that using a tablet PC increased their work performance tended to use it more than faculty who thought otherwise.

Performance expectancy produced similar results in Wang and Shih’s (2009) study of information kiosk systems. They explored the perceptions of 244 Taiwanese users of an e-Government information kiosk. Performance expectancy was operationalised as the increased gain in accessing government-related information, and they concluded that users’ intention to use the information kiosks was heavily influenced by their level of performance expectancy. Increasing users’ performance expectancy was therefore associated with high usage of the e-Government kiosks. In addition to e-Government and academic environments, performance expectancy has also been found to be the most influential factor in technology adoption in business settings.

Wang, Archer, and Zheng (2006) examined the use of electronic marketplace (EM) applications and the perceptions of their intended users. They associated performance expectancy with greater economic benefits, such as increased customer contact and improvement of business processes. They assumed that a system which increases a company’s ability to contact buyers and sellers would be acceptable to that company; furthermore, if the system resulted in an improvement in business processes, it would attract more users. Employing a case study methodology with UTAUT as the theoretical background, Wang et al. (2006) determined that performance expectancy was a major variable in inducing the business sector to use EM. In other words, the results of their study confirmed the significant effect of performance expectancy on the intention to use EM.

In another study, Bandyopadhyay and Fraccastoro (2007) used the UTAUT model to understand users’ perceptions of prepayment metering systems. The researchers hypothesized that consumers would prefer the prepayment meter technology over traditional payment methods if they believed it was a useful system for managing their electricity usage. The results confirmed this hypothesis, finding a significant relationship between performance expectancy and the intention to use the system: people who thought that the prepayment metering system would be helpful in managing their electricity accounts intended to use the system more than people who thought otherwise. Furthermore, like many other scholars, Bandyopadhyay and Fraccastoro (2007) determined that performance expectancy was the strongest variable within the theoretical model.

With regard to the use of audit software, ..........(find study on audit software adoption) Look at Bierstaker, Burnaby and Thibodeau (2001) – explain the audit process: audit planning, testing and documentation. Hence, the following is hypothesised:


H2 : Auditors with high performance expectancy are associated with high usage of audit

software.

3.4.2.2 Effort Expectancy

Like performance expectancy, effort expectancy is considered an important determinant of user intentions. In the user acceptance literature, most studies have found a significant relationship between effort expectancy and intention, although the relationship is not as strong as that with performance expectancy. Lin and Anol (2008) studied online social support and network IT usage among 317 Taiwanese university students using the UTAUT. They operationalized effort expectancy as the perceived ease of using the network IT. Analysis of the survey revealed a significant relationship between effort expectancy and user intentions: students who found the system easy to use were more likely to use it than those who found it difficult.

Wu, Tao, and Yang (2007) also used UTAUT as the theoretical background of their study. They examined the perceptions of users of 3G mobile communication systems and hypothesized that effort expectancy would play a major role in increasing users’ intention scores. The researchers surveyed 394 users using an online questionnaire. Using structural equation modeling, Wu et al. found a significant relationship between effort expectancy and the intention to use 3G mobile technologies; their hypothesis that effort expectancy boosts user intentions was therefore supported.

Im, Hong, and Kang (2007) studied MP3 player and internet banking technologies in two different settings, namely Korea and the United States, comparing the perceptions of users from both countries to see whether there were any differences. Im et al. (2007) collected data from 501 users, including Korean college students and US undergraduate students, and defined effort expectancy as the ease of using the MP3 players and internet banking. Notwithstanding the differences in nationality, the results demonstrated a significant relationship between effort expectancy and user intentions for both Korean and US students: users who found the MP3 players and internet banking easy to use had high intention scores. Although a majority of studies demonstrate a significant relationship between effort expectancy and user intentions, a limited number of studies show otherwise. Anderson, Schwager, and Kerns (2006) studied the perceptions of college faculty


in their use of tablet PCs. They hypothesized that the ease of use of the tablet PCs would positively affect user intentions; in other words, they expected higher intention scores from users who found the tablet PC easy to operate. However, the study did not produce any significant results for effort expectancy, and the hypothesis was rejected.

Adapting effort expectancy to the use of audit technology. Need to find support for this...

Hence, the following is hypothesised:

H3: Auditors with high effort expectancy are associated with high usage of audit software.

3.4.2.3 Social Influences

Marchewka et al. (2007) examined the Blackboard application which is a type of educational

software widely used by the university community. Their sampling frame was university

students both at the graduate and undergraduate levels. After surveying 132 university

students, they concluded that there was a significant relationship between social influences

and intention to use the Blackboard system. According to the results, students are affected by

their significant others’ opinions in terms of their use of the Blackboard system. If they believed that they were encouraged by those people, they were more likely to use the system.

Armida (2008) used UTAUT as a theoretical framework for her study on VOIP systems. She

hypothesized that social influence scores would positively affect users’ intention to use the

VOIP systems. In other words, users would decide whether to use the system based on the

opinions of people whom they consider important. Armida surveyed 475 respondents from

various states in order to conduct her study. After statistical analysis, Armida concluded that

social influences were a significant predictor of intention to use the VOIP systems.

Neufeld, Dong, and Higgins (2007) investigated the relationship between charismatic leadership and the adoption of information technology. Neufeld et al. collected a sample of 207 respondents from seven organizations and hypothesized that social influence was a determinant of

IT adoption. An analysis of their data resulted in positive scores for social influences. In other

words, the results supported their hypothesis and found a significant relationship between

social influences and intention to use the new IT system.

Adapting social influence to the use of audit software, auditors are expected to be more likely to use audit software when people who are important to them encourage its use. Thus, the following is hypothesised:

H4: Social influence will significantly affect the use of audit software among auditors.


3.4.2.4 Facilitating Conditions

Facilitating conditions refer to the support functions that accompany technology implementations.

Depending on the complexity of the systems, facilitating conditions can affect intention to use

a system. Research findings from different studies display varying results for facilitating

conditions. While some studies report significant relationships, others find no significant relationship between facilitating conditions and intention to use a system. One of

the studies that produced significant results was conducted by AlAwadhi and Morris (2008).

The researchers examined E-government services in Kuwait and surveyed 880 university

students in order to obtain their data. AlAwadhi and Morris (2008) operationalized facilitating conditions with two measures: first, having the knowledge to use the e-government services and, second, getting support when needed. The results of their study indicate that facilitating conditions are a significant determinant of using a new system.

Wills, El-Gayar, and Bennett (2008) also found a significant relationship between facilitating

conditions and intention to use a new system in their study of electronic medical records. In

this study, Wills et al. (2008) studied professionals working in the field of healthcare. They

defined healthcare professionals as “registered nurses, physician assistants or certified nurse

practitioners in the state of South Dakota” (p. 398). They surveyed 52 healthcare professionals

in order to obtain their data. As noted earlier, the results demonstrated a significant

relationship between facilitating conditions and intention to use electronic medical records.

Al-Gahtani, Hubona, and Wang (2007) used the UTAUT model in order to understand the

perceptions of Saudi Arabian users in terms of IT acceptance. Al-Gahtani et al. (2007)

surveyed 1190 workers from companies that are located in four major cities. They

hypothesized that facilitating conditions would positively affect users’ behaviors in terms of

using computer systems. However, the study did not produce significant results and their

hypothesis was rejected.

Adapting facilitating conditions to the use of audit software, auditors are expected to be more likely to use audit software when organisational and technical support is available. Thus, the following is hypothesised:

H5: Facilitating conditions will have a significant influence on the use of audit software

among auditors.


3.4.2.5 Training

Training is a major component influencing the extent to which users master a system: Boudreau and Seligman (2005) observed that differences in training explained why some users did not understand a system while others appeared to master it. Thus, the following is hypothesised:

H6: Level of training in computer technology received will have a significant influence on the

use of audit software among auditors.

3.4.2.6 Organizational support

In this study, organizational support is defined as the extent to which auditors believe that their organization helps them use audit software (Jiang and Klein, 2000). Numerous studies

have demonstrated that organizational support has a positive effect on employees’ technology

acceptance and usage behavior. In their meta-analysis, Mahmood et al. (2001) determined that

organizational support is the third most important factor affecting technology usage behavior.

They also argued that organizational support is more powerful in facilitating usage, although

the factors of education level, training level, and professional level were found to have a

substantial effect on usage as well. Speier and Venkatesh (2002) also found that management

support has a positive effect on a salesperson’s perceptions of technology, which, in turn, affects person–technology fit. Additional support can be found in Cheung et al. (2000), who

assessed that facilitating conditions have a positive effect on usage behavior. They included

the support provided by a company to aid usage as one of the facilitating conditions (Lee et al., 2005).

However, some research also suggests that organizational support moderates the relationship

between a person’s perception of technology and future usage behavior. Within the context of

computer-aided design (CAD), Malhotra et al. (2001) examined whether or not the impact of

perceived system qualities on system employees’ performance could vary depending on the

degree of perceived managerial influence. They found that managerial influence interacts with

perceived system qualities to influence system employees’ performance. They suggested that

an organization could increase the level of employees’ usage and performance by providing

sufficient resources for the support function. They also pointed out that the relationship

between perceived system quality and performance would be stronger among employees who

frequently exchanged ideas in the process of adoption and utilization. Stamper and Johlke

(2003) investigated the impact of perceived organizational support on the relationship

between employees’ negative perceptions (i.e., role conflict and role ambiguity) and work


outcomes (i.e., job satisfaction, intent to remain, and task performance). They found that

perceived organizational support attenuates the negative relationship between employees’

negative perceptions and work outcomes. They also suggested that employees who perceived

high levels of organizational support are more likely to have greater job satisfaction and

remain with their organizations longer than workers who experienced low levels of

organizational support (Lee et al., 2005).

H7: Organizational support will have a significant influence on the use of audit software

among auditors.

3.4.2.7 Client’s technology adoption

One of the factors that may contribute to the usage of audit software is the level of technology adoption by the audit clients. Alali and Pan (2011) argued that the audit process has been greatly impacted by audit clients’ increasing dependence on information technology (IT). Thus, when source documents are only available in electronic form, auditing has to change from “auditing around the computer” to “auditing through the computer”.

The level of a client’s technology adoption may also refer to the complexity of the IT adopted by the client. The level of IT use by the client affects the auditor’s decision to engage an IT specialist. In fact, SAS No. 94 suggests that auditors consider several factors when deciding whether to use an IT specialist. These factors include: (1) the complexity of the client’s IT, (2) the significance of changes made to the existing client system or the implementation of a new system, (3) the extent to which data are shared among client systems, (4) the client’s use of emerging technology and (5) the significance of audit evidence that is available only in electronic form (AICPA 2008, AU 319.31).

The complexity of client IT also affects the auditor’s risk assessment procedures. Auditors may be unable to evaluate clients with complex IT without obtaining electronic records from the client using CAATTs (Janvrin, Bierstaker, & Lowe, 2009). To tackle this issue, regulatory standards encourage the use of CAATTs in the process of understanding and auditing controls and suggest that client IT considerations be reflected in risk assessment (AICPA 2001, 2006).

H8: The higher the level of technology adoption by the client, the higher the audit software

application by the auditors.


3.4.2.8 Moderating effect of experience

This study focuses on the direct relationship between auditors’ expectancy and the application of audit software. Although the study is interested in confirming these direct relationships, its main conceptual focus is on the moderating effect of this external variable. The relationship between auditors’ expectancy and the application of audit software may vary depending on the specific conditions of the organization (Bhattacherjee, 2001; Malhotra et al., 2001). The general finding of these studies is that such direct relationships may be contingent on contextual and individual factors.

Bonner (1990) examined experience effects in auditing, specifically the role of task-specific knowledge in two audit tasks, analytical risk assessment and control risk assessment. The results showed that experience aided auditors in acquiring knowledge of relevant cues for cue selection in analytical procedure risk assessment. Experience also proved to be an important element in evaluating a client’s going-concern status. Since determining a firm’s going-concern status is a complex process, the judgments of experienced and less experienced auditors differ (Ho, 1994). The results indicated that experienced auditors generated more positive going-concern judgments compared to less experienced auditors.

User experience with information technology (IT) will determine individuals’ decisions to continue or discontinue usage of information technology in the information systems (IS) field (Bhattacherjee, 2001; Flavian, Guinaliu, & Gurrea, 2006; Thong, Hong, & Tam, 2006).

Deng, Turner, Gehling, and Prince (2010) investigated the effects of user experience with information technology (IT) on user satisfaction with, and continued usage intention of, the technology. Their research model uses the concept of cognitive absorption (CA) to conceptualize the optimal holistic experience that users feel when using IT. The results indicate that the more users experience CA with an IT, the more likely they are to perceive high utilitarian and hedonic performance of the service, experience positive disconfirmation of expectations, be satisfied with the technology and, in turn, intend to continue using it.

H9: Experience will moderate the relationship between performance expectancy and audit software application, such that the effect will be stronger for participants who have high performance expectancy.


H10: Experience will moderate the relationship between effort expectancy and audit software application, such that the effect will be stronger for participants who have high effort expectancy.
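Moderation hypotheses such as H9 and H10 are commonly tested with moderated regression, in which an interaction term (predictor × moderator) is added to the model and its coefficient is examined. The sketch below illustrates the idea on simulated data only: the variable names (performance expectancy, experience) follow the hypotheses above, but the effect sizes and data are invented assumptions, not results from this study.

```python
import numpy as np

# Illustrative moderated regression: does experience moderate the effect
# of performance expectancy (PE) on audit software use? (Simulated data.)
rng = np.random.default_rng(0)
n = 200

pe = rng.normal(0.0, 1.0, n)    # performance expectancy (standardised)
exp_ = rng.normal(0.0, 1.0, n)  # experience (standardised moderator)

# Simulated outcome with a known interaction effect of 0.4
use = 1.0 + 0.5 * pe + 0.3 * exp_ + 0.4 * pe * exp_ + rng.normal(0.0, 0.1, n)

# Design matrix: intercept, both main effects, and the interaction term
X = np.column_stack([np.ones(n), pe, exp_, pe * exp_])
beta, _, _, _ = np.linalg.lstsq(X, use, rcond=None)

# A non-trivial coefficient on the interaction term (beta[3]) is the
# signature of moderation; a formal test would examine its p-value.
print([round(b, 2) for b in beta])
```

In practice the same model is estimated in SPSS via hierarchical regression: main effects are entered first, the interaction term second, and the significance of the change in R² is assessed.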

3.4.2.9 Moderating effect of computer self-efficacy

CHAPTER 4

RESEARCH METHODOLOGY

4.1 Introduction

The previous chapter discussed the theoretical framework which served as the basis of the present research. It identified the related variables and described the hypotheses to be tested. The next stage is to design an appropriate research method for data collection and hypothesis testing purposes. This is important, as the accuracy of the research findings depends on the adoption of an appropriate research methodology. This chapter discusses the research method used in this study. Section 2 describes the research design for the study. Section 3 explains the research method adopted for study one. This is followed by the research method adopted for study two in Section 4. The techniques used for data analysis are explained in Section 5. Finally, Section 6 summarises the chapter.

4.2 Research design

A research design is the framework or plan for a study that describes the procedures to be

followed by researchers in collecting and analysing data to provide answers to the research

questions, thus accomplishing the objectives of the research (Churchill & Iacobucci, 2005; de


Vaus, 2001; Easterby-Smith, Thorpe & Lowe, 1991). Sekaran (2005) describes research design as the planning of the actual study, including decisions on the choice of sample, data collection and data analysis. Alias (2008) defined research design as the planning of research activities in terms of how data are collected and analysed, with the following aims: (1) to guide the researcher to an appropriate research method to find answers to the proposed research questions, (2) to help the researcher ensure the efficient use of resources, (3) to guide the researcher to appropriate data collection methods and (4) to select a suitable technique for data analysis.

Bryman and Bell (2007) argue that a research design provides a framework for the collection and analysis of data, stating that the design reflects decisions about the priority being given to a range of dimensions of the research process. On the other hand, they consider research methods to be the techniques for collecting data, which can involve specific instruments such as self-completed questionnaires or structured interviews. De Vaus (2001) stated: “the function of a research design is to ensure that the evidence obtained enables us to answer the initial question as unambiguously as possible”. Sekaran (2003) argued that research design involves a series of rational decision-making choices regarding the purpose of the study (exploratory, descriptive, hypothesis testing), its location (i.e., the study setting), the type of investigation, the extent of researcher interference, the time horizon, and the level at which the data will be analyzed (unit of analysis). In addition, decisions have to be made regarding the sampling design, how data are to be collected (data collection methods), and how variables will be measured and analyzed to test the hypotheses (data analysis). According to Sekaran (2003), the methods are part of the design; thus, she agrees with Bryman and Bell (2007) that methods are meant to describe data collection. Correspondingly, based on Sekaran’s definition of research design, this study is conducted for the purpose of testing the hypotheses derived from the conceptual framework presented. It is believed that studies employing a hypothesis-testing purpose


usually tend to explain the nature of certain relationships, or to establish differences among groups or the independence of two or more factors in a situation. Hypothesis testing offers an enhanced understanding of the relationships that exist among variables.

The research design for the current study is a non-experimental quantitative research design, since the data are collected using questionnaires. This method was chosen because the objective of the study is to examine the determining factors of audit software application and its impact on audit performance. In most research relating to individual perceptions and attitudes, the survey is the most popular method used. Specifically, studies related to the perceptions of auditors or accountants are inclined to use surveys. Since this study examines auditors’ perceptions of the application of audit software in practice, the most suitable way to collect information is a survey. Many researchers have utilized surveys in examining the perceptions of auditors and accountants (for example, Abdolmohammadi, 1991; Bedard et al., 2003; Ismail & Zainol Abidin, 2009). Thus, this study adopted a method similar to previous studies in examining the perceptions of auditors toward the application of audit software in practice.

The present study adopted a quantitative survey questionnaire as the research method. The use of a survey questionnaire is motivated by an argument in Beattie and Fearnley (1998, p. 264) that “the questionnaire approach provides richer insights than is possible using secondary data analysis, which focuses on economic factors, because the questionnaire instrument includes both economic and behavioural factors.” They also point out that a behavioural or qualitative technique is important for clarifying theories in accounting research because it can provide new insights into buyers’ behaviour through the ‘relationships approach’ to professional services developed in the service marketing literature, which classifies relationships (in the present case, auditor–client relationships) based on buyer type (al-Ajmi, 2009).

A survey questionnaire is an efficient data collection mechanism when researchers know exactly what is required and how to measure the variables of interest (Sekaran, 2003). In addition, the survey is the most common method used to generate primary data, as it provides a quick, efficient and accurate means of assessing information about a population (Cooper &


Schindler, 2003). The survey method requires that the important variables be known first. A comprehensive review of the literature indicated that there were many studies on technology adoption and auditing that could be used to identify important variables. As this study adopts the UTAUT model as its research framework, the important variables are known in advance. Most of the variables adopted from the model have been tested in previous studies and, notably, the majority of those studies used a quantitative survey for their data collection. Table xx shows the studies that have adopted the UTAUT model and used a survey approach.

In addition, the time dimension is viewed as an important part of the research design because the time sequence of events and situations is critical to determining causation, and it also affects the generalization of the research findings (Babbie, 2007). There are two primary options available: cross-sectional and longitudinal research. A cross-sectional study focuses on examining a phenomenon at a single point in time, whereas a longitudinal study involves examining and collecting data about a phenomenon at different points in time. The present research was a cross-sectional study; that is, the perception of the auditors about the use of audit software in practice and the performance gain was determined at one point in time. The present research comprises two studies, described in the following sections.

4.3 Study One: Determinants of user intention to use Audit Command Language (ACL) and its impact on audit performance

4.3.1 The participants

The unit of analysis of this study is the individual. This is suitable for a study that focuses on an individual’s behavioral intention to use audit software in audit practice. This study focuses on understanding the determinant factors that lead to the intention to adopt and use Audit Command Language (ACL) in audit practice. The study also aims to understand the perceived impact of the use of ACL on audit performance. Participants of this study were undergraduate students from the Diploma in Accounting Information System of Universiti Teknologi MARA (UiTM), Malaysia. The students selected for this study were those undertaking ACL as one of the compulsory subjects of the course. Only three campuses offer this program, namely Melaka, Terengganu and Perlis. The questionnaire was distributed to 125 students: 71 at UiTM Melaka, 32 at UiTM Terengganu and 12 at UiTM Perlis. No responses were received from UiTM Perlis; hence, no data from UiTM Perlis are included in this analysis, leaving a total of 103 responses.

The questionnaires were distributed to the students right after they submitted their lab test answer papers. The time taken for each student to complete and submit the lab test was noted before the questionnaire was given. The mark given to each student was entered as the performance score


in the data sheet. The performance score was later renamed specific knowledge and tested as one of the moderator variables.

4.3.2 Data collection method

4.3.3 The questionnaire and variables development

A structured questionnaire was developed from existing instruments to enhance the validity

and reliability of the measures. The reliability and validity of survey results depend on the

way that every aspect of the survey is planned and executed, and the questions addressed to

the respondents are the most essential component (Alreck & Settle, 1995). The questionnaire

sections include

4.3.4 Pre-test

A pre-test of the proposed measurement items was conducted prior to the main data collection to further refine the measurement items and data collection procedures. The purpose of a pre-test is to check whether the measurement items are clear to respondents and whether they reflect the conceptual definitions of the constructs they intend to measure. The pre-testing focused on instrument clarity, question wording and validity. The pre-test was conducted with three academicians: two accounting lecturers, one each from UiTM Kampus Dungun and UiTM Melaka, and one language lecturer from UiTM Kampus Dungun, who were asked to review the items for clarity and face validity. The following slight changes were made in response to the comments received from these lecturers:

1) The instruction for scale selection (a scale of 1 to 7 to indicate level of agreement) is now printed on every new section of the questionnaire.

2) Question 5 in Part C – CGPA last semester – was changed from an open question to an interval scale.

3) Several items were removed from the instrument based on the feedback from the

pre-testing subjects.

4.3.5 Validity and reliability

Content validity, construct validity, and reliability are the three essential evaluation criteria for instrument development. Validity is the degree to which a measure accurately represents what it is supposed to measure (Hair, Black, Babin, & Anderson, 2010). In general, validity is concerned with how well the concept is represented by the measures, while reliability relates to the consistency of the measures. The content validity of a measuring construct is the extent to which it provides adequate coverage of the investigative questions guiding the study (Paino,


2010). Similarly, Gefen (2002) stated that content validity is a qualitative assessment of whether measures of a construct capture the real nature of the construct. It is usually established through the literature and pre-test activity. In this study, content validity refers to the degree to which the survey items and scores are representative of all possible items related to the constructs of performance expectancy, effort expectancy, social influence, facilitating conditions, specific knowledge and behavioral intention to use Audit Command Language (ACL).

Construct validity is the degree to which both the independent and dependent variables accurately reflect or measure the constructs of interest (Nazif, 2011). It attempts to identify the underlying constructs being measured and to determine how well the test represents them. It can be evaluated through judgmental correlation of the proposed test with established measures, convergent-discriminant techniques, factor analysis, and multitrait-multimethod analysis (Paino, 2010). In this study, the factor analysis procedure of SPSS 18.0 was used to determine the constructs. Although there is a variety of combinations of extraction and rotation techniques, Tabachnick and Fidell (2007) argued that the results of extraction are similar regardless of which method is used.

Once validity is assured, the next step is to ensure the reliability of the measurements. Reliability is the degree to which the observed variable measures the “true” value and is “error free”; thus, it is the opposite of measurement error (Hair, Black, Babin, & Anderson, 2010). Coakes (2005) defined reliability as the degree of consistency between two measures of the same thing. Reliability concerns the extent to which measurements are repeatable (Nunnally & Durham, 1975), or have a relatively high component of true score and a relatively low component of random error (Carmines & Zeller, 1979). It is also defined as the degree to which a test yields the same scores on several occasions (Greenberg & Baron, 2006), or the degree of consistency between multiple measurements of a variable (Hair, Black, Babin, Anderson, & Tatham, 2006). According to Fornell and Larcker (1981), the reliability of a multi-item measure is estimated by Cronbach’s alpha or Composite Reliability (CR). Most researchers suggest that the acceptable level of Cronbach’s alpha is at least 0.7 (Nunnally, 1978; Pallant, 2007).
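As an illustration of how Cronbach’s alpha summarises internal consistency, the following sketch computes it for a small set of hypothetical 7-point Likert item scores; the item data are invented for demonstration and are not drawn from this study.

```python
def cronbach_alpha(items):
    """Cronbach's alpha for a list of item-score lists.

    Each inner list holds one item's scores across the same respondents.
    alpha = k/(k-1) * (1 - sum(item variances) / variance of total scores)
    """
    k = len(items)          # number of items
    n = len(items[0])       # number of respondents

    def var(xs):  # sample variance
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

    sum_item_vars = sum(var(item) for item in items)
    totals = [sum(item[j] for item in items) for j in range(n)]
    return (k / (k - 1)) * (1 - sum_item_vars / var(totals))


# Hypothetical 7-point Likert responses for three effort-expectancy items
ee_items = [
    [7, 6, 5, 6, 7, 4],
    [7, 5, 5, 6, 6, 4],
    [6, 6, 5, 7, 7, 5],
]
alpha = cronbach_alpha(ee_items)
print(round(alpha, 3))  # values of at least 0.7 are conventionally acceptable
```

Statistical packages such as SPSS report the same coefficient through their reliability analysis procedures; the hand computation above only makes the formula explicit.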


4.3.6 Operationalisation of Variables

This sub-section discusses how the variables of interest in the study were defined and operationalized. This study adapted the measures used to operationalize the constructs in the investigated model from relevant previous studies, making minor wording changes to tailor these measures to the context of behavioral intention to adopt ACL in audit practice. To ensure the content validity of the scales, the selected items must represent the concept about which generalisations are to be made (Wang, Wu, & Wang, 2009). Therefore, the items used to measure the impact of ACL on audit performance were adapted from Braun and Davis (2003). The items used to measure performance expectancy, effort expectancy, social influence and facilitating conditions were adapted from Venkatesh et al. (2003) and AlAwadhi and Morris (2008). The items for the behavioral intention construct were also adapted from Venkatesh et al. (2003). Finally, the items used for the demographic profile were adapted from a combination of relevant previous studies.

4.3.6.1 Perceived impact on audit performance

4.3.6.2 Performance expectancy

This independent variable measured the expectation of users of audit software with regard to the software’s ability to enhance their work performance. In this study, performance expectancy was operationalized as the degree of a student’s performance expectancy for the use of ACL. Six items tested performance expectancy, as follows:

PE1: Using ACL in my job would enable me to accomplish tasks more quickly

PE2: Using ACL in my job would increase my productivity

PE3: Using ACL would enhance my effectiveness on the job

PE4: Using ACL would make it easier to do my job

PE5: I would find ACL useful in my job

PE6: If I use ACL, I will spend less time on routine job tasks

4.3.6.3 Effort expectancy

This is another independent variable, which measured the degree of ease in learning and using the audit software. In this study, effort expectancy was operationalized as the effort expectancy of a student in using Audit Command Language (ACL). Six items tested effort expectancy, as follows:

EE1: Learning to operate ACL would be easy for me

EE2: My interaction with ACL would be clear and understandable

EE3: I would find ACL to be flexible to interact with

EE4: It would be easy for me to become skillful at using ACL

EE5: I would find ACL easy to use

EE6: Overall, I believe that ACL is easy to use

4.3.6.4 Social influence

This independent variable measured the effects that significant others have on influencing other people’s behaviors. In this study, social influence was operationalized with regard to its effect on the use of ACL. Five items tested social influence, as follows:

SI1: I would use ACL if people who are important to me think that I should use ACL

SI2: I would use ACL if the senior management and staff of the organisation I work with have been helpful in the use of ACL

SI3: I would only use ACL if I needed to

SI4: I would use ACL if my friends used them.

SI5: I would use ACL if the organisation I work with supports the use of ACL

4.3.6.5 Facilitating conditions

Another independent variable is facilitating conditions, which gauged the availability of the necessary resources to support the use of audit software. Hence, it was operationalized as the availability of facilitating conditions for the use of Audit Command Language (ACL). Four items tested facilitating conditions, as follows:

FC1: I would use ACL if I have the resources necessary to use it.

FC2: Given the resources, opportunities and knowledge it takes to use ACL, it would be easy for me to use ACL

FC3: I have enough tutorial experience to use ACL

FC4: I would use ACL if a specific person (or group) is available for assistance with system difficulties


4.3.6.6 Behavioral intention to use ACL

Most user acceptance theories assert that behavioral intention is the trigger variable that leads to actual use of audit software. Behavioral intention is operationalized here as the intention to use ACL after students graduate and join audit practice. In this study, five items tested intention to use Audit Command Language (ACL), as follows:

BI1: I intend to use ACL after graduation if the company requires me to do so

BI2: I predict I would work for a company that uses ACL after graduation

BI3: I predict I would use ACL after graduation

BI4: Assuming I had access to the ACL, I intend to use it

BI5: Given that I had access to ACL, I predict that I would use it

4.3.7 Control Variables

There are individual characteristics that have been selected to be controlled in both experiments, namely gender and …. Past studies indicate that these two variables serve as good indicators and are significantly related to behavioral intention to adopt audit software. Numerous studies have found that in certain circumstances women…

4.3.8 Techniques for Analysing Quantitative Data

4.3.8.1 Factor Analysis

Factor analysis was used to verify the number of dimensions conceptualized. Its primary purpose is to define the underlying structure among the variables in the analysis. The analysis provides the tools for analyzing the structure of the interrelationships (correlations) among a large number of variables by defining sets of variables that are highly correlated, known as

factors (Hair et al., 2006). This study uses principal component analysis as the factor extraction method. According to Hair et al. (2006), principal component analysis is most appropriate when (1) data reduction is the primary concern, focusing on the minimum number of factors needed to account for the maximum portion of the total variance represented in the original sets of variables, and (2) prior knowledge suggests that specific and error variance represent a relatively small portion of the total variance.


Before performing factor analysis, there are two main issues to consider in determining whether the data are suitable: sample size, and the strength of the relationships between the measured variables (i.e. Spearman's rho). Regarding sample size, the sample should generally comprise more than 50 observations, and preferably 100 or more (Hair et al., 2006). Hair et al. (2006) also suggested, as a general rule, a minimum of at least five times as many observations as the number of variables to be analyzed; a more acceptable sample size would have a 10:1 ratio. This study has 5 variables to be examined, so the 103 respondents obtained meet the sample requirement to perform factor analysis.
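The rules of thumb above are easy to encode. The helper below is a hypothetical sketch (not part of the study's procedure) applied to this study's figures of 103 respondents and 5 variables.

```python
def factor_analysis_sample_ok(n_observations, n_variables):
    """Check Hair et al.'s rules of thumb: at least 5 observations per
    variable (minimum), 10 per variable (preferred), and n >= 100 overall."""
    ratio = n_observations / n_variables
    return {
        "ratio": ratio,
        "meets_minimum_5_to_1": ratio >= 5,
        "meets_preferred_10_to_1": ratio >= 10,
        "n_at_least_100": n_observations >= 100,
    }

# This study: 103 respondents, 5 variables -> ratio of 20.6 observations per variable.
check = factor_analysis_sample_ok(103, 5)
```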

Another issue to consider is the strength of the relationships between the measured variables; in other words, the variables must have sufficient correlations. There are two statistical methods that are

4.3.8.2 Analysis of Variance (ANOVA)

A review of recent literature in the area of technology adoption that used the UTAUT model shows that a variety of data analysis techniques have been used. Wang and Yang's (2005) study of the role of personality traits in the context of online stock trading used multiple regression and hierarchical regressions to test the UTAUT with added individual personality traits. Dulle and Minishi-Majanja (2011) used descriptive and binary logistic regression statistics in SPSS in an attempt to exhibit the suitability of the UTAUT model in studying factors contributing to the acceptance and usage of open access.

Other studies have used PLS analysis. Gahtani, Hubona and Wang (2007) used PLS-Graph to determine the relative power of a modified version of the UTAUT in determining ‘intention to use’ and ‘usage behavior’. Zhou, Lu and Wang (2010) used a two-step approach to test an integrated model of Task-Technology Fit (TTF) and the UTAUT that explains mobile banking user adoption: first, they analyzed the measurement model to test reliability and validity, and then used the structural model to test their research hypotheses. Anderson, Schwager and Kerns (2006) used PLS analysis in an examination of the drivers of acceptance of tablet PCs by faculty.

In this study, descriptive statistics were used on all the independent and dependent variables. This was accomplished by calculating the mean, median, minimum, maximum and standard deviation for each of the items in the questionnaires using SPSS. This allows one to describe the distribution of each of the variables in order to determine whether they are normally distributed. If the distribution is normal, standard statistical procedures can be used.


Otherwise, if the data are found not to be normally distributed, transformation may be considered necessary.
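The per-item descriptive statistics described above (mean, median, minimum, maximum and standard deviation) can be sketched with Python's standard library; the responses shown are hypothetical 7-point Likert answers, not the study's data.

```python
from statistics import mean, median, stdev

def describe(item_scores):
    """Descriptive statistics reported for each questionnaire item."""
    return {
        "mean": mean(item_scores),
        "median": median(item_scores),
        "min": min(item_scores),
        "max": max(item_scores),
        "std_dev": stdev(item_scores),  # sample standard deviation
    }

# Hypothetical 7-point Likert responses for one item.
responses = [4, 5, 6, 5, 7, 3, 5, 6, 4, 5]
summary = describe(responses)
```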

Regression analysis was conducted for the independent and dependent variables in the model after the descriptive statistics were performed. This was done so that all the variables in the analysis are examined simultaneously with the dependent variable. An advantage of the multiple regression model is that it can determine the individual effect each of the independent variables has on the dependent variable while accounting for the other variables in the model. In other words, multiple regression provides the ability to assess the contribution of each of the independent variables to the overall model when explaining the variation in the rate of diffusion.
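The simultaneous estimation that multiple regression performs can be sketched with ordinary least squares: each coefficient is estimated while the other predictors are held fixed. Illustrative, noise-free data with known effects are used here so the recovered coefficients are exact; this is not the study's model or data.

```python
import numpy as np

def multiple_regression(X, y):
    """Ordinary least squares: one coefficient per predictor, estimated
    simultaneously so each reflects its effect holding the others fixed."""
    design = np.column_stack([np.ones(len(X)), X])   # prepend an intercept column
    coeffs, *_ = np.linalg.lstsq(design, y, rcond=None)
    return coeffs  # [intercept, b1, b2, ...]

# Illustrative data with known effects: y = 2 + 1.5*x1 - 0.5*x2 (no noise).
rng = np.random.default_rng(1)
X = rng.normal(size=(100, 2))
y = 2.0 + 1.5 * X[:, 0] - 0.5 * X[:, 1]
coeffs = multiple_regression(X, y)
```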

4.4 Study Two: Determinant factors and impact of audit software application on audit performance

4.4.1 The participants

The target population for the present research is audit staff at auditing firms who use audit software in performing their auditing practices. The targeted population is confined by the following specific criteria:

a. First, they are users of any audit software package available in the market. The audit staff of selected audit firms who have used or are currently using audit software in performing auditing tasks were requested to attempt the questionnaire. The present research does not focus on any specific audit software package, as different audit firms use different packages depending on the budget and policy of the firm.
b. The specific audit software users are auditors or audit staff because they are the personnel relevant to the auditing practices investigated in the present research.

This particular method is considered appropriate for the following reasons. Firstly, the main objective of the present research is to determine the level of audit software application amongst auditors and their perception of the usage impact on audit performance. The data for this research come from auditors who are working in different audit firms that have adopted audit software. It is generally known that different audit firms use different types of audit software. The type of audit software is normally classified into standard package, modified standard package, custom-developed package or any other package offered by vendors.

4.4.2 Sample and population

The focus of this study is the impact of the use of audit software on individual audit performance. Therefore, the population to which the findings are generalised is auditors who work with audit firms that are registered members of MIA. The study focuses only on MIA’s members because in Malaysia only those who are members of MIA

4.4.3 The questionnaire and variables development

The questionnaire for this study was developed based upon the literature review, exploratory interviews, and previously tested and validated measurement variables from earlier empirical studies. The survey questionnaire was attuned to take into account the research context, research objectives, conceptual framework and hypothesized relationships between the study variables. The study variables and the multiple-item scales used to measure auditors’ perceptions of the impact of audit software application on their audit performance are described in detail in Table XX.

There were 76 questions in the questionnaire. Of these, 4 related to the firm’s profile, 15 to the application of audit software in practice, 5 to audit performance impact, 8 to computer self-efficacy, 6 to performance expectancy, 4 to effort expectancy, 4 to social influence, 3 to facilitating conditions, 3 to organizational support, 3 to infrastructure support, 3 to technical support, 8 to training (covering internal and external training), 3 to the effect of the client’s technology, and the remaining 7 to demographic information. The cover page of the questionnaire contained the university logo and address; the name and email of the researcher; the title of the research; the purpose of the study and who was supposed to answer the questionnaire; the instructions for completing the questionnaire; and space for respondents to include their contact information should they want a summary of the research results.
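As a quick arithmetic check, the item counts listed above can be tallied to confirm that they sum to the stated 76 questions (the labels below paraphrase the themes in the text):

```python
# Item counts per theme, as listed in the text above.
item_counts = {
    "firm profile": 4,
    "audit software application": 15,
    "audit performance impact": 5,
    "computer self-efficacy": 8,
    "performance expectancy": 6,
    "effort expectancy": 4,
    "social influence": 4,
    "facilitating conditions": 3,
    "organizational support": 3,
    "infrastructure support": 3,
    "technical support": 3,
    "training (internal and external)": 8,
    "client's technology": 3,
    "demographic information": 7,
}
total = sum(item_counts.values())  # should equal 76
```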

The content of the questionnaire was structured into eight sections, each encompassing a different theme. Section A of the questionnaire is on page 2. This section contains questions on the firm and its audit software. Questions regarding the category of the firm, the number of auditors, the type of audit software currently used in the firm, and the number of years audit software has been used in the firm were asked in order to obtain an understanding of the profile of the audit firm. The scales used in this section were a combination of nominal scales and open- and closed-ended questions. Question A1 identified the category of the firm: big-four, non big-four international, or non big-four local; a nominal scale was used to identify the category. Question A2 used a ratio scale to obtain information on the number of auditors in the firm according to position. Question A3 used a nominal scale to identify the type of audit software currently used in the firm. Question A4 was an open-ended question that sought information on how many years the audit software had been used in the firm.

Section B of the questionnaire is on page 3. This section aimed to assess the extent to which audit software is applied in each audit application. The three stages of the audit at which auditors’ application of audit software was assessed were client acceptance and planning, audit testing, and audit completion and report writing. A seven-point Likert scale was used to measure individual audit software application at each stage. Hair, Money, Samouel, and Page (2007) asserted that the more points used, the more precision obtained with regard to the extent of agreement or disagreement with a statement. Five statements were given to measure audit software application at the client acceptance and audit planning stage, 7 statements at the audit testing stage, and 3 statements at the audit completion and report writing stage.

Page 4 of the questionnaire comprises Sections C and D. Section C aimed to assess the auditor’s agreement on the impact of applying audit software in audit practice on his or her individual audit performance. This section comprises 5 questions using a seven-point Likert scale. Question C1 …..

In order to enhance scale validity, a few items were phrased as reversed items. A reversed item is intended to relate to the same construct as its non-reversed counterpart, but in the opposite direction (Weijters, Geuens and Schillewaert, 2008). Reversed items may be used strategically to make respondents attend more carefully to the specific content of individual items (Barnette, 2000). Reversed items are also used to ensure more complete coverage of the underlying content domain, as well as to counter bias due to an acquiescent response style (Weijters et al., 2008).
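Before analysis, reversed items are conventionally recoded so that all items point in the same direction: on a 1-to-7 scale, a response r becomes 8 - r. The thesis does not spell out its recoding step, so the sketch below shows only this standard convention.

```python
def reverse_score(response, scale_max=7):
    """Recode a reversed item so it points the same way as the other items:
    on a 1..7 scale, 1 becomes 7, 2 becomes 6, ..., 7 becomes 1."""
    return (scale_max + 1) - response

# Hypothetical responses to a reversed item, recoded before scale construction.
recoded = [reverse_score(r) for r in [1, 2, 4, 6, 7]]
```

Applying the recoding twice returns the original response, which is a convenient sanity check on the transformation.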


Table xxx
Description of Constructs and Sources of Measurement Instruments

Variable: Individual Audit Performance
Description: The extent to which an individual believes that using audit software will improve his or her performance; also, perceptions of how much using audit software improves the time, quality, productivity and effectiveness of the job.
Source: Three questions adapted from D’Ambra and Rice (2001) and two questions adapted from Venkatesh et al. (2003)

Variable: Application of Audit Software
Description: The extent of audit software use for each audit application, namely the client acceptance and audit planning stage, the audit testing stage, and the audit completion and report writing stage.
Source: Fifteen questions adapted from Janvrin, Bierstaker and Lowe (2008)

Variable: Performance Expectancy
Description: The degree to which the auditor believes that using audit software in audit practice will help him or her to accomplish the various audit assignments and attain gains in job performance.
Source: Six questions adapted from Venkatesh et al. (2003) and Staples and Seddon (2004)

Variable: Effort Expectancy
Description: The degree of ease associated with the use of the audit software.
Source: Four questions adapted from Venkatesh et al. (2003)

Variable: Social Influence
Description: The degree to which an auditor perceives that important others (colleagues, friends and close family members) believe he or she should use the audit software.
Source: Four questions adapted from Venkatesh et al. (2003) and Staples and Seddon (2004)

Variable: Facilitating Conditions
Description: The degree to which an auditor believes that an organizational and technical infrastructure exists to support use of the audit software.
Source: Three questions adapted from Thompson et al. (1991)

Variable: Organizational Support
Description: The extent to which auditors believe that their organization helps and encourages them to use audit software.
Source: Three questions adapted from Lee et al. (2004)

Variable: Client’s Technology

Variable: Infrastructure Support
Description: The adequacy of the deployment of IT infrastructure (such as network, server and database) in an organization to support job performance.
Source: Three questions adapted from Bhattacherjee and Hikmet (2008)

Variable: Technical Support
Description: The availability of specialized personnel to answer questions regarding IT usage, troubleshoot emergent problems during actual usage, and provide instructional and/or hands-on support to users before and during usage of audit software.
Source: Three questions adapted from Bhattacherjee and Hikmet (2008)

Variable: Computer Self-Efficacy
Description: An individual’s perception of his or her ability to use audit software in his or her job.
Source: Ten questions adapted from Compeau and Higgins (1995)

Variable: Experience

Berdie et al. (1986) stated that the number and quality of responses are positively correlated with the format and layout of the questionnaire. Therefore, a booklet-type questionnaire was used. According to Sudman and Bradburn (1982), a booklet-type questionnaire prevents pages from being lost or misplaced, makes it easier for the respondent to turn the pages, looks more professional and is easier to follow, and makes it possible to use a double-page format for questions about multiple events or persons.

4.4.4 Pre-test

The pre-test was conducted with two groups of people: academicians and practitioners. For the first group, two accounting lecturers were asked to review the items for clarity and face validity. One of them had been an audit staff member at a medium-sized audit firm in Malaysia before joining an academic institution; the other had been a senior auditor in the National Audit Department. The following changes were made in response to the comments received from these lecturers:

1. Section A – Number of auditors in your organization
Originally the list comprised partners, managers, supervisors, auditors and audit assistants. As this study focuses on auditors, “audit assistants” was not suitable and was not supposed to be included in the definition of auditor. Thus “audit assistants” was changed to “junior auditors” and, subsequently, “auditors” was changed to “senior auditors”.
2. Section B – At the client acceptance and audit planning stage
Item b of the question asked whether the auditor uses audit software as an “internet search tool”. As there are other functions which are more important and more commonly used by auditors, this item was changed to “setting materiality level”.


3. Section F – Training factors
Two questions were added before the questions on internal and external training:
Q1. What type(s) of audit software training have you received?
Q2. The number of training sessions provided per year for auditors to increase the IT knowledge needed in their job.

For the second group, two auditors from one of the big-four audit firms were asked to answer the questionnaire. One of the auditors was a senior manager and the other was a junior auditor with two years of experience. The purpose was to confirm the terms and items used in the questionnaire, and the adequacy and suitability of the items asked to reflect the real situation, personality and practice of the respondents.

In order to obtain further clarification of the adequacy and suitability of the questionnaire, thirty questionnaires were distributed to students pursuing a Masters in Accountancy programme at UiTM Shah Alam. Before the questionnaires were handed to the students, a precise explanation was given to them about the objective of the questionnaire distribution. They were advised to answer the questionnaire and to give comments on the suitability of the items asked. Most importantly, those who had experience working with an audit firm were strongly encouraged to attempt the questionnaire. Twenty-five students responded and returned the questionnaires. Of that number, three students had experience working with audit firms that use audit software. However, only one of them had worked with an audit firm for more than five years and had himself applied audit software in certain audit assignments. Comments obtained from them were considered accordingly before the final questionnaire was printed.

4.4.5 Reliability and Validity

It is very important that the items used to measure a concept be assessed in terms of their reliability and validity. Reliability is defined as the “extent to which an experiment, test, or any measuring procedure yields the same results on repeated trials” (Carmines & Zeller, 1979, p. 11). It has also been defined as the degree to which a test yields the same scores on different occasions (Greenberg & Baron, 2006), or the degree of consistency between multiple measurements of a variable (Hair, Black, Babin, Anderson, & Tatham, 2006). Reliability concerns the extent to which measurements are repeatable (Nunally & Durham, 1975). In other words, reliability means that there is high internal consistency among items that measure the same construct and that the items are highly correlated (Hair et al., 2006). According to Fornell and Larcker (1981), the reliability of a multi-item measure is estimated by Cronbach’s Alpha or Composite Reliability (CR). Bryman (
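The Cronbach’s alpha estimate mentioned above can be sketched directly from its definition, as the ratio of item variances to the variance of the summed scale. The scores below are simulated for illustration, not the study’s data.

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for a (respondents x items) score matrix."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)        # variance of each item
    total_var = items.sum(axis=1).var(ddof=1)    # variance of the summed scale
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Illustrative scores: three items tapping one construct, plus small noise.
rng = np.random.default_rng(2)
trait = rng.normal(size=200)
scores = np.column_stack([trait + 0.4 * rng.normal(size=200) for _ in range(3)])
alpha = cronbach_alpha(scores)
```

With items this strongly related, alpha comes out well above the usual 0.7 rule of thumb; identical items would yield exactly 1.0.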

Validity is the extent to which a scale or set of measures accurately represents the concept of interest (Hair et al., 2006). In this study, two methods were used to check validity: face (content) validity and construct validity. For content validity, the instrument was pre-tested on audit managers and academicians. The purpose was to examine the degree of correspondence between the items selected to constitute a summated scale and its conceptual definition. Changes were made to the items in the questionnaire after the pre-test. The details of the pre-test procedures were explained in Section 4.4.4 above.

Another method used to determine validity is construct validity. Construct validity is the degree to which both the independent and dependent variables accurately reflect or measure the construct of interest; in other words, the extent to which a set of measured items actually reflects the theoretical latent construct those items are designed to measure (Hair et al., 2006; Hair et al., 2010). Researchers should establish two main types of construct validity, namely convergent validity and discriminant validity (Zheng, 2007). Convergent validity is established when the items that are indicators of a specified construct share a high proportion of variance in common; it refers to the degree of agreement between two or more measures of the same construct. The first step in assessing convergent validity is to conduct a reliability assessment on the items, where all the items constructed in the questionnaire are tested for convergent validity. Construct validity concerns the accuracy of measurement, and it can help provide confidence that item measures taken from a sample represent the actual true scores that exist in the population.

In this study, the data were analysed using Structural Equation Modeling (SEM) with the Analysis of Moment Structures (AMOS) software, which is explained in the data analysis section (Chapter 6). In SEM analysis, validity and reliability testing are conducted through the assessment of the measurement model, which is conducted prior to the evaluation of the structural model. In the present study, the detailed results of the validity and reliability testing are presented in Chapter 6.

4.4.6 Operationalisation of Variables

This section discusses how the variables of interest in the study were defined and operationalized. In general, the items that measure the intended variables used a unipolar rather than a bipolar scaling method1 and used a scale of 1 to 7 in order to allow reasonable choices to respondents. The unipolar scaling method was used because it is argued that it can be easily understood and does not confuse respondents. It is also argued that the method implicitly assumes that respondents use all the scales in the same manner (Ajzen, 2002). Bipolar scaling has positive and negative ends and can be confusing to respondents. It was anticipated that the use of a unipolar scale might encourage participation and hence increase responses.

Based on the theoretical framework depicted in Figure xx, the variables used in this research

are audit software adoption, individual performance, performance expectancy, effort

expectancy, social influence, facilitating conditions, experience and computer self-efficacy.

Efforts were made as much as possible to use the previously tested variables and

measurements. However, new or customized variables were added to the adopted theory

whenever required to fit the context of the research. These variables have been discussed in

general in Chapter 2. They are further discussed in this section in the context of their

operationalization and measurement.

4.4.6.1 Audit software application measurement

This variable measures actual usage of Internet banking facilities. Q13 and Q14 of part four measure Internet banking usage in terms of years of adoption and weekly usage pattern. In addition, Q15 measures typical banking services carried out on the Internet channel using three patterns of frequency (rarely, occasionally, constantly).

Several information systems studies used extent of usage to represent the IT usage theoretical construct (Straub et al., 1995,

4.4.6.2 Individual audit performance measurement

4.4.6.3 Computer self-efficacy measurement

1 The unipolar scaling method has only one end or extreme. A unipolar scale prompts a respondent to think of the presence or absence of a quality or attribute; for example, a scale of 1 = strongly disagree to 7 = strongly agree. Where a unipolar scale has that one “pole”, a bipolar scale has two polar opposites. A bipolar scale prompts a respondent to balance two opposite attributes in mind, determining the relative proportion of these opposite attributes. Statisticians often map these answers to a scale with 0 in the middle: -3, -2, -1, 0, 1, 2, 3.


4.4.6.4 Performance expectancy measurement

This variable measures the degree to which an individual believes that using Internet banking will help him/her attain gains in performing banking tasks through the Internet channel. Statements 1-4 of part three measure this variable using a five-point Likert scale ranging from (1) “strongly disagree” to (5) “strongly agree”.

4.4.6.5 Effort expectancy measurement

This variable measures the degree of ease associated with the use of Internet banking. Statements 5-8 of part three measure this variable using a five-point Likert scale ranging from (1) “strongly disagree” to (5) “strongly agree”.

4.4.6.6 Social influence measurement

This variable measures the degree to which an individual perceives that important others believe he/she should use Internet banking, and also measures bank staff support in usage of the Internet channel. Statements 9-12 of part three measure this variable using a five-point Likert scale ranging from (1) “strongly disagree” to (5) “strongly agree”.

4.4.6.7 Facilitating conditions measurement

This variable measures the technical characteristics of the website such as security, ease of navigation, search facilities, site availability, valid links, personalisation or customisation, interactivity, and ease of access. Statements 13-20 of part three measure this variable using a five-point Likert scale ranging from (1) “strongly disagree” to (5) “strongly agree”.

4.4.6.8 Organizational support measurement

4.4.6.9 Infrastructure support measurement

4.4.6.10 Technical support measurement


4.4.7 Control Variables

4.4.8 Preliminary Data Analysis

In order to analyse the quantitative data gathered from the questionnaires, the Statistical Package for the Social Sciences (SPSS) version 19 was used. This software has been widely used and accepted by researchers for data analysis (Pallant, 2007). It was used to screen the data of this thesis in terms of coding, missing data (i.e., using t-tests), outliers (i.e., using box-and-whisker and normal probability plots), and normality (i.e., using skewness and kurtosis). Each of these methods is further defined and described in Sections 5.2 (Study One) and 6.2 (Study Two). SPSS was also employed to conduct preliminary data analysis including frequencies, means, and standard deviations. These analyses were conducted for each of the variables to gain preliminary information about the sample. In short, SPSS version 19.0 was used for the following analyses:

1. Frequency analysis of respondents’ demographic profiles.

2. Descriptive statistics on the maximum, mean, minimum, standard deviation, skewness and standard scores of all variables employed. Skewness and kurtosis are used to determine the existence of data outliers.

3. Pearson correlations to examine the existence of multicollinearity among variables. In addition, consideration was given to items that have a high correlation with all or most of the other items (0.90 or above).
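The screening steps listed above — skewness and kurtosis as quick normality checks, and flagging item pairs correlated at 0.90 or above — can be sketched as follows. The data are simulated for illustration and include one deliberately near-duplicate variable so the correlation flag fires.

```python
import numpy as np

def skewness(x):
    """Sample skewness (simple moment form), 0 for a symmetric distribution."""
    z = (x - x.mean()) / x.std()
    return (z ** 3).mean()

def excess_kurtosis(x):
    """Excess kurtosis: 0 for a normal distribution."""
    z = (x - x.mean()) / x.std()
    return (z ** 4).mean() - 3.0

def high_correlations(data, threshold=0.90):
    """Flag variable pairs whose absolute Pearson correlation meets the threshold."""
    corr = np.corrcoef(data, rowvar=False)
    k = corr.shape[0]
    return [(i, j, corr[i, j])
            for i in range(k) for j in range(i + 1, k)
            if abs(corr[i, j]) >= threshold]

rng = np.random.default_rng(3)
base = rng.normal(size=500)
data = np.column_stack([base,                                # variable 0
                        base + 0.05 * rng.normal(size=500),  # near-duplicate of 0
                        rng.normal(size=500)])               # unrelated variable
flags = high_correlations(data)  # only the (0, 1) pair should be flagged
```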

4.4.9 Techniques for Data Analysis

In this study, Structural Equation Modeling was used to analyse the data in order to obtain an understanding of the impact of audit software application on audit performance while at the same time examining the factors influencing auditors to apply the software in practice. Structural Equation Modeling, popularly known as SEM, is a second-generation statistical method widely used by researchers to analyse the inter-relationships among variables in a model (Awang, 2012). The term SEM does not designate a single statistical technique but instead refers to a family of related procedures. SEM is a statistical methodology that takes a confirmatory (i.e., hypothesis-testing) approach to the analysis of a structural theory bearing on some phenomenon (Byrne, 2010). Other terms used in the literature are covariance structure analysis, covariance structure modeling, and analysis of covariance structures, which classify these techniques together under a single label (Kline, 2011).


SEM is also known as causal modelling (Marcoulides & Heck, 1993), as it represents the “causal” processes that generate observations on multiple variables (Bentler, 1988).

Several computer software packages are available in the market for analysing data using SEM. Among the popular packages are LISREL (Linear Structural Relations), developed by Karl Joreskog and Dag Sorbom (Schumacker & Lomax, 2004); EQS, developed by Peter M. Bentler (Schumacker & Lomax, 2004); SAS (Statistical Analysis System) (Shaw & Shiu, 2003); PLS (Partial Least Squares), developed by Herman Wold (Vinzi, Chin, Henseler, & Wang, 2010); and AMOS (Analysis of Moment Structures), developed by James Arbuckle (Schumacker & Lomax, 2004). These leading programs permit some combination of matrix algebra, equation, and/or graphical implementation in presenting SEMs. It has been suggested that including matrix conventions in the skill set allows users to achieve deeper insight and avoid certain model misspecification errors (Bagozzi & Yi, 2011).

AMOS (Analysis of Moment Structures) is one of the newest software packages available in the market, and it enables researchers to model and analyse the inter-relationships among constructs with multiple indicators effectively, accurately and efficiently. More importantly, the multiple equations of correlational and causal relationships in a model are computed simultaneously. Thus, AMOS is considered a powerful SEM package that enables researchers to support their theories by extending standard multivariate analysis methods, including regression, factor analysis, correlation and analysis of variance. Since this study is theory driven (as explained in the previous chapter), examining the relationships of dependent variables to independent variables using the UTAUT theory, the use of SEM with the AMOS software is justified.

4.4.9.1 Justification for the use of SEM

SEM is a powerful tool in that it has the ability to assess the unidimensionality, reliability and validity of each individual construct (Hair et al., 2010; Kline, 2011). It is a combination of factor analysis and regression analysis and is able to assess a series of relationships (Hair et al., 2006); that is, it can identify significant relationships among the constructs. It is also able to assess the relative importance of each variable included in the theory (Marcoulides & Heck, 1993). Further, SEM is able to assess observed variables, or indicators, as well as unobserved, or latent, variables. Since the present study contains both observed and unobserved variables, and the conceptual model of the study involves multiple relationships among variables, it is appropriate to use SEM.

An unobserved variable, also known as a latent variable, can be specified, estimated and assessed by a set of indicators or items (Hair et al., 2006). Latent variables can be exogenous or endogenous, which are equivalent to independent and dependent variables, respectively. Thus, latent variables may not be measured accurately because there is a possibility that significant indicators are excluded; nevertheless, this can be overcome by including all known significant indicators. Indicators are observed variables, also known as manifest variables. Latent variables are considered causes of the indicators (Burnkrant & Page Jr., 1988), while the indicators are the effects. The hypotheses are tested on the latent variables or constructs rather than on the indicators (Burnkrant & Page Jr., 1988). The structural model is evaluated based on the significance of the paths and on the explained variance of the endogenous variables, which is evaluated by examining R2 (Fornell & Larcker, 1981).
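The R2 referred to above is the proportion of an endogenous variable’s variance explained by the model, computed from the residual and total sums of squares. A minimal sketch, using toy numbers rather than the study’s estimates:

```python
import numpy as np

def r_squared(y_actual, y_predicted):
    """Proportion of variance in an endogenous variable explained by the model:
    R^2 = 1 - SS_residual / SS_total."""
    y_actual = np.asarray(y_actual, dtype=float)
    residual_ss = ((y_actual - y_predicted) ** 2).sum()
    total_ss = ((y_actual - y_actual.mean()) ** 2).sum()
    return 1.0 - residual_ss / total_ss

y = np.array([2.0, 4.0, 6.0, 8.0])
perfect = r_squared(y, y)                      # exact predictions explain everything
baseline = r_squared(y, np.full(4, y.mean()))  # predicting the mean explains nothing
```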

The present study has xx latent variabless which comprise one endogeneous latent variable,

that is audit performance; six exogeneous latent variables comprising performance

expectancy, effort expectancy, social influence, facilitating condition, organizational support

and computer self-efficacy; and one exogeneous-endogeneous latent variable, that is,

application of audit software. The theoretical model of this study also consists of two

moderator latent variables, that are, client technology and experience.

SEM is described as a statistical methodology that takes a confirmatory (i.e., hypothesis-testing) approach to analysing the proposed theoretical framework examined in the study (Byrne, 2010). There are two important aspects of SEM: (i) the causal processes are represented by a series of structural equations in the form of regression equations, and (ii) the structural relationships are modelled pictorially for a clearer conceptualization of the hypotheses being investigated. SEM can simultaneously test the extent to which the entire system of variables conceptualised as structural equations is consistent with the data collected from the field. If the data collected from the field adequately explain the conceptualised model under SEM, it follows that the model adequately explains the structural relationships between the constructs, and the structural adequacy (that is, goodness-of-fit) may be measured by a series of indicators.


SEM also considers measurement errors. Measurement error is error associated with observed variables, and it reflects the adequacy of those variables in measuring the factors being predicted. It is explicitly considered by being modelled in both the measurement model and the structural model. Measurement error derives from two sources: random measurement error (in the psychometric sense) and error uniqueness, a term used to describe error variance arising from some characteristic that is considered specific (or unique) to a particular indicator variable. Such error often represents non-random (or systematic) measurement error (Byrne, 2010). In contrast, regression analysis assumes no measurement error.

4.5 Structural Equation Modeling (SEM)

Structural Equation Modelling (SEM) is the main statistical technique used

in the current study to analyse the dataset and to test the hypotheses.

Despite SEM being a relatively new technique, its adoption as a research

tool has gained increasingly wider acceptance, especially for testing the

relationships in a theoretical model (Mayer & Leone, 1999; Hair et al.,

2006). As noted by Hair et al. (2006), SEM is the only technique that allows

the simultaneous estimation of multiple equations. These equations show

the direction and interrelations of multiple constructs in the model, making

SEM equivalent to performing factor analysis and regression in a single

step.

SEM may be used as a more powerful alternative to multiple regression,

path analysis, factor analysis, time series analysis and analysis of

covariance. It combines an econometric focus on prediction with a

psychometric perspective on measurement, using multiple observed

variables as indicators of latent or unobserved concepts. Because the

current study involved testing complex interactions among multiple

independent, dependent, and moderating variables (performance

expectancy, effort expectancy, social influence, facilitating condition,

organizational support, infrastructure support, technical support, the

application of audit software, performance impact, and training and


experience as moderators), SEM was the best option compared to other

techniques.

As mentioned above, SEM has become a popular multivariate approach in a relatively short period of time. Researchers are attracted to SEM because it provides a conceptually appealing way to test theory (Hair et al., 2010). They further argued that if a researcher can express a theory in terms of relationships among measured variables and latent constructs, then SEM will assess how well the theory fits reality as represented by the data. Thus, the rule in using SEM is that no model should be developed without some underlying theory. This study adopted the UTAUT model, a theory which has been widely used to support research on technology acceptance. Hence, these arguments justify the adoption of SEM as the statistical tool and data analysis approach.

SEM comprises two components: the measurement model and the structural model. The following sub-sections explain both models and their specifications.

4.5.1 Measurement Model Specification

A measurement model specifies how the latent constructs are measured in terms of the observed variables, followed by the assessment of their dimensionality, goodness-of-fit (GOF) and validity. Each latent construct is usually associated with multiple measures and is linked to its measures through a factor analytic measurement model; that is, each latent construct is modelled as a common factor underlying the associated measures. The measurement model demonstrates the relationship between response items and their underlying latent construct (Awang, 2012). A measurement model is a “sub-model in SEM that (1) specifies the indicators for each construct and (2) assesses the reliability of each construct for estimating the causal relationship” (Gefen, Straub, & Boudreau, 2000, p. 70).

Measurement model assessment can be achieved by three approaches: the exploratory factor analysis approach, the confirmatory factor analysis approach and the hybrid approach (Ahire & Devaraj, 2001). The exploratory factor analysis (EFA) approach is only able to define possible relationships in the most general form before allowing the multivariate technique to reveal relationships. Hair et al. (2010) argued that the confirmatory factor analysis (CFA) approach differs from the EFA approach in that the latter extracts factors based on statistical results, not on theory, and can be conducted without prior knowledge of the number of factors or of which items belong to which construct. With CFA, by contrast, both the number of factors within a set of variables and the factor loading for each item are known to the researcher before results are computed to reveal relationships. Anderson & Gerbing (1988) strongly recommend CFA as a more rigorous statistical procedure to refine and confirm the factor structure, because EFA cannot ensure unidimensionality.

As suggested by Baumgartner & Homburg (1996), CFA is first conducted on every single construct incorporated in the specific measurement model to present evidence of construct dimensionality. Each single-factor model is stabilised by deleting ill-fitting items. Next, CFA is performed on the overall measurement model comprising the purified construct measures derived from the previous step. This procedure is intended to assess the quality of the measurement model by investigating the goodness-of-fit (GOF) and construct validity. All of the assessment measures used for CFA are summarised in Table xxx.

To assess measurement model validity is to assess how well the hypothesised measurement models describe the sample data (Byrne, 2010); in other words, to compare the theory with the reality as represented by the observed data (Hair et al., 2010). The term used for describing this is “model fit”, which is the focal point in SEM (Byrne, 2010). Model fit can be assessed by examining the goodness-of-fit indices and assessing the construct validity. Goodness-of-fit indicates how well the measurement model reproduces the sample data (the covariance matrix), that is, how similar the observed covariance matrix is to the estimated covariance matrix (Hair et al., 2010).

4.5.1.1 Measurement Model Fit

In SEM, there is a series of goodness-of-fit indexes that reflect the fitness of the model to the data at hand. So far, there is no agreement among researchers on which fitness indexes should be reported (Awang, 2012). Hair et al. (2010) and Holmes-Smith, Coote, & Cunningham (2006) recommend the use of at least three fit indexes, including at least one index from each category of model fit. The three fitness categories are absolute fit, incremental fit, and parsimonious fit.

Absolute fit indices


An absolute fit index indicates the extent of the correspondence between the observed covariance matrix and the covariance matrix implied by the fixed and free parameters specified in the model (Hoyle & Panter, 1995). Therefore, it gauges the badness-of-fit (Hoyle & Panter, 1995) or lack-of-fit (Mulaik, Alstine, Bennett, Lind, & Stilwell, 1989), since the greater the absolute fit index, the greater the departure between the implied and observed covariance matrices.

Absolute fit indices are direct measures of how well the proposed model reproduces the observed data or fits the sample data (Hair et al., 2010). The most fundamental absolute fit index is the chi-square (χ²) statistic. The χ² statistic is the only statistically based SEM fit measure and is essentially the same as the χ² statistic used in cross-classification analysis between two nonmetric measures. The crucial distinction is that, when used as a goodness-of-fit measure, the researcher is looking for no differences between matrices (i.e., low χ² values) to support the model as representative of the data (Hair et al., 2010). In using other techniques, researchers normally look for a small p-value (less than .05) to show that a significant relationship exists. With the χ² test in SEM, however, inferences are made in exactly the opposite way: a small (statistically significant) p-value for the χ² test indicates that the two covariance matrices are statistically different, signalling a problem with the fit. Therefore, in this thesis, a relatively small χ² value and a correspondingly large p-value are sought, indicating no statistically significant difference between the two matrices and supporting the idea that the proposed theory fits reality.

The second absolute fit index used within this thesis is the Goodness-of-Fit Index (GFI), proposed by Jöreskog and Sörbom.

Incremental fit indices

Incremental fit or comparative fit indices differ from absolute fit indices in that they assess how well a specified model fits relative to some alternative baseline model (most commonly referred to as the null model), which assumes all observed variables are uncorrelated (Al-Qeisi, 2009). This class of fit indices represents the improvement in fit achieved by the specification of related multi-item constructs.

Parsimonious fit


In conclusion, the overall model fit needs to be assessed with one or more goodness-of-fit measures. Table xxxx provides the description and benchmark for each measure.

Table xxx: Index category and the level of acceptance for every index

Name of category      Name of index   Level of acceptance   Comments
1. Absolute fit       ChiSq           p > 0.05              Sensitive to sample size > 200
                      GFI             GFI > 0.90            GFI = 0.95 is a good fit
                      RMSEA           RMSEA < 0.08          Range 0.05 to 0.1 is acceptable
2. Incremental fit    AGFI            AGFI > 0.90           AGFI = 0.95 is a good fit
                      CFI             CFI > 0.90            CFI = 0.95 is a good fit
                      TLI             TLI > 0.90            TLI = 0.95 is a good fit
                      NFI             NFI > 0.90            NFI = 0.95 is a good fit
3. Parsimonious fit   ChiSq/df        ChiSq/df < 5.0        The value should be below 5.0
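As a simple illustration of how the acceptance levels above are applied, the following sketch flags whether each reported index meets its threshold. The fitted values are illustrative placeholders, not results from this study:

```python
# Acceptance levels follow the fit-index table above; the fitted values
# below are illustrative, not results from this study.
THRESHOLDS = {
    "ChiSq p":  (">", 0.05),
    "GFI":      (">", 0.90),
    "RMSEA":    ("<", 0.08),
    "AGFI":     (">", 0.90),
    "CFI":      (">", 0.90),
    "TLI":      (">", 0.90),
    "NFI":      (">", 0.90),
    "ChiSq/df": ("<", 5.0),
}

def acceptable(index, value):
    # True when the index value satisfies its acceptance level.
    op, cutoff = THRESHOLDS[index]
    return value > cutoff if op == ">" else value < cutoff

fitted = {"GFI": 0.93, "CFI": 0.96, "RMSEA": 0.06, "ChiSq/df": 2.4}
verdict = {name: acceptable(name, value) for name, value in fitted.items()}
```

Reporting one index per category, as recommended above, then amounts to checking one entry of each category in such a table.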


4.5.2 Structural Model Specification

The structural model is the model that demonstrates the correlational or causal dependencies among the measurement models in the study (Awang, 2012). The latent constructs are assembled into the structural model based on the hypothesized interrelationships among them. The structural model analysis can be carried out only when the measurement models have been confirmed and validated. A structural model is a synthesis of path models and measurement models. It represents the theory with a set of structural equations and is usually depicted with a visual diagram. Structural models are referred to by several terms, including a theoretical model or, occasionally, a causal model (Hair et al., 2010). A causal model infers that the relationships meet the conditions necessary for causation. This stage involves assigning relationships among the constructs based on some theoretical model (Hair et al., 2010). The structural relationship between any two constructs is represented empirically by the structural parameter estimate, also known as a path estimate. As in traditional path analysis, the specification of a structural model allows tests of hypotheses about effect priority (Kline, 2011). Unlike path models, though, these effects can involve latent variables, because the structural model also incorporates a multiple-indicator measurement model, just as in CFA.

In the present study, the relationships in the structural model are based on the hypothesized structural model, which needs to be tested using SEM analysis. Similar to the measurement model, the error terms, residuals, and metrics have to be specified. In situations where the hypothesized structural model solution was not admissible, the indicator variance causing the inadmissible solution was fixed at 0.005.

4.6 Summary


CHAPTER 5

RESULTS AND DISCUSSION OF FINDINGS

STUDY ONE: DETERMINANTS OF USER INTENTION TO USE AUDIT COMMAND

LANGUAGE (ACL) AND IMPACT TO AUDIT PERFORMANCE

5.1 Introduction

This chapter presents and discusses the results of the study based on the survey questionnaires and their respective measurements. The first section presents the preliminary analysis covering normality, reliability and factor analysis, followed by additional analysis using SPSS. The subsequent sections present and discuss the profile of the respondents using descriptive analysis. The chapter then continues with the presentation and discussion of the hypothesis testing using ANOVA and hierarchical regression analysis. The chapter ends with a summary of the results of the hypothesis testing.

5.2 Preliminary analysis

Preliminary analysis addressed the normality, reliability and factor analysis of the data and items used in this study. Items were edited, adjusted or removed according to the statistical results.

5.2.1 Normality analysis

Data screening and transformation techniques are useful for ensuring that data have been correctly entered and that the distributions of the variables to be used in the analysis are normal. Table 5.1 summarises the assessment of normality for the variables used in the study. The Kolmogorov-Smirnov statistic with a Lilliefors significance correction was used to test normality; a non-significant result (Sig. value of more than .05) indicates normality. In this study, all items showed significance values of less than .05, suggesting violation of the assumption of normality; however, this is quite common for samples of more than 100 cases. A log transformation was therefore used to normalize the distribution of the data (Pallant, 2007). This involved mathematically modifying the scores using various formulas until the distribution looked more normal.
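The effect of the log transformation on a positively skewed distribution can be sketched in a few lines; the scores below are illustrative, not data from this study:

```python
import math

def skewness(xs):
    # Fisher-Pearson coefficient of skewness: g1 = m3 / m2**1.5,
    # where m2 and m3 are the second and third central moments.
    n = len(xs)
    mean = sum(xs) / n
    m2 = sum((x - mean) ** 2 for x in xs) / n
    m3 = sum((x - mean) ** 3 for x in xs) / n
    return m3 / m2 ** 1.5

raw = [1, 2, 2, 3, 3, 4, 5, 40]       # illustrative right-skewed scores
logged = [math.log(x) for x in raw]   # log transform (requires scores > 0)
# The transformed scores are noticeably less skewed than the raw scores.
```

In practice the transformed variable is then re-checked with a formal test (such as Kolmogorov-Smirnov) rather than by skewness alone.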


Table 5.1 Tests of normality

              Kolmogorov-Smirnov(a)         Shapiro-Wilk
              Statistic   df    Sig.        Statistic   df    Sig.
Bhvr_Int      .148        101   .000        .948        101   .001
Perf_Expct    .098        101   .018        .972        101   .030
Efrt_Expct    .102        101   .012        .967        101   .012
Soc_Inf       .116        101   .002        .975        101   .049
Fac_Cond      .111        101   .004        .972        101   .032
a. Lilliefors Significance Correction

5.2.2 Reliability analysis

Reliability is a measure of the internal consistency of a set of scale items (Pallant, 2007). Internal consistency refers to the degree to which the items that make up the scale “hang together”. There are a number of different reliability coefficients. One of the most commonly used is Cronbach’s Alpha, which is based on the average correlation of the items measuring a variable if the items are standardised; if the items are not standardised, it is based on the average covariance among the items. Cronbach’s Alpha can be interpreted as a correlation coefficient, and its value ranges from 0 to 1. The closer to 1, the more reliable the scale, and a scale is deemed highly reliable if the coefficient is above 0.7 (Pallant, 2007).
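Cronbach’s Alpha as described above can be computed directly from raw item scores using the variance-based formula; the sketch below uses illustrative data, not responses from this study:

```python
def cronbach_alpha(items):
    # items: one list of scores per scale item (all of equal length).
    # alpha = k/(k-1) * (1 - sum of item variances / variance of total score)
    k = len(items)
    n = len(items[0])

    def variance(xs):
        mean = sum(xs) / len(xs)
        return sum((x - mean) ** 2 for x in xs) / len(xs)

    totals = [sum(item[i] for item in items) for i in range(n)]
    item_var = sum(variance(item) for item in items)
    return k / (k - 1) * (1 - item_var / variance(totals))

# Illustrative 3-item scale answered by 5 respondents.
scale = [[4, 3, 5, 2, 4],
         [5, 3, 4, 2, 4],
         [4, 2, 5, 3, 5]]
alpha = cronbach_alpha(scale)
```

An alpha above the 0.7 benchmark cited above would indicate an internally consistent scale.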

Table 5.2 illustrates the reliability of each construct. Cronbach’s Alpha for the items of every variable is above 0.7, indicating highly reliable measures.

Table 5.2: Reliability of the construct variables

Construct Variables Cronbach alpha

coefficient

No. of items

Behavioral Intention 0.924 5

Performance Expectancy 0.924 6

Effort Expectancy 0.932 6

Social Influence 0.835 5

Facilitating Condition 0.789 4


5.2.3 Factor analysis

The general purpose of factor analysis is to condense (summarise) the information contained in a number of original variables into a smaller set of new, composite dimensions (factors) with a minimum loss of information. Factor analysis is keyed to four issues: specifying the unit of analysis; achieving data summarization or data reduction; variable selection; and using factor analysis results with other multivariate techniques. Factor analysis provides insight into the interrelationships among variables and the underlying structure of the data.

According to Kaiser (1974), the value of the Kaiser-Meyer-Olkin Measure of Sampling Adequacy (KMO) should be at least 0.60, and it is inappropriate to implement factor analysis if the value is below 0.50. KMO uses interpretive adjectives to rate the adequacy of samples: a measure of 0.90 and greater is “marvellous”, 0.80 to 0.89 is “meritorious”, 0.70 to 0.79 is “middling”, 0.60 to 0.69 is “mediocre”, 0.50 to 0.59 is “miserable”, and values falling below 0.50 are “unacceptable” (George & Mallery, 2001). The KMO value calculated for this study is considered acceptable and adequate for conducting factor analysis. Table 5.3 presents the KMO and Bartlett’s test for this study.

5.2.4 Descriptive Statistics of Participants

Table 5.4 shows the descriptive statistics of the participants. Of the 103 students who participated, 82.5% were female and 17.5% were male. In terms of academic results, nearly half (43.7%) of the students scored a CGPA of 3.00 to 3.49. About a quarter (25.2%) of them scored a CGPA of 2.50 to 2.99, and another quarter (25.2%) managed to score a CGPA above 3.50. With regard to experience of working with ACL, more than half (59.2%) did not have any practical experience of working with ACL, whilst 40.8% did.

Table 5.4: Descriptive Statistics

Categories Frequency Percentage

(A) Gender

Male 18 17.5

Female 85 82.5

Total 103 100

(B) CGPA

2.01 - 2.49 5 4.9

2.50 - 2.99 26 25.2

3.00 - 3.49 45 43.7


3.50 and above 26 25.2

Missing 1 1

Total 103 100

(C) Experience working with ACL

No 61 59.2

Yes 42 40.8

Total 103 100

5.2.5 Correlation analysis

Prior to the multiple regressions, correlation analysis was executed. Table 5.5 shows the results of the correlation analysis. There were significant relationships between performance expectancy (r = 0.683, p < 0.01), effort expectancy (r = 0.731, p < 0.01) and social influence (r = 0.667, p < 0.01) and behavioral intention. No significant relationship was found between gender or academic score (CGPA) and behavioral intention. However, there were significant relationships between experience (r = 0.209) and specific knowledge (r = 0.198) and behavioral intention at p < 0.05.

Table 5.5 : Pearson’s Correlation for n=103

Gen Exp CGPA SK PE EE SI BI

Gender 1.000

Experience -0.242** 1.000

CGPA 0.320** -0.040 1.000

Specific Knowledge -0.054 0.203** 0.147 1.000

Performance

Expectancy -0.067 0.178 0.033 0.191 1.000

Effort Expectancy -0.148 0.134 0.019 0.186 0.700** 1.000

Social Influence -0.052 0.123 -0.007 0.248* 0.623** 0.602** 1.000

Behavioral Intention -0.047 0.209* 0.060 0.198* 0.683** 0.731** 0.667** 1.000

Notes: *p < 0.05; **p < 0.01
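The Pearson product-moment coefficients reported in Table 5.5 follow the standard formula, sketched here with illustrative score vectors rather than the study data:

```python
def pearson_r(xs, ys):
    # Pearson product-moment correlation coefficient:
    # r = cov(x, y) / (sd(x) * sd(y)), computed from sums of squares.
    n = len(xs)
    mean_x, mean_y = sum(xs) / n, sum(ys) / n
    sxy = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    sxx = sum((x - mean_x) ** 2 for x in xs)
    syy = sum((y - mean_y) ** 2 for y in ys)
    return sxy / (sxx * syy) ** 0.5
```

A value near +1 or -1 indicates a strong linear relationship, matching how the r values in the table are read.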


5.3 Hypotheses Testing

A hierarchical regression analysis was performed to examine the direct effects of performance expectancy, effort expectancy and social influence on behavioral intention after controlling for the influence of gender and academic score (CGPA). Table 5.6 shows the results of the hierarchical regression analyses on behavioral intention. In all equations, the control variables, gender and CGPA, were entered as the first block. Neither control variable significantly influenced the dependent variable. Model 1 examined the direct effect of the determinant variables on behavioral intention. After the entry of the determinant variables and moderators, the total variance explained by the model as a whole was 64.5 percent. All determinant variables showed a significant relationship with the dependent variable. The unstandardized regression coefficients associated with the effects of performance expectancy (B = 0.21, p < 0.1), effort expectancy (B = 0.471, p < 0.05) and social influence (B = 0.393, p < 0.05) were significant. This implies that participants with high performance expectancy and effort expectancy had high intention to adopt ACL. These results are consistent with previous studies (Gupta et al., 2008; AbuShanab and Pearson, 2007). Thus, H1 and H2 were supported.

The interaction effect of specific knowledge was tested in Model 2. The unstandardized regression coefficient (B = -0.014) associated with the interaction between specific knowledge and performance expectancy was significant at the p < 0.05 level. This implies that the intention to adopt ACL increases with the existence of specific knowledge among participants with high performance expectancy. However, the result did not show any significant interaction effect of specific knowledge with effort expectancy. Overall, this interaction term explained an additional 2.3 percent of the variance in behavioral intention over and above the 64.5 percent explained by the direct effects of the three main variables and the moderators. Thus, H3a was supported but H3b was not.
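The R-squared change logic used in this hierarchical procedure (comparing nested models with and without the interaction term) can be sketched as follows. The ordinary-least-squares solver and the score data are illustrative, not the study's SPSS procedure, and the variable names merely mirror the constructs:

```python
def ols_r2(X, y):
    # Fit y = Xb by solving the normal equations (Gaussian elimination
    # with partial pivoting), then return the coefficient of determination.
    n, k = len(X), len(X[0])
    A = [[sum(X[r][i] * X[r][j] for r in range(n)) for j in range(k)]
         + [sum(X[r][i] * y[r] for r in range(n))] for i in range(k)]
    for c in range(k):                        # forward elimination
        p = max(range(c, k), key=lambda r: abs(A[r][c]))
        A[c], A[p] = A[p], A[c]
        for r in range(c + 1, k):
            f = A[r][c] / A[c][c]
            A[r] = [a - f * b for a, b in zip(A[r], A[c])]
    b = [0.0] * k
    for i in reversed(range(k)):              # back substitution
        b[i] = (A[i][k] - sum(A[i][j] * b[j] for j in range(i + 1, k))) / A[i][i]
    fitted = [sum(coef * x for coef, x in zip(b, row)) for row in X]
    mean_y = sum(y) / n
    ss_res = sum((yi - fi) ** 2 for yi, fi in zip(y, fitted))
    ss_tot = sum((yi - mean_y) ** 2 for yi in y)
    return 1 - ss_res / ss_tot

# Made-up scores in which intention (bi) is partly driven by a
# performance-expectancy x knowledge interaction.
pe = [1, 2, 3, 4, 5, 2, 3, 4]
kn = [0, 1, 0, 1, 0, 1, 1, 0]
bi = [1 + 2 * p + 3 * p * k for p, k in zip(pe, kn)]
main_only = [[1, p, k] for p, k in zip(pe, kn)]        # block without interaction
with_int = [[1, p, k, p * k] for p, k in zip(pe, kn)]  # block with interaction
r2_change = ols_r2(with_int, bi) - ols_r2(main_only, bi)
```

A positive R-squared change after entering the product term is the evidence of moderation that the text above reports (the 2.3 percent increment for specific knowledge).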

Model 3 tested the interaction effect of experience. The result shows that experience failed to moderate the relationships of performance expectancy and effort expectancy with behavioral intention. The negative unstandardized beta values of -0.002 (performance expectancy) and -0.222 (effort expectancy) indicate that experience of using ACL reversed the direction of the relationships: the intention to adopt audit technology became lower for participants with higher performance and effort expectancy after the interaction with experience. This result is contrary to what was proposed by the UTAUT model, but consistent with a previous study by AbuShanab and Pearson (2007). Thus, H4a and H4b were not supported.

The significance or non-significance of the direct effect in this model can be used as a basis for a conclusion about pure versus quasi moderation. Performance expectancy has a significant direct relationship with behavioral intention, and the significant interaction effect between performance expectancy and specific knowledge (B = -0.011, p < 0.1) on behavioral intention indicates that specific knowledge is a quasi-moderator. However, there was no moderation effect of specific knowledge on effort expectancy. Similarly, there was no moderation effect of experience on either performance or effort expectancy.

Social influence was found to be a determinant of behavioral intention in many previous studies (Venkatesh et al., 2003; AlAwadhi and Morris, 2008; Wu et al., 2008). Similarly, the results of this study show that social influence has a significant direct relationship with behavioral intention throughout the regression analyses, with or without the interaction effects of the moderators. Thus, H5 was supported.

Table 5.6 : Hierarchical regression analyses on behavioral intention.

Model 1 Model 2 Model 3

Beta t-value Beta t-value Beta t-value

Constant 4.812 7.99** -5.396 -2.600** -1.647 -1.671**

Control variables

Gender -0.196 -0.654 0.196 1.039 0.162 0.838

CGPA 0.100 0.851 0.073 0.907 0.043 0.571

Main variables

PE 0.210 1.788* 1.280 2.671** 0.194 1.248

EE 0.471 4.777** 0.172 0.373 0.467 4.721**

SI 0.393 3.788** 0.407 4.005** 0.482 3.321**

Spec Knw 0.001 0.124 0.050 2.058** 0.001 0.121


Exp 0.239 1.665 0.276 1.951** 1.415 1.346

Interactions

Spec Knw * PE -0.014 -2.293**

Spec Knw * EE 0.004 0.732

Exp * PE -0.002 -0.012

Exp * EE -0.222 -1.076

R2 0.645 0.669 0.652

Adjusted R2 0.619 0.636 0.617

R2 Change 0.636 0.023 0.006

F Change 3.198** 0.816

Note: **p < 0.05; *p < 0.1

5.4 Discussion of Findings

This study seeks to extend our understanding of technology adoption in a different field and context by extending UTAUT to audit practice. It applies the UTAUT model in the context of accounting students, who are likely to adopt this technology after graduating and engaging in audit practice. We theorized the same relationships as those in the original model with respect to the effects of performance expectancy, effort expectancy, and social influence on behavioral intention to adopt audit technology. The empirical test of the amended UTAUT model was able to identify the constructs determining the behavioral intention to adopt audit technology, as well as the effects of the moderators on the relationships between the predictors and the outcome. As theorized, we found that all determinants showed a positive, significant direct relationship with the behavioral intention to adopt audit technology. However, for the interaction effects, specific knowledge interacted only with performance expectancy in moderating behavioral intention. Experience did not moderate the behavioral intention in any interaction.

The influence of performance expectancy on respondents’ behavioral intention was significant, but the coefficient was quite small. This implies that participants with high performance expectancy had a slightly higher intention to adopt Audit Command Language (ACL). Such a result supports the work of Venkatesh et al. (2003) on UTAUT, Davis (1989) on TAM, Venkatesh and Davis (2000) on TAM2, and other replications of those models (Gupta et al., 2008; AlAwadhi and Morris, 2008; Wang and Wang, 2010). With respect to the interaction effects, the results of this study show that specific knowledge moderates the relationship between performance expectancy and behavioral intention. This indicates that respondents with greater specific knowledge have realized the benefit they get from using ACL and can relate this benefit to the intention to adopt audit technology. However, experience failed to moderate the relationship with behavioral intention. This implies that respondents with experience of using ACL have not realized the benefit of that experience for job performance and hence cannot relate it to the intention to adopt audit technology.

Effort expectancy showed a positive direct relationship with behavioral intention. This supports the existing literature, which holds that the intention to use a technology depends on how easy it is to use (Venkatesh et al., 2003; AlAwadhi and Morris, 2008). The statistically significant influence of effort expectancy suggests that respondents opt to adopt ACL when they believe the technology is easy to use and requires minimum effort to learn. With respect to the interaction effects, both specific knowledge and experience failed to moderate the relationship with behavioral intention. This signals that increases in specific knowledge and experience do not affect the level of intention to adopt audit technology.

In conclusion, the results support the hypothesis that the higher the respondents’ performance expectancy, effort expectancy and social influence, the higher their intention to adopt ACL. Furthermore, participants with higher performance expectancy are more likely to adopt ACL when they have specific knowledge about using ACL. Our results show that UTAUT is a relevant model for understanding the behavioral intention to adopt audit technology among accounting students, who are most likely to engage in audit practice after they graduate.

5.5 Summary


CHAPTER 6

RESULTS AND DISCUSSION OF FINDINGS

STUDY TWO: DETERMINANT FACTORS AND IMPACT OF AUDIT SOFTWARE

APPLICATION TO AUDIT PERFORMANCE

6.1 Chapter Overview

This chapter presents a detailed discussion of the data analysis and results from a survey conducted among auditors in Malaysia using structural equation modeling (SEM). The analyses are divided into three sections. The first section presents the data screening process conducted prior to subjecting the data sets to the two-step structural equation modeling procedure comprising the measurement model and the structural model. The second section presents descriptive statistics of the sample, including a discussion of response rates and profiles of respondents. The third section presents the results of the SEM analysis used to test the hypotheses derived from the model, involving the assessment of the measurement model and the evaluation of the structural model. This path analysis technique was used to test the hypotheses proposed in the present research. The results of the hypothesis testing are presented and discussed in the later sections of this chapter.

6.2 Data Screening

Prior to data analysis, the research instrument items were examined, using the SPSS statistical package, for accuracy of data entry, missing values, outliers and normality.

6.2.1

6.3 Descriptive Statistics

6.3.1 Response rate

The response rate refers to “the proportion of subjects in a statistical study who respond to a researcher’s questionnaire” (http://dictionary.bnetcom), which is similar to the definition of response rate by the Market Direction Analytical Group (2001): “the proportion of persons included in the sample who actually complete the questionnaire or interview”. In any study employing a survey questionnaire, a low response rate leading to sample bias is always a concern. This study followed the rule of thumb of Hussey and Hussey (1997) that a minimum response rate of 10% should be achieved to avoid sample bias. In addition, the current study followed the guidelines from the “table for determining returned sample size for a given population size for continuous and categorical data” by Bartlett, Kotrlik, & Higgins (2001), which requires a minimum returned sample size of 102 for a population of 700 and of 104 for a population of 800 (Table 6.1).

Table 6.1: Table for determining minimum returned sample size for a given population size for continuous and categorical data
Source: Bartlett, Kotrlik, and Higgins (2001)

                 Continuous data                        Categorical data
Population       alpha=.10   alpha=.05   alpha=.01     p=.50    p=.50    p=.50
size             t=1.65      t=1.96      t=2.58        t=1.65   t=1.96   t=2.58
100              46          55          68            74       80       87
200              59          75          102           116      132      154
300              65          85          123           143      169      207
400              69          92          137           162      196      250
500              72          96          147           176      218      286
600              73          100         155           187      235      316
700              75          102         161           196      249      341
800              76          104         166           203      260      360
900              77          105         170           209      270      382
1000             79          106         173           213      278      399
1500             83          110         183           230      306      461
2000             83          112         189           239      323      499
4000             83          119         198           254      351      570
6000             83          119         209           259      362      598
8000             83          119         209           262      367      613
10000            83          119         209           264      370      623
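The way the table is used in this study, reading off the continuous-data column at alpha = .05 (t = 1.96), can be sketched as a simple lookup; only the rows needed here are reproduced:

```python
# Minimum returned sample sizes for continuous data at alpha = .05 (t = 1.96),
# taken from the Bartlett, Kotrlik & Higgins (2001) rows in Table 6.1.
MIN_RETURNED_N = {100: 55, 200: 75, 300: 85, 400: 92, 500: 96,
                  600: 100, 700: 102, 800: 104, 900: 105, 1000: 106}

def min_sample(population):
    # Round the population up to the nearest tabulated size (conservative).
    for size in sorted(MIN_RETURNED_N):
        if population <= size:
            return MIN_RETURNED_N[size]
    raise ValueError("population beyond the reproduced rows")

required = min_sample(751)   # a population of 751 falls in the 800 row
```

Rounding the population up to the next tabulated row is what makes 104 the benchmark for this study's population of 751.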

Hussey & Hussey (1997) suggested that a response rate of 22.57 percent denoted the avoidance of sample bias, meaning the sample adequately represented the population. As the sample of this study was 751, based on Table 6.1 above, 104 responses were needed to meet the minimum returned sample size required by Bartlett et al. (2001). As this study adopted SEM for data analysis purposes, the sample size requirements set out by Hoyle (1995) were also considered: a sample size of between 100 and 200 was suggested to gain confidence in the goodness-of-fit test for SEM. Hair, Black, Babin, & Anderson (2010) suggested that the minimum sample size should be based on the model complexity and basic measurement model characteristics; a minimum sample size of 150 is required for models with seven or fewer constructs, modest communalities (.5), and no underidentified constructs.

In this study, 751 questionnaires were distributed to the auditors of 68 identified audit firms throughout Malaysia (Peninsular Malaysia, Sabah and Sarawak). The firms’ addresses were obtained from the MIA directory list 2011. Of the 751 questionnaires distributed, a total of 320 responses were received, of which 38 were excluded because they were completely empty or were answered by those who were not supposed to answer the questionnaire; as this study only targeted auditors who were using audit software in their practice, these responses were excluded. Therefore, a total of 282 questionnaires were considered valid and were used for the empirical analysis. This gives a response rate of 37.54 percent, far above the minimum number of responses required, thus fulfilling the requirements suggested by Bartlett et al. (2001), Hoyle (1995) and Hair et al. (2010).
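The response-rate arithmetic above is simply:

```python
# Figures from the survey administration described above.
distributed, received, excluded = 751, 320, 38
valid = received - excluded                  # questionnaires usable for analysis
response_rate = 100 * valid / distributed    # percent of distributed questionnaires
```

This reproduces the 282 valid responses and a rate of roughly 37.5 percent, comfortably above both the 10% rule of thumb and the minimum returned sample size of 104.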

This response rate was higher than in similar research involving samples of MIA members (e.g., Ismail and Zainol Abidin, 2009 (8.7%); Sarina, 2006 (12.3%); Md Salleh and Ismail, 2002 (9%)). Nevertheless, the rate was about the same as in other research on IT adoption that involved questionnaires directed to …..

6.3.2 Profile of the respondents

The variables of interest requested in the questionnaire were gender, age, level of education, current position, firm category, type of audit software used in the firm, duration the audit software had been used in the firm, and the respondent's experience using audit software. For the firm category, the questionnaire required auditors to indicate whether they were from a Big 4 audit firm, a non-Big 4 international firm, or a non-Big 4 local firm. The Big 4 consists of the four large accounting firms, namely PricewaterhouseCoopers, Ernst & Young, KPMG and Deloitte & Touche. The remaining firms are either international or local.

Table 6.2 presents the profile of the respondents. Of the 282 respondents included in this study, 96 (34.0 percent) were male and 186 (66.0 percent) were female, so the respondents were not evenly distributed in terms of gender. However, as the objective of this study was not focused on gender differences, the uneven gender distribution was not a major concern; it is consistent with the informal observation that female auditors are dominant in most audit firms.

As for the age range, out of 282 respondents, 227 (80.5 percent) were aged 23-29 years, 47 (16.7 percent) were aged 30-39 years, and the rest were above 40. Only one respondent was over 50 years of age; the data revealed that he is a partner of one of the non-Big 4 local audit firms. With regard to the highest education obtained, more than half, or 205 (72.7 percent), were degree holders, 69 respondents (24.5 percent) held a professional qualification, and the rest were master's degree holders. The distribution of the level of education is consistent with the distribution of respondents' current positions in the firm: 100 (35.5 percent) and 135 (47.9 percent) respondents were senior auditors and junior auditors respectively, 30 respondents (10.6 percent) were audit managers and 9 (3.2 percent) were supervisors, while the rest were partners.

Table 6.2: Profile of Respondents

Demographic variables          Categories                  Frequency   Percentage (%)
Gender                         Male                            96          34.0
                               Female                         186          66.0
Age range                      23-29                          227          80.5
                               30-39                           47          16.7
                               40-49                            7           2.5
                               50 and above                     1           0.4
Level of education             Master                           8           2.8
                               Professional                    69          24.5
                               Degree                         205          72.7
Current position               Partner                          8           2.8
                               Manager                         30          10.6
                               Supervisor                       9           3.2
                               Senior auditor                 100          35.5
                               Junior auditor                 135          47.9
Firm category                  Big 4                           79          28.0
                               Non-Big 4 International         48          17.0
                               Non-Big 4 Local                155          55.0
Type of audit software         Standard package                89          31.6
                               Modified standard package       75          26.6
                               Custom developed package       102          36.2
                               Others                          16           5.7
Years AS being used            3 years and below              119          42.2
in the firm                    4 to 6 years                    92          32.6
                               7 to 9 years                    30          10.6
                               9 years and above               41          14.5
Years using AS                 3 years and below              238          84.4
                               4 to 6 years                    33          11.7
                               7 to 9 years                     4           1.5
                               9 years and above                7           2.4


6.4 Measurement Model Assessment and Confirmatory Factor Analysis (CFA)

The purpose of this chapter is to finalise the variables listed in the constructs of the framework so that the model has adequate statistical fit and makes theoretical or substantive sense. The finalised variables, with smaller constructs, will better represent the variables of the study. This is done by following the strategy proposed by Jöreskog (1993), utilising CFA as a first step in the analysis of the data, followed by a full measurement model analysis. CFA is a special form of factor analysis employed to test whether the measures of a construct are consistent with the researcher's understanding of the nature of that construct (Awang, 2012). CFA has been applied by researchers to analyse construct validity, replacing older methods. According to Hair et al. (2006), the CFA approach differs from the exploratory factor analysis (EFA) approach in that the latter extracts factors based on statistical results rather than theory, and can be conducted without prior knowledge of the number of factors or of which items belong to which construct. With CFA, by contrast, both the number of factors within a set of variables and the factor on which each item will load highly are known to the researcher before results are computed. CFA thus enables the researcher to either confirm or reject a preconceived theory. Furthermore, CFA provides an assessment of fit while EFA does not. Ahire and Devaraj (2001) validated the advantages of CFA but also highlighted the merits of EFA in detecting unidimensionality issues and multidimensional sets within construct measurements, compared to CFA, which is only capable of detecting unidimensionality problems without indicating the dimensions.

This study applies the CFA approach to assess the measurement model.

According to Jöreskog's strategy, models are classified as either strictly confirmatory models or model-generating models. Model-generating models are tentative or exploratory models in which changes are made within the model-testing framework of SEM until a model is found that has adequate fit in a statistical sense and also makes theoretical or substantive sense. Firstly, a full model is specified based on current theory and practice. Secondly, before testing this full model, a series of one-factor congeneric models for each construct consisting of four or more indicator items are tested and evaluated separately before being tested in combination with other constructs. Constructs with two or three indicators should be tested in pairs, as a construct with fewer than four items leads to zero degrees of freedom (df) and hence a zero chi-square value, which is meaningless. If the chi-square statistic is unsatisfactory, testing and changes to the models are done one step at a time, provided that the changes make substantive sense. Once constructs have been examined singly, in pairs, or both, a full measurement model comprising all the constructs of interest is then evaluated.

For this study, the 282 valid responses were utilised for the CFA analysis using AMOS (version 18). The CFA was performed using maximum likelihood (ML) estimation, applying the following general rules of thumb for goodness of fit:

1. The standardised factor loading should be .5 or higher, and ideally .7 or higher (Hair et al., 2010). If any items have factor loadings below 0.5, items are deleted one at a time; this procedure is repeated until the factor loadings of all items exceed 0.5 (Byrne, 2010).

2. The p-value of the chi-square (χ²) test statistic should be more than 0.05 to indicate that the model fits well (Kline, 2005). The χ² statistic refers to CMIN (minimum discrepancy), which equals (N − 1)Fmin (sample size minus 1, multiplied by the minimum fit function) and, in large samples, is distributed as a central χ² with degrees of freedom equal to ½p(p + 1) − t, where p is the number of observed variables and t is the number of parameters to be estimated (Byrne, 2010).

3. Other goodness-of-fit indices are utilised to further support the measurement model:

a. One absolute fit index (for example GFI, RMSEA, or SRMR)

b. One incremental fit index (for example CFI or TLI)

c. One goodness-of-fit index (for example GFI, CFI or TLI) and

d. One badness-of-fit index (for example RMSEA, or SRMR)
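As a worked sketch of the degrees-of-freedom formula in point 2 and of the normed chi-square used later in this chapter (a back-of-envelope check, not AMOS output):

```python
def cfa_df(p, t):
    """Degrees of freedom for a covariance structure model:
    df = p(p + 1)/2 - t, where p is the number of observed variables
    and t the number of parameters to be estimated (Byrne, 2010)."""
    return p * (p + 1) // 2 - t

# One-factor congeneric model with five indicators: with the factor
# variance fixed for scaling, there are 5 loadings + 5 error variances
# = 10 free parameters, so df = 15 - 10 = 5, matching Section 6.4.1.1.
df = cfa_df(p=5, t=10)

# Normed chi-square (CMIN/df) for the re-specified performance-impact
# model reported in Section 6.4.1.1: chi-square = 6.422 with df = 2.
cmin_df = 6.422 / 2   # ~3.21, below the required level of 5
```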


6.4.1 Assessing the Unidimensionality

This section covers the specification of the measurement model for each underlying construct, with a discussion of the path diagram. It then describes the use of multi-item scales to measure each factor in the measurement model, followed by a description of the procedures conducted to modify the measurement model until it achieved the required level of fit. This was done either through item deletion or through setting a "free parameter estimate". All processes are described in the following sections.

The constructs in the proposed model were each assessed for unidimensionality using CFA, with each construct examined in a separate measurement model. In each path diagram, single-headed arrows link the factors (also called latent variables) to their items (indicators), and single-headed arrows link the error terms to their respective indicators. There are no single-headed arrows between the factors themselves, because there is no theoretical relationship in which one of these factors causes another; instead, double-headed arrows show the correlations between the factors. The figures also provide the standardised parameter estimates (also called factor loadings) on the arrows connecting factors with their items. The values appearing next to the items are squared multiple correlations, and the values next to the curved double-headed arrows show the correlations between the latent variables (factors).

6.4.1.1 Individual performance impact

The construct capturing the impact of audit software use on the auditor's individual performance was examined with a CFA one-factor congeneric model using a total of five items (IMPACTa, IMPACTb, IMPACTc, IMPACTd and IMPACTe). Figure 6.1 illustrates the factor loading for each observed item. Factor loadings for all items are above 0.9 except for IMPACTa, which had a loading of 0.85. Running the maximum likelihood estimation on the working data revealed a significant chi-square statistic, χ² (df = 5, n = 281) = 23.34, p = 0.000, indicating that the data did not fit the model. As for the other goodness-of-fit criteria, the results show that only two fit indices supported the model; the other fit indices were CFI = 0.939 and RMSEA = 0.282. Thus the required levels of absolute and incremental fit were achieved, but parsimonious fit did not reach the required level.

Figure 6.1 Factor analysis of performance impact items

The items were rigorously scrutinised to determine the causes of model misspecification. To achieve a better fit, a model refinement process was carried out, which included scanning the output and applying the following criteria:

- Standardised regression weight (SRW) values should be above 0.5, and preferably above 0.7 (Byrne, 2010). This determines the factor loading of each item and identifies those items with values less than 0.7, which will be dropped from the list.

- Squared multiple correlations (SMCs) should be above the cut-off value of 0.5 (Byrne, 2010). This determines the reliability of the items; those below 0.5 are also dropped from the list, as they are considered less reliable items.

- Modification indices (MI) that reveal high covariance between measurement errors, accompanied by high regression weights between the corresponding items, are examined. Indices with high values (more than 15) suggest that the paired items might carry the same meaning (Awang, 2012); the item of the pair with the lower factor loading is then dropped.
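The three screening criteria can be sketched as a simple filter over the relevant AMOS output values. In the run below, IMPACTa's loading of 0.85 and the MI of 96.023 come from the text; the other loadings are placeholders for the "above 0.9" values shown in Figure 6.1:

```python
def screen_items(loadings, smcs, mi_pairs,
                 loading_cut=0.7, smc_cut=0.5, mi_cut=15.0):
    """Flag items for deletion under the three criteria: a low
    standardised regression weight, a low squared multiple correlation,
    or membership in a high-MI redundant pair (only the lower-loading
    item of the pair is flagged)."""
    flagged = set()
    for item, lam in loadings.items():
        if lam < loading_cut or smcs[item] < smc_cut:
            flagged.add(item)
    for (a, b), mi in mi_pairs.items():
        if mi > mi_cut:
            flagged.add(a if loadings[a] < loadings[b] else b)
    return flagged

loadings = {"IMPACTa": 0.85, "IMPACTb": 0.94, "IMPACTc": 0.93,
            "IMPACTd": 0.97, "IMPACTe": 0.95}
# In a congeneric model the SMC of an item is its squared loading.
smcs = {k: v * v for k, v in loadings.items()}
flagged = screen_items(loadings, smcs, {("IMPACTa", "IMPACTb"): 96.023})
```

Here the high MI between IMPACTa and IMPACTb flags the lower-loading item, IMPACTa, mirroring the deletion decision described below.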

These three steps are repeated until the observed variables achieve a good model fit. In the current run, all values in the SRW output were above 0.7, and the SMC values were all above 0.5. Redundant items in the measurement model were then examined through the modification indices (MI). Table 6-1 presents the MI output showing the covariance between each pair of items (redundancy is shown through the correlated measurement errors of the respective items). The MI of 96.023 (in bold) is considered high, indicating that items IMPACTa and IMPACTb are redundant and that, as a result, their measurement errors, e50 and e51, are highly correlated. There are two ways to tackle this problem: 1) delete one of the two redundant items and re-specify the measurement model, or 2) set the two correlated errors as a "free parameter estimate" and re-specify the measurement model.

When a model has three or fewer items, deleting an item would reduce the degrees of freedom to zero, and SEM could not compute the fitness indices (Awang, 2012). In that situation, the best solution for redundant items is to set a "free parameter estimate" by connecting the redundant items with a double-headed arrow and re-specifying the model. In situations where the model has more than four items, the normal procedure is to delete the item with the lower factor loading and re-specify the model. According to Byrne (2010), only items that demonstrate both high covariance and a high regression weight in the MI should be candidates for deletion. Table 6-1 shows that the MI value for items IMPACTa and IMPACTb is large (an MI greater than 15 is considered large), indicating that these two items are redundant. The normal procedure is to delete the item (IMPACTa or IMPACTb) with the lower factor loading and re-specify the model; since IMPACTa has the lower factor loading, it was the candidate for deletion.

Table 6-1: AMOS selected text output – Modification indices for performance impact

Errors MI-covariance Par change Path

e52 ↔ e53 8.435 .031 IMPACTc →IMPACTd

e51 ↔ e53 10.545 -0.040 IMPACTb →IMPACTd

e51 ↔ e52 5.960 -0.041 IMPACTb →IMPACTc

e50 ↔ e53 7.861 -0.036 IMPACTa →IMPACTd

e50 ↔ e52 4.070 -0.035 IMPACTa →IMPACTc

e50 ↔ e51 96.023 .192 IMPACTa →IMPACTb

After the deletion of item IMPACTa, the model was re-specified for another CFA test, and the results show that the model fit the data well, with χ² (df = 2, n = 281) = 6.422 and an improved χ²/df (CMIN/df) of 3.21 (required level below 5). However, the p-value of 0.040 remained below the required level of 0.05. Further fit indices also supported the model: GFI = 0.988, AGFI = 0.941, NNFI = 0.990 and CFI = 0.997. Figure 6.2 shows the fit values after applying the modification indices.

Figure 6.2 Factor analysis of performance impact items after modification

6.4.1.2 Audit software application

There are three latent constructs to assess under audit software application: audit planning, audit testing and report writing. In total, fifteen items were used to measure audit software application: five items for planning, seven for testing and three for report writing. Figure 6.3 shows the factor loadings for the items in the measurement model for audit planning, testing and report writing. The results show that the model did not meet any of the model fit criteria. Running the maximum likelihood estimation on the working data revealed a significant chi-square (χ² = 554.42, df = 87, n = 281, p = .000). The other indices were GFI = .771, AGFI = .662, NNFI = .804, CFI = .843, RMSEA = .157 and χ²/df = 6.37. These indices indicate that the measurement model fitted poorly. This situation occurs when some items have low factor loadings. The factor loadings of all items under the planning construct exceeded the required level of 0.6; however, the loadings of two items under the audit testing construct, namely ATa and ATb, were below the required level. The normal procedure is to delete the low-loading items one at a time, starting with the lowest; in this case, item ATa was deleted first, followed by ATb, and the new model was re-specified. As for report writing, three items were used to measure the construct, and one item, RWb, had a factor loading below the recommended level of .60. This item was therefore deleted and the model re-specified.


Figure 6.3 Factor analysis of audit software application items – planning, testing and

report writing

After deleting the three low-loading items ATa, ATb and RWb (one after another), the fitness indices showed a little improvement but still did not reach the required level. The model therefore needed a further refinement process, which included scanning the output and estimating the modification indices to achieve a better fit. An MI of approximately 15 or greater suggests that the fit could be improved significantly by freeing the corresponding path to be estimated (Awang, 2012); Hair et al. (2010) suggested that an MI of approximately 4.0 or above could be considered as a point for deletion to improve model fit. In this study, MI values above 15 were considered for deletion or set as free estimates, starting with the highest value, before the model was re-specified. Figure 6.4 reveals the SRW and SMC values of each item before measurement refinement. Two items, ATc and ATd, had SMC values below the cut-off level, at .48 and .49 respectively; these two items were considered as deletion candidates after examining the MI. Table 6-2 illustrates the MI output indicating the need for measurement refinement.

Figure 6.4 Factor analysis of audit software application items before MI

Table 6-2: AMOS selected text output – Modification indices for audit software

application

Errors MI-covariance Par change Path

e43 ↔ e45 17.847 -.259 ATd →ATf

e42 ↔ e43 66.429 .753 ATc →ATd

e35 ↔ e38 25.171 -.341 APa →APd

e35 ↔ e37 15.939 .290 APa →APc

According to Byrne (2010), only items that demonstrate both high covariance and a high regression weight in the modification indices should be candidates for deletion. As for the other criteria, if an item proves problematic on most of the levels mentioned above, it is also a candidate for deletion. Following the MI process described above, CFA was performed again with two high-covariance items (ATc and APa) removed. The goodness-of-fit indices improved, and the modified model showed a better fit to the data (χ² = 118.32, df = 32, p = .000, n = 281): GFI = .925, CFI = .953, RMSEA = .098 and χ²/df = 3.69. Although the RMSEA did not achieve adequate fit (<.08), this is mitigated by the higher value of GFI among the absolute fit indices. Even though the chi-square is still significant, these values suggest that the model fits the data adequately. Given the adequate fit, and given that the correlations between the underlying factors are less than .85 (see the values on the double-headed arrows in Figure 6.4), no further adjustment was required.

Figure 6.4 Factor analysis of audit software application items after MI

6.4.1.3 Performance expectancy and effort expectancy

As presented in Figure 6.5, six items (PEa–PEf) were used to measure the performance expectancy construct and four items (EEa–EEd) to measure the effort expectancy construct. Even though all factor loadings were above the required level of 0.6, the CFA indicated that the initial measurement model needed to be re-specified. The chi-square was significant (χ² = 183.235, df = 34, p = .000, n = 281), and the fitness indices did not achieve the required levels: GFI = 0.888, AGFI = 0.818, NNFI = 0.930, CFI = 0.947, RMSEA = 0.125 (above 0.08). This indicates that the initial measurement model needed to be re-specified using the modification indices (MI), which can identify correlated errors among the items. Correlated errors occur when two or more items are redundant with each other.

Figure 6.5 Factor analysis of performance expectancy items

Table 6-3 presents the covariance between each pair of items in the model; redundancy is shown through the correlated measurement errors of the respective items. The MI value for items EEd and EEc is large (an MI greater than 15 is considered large), which indicates that these two items are redundant.

Table 6-3: AMOS selected text output – Modification indices for performance

expectancy and effort expectancy

Errors MI-covariance Par change Path

e7 ↔ e8 54.432 .204 EEd →EEc

e1 ↔ e10 18.762 .101 PEf →EEa

e1 ↔ e6 15.444 -.079 PEf →PEa

Figure 6.6: Factor analysis of performance expectancy items after MI


6.4.1.4 Social influence and facilitating condition items

Figure 6.7 shows the factor loadings for the items in the measurement model for the social influence and facilitating condition constructs. As mentioned earlier, constructs with two or three indicators should be tested in pairs, since a construct with fewer than four items leads to zero degrees of freedom (df) and hence a meaningless zero chi-square value. As facilitating condition comprises only three items, it was paired with social influence. The schematic diagram shows that item SIc has a factor loading below 0.6 and was therefore deleted in order to make the data fit the model.

Figure 6.7: Factor analysis of social influence and facilitating condition items


Figure 6.8: Factor analysis of social influence and facilitating condition items after

deletion of low loading item.

6.4.1.5 Organizational, infrastructure and technology support

Figure 6.9: Factor analysis of organizational, infrastructure and technology support

items

Figure 6.10: Factor analysis of organizational, infrastructure and technology support

items after deletion of low loading item.


6.4.1.6 Computer self-efficacy

Figure 6.11: Factor analysis of computer self-efficacy items.

Figure 6.12: Factor analysis of computer self-efficacy items- after MI

Low-loading items were deleted from the model.


6.4.2 Pooled Measurement Model

Figure 6.13: CFA measurement model combining all constructs.


6.4.3 Reliability and Validity of the Constructs

Before testing the hypotheses in the structural model, the reliability and validity of the underlying constructs need to be assessed (De Wulf et al., 2001). Reliability testing is required to ensure the consistency of a set of measurements (Cronbach, 1971), and one of the main concerns is internal consistency. For this purpose, the constructs specified in the measurement model were assessed for reliability using Cronbach's alpha, construct reliability (CR) and average variance extracted (AVE), and for validity in terms of construct, convergent and discriminant validity. Reliability of the measures in this thesis was first assessed using Cronbach's (1951) coefficient alpha and then using confirmatory factor analysis (CFA). Cronbach (1971) recommended that Cronbach's alpha should be above 0.70 to indicate a sufficient result. Table 6.3 shows that all the constructs exceed the suggested level of 0.70, suggesting that internal consistency is not a concern.
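Cronbach's alpha can be computed directly from raw item scores. The sketch below uses made-up Likert responses, not the study's data:

```python
def cronbach_alpha(items):
    """items: a list of equal-length score lists, one per item.
    alpha = k/(k-1) * (1 - sum(item variances) / variance of totals)."""
    def var(xs):
        # Sample variance (n - 1 denominator).
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

    k = len(items)
    totals = [sum(scores) for scores in zip(*items)]
    return k / (k - 1) * (1 - sum(var(i) for i in items) / var(totals))

# Three hypothetical 5-point Likert items answered by six respondents.
items = [[4, 5, 3, 4, 2, 5],
         [4, 4, 3, 5, 2, 5],
         [5, 5, 2, 4, 3, 4]]
alpha = cronbach_alpha(items)   # about 0.88, above the 0.70 threshold
```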

Construct reliability (CR) measures the reliability and internal consistency of the measured variables representing a latent construct, while average variance extracted (AVE) is the average percentage of variation explained by the items in a construct. When CFA is run, the AMOS output does not produce the constructs' CR and AVE; therefore, CR and AVE were calculated from the model estimates using the formulas given by Fornell and Larcker (1981). The rule of thumb for either reliability estimate is that .7 or higher suggests good reliability (Hair et al., 2010); reliability between .6 and .7 may be acceptable provided that the other indicators of a model's construct validity are good. For convergent validity, the recommended threshold is an AVE greater than 0.5 (Fornell & Larcker, 1981). Based on these assessments, the measures used in this study were within the acceptable levels, supporting the reliability of the constructs (see Table 6.3).
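The Fornell and Larcker (1981) formulas referred to above can be sketched as follows, checked here against the performance-impact row of Table 6.3:

```python
def composite_reliability(loadings):
    """CR = (sum of loadings)^2 / ((sum of loadings)^2 + sum of error
    variances), where each standardised loading l has error variance
    1 - l^2 (Fornell & Larcker, 1981)."""
    s = sum(loadings)
    e = sum(1 - l * l for l in loadings)
    return s * s / (s * s + e)

def average_variance_extracted(loadings):
    """AVE = mean of the squared standardised loadings."""
    return sum(l * l for l in loadings) / len(loadings)

# Standardised loadings of the retained performance-impact items
# (IMPACTb-IMPACTe, Table 6.3).
impact = [0.85, 0.93, 0.97, 0.95]
cr = composite_reliability(impact)          # ~0.960, as in Table 6.3
ave = average_variance_extracted(impact)    # ~0.858, as in Table 6.3
```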

In the case of validity, CFA has also been used to assess construct, convergent and discriminant validity. Empirically, construct validity exists when the measured items actually reflect the theoretical latent construct those items are designed to measure (Hair et al., 2010). This validity is achieved when the fitness indices meet the following requirements: GFI of 0.90 or higher, CFI of 0.90 or higher, RMSEA of 0.08 or less, and a chi-square/df ratio of less than 5.0 (refer to the table of measurement model fit in Chapter 4). In this study, the results obtained from the measurement model assessment show that the goodness-of-fit indices confirmed construct validity (refer to Table 6.4).

Convergent validity of a construct is the extent to which the indicators of that construct converge or share a high proportion of variance in common. It can be assessed through factor loadings, average variance extracted and reliability (Hair et al., 2010). The size of the factor loadings is one important consideration for convergent validity: a good rule of thumb is that standardised loading estimates should be .5 or higher, and ideally .7 or higher. Convergent validity is also considered adequate when AVE is 0.5 or more (Fornell & Larcker, 1981). With high convergent validity, high loadings on a factor indicate that the items converge on a common point, the latent construct. The results of the convergent validity assessment are shown in Table 6.3. The factor loadings of all items (after applying the required modifications) exceed the recommended level of 0.7, except for two items, SId (.56) and CSEg (.66); since the loadings of these two items are still above .5, they remain acceptable. The AVE values, all of which exceed .5, provide additional support confirming that the measures adopted in this study satisfy the requirement for convergent validity.

Table 6.3: Results of CFA for measurement model

                                       Internal      Convergent validity
                                       reliability   ------------------------------
Construct                  Item        Cronbach's    Standardized    CR      AVE
                                       alpha         loading
Performance impact         IMPACTb     0.960         0.85            0.960   0.858
                           IMPACTc                   0.93
                           IMPACTd                   0.97
                           IMPACTe                   0.95
Audit planning             APb         0.878         0.84            0.898   0.688
                           APc                       0.80
                           APd                       0.91
                           APe                       0.76
Audit testing              ATd         0.911         0.67            0.914   0.729
                           ATe                       0.89
                           ATf                       0.95
                           ATg                       0.88
Performance expectancy     PEa         0.954         0.90            0.957   0.787
                           PEb                       0.92
                           PEc                       0.91
                           PEd                       0.93
                           PEe                       0.89
                           PEf                       0.76
Effort expectancy          EEa         0.893         0.87            0.869   0.724
                           EEb                       0.88
                           EEc                       0.80
Social influence           SIa         0.733         0.84            0.789   0.513
                           SIb                       0.72
                           SId                       0.56
Facilitating condition     FCa         0.816         0.78            0.822   0.604
                           FCb                       0.81
                           FCc                       0.74
Organizational support     OSa         0.899         0.87            0.915   0.859
                           OSb                       0.98
Information support        ISa         0.886         0.76            0.867   0.721
                           ISb                       0.91
                           ISc                       0.87
Technology support         TSa         0.953         0.94            0.934   0.872
                           TSb                       0.96
                           TSc                       0.90
Computer self-efficacy     CSEe        0.811         0.68            0.812   0.521
                           CSEf                      0.78
                           CSEg                      0.66
                           CSEh                      0.76


Table 6.4: The assessment of fitness for measurement model

Name of Category Name of Index Index value Comments

1. Absolute fit RMSEA .070 Achieved

GFI .779 Not achieved

2. Incremental fit CFI .900 Achieved

3. Parsimonious fit Chi-square/df 2.383 Achieved

Discriminant validity measures the extent to which a construct is truly distinct from other constructs. CFA provides two common ways to assess it. First, following Kline's (2005) suggestion that the estimated correlations between factors should not be higher than .85, each measurement model was subjected to this assessment; redundant items that caused high correlations among factors were deleted, providing evidence of discriminant validity (see the measurement models tested in the previous sections). Second, discriminant validity was assessed by examining the pattern structure coefficients to determine whether the factors in the measurement model are empirically distinguishable (Thompson, 1997). Discriminant validity was also examined by comparing the squared correlations between constructs with the variance extracted for each construct (Fornell & Larcker, 1981). The analysis results showed that the squared correlation for each construct is less than the average variance extracted by the indicators measuring that construct, as shown in Table 6.5, indicating that the measures have adequate discriminant validity. In summary, the measurement model demonstrated adequate reliability, convergent validity and discriminant validity.
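The Fornell-Larcker comparison amounts to checking that each construct's square root of AVE (the diagonal of Table 6.5) exceeds its correlations with every other construct. A sketch on a small subset of the reported values:

```python
import math

def fornell_larcker_ok(ave, corr):
    """ave: construct -> AVE; corr: (a, b) -> inter-construct
    correlation. Discriminant validity holds when sqrt(AVE) of each
    construct exceeds all of its correlations with other constructs."""
    for (a, b), r in corr.items():
        if abs(r) >= math.sqrt(ave[a]) or abs(r) >= math.sqrt(ave[b]):
            return False
    return True

# Subset of Table 6.5: technology support (TS), organisational support
# (OS) and information support (IS); AVE values from Table 6.3.
ave = {"TS": 0.872, "OS": 0.859, "IS": 0.721}
corr = {("TS", "OS"): 0.353, ("TS", "IS"): 0.746, ("OS", "IS"): 0.524}
ok = fornell_larcker_ok(ave, corr)   # True for this subset
```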

Table 6.5: Discriminant validity of constructs

                               (1)    (2)    (3)    (4)    (5)    (6)    (7)    (8)    (9)   (10)   (11)
(1) Technology support        0.933
(2) Organisational support    0.353  0.926
(3) Computer self-efficacy    0.242  0.269  0.721
(4) Information support       0.746  0.524  0.339  0.849
(5) Facilitating condition    0.523  0.658  0.354  0.669  0.777
(6) Social influence          0.294  0.793  0.151  0.516  0.715  0.716
(7) Effort expectancy         0.207  0.511  0.231  0.368  0.660  0.624  0.850
(8) Performance expectancy    0.203  0.570  0.226  0.365  0.536  0.675  0.818  0.887
(9) Audit testing             0.116 -0.065  0.147  0.191  0.061  0.078  0.075  0.078  0.853
(10) Audit planning           0.252  0.319  0.128  0.360  0.249  0.378  0.286  0.162  0.088  0.829
(11) Performance impact       0.079  0.055  0.058  0.118  0.067  0.098  0.077  0.051  0.228  0.235  0.926

Note: diagonal values are the square root of the AVE for each construct.


6.4.4 The assessment of normality

After the fitness indices had been achieved, the data were examined for normality before proceeding with the structural model.

6.4.5 CFA for Second-order Motivation Structure

Second-order models are potentially applicable when (1) the lower-order factors are substantially correlated with each other, and (2) there is a higher-order factor hypothesised to account for the relations among the lower-order factors (Chen, Sousa, & West, 2005). A second-order factor model has several potential advantages over a first-order factor model. First, the second-order model can test whether the hypothesised higher-order factor actually reflects the pattern of relations between the first-order factors. Second, a second-order model puts a structure on the pattern of covariance between the first-order factors, explaining the covariance in a more parsimonious way with fewer parameters (Gustafsson & Balke, 1993). Considering these advantages, this thesis took a further step towards a more parsimonious model by introducing a higher-order structure of motivational support.

In order to introduce a higher-order structure into the research model, the literature dictates that the CFA is run first for the first-order structure, the higher-order factor is then introduced, and the higher-order structure is finally incorporated into the hypothesized research model (Byrne, 2010). The CFA model to be tested in the present application hypothesized a priori that (a) responses to the support factors can be explained by three first-order factors (organizational support, information support and technology support) and one second-order factor (motivational support); (b) each item has a nonzero loading on the first-order factor it was designed to measure, and zero loadings on the other two first-order factors; (c) error terms associated with each item are uncorrelated; and (d) covariation among the three first-order factors is explained fully by their regression on the second-order factor. A diagrammatic representation of this

model is presented in Figure 6.14. In the present model, given the specification of only three first-order factors, the higher-order structure will be just-identified unless a constraint is placed on at least one parameter in this upper level of the model. To resolve this just-identification in the present second-order model, an equality constraint is placed on a parameter at the upper level known to yield estimates that are approximately equal. Thus, an equality constraint is placed on the path of interest (information support to motivation), fixing the relationship between the constructs of interest to be equal to 1.
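The just-identification described above can be expressed with a simple counting argument (an illustrative convention, not AMOS output): the upper level supplies k(k+1)/2 = 6 variances and covariances among the three first-order factors, while the unconstrained second-order structure estimates six parameters; fixing one path frees one degree of freedom:

```python
# Sketch: degrees of freedom at the upper (second-order) level for k
# first-order factors, before and after fixing parameters. Illustrative
# counting convention: second-order factor variance fixed to 1 for scaling.
def upper_level_df(k, n_fixed=0):
    moments = k * (k + 1) // 2       # (co)variances among first-order factors
    free_params = 2 * k - n_fixed    # loadings + disturbance variances
    return moments - free_params

print(upper_level_df(3))             # 0 -> just-identified
print(upper_level_df(3, n_fixed=1))  # 1 -> testable after the constraint
```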

In total, eight items representing the three factors of motivational support were subjected to CFA. Motivational support is considered a reflective construct, where the direction of causality runs from the construct to the indicators; this is shown by arrows pointing from the construct to the indicators. Organisational support was measured using two items (OSa and OSb), information support was measured by three items (ISa to ISc), and technology support was measured by three items (TSa to TSc). After introducing the second-order factor into the model, the CFA results

showed that the chi-square was significant (χ² = 35.918, df = 18, n = 281, p = 0.007), indicating that the data did not fit the model. However, the other goodness-of-fit indices showed the following readings: GFI = .971, AGFI = .942, CFI = .991, RMSEA = .060 and CMIN (χ²/df) = 1.995 (< 5). These values suggest an adequate model fit, even though the chi-square was significant.
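As a hedged sketch (an illustration only, not the thesis's AMOS input), the specification just described can be written in the lavaan-style model syntax accepted by SEM tools such as R's lavaan or Python's semopy, and the adequacy judgment can be made explicit against common rule-of-thumb cut-offs (CMIN/df < 5; GFI, AGFI and CFI at or above .90; RMSEA below .08 — the cut-offs are assumed here, while the fit values are the ones reported above):

```python
# Second-order CFA in lavaan-style syntax (item names OSa..TSc are from the
# text; the factor labels are ours).
second_order_cfa = """
  OS =~ OSa + OSb              # organisational support
  IS =~ ISa + ISb + ISc        # information support
  TS =~ TSa + TSb + TSc        # technology support
  Motivation =~ OS + IS + TS   # second-order motivational support factor
"""

# Checking the reported fit statistics against common rule-of-thumb cut-offs
# (assumed thresholds, reported values).
chi_square, df = 35.918, 18
indices = {"GFI": 0.971, "AGFI": 0.942, "CFI": 0.991, "RMSEA": 0.060}

cmin_df = chi_square / df
adequate = (
    cmin_df < 5
    and all(indices[name] >= 0.90 for name in ("GFI", "AGFI", "CFI"))
    and indices["RMSEA"] < 0.08
)
print(round(cmin_df, 3), adequate)   # 1.995 True
```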

6.4.6

6.4.7 Measurement model with second-order structure

6.5 The Structural Model

Having established measurement model fit and validity, the next step is testing the structural model, that is, testing the hypothesised theoretical model and the relationships between latent constructs. The structural model differs from the measurement model in that the emphasis moves from the relationships between latent constructs and measured variables to the nature and magnitude of the relationships between constructs (Hair et al., 2010). The structural relationship between any two constructs is represented empirically by the structural parameter estimate, also known as a path estimate. By definition, the structural model is "the portion of the model that specifies how the latent variables are related to each other" (Arbuckle, 2005, p. 90). The structural model aims to specify which latent constructs directly or indirectly influence the values of other latent constructs in the model (Byrne, 1989). Moreover, the ultimate purpose of the structural model carried out in this study is to test the underlying hypotheses in order to answer the research questions outlined in Chapter One.

As presented in Table 6.6, these hypotheses were represented in nine causal paths (H1a, H1b,

H1c, H2a, H2b, H2c, H3, H4, and H5) to determine the relationships between the constructs

under consideration. In the proposed theoretical model discussed in Chapter Three, the underlying constructs were classified into two classes: exogenous constructs (financial, social and structural) and endogenous constructs (relationship quality, emotions, and loyalty).
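As an illustration only (Table 6.6's actual assignment of the nine paths is not reproduced here, and the construct and item names below are hypothetical placeholders), lavaan-style syntax makes the measurement/structural distinction explicit: `=~` lines form the measurement model, while `~` lines are the structural path estimates:

```python
# Illustration only: in lavaan-style syntax the structural model is the set of
# latent-to-latent regressions ('~'), as distinct from the measurement model
# ('=~'). Construct and item names here are hypothetical placeholders.
structural_sketch = """
  Exo1 =~ x1 + x2 + x3        # measurement model: exogenous construct
  Endo1 =~ y1 + y2 + y3       # measurement model: endogenous construct
  Endo1 ~ Exo1                # structural model: one path estimate
"""
# Number of structural paths = '~' occurrences minus '=~' occurrences.
print(structural_sketch.count("~") - structural_sketch.count("=~"))  # 1
```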

Figure 6.14: The structural model with second-order factor

Figure 6.15: Parsimonious structural model

6.5.1 Assessment of the Structural Model

6.5.1.1 Audit software application and audit performance

6.5.1.2 Performance expectancy and audit software application

6.6 Hypotheses Testing

6.7 Discussions of Findings

6.8 Summary

CHAPTER 7

CONCLUSIONS, IMPLICATIONS, LIMITATION AND FUTURE RESEARCH

7.1 Chapter Overview

7.2 Discussions of Findings

7.3 Implication of the Findings

7.3.1 Theoretical Implications

7.3.2 Practical Implications

7.4 Limitations of the Study

7.5 Suggestions for Future Research

7.6 Summary

REFERENCES

Abdolmohammadi, M. J. (1991). Factors affecting auditors' perceptions of applicable decision

aids for various audit tasks. Contemporary Accounting Research, 7(2), 535-48.

AbuShanab, E., & Pearson, J. M. (2007). Internet banking in Jordan: The Unified Theory of

Acceptance and Use of Technology (UTAUT) perspective. Journal of Systems and

Information Technology, 9(1), 78-97.

Agarwal, R., Sambamurthy, V., & Stair, R. M. (2000). Research report: the evolving

relationship between general and specific computer self-efficacy - an empirical

assessment. Information Systems Research, 11(4), 418-30.

Ahmi, A. (2011). The use of generalised audit software (GAS) by external auditors in the UK.

Doctoral Symposium 28th & 29th March 2011, (pp. 1-11). Brunel Business School.

Ajzen, I. (1991). The theory of planned behavior. Organizational Behavior and Human

Decision Processes, 50, 179-211.

Ajzen, I. (2002). Residual effects of past on later behavior: Habituation and reasoned action

perspectives. Personality & Social Psychology Review, 6(2), 107-22.

Alali, F. A., & Pan, F. (2011). Use of audit software: Review and survey. Internal Auditing,

26(5), 29-36.

AlAwadhi, S., & Morris, A. (2008). The use of the UTAUT model in the adoption of e-

government services in Kuwait. Proceedings of the 41st Hawaii International

Conference on System Sciences, 1-11.

Castañeda, J. A., Muñoz-Leiva, F., & Luque, T. (2007). Web Acceptance Model (WAM): Moderating effects of user experience. Information & Management, 44, 384-96.

Al-Gahtani, S. S. (2004). Computer technology acceptance success factors in Saudi Arabia:

an exploratory study. Journal of Global Information Technology Management, 7(1), 5-

29.

Almutairi, H., & Subramanian, G. H. (2005). An empirical application of the DeLone and

McLean model in the Kuwait private sector. The Journal of Computer Information

Systems, 45(3), 113-22.

Alreck, P. L., & Settle, R. B. (1995). The Survey Research Handbook (Second ed.). Chicago:

The Irwin Series in Marketing.

Anderson, J. E., Schwager, P. H., & Kerns, R. L. (2006). The drivers for acceptance of Tablet

PCs by faculty in a college of business. Journal of Information Systems Education,

17(4), 429-40.

Arnold, V., & Sutton, S. (1998). The theory of technology dominance: Understanding the

impact of intelligent decision aids on decision maker's judgment. Advance in

Accounting Behavioral Research, 175-94.

Bagranoff, N. A., & Vendrzyk, V. P. (2005). The changing role of IS audit among the big five

US-based accounting firms. Information Systems Control Journal, 5(5), 33-37.

Bandura, A. (1986). Social foundations of thought and action: A social cognitive theory. Englewood Cliffs, NJ: Prentice-Hall.

Bandura, A. (1997). Self-efficacy: The exercise of control. New York: W.H. Freeman.

Barnette, J. J. (2000). Effects of stem and Likert response option reversals on survey internal

consistency: If you feel the need, there is a better alternative to using those negatively

worded stems. Educational and Psychological Measurement, 60, 361-70.

Bartlett, J. E., Kotrlik, J. W., & Higgins, C. C. (2001). Organizational research: Determining

appropriate sample size in survey research. Information Technology, Learning, and

Performance Journal, 19(1), 43-50.

Bedard, J. C., Jackson, C., Ettredge, M. L., & Johnstone, K. M. (2003). The effect of training

on auditors' acceptance of an electronic work system. International Journal of

Accounting Information Systems, 4, 227-50.

Berdie, D. R., Anderson, J. F., & Neibuhr, M. A. (1986). Questionnaires: Design and Use (2nd ed.). New Jersey: Scarecrow Press.

Bhattacherjee, A. (2001). Understanding information systems continuance: an expectation-

confirmation model. MIS Quarterly, 25(3), 351-70.

Bible, L., Graham, L., & Rosman, A. (2005). The effect of electronic audit environments on

performance. Journal of Accounting, Auditing & Finance, 27-42.

Bierstaker, J. L., Burnaby, P., & Thibodeau, J. (2001). The impact of information technology on the audit process: An assessment of the state of the art and implications for the future. Managerial Auditing Journal, 16(3), 159-64.

Bonner, S. E. (1990). Experience effects in auditing: The role of task-specific knowledge. The

Accounting Review, 65(1), 72-92.

Boudreau, M. C., & Seligman, L. (2005). Quality of use of a complex technology: A learning-

based model. Journal of Organizational and End User Computing, 17(4), 1-22.

Braun, R. L., & Davis, H. E. (2003). Computer-assisted audit tools and techniques: analysis

and perspectives. Managerial Auditing Journal, 18(9), 725-31.

Burkhardt, M. E., & Brass, D. J. (1990). Changing patterns or patterns of change: The effects of

a change in technology on social network structure and power. Administrative Science

Quarterly, 35, 104-27.

Carmines, E. G., & Zeller, R. A. (1979). Reliability and validity assessment. Beverly Hills, CA: Sage.

Chau, P. Y., & Hu, P. J. (2001). Information technology acceptance by individual

professionals: A model comparison approach. Decision Sciences, 32(4), 679-719.

Coakes, S. J. (2005). SPSS for Windows: Analysis without Anguish. Australia: John Wiley &

Sons Australia, Ltd.

Compeau, D., & Higgins, C. A. (1995). Computer self-efficacy: Development of a measure and

initial test. MIS Quarterly, 19(2), 189-211.

Cooper, D. R., & Schindler, P. S. (2003). Business Research Methods (8th ed.). New York,

USA: McGraw Hill.

Curtis, M. B., & Payne, E. A. (2008). An examination of contextual factors and individual

characteristics affecting technology implementation decisions in auditing.

International Journal of Accounting Information System, 104-21.

Curtis, M. B., Jenkins, J. G., Bedard, J. C., & Deis, D. R. (2009). Auditors' training and proficiency in information systems: A research synthesis. Journal of Information Systems, 23(1), 79-96.

Dasgupta, S., Granger, M., & McGarry, N. (2002). User acceptance of e-collaboration

technology: An extension of the Technology Acceptance Model. Group Decision and

Negotiation, 11(2), 87-100.

Davis, F. D. (1989). Perceived usefulness, perceived ease of use, and user acceptance of

information technology. MIS Quarterly, 13(3), 319-40.

Davis, F. D. (1993). User acceptance of information technology: system characteristics, user

perceptions and behavioral impacts. International Journal of Man-Machine Studies,

475-87.

Davis, F. D., & Bagozzi, R. P. (1989). User acceptance of computer technology: a comparison

of two theoretical models. Management Science, 35(8), 982-1003.

Debreceny, R., Lee, S. L., Neo, W., & Toh, J. S. (2005). Employing generalised audit

software in the financial service sector. Managerial Auditing Journal, 20(6), 605-18.

DeLone, W. H., & McLean, E. R. (1992). Information systems success: The quest for the dependent variable. Information Systems Research, 3(1), 60-95.

Deng, L., Turner, D. E., Gehling, R., & Prince, B. (2010). User experience, satisfaction, and

continual usage intention of IT. European Journal of Information Systems, 19, 60-75.

Devaraj, S., & Kohli, R. (2003). Performance impacts of information technology: Is actual

usage the missing link? Management Science, 49(3), 273-89.

Doll, W. J., & Torkzadeh, G. (1998). Developing a multidimensional measure of system-use

in an organizational context. Information and Management, 33, 171-85.

Dowling, C. (2009). Appropriate audit support system use: The influence of auditor, audit team, and firm factors. The Accounting Review, 84(3), 771-810.

Dowling, C., & Leech, S. (2007). Audit support systems and decision aids: Current practice

and opportunities for future research. International Journal of Accounting Information

Systems, 8, 92-116.

Dulle, F. W., & Minishi-Majaja, M. K. (2011). The suitability of the Unified Theory of

Acceptance and Use of Technology (UTAUT) model in open access adoption studies.

Information Development, 27(1), 32-45.

Eckhardt, A., Laumer, S., & Weitzel, T. (2009). Who influences whom? Analyzing workplace

referents' social influence on IT adoption and non-adoption. Journal of Information

Technology, 24, 11-24.

Flavian, C., Guinaliu, M., & Gurrea, R. (2006). The role played by perceived usability,

satisfaction and consumer trust on website loyalty. Information and Management, 43,

1-14.

Foon, Y. S., & Fah, B. C. (2011). Internet banking adoption in Kuala Lumpur: An application

of UTAUT model. International Journal of Business and Management, 6(4), 161-67.

Goodhue, D. L., & Thompson, R. L. (1995). Task-technology fit and individual performance. MIS Quarterly, 19(2), 213-36.

Goodhue, D. L. (1988). IS attitudes: Toward theoretical and definitional clarity. Database,

19(3/4), 6-15.

Grandon, E., & Mykytyn, P. (2004). Theory-based instrumentation to measure the intention to use electronic commerce in small and medium-sized businesses. Journal of Computer

Information Systems, 44(3), 44-57.

Greenberg, J., & Baron, R. A. (2006). Behavior in Organizations (8th ed.). New Jersey:

Pearson Education International, Prentice Hall.

Gupta, B., Dasgupta, S., & Gupta, A. (2008). Adoption of ICT in a government organization

in a developing country: An empirical study. Journal of Strategic Information

Systems, 17, 140-54.

Hair, J. F., Black, W. C., Babin, B. J., & Anderson, R. E. (2010). Multivariate Data Analysis

(Seventh ed.). New Jersey: Pearson.

Hair, J. F., Black, W. C., Babin, B. J., Anderson, R. E., & Tatham, R. L. (2006). Multivariate

Data Analysis (6th ed.). Upper Saddle River, New Jersey: Pearson, Prentice Hall.

Hair, J. F., Money, A. H., Samouel, P., & Page, M. (2007). Research methods for business.

West Sussex, England: John Wiley & Sons Ltd.

Hall, J. (2000). Information Systems Auditing and Assurance. Mason, OH: South-Western College Publishing.

Helms, G. L., & Lilly, F. L. (2000). Case study on auditing in an electronic environment. The

CPA Journal, April, 52-54.

Ho, J. L. (1994). The effect of experience on consensus of going-concern judgments.

Behavioral Research in Accounting, 6, 160-71.

Hoyle, R. H. (1995). Structural equation modeling: Concepts, issues, and applications.

Thousand Oaks: Sage Publications.

Hussey, J., & Hussey, R. (1997). Business research: A practical guide for undergraduate and

postgraduate students. Great Britain: Macmillan Press Ltd.

Ismail, N. A., & Zainol Abidin, A. (2009). Perception towards the importance and knowledge of information technology among auditors in Malaysia. Journal of Accounting and Taxation, 1(4), 61-69.

Ittner, C. D., & Larcker, D. F. (1998). Innovations in performance measurement: Trends and

research implications. Journal of Management Accounting Research, 10, 205-38.

Janvrin, D., Bierstaker, J., & Lowe, D. J. (2008). An examination of audit information

technology use and perceived importance. Accounting Horizons, 22(1), 1-21.

Janvrin, D., Bierstaker, J., & Lowe, D. J. (2009). An investigation of factors influencing the

use of computer-related audit procedures. Journal of Information Systems, 23(1), 97-

118.

Jensen, M. C., & Meckling, W. H. (2009). Specific knowledge and divisional performance

measurement. Journal of Applied Corporate Finance, 21(2), 49-57.

Kalaba, L. A. (2002). The benefit of CAAT. IT Audit, 5.

Lee, C. H., Yen, D. C., Peng, K. C., & Wu, H. C. (2010). The influence of change agents'

behavioral intention on the usage of the activity based costing/management system

and firm performance: The perspective of unified theory of acceptance and use of

technology. Advances in Accounting, incorporating Advances in International

Accounting, 26, 314-24.

Leung, P., Coram, P., Cooper, B. J., Cosserat, G., & Gill, G. S. (2001). Modern Auditing &

Assurance Services (2nd ed.). John Wiley & Sons Australia, Ltd.

Lewis, W., Agarwal, R., & Sambamurthy, V. (2003). Sources of influence on beliefs about

information technology use: An empirical study of knowledge workers. MIS

Quarterly, 27(4), 657-78.

Liang, D., Lin, F., & Wu, S. (2001). Electronically auditing EDP systems with the support of

emerging information technologies. International Journal of Accounting Information

Systems, 2, 130-47.

Lin, C.-W., & Wang, C.-H. (2011). A selection model of audit software. Industrial

Management & Data Systems, 111(5), 776-90.

Livari, J. (2005). An Empirical Test of DeLone-McLean Model of Information System

Success. Database for Advances in Information Systems, 36(2), 8-24.

Manson, S., McCartney, S., & Sherer, M. (1997). Audit automation: The use of information

technology in the planning, controlling and recording of audit work. Edinburgh:

Institute of Chartered Accountants of Scotland Research Report.

Marcoulides, G. A., & Heck, R. H. (1993). Organizational culture and performance:

Proposing and testing a model. Organization Science, 4(2), 209-25.

Mathieson, K. (1991). Predicting user intentions: Comparing the technology acceptance

model with the theory of planned behavior. Information Systems Research, 2(3), 173-

91.

McGill, T., Hobbs, V., & Klobas, J. (2003). User-developed applications and information

system success: a test of DeLone and McLean's model. Information Resources

Management Journal, 16(1), 24-45.

Moore, G., & Benbasat, I. (1991). Development of an instrument to measure the perceptions

of adopting an information technology innovation. Information Systems Research,

2(3), 192-222.

Nazif, W. M. (2011). The fit between accounting practices and ERP system: Its antecedents

and impact on user satisfaction. Unpublished Doctoral Dissertation. Universiti Utara

Malaysia, Kedah.

Neufeld, D. J., Dong, L., & Higgins, C. (2007). Charismatic leadership and user acceptance of

information technology. European Journal of Information Systems, 16, 494-510.

Nunnally, J. C. (1978). Psychometric theory. New York: McGraw-Hill.

Nunnally, J. C., & Durham, R. L. (1975). Validity, reliability, and special problems of measurement in evaluation research. In E. L. Struening & M. Guttentag (Eds.), Handbook of evaluation research. Beverly Hills, CA: Sage.

O'Donnell, E., & Schultz, J. J. (2003). The influence of business-process-focused audit

support software on analytical procedures judgment. Auditing: A Journal of Practice

& Theory, 22(2), 265-79.

Oliveira, T., & Martins, M. F. (2011). Literature review of information technology adoption

models at firm level. Electronic Journal of Information Systems Evaluation, 14(1), 110-

21.

Paino, H. (2010). Impairment of audit quality: An investigation of factors leading to

dysfunctional audit behaviour. Unpublished Doctoral Dissertation. Edith Cowan

University, Perth, Western Australia.

Pallant, J. (2007). SPSS survival manual: A step by step guide to data analysis using SPSS.

Australia: Allen & Unwin.

Pathak, J. (2003). A model for audit engagement planning of e-commerce. International

Journal of Auditing, 7(2), 121.

Raith, M. (2008). Specific knowledge and performance measurement. RAND Journal of

Economics, 39(4), 1059-79.

Reid, M. (2008). Integrating trust and computer self-efficacy into the technology acceptance

model: Their impact on customers' use of banking information systems in Jamaica.

Doctoral Thesis. Nova Southeastern University.

Rosen, P. (2005). The effect of personal innovativeness on technology acceptance and use.

PhD Thesis, Oklahoma State University.

Al-Gahtani, S. S., Hubona, G. S., & Wang, J. (2007). Information technology (IT) in Saudi Arabia: Culture and the acceptance and use of IT. Information & Management, 44, 681-91.

Schaik, P. V. (2009). Unified theory of acceptance and use of websites used by students in

higher education. Journal of Educational Computing Research, 40(2), 229-57.

Shaikh, J. M. (2005). E-commerce impact: emerging technology - electronic auditing.

Managerial Auditing Journal, 20(4), 408-21.

Singh, P., Fook, C. Y., & Sidhu, G. K. (2006). A comprehensive guide to writing a research proposal. Surrey, UK: Venton Publishing.

Singleton, T. (2006). Generalised Audit Software: Effective and efficient tool for today's IT

audit. Information Systems Control Journal, 1-3.

Singleton, T., & Flesher, D. L. (2003). A 25-year retrospective on the IIA's SAC project.

Managerial Auditing Journal, 18(1), 39-53.

Straub, D., Boudreau, M. C., & Gefen, D. (2004). Validation guidelines for IS positivist research. Communications of the AIS, 14, 380-427.

Straub, E. T. (2009). Understanding technology adoption: Theory and future directions for

informal learning. Review of Educational Research, 79(2), 625-49.

Sundarraj, R. P., & Vuong, T. (2004). Impact of using attachment handling electronic agents

on an individual's perceived work performance. Internet Research, 14(1), 6-18.

Szajna, B. (1996). Empirical evaluation of the revised technology acceptance model. Management Science, 42(1), 85-92.

Tabachnick, B. G., & Fidell, L. S. (2007). Using Multivariate Statistics (5th ed.). Boston:

Pearson Education, Inc.

Tan, H. T., & Libby, R. (1997). Tacit managerial versus technical knowledge as determinants

of audit expertise in the field. Journal of Accounting Research, 35(1), 97-113.

Tan, M., & Teo, T. S. (2000). Factors influencing the adoption of internet banking. Journal of

the Association of Information Systems, 1, 1-42.

Taylor, S., & Todd, P. A. (1995). Assessing IT usage: the role of prior experience. MIS

Quarterly, 19(3), 561-70.

Thompson, R. L., Higgins, C. A., & Howell, J. M. (1991). Personal computing: Toward a

conceptual model of utilization. MIS Quarterly, 15(1), 167-87.

Thong, J., Hong, S., & Tam, K. (2006). The effects of post adoption beliefs on the

expectation-confirmation model for information technology continuance. International Journal of Human-Computer Studies, 64, 799-810.

Tongren, J. (1999). Integrated IT audit: Part 1. Institute of Internal Auditors, 2(March).

Torkzadeh, G., & Doll, W. J. (1999). The development of a tool for measuring the perceived

impact of information technology on work. Omega, 27, 327-39.

Venkatesh, V., & Davis, F. D. (2000). A theoretical extension of the technology acceptance

model: Four longitudinal field studies. Management Science, 46(2), 186-204.

Venkatesh, V., & Zhang, X. (2010). Unified theory of acceptance and use of technology: U.S.

vs. China. Journal of Global Information Technology Management, 5-27.

Venkatesh, V., Morris, M. G., Davis, G. B., & Davis, F. D. (2003). User acceptance of

information technology: Toward a unified view. MIS Quarterly, 27(3), 425-78.

Venkatesh, V., Morris, M. G., Sykes, T. A., & Ackerman, P. L. (2004). Individual reactions to new technologies in the workplace: The role of gender as a psychological construct.

Journal of Applied Social Psychology, 445-67.

Vierra, A., Pollack, J., & Golez, F. (1998). Reading educational research. New Jersey:

Prentice-Hall Inc.

Wang, H.-I., & Yang, H.-L. (2005). The role of personality traits in UTAUT model under

online stocking. Contemporary Management Research, 1(1), 69-82.

Wang, H. S., & Wang, S. H. (2010). User acceptance of mobile internet based on the unified

theory of acceptance and use of technology: investigating the determinants and gender

difference. Social Behavior and Personality, 38(3), 415-26.

Wang, Y. S., Wu, M. C., & Wang, H. Y. (2009). Investigating the determinants and age and

gender differences in the acceptance of mobile learning. British Journal of

Educational Technology, 40(1), 92-118.

Weijters, B., Geuens, M., & Schillewaert, N. (2009). The proximity effect: the role of inter-

item distance on reverse-item bias. International Journal of Research in Marketing,

26, 2-12.

Wu, Y. L., Tao, Y. H., & Yang, P. C. (2008). The use of unified theory of acceptance and use

of technology to confer the behavioral model of 3G mobile telecommunication users.

Journal of Statistics & Management Systems, 11, 919-49.

Yang, D. C., & Guan, L. (2004). The evolution of IT auditing and internal control standards in financial statement audits. Managerial Auditing Journal, 19(4), 544-55.

Zhao, N., Yen, D. C., & Chang, I. (2004). Auditing in the e-commerce era. Information

Management & Computer Security, 12(5), 389-400.

Zhou, T., Lu, Y., & Wang, B. (2010). Integrating TTF and UTAUT to explain mobile banking

user adoption. Computers in Human Behavior, 26, 760-67.