
    3rd International Conference on Assessing Quality in Higher Education, 6th–8th December 2010, Lahore, Pakistan

    CREATIVITY AND DIVERSITY:CHALLENGES FOR QA BEYOND 2010

    Dr. David Woodhouse, Executive Director, Australian Universities Quality Agency,

    President, International Network for Quality Assurance Agencies in Higher Education

    Dr David Woodhouse is Executive Director of the Australian Universities Quality Agency (AUQA), which carries out quality audits of Australia's universities, other higher education institutions and accreditation agencies. David provides advice and training on quality assurance nationally and internationally. He is currently President of INQAAHE (having also served in that role from 1997 to 2001) and was Secretary/Treasurer of the Asia-Pacific Quality Network from 2005 to 2007.

    David Woodhouse was founding Director of the New Zealand Universities Academic Audit Unit (1994–2001) and Deputy Director of the Hong Kong Council for Academic Accreditation (1990–1994). His first career was as an academic in mathematics and computer science, including being dean of a university faculty. His leisure interests include bushwalking, multisport and other endurance events.


    CONTEXT

    When the Australian Universities Quality Agency (AUQA) was established in 2000 by the state and federal governments in Australia as the principal national quality body for higher education (HE), it was expected to deal differently with different types of institutions, so as not to homogenise the system. At that time, AUQA mainly reviewed universities, but now we review more non-university HE institutions, so the breadth of diversity we are expected to support is far greater.

    Australia had considered and rejected a US-like accreditation system, and had considered and rejected a UK-like system of audits plus program-level assessments. Both of these were seen as more intrusive than what was needed by the Australian HE system. Instead, Australia was attracted to the fitness-for-purpose (FFP) quality audit system of the New Zealand Universities Academic Audit Unit, and settled on an adaptation of that.

    It has become fashionable to say that a fitness-for-purpose approach to quality has had its day, but this is to ignore the flexibility of this concept. The flexibility derives from the all-embracing nature of the term 'purpose'. The purpose need not refer only to the institution's own mission, but could include standards, accreditation thresholds, and legal requirements.

    But I'm jumping ahead. When AUQA was set up, the prime reference point for the fitness-for-purpose quality audits was each university's own objectives. Furthermore, the agency was set up in collaboration and consultation with the institutions, to reassure them that their voices were being heard.

    PREPARING THE GROUND

    1.1. Involvement and Ownership (Woodhouse, 2007)

    AUQA was well placed to support diversity, as the quality audit approach means that all institutions can be different from the others: each institution can be evaluated in its own terms, with benchmark comparisons, yet without homogenising.


    The Executive Director of AUQA visited all institutions over the first 18 months of AUQA's existence to reassure them that the Agency would be sympathetic to the academic milieu, while being thorough and rigorous in its audits. Other extensive consultations took place also. Transparency in all the planning activities enhanced the acceptability of the final strategy and approach. Academics were involved in decisions about, and the work of, the Agency, giving them a stake in its success and a level of comfort that it would be compatible with the academic enterprise.

    Of course, it is not only academics and academic institutions that must be satisfied with the QA process for it to be a success. AUQA also involved other people from various backgrounds in discussing quality issues, to enhance the insights of the group process. A multi-pronged approach is more likely to garner the support of both academia and the external stakeholders.

    To this end, AUQA has adopted an extended peer review approach, using audit panels from a variety of backgrounds. As a result, judgements about academia are being made by academics (so there is confidence that the judgements are valid) but also by people from industry and overseas (so there is confidence that the judgements are not self-serving). Institutions are consulted in AUQA's selection of the peer groups, and institutions are also invited to provide comments on them after the audit.

    This is relevant to the theme of this conference, as mixed peer panels increase the chances of the agency recognising, acknowledging and accepting diversity and creativity in the institutions. Industrial members of AUQA panels, far from stifling institutional creativity, are more likely to question why institutions are doing things in the same old way.

    AUQA instilled confidence in the institutions that adverse consequences would be minimised, though one cannot control the media, and in any case valid criticisms should be heard.

    1.2. Practical Tactics

    AUQA's audit techniques for diversity include:

    Panel composition (as mentioned above)

    Open sessions at the audit visits: speaking to anyone who wants to meet the panel

    Walkabout sessions at the audit visits: speaking to staff and students in their place of work

    Longer or shorter audit visits

    A range of sampling rationales

    Thorough training of auditors


    Auditor training

    Stopping auditors tending towards the norm ('Well, in my university we do X')

    Avoiding being directive ('You must have performance indicators'; 'Your academic board must have academic control')

    Avoiding commended good practices (especially those on AUQA's Good Practice Database) becoming mandatory

    Ensuring that a small institution has processes in place without pushing it towards documentary compliance, which may be superficial, and will likely be a distracting extra load

    Achieving equity: is this consistent with or contrary to diversity?

    1.3. External Reference Points

    In addition, AUQA took two initiatives of its own.

    i) Some people expressed concern about an audit solely against an institution's own objectives, lest it set low or inappropriate objectives. AUQA therefore looked for external reference points, and noted the five National Protocols for HE Approval Processes, also created by MCEETYA in 2000. Protocol 1 defined a university, so AUQA said (over some resistance) that it would also audit universities against this Protocol. This was a simple check, as the Protocol was quite minimal and general.

    ii) The other initiative was to recognise that people link quality and standards, and would likely therefore ask a quality agency to comment on university standards. AUQA therefore developed a set of questions to investigate at a sample of universities. They included:

    How are standards determined and updated?

    What processes are in place to assure consistent implementation of the standards?

    How are outcomes monitored?

    How are standards compared nationally and internationally?

    What is the result of these comparisons of outcomes or content?

    We found that few universities could answer these questions well. Common answers were:

    Examiners' meetings check grade distributions (i.e. they check internal consistency)

    Most of our courses have professional accreditation (i.e. we relinquish our responsibility for checking standards to external bodies)

    We are just starting on benchmarking (but with little to show for it)


    We have sample cross-marking by other institutions (only a couple of institutions were able to give this good answer)

    REAPING THE FRUIT (Woodhouse & Carmichael, 2005)

    How successful has AUQA been at supporting diversity in practice? The following relates ways in which various AUQA audit panels have come to terms with the specific nature of an institution. Each example presents a commendation of something the institution is doing well in terms of its own mission (purpose) and a recommendation of improvements needed to achieve those stated purposes.

    i) The Australian Catholic University has a goal within its Mission of engaging the social, ethical and religious dimensions of the questions it faces in teaching, research and service. This is a particularly distinctive Mission, and the Audit Panel found that ACU is substantially achieving this goal, and therefore commended the University for achieving this. In this way, the audit supported a distinctive feature of the institution (AUQA, 2002).

    ACU was formed from six colleges widely spread around the country, but it has aimed to achieve a unitary and national characteristic. The Audit Panel found that students did not feel part of a national institution, and therefore recommended that ACU consider how to formally engage the student body in the life of the University as a whole (AUQA, 2002).

    ii) The University of Queensland, on the other hand, has a major campus, on which it has devolved a good deal of responsibility to faculties. In many institutions, devolution has led to inconsistencies, but the Audit Panel found that UQ was implementing and successfully sustaining an effective devolution model, and commended the University for this (AUQA, 2003b).

    UQ is a research-oriented institution, and not surprisingly therefore speaks of the link between teaching and research. The Audit Panel found that this concept was not well thought through, and recommended that UQ clarify precisely what is meant by 'the distinctiveness of a research-based culture for teaching and learning' and design strategies to express this aspiration and to achieve the specific implied educational goals (AUQA, 2003b).

    iii) In 2004, the University of South Australia had more students enrolled overseas than any other Australian university. Therefore the Audit Panel investigated its quality systems in this area very thoroughly, and was able to commend the University for having a very effective quality assurance system for transnational programs (AUQA, 2004a).

    UniSA has been emphasising the achievement by its students of Graduate Qualities for longer than any other Australian university, so again this was a point of distinctiveness that engaged the Audit Panel's attention. Here the Panel found that the University was falling short of its aspirations, and recommended that UniSA continue to develop its conceptualisation of Graduate Qualities (AUQA, 2004a).


    iv) The Australian Maritime College (AMC) is a non-university, specialist institution that is highly regarded for the quality of its industry-based training and education programs in both vocational and higher education. At the time of the AUQA audit, AMC had set for itself the objective of gaining university status. Australia has a nationally agreed definition of a university to which AUQA refers in its university audits, but not being a university, AMC was not bound by these criteria. However, from the fitness-for-purpose perspective of a quality audit, it was entirely legitimate for the Audit Panel to audit the College against its own objectives, one of which was to become a university (AUQA, 2003a).

    The Audit Panel found that AMC was some distance from achieving this objective, and indeed would have to distort its mission by substantially shifting attention away from its VET programs into the provision of higher education (AUQA, 2003a).

    v) James Cook University (JCU) is located in northern Australia, and its distinctive vision is to be acknowledged by 2010 as one of the top five universities of the world enhancing life in the tropics through education and research. In its self-assessment, the University had benchmarked its performance as a tropical research university against other universities located in the tropics, using two well-recognised listings (the Shanghai Jiao Tong University Institute for HE survey, and the Thomson ISI/ESI database) (AUQA, 2004b).

    The Audit Panel verified the benchmarking data and concluded that the University was effectively achieving its research mission. The Audit Panel commended JCU for making substantial progress towards achieving its vision as a world-class tropical research university (AUQA, 2004b).

    RE-PLANTING (Woodhouse, 2009b)

    1.1. Keeping it Fresh

    Attitudes of institutions and institutional staff to external QA range along a spectrum, from antagonism ('intrusive, irrelevant, a waste of time'), through weary resignation ('more management nonsense; no point in fighting; just get through it as quickly as possible'), through making the best use of an unavoidable imposition ('this is not necessary, but if we have to do it let's include things that WILL be useful'), to welcome ('doing a thorough self-review and getting an independent critique is useful'). Institutions in the first two categories, and possibly the third, are likely to treat an external quality agency's (EQA's) requirements mechanically: play the game and get it over. Institutions in the third and fourth categories are likely to engage with the spirit of the requirements.

    By the end of a whole review cycle, however, institutions have a sense of what is required, and (even for categories 3 and 4) the process has become somewhat routine. Therefore, there is merit in keeping the process fresh. Change it enough so that people have to actively think about it. How much is enough? In moving from its first to second audit cycle, AUQA took the view that the change should not be so great that the audit seemed to be a totally different process. People rightly feel annoyed if they have to change a system for no apparent purpose. The changes we made were to narrow the scope and increase the emphasis on outcomes data and the academic achievement standards of students and graduates.

    To narrow the scope, we now select two themes for each audit. A theme might be, for example, international activities, or quality of teaching, or the first-year student experience, or the faculty of business, or programs in nursing, or workforce planning, or collaborative research.

    The problem in relation to the increased emphasis on standards is that, even though AUQA put institutions on notice in Cycle 1 with the questions on standards and benchmarking, Cycle 2 audits are still not providing firm figures on performance in terms of graduate achievement. Therefore, AUQA developed a proposal on how Australia might rectify this gap, and the work has been taken up by the Australian Learning and Teaching Council (ALTC).

    1.2. Making it Relevant: Examples from the USA

    Accreditation as a systematic approach to QA began in the USA over a century ago (Neubauer, 2008). It has of course changed over the decades, but about 15 years ago the US system was being stringently criticised for being irrelevant and/or too burdensome, and certainly not worth the cost. Program accreditation was seen as valuable, though the totality of it was an enormous load for each institution, it was largely input-oriented, and some accreditors made intrusive demands on the institutions that related more to a pre-conceived view than to remedying identified defects. Institutional accreditation gave rise to a massive self-study every 10 years or so, quickly forgotten and with little action in between. It did not adequately take account of institutional differences and characteristics, and the better institutions found it particularly irrelevant.

    Over the last 15 years, the six regional accrediting agencies, which between them cover the whole country, have reinvented themselves in various ways (Stella & Woodhouse, 2006). The Western Association of Schools and Colleges (WASC) has made the greatest changes to remain relevant. It has moved away from assuring compliance to engaging the institutions. In practice, this translated to a binary emphasis on institutional capacity and educational effectiveness. The Agency carries out two visits to an institution, one to assess capacity and another, some years later, when the capacity has had time to take effect, to assess the effect of this capacity. While the model continues to evolve, institutions have reported it to be both cheaper and more useful.

    The Southern Association of Colleges and Schools (SACS) took a simpler approach. The compulsory compliance with standards was checked off in a mainly paper exercise, and then the bulk of the accreditation process, including the visit to the institution, concentrated on a topic or area, selected by the institution, in which it wished to improve. (This is akin to a combination of AUQA's audit Cycles 1 and 2.) The drawback here was that institutions found that the scheme did not give rise to a lighter load, but a heavier one. In practice it seemed that the whole of a previous style of accreditation was done in addition to the review of the selected area.

    The third US accreditor worth mentioning in this context is the North Central Association (now renamed), which set up the Academic Quality Improvement Project (AQIP) for its more mature institutions. Here, through projects agreed with the Agency, and carried out over seven years, the institution both improves and shows that it meets the Agency's standards. There is an accreditation visit after seven years.

    These are some ways in which EQAs are changing so as to remain relevant to the institutions. Relevant EQAs are more likely to engage the institutions (in WASC's phrase), and engaged institutions are more likely to sustain their attention to quality in a meaningful, rather than superficial, way.

    FORCE MAJEURE

    Practices in HE are not driven by EQA actions alone. Often the government's actions are much stronger. For example, despite the Australian government's stated commitment to diversity of institutions, many of its policies drive homogeneity, so this is the likely outcome regardless of what AUQA, or any EQA, might do. Governments want institutions to concentrate on different things at different times: e.g. diversity, equity, standards. If the government owns or can direct the quality agencies, it can require the agency to require these things of the institutions. Government emphasis on standards achieved by students is now a feature of the US, Australian and British systems.

    Changes initiated by the Australian government in 2009 include a move to a so-called level playing field. Because non-university institutions have to be externally registered (accredited) periodically, universities also should periodically re-prove that they satisfy the conditions that earned them that status, i.e. be accredited.

    The government of a country that exports so much education is committed to measuring the standards that students achieve, to be able to prove internationally that these standards are good, and are not falling. This requires assessment.

    And along with these strengthened accreditations and assessments will continue the audit checks that the processes are sound and fit for purpose.

    So, in ten years, Australia has moved from a light-touch system, in which the Australian governments adopted the New Zealand audit model in their planning for AUQA, explicitly rejecting the USA accreditation and the UK discipline reviews, to a Total Quality Management System that includes all three (Woodhouse, 2009a):


    Audit against objectives, and

    Assessment against standards, and

    Accreditation against protocols,

    and institutions appear to be comfortable with the introduction of a hard-edged regulator with teeth, provided it is applied to someone else.

    This change has been described in some government material as a move from quality based on fitness-for-purpose to quality based on standards, apparently not realising that the enlarged and extended external reviews will simply be checking fitness for augmented purposes.

    THE QUALITY RESPONSE

    1.1. Post Mortem

    When I consider AUQA's second cycle of audits (we are now half-way through the five years), it is not clear that our reduction to two themes and emphasis on standards have been successful in maintaining freshness and attention. It would be unfair to say, pejoratively, that the universities have learned to play the game. Rather, they have learned that a trial audit adds value, that the self-review can be efficiently produced by a dedicated group, and so on. AUQA has a sense that the approach being taken is rather more routine and less engaged than we would have wished.

    Another approach would have been to base the audits on a full whole-of-institution audit, as in the first cycle, but to have investigated much more deeply a sample of small topics, such as the handling of plagiarism, the working environment for casual staff, or the implementation of courses according to the Australian Qualifications Framework.

    Or perhaps we should have changed the system completely!

    This leads me to consider the effectiveness of external quality agencies (EQAs).

    1.2. EQA Objectives

    EQAs consume resources. Not very much, in the scale of education or of oversight, but they must justify this use. What are the expectations on EQAs? The same questions can be asked about EQAs as about institutions: are you achieving your objectives (fitness for purpose), and were they the right objectives in the first place (fitness of purpose)? Some objectives of EQAs have been mentioned above, e.g.:

    Helping institutions improve

    Holding institutions accountable for funds


    Showing institutions have provided value for money

    Ensuring that programs satisfy any national specifications

    Closing down inadequate institutions

    Establishing international relations

    There are various ways of answering the questions about objectives and effect.

    One way is what we apply to the institutions themselves, namely an independent review. AUQA and many other agencies have now been subject to external review.

    AUQA requires from each auditee a report against the recommendations made in the audit report, and has found that the great majority have been carried out, giving further evidence of AUQA's effect.

    AUQA also surveys institutional staff, and the wider stakeholder community, with generally positive answers to the question: 'Do AUQA's audits and other activities make a difference?'

    40,000 Chinese students in Australia who would not be here were it not for AUQA (DEEWR)

    International visitors, invitations, presentations etc.

    There is solid and increasing evidence that EQAs are effective in achieving at least some of their objectives.

    1.3. Improvement and Accountability

    However, the question about EQAs' impact is often asked in a particularly slanted way: Have you made a difference to institutions? Have you improved student learning outcomes? Have you affected the ordinary academic? The universal conundrum of EQAs is that they are expected to combine improvement (of institutions) with accountability (of institutions), but the majority of the questioning about impact is on the QI part. Less often are EQAs asked: have you successfully held them accountable? So, EQAs are (unfairly) judged on only half their mission/purpose/objectives (the QI part).

    It is particularly unfair, because it is actually the other half (accountability) that is of most interest to their owners/drivers/governments. Of course, governments would like institutions to improve, but their main goals are for them to: do what the government wants, provide more value for money, do what the institutions say they will, and not get worse. Improvement comes no better than fifth.

    And furthermore, while the EQA can reasonably be judged on whether it has held institutions accountable, it is not solely or even primarily responsible for institutional improvement: that is the task of the institution itself.

    Obviously, an EQA can perform better or worse, but it is analogous to teaching and learning: teaching is vital, and there are good teachers and bad teachers, but ultimately learning is what the student does, and the teacher cannot force this to happen. That is why it is unfair to judge teachers on student performance alone. They should be judged partly on whether they do the generally accepted right things as teachers (process, pedagogy, scholarship) and partly on subsequent student performance and achievement. Similarly, EQAs should be judged partly on their adherence to good practice (e.g. the GGP, the ESG) and partly on the achievement of the pre-set objectives.

    'Is quality better than it was 20 years ago?' is the wrong question. The right question is 'How good is quality?' We have been able to show that quality is as good as was always claimed, but formerly without evidence.

    1.4. EQA & IQA

    The claimed failure of EQA to affect academics, their activities and the whole academic enterprise is actually a failure of the institutions. EQAs have deliberately tried to avoid intrusion and leave it to institutions to operate both their IQA and their side of the IQA/EQA liaison to best effect. Unfortunately, 'to best effect' is often interpreted as 'to make the best external impression', not 'to make the most improvement'.

    The benefits of ensuring alignment of internal and external QA are leading some EQAs to make common cause with IQA units on the basis of 'you are like us'. This is almost certainly unwise, as the academic who is sceptical about the QA movement will lump the IQA in with the EQA, and the EQA will still face a brick wall in interacting with the average academic.

    1.5. Some Wild Ideas for Procedural Change

    Perhaps we need to think more creatively, even wildly, about new ways of approaching systematic QA. Some years ago, the self-review, report, team visit, report, decision sequence became almost normative for external quality assurance. Indeed, this has proved culturally robust and has travelled well globally. Nonetheless, it is surely time for experienced QA practitioners to give thought to other possible models as we strive for creativity and diversity. The following are some wild ideas:

    1) An internal quality manager interacts with all departments in preparation for the visit; the EQA team consists of only one or two people, who interact with the internal QM

    2) Annual, or more frequent, visits by one or two people; unannounced (or with only a couple of weeks' notice); narrowly focused

    3) Constant monitoring: someone is embedded in the institution's operations for an extended period

    4) Restrict data-gathering and interviews to graduates/alumni, but interview a large number of them


    5) Mystery shopping

    It may be that none of these would work. Also, it is perhaps ironic to be proposing a move away from the standard sequence just as it has been built into requirements such as the EQAR and the ESG!

    CONCLUSION

    I once thought that EQA in any place would evolve from capacity-building, to assessment, then accreditation, and finally the light touch of quality audits for the mature institution. However, I now believe that systems are more likely to cycle. In Australia, for example, ultimately the state and federal governments have not been happy to have unbridled diversity in the HE system, and have created definitions of the various types of institutions, including universities, against which they must now be checked (i.e. accreditation); and have decided that institutions will also have to report publicly on their academic achievement standards (i.e. assessment).

    There is still scope for creativity and diversity on the part of institutions, but these will have to be exercised within different parameters. In turn, the EQA must be willing and able to hear different stories and consider using different methods within these parameters.

    REFERENCES

    1) AUQA (2002), Report of an Audit of Australian Catholic University, Australian Universities Quality Agency, Melbourne.

    2) AUQA (2003a), Report of an Audit of Australian Maritime College, Australian Universities Quality Agency, Melbourne.

    3) AUQA (2003b), Report of an Audit of The University of Queensland, Australian Universities Quality Agency, Melbourne.

    4) AUQA (2004a), Report of an Audit of the University of South Australia, Australian Universities Quality Agency, Melbourne.

    5) AUQA (2004b), Report of an Audit of James Cook University, Australian Universities Quality Agency, Melbourne.

    6) Woodhouse, D. (2007), Success Factors in the Implementation of Quality Assurance Systems in Institutions: A View from a Quality Agency, invited presentation to NCAAA conference, Riyadh, March.

    7) Woodhouse, D. (2009a), Total Quality Management: Evolution of a System, presentation to 4th Annual University Governance Conference and Regulations Forum, Canberra, 3 September.

    8) Woodhouse, D. (2009b), Sustaining Quality, invited keynote presentation to TEMC Conference on Sustainability in Tertiary Education, Darwin, 16 September.

    9) Woodhouse, D. & Carmichael, R. (2005), Accreditation and Audit, presentation to INQAAHE Conference, Wellington, May.