
A North American Neophyte’s Experience of Serving on EUA Evaluation Teams

Robin H. Farquhar
Professor Emeritus and former President, Carleton University, Ottawa, Canada

Seminar for the European University Association’s Institutional Evaluation Programme
Leuven, Belgium – October 2, 2004

Let me begin by expressing my thanks for the invitation to participate in this Seminar. I am pleased and honoured to be connected with the European University Association, to be getting inducted into the EUA Quality Assurance Pool, and to be joining this Panel today – for several reasons: you constitute a highly distinguished group of university leaders from all over Europe; we are engaged in a very important cause, and I commend the Association for leading it – especially on this occasion of the Programme’s tenth-anniversary celebration; and I’m humbly aware that I am one of only three non-Europeans, and the sole North American resident, in attendance here.

What I intend doing this morning is to briefly outline my present views on: the EUA Institutional Evaluation Programme in terms of its purpose, method, and orientation; the experience I’ve had to date as a participant in it; and the aspects of it that at this point I think could benefit from some further consideration. Then I look forward to being involved in a general discussion of our topic.

I must acknowledge at the outset several limitations on my qualifications to do this. First, I haven’t yet completed a full review cycle; I have done only two preliminary visits, just one main visit (my second one starts tomorrow after this Seminar ends), and no follow-up evaluation. Secondly, my involvement has been restricted to the current quality assurance project for Irish universities; this, as we’ve just learned, is a special case where our job (in conjunction with three other teams) is to assess the effectiveness of the institutions’ efforts toward quality improvement, as part of a comprehensive governmentally-mandated process. And thirdly, my background is North American, which means that I am not fully familiar with pertinent developments on the European scene – although it is of some relevance that I have devoted three-quarters of my forty-year career to higher education management (half of that as President and Vice-Chancellor at two Canadian universities), that I have participated actively in international academic bodies (especially in the Commonwealth and inter-American arenas), and that I have been engaged in some ten institutional reviews through the Salzburg Seminar’s Universities Project (mainly in various countries of Central/Eastern Europe and the former Soviet Union); but what I have to say will naturally be constrained by my largely North American (and especially Canadian) perspective. Yet, let me now wade into our discussion despite these limitations.

EUA’s Programme

I understand that the main purpose of EUA’s Programme is to promote the institutionalization of a quality culture and the sustenance of positive strategic change at the universities involved, drawing on the insights of non-threatening external experts and adapting to the particularities of institutional context. This is fundamentally different from most of our North American efforts at institutional evaluation, which are intended mainly for consumer protection and confer what we call a “good housekeeping seal of approval” on the successful universities.
The EUA approach seems largely formative and future-oriented (focussing primarily on improving quality, with consumer protection being an assumed by-product), whereas our North American approach tends typically to be mainly summative and accountability-oriented (ensuring that minimum standards are met, with quality improvement being an aspirational by-product). So unlike many European universities, most of their North American counterparts do not usually have quality offices, quality managers or quality committees at the institutional level, nor do they have national and international councils and associations to facilitate that work. Thus, in the final report of a North American quality review the most important section comprises the “Conclusions”, whereas in the EUA product it is probably the “Recommendations” that best attract participating leaders’ attention.

With respect to method, various approaches are taken to quality appraisal in North American universities, including institutional accreditation, state licensure, professional certification, program review, stakeholder feedback, and “league tables”. They vary in terms of the evaluating agent (which may be a state government, a professional association, a single institution or consortium of institutions, or some other party like the media, students, employers, etc.), in terms of scope (which may comprise a particular discipline or service, a total institution, or an entire state, region, country, or international collectivity), and in terms of motivation (participation may be voluntary or mandatory, initiatives may arise internally or be externally imposed, and sanctions may be negligible or powerful – such as funding implications, program discontinuation, and reputational damage). The EUA approach, as I understand it, relates most closely to what we call program review (as opposed to accreditation, licensure, or certification) and is applied to both academic operations and administrative services. But given the diversity within Europe, it is likely that the agent, scope and motivation for a review will vary widely from one case to another; and it is therefore important that those conducting it be familiar with these contextual variables as they apply to each institution involved.

Concerning orientation, the attitudes of various constituents toward a quality review can have a huge impact on its effectiveness. So it is worthwhile to try and determine the answers to such questions as: Do those at the host university welcome it, dread it, or see it as a nuisance – and if all of the above, who falls where? Do those on the evaluation team do it to be helpful to the institution, to discharge a duty to the “guild”, or to advance their own circumstances? And what is the wider perception of the review’s credibility, stature, and significance? My impression so far is that the orientation toward EUA’s Institutional Evaluation Programme, while containing all of the elements I just mentioned, is largely favourable: it is usually welcomed by the university, undertaken by the team mainly to be helpful, and viewed more widely as an important, respectable and positive endeavour. Too often in North America, on the other hand, our efforts at quality assessment are perceived as necessary evils that must be tolerated but are not embraced.

In essence, then, I admire the EUA Institutional Evaluation Programme and believe that we in North America can learn much of value from it. I think that the EUA approach has the better potential to foster improvement in the quality of higher education.

My Experience

So far, my experience in this Programme has certainly been positive. I am impressed with the EUA’s dynamic, flexible, and catalytic approach to institutional evaluation. The intentions are constructively focussed on quality improvement; the process is comprehensively engaging, sensibly sequenced, and realistically scheduled; the participants are positively motivated, capably diligent, and mutually respectful; the documentation is thorough, relevant, and clear; the administration is distinguished by excellent logistical management from the Association’s Secretariat, competent multifarious support from the team secretary, and detailed procedural guidelines for the main participants; and the overall Programme is conceptually intelligent, operationally useful, and strategically important.

I also like the fact that it is multinational, independent, voluntary, peer-driven, institution-based, and non-profit in nature. I am pleased with its balance between standardization and flexibility, with its responsiveness in the interest of continually improving itself, and with its dedication to cooperatively advancing higher education across national boundaries while remaining sensitive to and respectful of genuine (and desirable) diversity. So I’m proud to be part of this “movement”, happy to be here this weekend, and hopeful of learning much of value through my participation.

These favourable impressions are, of course, based only on the very limited experience that I have had to date. However, I suspect that they will be strengthened as my involvement deepens in the future.

Possible Concerns

I do, nevertheless, have some cautions to raise (preliminary though they may be). So let me conclude by mentioning half-a-dozen of them:

1. One concern is the risk of “paralysis by analysis”. An Irish university with which I am currently involved through EUA has, within the past year or so, been consumed by its own quality reviews of numerous academic and administrative units (all comprising self-appraisals, peer reviews, unit responses, management commentaries, governing authority submissions, plans of action, and progress reports), by a comprehensive external appraisal of institutional management, by a system-wide OECD review of higher education in the country, by an extensive Washington Advisory Group consultation for strategic planning purposes, and now by our own institutional evaluation – not to mention the continual demands for assessments emanating from governing authorities, professional bodies, and the national government, councils and agencies. One cannot help wondering if this is perhaps too much, if it directs resources and attention away from other important priorities rather than contributing instrumentally to the institution’s ongoing strategic planning efforts to manage quality improvement, or if it has become dysfunctionally costly, exhausting, diverting, confusing, intimidating and unsettling. There may also be dangers that such a system will collapse under its own weight, that the means of auditing quality will displace the end of improving it, or that measurable mediocrity will supersede imprecise excellence.

2. A second concern is the risk that establishing quality offices and managers may hinder rather than foster the development of a quality culture, in that the responsibility for quality matters may become seen as lying with them and thus not of great interest to anyone else on campus (except, perhaps, episodically rather than continuously): “Quality? We have an agency to take care of that – it is not my problem!” Moreover, the work of such units may become overly routinized, standardized, inflexible and unresponsive – or disconnected from an institution’s other quality assurance/quality improvement processes (including human resource management). And their reviews may be insufficiently critical, probing or data-based, their scope may be insular or fragmented, expectations held for them may be unrealistic, disappointing responses to them may be demoralizing, implementation assignments may be unclear, or follow-up and feedback may be inadequate.

3. Another concern is the possibility that an institution’s quality office will be misplaced within the organizational structure so that it becomes isolated from the producing units, buried among the bureaucratic layers, or lodged in the wrong portfolio – rather than being integrated into the strategic planning function, linked to the resource allocation process, and articulated with the senior management group. There is also a chance that the office – or indeed, the entire process – will be under-supported to do its job.

4. Turning to the EUA Programme itself, a fourth concern is the need to further clarify (in written guidelines and in participants’ understanding) what it is that we are supposed to review. Are we expected to assess the quality of an institution’s performance in various areas against international standards? Alternatively, are we expected to assure the validity of such assessments as carried out by the university itself? Or are we expected to audit the institution’s systems for doing these two things, to ensure that they are being done well and are having a progressive impact on the quality of performance? I have seen evidence in my limited experience of all three kinds of expectations; yet they differ fundamentally in nature and have widely variable implications for the composition, costs, and procedures of a review team. If the EUA intends to offer all three kinds of institutional evaluation services, then perhaps it should distinguish more clearly among their respective purposes, natures, and approaches – both conceptually and operationally.

5. A further concern is the desirability of clearly differentiated role definitions that will normally apply to EUA review team members. While those for the team leader and secretary are reasonably explicit in the current guidelines, the kinds of tasks and responsibilities that individual team members may be called on to assume could benefit from further specification to ensure that there are no unpleasant surprises in situ. If we are expected to participate in writing reports, leading discussions, conducting meetings, etc., these expectations should be made known in advance – especially for newly-appointed pool members.

6. A final concern is the importance of recognizing that this process and its product (especially the final report) can be used, internally and/or externally, for purposes not directly related to the EUA Programme’s objectives – such as propagandist advocacy, public relations, policy influence, etc. (some of these being more valid than others). While it might be lamentable if our work is used to promote extraneous causes, it is probably unavoidable – so our teams should anticipate that it will likely happen and then try to steer it, as far as possible, in legitimate directions.

Let me emphasize that none of the concerns I have mentioned detracts from my optimistic view of the enterprise in which we are engaged. I offer them with the suspicion that they are already familiar to you, and in the spirit of learning from our own experiences (limited though mine may be) – which is another commendable hallmark of this Programme.

So there you have the tentative observations of a neophytic North American inductee. Whether or not you find them useful, I thank you for listening. I’m glad to be with you.
