Measuring educational and training environments as part of QA

Introduction

The learning and teaching environments in which health professions education and training occur have come into focus in the UK: both the GMC and PMETB have introduced national surveys through which postgraduate students and trainees can feed back their perceptions of the logistics and calibre of their experiences to centralised quality control systems.

These initiatives have clear educational advantages and are compatible with the increased emphasis on local detection and management of disciplinary issues foreshadowed by the recent Donaldson Report on the future of medical regulation. In recent years deaneries have used trainee questionnaires; these new national initiatives will give respondents an input into UK-wide quality assessment.

Several years ago we developed a Postgraduate Hospital Educational Environment Measure (PHEEM) using grounded theory approaches, including focus groups and Delphi processes. The PHEEM has 40 items and has been found to have high validity and reliability in administrations throughout the UK, the Netherlands, Australia and Japan. Variations on it have been developed using the same methodologies to measure trainee perceptions of the Surgical Theatre Education Environment (STEEM, adapted in Canada as OREEM for the Operating Room) and the Anaesthesia Theatre Teaching Environment (ATEEM), and for GP VTS training in Ireland.

This work builds on the Dundee Ready Education Environment Measure (DREEM) developed in the Centre for Medical Education in Dundee in the late 1990s. DREEM has now been translated into more than 20 languages, including Chinese, Arabic, Malay, Portuguese, Dutch, Spanish and, most recently, Persian. Its 50 items help to diagnose problems in the students' perceptions of the undergraduate curriculum; with minor terminological changes it can be used in any of the health professions. It is increasingly used longitudinally, and the data can be analysed by gender, year of study, ethnicity or any other variable that is programmed into the demographic section. Many schools are now using electronic versions.

The DREEM gives a global score (out of 200) for the 50 items, and has five sub-scales relating to:

  • Students' Perceptions of Learning
  • Students' Perceptions of Teachers
  • Students' Academic Self-Perceptions
  • Students' Perceptions of Atmosphere
  • Students' Social Self-Perceptions.

It has a consistently high reliability despite the widely varying cultural contexts in which it has been used, and data can be collected and analysed according to variables such as year of study, ethnicity, gender, age, and courses/attachments. Pololi and Price (1) have since developed a 31-question survey with three subscales in four US medical schools. Their inventory does not claim to be culturally non-specific, nor generic to the health professions beyond medicine; however, the two inventories share several items, which might be taken as an indication that there is something generic about what is considered to be an effective educational environment in the undergraduate health professions. The research literature is building a picture of the norms we should expect in a constructive undergraduate educational environment. DREEM has been used in nursing schools in the Middle East and Thailand, in dental schools (including hygienists) in Malaysia and Pakistan, as well as in a wide range of medical schools.
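To illustrate how DREEM responses are typically scored, the sketch below assumes the standard scoring scheme (50 items, each rated 0-4 on a Likert scale, giving a global maximum of 200, with subscale item counts matching the published instrument). The item-index-to-subscale mapping shown is illustrative only, not the official assignment:

```python
from statistics import mean

# DREEM: 50 items, each scored 0-4, so the global maximum is 200.
# Subscale item counts follow the published instrument; the item-index
# ranges here are an illustrative, not official, assignment.
SUBSCALES = {
    "Perceptions of Learning": range(0, 12),    # 12 items, max 48
    "Perceptions of Teachers": range(12, 23),   # 11 items, max 44
    "Academic Self-Perceptions": range(23, 31), #  8 items, max 32
    "Perceptions of Atmosphere": range(31, 43), # 12 items, max 48
    "Social Self-Perceptions": range(43, 50),   #  7 items, max 28
}

def dreem_scores(responses):
    """Return (global score out of 200, per-subscale totals) for one
    respondent's 50 item scores, each in the range 0..4."""
    if len(responses) != 50 or any(not 0 <= r <= 4 for r in responses):
        raise ValueError("expected 50 item scores in the range 0-4")
    subscale_totals = {
        name: sum(responses[i] for i in idx)
        for name, idx in SUBSCALES.items()
    }
    return sum(responses), subscale_totals

# A cohort mean is then simply the mean of individual global scores:
cohort = [[3] * 50, [2] * 50]  # two invented respondents
print(mean(dreem_scores(r)[0] for r in cohort))  # 125
```

Demographic analysis (by gender, year of study, and so on) then reduces to grouping these per-respondent scores by the relevant variable from the demographic section.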

Ifere (2) administered DREEM to 127 Nigerian medical students in years 4, 5 and 6 and was able to identify their perceptions of the strengths and weaknesses of the medical school, which had a mean total of 118/200; there were statistically significant gender and academic year differences in the results. Similarly, Bhattacharya administered the DREEM to 86 students in years 1, 2 and 3 of a Nepalese Health Sciences Institute and reported a mean total of 130/200, also with significant gender and academic year differences. In the West Indies, DREEM was administered to 70 final year and 36 first year medical interns, with a reported total mean of 110/200 and various specific findings for the sub-groups within the cohort.(3) Till administered the DREEM to 407 Canadian chiropractic students in Years 1, 2 and 3 and reported sharply declining overall means for each year - 111/200 for Year 1, 97/200 for Year 2 and 78/200 for Year 3 - with individual items and sub-scales indicating clearly where remediation was required.(4) UK medical schools tend to score around 130/200 - good, but with considerable room for improvement.

If DREEM data are correlated with academic results, they may be usable prospectively to predict which students are struggling in a given educational environment. One study administered DREEM to 508 medical students in the clinical years at an Indian medical school and found that DREEM scores were significantly higher for academic achievers as defined by their GPAs.(5) Similarly, Sun administered the DREEM in Chinese to 885 students at his medical school and found a statistically significant difference (p<0.01) between the mean DREEM scores of high academic achievers (123/200), middling academic achievers (118/200) and low academic achievers (113/200), although there were no statistically significant differences in mean scores between males (118/200) and females (119/200) or between the two types of courses the students were studying.(6)

Further work will be undertaken to establish whether the DREEM can be reliably used to identify various types of academic achievers, and even perhaps to predict the probable academic outcomes of particular individuals and subgroups in the absence of intervention. While a poor perception of the educational environment may not necessarily correlate with poor academic performance, we hypothesise that it is likely to do so.

The DREEM was purposefully developed as an international, generic instrument that is not culturally specific to a given region. In Thailand, Wangsaturaka has explored the utility of developing culturally specific instruments for a given country's undergraduate medical education, and has also investigated whether there are phases within the undergraduate curriculum that require different inventories in order to provide sufficiently sensitive quality assessment data for the managers of a nation's medical schools.(7)

Virtually all of the development and validation work for these instruments has been undertaken by Masters and PhD students in the Centre for Medical Education. The instruments are fully in the public domain and you're welcome to use them if you think they will be a useful part of your quality assessment portfolio.

For more information: s.l.roff@dundee.ac.uk

References

MEDEV, School of Medical Sciences Education Development,
Faculty of Medical Sciences, Newcastle University, NE2 4HH
