The Universities Medical Assessment Partnership (UMAP) has existed since January 2003 and is a collaborative response to the need to share effort and raise standards in undergraduate medical assessment. The current partnership includes Leeds, Liverpool, Manchester, Newcastle and Sheffield.
A further ten partners are to join UMAP over the next 12 months. Here Andrea Owen reports on a workshop, funded by the subject centre, that was part of the UMAP project’s suite of question writing workshops – Writing questions for the UMAP bank: The value of professional variety.
The UMAP partnership came about when a group of schools decided that, by working together, it was possible not only to improve quality but also to put an end to schools working individually to produce what might be very similar questions. Collaboration would mean that partner schools could pool co-written assessment items, giving access to a greater number of items whilst avoiding reinventing the wheel.
There was much planning at the beginning of the project to ensure that UMAP made an effective choice of formula for question construction. The largest body of evidence on item writing practice comes from the National Board of Medical Examiners (NBME) in the United States. The NBME has a longstanding written assessment framework through which a vast amount of data has been generated. Because of the proven success of the NBME style of multiple choice questions (MCQs) and extended matching questions (EMQs), UMAP chose to concentrate on these formats. The NBME’s method of question writing was therefore of immediate interest, and the formula developed by Susan Case and David Swanson has been adapted and implemented across the UMAP programme of question writing workshops.
The workshop is where everything begins. Participants are asked to consider a series of ‘best practice’ guidelines before looking within their own areas of expertise for potential topics that could be translated into scenario-based MCQs or EMQs. Sessions are designed so that representatives of different subjects are brought together, and participants are encouraged to discuss and define scenarios that mingle topic areas. Where subjects offer a natural crossover, pairings are encouraged: biological scientists compare notes with surgeons, psychologists work with palliative care specialists, nurses with general practitioners, ethicists with genetics specialists, and so on. The list of complementary pairings continues to grow.
The workshops are the main source of new questions for the bank, and it is here that the quality assurance process begins. Session participants are given question writing templates to guide their writing and to help their opposite pair assess the final product. In the final segment of the workshop, members are asked to dissolve their original pairings and seek out a contrasting partnership, our preferred method for reaching agreement over common, core knowledge. Pairs then compare notes on each other’s questions, considering key elements of structure and content. Notes are made, feedback is delivered to authors and updates are applied. If consensus on the content or the correct answer cannot be reached, the question is queried.
UMAP were very keen to set up a structure that would build on the quality assurance process begun at the workshops. Central committees, tiered decision-making meetings and review by email were among the methods trialled and rejected. After continued consultation with our project consultants, Cees van der Vleuten and Lambert Schuwirth, a localised review team structure was chosen. The networked team approach has been very successful, in large part because of the access it affords to a varied selection of subject specialists. Each partner medical school runs two teams of four reviewers, each consisting of a range of subject representatives, from pathology to paediatrics, ethics to epidemiology. In addition, this system ensures that questions written at one school can be reviewed, blind, by another. This avoids the ‘not invented here’ problem, which has been highlighted elsewhere for its discriminatory effects.
The UMAP team collects a variety of data that helps develop our practices. We can look at results data and decide what it shows about the quality of our questions and whether revisions are required.
We can also determine, for example, which topic areas are most vulnerable to our low attrition rate, and which questions perform best with which student cohorts. However, the greatest level of interest is generated by which authors write the best questions.
In our analysis of the data returned after an exam has run, we look at how well each question has performed: that is, whether the top group of students has outperformed the bottom group on that item. Should the bottom group outperform the top group, this highlights a potential problem with the question.
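One common way to express this check (offered here purely as an illustration, not as a description of UMAP’s own analysis software) is an upper-minus-lower discrimination index: the proportion of the top-scoring group answering an item correctly minus the proportion of the bottom-scoring group, with negative values corresponding to the problem case described above. The sketch below uses invented per-student exam totals and per-item correctness flags.

def discrimination_index(total_scores, item_correct, group_fraction=0.27):
    """Upper-minus-lower discrimination index for a single question.

    total_scores: each student's overall exam score.
    item_correct: 1 if that student answered this question correctly, else 0.
    group_fraction: share of the cohort forming the top and bottom groups.
    """
    ranked = sorted(range(len(total_scores)),
                    key=lambda i: total_scores[i], reverse=True)
    n = max(1, int(len(ranked) * group_fraction))
    top, bottom = ranked[:n], ranked[-n:]
    p_top = sum(item_correct[i] for i in top) / n
    p_bottom = sum(item_correct[i] for i in bottom) / n
    return p_top - p_bottom  # negative values flag a potentially flawed item

# Hypothetical cohort of ten students: exam totals and answers to one question.
scores = [92, 88, 85, 80, 76, 70, 65, 60, 55, 40]
correct = [1, 1, 1, 1, 0, 1, 0, 0, 0, 0]
print(discrimination_index(scores, correct))  # positive here: the top group did better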
With this information we are able to provide feedback to authors each time a question has been included in an exam.
Authors are often keen to know how well their questions have performed alongside others’, and so we offer them a place ranking. In addition, the central UMAP office updates its master list of item performance, which details the best authors overall. This year’s results make for interesting reading.
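Again purely as a sketch (the author names and figures below are invented, not UMAP data), a place ranking of this kind could be produced by averaging each author’s item statistics and sorting:

from collections import defaultdict
from statistics import mean

# Invented records: (author, discrimination index) for each banked item used in an exam.
item_stats = [("Author A", 0.35), ("Author B", 0.10),
              ("Author A", 0.42), ("Author C", 0.28)]

by_author = defaultdict(list)
for author, d in item_stats:
    by_author[author].append(d)

# Rank authors by the mean discrimination of their items, best first.
ranking = sorted(by_author.items(), key=lambda kv: mean(kv[1]), reverse=True)
for place, (author, ds) in enumerate(ranking, start=1):
    print(place, author, round(mean(ds), 2))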
UMAP have recently been funded by the JISC to investigate digital repositories for better sharing of assessment items.
MCQ – Best authors
EMQ – Best authors
For more information: firstname.lastname@example.org