
A common test for medical students in Norway

Jan C. Frich, Stine Andersen, Anders Bærheim, Maren Ellingsen, Gard A. Skulstad Johanson, Oda Lockert, Henrik Schirmer, Tobias S. Slørdahl

A national test was arranged in March 2017 for medical students in their final year. The test was well attended, and the results indicate that the academic level is consistent across all Norwegian faculties.

In 2014, the deans of the four medical faculties commissioned a preparatory study for a national test (1). In March 2015, the national educational assembly in medicine established a project group whose remit was to prepare a written, digital test in clinical reasoning for the final semester of the study programmes. In December 2015, the Assembly of Deans decided that a partial medical test should be conducted (1, 2). One main argument in favour of the partial test was that it would inform the students about their academic level and provide an opportunity to compare the students’ performance across faculties.

As part of the development of a national test in medicine, we here share our experience from the first pilot, which was arranged in March 2017.

National academic committees

The subjects selected for the pilot included cardiology, thoracic surgery, gastroenterology, gastric surgery, nephrology and urology (1). The questions were intended to target the competence requirements in the study programmes and to include perspectives from general practice.

In 2016, three national academic committees were established, each with ten members who prepared the test items. The question type selected was multiple-choice, with four response alternatives and one correct answer (1). A separate manual was developed for the academic committees (3). The committees produced a total of 138 items, which were externally peer-reviewed by a total of nine doctors (specialty registrars, specialists in the subject in question and specialists in general practice). The peer reviewers identified a need to adjust one in three items, and some suggestions for deletions were put forward. A total of 12 items were deleted, and the academic committees made a number of adjustments to the items.

Attendance and implementation

The national test was not mandatory, but it was well attended. Altogether 319 (83 %) of 384 possible candidates sat the test (4). The attendance rate ranged from 73 % to 94 % across the places of study. A digital solution was chosen for conducting the test; this was technically challenging, but the solution functioned well, although its user-friendliness was not optimal. The test consisted of 120 items, with an estimated time frame of 120 seconds per item. As it turned out, few students spent more than 75 seconds per item (4). The items and the set of correct answers were published online immediately after the test had been arranged (5).

Assessment and feedback

The students were invited, individually and as a group, to give feedback on the test as a whole, on individual questions and on the set of correct answers with explanations. The students provided feedback on 40 of the items. The project group served as the assessment committee. The committee’s assessments of the items and its conclusions regarding the students’ feedback were summarised in a memo that was published online (6). It was decided to delete six questions from the test, and the set of correct answers was changed for five questions where two response alternatives were deemed to be correct.

The average correct score was 71 %, with a range from 41 % to 91 %, and the test discriminated well (4). Approximately two weeks after the test, the students received individual feedback with information about their scores in the different subjects. The students also received the average score percentages for all candidates nationwide, as well as their own performance relative to the other candidates at their own faculty. The average total score across the faculties varied by 4 %. Each faculty has been provided with detailed data for the various subjects for closer analysis and processing.

Evaluation of the test

After the test, all the students who had participated received a digital evaluation form with questions about the objective, content and arrangement of the test. Altogether 152 (48 %) of 319 students responded (4). The response rate is low, so the results carry a degree of uncertainty. A total of 84 % of the respondents stated that they took a positive view of the national test, and 68 % welcomed further efforts to establish a common, written final examination. In free-text feedback, students pointed out that some of the questions tested specialist knowledge, and that the test should rather test skills that a newly graduated doctor would be expected to master. The students called for more practice exercises, better proofreading of the items, including standardisation of terminology and reference values, and a more user-friendly system for administering the test. Some also pointed out that the test should be made mandatory in order to reveal real differences across the faculties.

Considerations and further plans

Our experience from the pilot shows that it is academically and technically feasible to conduct a digital, national test in the final year of medical study programmes. The national academic committees represent a new arena for dialogue on the content and expected learning outcomes of the studies in various medical subjects. A large proportion of the students chose to sit the test, and a majority of those who responded saw it as a positive measure. Peer review of the items provided quality assurance in many cases and helped clarify the set of correct answers with explanations. Feedback also showed that there is room for further improvement.

The question format and use of digital tools made it possible to provide feedback to all students on their performance within two weeks of the test. The set of correct answers with explanations, including the adjustments made after the assessment, is freely available online and represents a learning opportunity for the students who sat the test as well as for others. The attendance rate varied from one educational institution to another, and different student groups had varying experience with the question format and with sitting a digital test. It is thus difficult to draw any conclusions about real differences across the educational institutions on the basis of a single pilot. We may also assume that, over time, the academic level will vary from one student cohort to the next within the same educational institution.

In June 2017, the deans decided that the national test in medicine should be continued, with an extended test being mandatory for all graduating cohorts in medical study programmes in Norway from the spring of 2018. The partial test is not intended to replace clinical and oral examinations. In some countries with a longer tradition of national tests and examinations, the testing of theoretical knowledge is combined with a test of clinical skills. In the longer term, we may see a similar development in Norway.

We wish to thank the other members of the project group, Eirik Dalheim, Kristin Elisa Ruud Hansen, Elin Holm, Marte Laugen, Linda K. Røine, Hanne-Guro W. Aabelvik and Eivind A. Valestrand (member until the end of 2016), as well as all others who have contributed to this work in various ways.
