Selecting Student-Authored Questions for Summative Assessments
bioRxiv - Scientific Communication and Education. Pub Date: 2020-07-29. DOI: 10.1101/2020.07.28.225953
Alice Huang, Dale Hancock, Matthew Clemson, Giselle Yeo, Dylan J Harney, Paul Denny, Gareth Denyer

Production of high-quality multiple-choice questions (MCQs) for both formative and summative assessment is a time-consuming task requiring great skill, creativity, and insight. The transition to online examinations, with the concomitant exposure of previously tried-and-tested MCQs, exacerbates the challenges of question production and highlights the need for innovative solutions. Several groups have shown that it is practical to leverage the student cohort to produce a very large number of syllabus-aligned MCQs for study banks. Although student-generated questions are well suited to formative feedback and practice activities, they are generally not thought to be suitable for high-stakes assessments. In this study, we aimed to demonstrate that training can be provided to students in a scalable fashion to generate questions of similar quality to those produced by experts, and that identification of suitable questions can be achieved with minimal academic review and editing. Biochemistry and Molecular Biology students were assigned a series of activities designed to coach them in the art of writing and critiquing MCQs. This training resulted in the production of over one thousand MCQs, which were then gauged for potential either by expert academic judgement or via a data-driven approach in which the questions were trialled objectively in a low-stakes test. Questions selected by either method were then deployed in a high-stakes in-semester assessment alongside questions from two academically authored sources: textbook-derived MCQs and past paper questions. A total of 120 MCQs from these four sources were deployed in assessments attempted by over 600 students. Each question was subjected to rigorous performance analysis, including the calculation of standard metrics from classical test theory and more sophisticated Item Response Theory (IRT) measures. The results showed that MCQs authored by students and selected at low cost performed as well as questions authored by academics, illustrating the potential of this strategy for the efficient creation of large numbers of high-quality MCQs for summative assessment.
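The preprint's analysis code is not reproduced on this page. As a rough illustration of the classical test theory metrics the abstract mentions, the Python sketch below computes two standard item statistics for a dichotomously scored response matrix: item difficulty (proportion of students answering correctly) and point-biserial discrimination (correlation of each item's score with the rest-of-test score). The data, function name, and rest-score criterion are assumptions made for the example, not the authors' pipeline.

import numpy as np

def item_analysis(responses):
    """Classical test theory statistics for a 0/1-scored item bank.

    responses: (n_students, n_items) array, 1 = correct, 0 = incorrect.
    Returns each item's difficulty (proportion correct) and point-biserial
    discrimination (correlation of the item with the rest-of-test score).
    """
    responses = np.asarray(responses, dtype=float)
    total = responses.sum(axis=1)
    difficulty = responses.mean(axis=0)        # higher value = easier item
    discrimination = np.empty(responses.shape[1])
    for j in range(responses.shape[1]):
        rest = total - responses[:, j]         # criterion excludes item j itself
        discrimination[j] = np.corrcoef(responses[:, j], rest)[0, 1]
    return difficulty, discrimination

# Toy scored-response matrix: 6 students x 3 MCQs (illustrative data only).
scored = np.array([
    [1, 1, 0],
    [1, 0, 1],
    [0, 1, 0],
    [1, 1, 1],
    [0, 0, 0],
    [1, 0, 1],
])
p, r_pb = item_analysis(scored)
print("difficulty:", p)          # mid-range values indicate usable items
print("discrimination:", r_pb)   # positive values flag discriminating items

An IRT analysis of the kind the paper describes would additionally fit a latent-ability model (for example, a two-parameter logistic) to the same response matrix; that step typically requires a dedicated package and far more responses than this toy example provides.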

Updated: 2020-07-30