Examining the reliability of Adaptive Comparative Judgement (ACJ) as an assessment tool in educational settings
International Journal of Technology and Design Education ( IF 2.0 ) Pub Date : 2021-02-23 , DOI: 10.1007/s10798-021-09654-w
Richard Kimbell

Conventional approaches to assessment involve teachers and examiners judging the quality of learners' work against lists of criteria or other 'outcome' statements. This paper explores a quite different method of assessment, 'Adaptive Comparative Judgement' (ACJ), developed within a research project at Goldsmiths, University of London between 2004 and 2010. The method was developed into a tool that enabled judges to distinguish better and worse performances not by allocating numbers through mark schemes, but by direct, holistic judgement. The tool was successfully deployed in a series of national and international research and development exercises. But game-changing innovations are never flawless first time out (Golley, Jet: Frank Whittle and the Invention of the Jet Engine, Datum Publishing, Liphook, Hampshire, 2009; Dyson, Against the Odds: An Autobiography, Texere Publishing, Knutsford, Cheshire, 2001), and a series of careful investigations identified a problem within the workings of ACJ (Bramley, Investigating the Reliability of Adaptive Comparative Judgment, Cambridge Assessment Research Report, Cambridge, UK, 2015). The issue lay in the 'adaptive' component of the algorithm, which under certain conditions appeared to exaggerate the reliability statistic. The problem was 'worked' by the software company running ACJ and a solution found. This paper reports the whole sequence of events: the original innovation, its deployment, the emergent problem, and the resulting solution, which was presented at an international conference (Rangel Smith and Lynch, in: PATT36 International Conference. Research & Practice in Technology Education: Perspectives on Human Capacity and Development, 2018) and subsequently deployed within a modified ACJ algorithm.
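The abstract does not reproduce the ACJ algorithm itself, but the underlying idea of comparative judgement (building a rank order of pieces of work from many holistic better/worse decisions) is commonly modelled with a Bradley-Terry-style pairwise model. The sketch below is illustrative only: it is not the ACJ software's implementation, and all function names and parameters are assumptions for the purpose of the example. It simulates judges making pairwise choices over items of known quality, then recovers ability parameters (and hence a rank order) by gradient ascent on the Bradley-Terry log-likelihood.

```python
import math
import random

def simulate_judgements(true_quality, n_rounds, rng):
    """Simulate holistic pairwise judgements: for each randomly drawn pair,
    the judge picks a winner with probability given by a Bradley-Terry
    model on the (hidden) true qualities. Not the ACJ adaptive pairing
    rule -- pairs here are chosen uniformly at random."""
    n = len(true_quality)
    results = []
    for _ in range(n_rounds):
        a, b = rng.sample(range(n), 2)
        p_a_wins = 1.0 / (1.0 + math.exp(true_quality[b] - true_quality[a]))
        winner, loser = (a, b) if rng.random() < p_a_wins else (b, a)
        results.append((winner, loser))
    return results

def estimate_abilities(n_items, judgements, n_iter=200, lr=0.1):
    """Fit Bradley-Terry ability parameters by gradient ascent on the
    log-likelihood of the observed win/loss outcomes."""
    theta = [0.0] * n_items
    for _ in range(n_iter):
        grad = [0.0] * n_items
        for w, l in judgements:
            p_w = 1.0 / (1.0 + math.exp(theta[l] - theta[w]))
            grad[w] += 1.0 - p_w   # winner's ability pushed up
            grad[l] -= 1.0 - p_w   # loser's ability pushed down
        theta = [t + lr * g for t, g in zip(theta, grad)]
        mean = sum(theta) / n_items
        theta = [t - mean for t in theta]  # anchor the scale at mean 0
    return theta

rng = random.Random(0)
true_quality = [0.0, 1.0, 2.0, 3.0, 4.0]   # five items, increasing quality
judgements = simulate_judgements(true_quality, 400, rng)
theta = estimate_abilities(len(true_quality), judgements)
ranking = sorted(range(len(theta)), key=lambda i: theta[i])
print(ranking)  # worst-to-best rank order recovered from pairwise judgements
```

The reliability issue the abstract describes concerned the *adaptive* pairing step (choosing which pair to judge next based on current estimates), which this sketch deliberately omits by pairing at random; Bramley's 2015 analysis showed that adaptivity could inflate the reliability statistic under certain conditions.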


