Accuracy of Automated Written Expression Curriculum-Based Measurement Scoring
Canadian Journal of School Psychology (IF 1.370), Pub Date: 2021-01-19, DOI: 10.1177/0829573520987753
Sterett H. Mercer, Joanna E. Cannon, Bonita Squires, Yue Guo, Ella Pinco

We examined the extent to which automated written expression curriculum-based measurement (aWE-CBM) can be accurately used to computer score student writing samples for screening and progress monitoring. Students (n = 174) with learning difficulties in Grades 1 to 12 who received 1:1 academic tutoring through a community-based organization completed narrative writing samples in the fall and spring across two academic years. The samples were evaluated using four automated and hand-calculated WE-CBM scoring metrics. Results indicated automated and hand-calculated scores were highly correlated at all four timepoints for counts of total words written (rs = 1.00), words spelled correctly (rs = .99–1.00), correct word sequences (CWS; rs = .96–.97), and correct minus incorrect word sequences (CIWS; rs = .86–.92). For CWS and CIWS, however, automated scores systematically overestimated hand-calculated scores, with an unacceptable amount of error for CIWS for some types of decisions. These findings provide preliminary evidence that aWE-CBM can be used to efficiently score narrative writing samples, potentially improving the feasibility of implementing multi-tiered systems of support in which the written expression skills of large numbers of students are screened and monitored.
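To make the four WE-CBM metrics named above concrete, the sketch below computes them for a short writing sample. It is a minimal illustration, not the scoring procedure used in the study: spelling is checked against a hypothetical word list, and correct word sequences are approximated as adjacent pairs of correctly spelled words, whereas actual WE-CBM (and the aWE-CBM system evaluated here) also judges syntax, grammar, and sentence boundaries.

```python
# Minimal sketch of the four WE-CBM scoring metrics from the abstract.
# Assumptions (not from the article): spelling is checked against a plain
# word list, and "correct word sequences" are approximated as adjacent
# pairs of correctly spelled words; real scoring also considers syntax.

import re


def score_we_cbm(sample: str, dictionary: set[str]) -> dict[str, int]:
    words = re.findall(r"[A-Za-z']+", sample)
    tww = len(words)                                  # total words written
    spelled = [w.lower() in dictionary for w in words]
    wsc = sum(spelled)                                # words spelled correctly
    # Approximate word-sequence scoring over adjacent word pairs.
    pairs = list(zip(spelled, spelled[1:]))
    cws = sum(a and b for a, b in pairs)              # "correct" word sequences
    iws = len(pairs) - cws                            # "incorrect" word sequences
    return {"TWW": tww, "WSC": wsc, "CWS": cws, "CIWS": cws - iws}


if __name__ == "__main__":
    word_list = {"the", "dog", "ran", "fast", "and", "jumped"}
    print(score_we_cbm("The dog rann fast and jumped", word_list))
    # -> {'TWW': 6, 'WSC': 5, 'CWS': 3, 'CIWS': 1}
```

Even this simplified version shows why counts such as CIWS are more error-prone than total words written: they depend on every pairwise judgment, so small disagreements between automated and hand scoring compound.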




Updated: 2021-01-19