Can AI be racist? Color-evasiveness in the application of machine learning to science assessments
Science Education (IF 3.1) | Pub Date: 2021-07-03 | DOI: 10.1002/sce.21671
Tina Cheuk

Assessment developers are increasingly using the emerging technology of machine learning to transform how students are assessed in their science learning. I argue that these algorithmic models further embed the structures of inequality pervasive in the development of science assessments by legitimizing certain language practices that protect the hierarchical standing of status quo interests. My argument is situated within the broader emerging ethical challenges surrounding this new technology. I apply a raciolinguistic equity analysis framework to critique the "new black box" that reinforces structural forms of discrimination against the linguistic repertoires of racially marginalized student populations. The article ends with a set of tactical shifts that can be deployed to build a more equitable and socially just field of machine learning-enhanced science assessments.

Updated: 2021-08-03