What has been assessed in writing and how? Empirical evidence from Assessing Writing (2000–2018)
Introduction
As the only international journal solely dedicated to disseminating scholarship on writing assessment, Assessing Writing (ASW) provides a forum for ideas, research, and practice on the assessment of written language. Through scholarly exchanges, as stated on its website homepage, ASW “contributes to the development of excellence in the assessment of writing in all contexts, and, in so doing, to the teaching and appreciation of writing” (Assessing Writing, 2019). According to Hamp-Lyons (2002), the need for ASW is as great as, or even greater than, it was in 1994 when Kathi Yancey and Brian Huot founded it, given the pivotal role that the written word plays in the 21st century as a medium of communication and the fact that, although substantial work has been done, much remains unexamined in assessing writing. The journal has thus always served as an important scholarly venue where researchers and practitioners in the field can share ideas, questions, and concerns about the assessment of writing.
In the first volume of ASW, White (1994, p. 11) commented on the issues and problems in writing assessment:
As we move to the end of this century, the issues and problems in writing assessment have shifted and become more complicated. Or perhaps it would be more accurate to say that the complicated issues and problems behind writing assessment have finally become evident, after a century in which they have been concealed by special interests and technical problems.
Although White made these comments 25 years ago, they could equally describe what is happening in the first two decades of the 21st century. The issues and problems in writing assessment remain complicated and have even become more challenging. As Slomp (2012, p. 90) has noted, we are witnessing a shift from “assessing writing as a product to assessing the development of writing ability”, which poses theoretical concerns, construct issues, and methodological challenges.
While complexities and challenges arise when we experience such shifts and changes, a comprehensive understanding of the evolution of ideas, questions, and concerns in writing assessment can help us tackle these issues. This article reports a review study aiming to provide a meta-disciplinary picture of writing assessment research based on the empirical studies published in ASW over its first 25 years, which we believe could contribute to that understanding. The question at the heart of the article is: what has been assessed in writing in the past 25 years, and how has the field evolved in its contextual, theoretical, and methodological aspects? Using content analysis, we examine the contextual, theoretical, and methodological orientations of 219 full-length empirical research articles published in ASW (Volumes 1–38) and analyze writing assessment research overall and in two time periods (2000–2009 and 2010–2018). Before we outline the specifics of our methodology, we present several prior reviews and syntheses that have informed the current journal-based review study.
Section snippets
Previous reviews and syntheses
While there is no lack of effort in reviewing and synthesizing research and practices in writing assessment (e.g., Anson, 2006; Anthony, 2009; Camp, 2012; Colombini & McBride, 2012; Condon, 2013; Deane, 2013; Elbow, 2006; Elliot, 2005; Knoch, 2011; Knoch & Sitajalabhorn, 2013; Lam, 2017; Martin & Penrod, 2006; Ramineni & Williamson, 2013; Rutz & Lauer-Glebov, 2005; Serviss, 2012; Stevenson & Phakiti, 2014; Weigle, 2013), in this section we focus on the prior studies that are of close relevance
Search and review strategies
A journal-based literature search was conducted using the ScienceDirect Backfile – Social Sciences. All research articles published in Assessing Writing from January 1994 to October 2018 (Volumes 1–38) were located (N = 298). Reviews, correspondences, and editorial notes were excluded from our sample. From the research articles, we selected the full-length empirical research articles (N = 219) reporting studies that relied on or derived from observational and experimental evidence, and guided by
Findings
In this section, we present the findings of our analysis of contexts and participants, research focus and theoretical orientations, and research methodology and data sources. While it is not our aim to quantify the qualitative data, tables showing frequencies and percentages offer a useful starting point from which to explore the data.
Discussion
Our study was conducted with a twofold purpose: to investigate what has been assessed in the field of writing assessment and how. To achieve this purpose, we used journal-based content analysis of the empirical research articles published in ASW (Volumes 1–38) over the past 25 years, focusing on their contextual, theoretical, and methodological orientations.
While the what and how questions have been addressed in detail by the findings presented above, we wish to offer quick answers by
Concluding comments
This article aims to give a view of the status quo and development of writing assessment based on the empirical articles published in a journal dedicated to disseminating scholarship on writing assessment research. However, such a review study has limitations that future research may want to address in examining this topic. First, while a content analysis of context and participants, research focus and theoretical orientation, and research methodology and data sources of the
Declaration of Competing Interest
The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.
Yao Zheng is a PhD student at the Faculty of Education, University of Macau, China. Her research interests include second language writing and language education. Her publications have appeared in Assessing Writing, Innovations in Education and Teaching International, and Assessment and Evaluation in Higher Education.
References (68)
Assessing writing in cross-curricular programs: Determining the locus of activity. Assessing Writing (2006).
Assessing peer and instructor response to writing: A corpus analysis from an expert survey. Assessing Writing (2017).
Classroom computer experiences that stick: Two lenses on reflective timed essays. Assessing Writing (2009).
The effects of writing mode and computer ability on L2 test-takers’ essay characteristics and scores. Assessing Writing (2018).
Not to scale? An argument-based inquiry into the validity of an L2 writing rating scale. Assessing Writing (2018).
Historical view of the influences of measurement and writing theories on the practice of writing assessment in the United States. Assessing Writing (2011).
Design and evaluation of automated writing evaluation models: Relationships with writing in naturalistic settings. Assessing Writing (2017).
The psychology of writing development—And its implications for assessment. Assessing Writing (2012).
Researching the comparability of paper-based and computer-based delivery in a high-stakes writing test. Assessing Writing (2018).
Writing from sources: Does audience matter? Assessing Writing (2018).
“Storming and norming”: Exploring the value of group development models in addressing conflict in communal writing assessment. Assessing Writing.
Large-scale assessment, locally-developed measures, and automated scoring of essays: Fishing for red herrings? Assessing Writing.
Assessing writing with the tool for the automatic analysis of lexical sophistication (TAALES). Assessing Writing.
A statewide writing assessment model: Student proficiency and future implications. Assessing Writing.
On the relation between automated essay scoring and modern views of the writing construct. Assessing Writing.
Checking assumed proficiency: Comparing L1 and L2 performance on a university entrance test. Assessing Writing.
The impact of bilingual dictionaries on lexical sophistication and lexical accuracy in tests of L2 writing proficiency: A quantitative analysis. Assessing Writing.
Do we need a single standard of value for institutional assessment? An essay response to Asao Inoue’s “community-based assessment pedagogy.” Assessing Writing.
Rubrics and corrective feedback in ESL writing: A longitudinal case study of an L2 writer. Assessing Writing.
Severity differences among self-assessors, peer-assessors, and teacher assessors rating EFL essays. Assessing Writing.
The scope of writing assessment. Assessing Writing.
Teaching textual awareness with DocuScope: Using corpus-driven tools and reflection to support students’ written decision-making. Assessing Writing.
Comparing the outcomes of two different approaches to CEFR-based rating of students’ writing performances across two European countries. Assessing Writing.
College student perceptions of writing errors, text quality, and author characteristics. Assessing Writing.
Validation of a locally created and rated writing test used for placement in a higher education EFL program. Assessing Writing.
Examining the comparability between paper- and computer-based versions of an integrated writing placement test. Assessing Writing.
Rating scales for diagnostic assessment of writing: What should they look like and where should the criteria come from? Assessing Writing.
A closer look at integrated writing tasks: Towards a more focussed definition for assessment purposes. Assessing Writing.
Responding to student writing online: Tracking student interactions with instructor feedback in a Learning Management System. Assessing Writing.
Analysis of syntactic complexity in secondary education ELF writers at different proficiency levels. Assessing Writing.
Taking stock of portfolio assessment scholarship: From research to practice. Assessing Writing.
L2 writing teachers’ perspectives, practices and problems regarding error feedback. Assessing Writing.
Contract grading in the technical writing classroom: Blending community-based assessment and self-assessment. Assessing Writing.
Connecting writing assessment with critical thinking: An exploratory study of alternative rhetorical functions and objects of enquiry in writing prompts. Assessing Writing.
Shulin Yu, PhD, is an Assistant Professor at the Faculty of Education, University of Macau, Macau SAR, China. His research interests include second language writing and second language education. His publications have appeared in Assessing Writing, Educational Research Review, Language Teaching Research, Language Teaching, TESOL Quarterly, Teachers and Teaching, Assessment and Evaluation in Higher Education, System, Studies in Higher Education, and Teaching in Higher Education.