-
Insights into the cognitive processes of trained vs untrained EFL peer reviewers on writing: An exploratory study Assess. Writ. (IF 2.404) Pub Date : 2021-04-10 Alireza Memari Hanjani
While research on various aspects of peer review in ESL/EFL writing has been burgeoning in the past two decades, studies comparing the cognitive processes of trained and untrained L2 peer reviewers have been scant. This case study endeavored to address this gap by recruiting ten senior EFL university students and randomly assigning them to trained and untrained groups. While both groups attended
-
Teachers’ perspectives on the causes of rater discrepancy in an English for Academic Purposes context Assess. Writ. (IF 2.404) Pub Date : 2021-02-25 Simon Mumford, Derin Atay
Many studies have examined discrepancies in scoring writing, focusing on determining rater types, rubrics and their interpretation, and the factors that make a particular paper hard to score. This qualitative study attempts to understand sources of discrepancy from the perspective of the raters themselves. Teachers from an English-medium university freshman academic skills programme provided scores
-
Directed Self-Placement: Subconstructs and group differences at a U.S. university Assess. Writ. (IF 2.404) Pub Date : 2021-02-24 Laura Aull
Directed Self-Placement (DSP) is an approach that brings together self-efficacy and course selection to guide enrollment of college students into first-year writing courses. The study in this article emerged in an institutional scenario in which the DSP process at a U.S. university changed from a reading and writing task with reflective questions to reflective questions only. In turn, the scenario
-
Easing stress: Contract grading’s impact on adolescents’ perceptions of workload demands, time constraints, and challenge appraisal in high school English Assess. Writ. (IF 2.404) Pub Date : 2021-02-16 Emily Ward
Mastery-based contract grading is a holistic assessment approach for learning and grading in which students choose their desired effort and outcome by contracting for either an A or B to meet high academic standards. This mixed-methods study examined the impact of mastery-based contract grading on secondary students’ (grades 9–12) perceptions of stress and threat appraisal. Participants were 439 adolescents
-
What interpretations can we make from scores on graphic-prompt writing (GPW) tasks? An argument-based approach to test validation Assess. Writ. (IF 2.404) Pub Date : 2021-02-02 YunDeok Choi
This argument-based validation research examines the validity of score interpretations on computer-based graphic-prompt writing (GPW) tasks, centering on the explanation inference. The GPW tasks, designed for English placement testing, measure examinees’ ability to incorporate visual graphic information into their writing. Over 100 ESL students, studying at a public university in the United States
-
Improving student feedback literacy in academic writing: An evidence-based framework Assess. Writ. (IF 2.404) Pub Date : 2021-02-02 Shulin Yu, Chunhong Liu
Student feedback literacy, which concerns learners’ understanding and evaluation of feedback information and of self-regulated learning, has recently drawn increasing scholarly attention. Although much discussion is directed to the theoretical complexity of feedback literacy in higher education, academic writing on this subject has remained unfocused, and the issues related to feedback literacy are
-
Writing motivation: A validation study of self-judgment and performance Assess. Writ. (IF 2.404) Pub Date : 2021-02-01 Guangming Ling, Norbert Elliot, Jill C. Burstein, Daniel F. McCaffrey, Charles A. MacArthur, Steven Holtzman
This study reports on validation of a writing motivation survey and its relationship with a variety of indicators of academic performance of 566 undergraduate students drawn from six US postsecondary institutions. A writing motivation survey was used to capture students’ writing goals, confidence, beliefs, and affect. Two research questions are addressed in the study: 1) What is the internal factor
-
Development and validation of the Situated Academic Writing Self-Efficacy Scale (SAWSES) Assess. Writ. (IF 2.404) Pub Date : 2021-01-31 Kim M. Mitchell, Diana E. McMillan, Michelle M. Lobchuk, Nathan C. Nickel, Rasheda Rabbani, Johnson Li
Existing writing self-efficacy instruments have assessed the concept through mechanical and process features of writing to the neglect of the influence of situated context. The purpose of this study was to develop and test the Situated Academic Writing Self-Efficacy Scale (SAWSES) based on Bandura’s self-efficacy theory and a model of socially constructed writing. A sequential multimethod approach
-
An integrated mixed-methods study of contract grading's impact on adolescents' perceptions of stress in high school English: a pilot study Assess. Writ. (IF 2.404) Pub Date : 2021-01-21 Emily Ward
This study analyzed the impact of contract grading on adolescents’ perceptions of stress amid Good Shepherd High School’s annual research paper unit. While college instructors have employed contract grading since the 1970s, the alternative assessment approach appears underused and under-analyzed in contemporary high school classrooms. In spring 2019, participants (n = 53) enrolled in one of seven senior-level
-
Lexical density and diversity in dissertation abstracts: Revisiting English L1 vs. L2 text differences Assess. Writ. (IF 2.404) Pub Date : 2021-01-11 Maryam Nasseri, Paul Thompson
This study investigated lexical density and diversity differences in English as L1 vs L2 academic writing of EFL, ESL, and English L1 postgraduate students to compare their lexical proficiency in EFL vs. English L1 academic settings. A corpus of 210 dissertation abstracts was analysed using three natural language processing tools [LCA, TAALED, and Coh-Metrix] where the effects of text length and topic
-
Complexity, accuracy, and fluency as indices of college-level L2 writers’ proficiency Assess. Writ. (IF 2.404) Pub Date : 2020-12-29 Jessie S. Barrot, Joan Y. Agdeppa
Several studies have explored complexity, accuracy, and fluency (CAF) as an index of language development, as an index of language performance, and as an index of writing quality. Although there have been studies that dealt with complexity measures as an index of proficiency, none so far have examined complexity alongside accuracy and fluency as indices of proficiency. Thus, this study investigates
-
The development and validation of an inventory on English writing teacher beliefs Assess. Writ. (IF 2.404) Pub Date : 2020-11-24 Mehmet Karaca, Hacer Hande Uysal
Despite the recent interest in discovering language teacher beliefs in general, L2 writing teachers’ beliefs remain an almost untouched area. Considering this gap, this study aims to develop and validate a sound and comprehensive inventory that can be used for exploring L2 instructors’ beliefs regarding the nature of L2 writing, teaching L2 writing, and assessing L2 writing. To do that, a total
-
Syntactic complexity in L2 learners’ argumentative writing: Developmental stages and the within-genre topic effect Assess. Writ. (IF 2.404) Pub Date : 2020-11-23 Nesrin Atak, Aysel Saricaoglu
The developmental patterns that learners follow as their language develops in syntactic complexity have gained much attention in L2 writing research recently. This study aims to contribute to the growing body of empirical literature testing the hypothesis that learners move through a set of stages in their use of complex structures for L1 Turkish learners of L2 English (Biber et al., 2011). It also
-
The role of L2 writing self-efficacy in integrated writing strategy use and performance Assess. Writ. (IF 2.404) Pub Date : 2020-11-03 Seyyed Ehsan Golparvar, Afshin Khafi
The application of integrated writing tasks in academic writing assessment is increasing and research on these tasks is growing. However, the role of individual difference variables in students’ performance in source-based writing is under-researched. Thus, the present study sets out to investigate the predictive contribution of L2 writing self-efficacy to the summary writing strategies used by EFL
-
Investigating minimum text lengths for lexical diversity indices Assess. Writ. (IF 2.404) Pub Date : 2020-11-03 Fred Zenker, Kristopher Kyle
Lexical diversity (LD) is an important feature of a second language (L2) writer’s lexical knowledge, and indices of LD have been widely used in the field of writing assessment (e.g., Cumming et al., 2006; Engber, 1995). Research with longer native speaker (L1) texts has indicated, however, that many commonly used LD indices are sensitive to text length and may conflate lexical breadth and fluency.
-
‘I will go to my grave fighting for grammar’: Exploring the ability of language-trained raters to implement a professionally-relevant rating scale for writing Assess. Writ. (IF 2.404) Pub Date : 2020-10-20 Ute Knoch, Barbara Ying Zhang, Catherine Elder, Eleanor Flynn, Annemiek Huisman, Robyn Woodward-Kron, Elizabeth Manias, Tim McNamara
Researchers have recommended involving domain experts in the design of scoring rubrics of language for specific purpose tests by eliciting profession-relevant, indigenous criteria and applying these to test performances (see, e.g., Douglas, 2001; Jacoby, 1998; Pill, 2016). However, these indigenous criteria, derived as they are from people outside the assessment field, may be difficult to apply by
-
Comparing writing proficiency assessments used in professional medical registration: A methodology to inform policy and practice Assess. Writ. (IF 2.404) Pub Date : 2020-10-13 Sathena Chan, Lynda Taylor
Internationally trained doctors wishing to register and practise in an English-speaking country typically have to demonstrate that they can communicate effectively in English, including writing proficiency. Various English language proficiency (ELP) tests are available worldwide and are used for such licensing purposes. This means that medical registration bodies face the question of which test(s)
-
The writing that nurses do: Investigating changes to standards over time Assess. Writ. (IF 2.404) Pub Date : 2020-10-05 Brigita Séguis, Gad S. Lim
Unlike general language tests, specific-purpose tests have a greater need for updating should there be changes to context-specific language use. With a near-continuous inflow of migrant healthcare workers and long-standing language testing practices, the healthcare domain represents a good context for investigating the relationship between changes in the English language assessment standards and changes
-
Moodle quizzes and their usability for formative assessment of academic writing Assess. Writ. (IF 2.404) Pub Date : 2020-10-03 Weronika Fernando
This review discusses Moodle quizzes and their potential usefulness for formative assessment of academic writing. Moodle quizzes offer a wide variety of possibilities for the development of formative writing assessment. This is due to the impressive repertoire of choices with regard to specific tasks and to quiz design options available in Moodle. The key strength of Moodle quizzes, as used for formative
-
Assessing writing for workplace purposes: Risks, conundrums and compromises Assess. Writ. (IF 2.404) Pub Date : 2020-10-01 Susy Macqueen, Cathie Elder, Ute Knoch
Since the development of cuneiform script in Mesopotamia for keeping tallies of grain and sheep, written language has been used to document workplace transactions as a safeguard against the unreliability of human memory or deceit. Modern workplaces continue this reliance on written language as a means of mitigating the risk that some important aspect of work will be missed, misunderstood, misrepresented
-
Capturing domain expert perspectives in devising a rating scale for a health specific writing test: How close can we get? Assess. Writ. (IF 2.404) Pub Date : 2020-09-26 Ute Knoch, Catherine Elder, Robyn Woodward-Kron, Elizabeth Manias, Eleanor Flynn, Tim McNamara, Annemiek Huisman, Barbara Ying Zhang
The importance of input from occupational experts in defining valid criteria to assess performance on English for specific purposes (ESP) tests is widely acknowledged. However, few studies have described the process of collecting indigenous criteria and establishing their suitability for a language testing context. The paper reports on this process with specific reference to the writing sub-test of
-
TOEIC® Writing test scores as indicators of the functional adequacy of writing in the international workplace: Evaluation by linguistic laypersons Assess. Writ. (IF 2.404) Pub Date : 2020-09-23 Jonathan Schmidgall, Donald E. Powers
This study examines the extent to which TOEIC Writing test scores relate to an external criterion: evaluations by linguistic laypersons of the functional adequacy of writing in the international workplace. Test-taker responses to two representative tasks from the TOEIC Writing test (e-mail requests, opinion surveys) were adapted for workplace role-play scenarios that laypersons read and evaluated in
-
Using Eli review as a strategy for feedback in online courses Assess. Writ. (IF 2.404) Pub Date : 2020-09-19 Angela Laflen
Eli Review is a web-based platform that was built by composition faculty for the primary purpose of scaffolding online peer feedback activities. Eli supports a very particular feedback strategy characterized by students completing frequent small writing assignments, participating in regular peer reviewing activities, and using the feedback received to generate revision plans. Faculty focus on scaffolding
-
Designing proficiency-oriented performance tasks for the 21st-century workplace written communication: An evidence-centered design approach Assess. Writ. (IF 2.404) Pub Date : 2020-09-19 Ahmet Dursun, Jennifer K. Morris, Aylin Ünaldı
While contemplating a new online assessment framework for global corporations in Turkey, developers faced the dilemma of how to create a modern English for Specific Purpose (ESP) writing proficiency assessment. The result was the Communicative English Proficiency Assessment (CEPA)® Written Communication Assessment™, a computer-based, criterion-referenced proficiency test. In its creation, and with
-
Feedback scope in written corrective feedback: Analysis of empirical research in L2 contexts Assess. Writ. (IF 2.404) Pub Date : 2020-06-29 Zhicheng Mao, Icy Lee
The current study aims to explore the development of research on feedback scope in written corrective feedback (WCF), identify unresolved issues concerning feedback scope, and offer recommendations for further research in this domain. To achieve these purposes, we synthesize a total of 59 relevant articles and examine the salient findings on four dimensions: (1) effectiveness of comprehensive WCF;
-
Changing stories: Linguistically-informed assessment of development in narrative writing Assess. Writ. (IF 2.404) Pub Date : 2020-06-27 Carmel Sandiford, Mary Macken-Horarik
Variable achievements are not only common in students’ writing but also pose challenges for teachers seeking to acknowledge these and foster next steps in literacy teaching. If teachers are to ‘lead development’ (Vygotsky, 1978), they need knowledge about how different text types function and what progression in learning to compose these looks like. Drawing on data from a large research project investigating
-
Engaging expectations: Measuring helpfulness as an alternative to student evaluations of teaching Assess. Writ. (IF 2.404) Pub Date : 2020-06-25 Mathew Gomes, Wenjuan Ma
We propose an alternative to student evaluations of teachers (SETs), arguing that writing programs can use the SET moment to share responsibility for students’ expectations and course experiences. We argue studying students’ perceptions can help writing programs generate research for localizing engagement and aiding professional development. We study the perceived helpfulness of first-year writing
-
Presentation-mode effects in large-scale writing assessments Assess. Writ. (IF 2.404) Pub Date : 2020-06-21 Thomas Canz, Lars Hoffmann, Renate Kania
To ensure valid measurement in large-scale assessments, avoiding the incorporation of construct-irrelevant aspects is crucial. We investigated a potential source of construct-irrelevant variance, i.e. the presentation mode of essays (handwritten vs. computer-typed) and its influence on scoring. Further, we investigated whether the presentation-mode effect is moderated by text quality and legibility, as well
-
Co-constructed rubrics and assessment for learning: The impact on middle school students’ attitudes and writing skills Assess. Writ. (IF 2.404) Pub Date : 2020-06-05 May Abdul Ghaffar, Megan Khairallah, Sara Salloum
Many L2 learners demonstrate low motivation when it comes to developing higher competencies in writing. In this study, we propose that by engaging L2 students and their teachers in collaborative co-construction of writing rubrics, students develop better understanding and awareness of writing criteria, thus demonstrating ownership and responsibility for enhancing their writing competency. This study
-
The relationship between features of source text use and integrated writing quality Assess. Writ. (IF 2.404) Pub Date : 2020-06-05 Kristopher Kyle
Academic writing ability is an important aspect of success in higher education. Recently, standardized academic language proficiency tests (such as the TOEFL) have begun to include integrated writing tasks, which ask test-takers to read and/or listen to a passage and construct a response that reflects the information in the passage(s). Arguably, integrated tasks more closely resemble authentic academic
-
Rater Negotiation Scheme: How writing raters resolve score discrepancies Assess. Writ. (IF 2.404) Pub Date : 2020-05-17 Ece Sevgi-Sole, Aylin Ünaldı
In practices of direct assessment of writing ability, the variability of human decision-making during scoring poses great challenges to the validity of assessment (Kane, 2006). The variables causing differences in individual raters’ scoring interpretations have been widely investigated (e.g. Eckes, 2012; Wolfe et al., 2016). However, the issue of how raters negotiate to resolve discrepancies has not
-
Beyond linguistic complexity: Assessing register flexibility in EFL writing across contexts Assess. Writ. (IF 2.404) Pub Date : 2020-05-15 Wenjuan Qin, Paola Uccelli
The present study examines adolescent and adult English-as-Foreign-Language (EFL) Learners’ linguistic complexity and register flexibility in writing across academic and colloquial contexts. A total of 263 EFL learners from three first language (L1) backgrounds (Chinese, French, and Spanish) participated in this study. Each participant produced two written texts on the same topic: a personal email
-
Student engagement with automated written corrective feedback (AWCF) provided by Grammarly: A multiple case study Assess. Writ. (IF 2.404) Pub Date : 2020-03-19 Svetlana Koltovskaia
Despite the increased use of automated writing evaluation (AWE) systems and similar programs for assessment purposes in second language (L2) writing classrooms, research on student engagement with automated feedback is scarce. This naturalistic case study explored two ESL college students’ engagement with automated written corrective feedback (AWCF) provided by Grammarly when revising a final draft
-
eRevis(ing): Students’ revision of text evidence use in an automated writing evaluation system Assess. Writ. (IF 2.404) Pub Date : 2020-02-22 Elaine Lin Wang, Lindsay Clare Matsumura, Richard Correnti, Diane Litman, Haoran Zhang, Emily Howe, Ahmed Magooda, Rafael Quintana
We investigate students’ implementation of the feedback messages they received in an automated writing evaluation system (eRevise) that aims to improve students’ use of text evidence in their writing. Seven 5th and 6th-grade teachers implemented eRevise (n = 143 students). Qualitative analysis of students’ essays across first and second drafts suggests that the majority of students made changes to
-
Corrigendum to “The influence of lexical features on teacher judgements of ESL argumentative essays” [Assess. Writ. 39 (2019) 50–63] Assess. Writ. (IF 2.404) Pub Date : 2020-01-17 Cristina Vögelin, Thorben Jansen, Stefan D. Keller, Nils Machts, Jens Möller
The authors discovered a number of minor inaccuracies on pp. 58–59 of this article. However, these inaccuracies do not affect any of the main claims regarding the influence of lexical features on text assessment made in that study. In this note, we provide a corrected proof (inaccuracies marked in bold). The authors would like to apologize for any inconvenience caused.
-
A measure of possible sources of demotivation in L2 writing: A scale development and validation study Assess. Writ. (IF 2.404) Pub Date : 2019-11-26 Mehmet Karaca, Serhat Inan
Writing in L2 is a complex phenomenon in which affective factors play an important role. Among others, demotivation can determine the students’ success in orchestrating the complex writing processes and the quality of their L2 writing texts. Although there is a growing body of research investigating demotivating factors (e.g. teacher, self-confidence, materials and methods) in language education, there
-
Marrying achievement with proficiency – Developing and validating a local CEFR-based writing checklist Assess. Writ. (IF 2.404) Pub Date : 2019-11-25 Claudia Harsch, Sibylle Seyferth
Many language course providers face the challenge of aligning internal, often intuitive assessments with internationally recognised proficiency frameworks for accountability reasons. We report a development and validation project for assessing writing in a university languages centre, where an intuitive, achievement-oriented grading system was aligned to the proficiency levels of the CEFR. We took an iterative
-
Making our invisible racial agendas visible: Race talk in Assessing Writing, 1994–2018 Assess. Writ. (IF 2.404) Pub Date : 2019-11-02 J.W. Hammond
“Writing” is far from the only construct relevant to writing assessment research. The construct “race” is arguably crucial for the field’s considerations of human diversity, difference, and inequity. To examine how race has been constructed within the field, this paper provides a content analysis of explicit race talk in Assessing Writing (1994–2018). Drawing on insights from critical race and Whiteness
-
Linking TOEFL iBT® writing rubrics to CEFR levels: Cut scores and validity evidence from a standard setting study Assess. Writ. (IF 2.404) Pub Date : 2019-10-31 Johanna Fleckenstein, Stefan Keller, Maleika Krüger, Richard J. Tannenbaum, Olaf Köller
English writing is a key competence for higher education success. However, research on the assessment of writing skills in English as a foreign language in European upper secondary education (i.e. beyond year 9) remains scarce. The Common European Framework of Reference (CEFR) describes language proficiency on a scale of six ascending levels (A1-C2). For writing skills at the end of secondary education
-
Evidence of fairness: Twenty-five years of research in Assessing Writing Assess. Writ. (IF 2.404) Pub Date : 2019-08-30 Mya Poe, Norbert Elliot
When Assessing Writing (ASW) was founded 25 years ago, conversations about fairness were very much in the air and illustrated sharp divides between teachers and educational measurement researchers. For teachers, fairness was typically associated with consistency and access. For educational measurement researchers, fairness was a technical issue: an assessment that did not identify the presence of β
-
What has been assessed in writing and how? Empirical evidence from Assessing Writing (2000–2018) Assess. Writ. (IF 2.404) Pub Date : 2019-08-23 Yao Zheng, Shulin Yu
Using content analysis, this review study examines 219 empirical research articles published in Assessing Writing (2000–2018) to give a view of the development of writing assessment over the past 25 years. It reports overall and periodic analyses (2000–2009 and 2010–2018) of the contextual, theoretical, and methodological orientations of those articles to gain a comprehensive understanding of what
-
(Re)visiting twenty-five years of writing assessment Assess. Writ. (IF 2.404) Pub Date : 2019-08-21 Edward White
This reflective essay provides a narrative analysis of the author’s perceptions of US writing assessment over the past twenty-five years. Reflections are provided on four communities involved in the instruction and assessment of writing: teachers, researchers, testing organizations, and students. The essay concludes with an identification of trends in reconciling the goals of these four assessment
-
Unresolved issues in defining and assessing writing motivational constructs: A review of conceptualization and measurement perspectives Assess. Writ. (IF 2.404) Pub Date : 2019-08-20 Muhammad M.M. Abdel Latif
Motivational variables significantly influence learners’ writing experiences and performance. Diagnosing learners’ affective perceptions and beliefs using accurate measures is a prerequisite for identifying the optimal ways for motivating them to write. Though writing motivation has been researched for more than four decades, some issues in defining and assessing its constructs are yet to be resolved
-
Do raters use rating scale categories consistently across analytic rubric domains in writing assessment? Assess. Writ. (IF 2.404) Pub Date : 2019-07-22 Stefanie A. Wind
Analytic rubrics for writing assessments are intended to provide diagnostic information regarding students’ strengths and weaknesses related to several domains, such as the meaning and mechanics of their composition. Although individual domains refer to unique aspects of student writing, the same rating scales are often applied across different domains. Accordingly, the interpretation of rating scale
-
Affordances of TOEFL writing tasks beyond university admissions Assess. Writ. (IF 2.404) Pub Date : 2019-06-19 Jon Smart
This review describes in brief the writing sub-sections of the internet-based Test of English as a Foreign Language (TOEFL) and discusses its use in university admissions decisions and potential use as a tool for course placement. The primary purpose of the TOEFL is to measure the academic English proficiency of non-native English speakers seeking admission to English-medium universities. The writing
-
Using the Smarter Balanced grade 11 summative assessment in college writing placement Assess. Writ. (IF 2.404) Pub Date : 2019-06-19 Kendon Smith, Kelly L. Wheeler
The Smarter Balanced grade 11 summative assessment is a career and college readiness assessment aligned with the Common Core State Standards. In addition to its use as a measure in the high school, over 200 colleges and universities in 10 states use the results of this assessment as part of a multiple measures approach for placement in writing and mathematics. Our focus in this review is on the assessment’s
-
Directed self-placement as a tool to foreground student agency Assess. Writ. (IF 2.404) Pub Date : 2019-06-19 Andrew Moos, Kathryn Van Zanen
This review examines directed self-placement, a placement tool that provides students with information about curricular options and asks each student to self-select into any first-year writing course. Specific constructs of a given directed self-placement approach vary from institution to institution; in this review we work to synthesize a diverse range of available scholarship to describe the common
-
Holistic, local, and process-oriented: What makes the University of Utah’s Writing Placement Exam work Assess. Writ. (IF 2.404) Pub Date : 2019-06-18 Crystal J. Zanders, Emily Wilson
This review of the University of Utah’s Writing Placement exam evaluates the possibilities of the exam’s construct, addresses the tool's limitations, and analyzes it in light of similar placement tools. The review concludes that although there are challenges specifically related to the scalability, security, and language ideology of the exam, its holistic nature, local assessors, and process-oriented
-
Affordances and limitations of the ACCUPLACER automated writing placement tool Assess. Writ. (IF 2.404) Pub Date : 2019-06-17 Sarah Hughes, Ruth Li
The College Board’s ACCUPLACER Automated Writing Placement Tool is administered more than 8.5 million times each year to place students into college-level or developmental writing courses. ACCUPLACER’s writing assessment consists of a multiple-choice Next-Generation Writing Test and an on-demand essay called WritePlacer. In this review, we offer an overview of ACCUPLACER, considering the test in light
-
Investigating the effect of source characteristics on task comparability in integrated writing tasks Assess. Writ. (IF 2.404) Pub Date : 2019-06-10 Maryam Homayounzadeh, Mahboobeh Saadat, Alireza Ahmadi
The study presents an attempt to explore the impact of source characteristics on task comparability in integrated writing tasks. To this end, two read-listen-write tasks of TOEFL iBT were selected to differ in topic, structural organization, and lexical and conceptual overlap, suggested to be significant in affecting summary quality (Cho, Rijmen, & Novák, 2013; Li, 2014; Yu, 2009). The performance
-
Lower English proficiency means poorer feedback performance? A mixed-methods study Assess. Writ. (IF 2.404) Pub Date : 2019-06-06 Zhiwei Wu
This study adopts a mixed-methods design and examines the relation between English proficiency and peer feedback performance. Data sources included peer feedback made by 23 lower English proficiency (LEP) students and 23 higher English proficiency (HEP) students, and semi-structured interviews with four LEP and four HEP students from that sample. Quantitative analysis did not find a significant difference
-
Raters’ perceptions of assessment criteria relevance Assess. Writ. (IF 2.404) Pub Date : 2019-05-10 Stephen Humphry, Sandy Heldsinger
This study adopts a novel approach to investigate perceptions of assessment criteria relevance in differentiating writing performance levels. Experienced writing assessors were asked to directly compare pairs of performances. For each comparison, assessors were asked to determine which performance was better and to record which aspects of writing were used to make determinations. To do so, assessors
-
Learning from giving peer feedback on postgraduate theses: Voices from Master's students in the Macau EFL context Assess. Writ. (IF 2.404) Pub Date : 2019-04-08 Shulin Yu
Although peer feedback has received increasing attention from academic writing instructors and thesis supervisors in recent years, we know little about postgraduate students’ perceptions and experiences of learning (if any) from providing feedback on thesis/dissertation writing. Drawing upon multiple sources of data including thesis drafts (original, revised and finalised theses), written peer
-
Assessing student-writers’ self-efficacy beliefs about text revision in EFL writing Assess. Writ. (IF 2.404) Pub Date : 2019-04-02 Jing Chen, Lawrence Jun Zhang
This research proposed and examined a two-factor structure of self-efficacy beliefs about text revision in English-as-a-foreign-language (EFL) contexts. The Second Language Text Revision Self-Efficacy Scale (L2TRSS) was developed and scrutinised; exploratory factor analyses of the responses of 446 EFL learners and a subsequent confirmatory factor analysis with a different sample of 310 participants
-
“I should summarize this whole paragraph”: Shared processes of reading and writing in iterative integrated assessment tasks Assess. Writ. (IF 2.404) Pub Date : 2019-03-29 Lia Plakans, Jui-Teng Liao, Fang Wang
Researchers do not yet fully understand the complex processes linking reading and writing in a second language. A number of recent studies have focused on reading to write integrated tasks in language assessment, with an eye toward eliciting the underlying construct of reading-writing integration. To extend this conversation, we designed an iterative integrated task (writing-reading-writing) including
-
Developing and examining validity evidence for the Writing Rubric to Inform Teacher Educators (WRITE) Assess. Writ. (IF 2.404) Pub Date : 2019-03-20 Tracey S. Hodges, Katherine Landau Wright, Stefanie A. Wind, Sharon D. Matthews, Wendi K. Zimmer, Erin McTigue
Assessment is an under-researched challenge of writing development, instruction, and teacher preparation. One reason for the lack of research on writing assessment in teacher preparation is that writing achievement is multi-faceted and difficult to measure consistently. Additionally, research has reported that teacher educators and preservice teachers may have limited assessment literacy knowledge.
-
A validation program for the Self-Beliefs, Writing-Beliefs, and Attitude Survey: A measure of adolescents' motivation toward writing Assess. Writ. (IF 2.404) Pub Date : 2018-12-29 Katherine Landau Wright, Tracey S. Hodges, Erin M. McTigue
Recent findings reveal clear evidence that students’ low performance on writing tasks is often related to problems with motivation. Writing curriculum and interventions produce varying effects on adolescents’ writing outcomes, and such variations may be mediated by motivation. However, without a valid tool for measuring students’ motivation towards writing, these effects cannot be quantified. In this
-
The influence of lexical features on teacher judgements of ESL argumentative essays Assess. Writ. (IF 2.404) Pub Date : 2018-12-13 Cristina Vögelin, Thorben Jansen, Stefan D. Keller, Nils Machts, Jens Möller
Numerous studies have examined the relationship between lexical features of students’ compositions and judgements of text quality. However, the degree to which teachers’ judgements are influenced by the quality of vocabulary in students’ essays with regard to their assessment of other textual characteristics is relatively unexplored. This experimental study investigates the influence of lexical features
-
Source use in the story continuation writing task Assess. Writ. (IF 2.404) Pub Date : 2018-12-08 Wei Ye, Wei Ren
The story continuation writing task (SCWT) is a newly developed type of integrated writing task that has been observed to stimulate language learning efficiently. Nevertheless, little is known with respect to what source knowledge test-takers notice and how they process the noticed information during the task. This paper contributes to the literature by investigating what and how source information
-
Exploring the correspondence between traditional score resolution methods and person fit indices in rater-mediated writing assessments Assess. Writ. (IF 2.404) Pub Date : 2018-12-06 Stefanie A. Wind, A. Adrienne Walker
Scoring procedures for rater-mediated writing assessments often include checks for agreement between the raters who score students’ essays. When raters assign non-adjacent ratings to the same essay, a third rater is often employed to “resolve” the discrepant ratings. The procedures for flagging essays for score resolution are similar to person fit analyses based on item response theory (IRT). We used
Contents have been reproduced by permission of the publishers.