Introduction

Language teachers have been reported to work under considerable pressure caused by factors such as heavy workload, time management, and the need to perform an array of duties (MacIntyre et al., 2020). The daily plight of language teachers, often worse in low- and lower-middle-income countries, has been aggravated further by COVID-19. A United Nations (2020) report suggests that the pandemic has affected around 99% of students in such countries. However, the report also notes that the grave threat has ‘stimulated innovation’ in education (p. 2). Innovative practices in online teaching and learning have certainly made noticeable progress since the pandemic began spreading across the world. Though COVID-19 has severely hit school teaching in South Asian countries like India, Bangladesh, and Nepal, language teachers working in universities have been able to move to online platforms, and some degree of normalcy has been restored through online teaching. However, carrying out assessments online and offering feedback to students, both of which are essential for making learning effective (Mohamadi, 2018), have been found to be among the major concerns in online teaching (García-Peñalvo et al., 2021). In fact, the Online Learning Consortium (2020) and Quality Matters (2020), two organizations specializing in online learning, have emphasized providing timely feedback to students. Furthermore, García-Peñalvo et al. (2021) recommend making formative assessment (FA) mandatory in online teaching. In this connection, some studies have focused on online teaching during COVID-19 times in India (Mishra, Gupta & Shree, 2020; Giri & Dutta, 2020), Bangladesh (Mamun, Chandrima & Griffiths, 2020; Islam et al., 2020) and Nepal (Acharya et al., 2020; Ansari, 2020). However, very few studies have explored how ESL teachers in these countries have carried out teaching and assessment on digital platforms. This paper reports three case studies, one each from India, Bangladesh, and Nepal, of three ESL teachers’ online FA (OFA) and feedback practices.

Review of Literature

Formative Assessment

According to Daşkın and Hatipoğlu (2019), FA is a teacher-based, classroom-based, dynamic, and learning- and learner-friendly assessment carried out to accelerate students’ learning. Informal approaches are often used to trace and monitor students’ progress and offer them the required feedback. As pointed out by Black and Wiliam (2009), FA provides teachers with information which is then analyzed and interpreted before being used to improve the quality of teaching and learning. In language classroom contexts, FAs are sources of micro-level diagnostic information that the teacher uses to promote better teaching and learning of language skills (Elder, 2017). They are highly context-specific and often localized (Davidson & Leung, 2009). Since FAs are required to provide diagnostic information to teachers, they are conducted through a variety of formal and informal methods, including interaction, observation, and demonstration (Klenowski, 2009). Impactful FAs are expected to help teachers find out what learners can do at a certain point, what they need to do to perform better, and how they can attain a higher level of achievement (Guo & Xu, 2020).

Online FA

Online FA (OFA) has received much less attention than FA conducted in face-to-face mode, and inadequate qualitative information is available on FA in higher education contexts (McLaughlin & Yan, 2017). Higher education researchers have mainly focused on online teaching in Western countries. Very few studies have focused on formal online teaching in South Asian contexts, especially in pre-pandemic times. Gikandi et al. (2011) have identified factors that contribute to the validity and reliability of FAs, such as authenticity, feedback, the multifaceted nature of perspectives, learner scaffolding, proper utilization of evidence, multi-method arrangements for evidence collection, and clear awareness of learning objectives. However, these goals cannot be achieved unless teachers receive pedagogic, institutional, and technical support in the form of teacher development (King & Boyatt, 2014). McLaughlin and Yan (2017) list ‘multiple-choice tests, the one-minute paper, e-portfolios, and Web 2.0 tools’ (p. 563) as useful and practical methods of OFA. According to Newhouse (2011), online assessments usually provide learners with an opportunity to demonstrate their learning, help keep track of learners’ developing competence, and include a plan to analyze learners’ performance. What is evident here is that FAs need to be learner- and learning-centric. Thus, it may be necessary for teachers to take into account learners’ satisfaction with OFA tools, which can determine the quality of learning (Agustina & Purnawarman, 2020). The usability and value of OFA tools can also play a crucial role (Chiu et al., 2005). Yilmaz et al. (2020) claim that the choice of OFA tools and strategies depends on ‘perceived usefulness, perceived ease of use, computer self-efficacy, social influence, perceived relationship with the course content, enjoyment, interest and behavioral intention dimensions’ (p. 32). There have been studies on popular methods of OFA. Some of the most frequently investigated are learning management systems (Bogdanović et al., 2014), student response systems (Pérez-Segura et al., 2020), e-portfolios (Namaziandost et al., 2020), social media (Allagui, 2014), Web 2.0 tools like blogs (Mohamed, 2016) and wikis (Wang, 2014), Google Forms (Haddad & Youakim, 2014), self-assessment (Ishikawa et al., 2014), and peer-assessment (Chien et al., 2020). Qualitative studies focusing on teachers’ use of OFA and related feedback strategies are, however, rare (Chen et al., 2020). Moreover, qualitative information about the quality and frequency of OFA and formative feedback needs further enquiry. The next section focuses on online feedback, which is an integral part of OFA.

Online Feedback Practices

Hattie and Timperley (2007) define feedback as ‘information provided by an agent (e.g., teacher, peer, book, parent, self, experience) regarding aspects of one’s performance or understanding’ (p. 81) and place it at the heart of FA. The use of digital tools has made giving and receiving feedback more accessible for learners (Yilmaz, 2017), an immediate requirement in the case of OFAs. It has been reported that computer-mediated feedback in online courses contributes to student learning (Bahari, 2021). Several benefits are associated with the online feedback provided to students after OFA. First, technology helps individualize feedback, which in turn propels learning (Ai, 2017). Second, it has ‘linguistic and procedural benefits’ (Pérez-Segura et al., 2020, p. 5). Third, after an OFA is carried out, the teacher can employ technology to analyze students’ performance, keep track of their progress, and adjust their teaching to meet students’ specific needs (Spector et al., 2016).

Online feedback can be written, audio-recorded, or video-recorded (Johnson & Cooke, 2016). Online feedback research has focused on mobile-assisted feedback (Wu & Miller, 2020), immediate computer-mediated feedback (Ginkel et al., 2020), peer-feedback (Chien et al., 2020), and automated feedback (Cheng, 2017). Platforms such as Google Docs (Ebadi & Rahimi, 2019) and WhatsApp (Soria et al., 2020) have been found to be effective tools for online formative feedback. The variety of means available to teachers for reaching out to their students makes online feedback versatile and accessible. Daradoumis et al. (2019) have highlighted ease as a vital dimension of online feedback practices. Teachers and learners respond positively to timely and instantaneous feedback (Khan & Khan, 2019), and such feedback can motivate learners (Alharbi, 2017) and help them self-regulate (Mahoney et al., 2019). Multimodal feedback (Cunningham, 2019) offered through a variety of online modes can make feedback more reachable and usable. Also, a good mixture of self- and peer-assessment can enable learners to reflect on their learning, and feedback obtained through these means can have a positive impact on learning. Feedback received through online self-assessment (Beebe et al., 2010) can enable learners to self-monitor their learning. Online peer-feedback activities can improve communication among students and develop in them a sense of community. The teacher’s role in this feedback process is clearly central. Carless and Winstone (2020) note that feedback-literate teachers display empathy in how feedback is shared, treat the feedback procedure as a collaborative activity involving the teacher and students, and utilize technology to optimize how feedback is conveyed. However, more information is required about how online feedback is provided in language classrooms (Rassaei, 2019) and about the types of feedback provided using technology (Sedrakyan, 2018).

Research Questions

The study addressed two major research questions, each comprising two sub-questions.

  • How did ESL teachers in India, Bangladesh, and Nepal carry out FAs in their online classrooms during the COVID-19 pandemic?

    • How were their OFA practices similar or different in terms of quality, methods and frequency?

    • What kind of digital tools did they use and what was the rationale behind their decision?

  • How did they offer feedback to their learners?

    • How were they similar or different in their feedback practices in terms of quality, methods and frequency?

    • What kind of digital tools did they use and what was the rationale behind their decision?

Methodology

A multiple case study approach was adopted for the study. A case study approach allows the researcher to investigate a phenomenon in its natural setting, as was the case in the current study. Three teachers, one each from India, Bangladesh, and Nepal, formed the three cases. The purposive sampling (Stake, 2013) used to select the cases was driven by factors like access (Yin, 2018), the opportunity to study the identified phenomenon, and delimitation with regard to factors unrelated to the study (Stake, 2013). Multiple methods were used for data collection, and data were triangulated both theoretically and methodologically (Flick, 2009). Thematic discussions (Gerring, 2007) were undertaken to identify common and divergent patterns in the data.

Context

The three teachers who participated in the study were working in university settings in India, Bangladesh, and Nepal. As COVID-19 started spreading, the universities in India, Bangladesh, and Nepal where the teachers were working moved online in March, April, and April 2020, respectively (for more details, see https://www.ugc.ac.in/subpage/covid_advisories.aspx, http://www.ugc.gov.bd/, and https://www.ugcnepal.edu.np/). The controlling bodies of higher education in the three countries, each called the University Grants Commission, along with the respective governments, tried to support teachers in preparing for and undertaking online teaching through training programs. As shown in Table 1, the three teachers had similar sociocultural and educational backgrounds, teaching experience, and training in online teaching and assessment. The participant-teachers were also active on social media and had taken part in several global webinars before data collection for the study began in September 2020. The researcher stayed in touch with them for a while on Facebook before discussing the study. He explained the aim and objectives of the study, the methods of data collection, and their expected roles, and obtained informed consent from them. The teachers informed the researcher that while they themselves had access to the internet, some of their students had connectivity issues and did not have seamless access to it. The teachers were working from home, and they, along with their students, were under lockdown when data were collected for the study.

Table 1 Participants’ profiles

Data Collection and Analysis

Data for the study were collected in three stages. In the first stage, the researcher observed four online classes of each teacher, each of 1-h duration. Using an observation schedule, the researcher took notes on the teachers’ FA and feedback practices, covering what happened both during the live class and outside it. The first stage took 3 weeks to complete. In the next stage, a semi-structured interview comprising questions related to the teachers’ FA and feedback practices was conducted with each teacher; each interview took 30 min. Fig. 1 contains the questions that were asked during the interviews.

Fig. 1 Interview questions

In the last stage, the teachers were asked to share screenshots of feedback offered on Google Docs, Google Forms, WhatsApp, Facebook, YouTube, Flipgrid, Mentimeter, and so on. While most OFAs were conducted during class time, on some occasions students were given homework as part of FAs. The screenshots shared by the teachers included samples of self-, peer-, and teacher feedback provided through written comments and rubrics on a range of language skills. The teachers also allowed the researcher to listen to their own and their students’ oral comments on platforms like Flipgrid and WhatsApp.

It must be noted that in this study, OFAs referred to classroom-based, teacher-developed assessments-for-learning conducted in and outside the classroom on digital platforms. Students’ performance in these assessments was not used for grading purposes. Feedback, the other focus of the current study, referred to the written and/or oral reactions, conveyed through digital tools, of students themselves, their peers, and their teacher to students’ performance in OFAs. The feedback process was led by the teacher.

The data analysis was carried out in several phases, as shown in Fig. 2. Since this was a qualitative study, several steps were taken to maintain the validity and reliability of the analysis. First, the study followed the overarching interpretative framework used in another transnational study, conducted on teachers in Chile and the US by Menard-Warwick (2008). Second, common contextual concerns were used as driving factors in the selection of cases, the development of research questions, and the framework of data analysis (Moret et al., 2007). Third, triangulation was used to ensure validity (Bazeley, 2013); the methodological triangulation employed here involved collecting information through various methods and corroborating data collected through one method with those obtained through the others. Finally, the researcher revisited the data with each participant-teacher and with an external expert who worked on FA, as recommended by Creswell and Plano Clark (2011). It has been claimed that ensuring validity in qualitative research automatically takes care of reliability, and that generalization and replication are not really the goals of qualitative enquiries (Guest et al., 2011). Multi-coder analysis was not considered because no second researcher was directly involved in the study and the involvement of an outside coder could have jeopardized the analysis (Morse, 1997).

Fig. 2 Process of data analysis

First, for each case, the interview data were transcribed and then coded with the help of ATLAS.ti, a computer program for analyzing qualitative data. After that, the classroom observation data were polished, elaborated, and re-written before being coded, a practice often used in qualitative research for ‘developing tentative ideas about categories and relationships’ in the data (Maxwell & Wooffitt, 2005, p. 96). Then, the documents shared by the teachers were analyzed and descriptive notes were made. In the next phase, the data collected from interviews, classroom observation notes, and documents for each case were classified under the research questions and triangulated. In the last phase, a cross-case analysis was carried out and the overall findings were reported.

Findings

ESL Teachers’ OFA Practices

All three teachers made efforts to engage in FA practices in all their classes. However, they varied in the quality of activities, the variety and frequency of FA methods used, and their choice and application of digital tools for conducting FAs. A few factors, such as access to tools, internet speed at the students’ end, and familiarity with the tools, shaped their practices. The findings are presented under three main sub-themes identified from the data analysis.

Quality of OFA Activities

A set of common indicators of quality in FAs was identified from the review of the literature, and the quality of each teacher’s OFA activities was then evaluated against these indicators. Apart from the classroom observation data, interview data were also used to assess the quality of the teachers’ activities. As presented in Table 2, the teachers’ practices differed in a variety of ways.

Table 2 Quality of teachers’ OFA activities

Methods and Frequency

The three teachers used a small range of OFA methods, but at regular intervals, in their respective classrooms. Sam used rubrics, informal self- and peer-assessment, and quizzes to engage her students in every class. Rubrics were used to evaluate their presentation skills, quizzes were used to enhance their explicit knowledge of writing and reading, and self- and peer-assessment were used during discussions on conversation skills, grammar, and pronunciation. When asked about the reason behind her use of these methods, she mentioned the following:

I use these techniques because students find these interesting and pay attention to what is being taught in the classroom. I’m sure that if they enjoy participating, they will be able to learn. Besides, I don’t get time to think about other techniques and those techniques may not work.

Zakir used informal questions, self-assessment, rubrics, and checklists in his classroom. He justified his choice of these methods:

I use these methods because they help my students think about their performance in different skills. If I tell them openly, they may feel hesitant to participate. These methods are easy to use and do not require too much investment on my part in terms of time and resources.

The informal questions were often used to raise students’ level of accuracy in language use. Self-assessment, rubrics, and checklists were used in academic writing and speaking classes. Zakir asked leading questions to encourage participation.

Ram frequently asked students to self-assess their speaking and writing performance, especially their accuracy in terms of grammar and pronunciation. He wanted them to use ‘correct English.’

I want them to speak and write correct English. I know that they are not very good at speaking and writing. That’s why I want them to learn correct English. It may be challenging for them now, but they will do well if they keep learning.

His preference for an informal oral approach was motivated by his students’ use of mobile phones to access his classes. Writing and doing other kinds of activities, he thought, could have been difficult for them.

Use of Digital Tools

As shown in Table 3, the teachers’ choice of digital tools for conducting OFAs was to a great extent determined by the teachers’ knowledge, students’ access, affordability, the speed of the internet available to students, and so on. Free digital tools like Google Docs, Google Forms, and WhatsApp were preferred over paid tools.

Table 3 Teachers’ use of digital tools

Sam taught on Google Meet and Microsoft Teams, and frequently asked her students to write their responses in the chat box. She shared course content through Moodle. On Google Forms, she created and shared rubrics for students to assess oral presentations and reading comprehension, as well as meta-writing quizzes. She also utilized a free version of Mentimeter to obtain students’ opinions about their own and their peers’ performance, and communicated with her students on WhatsApp. Zakir taught on Google Meet and Zoom, and, much like Sam, made use of Google Docs, Google Forms, and WhatsApp for similar purposes; he responded to students’ queries on WhatsApp. Besides, he employed Flipgrid to promote speaking practice among students and Facebook for sharing a wide array of practice materials. Ram conducted his classes on Microsoft Teams and Zoom. His students’ low internet speed limited his use of digital tools, but he depended entirely on Google Drive for sharing learning materials. Since most of his students were on WhatsApp and Facebook, he chose to continue discussions of classroom topics on these social media platforms.

ESL Teachers’ Online Feedback Practices

As highlighted in the review of literature, feedback is a key aspect of FA: it enables students to take responsibility for their own learning and take steps to improve their performance. Though good feedback practices require a high level of feedback literacy on the part of teachers, the study did not specifically focus on teachers’ feedback literacy; it investigated the quality, methods, and frequency of the feedback they provided to their students. Though all three teachers were friendly towards their students and keen on improving their language skills, their approaches to making feedback available lacked clarity of planning and purpose. The information obtained through FAs did not seem to guide their feedback practices, and all three were constrained by having to offer feedback online, especially feedback on writing.

Quality, Methods and Frequency

The three teachers preferred group feedback over individual feedback as they had many students in their classes. Teacher feedback was often brief, oral, and rarely planned, but they offered frequent instant feedback in every class, especially on oral comprehension-based activities, and it was mostly corrective in nature. They also presented evidence of using recorded audio and video feedback on students’ performance in both oral communication skills and academic writing. While Zakir and Ram kept their audio-visual feedback elaborate, used personal pronouns and a friendly tone, and talked about where students did well and where they did not, Sam was direct in her approach, identifying problems in students’ performance and advising them on how to improve. In all three cases, students politely thanked their teachers for the feedback. The teachers provided automated feedback on reading comprehension, grammar, and vocabulary exercises through Google Forms; this automated feedback took the form of descriptive rubrics and questions directing students to self-check. The teachers regularly engaged students in self-assessment of their writing and encouraged them to self-correct, and on some occasions, students were also engaged in peer-assessment of writing and of reading- and listening-based notes. Sam and Ram gave students time to self-assess during class, whereas Zakir asked his students to self-assess outside class time. Sam and Ram usually followed up by asking their students to talk about the problems in their writing and notes. All three teachers used breakout rooms for peer-assessment. Rubrics and checklists were employed to enable students to engage in these practices, though none of the teachers attempted to train their students in using them. Overall, the teachers did not seem to think carefully about how to utilize the diagnostic information obtained through the FAs or how to integrate feedback into their OFA practices.

When asked about her feedback practices, Sam tried to explain her decisions related to offering feedback:

It is challenging to provide feedback to individual students because of the sheer number of students. Since students generally don’t pay much attention to corrections I carry out in their writing, I don’t want to waste my time on that. However, I try to give them some kind of direction about how they can improve their writing and speaking skills. I believe that if they participate in FAs, they will learn, even if I don’t always tell them about every single mistake they make.

Zakir felt that FAs should be informal and that too much emphasis on performance in such tests could discourage students:

If learning is the aim of such assessments, we must emphasize student participation. If I start analyzing my students’ performance after each FA, I will waste a lot of time as most students will feel discouraged about FAs. On the other hand, if my students continuously take the informal classroom tests, and I tell them about what they can do better, I can help them learn fast. There is already so much burden on them due to online teaching, and I will risk a lack of interest on their parts. (It’s)...better to keep it informal.

Ram talked about how teaching online severely limited his options for offering feedback:

I can’t think much about feedback because it’s tough to work online for long periods. Online teaching is stressful for my students and me. It has limited my options of teaching, interaction, FA, and feedback. I had to familiarize myself with many digital tools and platforms just to teach well. I feel that I have limited options, and as a result, I try to manage FAs without being too ambitious.

Digital Tools Used for Offering Feedback

Google Forms, along with WhatsApp, emerged as the most commonly used feedback tools across the three teachers, with Facebook and Google Docs the second most preferred set of digital tools. Commonly given justifications for the choice of digital tools were ease of access to the feedback provided by the tool, students’ preference, the scope for personalizing feedback, efficiency in saving time spent on the process, and affordability. When asked to explain her liking for certain digital feedback tools, Sam talked about the factors that determined her choices:

Google Forms offers the flexibility to assess any language skill with ease. The automated scoring, the systematic display of results and its affordability are unmatchable. Mentimeter has a more interesting user interface, but I have to pay for the premium features. Similarly, WhatsApp is free and popular among students. Finally, I can’t think of conducting a class without Google Docs. They are easy to manage and share, and I don’t lose activities and rubrics once I create them on Google Docs.

Zakir was open to using anything he found accessible and content-friendly:

Content-friendly tools are those which make the teaching of specific skills easier. For example, I chose Flipgrid to help my students speak. It was free, students had fun when using it, and it was easy for me to offer feedback on it. Google tools like Docs and Forms were once again very feedback-friendly. All my rubrics and checklists were on Docs, and students received their scores and feedback immediately after writing their FAs on Forms. Facebook and WhatsApp served similar purposes.

Ram felt that feedback could easily be provided during teaching hours and that no special digital tool was necessary for providing feedback to students:

I think, if appropriately utilized, some classroom time can be spent on providing instant oral feedback. It doesn’t require too much preparation. After conducting an FA, a teacher can offer oral comments to students about their mistakes. They pay more attention when it is instantaneous and oral. However, I sometimes create audio-recorded comments on WhatsApp for my students when they share their writing with me. In all cases, I try to make feedback an integrated activity, not separate from FAs.

Discussion

The current study investigated the OFA and feedback practices of three university ESL teachers from India, Bangladesh, and Nepal. The findings indicate many similarities and a few minor differences in their practices.

Considering that the teachers had formal training in assessment, experience in teaching, and some training in online pedagogy, they were expected to carry out OFAs more effectively. In all three cases, the quality of OFAs only partially reflected the parameters discussed by Gikandi et al. (2011). It is possible that moving assessment online made conducting FAs difficult for the teachers, and an obvious explanation for that could be inadequate teacher preparation (King & Boyatt, 2014). The differences in various aspects of quality could be attributed to contextual factors (Davidson & Leung, 2009), such as the support of management, colleagues, and experts, and to personal factors, such as the teachers’ own teaching goals and their students’ motivation, which could not be explicitly traced in this study. Furthermore, the teachers could not utilize the diagnostic information obtained through the assessments, an important feature of FA emphasized by Black and Wiliam (2009), which was surprising considering that all three had formal training in assessment. Of course, as suggested by Guo and Xu (2020), the teachers did encourage students to take note of their progress, made them aware of their strengths and weaknesses, and pushed them to work hard in the right direction. In this connection, it may be interesting to look into how teachers’ assessment literacy is reflected in their OFAs. The teachers differed in their use of assessment methods, though not as much in the frequency of use, which was more or less regular. Their assessment literacy and the time needed to complete assessments could be reasons behind their employment of a small range of assessment methods. The teachers also made use of a small range of free yet basic OFA tools, such as Google Docs, WhatsApp, and Facebook, which are highlighted by McLaughlin and Yan (2017), and cited factors such as access, suitability for the content, familiarity with tools, students’ interest, and data requirements as determinants of their choices, in agreement with the claims of Chiu et al. (2005) and Yilmaz et al. (2020). It can be assumed that, apart from their attitudes towards educational technology, their respective institutions could have influenced their decisions. It is also possible that they did not have much time to improve their digital literacies before the data for this study were collected. The single most influential factor driving the three teachers’ OFA decisions was their student-centric orientation, which is in line with the findings of Agustina and Purnawarman (2020). An unexpected finding from the data analysis was that, despite claims of internet access for a large majority in India, there was very little difference among the three teachers in their access to digital tools and platforms. One may be tempted to see the digital divide between high- and low-income countries playing a role here.

When it comes to feedback, Sam, Zakir, and Ram gave more group than individual feedback, and it was usually unplanned and brief. In contrast to what has been reported by many previous researchers, such as Daradoumis et al. (2019), the teachers did not find it easy to provide feedback online. Their lack of experience in teaching online, unfamiliarity with online feedback tools, large class sizes, limitations in students’ access to the internet, high workload, and inability to afford efficient tools could be some of the main reasons behind their feedback-related decisions. In addition, they did not receive any training in OFA and feedback provision designed specifically for ESL teachers. Thus, their efforts to offer regular and instant feedback, which has been shown to have a positive affective impact on students (Khan & Khan, 2019), could be due to their own motivation and/or the demands of institutional management. A closer analysis of the data indicates that Sam, Zakir, and Ram were empathetic in their approaches to offering feedback, which could have a positive impact on student learning (Carless & Winstone, 2020). Apart from teacher feedback, there were many occasions on which students were given the opportunity to self-assess and to assess their peers’ performance. Whereas online self-assessment has been found to improve students’ self-monitoring skills (Admiraal et al., 2015), online peer-assessment has the potential to enhance peer interaction and collegiality (Mostert & Snowball, 2013). Considering the value of feedback received through online self- and peer-assessment, as claimed by Beebe et al. (2010) and Chien et al. (2020), the teachers’ efforts to engage students in self- and peer-assessment were commendable, though the number of peer-assessments was low. Moreover, the diagnostic information obtained through OFAs was not explicitly utilized for providing feedback; possible explanations include a lack of proper training in FA and a paucity of time. The teachers employed the same set of digital tools for providing feedback that they used for carrying out OFAs. Zakir’s selection of tools was topic/content-driven, as opposed to Sam’s, which was guided by ease of use (Daradoumis et al., 2019). Ram underplayed the importance of digital feedback tools. Preference for and the ability to use these tools should ideally be led by the type of feedback required, but contextual factors could have impacted the teachers’ choices. Their deficient training in OFA and, consequently, their inadequate online feedback literacy could also have shaped their feedback practices. This assumption needs to be empirically verified.

Conclusion

This study is significant for a few reasons. First, comparative studies focusing on ESL education involving India, Bangladesh, and Nepal are rare; this could be one of the first attempts to report such a study in a mainstream journal. Second, it researches OFA and the corresponding feedback process in relation to ESL/EFL teachers, both considered under-researched areas. Third, the enquiry provides qualitative information about ESL teachers’ choice of digital tools for OFAs and online feedback, which can be considered a contribution to the existing knowledge-base. Moreover, the study draws attention to OFA in resource-constrained contexts, a largely underexplored area in applied linguistics. As the findings suggest, the teachers’ practices need improvement. However, the teachers deserve credit for standing up to the challenge of teaching online even though they did not have any formal training in online teaching and assessment. With their respective governments making efforts to train teachers in these areas, they should become much more effective in carrying out OFAs and offering feedback in their online classes. They can improve their feedback practices if they start using the information obtained through OFAs. Of course, they need to adhere more closely to the learning outcomes when preparing assessment activities, which could be achieved by going back to the basics of FA, as recommended by experts in the field of language assessment. They also need to evaluate their assessment activities and feedback practices regularly so that they can keep growing as teachers and assessors. Since the study involved only three cases, its findings cannot be generalized to all university ESL teachers. Further, data collection was limited to the observation of 4 h of classes per teacher; a more extended observation period could have yielded richer and more reliable data. Finally, the study did not collect any information about what students felt about the OFAs and online feedback strategies. Future studies can further investigate online assessments carried out by ESL teachers in South Asian countries. Large-scale surveys can yield more reliable information about teachers’ practices. Another exciting area that requires research attention is ESL teachers’ online assessment and feedback literacies. The findings of this study may encourage researchers and teacher educators in India, Bangladesh, and Nepal to look into offering teachers need-based support for strengthening their ability to conduct online assessments and offer feedback.