1 Introduction

A Computer Engineering (CE) or Software Engineering (SE) degree in Turkey can be obtained by attending an on-campus degree program offered by a Turkish university. The majority of courses in these programs are on-campus courses typically offered in a traditional face-to-face lecture environment (Simon et al. 2009). In addition, students are required to be present on university campuses to attend lab sessions and complete team assignments. On the other hand, with the recent developments in information and communication technologies (ICT), higher education in Turkey has also been undergoing a digital transformation (Karabacak and Sezgin 2019). To provide a better education experience, instructors have started to blend traditional methods with digital materials, such as electronic books, video recordings, and digital interaction environments (Watson 2008; Alonso et al. 2010; Alammary 2019; Martínez et al. 2019). Therefore, conventional CE/SE education in Turkey already included some digital materials and methods and could hence be characterized as a blended type of education. The Coronavirus (COVID-19) pandemic disrupted this situation unexpectedly and made e-learning the mandatory education setting. Online lectures and other digital materials replaced face-to-face lectures, and students started to communicate through digital channels to complete team assignments and even to socialize.

The objective of this study is to understand how CE/SE students perceived their e-learning experience during the pandemic, in which on-campus learning activities, face-to-face lectures, and in-person interaction with instructors and peers were absent. To achieve this objective, student satisfaction (Bolliger 2004), instructor support, student interaction and collaboration, student autonomy (Macnish et al. 2003), and course materials (Martínez-Argüelles et al. 2013; Simonson et al. 2019) were assessed as the key factors.

Student satisfaction is a critical variable in determining the success or failure of online learners, courses, and programs (Edwards and Waters 1982; Astin 1993; Bailey et al. 1998; Bolliger 2004). Designing and implementing an effective and efficient education environment that satisfies students is a complex process involving many factors, including instructor support, student interaction and collaboration, and student autonomy (Macnish et al. 2003). Prior research revealed that instructor support is an essential factor affecting student satisfaction (Bolliger 2004; Walker and Fraser 2005; Özkök et al. 2009). Interaction among students plays an essential role in student satisfaction and learning (Moore 1989). Student autonomy describes how much control students take over their own learning and is regarded as an important factor in student satisfaction (Carver 2014; Seiver and Troja 2014). Fig. 1 shows the relationships between instructor support, student interaction and collaboration, student autonomy, and student satisfaction, as stated in the literature (Walker and Fraser 2005; Özkök et al. 2009).

Fig. 1 The relationships between instructor support, student interaction and collaboration, student autonomy, and student satisfaction

This study considers these three factors as important determinants of student satisfaction. The hypotheses below test this assumption.

  • Hypothesis 1a: Instructor support positively affects CE/SE students’ satisfaction with e-learning.

  • Hypothesis 1b: Student interaction and collaboration positively affects CE/SE students’ satisfaction with e-learning.

  • Hypothesis 1c: Student autonomy positively affects CE/SE students’ satisfaction with e-learning.

Previous research suggests that perceptions of instructor support, student interaction and collaboration, and student autonomy differ among education approaches. For instance, students perceive less instructor support when face-to-face interaction with instructors is absent (Kuh and Hu 2001; Simonson et al. 2019). Interpersonal dialogue with instructors positively affects student engagement and satisfaction (Chigeza and Halbert 2014; Nortvig et al. 2018). Regarding student interaction and collaboration, student perceptions favor the face-to-face environment (Carver 2014). Some prior studies suggest that students who were higher in independence and autonomy showed greater success in online classes than students who were lower in those traits (Milligan and Buckenmeyer 2008; Carver 2014; Seiver and Troja 2014). The second group of hypotheses aims to test these previous findings for CE and SE students.

  • Hypothesis 2a: CE/SE students perceive that instructors support them less in e-learning compared to on-campus education.

  • Hypothesis 2b: CE/SE students perceive that they interact and collaborate less with other students in e-learning compared to on-campus education.

  • Hypothesis 2c: CE/SE students perceive that they act less autonomously in e-learning compared to on-campus education.

Course material is another important factor in students' learning experience (Martínez-Argüelles et al. 2013; Simonson et al. 2019). Some researchers found that students learn better in face-to-face lectures compared to online lectures (Adams et al. 2015; Powers et al. 2016). In contrast, other researchers claim that better academic outcomes may be obtained by combining relevant materials and methods in an online environment (Northey et al. 2015). To understand the usefulness of course materials and methods in the CE/SE education context, the following hypotheses were formed:

  • Hypothesis 3a: CE/SE students perceive live lectures as less useful in e-learning compared to on-campus education.

  • Hypothesis 3b: CE/SE students perceive video recordings as less useful in e-learning compared to on-campus education.

  • Hypothesis 3c: CE/SE students perceive lecture notes as less useful in e-learning compared to on-campus education.

  • Hypothesis 3d: CE/SE students perceive reading materials as less useful in e-learning compared to on-campus education.

  • Hypothesis 3e: CE/SE students perceive digital interaction environments as less useful in e-learning compared to on-campus education.

In addition to the hypotheses, four research questions (RQ) were formed to gain insight into the challenges, the positive and negative aspects of e-learning, and recommendations on how to improve it.

  1. RQ1: What are the challenges faced by CE/SE students in e-learning?

  2. RQ2: What are the positive and negative aspects of e-learning, according to CE/SE students?

  3. RQ3: What else can be done to improve the satisfaction of CE/SE students?

  4. RQ4: What did CE/SE students do to improve their learning performance during e-learning?

2 Method

2.1 Participants

CE/SE undergraduate students were invited to fill out a questionnaire online through the www.surveymonkey.com platform in May 2020. A total of 290 participants completed the questionnaire. Of the participants, 18% were first-year students (n = 52), 32% were second-year students (n = 92), 28% were third-year students (n = 81), and 22% were fourth-year students (n = 65). While 80% of the participants were enrolled in public universities, the rest were enrolled in private universities. The majority of the participants, 93%, were studying CE; the rest were studying SE.

The Scientific Research and Publication Ethics Legislation (The Council of Higher Education 2016) published by the Council of Higher Education was taken into account when conducting this study. The professors disseminating the questionnaire were given clear information about the objective of the study. Written consent was obtained from all participants in the first section of the online survey before they filled out the questionnaire. It was made clear that the collected empirical data would be used only for academic purposes and that the questionnaire did not include any question addressing participants' identity, gender, age, or university name.

2.2 Measures

The questionnaire started with a brief explanation of the objective and anonymity of the study. A filter question was used to exclude participants who did not meet the inclusion criterion, i.e. currently being an undergraduate student in a CE/SE department of a university in Turkey.

The second section aimed at identifying which materials and methods students used in on-campus education and e-learning, and how useful they perceived these to be in both settings. For on-campus education, the predefined list of materials and methods included face-to-face lectures in the classroom, video recordings, lecture notes, reading materials, and digital interaction environments. For e-learning, face-to-face lectures in the classroom were replaced by online lectures. For each of these materials and methods, participants were asked to select “0” if they did not use the corresponding material or method. If they used it, they were asked to rate how useful they perceived it to be on a Likert scale ranging from “very useful” to “not useful at all”.

The third section intended to quantify participants’ perceptions of instructor support, student interaction and collaboration, and student autonomy. The Turkish versions (Özkök et al. 2009) of three psychological scales designed and validated by Walker and Fraser (2005) were used to measure students’ perceptions on these three independent variables for on-campus education and e-learning. The participants used a 5-point Likert scale ranging from “always” to “never” to provide their responses.

The fourth section, involving seven questions, aimed at measuring the participants' perceived satisfaction with e-learning. The participants used a 5-point Likert scale ranging from “strongly agree” to “strongly disagree” to provide their responses.

The last section included open-ended questions to enable participants to submit their opinions on challenges, positive and negative aspects, and improvement opportunities of e-learning.

All questions except the open-ended ones were mandatory to reduce the number of incomplete questionnaires. Although it is possible to use statistical techniques to estimate the values of missing data (Little and Rubin 2019), such techniques usually become inappropriate when the amount of missing data is excessive (Kitchenham and Pfleeger 2003).

A two-step process was followed to ensure content validity. First, two professors from two distinct CE departments reviewed the questionnaire. They evaluated whether the questions successfully captured the topic, and the questionnaire was revised based on their feedback. Second, two academicians from a psychology department, who are experts in conducting surveys in the field, reviewed the questionnaire and ensured that it did not contain common errors, such as leading and confusing questions.

A pilot study was performed to reduce the chance of misleading questions and poor instructions (Kitchenham and Pfleeger 2003) by distributing the questionnaire in a CE department in a public university. No feedback that resulted in a revision on the questionnaire was received.

2.3 Data collection

The target population for this study was undergraduate students in CE/SE departments of Turkish public and private universities. The questionnaire was shared with 15 professors from different universities via email. This type of sampling is known as convenience sampling, the dominant survey and experimental approach in CE/SE (Sjøberg et al. 2005). The main drawback of convenience sampling is that it may result in a homogeneous sampling frame, and such a sample may have limited generalizability to the broader population (Valerio et al. 2016). Therefore, snowball sampling (Goodman 1961) was employed to obtain a more heterogeneous sample: the professors were asked to forward the questionnaire to other instructors they knew, and some of the participants were asked to recruit their friends who met the participation criteria.

2.4 Validity and reliability analysis

Parametric and nonparametric statistics are two broad classifications of statistical analysis (Hoskin 2012). Parametric tests are used where data are normally distributed (Van Belle 2011). The most widely used parametric tests are t-test (paired or independent), ANOVA (one-way non-repeated, repeated; two-way, three-way), linear regression, and Pearson correlation (Hoskin 2012).

The skewness and kurtosis indexes were used to assess the normality of the data. Table 1 shows the indexes; the results suggested that the deviation of the data from normality was not severe, as the absolute values of the skewness and kurtosis indexes were below 3 and 10, respectively (Kline 2015). Besides, a large sample size increases statistical power by reducing sampling error (Hair et al. 2006), and for sample sizes of 200 or more, the impact of departures from normality may be negligible (Hair et al. 2006). Based on the skewness and kurtosis indexes along with the large sample size (n = 290), the collected data were treated as approximately normally distributed and hence appropriate for parametric analysis.
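The normality screening can be reproduced in a few lines of Python; the following is a minimal sketch, assuming the per-participant factor scores are stored in a CSV file with one column per scale (the file and column names are hypothetical).

    # Minimal sketch of the skewness/kurtosis screening (hypothetical names).
    import pandas as pd
    from scipy.stats import skew, kurtosis

    df = pd.read_csv("factor_scores.csv")  # hypothetical file of factor scores

    for factor in ["instructor_support", "interaction_collaboration",
                   "student_autonomy", "student_satisfaction"]:
        s = skew(df[factor])
        k = kurtosis(df[factor])  # Fisher's (excess) kurtosis: 0 under normality
        # Kline (2015): |skewness| < 3 and |kurtosis| < 10 suggest the deviation
        # from normality is not severe.
        print(f"{factor}: skewness = {s:.2f}, kurtosis = {k:.2f}")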

Table 1 Skewness and kurtosis indexes for the factors, scale reliability using Cronbach’s alpha coefficient (n = 290)

Each scale, namely instructor support, student interaction and collaboration, student autonomy, and student satisfaction, was assessed for internal consistency. Table 1 displays Cronbach's alpha coefficients for each of the scales used in the questionnaire. It is suggested that reliability should ideally be high and should not be lower than 0.70 (Carmines and Zeller 1979). The Cronbach's alpha coefficients of the scales ranged from 0.82 to 0.94. This range is considered good to excellent (George and Mallery 2010). In addition, Average Variance Extracted (AVE) and Composite Reliability (CR) values were calculated and are presented in Table 1. AVE for all constructs is above 0.5 and CR for all constructs is above 0.7. Given the high Cronbach's alpha coefficients and the AVE and CR values above their thresholds, it can be concluded that the scales are reliable.
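For readers who wish to replicate the reliability analysis, Cronbach's alpha has a simple closed form; the sketch below computes it for one scale, assuming a DataFrame whose columns are that scale's Likert items (variable names are hypothetical).

    # Minimal sketch of Cronbach's alpha for one scale (hypothetical names).
    import pandas as pd

    def cronbach_alpha(items: pd.DataFrame) -> float:
        """alpha = k/(k-1) * (1 - sum of item variances / variance of total)."""
        k = items.shape[1]
        item_vars = items.var(axis=0, ddof=1)
        total_var = items.sum(axis=1).var(ddof=1)
        return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

    # e.g. alpha for the instructor support items; 0.70 is the conventional
    # lower bound, and the scales in this study ranged from 0.82 to 0.94.
    # print(cronbach_alpha(df[instructor_support_columns]))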

Construct validity refers to the degree to which a scale measures the construct it was designed to measure. Construct validity was evaluated using a principal component analysis (PCA) with Varimax rotation and Kaiser normalization. For the three scales used for on-campus education, Bartlett's test is significant (chi-square = 2936; df = 171; p < 0.001) and the KMO measure of sampling adequacy is high (0.897). For the four scales used for e-learning, Bartlett's test is significant (chi-square = 5372; df = 325; p < 0.001) and the KMO measure of sampling adequacy is high (0.920). These results indicated that the data are appropriate for factor analysis.
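The adequacy tests and the rotated extraction can be approximated with the factor_analyzer package; the following minimal sketch assumes the 19 on-campus items sit in a DataFrame (the file name and column layout are hypothetical).

    # Minimal sketch of the KMO/Bartlett checks and a Varimax-rotated
    # extraction, assuming a hypothetical CSV of the 19 on-campus items.
    import pandas as pd
    from factor_analyzer import FactorAnalyzer
    from factor_analyzer.factor_analyzer import (calculate_bartlett_sphericity,
                                                 calculate_kmo)

    items = pd.read_csv("on_campus_items.csv")  # hypothetical

    chi_square, p_value = calculate_bartlett_sphericity(items)
    kmo_per_item, kmo_total = calculate_kmo(items)
    print(f"Bartlett chi2 = {chi_square:.0f}, p = {p_value:.4f}; "
          f"KMO = {kmo_total:.3f}")

    # Varimax rotation; method="principal" approximates the reported PCA.
    # Three components for the on-campus scales (four for e-learning).
    fa = FactorAnalyzer(n_factors=3, rotation="varimax", method="principal")
    fa.fit(items)
    print(pd.DataFrame(fa.loadings_, index=items.columns))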

As Table 2a shows, for on-campus education, 19 items loaded on the three dimensions, and as Table 2b displays, for e-learning, 26 items loaded on four dimensions. The resulting factor structure demonstrated that the factors the questionnaire was designed to assess were measured within each of the instrument's scales.

Table 2 Principal Component Analysis (PCA) of the Scales used in the Questionnaire. (a) PCA of the three scales used for on-campus education, (b) PCA of the four scales used for e-learning

2.5 Qualitative analysis

All free-text answers were recorded in a spreadsheet file, and each answer was initially read multiple times to identify meaning units, i.e., the different patterns the participants used to share their opinions. The meaning units were then inductively coded (Guest et al. 2012). A code symbolically assigns a summative or evocative attribute to a portion of qualitative data (Miles et al. 2018). Coding was done in cycles. In the first cycle, any emerging patterns of similarity or contradiction were identified. In the second cycle, the codes were collapsed and expanded to understand any patterns. After the main themes and codes were extracted, the codes assigned to each response were revised and the frequencies were calculated to report the results. Some examples from the answers were also extracted to illustrate the emerging themes; these examples are presented in the results.
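The coding itself is manual, but the frequency step is mechanical; a minimal sketch, assuming each response row carries its assigned codes as a semicolon-separated string (file and column names are hypothetical):

    # Minimal sketch of the code-frequency calculation (hypothetical names).
    import pandas as pd

    coded = pd.read_csv("coded_answers.csv")  # one row per free-text response
    frequencies = (coded["codes"]             # e.g. "CHA1;CHA4"
                   .str.split(";")
                   .explode()
                   .str.strip()
                   .value_counts())
    print(frequencies.head(10))  # the top-10 themes reported in Figs. 2-6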

3 Results

3.1 Quantitative analysis results

3.1.1 Instructor support, student interaction and collaboration, and student autonomy

Table 3 shows the results of the paired samples t-test, which was conducted to understand whether the participants perceive a difference in the levels of instructor support, student interaction and collaboration, and student autonomy in on-campus education and e-learning. On average, the perceived level of instructor support was higher in on-campus education (M = 3.70, SD = 0.77) compared to e-learning (M = 3.15, SD = 0.92). This difference, 0.55, 95% CI [0.43, 0.66], was statistically significant, t(289) = 9.30, p < 0.001. Similarly, the perceived level of student interaction and collaboration was higher in on-campus education (M = 3.82, SD = 0.85) than e-learning (M = 2.84, SD = 1.10). This difference, 0.97, 95% CI [0.82, 1.13], was statistically significant, t(289) = 12.56, p < 0.001. There was not a significant difference in the perceived level of student autonomy for on-campus education (M = 3.76, SD = 0.91) and e-learning (M = 3.89, SD = 0.90); t(289) = −1.86, p = 0.064.
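A minimal sketch of the paired samples t-test reported in Table 3, assuming two arrays of per-participant scale scores (the variable names are hypothetical):

    # Minimal sketch of a paired t-test with a 95% CI for the mean difference.
    import numpy as np
    from scipy import stats

    def paired_t_report(on_campus, e_learning, alpha=0.05):
        diff = np.asarray(on_campus) - np.asarray(e_learning)
        t_stat, p_value = stats.ttest_rel(on_campus, e_learning)
        se = stats.sem(diff)  # standard error of the mean difference
        ci = stats.t.interval(1 - alpha, len(diff) - 1,
                              loc=diff.mean(), scale=se)
        return diff.mean(), ci, t_stat, p_value

    # e.g. instructor support: mean difference 0.55, 95% CI [0.43, 0.66],
    # t(289) = 9.30, p < 0.001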

Table 3 The perceived level of instructor support, student interaction and collaboration, and student autonomy in on-campus education and e-learning. The paired samples t-test shows the perceived differences between two settings

Finding

Based on the paired samples t-test, hypotheses 2a and 2b were supported, while hypothesis 2c was rejected.

3.1.2 The predictors of student satisfaction

Associations between student satisfaction with e-learning and the three factors (instructor support, student interaction and collaboration, and student autonomy) were explored using Pearson correlation analysis and regression analysis. Based on the Pearson correlation analysis, instructor support (r = 0.45, p < 0.01), student interaction and collaboration (r = 0.47, p < 0.01), and student autonomy (r = 0.46, p < 0.01) are positively related to student satisfaction. To ascertain which factors are independently related to student satisfaction when all other factors are mutually controlled, the regression coefficients were examined. A significant regression equation was found (F(3, 286) = 52.016, p < 0.001), with an R square value of 0.346. The perceived satisfaction of the participants is equal to −0.353 + 0.328 (instructor support) + 0.256 (student interaction and collaboration) + 0.370 (student autonomy), where all three factors were measured using a 5-point Likert scale ranging from “always” to “never”. According to the results, all three factors are independently, positively, and significantly related to student satisfaction, and together they explain 35% of the variation in student satisfaction.
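A minimal sketch of the correlation and regression analyses, assuming a DataFrame with one column per factor score (the file and column names are hypothetical):

    # Minimal sketch of the Pearson correlations and the multiple regression.
    import pandas as pd
    import statsmodels.api as sm

    df = pd.read_csv("factor_scores.csv")  # hypothetical
    predictors = ["instructor_support", "interaction_collaboration",
                  "student_autonomy"]

    # Pearson correlation of each factor with satisfaction
    print(df[predictors + ["satisfaction"]].corr()["satisfaction"])

    # OLS regression: satisfaction on the three factors
    X = sm.add_constant(df[predictors])
    model = sm.OLS(df["satisfaction"], X).fit()
    print(model.summary())  # F(3, 286), R^2 = 0.346, and the coefficients
    # e.g. satisfaction = -0.353 + 0.328*support + 0.256*interaction
    #                     + 0.370*autonomy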

Finding

Pearson correlation and regression analyses suggest that a higher level of each factor is associated with greater student satisfaction in e-learning. Hypotheses 1a, 1b, and 1c were supported.

3.1.3 The materials and methods used for learning

The participants perceived digital interaction environments as the most useful medium for learning in both on-campus education (M = 3.83, SD = 1.04) and e-learning settings (M = 3.75, SD = 1.15). Lecture notes were recognized as equally valuable as digital interaction environments in on-campus education (M = 3.83, SD = 0.95). While face-to-face lectures were perceived as the third most useful method in on-campus education (M = 3.72, SD = 1.09), digital live lectures were regarded as the least helpful method in e-learning (M = 3.27, SD = 1.26). Reading materials were seen as beneficial in both settings (on-campus: M = 3.65, SD = 0.94; e-learning: M = 3.50, SD = 1.11). While the participants found video recordings useful in both environments (on-campus: M = 3.56, SD = 0.95; e-learning: M = 3.60, SD = 1.13), a significant percentage of the participants did not use video recordings in on-campus education: 56% and 90% of the participants reported using video recordings in the on-campus and e-learning settings, respectively.

A paired samples t-test was conducted to understand whether the participants perceived the usefulness of a material or method differently in on-campus education and e-learning. Since a paired test requires a rating in both settings, only the responses of participants who used a material/method in both on-campus education and e-learning were retained. Out of 290 participants, 269, 155, 281, 250, and 223 used live lectures, video recordings, lecture notes, reading materials, and digital interaction environments in both settings, respectively.
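This filtering step is straightforward to express in code; a minimal sketch, assuming a wide table with one pair of usefulness columns per material and a “0” marking non-use (column names are hypothetical):

    # Minimal sketch of the pairwise filtering before the paired t-test.
    import pandas as pd

    ratings = pd.read_csv("usefulness_ratings.csv")  # hypothetical
    both = ratings[(ratings["video_on_campus"] > 0) &
                   (ratings["video_e_learning"] > 0)]
    print(len(both))  # e.g. 155 of 290 participants for video recordings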

Table 4 shows the results of the paired samples t-test. On average, the perceived usefulness of face-to-face lectures was higher (M = 3.77, SD = 1.07) compared to online lectures (M = 3.28, SD = 1.25). This difference, 0.48, 95% CI [0.27, 0.70], was statistically significant, t(268) = 4.40, p < 0.001. Similarly, lecture notes were perceived as more useful in on-campus education (M = 3.84, SD = 0.95) than in e-learning (M = 3.61, SD = 1.13). This difference, 0.22, 95% CI [0.10, 0.35], was statistically significant, t(280) = 2.47, p < 0.05. The participants perceived reading materials as more useful in on-campus education (M = 3.67, SD = 0.94) compared to e-learning (M = 3.53, SD = 1.09). This difference, 0.14, 95% CI [0.03, 0.25], was statistically significant, t(249) = 2.47, p < 0.05. There was not a significant difference in the perceived usefulness of video recordings between on-campus education (M = 3.59, SD = 0.94) and e-learning (M = 3.59, SD = 1.16); t(154) = −0.08, p = 0.934. Similarly, the participants did not perceive a difference in the usefulness of digital interaction environments between on-campus education (M = 3.87, SD = 1.02) and e-learning (M = 3.80, SD = 1.13); t(222) = 1.28, p = 0.201.

Table 4 The differences in the perceived usefulness for the materials/methods in on-campus education and e-learning based on paired samples t-test

Finding

Based on the paired samples t-test, while hypotheses 3a, 3c, and 3d were supported, hypotheses 3b and 3e were rejected.

3.2 Qualitative analysis results

3.2.1 Challenges of e-learning

A total of 227 participants reported one or more challenges of e-learning as a free-text answer. Figure 2 shows the top ten challenges most frequently faced by the participants. The most commonly encountered challenge was problems related to Internet connection and infrastructure (CHA1): the participants complained heavily about Internet connection stability and speed, as well as power cuts. The participants stated that they had difficulties in self-motivation and concentration on courses (CHA2), and in maintaining self-discipline (CHA7). Exams (CHA3), too many and overly difficult assignments (CHA5), and insufficient course materials (CHA6) also challenged the participants in e-learning. One participant commented on the exams: “The exams are improper for e-learning and exam durations are not sufficient.” The participants encountered difficulties in interacting with instructors (CHA4) and their classmates (CHA8). One participant commented: “In the past, when I needed to ask questions to my professors, it was enough to go to their office, but now I have to send an e-mail. This slows down communication.” Another remarked: “When I have questions about homework, I have difficulty getting quick answers and not being able to cooperate with my friends.” Performing team assignments (CHA10) was another challenge, mostly due to problems in communicating with classmates. Some participants thought that instructors' expectations of students were too high and that the workload in e-learning had increased compared to on-campus education.

Fig. 2 Top 10 challenges faced by the participants in e-learning (n = 227)

3.2.2 Perceived positive and negative aspects of e-learning

To explore the positive and negative aspects of e-learning perceived by the participants, 217 and 213 free-text answers were analyzed, respectively. As Fig. 3 shows, the most positively viewed aspect of e-learning is on-demand access to course materials (PAS1). One student commented: “Unlike face-to-face lectures, [the lectures are] accessible at any time and place.” Another participant remarked: “The only positive aspect is that we can revisit the lecture recordings.” Also related to on-demand access, the participants appreciated flexible scheduling (PAS3) and location convenience (PAS5). Avoiding travel (PAS2) and spending less (PAS7) were perceived positively by the participants. Also, some participants found e-learning safer (PAS10) considering the extraordinary conditions of the COVID-19 pandemic. Some participants considered e-learning more time-efficient (PAS4). Some participants improved their self-learning and research competencies (PAS6) and felt less stress since, in most cases, assignments replaced exams for course assessment (PAS8).

Fig. 3 Top 10 positive aspects of e-learning perceived by the participants (n = 217)

As Fig. 4 shows, interaction with instructors (NAS1) was the most frequently mentioned negative aspect, although four participants perceived better instructor support (PAS9) in e-learning. Interaction in general (NAS4) and interaction with classmates (NAS8) were also negatively perceived aspects of e-learning. Closely related to interaction, the lack of social life was another drawback (NAS5); this finding might have been affected by the restrictions imposed due to the COVID-19 pandemic. Some participants faced focus and motivation (NAS3) and self-discipline (NAS7) problems, and some were not satisfied with their learning performance (NAS6) in e-learning. The participants perceived assessment (NAS2) as a downside of e-learning. Most of the responses emphasized unfairness due to illegitimate cooperation among some students during assessments, especially in take-home exams. One participant commented: “Exams are not fair and reliable.” Another student emphasized: “Insufficient exam durations and hard exams to prevent cheating.” Some students complained about a heavier workload (NAS9), and some participants saw lab sessions (NAS10) as a negative side of e-learning.

Fig. 4 Top 10 negative aspects of e-learning perceived by the participants (n = 213)

3.2.3 Improvement opportunities for e-learning

A total of 177 participants reported their opinions on what could be done to improve their satisfaction with e-learning. As Fig. 5 shows, the most frequently mentioned opportunity concerned improving the materials and methods used in courses (ISA1). One participant commented: “More course resources can be provided.” Another student asked for more diverse resources: “More online interactive course materials can be offered.” A third remarked: “Lecture slides should be updated for the e-learning environment.” The participants asked for the adaptation of assessments (ISA2) and assignments (ISA3) to e-learning. According to some participants, instructors should rearrange lectures (e.g., their duration, breaks, and content) (ISA7) and promote student involvement (ISA9). Not surprisingly, the learning platform (ISA4) is an essential component of e-learning and a target for improvement. The participants called for more opportunities to interact with instructors (ISA5); some stated that Q&A sessions (ISA8) and office hours (ISA10) could improve their interaction with instructors and hence their learning performance. A minority of the participants had no hope for the success of e-learning and were in favor of switching back to on-campus education (ISA6).

Fig. 5 Top 10 themes proposed by the participants for increasing student satisfaction with e-learning (n = 177)

3.2.4 The methods used by the participants to improve learning performance in e-learning

A total of 166 participants reported what they did to improve their learning performance in e-learning. As Fig. 6 shows, more than half of the participants who responded to this question used other online resources (ILP1), such as Udemy and YouTube, to enhance their learning performance. Some of them revisited video and lecture recordings (ILP2), taking advantage of on-demand access. While some of the participants preferred self-study (ILP9), others interacted with their classmates (ILP8). Studying regularly (ILP3), doing more research (ILP4), taking notes (ILP5), repetition (ILP6), and studying more (ILP7) were techniques used by some of the participants. A minority of the participants tried to apply the knowledge (ILP10) obtained from courses to various problems.

Fig. 6 Top 10 actions taken by the participants to improve their learning performance in e-learning (n = 166)

4 Discussion

This study aims to understand how CE/SE undergraduate students perceived their e-learning experience during the COVID-19 pandemic. The participants rated their perceived satisfaction with e-learning as 2.85, slightly under the midpoint of the 5-point Likert scale. The reason behind this finding could be the moderate perceived levels of instructor support and student interaction and collaboration in e-learning. Since the transition from on-campus education to e-learning happened unexpectedly and in a short period, instructors might not have been able to adapt their courses to the new conditions. This section unifies the results presented in the previous section and gives some sample practices from the literature for enhancing the learning experience using ICT within the scope of CE/SE education.

4.1 Course materials and methods

The participants stated that they applied various methods to improve their learning performance, including self-study, interacting with their classmates via digital environments (Çakıroğlu 2014), conducting more research, exploring other online resources, and revisiting lecture/video recordings prepared by the course instructor. This finding is consistent with learning styles theory, which emphasizes that students learn more when the educational experience is geared toward their learning styles (Shih and Mills 2007). Instructors can support students in discovering the proper learning style for themselves, especially by guiding them in accessing suitable materials and methods. The participants faced challenges with the sufficiency of course materials and methods, and 66 participants considered that improving course materials and methods would result in higher student satisfaction. Another critical point is that the participants found on-demand access to course materials to be the most positive aspect of e-learning. They also appreciated the flexible scheduling and location convenience of e-learning, in line with findings in the literature, such as Arbaugh (2000), Ersoy (2003), Djenic et al. (2010), and Hannay and Newvine (2006). These findings support the value of integrating digital content into course materials. Fox and Patterson (2013), who are SE professors, claimed that online courses and electronic books might become the textbook of the twenty-first century.

There is ample evidence of the positive effects of blending a wide variety of course materials and methods to support learning effectively (Hannay and Newvine 2006; Djenic et al. 2010; Alammary 2019). Instructors may provide a wide range of course materials to guide students in discovering their learning styles and to address the challenge of sufficiency of course materials. Students can then use a suitable subset of the course materials based on their preferences. Instructors should specify the materials covering the core mandatory course content in advance to guarantee that all students agree on the minimum learning outcomes of a specific course. Such a specification may also prevent students from feeling that instructors expect too much from them.

A total of 85 participants reported that they used other online resources to improve their learning performance. Instructors can provide an additional list of online resources, such as relevant MOOCs, digital materials, and web sites. Such a list could be a good starting point for students who need a variety of course materials to learn.

Lecture/video recordings are another vital course material identified by this study. Participants reported that they used lecture/video recordings in e-learning settings (90%) and found them useful with a mean of 3.56 out of 5. Video recordings have been perceived as one of the four most positive aspects of a blended learning environment for a programming fundamentals course (Djenic et al. 2010). Video lectures were one of the fundamental components of a SE course delivered in a flipped classroom format (Erdogmus and Péraire 2017). In a SE MOOC, the instructors found video lectures highly efficient in conveying information, and also observed that students could cope with dense information by pausing and reviewing videos at any point (Fox and Patterson 2013). The participants of this questionnaire stated that they revisited lecture/video recordings, took notes, and repeated the content to improve their learning performance. This study also observed that the use of lecture/video recordings increased from 56% to 90% after students switched from on-campus education to e-learning. The main reason could be that lecture/video recordings are rare in on-campus education in Turkish universities. Switching from on-campus education to e-learning may have revealed that students can benefit from lecture/video recordings to improve their learning performance.

Suggestion 1

Instructors should provide an inventory of learning content involving various course materials. For instance, the author used an extensive list of course materials in a Software Project Management course offered in Spring 2020. The list contained book chapters (Villafiorita 2014; Cobb 2015; Goodpasture 2010; Greene and Stellman 2018), videos from MOOCs (AdelaideX 2020; Grushka-Cockayne 2020; Johnson 2020; Meloni 2020; Orso 2020), lecture slides, and pre-recorded videos prepared by the instructor.

The participants found face-to-face lectures more useful than digital live lectures. Some instructors might have reused the content and methods from their classrooms without any adaptation to the e-learning environment. This can be inferred from the improvement suggestions on lecture arrangement (such as duration, breaks, and content) and on promoting student involvement. Q&A sessions may also increase the perceived usefulness of live sessions; in a SE course, Erdogmus and Péraire (2017) complemented video lectures involving theoretical content with short live Q&A sessions.

Suggestion 2

Instructors should adapt their course materials to the e-learning environment. They can use cloud-based tools to promote student involvement. For instance, lecture slides can be shared via zeetings.com and enriched with questions asked to students during the lecture. In addition, students can be engaged with live polls published on mentimeter.com and sli.do. Online quizzes can be published using a gamified approach on kahoot.com (Compton and Allen 2018).

Last but not least, the learning platform is also significant for delivering a course in a blended or e-learning environment (Erdogmus and Péraire 2017). Some of the participants identified the learning platform as one of the components to be improved. In addition, the root causes of the connection and infrastructure problems faced by students should be identified by the universities.

Suggestion 3

The universities should analyze their infrastructure and learning platform based on the data collected during COVID-19 pandemic. For instance, Favale et al. (2020) analyzed the changes in the traffic patterns in the Politecnico di Torino campus in Italy. They shared the challenges faced and the solutions they applied during COVID-19 pandemic. Other universities can benefit from these experiences and form their own “education continuity plan” under pandemic conditions.

4.2 Instructor support

Prior research revealed that instructor support is an essential factor affecting student satisfaction (Bolliger 2004; Walker and Fraser 2005; Özkök et al. 2009). Based on the quantitative analysis results, the participants perceived significantly less instructor support in e-learning compared to on-campus education. In line with the quantitative results, the qualitative analysis showed that 14% of the participants who reported the challenges of e-learning perceived interaction with the instructor as a challenge. Furthermore, communication with the instructor was the most frequently mentioned negative aspect of e-learning, and 13 participants thought that better interaction with instructors would improve the quality of e-learning. There are many factors – such as communication, feedback, encouragement, accessibility, and professionalism – that can affect perceived instructor support (Bolliger 2004). Earlier research found that students expressed a desire for more interaction about their education and their professional careers (Kilicay-Ergin and Laplante 2012).

Suggestion 4

Instructors can organize office hours to talk about career plans and the learning performance of students. In addition, Q&A sessions can be a good medium to interact with students.

While more interaction between instructors and students seems to improve student satisfaction, all of these actions bring extra workload for instructors. Therefore, it is vital to select instructors who are interested in teaching in blended and e-learning settings (Sun et al. 2008). Moreover, professional expertise should not be the sole criterion in choosing online instructors; their attitudes toward using ICT in delivering education impact students' attitudes and affect their performance (Sun et al. 2008). In this respect, CE/SE instructors may be somewhat better positioned than instructors in other disciplines due to their expertise in ICT.

4.3 Student interaction and collaboration

Interaction among students plays an essential role in student learning in e-learning (Moore 1989). A minority of participants stated that they interacted with their classmates to improve their learning performance. On the other hand, based on the quantitative analysis results, the participants perceived significantly less interaction and collaboration with their classmates in e-learning compared to on-campus education. Complementing these findings, the participants mentioned interaction with classmates as both a challenge and a negative aspect. Moreover, the participants faced challenges in performing team assignments, mainly due to interaction difficulties.

Suggestion 5

Instructors should set a communication platform for their courses and announce it in the first session. CE/SE instructors successfully utilized such platforms in a requirements engineering course (Kilicay-Ergin and Laplante 2012) and in some CE laboratory courses involving hardware (Digital Design, Computer Networks, VLSI Design) (Saniie et al. 2015).

Suggestion 6

Instructors can use collaborative coding platforms to promote student interaction. Teiniker et al. (2011) used Google Code as the development and communication platform in their SE course to enable student collaboration. Google Colaboratory can be used in various courses in the CE/SE curriculum, including coding (Tock 2019), artificial intelligence (Nelson and Hoover 2020), and robotics programming (Cañas et al. 2020).

Suggestion 7

Instructors can use SE practices, such as reviews and retrospectives, to promote collaborative learning in SE-related courses (Teiniker et al. 2011; Kropp et al. 2016).

Suggestion 8

Instructors can organize virtual collaborative problem-solving events, i.e., hackathons, to foster student interaction and collaborative learning. Hackathons have proved successful for SE-related tasks (Porras et al. 2018; Valença et al. 2019; Falk Olesen and Halskov 2020) and are also applicable to other types of tasks involving teamwork.

4.4 Assessment

Although the questionnaire did not include any questions regarding assessment, the qualitative analysis identified assessment as one of the main themes. A total of 33 participants (15%) identified taking exams as a challenge, and 32 participants (15%) perceived assessment as a downside of e-learning. Most of the responses emphasized unfairness due to illegitimate cooperation among some students during assessments, especially in take-home exams. In addition, 36 participants thought that assessment is an improvement area for e-learning. In line with these findings, Erdogmus and Péraire (2017) sought fairer ways of assessing individual performance while encouraging better collaboration among students in their SE course.

Suggestion 9

Instructors can use auto-graders, such as nbgrader (Blank et al. 2019), in some programming tasks to achieve fair assessment and to decrease the grading burden (Fox and Patterson 2013).
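As an illustration, a minimal sketch of an nbgrader-style autograded exercise: the instructor's reference solution sits between BEGIN/END SOLUTION markers (stripped from the student version), and hidden asserts grade the submission at grading time. The task itself is a hypothetical example.

    def fibonacci(n: int) -> int:
        """Return the n-th Fibonacci number (0-indexed)."""
        ### BEGIN SOLUTION
        a, b = 0, 1
        for _ in range(n):
            a, b = b, a + b
        return a
        ### END SOLUTION

    # Autograded test cell: visible checks plus hidden tests
    assert fibonacci(0) == 0
    assert fibonacci(10) == 55
    ### BEGIN HIDDEN TESTS
    assert fibonacci(20) == 6765  # hidden from students
    ### END HIDDEN TESTS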

Thurmond et al. (2002) stated that diversity in assessment considerably influences e-learning satisfaction. Besides, Sun et al. (2008) found that diversified assessment tools and methods increase student satisfaction due to the various types of feedback obtained from each assessment.

Suggestion 10

Instructors can use just-in-time assessments to improve students’ preparedness for live sessions and their knowledge retention (Erdogmus et al. 2019).

Suggestion 11

Instructors can use an evaluation framework, such as the one proposed by Tubino et al. (2020), to assess student outcomes individually in team assignments.

4.5 Potential threats to validity

This study is subject to some threats to construct and content validity. To enhance construct validity, validated scales were used for instructor support, student interaction and collaboration, and student autonomy. The reliability analysis (Cronbach's alpha, Average Variance Extracted, and Composite Reliability) revealed high internal consistency of the responses on instructor support, student interaction and collaboration, student autonomy, and student satisfaction. Moreover, construct validity was evaluated from another perspective using a Principal Component Analysis (PCA) with Varimax rotation and Kaiser normalization. The results revealed that the factors the questionnaire was designed to assess were measured within each of the instrument's scales without exception.

To ensure the content validity of the questionnaire, two professors from two distinct CE departments reviewed the questionnaire and evaluated whether the questions successfully capture the topic. Also, two academicians from a psychology department, who are experts in conducting surveys in the field, checked the questionnaire against common errors such as leading and confusing questions. Another potential threat to internal validity could be that the participants did not have a common understanding of the questions. To minimize this potential threat, a pilot study was conducted in a CE department.

One of the potential threats to external validity in this study is selection bias. Convenience sampling has some drawbacks, such as its potential for leading to a homogeneous sampling frame and producing results with limited generalizability to the broader population (Valerio et al. 2016). To mitigate these drawbacks, convenience sampling was combined with snowball sampling by asking the instructors to share the questionnaire with their academic networks. Four of the instructors contributed to snowball sampling by confirming that the questionnaire was announced in CE/SE departments of six other universities. Nevertheless, the limited sample size constrains the generalizability of the findings.

5 Conclusions and future work

The COVID-19 pandemic forced instructors to integrate new materials and methods into their courses in a very short period to maintain the quality of education under the limitations posed by the pandemic. This study analyzed the self-reported opinions and perceptions of Turkish CE/SE undergraduate students regarding on-campus education and e-learning, measured by responses to an online questionnaire. The COVID-19 pandemic created unique conditions for conducting such a study, since thousands of CE/SE undergraduate students switched from on-campus education to e-learning. Obtaining feedback from students is an essential part of identifying what has worked and where improvements could be made in the future. The author hopes that this study inspires more research on the development of the CE/SE curriculum by using ICT and blending various materials and methods. A potential direction for future work is to listen to the other critical stakeholder of education, i.e., instructors.