Abstract

Background. Self-assessment or autonomous assessment, understood as a practice in which students judge their own achievements and reflect on them, is considered a key element of the assessment process in higher education. A common procedure in university environments is to apply information and communication technologies to carry out self-assessment activities and record the answers. The aim of this study is to analyse whether e-self-assessment, using objective and short-answer tests as a complementary teaching activity on the Moodle virtual platform, improves student performance. Method. The sample consisted of 406 students of two subjects in the degree course for Primary and Early Childhood Education and in the degree course for Teacher of Primary Education; they completed a 100-question self-assessment questionnaire on the content of the subjects on the Moodle virtual learning platform, as well as a satisfaction scale. Results. The results confirm high participation in this innovative methodology; e-self-assessment improved student achievement and increased student satisfaction. Conclusions. E-self-assessment can help students take an active role in their learning process, increase their achievement, promote self-directed learning, and develop metacognitive skills.

1. Introduction

Assessment is the final element in the teaching and learning process, and one in which the student can participate in three ways: self-assessment, peer assessment, and shared assessment or co-assessment. Furthermore, it may be considered an opportunity in itself to foster meaningful learning and to develop competencies in university students [1]. It is of such fundamental importance for university teaching that different studies have shown that it determines students' learning outcomes more than the official syllabus does [2]. In recent years, the term “learning-oriented assessment” has been coined [3–5], which brings together three essential questions: (a) the development of assessment tasks for learning; (b) the involvement of the student in assessment; and (c) the offering of assessment results as a form of feedback [6].

Self-assessment or independent assessment, one of the three ways in which students can participate in assessment, is considered an essential element of the assessment process and is understood as a practice in which learners judge their own achievements with respect to a specific task while reflecting on the level of control they have reached in that specific area of learning [7–9]. Such are its benefits that a move from examinations to assessment tasks has been advocated; within such tasks, self-assessment becomes a teaching tool in itself, through which knowledge is acquired and learning is promoted, without teachers losing their central role in the teaching and learning process [10, 11].

Even though self-assessment brings together the three essential questions present in learning-oriented assessment [12] and its advantages have been demonstrated, it is a methodology that has scarcely been used in innovative university teaching. Thus, in a meta-analysis covering the period from 1932 to 1988, only 48 research papers referred to Higher Education [13]. These data continue to be observed in a number of different works [14, 15], which show that the use of different participative assessment methods at universities is scarce, between 2.7% and 8% [6], and which point out the necessity of establishing formative processes, for both professors and students, that address knowledge of these modalities and their implementation, with the objective of promoting autonomous and strategic learning [15].

Over the last decade, student self-assessment has been gaining ground in university practice [16, 17] because of its close interrelationship with the promotion of autonomous learning, given that, with the correct orientation, the teacher can train students to establish their learning objectives, to self-monitor, to self-correct, and, in general, to self-regulate their learning process [13, 18–20]. In this way, a methodological change is suggested in university teaching in which students should be as autonomous as possible in their learning and should take on responsibility for the organization and development of their academic work, with the university teacher acting as a facilitator of this process and helping them to construct their learning [21], as laid down in the context of the European Higher Education Area.

Furthermore, it has been demonstrated that self-assessment and peer assessment promote competencies such as the capacity for analysis, critical thinking, decision-making, and the acceptance of responsibilities [22]. Taking an active role in assessment implies the development of metacognitive abilities, which, in turn, results in the development of autonomy. According to Osses and Jaramillo [23], “it is possible to affirm that metacognition is a viable way of achieving the fully autonomous development of students, this being reflected, among other aspects, in learning which transcends the scope of school learning and which is projected into student life as ‘learning to learn’.”

Moreover, it may also be said, with respect to self-assessment, that making students participants and protagonists in evaluative practices is a way of integrating assessment into the teaching and learning process. In this way, evaluation stops being something external, the last step in this process, and becomes something central that runs parallel to the entire teaching and learning process. This has been called sustainable assessment, and it should be considered an integral part of the curriculum, with the aim of creating effective lifelong learners and of assessing the tasks they are going to face [24].

The concept of e-self-assessment may be defined as an electronic assessment process in which Information and Communications Technologies (ICT) are applied in order to carry out self-assessment activities as well as to register the student’s answers [16]. The introduction of these technologies into the classroom is proving to be a strong ally for teachers in the teaching and learning process and, consequently, in assessment [25–28]. In this regard, Wang and Kinuthia [29] suggest that incorporating this technology into the learning environment can serve, among other things, to motivate students and to assess and value learning objectives. A clear example of this is the use of mobile phones in the classroom for learning purposes through Mobile Game-Based Learning in higher education settings, which has been observed to be a powerful e-learning tool for students to learn and advance their knowledge [30]. An exhaustive review of e-assessment in Higher Education using different assessment strategies can be found in Buzzetto-More and Alade [17]. In particular, e-assessment strategies offer students the opportunity to become part of an electronic learning community [31], and this contributes to making them more autonomous, developing the necessary skills to judge and manage their own learning, and to the construction of a more adequate and meaningful learning experience [27, 32].

The use of a virtual environment, such as the Moodle platform, to develop a system of self-assessment with objective and short-answer tests gives students the possibility of adapting their learning pace to the temporal and spatial flexibility of this type of assessment. Furthermore, together with the immediate feedback given on the answers, which acts as a motivating element encouraging student effort, self-assessment takes on the value of a metacognitive tool, given that it orients students in their activities [33, 34]. As Biggs [2] affirms, self-assessment, and in particular e-self-assessment, not only sharpens the learning of content but also gives rise to the learning of metacognitive processes of supervision, which will be essential in students’ professional and academic lives. To this effect, a recent study carried out by Ruiz et al. [35] reveals that students involved in e-assessment aimed at learning develop their basic competencies significantly more than students working under a traditional assessment system.

The student who learns to self-assess or e-self-assess also learns to identify and express their needs, to set objectives and to design action plans to achieve them, to identify resources, to value achievements, to increase motivation and confidence in their own abilities, to develop critical thinking and the capacity for analysis, etc. [17, 24, 36], these being cross-curricular competencies included in Undergraduate and Master’s degrees in our universities.

The interest in this type of innovative work lies in improving students’ academic performance through the use of questionnaires as a self-assessment tool in virtual environments. Through the immediate feedback given on the answers to the questions included in the questionnaire, students, as part of a co-productive process, can detect their specific learning difficulties as well as learn to self-assess, that is to say, to evaluate how they have overcome these difficulties, how they have modified their learning strategies, and to analyse the result of the assessment process and the quality of the knowledge acquired (metacognition) [12]. To date, the positive effects of e-assessment have been studied: it does not add stress to the assessment process; it is useful, adequate, and accessible to university students; it improves reliability and learning expectations; it adds value to the learning process; and it facilitates learning by bridging the gap between the starting level of the student and the goal level [24, 26–28, 37, 38]. However, whether e-self-assessment provides the same benefits and improves the teaching and learning process has not been studied.

Therefore, the principal objective of this research is to analyse whether e-self-assessment through the virtual platform Moodle, as a complementary activity of course delivery, improves student performance and activates processes of metacognition in higher education settings.

Secondary objectives include promoting autonomous work and the participation of students in their learning process, increasing collaboration among teachers through the joint development and application of an e-self-assessment tool, and incorporating innovative tools into the assessment of content.

2. Materials and Methods

2.1. Participants

The participants in this research were 406 students enrolled in two subjects: Foundations in Psychology for Attention to Diversity (FPAD) in the degree course for Primary and Early Childhood Education and Developmental Psychology (DP) in the degree course for Teacher of Primary Education. There were 314 students enrolled in the former and 92 in the latter. Furthermore, there were five professors included in the teaching group, four from the former subject and one from the latter subject.

Students who did not complete the self-assessment questionnaire were excluded from the total number of participants, as were those who completed it but did not sit the exam. As a result, the final sample in this study consisted of 316 students across the two subjects.

2.2. Instruments

The self-assessment questionnaire consisted of 100 questions to evaluate knowledge of the subject. The questions were of two types: 90 multiple-choice and 10 short-answer questions. The maximum score that could be obtained was ten marks, and in order to pass the test, it was necessary to obtain five marks. Only one attempt was allowed per student.

This questionnaire was completed on the Moodle platform. There were five questions per page, with free browsing between pages, and the answer options were randomly ordered. Immediate feedback was given to the student.
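As a concrete illustration of the scoring rule described above (100 items, a maximum of ten marks, a pass mark of five), the minimal sketch below maps per-item results to the final grade. The function name and data layout are hypothetical; this is not the actual Moodle grading code.

```python
# Minimal sketch, assuming per-item results are available as booleans.
# This illustrates the scoring rule only; it is not Moodle's implementation.

def grade_quiz(item_correct, max_score=10.0, pass_mark=5.0):
    """Map per-item results (True = correct) to a 0-10 score and a pass flag."""
    score = sum(item_correct) / len(item_correct) * max_score
    return {"score": round(score, 2), "passed": score >= pass_mark}

# Example: 73 of the 100 items answered correctly -> score 7.3, passed
print(grade_quiz([True] * 73 + [False] * 27))
```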

The satisfaction scale consisted of 10 Likert-type questions to assess the students’ level of satisfaction with regard to appropriateness, level of difficulty, etc. Each question offered four answer options: 0 corresponded to Totally Disagree; 1 to Disagree; 2 to Neither Agree Nor Disagree; and 3 to Agree. The questionnaire was made available through Google Forms once the self-assessment questionnaire had been completed.
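For illustration, the per-item means on the 0–3 Likert scale could be computed from the exported responses as in the sketch below; the response matrix shown is invented and does not correspond to the study data.

```python
# Hedged sketch: mean satisfaction per item on the 0-3 Likert scale
# (0 = Totally Disagree ... 3 = Agree). The matrix is invented for illustration.
import numpy as np

responses = np.array([            # rows = students, columns = the 10 items
    [3, 2, 3, 1, 3, 2, 3, 3, 1, 2],
    [2, 3, 2, 2, 3, 3, 2, 3, 2, 3],
    [3, 3, 3, 2, 2, 3, 3, 2, 2, 3],
])
for i, m in enumerate(responses.mean(axis=0), start=1):
    print(f"Item {i}: mean = {m:.2f}")
```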

2.3. Procedure

A first meeting took place among the professors to determine the content and the number of questions to be included in the self-assessment questionnaire. It was decided to create a definitive bank of 100 questions, for which each of the teachers of the subject FPAD proposed an initial list. The number of multiple-choice questions was also established, as well as how many “fill in the blank” questions would be included, with the majority being multiple choice with four alternative answers, given that the final exam for the year follows this format. This decision was taken for both subjects, given that this is what is outlined in their teaching guides. It was also determined that the questionnaire would be visible to the students 15 days before the date of the exam and would be closed one day before the exam, in order to prevent students from completing it without having studied beforehand. The access dates for the questionnaire were conditioned by the exam dates, since these dates vary between groups. In addition, parameters were established with regard to the timing and management of the questionnaire (grading, number of attempts, and type of feedback) for both subjects. Regarding grading, one point was given if the answer was correct and zero points if incorrect. With regard to time, it was stipulated that students would be given a maximum of 120 minutes to answer the questionnaire. They could only answer once, so if students responded incorrectly, they had to work out the correct option, which implied searching for information. In this way, on finishing the questionnaire, the system gave the student a final grade on their performance.
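The opening window, time limit, and attempt settings described above can be summarised in a small configuration sketch. The dictionary layout and the example exam date are assumptions made for illustration; they are not an export of the actual Moodle settings.

```python
# Illustrative sketch of the quiz parameters described in the procedure.
from datetime import date, timedelta

def quiz_window(exam_date):
    return {
        "opens": exam_date - timedelta(days=15),   # visible 15 days before the exam
        "closes": exam_date - timedelta(days=1),   # closed one day before the exam
        "time_limit_minutes": 120,                 # maximum time allowed
        "attempts_allowed": 1,                     # a single attempt per student
        "feedback": "immediate",                   # shown on finishing the attempt
    }

print(quiz_window(date(2024, 1, 20)))              # hypothetical exam date
```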

In Figure 1, an example is shown of two questions of different types that formed part of the questionnaire for the subject FPAD.

In the case of the subject FPAD, in order to reach an agreement on the content of the questions and answers, an initial bank of 200 questions was compiled. The procedure used to reach this agreement was as follows: a spreadsheet was created which included the number of each question and the name of each professor, who were required to mark with an X those questions they considered should form part of the final questionnaire. The criteria for including a question were that its content had been covered in class and that its formulation was clear and coherent. The professors carried out this task individually and, once it was completed, sent the spreadsheet to the person responsible for the project, who combined the four spreadsheets and selected, from the total number of questions, the 100 on which all had agreed. In this way, a consensus was reached on the definitive self-assessment questionnaire.
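A minimal sketch of this consensus step, assuming each professor's selection is represented as a set of question identifiers (the IDs and choices shown are invented), keeps only the questions marked by all four professors:

```python
# Sketch of the agreement step: keep only questions marked by every professor.
professor_choices = {
    "prof_A": {1, 2, 3, 5, 8},
    "prof_B": {1, 2, 3, 5, 9},
    "prof_C": {1, 2, 3, 5, 8, 9},
    "prof_D": {1, 2, 3, 5},
}

agreed = set.intersection(*professor_choices.values())   # questions everyone marked
print(sorted(agreed))                                     # -> [1, 2, 3, 5]
```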

The questions in this bank were grouped and organized according to the topics included in the subject. In the questionnaire, however, they were presented in random order, so that even if two students completed the questionnaire at the same time, the order in which the questions appeared was different.
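This per-student random ordering can be illustrated with a seeded shuffle; seeding by a hypothetical student identifier only makes the example reproducible and is not how Moodle implements its shuffling.

```python
# Sketch: a different (but repeatable, for the example) question order per student.
import random

def presentation_order(question_ids, student_id):
    rng = random.Random(student_id)   # hypothetical seeding for reproducibility
    order = list(question_ids)
    rng.shuffle(order)
    return order

print(presentation_order(range(1, 11), student_id=101))
print(presentation_order(range(1, 11), student_id=102))   # a different order
```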

Following this, on the Moodle platform for the subject, each professor created the definitive questionnaire and set the parameters, that is to say, the timing, the grading, the number of attempts, and the type of feedback. In the case of the subject DP, the professor followed the same procedure as for the FPAD subject.

Furthermore, two of the professors developed a satisfaction scale for this methodology, consisting of ten Likert-type questions. Once this was concluded, it was sent by e-mail to all the other colleagues so that they could make suggestions and appraisals. Once the whole group had approved the satisfaction scale, one of the professors took on the responsibility of creating it on Google Forms and of sending the corresponding link to the other professors involved in the project so that they could upload it to the Moodle platform without leaving it visible to students. As with the self-assessment questionnaire, the satisfaction scale was made visible to the students 15 days before the date of the exam, but, in this case, it was left open for a few days after the exam in case students had not completed it. The link to the questionnaire remains open and is as follows: https://docs.google.com/forms/d/e/1FAIpQLSejczvip_-hBRh1ldZ9UpYD7MZU4wC3ZYNmbpPzsMrsqeqTAg/viewform.

2.4. Data Analysis

The design of this research is quasi-experimental with a single group. The statistical analyses were carried out with SPSS version 20.0 for Mac and with G*Power 3.1. With the former, the correlation between the score obtained on the questionnaire and the score obtained in the subject exam was calculated. In addition, Student’s t test for dependent samples was used to determine whether there were statistically significant differences between the two variables, and an analysis of variance (ANOVA) was carried out to establish any possible differences in the evolution of the scores. A post hoc calculation of Cohen’s effect size (d) was also carried out with G*Power 3.1 to evaluate the effectiveness of the innovation proposal and to compensate for the lack of a control group.
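As an illustrative sketch of this analysis pipeline, the same steps (Pearson correlation, dependent-samples t test, Cohen's d, and a one-way ANOVA across grade bands) could be reproduced with SciPy in place of SPSS and G*Power. The score arrays and the grade-band cut-offs below are assumptions for illustration only; they are not the study data.

```python
# Hedged sketch of the analyses described above, run on invented data.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
quiz = rng.uniform(0, 10, size=316)                            # questionnaire scores (invented)
exam = np.clip(quiz + rng.normal(1.5, 2.0, size=316), 0, 10)   # exam scores (invented)

r, p_r = stats.pearsonr(quiz, exam)      # correlation between questionnaire and exam
t, p_t = stats.ttest_rel(exam, quiz)     # Student's t test for dependent samples
diff = exam - quiz
d = diff.mean() / diff.std(ddof=1)       # Cohen's d for paired data

# One-way ANOVA: exam score as a function of the grade band on the questionnaire.
# Assumed cut-offs: Fail < 5, Pass 5-6.9, Distinction 7-8.9, High Distinction >= 9.
bands = np.digitize(quiz, [5, 7, 9])
groups = [exam[bands == b] for b in np.unique(bands)]
f, p_f = stats.f_oneway(*groups)

print(f"r = {r:.3f}, t = {t:.3f}, d = {d:.3f}, F = {f:.3f}")
```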

3. Results

The results of this research correspond, firstly, to the responses to the self-assessment questionnaire and, secondly, to the satisfaction scale.

3.1. Self-Assessment Questionnaire

Table 1 shows the total percentage of students who responded to the self-assessment questionnaire, differentiated by subject and by degree programme, the latter divided into groups. It also shows the mean response percentage (M). The total percentage of participation is 67.24%, which varies slightly according to subject, being slightly higher in the subject DP (69.14% in DP and 65.68% in FPAD). There are also differences with regard to the degree course: participation in the Degree in Primary Education is higher than in the Early Childhood Education Degree (68.19% compared to 64.12%). In the subject FPAD, there are also differences between groups.

Figure 2 shows the percentage of students who passed both the exam and the questionnaire. This percentage is obtained by adding the number of passes, distinctions, and high distinctions from both assessments.

Of the total sample, 1.91% of the students did not complete the questionnaire and 1.72% did not sit the exam. As can be seen in Figure 2, the percentage of students who passed both the questionnaire and the exam is higher than the percentage of those who did not. The percentage of those who obtained at least a passing grade is greater on the exam (almost 76%) than on the questionnaire (almost 73%). A positive correlation (r = 0.343) was found between the scores obtained by the subjects on the questionnaire and on the exam (M = 4.77, SD = 2.738 on the questionnaire; M = 6.24, SD = 1.714 on the exam). Statistically significant differences in means were also found between the two variables (t = −6.866), with a medium effect size (d = 0.474) and an observed power of 0.952.

Table 2 shows the percentage of students according to scores, both on the exam and the questionnaire.

Table 2 also shows that the number of Distinctions and High Distinctions increased significantly and, at the same time, the number of Passes and Fails diminished in the final examination. To confirm whether there were statistically significant differences between the scores achieved by the subjects on the questionnaire and on the exam, considering the different grades, an analysis of variance (ANOVA) was carried out taking the grade obtained in the exam as the dependent variable. Statistically significant differences were found (F(3, 312) = 18.468). The Scheffé post hoc test showed that significant differences were maintained between the questionnaire and the exam for the grades Fail and Distinction, Fail and High Distinction, Pass and Distinction, and Pass and High Distinction.

Figure 3 shows these changes in tendency in the different grades obtained by the students.

One of the most notable changes is that 70% of the subjects who failed the questionnaire passed the exam, and 55% of those who had obtained a Pass on the questionnaire reached a Distinction in the exam. It should also be mentioned that a very small percentage of subjects (4.9%) who obtained a Distinction or High Distinction on the questionnaire failed the exam.

3.2. Scale of Satisfaction

After responding to the questionnaire, students were asked to complete a scale of satisfaction with regard to the evaluation of the self-assessment methodology.

Figure 4 shows the mean score for all the students who responded to this scale of satisfaction.

As can be observed in Figure 4, the mean scores given by the students on the satisfaction scale are high, with every item scoring above two. The items with the lowest satisfaction are questions four and nine, which obtain lower means than any other item on the scale, given that two subjects indicated that they disagreed that the guidelines had helped them to control their anxiety.

4. Discussion

One of the greatest challenges for professors in the process of European convergence towards the European Higher Education Area has been, and still is, changing certain teaching habits and routines. An attempt has been made to encourage a more meaningful process of change, among other things, in the methodological strategies of assessment. As opposed to the traditional paradigm, in which the professor was responsible for delivering lectures and assessing whether the students had acquired the concepts and contents explained in those expository classes (assessment of learning), the focus is now centred on the students, who must assume responsibility for organizing and developing their academic work, as well as evaluating their achievements; in short, for developing their autonomous learning [3, 19, 24, 30, 38, 39].

Of the three ways in which students can participate in their assessment process, this research focuses on self-assessment and, specifically, on autonomous e-assessment, with the incorporation of ICT into this process. Although a great deal of research has shown the benefits of self-assessment, given that it allows students to judge their own progress with respect to a certain task and to reflect on the level of control achieved in this learning, that is to say, to self-regulate their own learning process [7, 8, 10, 11, 13, 16–18, 26–28], few studies incorporate ICT tools, such as Moodle quizzes or grading scales, as a criterion for developing assessment judgements of their own performance [27, 30, 32, 34, 39]. For this reason, an innovative teaching proposal was carried out in the assessment of content, creating a self-assessment module in the two subjects from the degree courses in Primary and Early Childhood Education and using the Moodle platform to foster autonomous learning, with the aim of improving student performance and increasing the quality of teaching.

With regard to the general objective put forward, the results indicate that e-self-assessment has improved the students’ general performance, if we take into account their scores on the self-assessment questionnaire and on the exam, where the correlation has a high level of significance. Furthermore, students improved their numerical scores on the exam with respect to their scores on the questionnaire. These results are in line with Ibabe and Jaureguizar (2007) [39], who obtained a statistically significant correlation between the self-assessment and exam scores of 82 participants. In addition, this investigation [39] and others [26] found that such questionnaires are a tool which adequately predicts the final grade in the subject. Thus, it is considered that e-self-assessment could favour the development of critical thinking and lead to self-regulation of learning [13, 16, 18–20]. Therefore, e-self-assessment could be considered a dimension of sustainable assessment, since it meets some of its key features [24]. This would suggest the need to promote both self-assessment [40] and e-self-assessment in Higher Education, since this may contribute to producing individuals who, once they have finished their training in the formal education system, can act as active and autonomous learners.

Regarding the secondary objectives, the study shows that e-self-assessment has encouraged students’ autonomous work and participation. The self-assessment questionnaire was designed so that students could respond only once, with immediate feedback as well as feedforward. If students responded incorrectly, they had to work out the correct option, which implies finding information (autonomous work). The improved scores obtained in the final exam with respect to the questionnaire reflect that individual effort. According to Knight [40], feedback, and above all feedforward, has great power to stimulate learning. While feedback encompasses comments on the quality of the task carried out, feedforward includes information intended to help students complete similar tasks more adequately in the future, as part of sustainable assessment [24]. Thus, e-self-assessment could be considered a reflective strategy in the learning process and, like self-assessment, help bridge the gap between assessment and learning to ensure long-term learning after completing university studies [24].

Therefore, it appears that e-self-assessment could be considered an educational tool that encourages autonomy in the teaching and learning process and informs students of their performance throughout that process. In this way, it could improve academic performance [7–9, 24], and it increases the types of interaction (professor-student; student-student).

A high percentage of student participation was achieved, which constitutes a strong point of this study. Almost 70% of enrolled students completed the questionnaire, which reveals an interest in testing their knowledge before the final exam in the subject and also brings their processes of metacognition and autonomy into play [12], especially as they knew that none of the questions in the questionnaire would be repeated in the exam. This percentage is considered high compared with the results obtained by Rodríguez et al. [41], where the final percentage was 58.5%. However, future research should aim for a higher level of participation.

One possible explanation as to why student participation was not 100% could be that, on the one hand, students did not have a good understanding of the benefits of e-self-assessment for their learning process (some said that they did not complete the questionnaire because they considered it a waste of time) or, on the other hand, that the availability of this tool was not sufficiently disseminated, even though the professors had informed students in the classroom about its existence and advantages. In this sense, Gil and Padilla [22], within their list of recommendations for adopting self-assessment and e-self-assessment practices in the context of higher education, highlight that if students do not comprehend the criteria and procedures of the test, if the important role it plays in learning is not made clear, and if students’ motivation is not maintained through feedback, then this practice may not be successful.

On future occasions, it will be necessary to take more care with these aspects and to emphasize the use of this methodology as a form of active participation in the teaching and learning process. One possible way of increasing student participation would be to prepare questionnaires for every two or three topics covered in the classroom, rather than just one final questionnaire. This measure would make students aware of the e-self-assessment tool and lead them to feel obliged to use it, which would, in turn, mean greater involvement of the students and would favour dialogue with, and questions directed at, their professors. Boud and Falchikov [4] consider that more active involvement of students, not only in the processes and activities of teaching and learning but also in their own assessment processes, is one of the fundamental directions in which innovations are being introduced in the field of the assessment of university learning.

Another secondary objective of this research was to increase coordination and collaboration among teachers, given that, according to Krichesky and Murillo [42], collaboration is very difficult to achieve in the Higher Education environment. The characteristics of this research required several meetings, numerous e-mail exchanges, and countless phone calls, both to agree on the questions of the questionnaire and to design the satisfaction scale. Without this study, contact among the teachers would have been limited or even nonexistent. In addition, the teaching group commented that this is an improvement strategy that may have had an impact on teaching quality and that it was considered motivating and attractive by everyone. This investigation supports the idea that teacher collaboration encourages processes of innovation at the same time as it improves student performance [42, 43].

With regard to the last secondary objective, promoting teaching innovation through innovative tools, the high participation of students indicates that it was partially achieved. The scores on the satisfaction scale show that the students found the educational tool highly satisfactory. The mean of the 10 items on the satisfaction scale is high, which shows that this methodology was considered useful by students. These data suggest that students have understood e-self-assessment as forming part of the learning process and that it has led them to the construction of knowledge in a virtual environment [6, 21]. All of this must be understood as a strength of this methodology. The fact that it offers immediate feedback on their responses was also rated very positively. This is an improvement over other studies, in which students complained about the poor quality of the feedback given by the questionnaires [26]. This is an essential question in the new understanding of assessment as a learning process [6, 44], which leads students to reflect on learning, to make judgements, and to direct their learning in a more autonomous way. However, the score obtained on question four (Has this questionnaire been useful in determining the amount of knowledge you have of the subject?) is the lowest on the entire scale, which suggests that some students made an external causal attribution for their grade on the questionnaire and did not take into account everything they knew about the assessed content. Another possible explanation is that they completed the questionnaire before studying the subject, just to try their luck.

5. Conclusions

In conclusion, it may be said that self-assessment through virtual environments, or e-self-assessment, is not only possible but also recommendable and beneficial, given that it improves students’ academic performance and activates processes of metacognition through the use of new technologies. In addition, in this case, it has indirectly encouraged collaboration among professors, which constitutes a tool for improving teaching, and it has been shown to increase student satisfaction with this innovative methodology. Therefore, e-self-assessment, as a formative dimension of assessment, acquires a strong value in the teaching and learning process, and it is also confirmed that the use of questionnaires as self-assessment tools in virtual environments (e-self-assessment) is effective in improving academic performance.

Data Availability

The survey data used to support the findings of this study are available from the first author upon reasonable request.

Conflicts of Interest

The authors declare that there are no conflicts of interest regarding the publication of this paper.

Acknowledgments

This research was supported by the Plan of Support for the Dissemination and Promotion of the Teaching Innovation Activities of the University of Oviedo.