Theoretical Background

Learning with hypermedia requires learners to continuously regulate their learning (Opfermann et al. 2013). Most students, however, do not do this extensively (Azevedo 2005; Bannert 2007). Therefore, instructional aids, like prompts, are used to instigate learners’ self-regulation and have been shown to foster learning in terms of strategy use and learning outcomes (Berthold et al. 2007; Zheng 2016). Since self-regulated learning also encompasses motivation as a central component, the aim of this study is threefold. First, the study aims at replicating the effects of prompts on self-reported strategy use and learning outcomes in a hypermedia learning environment. Second, we extend the previous literature by focusing on the effects of prompts on the motivational variable self-efficacy. We expect that prompting activates self-regulated learning processes associated with higher (perceived) goal achievement, a major factor influencing self-efficacy. Thus, prompting should affect learners’ perceived self-efficacy regarding the mastery of the learning task. We then shift the focus from viewing self-efficacy as a variable that is affected by prompts towards a theoretically alternative conceptualization: an aptitude-treatment interaction, which regards learners’ self-efficacy during learning as a variable that moderates the effectiveness of the prompts. Thus, we finally investigate whether learners’ perceived self-efficacy during learning moderates the prompting effects on learning outcomes and self-reported strategy use. The rationale behind our assumptions is explained in the following.

Challenges of Learning with Hypermedia

Hypermedia is a form of computer-based learning environment that is characterized by electronic hyperlinks. These hyperlinks interconnect related pages or nodes (e.g., texts, pictures, tables) of a computer-based learning environment, giving them a network-like structure (Dillon and Jobst 2005; Scheiter and Gerjets 2007). Due to this structure, hypermedia offers learners various ways of navigating and integrating information. This contrasts with traditional information sources like books, where the learning content is pre-structured linearly through chapters or units. Because of the network-like structure of hypermedia, learners can pursue their own learning path. Consequently, learners need to constantly plan their navigation, organize the learning content, and monitor, evaluate, and adapt the learning process to achieve the desired goal. In sum, learning with hypermedia requires enactive and continuous self-regulation from learners (Opfermann et al. 2013; Schwonke 2015; Shapiro and Niederhauser 2004).

Self-Regulated Learning

Self-regulated learning refers to learners’ “self-generated thoughts, feelings, and actions that are planned and cyclically adapted to the attainment of personal goals.” (Zimmerman 2005, p. 14). There are various models of self-regulated learning. Prominent models focus either on its components (e.g., Boekaerts 1999) or describe how the different components of self-regulation interact in self-regulated learning processes (e.g., Schmitz and Wiese 2006; Winne and Hadwin 1998; Zimmerman 2005). Based on Bandura’s social-cognitive theory (Bandura 1986), Zimmerman (2005) describes three phases in his process model of self-regulation: forethought, performance, and self-reflection. During forethought, task analysis as well as strategic selection and planning of learning strategies take place. Further, learners’ motivation (e.g., self-efficacy) is evoked and influences the selection of cognitive and metacognitive learning strategies (Greene et al. 2004; Liem et al. 2008). Cognitive strategies (e.g., organization, elaboration) deal with the learning content and serve knowledge construction (i.e., identifying and relating relevant pieces of information to each other and to one’s prior knowledge). Metacognitive learning strategies (e.g., monitoring the learning behaviour) are concerned with the knowledge construction process and set the current state of knowledge in relation to the desired goal state. Both cognitive and metacognitive strategies are applied during the performance phase. Eventually, these processes are evaluated and attributed to a source during self-reflection. Assuming cyclicality, these phases may repeat themselves within a learning process but may as well occur across different learning processes and situations. Thus, prior learning experiences may affect future ones within and across situations.

In sum, self-regulated learning is an important predictor of various measures of learning success (Panadero 2017; Zimmerman 1990). Across learning situations, self-regulated learning was found to improve strategy use, learning outcomes, and motivation (e.g., Dignath and Büttner 2008; Dignath et al. 2008; Donker et al. 2014). Yet, despite these positive effects of self-regulation, learners very often do not recall or apply self-regulation strategies spontaneously (Azevedo 2005; Bannert 2007).

Prompts as an Instructional Method to Foster Self-Regulated Learning

When learners already know self-regulation strategies (e.g., cognitive and metacognitive learning strategies) but do not execute them while learning, prompts are an instructional method to activate self-regulation. Prompts are instructional aids that incite productive learning processes (Berthold et al. 2007). They indicate to learners, through questions or hints, when and which self-regulation strategies to use while learning in order to perform effectively (e.g., Thillmann et al. 2009; Helsdingen et al. 2011). Thus, they can be perceived as strategy activators (Reigeluth and Stein 1983). Cognitive and metacognitive prompts, as they are used in this study, are expected to activate learners to use specific cognitive and metacognitive learning strategies. Cognitive prompts encourage learners to identify the main ideas and their interrelations or to connect the new information to their prior knowledge (Mayer 1984; Weinstein and Mayer 1986). Metacognitive prompts aid learners in monitoring their cognitive processes. By doing so, knowledge gaps and comprehension problems are identified and illusions of knowing are reduced. In sum, a deeper understanding and productive processing of the learning contents, as well as their retention, are supported (Barnett et al. 1981; McCrindle and Christensen 1995).

In various learning situations, cognitive and metacognitive prompts were shown to increase learners’ strategy use (e.g., Bannert and Reimann 2012; Bannert et al. 2015; Berthold et al. 2007; Kramarski and Friedman 2014; Nückles et al. 2010; Nückles et al. 2009; Schmidt et al. 2012). These prompted processes often result in improved learning outcomes (Berthold et al. 2007; Kramarski and Friedman 2014; Müller and Seufert 2018). However, even though the positive effect of cognitive and/or metacognitive prompts on learning outcomes was shown in various studies, there are also studies that could not replicate the direct effect on learning outcomes or strategy use (e.g., Bannert and Reimann 2012; Sitzmann and Ely 2010). Sitzmann and Ely (2010) found that the effects of prompts on learning outcomes were mediated via time-on-task (i.e., effort and persistence). Nevertheless, in a meta-analysis, Zheng (2016) found a positive medium-sized effect of scaffolds, under which prompts can be subsumed, on learning outcomes.

Besides the well-established effects of prompts on cognitive outcome variables, there are also studies investigating the effects of prompts on motivation and affect (Gidalevich and Kramarski 2018; Lehmann et al. 2014; Nückles et al. 2010). Lehmann et al. (2014) found that including prompts to activate specific planning and preparation strategies in a hypermedia environment, compared to a learning environment without prompts, increased learners’ positive activation. Adding personal-utility prompts, compared to solely cognitive and metacognitive prompts, increased learners’ short- and long-term topic interest (Schmidt et al. 2012; Wäschle et al. 2015). Moos and Azevedo (2008) suggested that instructional scaffolds, like prompts, may also affect learners’ self-efficacy. Since self-efficacy is an important source of learning motivation and (perceived) goal achievement (Wäschle et al. 2014; Williams and Williams 2010), investigating the effects of prompts on self-efficacy appears worthwhile.

Self-Efficacy

Self-efficacy refers to learners’ “judgements of their capabilities to organize and execute courses of action required to attain designated types of performances.” (Bandura 1986, p. 391). It is a central component of forethought in self-regulated learning models including Zimmerman’s (Zimmerman 2005; Schmitz and Wiese 2006). When being confronted with a learning task, learners estimate whether they will be able to perform successfully, i.e., they make assumptions about their self-efficacy regarding the mastery of the learning task. Perceiving oneself as highly self-efficacious increases the motivation to engage in the learning task (Bråten et al. 2004; Liem et al. 2008). Consequently, in the performance phase, deep level learning strategies will be selected that are anticipated to lead to the desired learning goal (e.g., Diseth 2011; Bråten et al. 2004; Liem et al. 2008). Further, learners with high perceptions of self-efficacy will show more persistence and effort when being confronted with obstacles during learning (Greene et al. 2004; Schunk and Ertmer 2005). Finally, these learners will experience high degrees of goal achievement when reflecting upon their achievement. Such experiences of mastery achievement in turn represent a powerful source of self-efficacy (Bandura 1997). Mastery experiences can occur during enactive self-regulation and/or be experienced through its outcomes. Experiencing previous learning processes as positive and as accomplishments will heighten learners’ self-efficacy. Thus, experiencing high degrees of goal achievement following a productive learning experience will heighten learners’ self-efficacy (Wäschle et al. 2014; Williams and Williams 2010). In the next learning cycle, this increased self-efficacy will positively influence learners’ motivation, strategy use, and learning outcome. On the other hand, negative learning experiences like failure will lower learners’ self-efficacy in future learning situations. Based on such learning experiences, self-efficacy perceptions develop over time (Bandura 1997; Chen and Usher 2013; Usher and Pajares 2008).

In their meta-analysis, Sitzmann and Yeo (2013) found a moderate effect of self-efficacy on learning outcomes and a moderate to strong effect of learning outcomes on self-efficacy. Yet, not only learning outcomes may serve as an indicator for learners’ evaluation of their goal achievement; increased learning outcomes may also be a consequence of a productive learning process. Wäschle et al. (2014) showed that the frequency of learning strategies like organization and elaboration predicted learners’ perceived goal achievement. In sum, although self-efficacy is a variable of forethought, it affects the entire self-regulation process and is, in turn, affected by it. Thus, in the following we discuss two alternative perspectives on the role of self-efficacy during learning: the first perspective regards self-efficacy as an outcome of the learning process and consequently as an effect of using prompts. The second perspective proposes that self-efficacy affects the use of prompts and thus the learning process.

Effects of Prompts on Self-Efficacy

Why should prompts affect learners’ self-efficacy? Based on the definition of prompts as strategy activators, prompts are expected to help learners engage in productive learning processes through the activation of cognitive and metacognitive strategies. Comparable to the learning processes of learners with high self-efficacy, these productive learning processes initiated by prompts often lead to higher goal achievement indicated by better learning outcomes (e.g., Berthold et al. 2007). Additionally, productive learning processes were also associated with higher perceived goal achievement (Wäschle et al. 2014). Since both better learning outcomes and higher degrees of goal achievement were shown to positively affect learners’ self-efficacy (e.g., Wäschle et al. 2014; Williams and Williams 2010), prompting should influence learners’ self-efficacy. In line with this, Müller and Seufert (2018) found that including prompts in a hypermedia environment increased learners’ self-efficacy beliefs across learning situations. Thus, the effects of prompts on learners’ self-efficacy need to be investigated in more detail.

Role of Self-Efficacy for the Effectiveness of Prompts

Why should self-efficacy affect the effect of prompts? A theoretically alternative conceptualization comes from the field of instructional design. There, the importance of considering learner characteristics, such as prior knowledge, has given rise to a large number of so-called aptitude-treatment interaction (ATI) studies (Kalyuga et al. 2003). ATI studies propose that instructional features are most effective for learners with certain characteristics. A classic example is the expertise reversal effect, which refers to the phenomenon that instructional support is effective for novice learners (i.e., learners with low prior knowledge) but hinders learning for expert learners (i.e., learners with high prior knowledge). In line with this, Roelle and Berthold (2013) found that prompts were effective for novice learners but hampered post-test performance for learners with high prior knowledge. Pieger and Bannert (2018) showed that learners with low verbal intelligence and low reading competence profited from prompts, whereas higher skilled learners’ post-test performance did not profit from the prompts. Although there is a large body of research on ATI, motivational or emotional aptitudes have been largely neglected in ATI research (e.g., Astleitner and Koller 2006). Yet, studies indicated that motivational aptitudes also influence the effectiveness of instructional support in computer-based learning environments (e.g., Astleitner and Koller 2006; Juarez Collazo et al. 2012).

Moreover, while ATI studies typically consider learner characteristics that are assumed to be static over the whole learning process (e.g., prior knowledge), a number of self-regulated learning processes can be assumed to be more state-like. Motivational variables in fact do change over the course of the learning process (e.g., Endler et al. 2012; Bernacki et al. 2015). As learners’ motivation (e.g., self-efficacy) may change as a consequence of enactive self-regulation, (perceived) goal achievement, mastery, or failure, this poses a challenge for ATI research. Thus, research is needed that takes learner characteristics into consideration that are assessed during learning, as they may influence the way instructional features are processed.

Further, assessing learner characteristics specific to a learning situation during learning may yield a more sophisticated picture, since learners have not only formed assumptions about the learning task (e.g., its difficulty) but have also had the chance to interact with the learning material and situation. On the theoretical basis of the ATI literature, the effect of prompts might depend on learners’ perceived self-efficacy during the learning process. Learners experiencing high self-efficacy while learning might profit from prompts since these support them in achieving their desired goal. On the other hand, learners experiencing low self-efficacy while learning might need more explicit instructions than prompts alone to manage the challenging learning task. An alternative explanation might be that learners who perceive themselves as highly self-efficacious during learning feel that the prompts distract them from executing their own plans. Supporting this assumption, Juarez Collazo et al. (2012) found that higher self-efficacy was associated with less tool use in computer-based learning. Moreover, self-efficacy was found to moderate the effectiveness of tool use (Juarez Collazo et al. 2014). Learners experiencing low self-efficacy during learning, however, might see prompts as helpful for performing effectively. Thus, it is important to consider learners’ self-efficacy beliefs while learning, as they might affect whether and how instructional aids are used.

Present Study and Hypotheses

Based on the previous literature, self-regulation is important for learning success and can be fostered with prompts. While effects on cognitive outcome variables are well researched, we also wanted to extend the literature on the effects of prompts on self-efficacy. Thus, we designed a study in which learners reported their self-efficacy regarding the mastery of the learning materials immediately before, during, and immediately after learning in a hypermedia environment on empirical research methods. During learning, participants received either a combination of cognitive and metacognitive prompts (prompt group) or learned without prompts (no-prompt group). After learning, participants reported their use of learning strategies and completed a test assessing their learning outcomes (for a description of the procedure see Procedure).

With this study design, we aimed to investigate the following. First, we wanted to replicate the positive effects of prompts on strategy use and learning outcomes. We hypothesized that: (1a) Supporting learners with prompts while learning increases learners’ cognitive and metacognitive strategy use compared to learners who are not supported by prompts. (1b) Supporting learners with prompts while learning increases learners’ learning outcomes compared to learners who are not supported by prompts while learning.

The following hypotheses reflect the two alternative perspectives on the role of self-efficacy during learning with prompts. The first perspective is that prompts can affect learners’ self-efficacy as an outcome. Thus, in our second hypothesis we assumed that (2) supporting learners with prompts while learning increases learners’ self-efficacy compared to learners who do not receive prompts while learning (interaction effect). This means that, analyzing the development of each group’s self-efficacy, we expected (2a) an increase in prompted learners’ self-reported self-efficacy over time, while no such increase occurs in the control group. (2b) When comparing both groups, we further expected differences regarding learners’ self-reported self-efficacy during and after learning.

Based on the theoretical assumption that prompts affect learners’ self-efficacy indirectly via learning strategies, we hypothesized a mediation in a way that (3a) prompts increase learners’ cognitive strategy use, which increases learners’ self-efficacy. We further expected that (3b) prompts affect learners’ metacognitive strategy use positively, which increases learners’ self-efficacy.

The second and contrasting perspective on the role of self-efficacy is based on the ATI literature. We investigated whether learners’ perceived level of self-efficacy during learning influenced the effect of prompts. The following hypotheses were formulated: (4a) Learners’ perceived self-efficacy during learning moderates the relationship between instruction (i.e., the prompts) and self-reported online strategy use. (4b) Learners’ perceived self-efficacy during learning moderates the relationship between instruction (i.e., the prompts) and learning outcomes.

Since learners’ prior knowledge is a strong predictor of learning outcomes (Alexander and Judy 1988), it should be assessed as a control variable (and potential covariate). Further, as this study is concerned with learners’ strategy use and motivation, learners’ motivational traits and general strategy use should be assessed as control variables to rule out that differences in learners’ self-efficacy or strategy use are due to their general disposition. We also assessed learners’ computer-user self-efficacy, i.e., learners’ expectation of performing successfully when using a computer. This variable was assessed to rule out confounding effects due to different expectations regarding performance when using computers.

Methods

Participants and Study Design

N = 70 undergraduate students participated in this study. Students were in their first semester, majoring in psychology (M_age = 19.40 years, SD = 1.60; 90% female). The experiment was based on a between-subjects design with the between-subjects factor group (prompts vs. no prompts). As dependent measures, we assessed learning strategies and learning outcomes (recall, comprehension, transfer). Based on studies by Moos and Azevedo (2008) as well as Müller and Seufert (2018), we extended the design by including a repeated measurement of self-efficacy before, during, and after the learning session. Therefore, for those research questions where we took the development of self-efficacy into account, the research design was mixed (with between- and within-subjects factors). To identify whether possible treatment effects might be a consequence of different learner characteristics, we assessed the following control variables: prior knowledge, learning strategies, online learning strategies, goal orientation, self-concept, and computer-user self-efficacy.

Materials and Instruments

Assessment of Learners’ Characteristics

Learning strategies, online learning strategies, goal orientation, self-concept, and computer-user self-efficacy were assessed with an online questionnaire. We assessed the learning strategies that learners generally use during learning with the German questionnaire LIST (Boerner et al. 2005). The questionnaire included subscales for the use of cognitive (e.g., “I try to create links to the contents of related subjects or courses.”), metacognitive (e.g., “When I am confronted with difficulties, I change my learning strategy.”), and resource management strategies (Cronbach’s α = .80 to α = .87; example item: “I am learning at a place where I can concentrate well on the learning content.”). To have a baseline of learners’ use of learning strategies when learning with computers, online learning strategies were assessed with an adapted version of the general learning strategies questionnaire described above (Boerner et al. 2005). Since learners’ online learning strategies were also assessed after the learning phase, the scale is explained in more detail in section Assessment of Learners’ Self-Reported Online Learning Strategies. Learners’ goal orientation was assessed with the learning and achievement orientation assessment scale (Spinath et al. 2002). The questionnaire was used to measure learning goal orientation (e.g., “In my studies it is important for me to learn as much as possible.”), performance-approach orientation (e.g., “In my studies it is important for me to finish tests better than others.”), performance-avoidance orientation (e.g., “In my studies it is important for me that others do not think I’m stupid.”), and work avoidance (α = .55 to α = .90; example item: “In my studies it is important for me to do as little work as possible.”). The student version of the academic self-concept scales was used to assess learners’ self-concept (subscales: individual self-concept, criteria-based self-concept, social self-concept, absolute self-concept; α = .85 to α = .92; example item: “Learning new things in my field of study” “is very hard for me” [1], “is very easy for me” [7]; Dickhäuser et al. 2002). Learners’ self-efficacy when using the computer was measured with the German version of the computer-user self-efficacy scale (α = .95; example item: “I find working with computers very easy.”; Spannagel and Bescherer 2009; Cassidy and Eachus 2002).

Assessment of Learners’ Prior Knowledge

Learners’ prior knowledge was assessed with a paper-based test. The test assessed knowledge about the domain to be learned (i.e., hypotheses, propositional logic). It consisted of six multiple-choice items (e.g., “Please indicate which type of hypothesis this is: We expect that an increased cortisol level is associated with increased attention.”) and one free-recall question (“Please indicate the symbols that are commonly used in propositional logic to indicate a negation, conjunction, disjunction, implication, and equivalence.”). For the multiple-choice items (cued recall), only one of the given answers was correct. Additionally, there was a predefined solution for the free-recall question. A maximum of 11 points could be achieved in the prior knowledge test. For further analyses, learners’ scores in the prior knowledge test were converted into percentages.

Learning Environment

The hypermedia environment contained four chapters on empirical research methods. An introduction to empirical methods and their applications was given in chapter one. The second chapter dealt with the generation of scientific hypotheses, including the rules of propositional logic and types of hypotheses. In the third chapter, measuring instruments, levels of measurement, and Likert-type scales were explained. The fourth chapter contained information about various research designs. For this study, only the first two chapters were relevant and had to be learned in the learning session. The hypermedia environment contained 23 pages (including a start page and a legal notice). Six of these pages were part of the relevant chapters one and two. A navigation bar on top of each page served as a table of contents; learners could click on it to go to the respective page. Forward and backward buttons were visible at the bottom of each page. Several terms on each page were marked as hyperlinks that connected the different pages in a non-linear way. The hypermedia also contained a notepad and a search function. In this hypermedia learning environment, learners are required to identify the central concepts to be learned, as well as constantly monitor and evaluate their learning progress in order to make navigational decisions. Successful learning with hypermedia thus requires self-regulation and especially metacognitive skills from learners (Opfermann et al. 2013). An example page of the hypermedia environment can be seen in Fig. 1.

Fig. 1 Screenshot of the learning environment on empirical methods

Prompts

A combination of three cognitive and three metacognitive prompts was shown to the learners in the experimental group. We used the combination of cognitive and metacognitive prompts that was shown to activate cognitive and metacognitive activities in Berthold et al. (2007). One of the three cognitive prompts cued organization activities (“How can I best organize the structure of the learning contents?”), while the other two cued elaboration (e.g., “Which examples can I think of that illustrate, confirm, or conflict with the learning contents?”; see Appendix A (Table 8)). All three metacognitive prompts cued monitoring (e.g., “Which main points haven’t I understood well?”; see Appendix A (Table 8)). The six prompts were linked to six specific pages to activate productive processing where it was relevant. Only one prompt was visible per page. The first three relevant pages contained cognitive prompts to activate knowledge construction on the pages containing the fundamentals. The subsequent three relevant pages contained metacognitive prompts. All prompts appeared within a frame at the top of each page, directly below the heading (see Fig. 2). In contrast to time-driven prompting, we assumed that linking prompts to specific pages would be less disruptive for the learning process. Prompts were visible the entire time a learner stayed on a page and every time the learner revisited the page.

Fig. 2 The learning environment including a metacognitive prompt ("Please think about the following: which main points haven't I understood yet?")

Assessment of Learners’ Perceived Self-Efficacy

Self-efficacy regarding the mastery of the learning task was measured with the self-efficacy subscale of the Motivated Strategies for Learning Questionnaire (Pintrich 1991). The scale consisted of eight self-report items which were answered on a seven-point Likert-type scale (1 = not at all true of me, 7 = very true of me). Analogously to Moos and Azevedo (2008), the wording of the eight items was slightly adjusted to match the learning situation, task, and learners’ current self-efficacy state. For instance, the item “I am confident I can learn the basic concepts taught in this course.” was adjusted to “At the moment, I am confident I can learn the basic concepts taught in this e-learning.” Learners answered the eight items three times during the 40-minute learning session: immediately before learning with the hypermedia, 20 minutes into the learning task, and directly after learning for 40 minutes with the hypermedia. Assessing learners’ perceived self-efficacy repeatedly enabled us to examine the variation of self-efficacy within the learning session. Reliabilities of the scales (α_before = .93, α_during = .94, α_after = .94) are comparable to previous research (Pintrich et al. 1993).
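Reliability coefficients of this kind can be reproduced, for example, with the pingouin package; the following is a minimal sketch using simulated responses (item names and data are hypothetical, and this is not necessarily the software used in the original analysis):

```python
import numpy as np
import pandas as pd
import pingouin as pg

rng = np.random.default_rng(0)
# Simulated responses of 70 learners to the eight 7-point items
# at one measurement point (hypothetical item names se_1 ... se_8).
items = pd.DataFrame(rng.integers(1, 8, size=(70, 8)),
                     columns=[f"se_{i}" for i in range(1, 9)])
alpha, ci = pg.cronbach_alpha(data=items)
print(f"Cronbach's alpha = {alpha:.2f}, 95% CI = {ci}")
```

The same computation would be repeated for each of the three measurement points.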

Qualitative Coding of Learning Strategies in Learners’ Notes

Two independent raters, who were blind to participants’ experimental condition, scored the quality of cognitive and metacognitive learning strategies visible in the notes learners took during learning. Two 5-point rating scales, ranging from 1 (very good strategies present) to 5 (insufficient, no strategies present), were used to categorize learners’ notes with respect to the quality of their cognitive and metacognitive learning strategies. A score of 1 on the cognitive scale was given when learners wrote the learning content in their own words and the notes were highly organized and well elaborated (e.g., “specific reference of hypotheses to very specific values (e.g., increase of 10 hours)”). A score of 5 regarding cognitive learning strategies was given if learners did not take notes at all during learning. On the metacognitive strategy scale, a score of 1 was given when learners wrote down a specific plan of their learning activities (e.g., “Time: 40 minutes, [ … ], 10 minutes reading notes”) and specifically monitored their understanding (e.g., “I did not understand example 4, why can the expression not be falsified?”). A score of 5 on the metacognitive strategies scale was given when no planning and monitoring strategies were present in learners’ notes. Inter-rater reliability was determined by the intraclass correlation coefficient for the quality of the cognitive and metacognitive strategies (ICC = .98 and ICC = .93, respectively).
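An intraclass correlation of this kind can be estimated, for instance, with pingouin; the sketch below uses simulated ratings in long format (variable names are hypothetical, not the authors' original code):

```python
import numpy as np
import pandas as pd
import pingouin as pg

rng = np.random.default_rng(1)
quality = rng.integers(1, 6, size=70)            # latent note quality, 1-5
long = pd.DataFrame({
    "note_id": np.repeat(np.arange(70), 2),      # one set of notes per learner
    "rater":   np.tile(["rater_1", "rater_2"], 70),
    "score":   np.clip(np.repeat(quality, 2) + rng.integers(-1, 2, 140), 1, 5),
})
icc = pg.intraclass_corr(data=long, targets="note_id",
                         raters="rater", ratings="score")
print(icc[["Type", "ICC"]])
```

pingouin reports several ICC variants (consistency vs. absolute agreement); which one corresponds to the reported coefficients depends on the rating design, which the text does not specify.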

Time Spent on Relevant Pages

Learners’ navigation behavior during learning was logged as a further measure of the learning process. To identify whether learners focused on the relevant pages, we set the time spent on relevant pages (i.e., time spent on the six pages including prompts) in relation to the total learning time (i.e., 40 minutes). The resulting percentage reports the proportion of time spent on relevant pages during learning.
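As a minimal illustration of this measure, the following sketch derives the percentage from a hypothetical page-visit log; page identifiers and column names are invented:

```python
import pandas as pd

# One row per logged page visit: learner id, page id, dwell time in seconds.
log = pd.DataFrame({
    "learner": [1, 1, 1, 2, 2],
    "page":    ["p03", "p07", "p15", "p03", "p22"],
    "seconds": [600, 900, 300, 1500, 200],
})
RELEVANT_PAGES = {"p03", "p04", "p05", "p06", "p07", "p08"}  # hypothetical ids
TOTAL_SECONDS = 40 * 60  # fixed learning time of 40 minutes

relevant_time = (log[log["page"].isin(RELEVANT_PAGES)]
                 .groupby("learner")["seconds"].sum())
percent_on_relevant = relevant_time / TOTAL_SECONDS * 100
print(percent_on_relevant)  # proportion of learning time on relevant pages
```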

Assessment of Learners’ Self-Reported Online Learning Strategies

Online learning strategies were measured with a short version of the learning strategies questionnaire (Boerner et al. 2005). We used those items from the original version that had the highest factor loadings and described strategies that could also be used in the hypermedia environment (Boerner et al. 2005). Thus, the scale was developed on the same basis as the prompts, incorporating cognitive and metacognitive learning strategies (e.g., Weinstein and Mayer 1986; Wild and Schiefele 1994), and assessed strategy use when learning in computer-based learning environments. The cognitive online learning strategies subscale comprised 13 items on organization, elaboration, critical evaluation, and rehearsal of the material to be learned (e.g., “As a reminder, I made short summaries of the most important contents for myself.”, “I have thought of concrete examples of particular learning contents.”; see Appendix B). The metacognitive online learning strategies subscale comprised 11 items on planning, monitoring, and regulation (e.g., “I asked myself questions about the text to test whether I had understood everything.”, “When I was confronted with difficulties, I changed my learning strategy.”; see Appendix B). Reliabilities of the scales are acceptable (α_cognitive = .73, α_metacognitive = .83) and comparable to those of the original questionnaire (Boerner et al. 2005). Participants reported their agreement with each statement on a six-point Likert-type scale ranging from 1 (not at all true of me) to 6 (very true of me). Online learning strategies were first measured together with all other learner characteristics (see section Assessment of Learners’ Characteristics) to see whether differences regarding learners’ use of learning strategies during computer-based learning existed. Additionally, learners’ self-reported online learning strategies were assessed after the learning session to capture the strategies they used during learning in the hypermedia environment; here, learners were explicitly asked to answer the items with respect to their behavior during the preceding hypermedia learning (“Please answer the following statements based on your learning behavior of the previous learning session”).

Assessment of Learning Outcomes

Learning outcomes were measured with a paper-pencil test after the learning session. The test consisted of 14 questions in various formats (multiple-choice questions, open questions) with a maximum of 25 points. Some items were similar to items in the prior knowledge test, but no questions were identical. Based on Bloom’s (1956) taxonomy, questions were developed that assessed learning outcomes on the levels of recall, comprehension, and transfer. Six questions assessed learners’ recall (e.g., name the four main criteria a scientific hypothesis must meet), with a maximum of 9 points. Comprehension was measured with four questions (e.g., identifying various types of hypotheses), with a maximum of 7.5 points. Learners’ transfer post-test performance was measured with three questions (e.g., use the rules of propositional logic to identify whether a given proposition is true or false), with a maximum of 8.5 points. There was a predefined answer scheme for the test. Additionally, all tests were scored by two independent raters who were blind to learners’ experimental condition (inter-rater reliabilities: κ_recall = .98, κ_comprehension = .96, κ_transfer = .98). For further analyses, the points achieved in the test were converted into percentages.
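Agreement between the two raters can be quantified, for example, with Cohen's kappa over their item-level scores; a minimal sketch with scikit-learn follows (the scores are invented, and the original study may have used a weighted variant to handle partial credit):

```python
from sklearn.metrics import cohen_kappa_score

# Hypothetical point scores of two raters for the same ten recall answers.
rater_1 = [2, 0, 1, 2, 1, 0, 2, 2, 1, 0]
rater_2 = [2, 0, 1, 2, 1, 1, 2, 2, 1, 0]
print(f"kappa = {cohen_kappa_score(rater_1, rater_2):.2f}")
```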

Procedure

The study consisted of two parts: the assessment of learners’ characteristics and the learning session. At the beginning, all participants were informed about the procedure and the contents of the study and gave their consent to participate. Learners’ characteristics in terms of learning strategies, online learning strategies, goal orientation, self-concept, and computer-user self-efficacy (see Assessment of Learners’ Characteristics) were assessed with an online questionnaire. Answering the items regarding learners’ characteristics took about 30 minutes. Having completed this online questionnaire, participants signed up for an on-site hypermedia session. In the session, learners were individually seated at tables equipped with a computer. They were first informed about the experiment and subsequently answered the prior knowledge test (see Assessment of Learners’ Prior Knowledge). Next, learners watched an introductory video on the learning environment, the contents covered, ways to navigate in the environment, the notepad, and the search function. Having watched the introductory video, learners received a written instruction to learn as much as they could about the first two chapters and reported their perceived self-efficacy (see Assessment of Learners’ Perceived Self-Efficacy). Subsequently, learners logged into the hypermedia environment and learned the contents covered in chapters one and two (i.e., propositional logic, types and formulations of scientific hypotheses) for 40 minutes. After learning for 20 minutes, learners answered the questionnaire on self-efficacy. Immediately after learning, learners reported their perceived self-efficacy again, followed by a questionnaire on their online learning strategies (see Assessment of Learners’ Self-Reported Online Learning Strategies). In this questionnaire, learners reported the strategies they had used during the previous learning phase. Last, participants completed a test measuring their learning outcomes (see Assessment of Learning Outcomes). The on-site session lasted 90 minutes. When the study was finished, all participants were compensated for their participation.

Results

When sphericity assumptions were not met, the Greenhouse-Geisser correction was used, indicated by ε. Partial η2 is reported as the effect size. Following Cohen (2009), η2 values < 0.06 are regarded as small effects, values between 0.06 and 0.13 as medium effects, and values > 0.13 as large effects.
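For reference, partial eta squared relates the sum of squares of an effect to that effect plus its associated error term:

```latex
\[
  \eta_p^2 \;=\; \frac{SS_{\mathrm{effect}}}{SS_{\mathrm{effect}} + SS_{\mathrm{error}}}
\]
```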

Learners’ Characteristics

First, learners’ characteristics in both groups were analyzed and compared. Independent t-tests indicated no differences between the experimental and the control group. Table 1 depicts the means and standard deviations of both groups’ learner characteristics.

Table 1 Means (standard deviations) and analyses of differences for learners' characteristics in both groups

Prompting Effect on Learners’ Self-Reported Online Learning Strategies, Quality of Learning Strategies in Learners’ Notes and Time Spent on Relevant Pages

To investigate the effect of prompts on learners’ strategy use (hypothesis 1a), we calculated independent t-tests on all three measures of strategy use: self-reported online learning strategies, quality of cognitive and metacognitive learning strategies in learners’ notes, and time spent on relevant pages. First, we compared the groups (no-prompt vs. prompt) with respect to their self-reported cognitive and metacognitive online learning strategies after the learning session. In contrast to our expectations, there was no statistically significant difference between the groups’ self-reported cognitive online learning strategies, t(63.5) = -1.34, p = .09 (one-tailed). Further, there were no differences regarding the groups’ self-reported metacognitive online learning strategies after the learning session, t(68) = -0.28, p = .39 (one-tailed).
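The fractional degrees of freedom in the first test, t(63.5), suggest a Welch correction for unequal variances. A minimal sketch of such a one-tailed comparison with SciPy could look as follows (group data are simulated; the actual scores are not reproduced here):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
# Simulated mean scale scores (1-6) for 35 learners per group.
no_prompt = rng.uniform(1, 6, size=35)
prompt = rng.uniform(1, 6, size=35)

# Welch's t-test (equal_var=False); alternative="less" tests one-tailed
# whether the no-prompt group scores lower than the prompt group.
t, p = stats.ttest_ind(no_prompt, prompt, equal_var=False, alternative="less")
print(f"t = {t:.2f}, one-tailed p = {p:.3f}")
```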

Second, we analyzed whether the quality of cognitive and metacognitive learning strategies in learners’ notes differed between groups. We first compared the quality of the cognitive learning strategies found in learners’ notes between prompted and non-prompted learners. The analysis indicated that the quality of cognitive learning strategies in learners’ notes did not differ between groups, t(68) = 1.56, p = .06 (one-tailed). Next, we compared both groups (prompt vs. no-prompt) regarding the quality of metacognitive learning strategies expressed in learners’ notes. The analysis indicated that the quality of metacognitive learning strategies in learners’ notes did not differ between groups either, t(68) = 1.62, p = .06 (one-tailed). Against our expectations, the analysis of the quality of cognitive and metacognitive strategies in learners’ notes thus showed no significant differences between groups. Means and standard deviations of the self-reported online learning strategies and of the quality of cognitive and metacognitive strategies in learners’ notes are listed in Table 2.

Table 2 Learners’ self-reported online learning strategies and ratings of the quality of learners’ cognitive and metacognitive learning strategies analysed from learners’ notes

Regarding learners’ navigation behavior, prompted learners spent on average M = 85.70% (SD = 19.69) of their learning time on relevant pages. Learners in the control group spent on average M = 84.87% (SD = 21.65) of their time on relevant pages. An independent t-test indicated no difference between groups, t(67) = -0.17, ns.

Prompting Effects on Learning Outcomes

Following hypothesis (1b), we analyzed the effects of prompts on learning outcomes. As illustrated in Table 1, learners’ prior knowledge in both groups was rather low, and there were no systematic differences between the groups’ prior knowledge. Overall, learning outcomes ranged from 32% to 86% correct answers in the post-test (M = 63.34, SD = 12.66). Analyzed with respect to Bloom’s (1956) taxonomy, learners’ correct answers ranged between 11% and 83% (M = 48.02, SD = 18.68) for recall, between 27% and 100% (M = 65.84, SD = 17.67) for comprehension, and between 29% and 100% (M = 77.36, SD = 12.96) for transfer.

A MANCOVA was calculated using learners’ experimental condition (no-prompts vs. prompts) as the independent variable and the three measures of learning outcomes (recall, comprehension, transfer) as dependent variables. Since learners’ prior knowledge was associated with learning outcomes, prior knowledge was used as a covariate. Following Pillai’s trace, the analysis indicated no systematic differences in learning outcomes (recall, comprehension, transfer) between the groups, V = 0.003, F(3, 65) = 0.063, ns. Means and standard deviations of both groups’ learning outcomes can be seen in Table 3.

Table 3 Learning outcomes in percent (and standard deviations) of both groups.
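A MANCOVA of this form can be approximated, for example, with statsmodels, where adding the covariate to the model formula yields Pillai's trace for the condition effect adjusted for prior knowledge. The sketch below uses simulated data and hypothetical column names; it is not the authors' original analysis code:

```python
import numpy as np
import pandas as pd
from statsmodels.multivariate.manova import MANOVA

rng = np.random.default_rng(3)
df = pd.DataFrame({
    "condition":     np.repeat(["no_prompt", "prompt"], 35),
    "prior":         rng.uniform(0, 100, 70),  # prior knowledge in percent
    "recall":        rng.uniform(0, 100, 70),
    "comprehension": rng.uniform(0, 100, 70),
    "transfer":      rng.uniform(0, 100, 70),
})
# The multivariate test for "condition" (incl. Pillai's trace) is adjusted
# for the covariate because "prior" is part of the design matrix.
mv = MANOVA.from_formula(
    "recall + comprehension + transfer ~ condition + prior", data=df)
print(mv.mv_test())
```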

Development of Learners’ Self-Efficacy

To investigate hypothesis (2) regarding the self-efficacy-fostering effect of prompts, we calculated a mixed ANOVA. Condition (no-prompt vs. prompt) was used as the between-subjects factor, while time (before learning, 20 minutes into learning, immediately after learning) was used as the within-subjects factor. To investigate whether learners’ perceived self-efficacy differed before learning, an independent t-test was calculated; the analysis revealed no significant differences between the groups, t(67) = 0.53, ns. Results of the mixed ANOVA indicated a main effect of time, F(2, 134) = 13.02, p < .001, ηp2 = .16 (large effect), ε = .886, with self-efficacy scores increasing over time. Yet, no main effect of the experimental condition was found, F(1, 67) = 0.69, ns. Additionally, the analysis indicated an interaction effect between time and condition, F(2, 134) = 4.70, p = .01, ηp2 = .07 (medium effect), ε = .886. To investigate the interaction between time and condition further, contrast analyses were conducted comparing the three measurement points. (2a) Self-efficacy increased significantly across the three measurement points in the group of prompted learners, F(2, 66) = 11.91, p < .001, ηp2 = .26, but not in the control group, F(2, 66) = 0.39, ns. Further analyses revealed that prompted learners’ self-efficacy increased from the first measurement point (beginning of the learning session) to the second (during learning), p < .001, while learners’ self-efficacy in the control group did not. From the second (middle of the learning session) to the third measurement point (end of the learning session), self-efficacy did not increase further in either group. The development of learners’ self-efficacy in the experimental and the control group is illustrated in Fig. 3; descriptive data can be seen in Table 4.

Fig. 3 Development of learners' perceived self-efficacy during learning

Table 4 Means (and standard deviations) of learners’ perceived self-efficacy before, during, and after learning.
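A mixed ANOVA with a Greenhouse-Geisser correction of this kind can be computed, for instance, with pingouin; the following sketch uses simulated long-format data with hypothetical column names:

```python
import numpy as np
import pandas as pd
import pingouin as pg

rng = np.random.default_rng(4)
long = pd.DataFrame({
    "id":        np.repeat(np.arange(70), 3),
    "condition": np.repeat(np.repeat(["no_prompt", "prompt"], 35), 3),
    "time":      np.tile(["before", "during", "after"], 70),
    "se":        rng.uniform(1, 7, 210),  # self-efficacy ratings (1-7)
})
# correction=True requests a sphericity check and Greenhouse-Geisser
# corrected p-values for the within-subjects effects.
aov = pg.mixed_anova(data=long, dv="se", within="time", subject="id",
                     between="condition", correction=True)
print(aov)
```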

To further investigate learners’ self-efficacy during and after learning, t-tests were calculated to test whether the groups differed in their perceived self-efficacy (2b). Analyses indicated no differences regarding learners’ self-efficacy 20 minutes into learning, t(68) = -1.21, p = .23, or immediately after learning, t(68) = -1.26, p = .21.

Mediating Effects of Strategy Use on the Relationship between Prompts and Self-Efficacy

Following hypotheses 3a and 3b, we investigated whether prompts influenced learners’ self-efficacy indirectly, mediated through strategy use. We investigated this effect using both self-report and process data (i.e., self-reported online learning strategies and the quality of learning strategies in learners’ notes, respectively). A first step was to analyze whether prompts influenced cognitive and metacognitive strategy use directly. If so, we would analyze whether prompts and strategy use influenced self-efficacy significantly (for a theoretical approach to mediation analyses, see Hayes 2018). For the analysis, all variables were centered.
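In regression terms, this causal-steps logic can be sketched as follows with statsmodels (simulated, centered data; variable names are hypothetical). Note that Hayes (2018) additionally recommends bootstrapping the indirect effect, which plain step-wise regressions do not provide:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(5)
df = pd.DataFrame({
    "prompted": np.repeat([0, 1], 35),    # experimental condition
    "strategy": rng.normal(0, 1, 70),     # centered strategy use (mediator)
    "se":       rng.normal(0, 1, 70),     # centered self-efficacy (outcome)
})
# Step 1: does the condition predict the proposed mediator?
step1 = smf.ols("strategy ~ prompted", data=df).fit()
print(step1.pvalues["prompted"])
# Step 2 (only meaningful if step 1 succeeds): regress the outcome on
# condition and mediator to separate direct and indirect paths.
step2 = smf.ols("se ~ prompted + strategy", data=df).fit()
print(step2.params[["prompted", "strategy"]])
```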

First, we analyzed whether learners’ self-reported cognitive and metacognitive learning strategies were affected by the prompts. Prompting learners while learning influenced neither learners’ self-reported cognitive online strategy use, β = 0.16, p = .18, nor learners’ self-reported metacognitive online strategy use, β = 0.03, p = .78. The prerequisites for a mediation were therefore not met in this case.

Next, we analyzed whether the quality of cognitive and metacognitive learning strategies in learners’ notes was affected by the prompts. Contrary to our hypothesis but congruent with the self-report data, prompting did not influence the quality of cognitive or metacognitive learning strategies in learners’ notes (β = -0.18, p = .12, and β = -0.19, p = .11, respectively). Again, the prerequisites for a mediation analysis were not met. Learners’ experimental condition (prompt vs. no-prompt) thus predicted neither learners’ self-reported online learning strategies nor the quality of learning strategies in learners’ notes.

Self-Efficacy as a Moderator of the Relationship between Prompts and Self-Reported Online Strategy Use

Next, we investigated the hypothesis based on the theoretically alternative concept of aptitude-treatment interactions. We analyzed whether the relationship between prompts and self-reported (cognitive/metacognitive) online strategy use changed depending on learners’ perceived self-efficacy during learning (cf. hypothesis 4a). Therefore, a moderation analysis was calculated. We used condition (no-prompt vs. prompt) as the predictor of learners’ self-reported cognitive online strategy use. Learners’ perceived self-efficacy during learning was used as the moderator. We controlled for the effects of self-reported metacognitive online strategy use, which was associated with self-reported cognitive online strategy use (see Appendix C (Table 9)). In the analysis, variables were centered and the moderator was included stepwise. The overall moderation model was significant, F(4, 65) = 20.44, p < .001, R2 = .56. Against our expectations, the interaction term in the regression analysis (condition × self-efficacy) indicated that perceived self-efficacy during learning did not moderate the relationship between condition and self-reported cognitive online strategy use (p = .06; see Table 5).

Table 5 Learners’ self-reported cognitive online strategy use predicted from their experimental condition and self-efficacy during learning
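Such a moderation model corresponds to an OLS regression containing a condition × moderator interaction term plus the control variable. A minimal sketch with statsmodels, using simulated centered data and hypothetical variable names:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(6)
df = pd.DataFrame({
    "prompted":  np.repeat([0, 1], 35),  # condition (no-prompt vs. prompt)
    "se_during": rng.normal(0, 1, 70),   # centered self-efficacy during learning
    "meta":      rng.normal(0, 1, 70),   # centered metacognitive strategy use (control)
    "cog":       rng.normal(0, 1, 70),   # centered cognitive strategy use (outcome)
})
# "prompted * se_during" expands to both main effects plus the interaction,
# which carries the moderation hypothesis (4a).
model = smf.ols("cog ~ prompted * se_during + meta", data=df).fit()
print(model.summary())
```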

Another moderation analysis was calculated to investigate the effects of learners’ experimental condition on self-reported metacognitive online strategy use depending on learners’ perceived self-efficacy during learning. We controlled for self-reported cognitive online strategy use, which was associated with self-reported metacognitive online strategy use (see Appendix C (Table 9)). In the analysis, variables were standardized and the moderator was included stepwise. The overall moderation model was significant, F(4, 65) = 17.72, p < .001, R2 = .52. However, we had to reject our hypothesis since the interaction term (condition × self-efficacy) did not reach significance. Thus, self-efficacy did not moderate the relationship between learners’ condition and self-reported metacognitive online strategy use (see Table 6).

Table 6 Learners’ self-reported metacognitive online strategy use predicted from their experimental condition and self-efficacy during learning

Self-Efficacy as a Moderator of the Relationship between Prompts and Learning Outcomes

The last hypothesis (4b) proposed that the relationship between learners’ experimental condition and learning outcomes depends on learners’ perceived self-efficacy during learning. Variables were centered, and a moderation analysis (stepwise) was calculated in which the experimental condition (no-prompt vs. prompt), learners’ perceived self-efficacy during learning, and their interaction (condition × self-efficacy) were used to predict learning outcomes. Prior knowledge was included as a covariate. The moderation analysis reached significance, even though the explained variance was low, F(4, 65) = 2.85, p = .03, R2 = .15. As indicated by the significant interaction term (condition × self-efficacy), learners’ perceived self-efficacy during learning moderated the effect of learners’ experimental condition on their learning outcomes (see Table 7).

Table 7 Learning outcomes predicted from learners’ experimental condition and self-efficacy during learning
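The outcome model, including the prior-knowledge covariate, follows the same template; the sketch below additionally probes the interaction at one standard deviation below the mean, at the mean, and one standard deviation above the mean of the centered moderator, mirroring the conditional effects shown in Fig. 4 (simulated data, hypothetical names):

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(7)
df = pd.DataFrame({
    "prompted":  np.repeat([0, 1], 35),
    "se_during": rng.normal(0, 1, 70),   # centered moderator
    "prior":     rng.normal(0, 1, 70),   # centered covariate
})
df["outcome"] = (60 + 5 * df["prompted"] * df["se_during"]
                 + rng.normal(0, 10, 70))  # simulated percent correct
model = smf.ols("outcome ~ prompted * se_during + prior", data=df).fit()

# Simple (conditional) effects of prompting at -1 SD, mean, +1 SD of the
# moderator; with centered data the SD of se_during defines the probe points.
sd = df["se_during"].std()
for level in (-sd, 0.0, sd):
    grid = pd.DataFrame({"prompted": [0, 1],
                         "se_during": [level, level],
                         "prior": [0.0, 0.0]})
    pred = model.predict(grid)
    print(f"se_during = {level:+.2f}: "
          f"prompt effect = {pred.iloc[1] - pred.iloc[0]:.2f}")
```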

To visualize this relationship, the conditional effects of the experimental condition at three levels of self-efficacy (one standard deviation below the mean, at the mean, and one standard deviation above the mean) are depicted in Fig. 4.

Fig. 4 Learning outcomes dependent on learners’ self-efficacy (“-1”: one standard deviation below the mean, “0”: at the mean, and “+1”: one standard deviation above the mean) and experimental condition

Discussion

In a naturalistic learning environment, we investigated the effects of self-regulation prompts on learners’ strategy use (hypothesis 1a), learning outcomes (hypothesis 1b), and the development of learners’ self-efficacy during learning (hypothesis 2). The effect of prompts on self-efficacy was assumed to be mediated by learners’ self-reported online strategy use (hypothesis 3). Assuming interacting effects of prompts and self-efficacy, this study then changed the perspective towards an alternative theoretical concept: based on instructional design theories and especially ATI research, learners’ self-efficacy during learning was assumed to affect the effectiveness of prompts on self-reported online strategy use (hypothesis 4a) and learning outcomes (hypothesis 4b).

In short, our study did not replicate the classic prompting effects on strategy use and learning outcomes (hypotheses 1a and 1b). We found that prompted learners’ self-efficacy increased during learning, while no such increase was visible in the control group (2a). Yet, the increase in prompted learners’ self-efficacy did not result in self-efficacy differences between the groups (2b). The suggested underlying mechanism that prompts increase learners’ self-efficacy via self-reported online learning strategies could not be confirmed (3). However, we found that self-efficacy during learning moderated the effect of prompts on learning outcomes (4b).

Effects of Prompts on Learners’ Strategy Use and Learning Outcomes (Hypotheses 1a and 1b)

We hypothesized that learning with prompts (1a) increases learners’ strategy use and (1b) learning outcomes in contrast to learning without prompts. Against our expectations, no differences between groups regarding learners’ strategy use could be found. All three measures of strategy use, i.e., self-reported online strategy use, quality of learning strategies expressed in learners’ notes, and learners’ navigation behavior, indicated that no systematic differences in strategy use existed between groups. Especially the qualitative analysis of learners’ notes showed that learners used hardly any metacognitive learning strategies. Regarding the use of cognitive learning strategies, the analysis of learners’ notes indicated that organization and elaboration strategies were used only superficially. Likewise, no differences between groups regarding learning outcomes were found.

The missing differences in learning outcomes between the groups might be a result of the absent increase in strategy use in the experimental group: since prompts are generally assumed to activate learners’ strategy use, increased learning outcomes are typically the result of increased strategy use. The present study stands in stark contrast to previous studies on journal writing in which the same cognitive and metacognitive prompts effectively activated cognitive and metacognitive strategy use and fostered learning outcomes (e.g., Berthold et al. 2007; Hübner et al. 2006; Nückles et al. 2009). Apparently, in the learning setting of our study, the cognitive and metacognitive prompts did not successfully direct learners’ attention to cognitive and metacognitive strategy use in a way that led learners to intentionally use these strategies.

Identifying differences and commonalities between the present study and the above-mentioned studies might help to delineate the possibilities and limitations of prompting in stimulating cognitive and metacognitive processes while learning. Regarding participants, no systematic differences can be found: in the study of Berthold et al. (2007) and in our study, only undergraduate students of psychology participated, and in Hübner et al. (2006) and Nückles et al. (2009), participants were undergraduates from various majors. Regarding the learning setting, however, differences can be identified. In the studies of Berthold et al. (2007), Hübner et al. (2006), and Nückles et al. (2009), all learners first watched a lecture video and were subsequently asked to write about the learning contents, receiving either a general instruction or an instruction including prompts. These differences between our study and the above-mentioned studies are analyzed in the following.

First, while in the studies by Berthold et al. (2007), Hübner et al. (2006), and Nückles et al. (2009) the learning contents were pre-structured in the form of a lecture video, learners in our study were free to choose their way through the hypermedia environment. Thus, the learning materials offered different degrees of learner control, which might have resulted in differing cognitive and metacognitive demands for the learners (cf. Azevedo 2005; Schwonke 2015). Based on the low quality of metacognitive learning strategies found in learners’ notes and learners’ rather low level of prior knowledge, it seems that deeply processing the learning contents while also coping with the demands of hypermedia learning was too challenging for the learners.

Second, studying the learning material and processing it were combined in our study but separated in the studies by Berthold et al. (2007), Hübner et al. (2006), and Nückles et al. (2009). Accordingly, prompts were administered at different points of the learning process: in those studies, prompts were presented after studying the learning material in order to facilitate deep-level processing and reflection. Since journal writing was used as a writing-to-learn method, the prompts may have served as a guide for writing the journal while at the same time activating learners’ strategy use during the recapitulation and processing of the material. In our study, in contrast, students were asked to study the hypertext and simultaneously use the prompts to deeply process the learning contents. Thus, besides the demands of studying the learning material and making navigational decisions, the prompts imposed on learners the additional demand of deeply processing the learning material.

The additional demands that studying the learning material and the simultaneous presentation of the prompts imposed upon learners may have resulted in high cognitive load. Instead of helping learners to perform effectively, the prompts may have been perceived as an additional burden which may, in the worst case, have distracted learners from executing their plans for effectively studying the learning material. Thus, instead of stimulating learners to focus on self-regulation activities, the prompts may have distracted them from studying the learning material in the hypermedia environment. However, since we did not measure learners’ cognitive load, these assumptions will have to be analyzed in further studies.

Since prompts have been used successfully in hypermedia environments, identifying differences and commonalities between these studies and the present study might help to further delineate the possibilities and limitations of stimulating self-regulation processes through prompts in hypermedia environments. Given these high demands, many prompting studies on computer-based or hypermedia learning have used metacognitive prompts (e.g., Bannert and Reimann 2012; Sitzmann and Ely 2010). Bannert and Reimann (2012) used metacognitive prompts to foster learners’ self-regulated learning activities. Comparable to the studies by Berthold et al. (2007), Hübner et al. (2006), and Nückles et al. (2009), processing and studying were partly separated (Bannert and Reimann 2012): students received metacognitive (planning) prompts before they started to learn and metacognitive prompts at the end which stimulated evaluation of the learning progress. These evaluation prompts additionally asked learners to write a list of contents or a diagram including the most important contents. Prompting evaluation at the end of the learning session may have triggered writing-to-learn activities and at the same time stimulated deeper processing of the learning material.

Bannert et al. (2015) included metacognitive prompts after each node selection during the learning process to help learners reflect upon their navigational decisions and thus to stimulate systematic navigation in the learning environment. In contrast to the previously presented studies, the prompted metacognitive activities were designed to support the self-regulated learning activities needed to navigate effectively in a learning environment that imposes higher degrees of learner control. The analyses indicated that supporting reflection upon node selection through prompts increased time spent on relevant pages, systematic navigation behavior, and transfer post-test performance compared to a control group.

Comparing the studies regarding the assessment of participants’ learning strategies, subtle differences can be found. While learning strategies were assessed via self-report (see Assessment of Learners’ Self-Reported Online Learning Strategies) and process data (see Time Spent on Relevant Pages and Qualitative Coding of Learning Strategies in Learners’ Notes) in the present study, the written learning journal and navigational data were used to identify strategy use in the above-mentioned studies (Bannert et al. 2015; Bannert and Reimann 2012; Berthold et al. 2007; Hübner et al. 2006; Nückles et al. 2009). Yet, the notepad included in the present learning environment was very basic and offered few possibilities to highlight or structure the notes (e.g., no italicization or boldfacing) compared to the learning journal studies, in which learners had more possibilities to organize the learning journal (Berthold et al. 2007; Hübner et al. 2006; Nückles et al. 2009). These limited note-taking options may have constrained learners during note taking. Thus, the assessment of learners’ cognitive and metacognitive processes with objective measures should incorporate high degrees of freedom for learners in order to capture the various processes during learning more sensitively.

So, how can we scaffold self-regulated learning through prompts in challenging environments such as hypermedia? One possibility is to separate studying the learning material from processing it. In hypermedia settings, this can be done by presenting prompts at specific points of the self-regulation process, for instance at the beginning or at the end of the learning session; note-taking processes that stimulate writing-to-learn can thus be triggered effectively. When prompts are presented while learners are studying the material, they should be linked to specific actions, like node selection. In sum, the timing of prompts during hypermedia learning should be carefully selected to match the specific self-regulation activity they are designed to stimulate.
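To make this recommendation concrete, event-based prompt timing could be sketched roughly as follows (a minimal, hypothetical illustration in Python; the event names and prompt texts are invented for this sketch and are not taken from any of the cited studies):

from typing import Optional

# Hypothetical mapping of learning events to prompts: a planning prompt at
# session start, a reflection prompt tied to each node selection, and an
# evaluation prompt at session end.
PROMPTS = {
    "session_start": "What do you want to achieve in this learning session?",
    "node_selected": "Why did you select this page, and how does it relate to your goal?",
    "session_end": "Summarize the most important contents you have learned.",
}

def prompt_for(event: str) -> Optional[str]:
    """Return the prompt scheduled for a given learning event, if any."""
    return PROMPTS.get(event)

for event in ("session_start", "node_selected", "session_end"):
    print(event, "->", prompt_for(event))

Linking each prompt to a concrete event keeps the prompt aligned with the self-regulation activity it is meant to stimulate, rather than interrupting learners at arbitrary moments.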

Prompting Effects on Self-Efficacy (Hypotheses 2 and 3)

In accordance with hypothesis 2, we found an interaction between learners’ condition and their self-efficacy ratings. Supporting hypothesis 2a, learners’ self-efficacy increased over time within the group of prompted learners, while no increase was visible in the control group. Yet, comparing learners’ self-efficacy between groups did not yield significant differences (hypothesis 2b). Since only part of our hypotheses was supported by the analyses, the study indicates that providing learners with prompts was not sufficient to create the expected mastery experience and thereby increase learners’ self-efficacy compared to the control group. In line with this, the hypothesized indirect effect of prompts on self-efficacy via strategy use (hypothesis 3) was not supported by the statistical analysis. Our results add to studies in which learners’ self-efficacy was not fostered by instructional means (Moos and Azevedo 2008; Panadero et al. 2012). Yet, Zepeda et al. (2015) were able to increase learners’ self-efficacy during learning, and over a period of 19 weeks, Wäschle et al. (2014) showed that self-efficacy increased as a consequence of perceived goal achievement. Thus, other factors such as the length of the intervention or the value of the learning situation should be manipulated to investigate when and how self-efficacy changes.

Interaction Effects of Self-Efficacy and Prompts (Hypotheses 4a and 4b)

Instead of assuming self-efficacy to be an outcome of the learning process, we followed the alternative, contrasting approach proposed by aptitude-treatment interaction (ATI) research and investigated self-efficacy as a potential influence on the learning process. Thus, we analyzed the moderating effects of learners’ self-efficacy during learning on cognitive and metacognitive online strategy use (hypothesis 4a) and learning outcomes (hypothesis 4b). Since self-efficacy did not moderate the effect of prompts on self-reported online strategy use, hypothesis 4a had to be rejected. For learning outcomes (hypothesis 4b), however, we did find a significant moderation, indicating that the effect of prompts on learning outcomes depended on learners’ self-efficacy during learning.
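Formally, such a moderation corresponds to a standard moderated regression model (a generic sketch for illustration; the coding of the condition variable is an assumption, not a reproduction of our exact model specification):

\[ \text{Outcome} = b_0 + b_1\,\text{Prompt} + b_2\,\text{SE} + b_3\,(\text{Prompt} \times \text{SE}) + \varepsilon \]

where Prompt denotes the experimental condition (e.g., 0 = control, 1 = prompted), SE denotes self-efficacy during learning, and a significant interaction coefficient \(b_3\) indicates that the effect of prompts on learning outcomes varies with learners’ self-efficacy.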

With respect to the present study, assessing learners’ self-efficacy and prior knowledge might help to regulate the presentation of cognitive and metacognitive prompts. Since research suggests that many learners are overwhelmed by the opportunities computer-based learning scenarios offer (Azevedo 2005; Schwonke 2015), prompts may help learners to start an effective learning process. Especially for learners with low prior knowledge or low self-efficacy regarding the mastery of the learning task, an introduction as well as metacognitive prompts designed to support their navigational decisions may help them cope with the self-regulation demands hypermedia learning environments impose (Opfermann et al. 2013; Schwonke 2015; Shapiro and Niederhauser 2004). When learners have overcome this initial phase of orientation, prompts may become irrelevant and should be faded out. Gidalevich and Kramarski (2018) suggest designing prompts in computer-based learning environments in a way that allows learners to become autonomous. Although prompts are initially important to start productive processing of the learning material, Gidalevich and Kramarski (2018) found that fading out prompts leads to an increase in learning outcomes and especially in long-term retention. Nückles et al. (2010) likewise found that fading out prompts leads to increased learning outcomes in the long run.

Endler et al. (2012) suggested adjusting the difficulty of the learning material based on learners’ motivation. Unfortunately, their study lacks empirical evidence on the effectiveness of such an adaptation, although the theoretical assumptions give reason to expect promising results. In the field of mathematics, tailoring algebra problems to learners’ situational interest increased learners’ interest and their learning of algebra concepts (Bernacki and Walkington 2014; Walkington 2013). Using motivational factors to regulate instructional means might be especially fruitful in long-term computer-based learning scenarios such as online courses or distance learning programs. Since attrition rates are high in online distance learning programs (Rovai 2003) and motivation is crucial for completing online courses (Muilenburg and Berge 2005), assessing learners’ motivation might be particularly valuable. Depending on learners’ motivation, instructional methods might then be used to adjust exercises and instructional support.

Limitations and Future Research

A limitation of this study is that reporting one’s self-efficacy during learning may itself have acted as a prompt. Comparable to diary studies, the assessment of learners’ self-efficacy during learning may not only be a measurement tool but also an intervention (Panadero et al. 2016). Since learners had to reflect upon their self-efficacy during learning, this reflection may have activated or even diminished strategy use, as described in the section Self-Efficacy. This may have resulted in interference between the strategies selected by the learners and the prompted strategies.

A further limitation may be the sample size. Since the sample size in our study was comparable to that of other studies investigating the effects of prompts on learning outcomes and strategy use, the analyses should have yielded comparable effects (Bannert and Reimann 2012; Bannert et al. 2015; Berthold et al. 2007; Moos and Azevedo 2008; Müller and Seufert 2018; Thillmann et al. 2009). Yet, methodological differences compared to these studies may have reduced the size of the otherwise well-researched effects. Given our methods and analyses, the absence of prompting effects on strategy use and learning outcomes may therefore be an issue of statistical power.
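To illustrate the power issue, the sample size required to detect a between-group effect can be estimated a priori (a minimal sketch in Python with assumed values, not the parameters of our study):

from statsmodels.stats.power import TTestIndPower

# Per-group sample size needed to detect an assumed medium-sized
# between-group difference (Cohen's d = 0.5) in a two-group design
# with alpha = .05 (two-sided) and 80% power.
n_per_group = TTestIndPower().solve_power(effect_size=0.5, alpha=0.05, power=0.80)
print(round(n_per_group))  # approx. 64 participants per group

If the true effect in a setting like ours is smaller than in the journal-writing studies, the required sample size grows quickly, which is consistent with the interpretation that the absent effects may reflect limited power.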

Another methodological limitation is the assessment of learners’ strategy use. Although both self-report and process data indicate that prompts did not foster self-regulation processes in terms of strategy use, this result may partly be a matter of methodology. Since self-reports do not necessarily reflect actual behavior, future research should use methods that assess learners’ strategy use during learning by more objective means and process data. Moreover, analyzing learners’ notes might have been more revealing if the notepad had included possibilities to apply learning strategies, e.g., by italicizing, boldfacing, or coloring parts of the notes. Thus, since learners had only limited possibilities to take notes, the notes might reflect learners’ actual strategy use only to a limited degree.

Think-aloud protocols have previously been used to assess the development of learners’ strategy use during learning (e.g., Moos and Azevedo 2008). They are a way to trace learners’ behavior during learning but depend on learners’ utterances, and learners often need to be trained to think aloud. Additional process data, such as physiological measures or video analyses, could have been collected to help triangulate the cognitive, metacognitive, motivational, and even emotional processes during self-regulation (Järvelä et al. 2019). An alternative in computer-based learning scenarios might be variables of learners’ navigation behavior (e.g., the sequence of visited pages, time on relevant pages; e.g., Bannert et al. 2015). While such log-file data represent an objective measure of learners’ navigation behavior, findings regarding the interpretation of such data are inconsistent. Pieger and Bannert (2018) and Astleitner (1997), for instance, regard non-linear navigation (e.g., page changes from page 1 to page 4) compared to linear navigation (e.g., changes from page 1 to page 2) as an indication of enactive self-regulation, since it suggests that learners reflect upon the selection of nodes. However, Jeske et al. (2014) as well as Greene and Azevedo (2007) regard excessive non-linear navigation as an indication of poor self-regulation. Thus, in order to interpret such log-file data, information from learners on why they change pages and what they do (strategy use) during the page visit is needed. The findings of this study suggest that more research is needed that investigates self-regulation processes during learning with different objective measures to fully understand how learners interact with instructional support.
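As an illustration of how such log-file data could be quantified, the share of non-linear transitions in a page-visit sequence can be computed as follows (a minimal sketch in Python; it assumes, for simplicity, that page identifiers are integers reflecting the environment’s intended reading order):

def nonlinear_ratio(visits):
    """Proportion of page transitions that do not lead to the next page
    in the canonical order (i.e., non-linear transitions)."""
    transitions = list(zip(visits, visits[1:]))
    if not transitions:
        return 0.0
    nonlinear = sum(1 for a, b in transitions if b != a + 1)
    return nonlinear / len(transitions)

# Example: the jump from page 3 back to page 1 and from page 1 to page 4
# are non-linear, so 2 of 5 transitions count as non-linear.
print(nonlinear_ratio([1, 2, 3, 1, 4, 5]))  # 0.4

Such a metric only becomes interpretable, however, when combined with learners’ reasons for changing pages, which is precisely the open issue raised above.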

Conclusion

In sum, this study investigated the effects of prompts on strategy use, learning outcomes, and self-efficacy, as well as the interaction effects of self-efficacy and prompts. Although the study represents a unique contribution, the results highlight the importance of investigating interactions between instructional interventions and motivational variables during learning over longer periods of time. They should motivate future researchers to investigate self-regulation as a process, using instruments that are sensitive enough to identify changes in learners’ strategy use and/or motivation.