Introduction

Spatial learning of target locations while navigating is an important everyday skill that we rely on for tasks such as returning to a doctor’s office in a complex building or getting back to a discovered coffee shop in a new city. While challenging for everyone, these tasks become more difficult with reduced visual information, as for people with low vision, who have uncorrectable visual impairment that does not amount to complete blindness. Understanding the effects of severe visual impairment on navigation has important implications for encouraging mobility-related independence for individuals with clinical low vision, as well as for navigational success in situations of incomplete vision for normally sighted people, such as wearing night-vision goggles or navigating in virtual reality. Our prior work reveals a complex relationship between visual, mobility, and other cognitive demands during low-vision navigation, with challenges that extend beyond mere visual deficits. Rand, Creem-Regehr, and Thompson (2015) demonstrated that participants with simulated severe reductions in acuity and contrast sensitivity were impaired at remembering target locations learned while walking through hallway corridors, relative to normal-vision conditions. Barhorst-Cates, Rand, and Creem-Regehr (2016) extended this work to simulated restricted peripheral vision and found similar deficits in a very severely restricted (4° central field of view (FOV)) condition (see also Barhorst-Cates, Rand, & Creem-Regehr, 2019, for navigation in a museum). Secondary tasks performed while navigating suggested that, in addition to the impact of reduced visual context and landmarks, some of the deficit in spatial learning could be attributed to the increased cognitive demands of walking with impaired vision. This mobility monitoring account holds that because attentional resources are required to monitor and maintain one’s safe mobility while walking with degraded vision, fewer resources are available to encode and remember target locations.

Given the importance of available cognitive resources for spatial learning in this context, together with the mobility challenges faced by those with low vision, one relevant question is the influence of active versus passive contributions to spatial learning. Our work addresses the active versus passive distinction in two separate domains related to navigation: physical mode of mobility and attentional/cognitive engagement (i.e., making decisions during navigation, actively searching for target items). Results in the literature are mixed regarding the benefits of each type of active learning (for a review, see Chrastil & Warren, 2012), but there is reason to believe that the added challenge of visual impairment may modulate these effects. Given the cognitively demanding nature of navigation with low vision (Rand et al., 2015), adding active mobility or active cognitive engagement may not confer the same benefits as in normal viewing contexts. Our current goal was to test these two types of active learning, each predicted to influence memory for spatial locations during navigation with restricted FOV. First, we examined active versus passive locomotion during spatial learning with the manipulation of walking versus being pushed in a wheelchair (Experiments 1 and 2). Second, we manipulated the way in which targets were identified along the paths. In all of our prior work, targets were identified by the experimenter, potentially reducing the demands on the participant to use vision to locate the targets. In two of the current experiments (Experiments 3 and 4), we implemented an active search paradigm, requiring participants to both detect and remember target locations.

Previous research has demonstrated the importance of peripheral vision for various components of navigation, including mobility, distance perception, and spatial memory for targets in small- and large-scale spaces. People with peripheral field restriction (whether simulated, Turano et al., 2004, or clinical, Marron & Bailey, 1982) show reduced performance on mobility tasks, adapting their walking behaviors but still colliding with a greater number of obstacles. FOV restriction instigates a shift in mobility strategies, encouraging people to take wider turns, modify their gait patterns, and use more head movements to detect and avoid obstacles (Jansen, Toet, & Werkhoven, 2010, 2011). Together, this work suggests that mobility itself is affected by peripheral field restriction (Pelli, 1987), with downstream effects on spatial learning during navigation (Barhorst-Cates et al., 2016). Peripheral field loss also affects spatial memory for target locations through visual mechanisms. For instance, reduced FOV disrupts the ability to access a global spatial framework even for small-scale spatial layouts: more head and eye movements must be used to perceive the layout of objects when the visual field is restricted (Yamamoto & Philbeck, 2013). One contribution to spatial memory error with peripheral field loss may be inaccurate distance perception (Fortenbaugh, Hicks, Hao, & Turano, 2007; Fortenbaugh, Hicks, & Turano, 2008; for the effects of severely blurred vision on distance estimates, see Rand, Barhorst-Cates, Kiris, Thompson, & Creem-Regehr, 2019). Distance perception and spatial memory with restricted FOV improve through active walking to targets as opposed to stationary viewing (Fortenbaugh et al., 2008). In our prior work on spatial learning in a large-scale environment (Barhorst-Cates et al., 2016), we observed intact spatial memory performance under severe FOV restrictions, up until FOV was reduced to near-foveal vision (4°). We suggested that active locomotion through the environment may have facilitated spatial memory performance despite the severe visual restriction, similar to the argument made by Fortenbaugh et al. (2008). Although active movement to targets is beneficial under restricted FOV, it does introduce mobility-related attentional demands that may deplete cognitive resources and detract from spatial learning (Rand et al., 2015). The aim of this paper was to further clarify the nuanced interaction between mobility, vision, and their cognitive demands in the context of navigation with restricted FOV.

Although relatively few papers address the effects of mobility on spatial memory in the context of low vision, there is a large and contested body of literature assessing the effects of active movement on spatial memory generally. Natural active walking involves podokinetic and idiothetic information (Chrastil & Warren, 2013). Podokinetic sources of information come from efferent motor commands for the walking movement itself and proprioceptive information about the displacement of body parts. Idiothetic information involves efferent motor commands, proprioception, and also vestibular information about head movement (vertical or horizontal displacements) signaled by the inner ear. Passive wheelchair manipulations minimize podokinetic information but maintain vestibular information, which may or may not impair spatial performance. For instance, in a study that compared walking and wheelchair locomotion in a spatial learning task with normal vision, Chrastil and Warren (2013) found that vestibular information (being pushed in a wheelchair) did not improve performance beyond a video-only condition, but walking (providing additional podokinetic information) significantly improved performance. The authors argue that podokinetic information contributes to metric spatial knowledge because it provides information about distance traveled. Many others have demonstrated the importance of walking for spatial updating (Chrastil, Nicora, & Huang, 2019; Ruddle & Lessels, 2006; Ruddle, Volkova, & Bulthoff, 2011), although some argue that proprioceptive information from turns is more critical than proprioceptive information about distance traveled (Klatzky, Loomis, Beall, Chance, & Golledge, 1998; Presson & Montello, 1994; Riecke, Bodenheimer, McNamara, Williams, Peng, & Feuereissen, 2010). Walking may facilitate spatial learning because it is automatic and provides effortlessly acquired information about self-location (Farrell & Thompson, 1998; May & Klatzky, 2000; Rieser, 1989, 1990).

In the context of active mobility with low vision, Legge, Gage, Baek, and Bochsler (2016) compared spatial size judgment and spatial updating between walking and wheelchair conditions in a single-room environment. They found a surprising lack of difference between walking and wheelchair conditions, suggesting that active locomotion may not facilitate spatial memory of one’s own location in smaller-scale environments with visual restriction. What is missing from the literature is an assessment of active locomotion contributions to spatial knowledge in large-scale spaces with visual impairment. In our first two experiments, we directly compared active and passive locomotion within subjects in a large-scale spatial learning task with a peripheral FOV restriction. Because of our prior work arguing for the importance of active locomotion in learning these large-scale spaces (Barhorst-Cates et al., 2016) and other research demonstrating benefits of active movement in FOV restriction (Fortenbaugh et al., 2008), we expected that spatial learning would be more accurate in the active locomotion condition compared to passive locomotion despite the potential increase in cognitive load present in restricted FOV conditions.

Aside from locomotion, navigation can vary in its cognitively active components (Chrastil & Warren, 2012), such as making decisions along a route, manipulating spatial information, or attending to relevant navigation features, such as objects or spatial layout. Our experiences as a driver versus a passenger in a car suggest intuitively that active engagement with a novel environment facilitates learning. However, a fair amount of research has considered active decision making in navigation, and many studies have found minimal benefit of active over passive learning (Foreman, Sandamas, & Newson, 2004; Gaunet et al., 2001; Wilson & Peruch, 2002; for a review, see Chrastil & Warren, 2012) or benefits that may only be present when combined with active movement (Farrell et al., 2003). Nonetheless, results are inconsistent, with some researchers demonstrating advantages of active over passive decision making in navigation (Bakdash, Linkenauger, & Proffitt, 2008; Brooks et al., 1999; Markant, DuBrow, Davachi, & Gureckis, 2014; Plancher, Barra, Orriols, & Piolino, 2013), especially depending on the type of navigation task (Wallet, Sauzeon, Larrue, & N’Kaoua, 2013). Active decision making may also affect only certain components of navigation. For instance, decision making at intersections affects topological graph and route knowledge, but not other types of spatial knowledge (Chrastil & Warren, 2015). Of note, many of the experiments assessing active decision making in navigation have involved desktop virtual-reality paradigms (Christou & Bulthoff, 1999; Wilson & Peruch, 2002; Wilson et al., 1997), which may be less likely to reveal the potential benefits of active decision making than paradigms that include idiothetic information (Chrastil & Warren, 2012).

Navigation is an activity that is particularly susceptible to interference from cognitive load, even outside the context of visual impairment. A body of literature has consistently found that accurately learning a spatial environment requires attentional resources, both in real-world and virtual/desktop environments (e.g., Albert, Rensink, & Beusmans, 1999; Glasauer, Schneider, Grasso, & Ivanenko, 2007; Lindberg & Gärling, 1982). For instance, Glasauer and colleagues (2007) demonstrated that when attention was divided while following a path, memory for the distance traveled was disrupted compared to when no dual task was imposed. High cognitive load is particularly detrimental to active decision making during navigation (Knight & Tlauka, 2017), and tends to influence performance on some navigation tasks (map drawing) more than others (pointing to targets). Similarly, Gardony, Brunyé, Mahoney, and Taylor (2013) demonstrated that the use of supposedly helpful navigation assistive devices is detrimental to spatial learning because of the increase in attentional demands. While incidental learning has been demonstrated when familiar landmarks can be used in route learning (Anooshian & Siebert, 1996; van Asselen et al., 2006), by and large previous research supports the requirement of intentional/effortful processing for spatial learning (see Chrastil & Warren, 2012, for a review). We postulate that visual search for targets may also increase cognitive demands, especially with restricted peripheral vision.

In navigational contexts with normal viewing, there is little evidence that actively attending to objects affects spatial memory for their locations (Chrastil & Warren, 2012; Wilson, 1999; Wilson & Peruch, 2002), but some research shows detriments of active attention for memory of visual features of the objects themselves (e.g., photographing museum pieces compared to passively viewing them; Henkel, 2014). Methodological limitations (such as using idiothetic-free VR paradigms) could contribute to the lack of spatial memory effects in the context of navigation. However, attention to targets in navigation has rarely been considered in the context of severe visual restriction. With restricted FOV, active search for and attention to targets may have a stronger influence on spatial memory. Viewing objects and environments with restricted FOV requires more eye or head movements, which must be integrated to form a complete representation of the environment. This integration process contributes to spatial memory distortions (Yamamoto & Philbeck, 2013), potentially because it is more attentionally demanding (Barhorst-Cates et al., 2016, 2019). Together, these findings suggest that adding active visual search for those navigating with visual impairment might have negative consequences for spatial learning by overburdening cognitive resources, despite the benefits sometimes seen in normally sighted individuals. That is, any benefits of the active task could be offset in low-vision conditions by the added cognitive demands.

Our current studies build on the foundational work that has traditionally considered active contributions to spatial learning and the attentional consequences of navigating with reduced visual information as separate domains. We aimed to directly compare active and passive learning conditions in the context of navigation with severely restricted FOV. In terms of locomotion, prior research on benefits of walking in normal navigation situations (Chrastil & Warren, 2013) and benefits of movement for increasing distance perception accuracy in restricted FOV (Fortenbaugh et al., 2008) led us to hypothesize that passive locomotion along the route would lead to greater impairments in memory compared to active walking. In terms of target search, prior evidence for increased cognitive demands due to mobility monitoring during low-vision navigation (Barhorst-Cates et al., 2016; Rand et al., 2015) and the effort required to integrate multiple restricted viewpoints (Yamamoto & Philbeck, 2013) motivated the prediction that the greater demands of the active search task would outweigh any benefits of active decision making and lead to further decrements in spatial learning.

Experiments 1 and 2: Active locomotion

In our first two experiments, we assessed the difference between active and passive locomotion in a large-scale, real-world spatial learning paradigm (Barhorst-Cates et al., 2016, 2017; Rand et al., 2015). To simulate FOV loss, we used goggles that reduced the FOV to approximately 10°. We decided to use a 10° FOV because our prior research demonstrated minimal impairments in spatial learning at 10°, but significant mobility-monitoring demands (Barhorst-Cates et al., 2016). Barhorst-Cates et al. (2016) suggested that active locomotion facilitated spatial learning performance at 10° despite the severe lack of visual information. We used the same environment, paths, and targets as in Barhorst-Cates et al. (2016) in Experiment 1, with a within-subjects manipulation of locomotion method (walking vs. wheelchair). In Experiment 2, we increased the number of targets to be remembered but maintained the same locomotion method manipulation and the same paths. In both cases, we predicted greater accuracy in the walking condition compared to the wheelchair condition, which would provide evidence for the benefits of active locomotion in the context of low vision. See Table 1 for an overview of all experiments and manipulations.

Table 1. Overview of experiments and manipulations

Experiment   Within-subjects manipulation                FOV condition(s)   Targets per path
1            Locomotion: walking vs. wheelchair          ~10° (narrow)      3
2            Locomotion: walking vs. wheelchair          ~10° (narrow)      4
3            Vision: narrow vs. wide (active search)     ~10° vs. ~60°      3
4            Search: active vs. passive (walking)        ~10° (narrow)      3

Method

Participants

We recruited participants from the psychology department participant pool. Participants received partial course credit as compensation. Participants had self-reported normal or corrected-to-normal vision and could walk without impairment. All participants gave written informed consent, with procedures approved by the University of Utah Institutional Review Board. In prior studies, we have had sufficient power to detect effects of within-subject manipulations in this paradigm with samples ranging from 14 to 30 participants. As such, we conservatively aimed to recruit a minimum of 28 participants. Twenty-eight participants took part in Experiment 1 (nine males); their average age was 20.3 years (SD=3.7). Thirty participants completed Experiment 2 (15 males); their average age was 22.3 years (SD=5.6).

Materials

In all experiments, our primary dependent measure was absolute pointing error to the remembered targets’ locations from the end of each path. Participants indicated the remembered locations of targets using a degree-quadrant pointing task (Philbeck, Sargent, Arthur, & Dopkins, 2008), which we have used in prior studies (Barhorst-Cates et al., 2016, 2017, 2019; Rand et al., 2015). First, the space around a person is divided into four quadrants (front-left, front-right, back-left, and back-right), with degrees from 0° to 90° (described in more detail in the Procedure section) in each quadrant. Participants physically point, then verbalize the quadrant and degree (e.g., “Front left, 35°”). See Fig. 1 for an overview of this task.

Fig. 1. Degree-quadrant pointing task. Participants physically pointed to and verbalized the direction of targets using a quadrant and degree (e.g., back right, 25°)
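To make the scoring concrete, the sketch below converts a quadrant-plus-degree response into a bearing and computes absolute pointing error. It is a minimal illustration in R (the function names are ours; the paper does not publish its scoring code), using the quadrant conventions described in the Procedure section.

```r
# Convert a quadrant + degree response to a bearing in degrees,
# measured clockwise from the participant's final heading.
# Front quadrants: 0 = straight ahead, 90 = straight to the side.
# Back quadrants: 90 = straight behind, 0 = straight to the side.
to_bearing <- function(quadrant, deg) {
  switch(quadrant,
         "front-right" = deg,
         "back-right"  = 90 + deg,
         "back-left"   = 270 - deg,
         "front-left"  = (360 - deg) %% 360)
}

# Absolute pointing error: smallest angular distance (0-180 degrees)
# between the response bearing and the target's true bearing.
abs_error <- function(response, target) {
  d <- abs(response - target) %% 360
  min(d, 360 - d)
}

# Example: "front left, 35 degrees" for a target actually at a bearing of 340 degrees
abs_error(to_bearing("front-left", 35), 340)  # 15 degrees of error
```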

Participants wore welding goggles that restricted vision to the dominant eye. We restricted peripheral FOV by drilling holes through the center of black cardstock paper cut to match the shape of the eyepiece of the goggles (see Barhorst-Cates et al., 2016, 2019, for prior work and more detailed descriptions of these exact goggles). Average measured FOVs for the narrow goggles were 12.5° (SD=1.9) in Experiment 1 and 11.82° (SD=2.08) in Experiment 2. The goggles produce slightly different FOVs for different individuals, largely due to differences in head size. See Barhorst-Cates et al. (2016) for an overview of the FOV measurement used here.
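Why head size matters follows from simple aperture geometry: the same hole subtends a larger angle when it sits closer to the eye. A hedged illustration (the aperture diameter and eye distances below are invented for the example, not the goggles' actual dimensions):

```r
# Approximate FOV in degrees of a circular aperture of diameter d_mm
# centered at distance dist_mm in front of the eye
fov_deg <- function(d_mm, dist_mm) {
  2 * atan((d_mm / 2) / dist_mm) * 180 / pi
}

fov_deg(d_mm = 3.5, dist_mm = 20)  # ~10.0 degrees
fov_deg(d_mm = 3.5, dist_mm = 17)  # ~11.8 degrees: a shorter eye-to-aperture
                                   # distance yields a wider FOV
```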

Participants filled out a Subjective Units of Distress scale (SUDS; Bremner et al., 1998) at the end of each path, after performing the pointing task. Participants rated their remembered level of distress during the path from 0 to 100, with 0 being not at all distressed and 100 being the most distressed they have ever been. We specifically asked participants to rate how calm or anxious they felt about their safety along the path, not about their memory.

Participants completed the Santa Barbara Sense of Direction scale (SBSOD; Hegarty, Richardson, Montello, Lovelace, & Subbiah, 2002) as a self-report measure of their navigation abilities and strategies. This was used to assess whether individual differences in navigation ability impact pointing error.

Procedure

All experiments took place on the second and third levels of the Merrill Engineering Building (MEB) at the University of Utah. Participants arrived at the lab, filled out written consent forms, then completed the FOV aperture tests to measure individual perceived FOV (Barhorst-Cates et al., 2016). Experimenters then explained the general instructions and the pointing task. The experimenter drew four quadrants on a piece of paper, indicating the location and heading direction of the participant in the center, and labeled the four quadrants and location of the degrees in each quadrant. For the front quadrants, 0° is straight ahead and 90° straight to the sides. For the back quadrants, 90° is straight behind and 0° is to the sides. Participants practiced the pointing task by pointing to and verbalizing the location of objects in the room. They practiced until their verbal description matched the direction they were pointing. First, to encourage participants to look around their environment at targets, we conducted the practice used by Barhorst-Cates et al. (2016) in which participants had to walk and read aloud the room numbers of various doors that they passed. They then completed one practice in each locomotion condition of the spatial learning task. Participants walked a predetermined path through the existing building hallways, with turns at natural intersections. At the end of each path, and facing the same heading direction as when they reached the end of each path, participants pointed to the remembered location of the targets one at a time in an order that differed from that in which the participant encountered the target on the path. If the pointing direction did not match the verbal description, experimenters prompted participants to clarify. Participants could not see any of the landmarks from the end of any path, and paths did not overlap. See Fig. 2 for an example path.

  • Passive locomotion condition. In the passive locomotion condition, participants were pushed in a standard wheelchair. At each target, the experimenter stopped the wheelchair and stated the location of the target (e.g., “on your right is an elevator”). We encouraged the participants to look at the targets but did not force them to. After a 3-s pause, the experimenter began pushing the wheelchair again until the next target was reached. At each turn, the experimenter verbalized the direction of the turn a few steps before the turn was made (“here, we will make a right”). Each path contained three (Experiment 1) or four (Experiment 2) targets that varied in location on either the right or the left side of the path. In the experiments with three targets, we used the same paths and targets as Barhorst-Cates et al. (2016). The paths were 109–121 m long with four turns and did not overlap with one another. Participants completed two of the four paths in this condition.

  • Active locomotion condition. In the active locomotion condition, participants walked while holding the arm of the experimenter. This was done to equalize the amount of guidance provided in the active and passive locomotion conditions, as guidance has previously been shown to reduce mobility-monitoring demands in a way that can affect spatial learning (Rand et al., 2015). Our interest was instead in the difference between active movement (vestibular, visual, and proprioceptive cues) and passive movement (vestibular and visual motion cues only). In this active movement condition, participants placed their non-dominant arm on top of the arm of the experimenter, with the experimenter standing beside the participant. Just as in the passive locomotion condition, the experimenter stopped at each target and verbalized the location of the target. Again, we encouraged participants to look at the target but did not force them to. The experimenter and participant then walked together to the next target, with the experimenter verbalizing the direction of each turn a few steps in advance. Participants completed two of the four paths in this condition. Participants completed the paths in an alternating order that was counterbalanced between participants (either active-passive-active-passive or passive-active-passive-active). After completing the four navigation trials, participants filled out the Santa Barbara Sense of Direction scale. They were then debriefed, thanked, and dismissed.

Fig. 2. Example navigation path with three landmarks. Participants began at the home plate and navigated along the path as directed by the experimenter. They stopped and looked at each landmark. Upon reaching the end of the path (indicated by the stop sign), participants remained in their final heading direction and completed the pointing task to the three landmarks

Design and data analysis

In each experiment, we used repeated-measures analyses of variance (ANOVAs) with a within-subjects factor of condition and a between-subjects factor of condition order. ANOVAs were performed in IBM SPSS. We also used the BayesFactor package in R to compute Bayes factor t-tests on the difference scores between conditions. This analysis quantifies the relative likelihood of the data under the null hypothesis that the difference between conditions is zero (H0) and under the alternative hypothesis that the difference is nonzero (H1). For qualitative interpretation of the Bayes factors, we followed the guidelines laid out by Jarosz and Wiley (2014). Of note, many consider Bayes factors over 10 to be strong evidence (Jeffreys, 1961).
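As a sketch of this pipeline for Experiment 1, assuming a long-format data frame `dat` with one row per participant per condition (the column and level names are hypothetical; the ANOVAs were actually run in SPSS, and the Bayes factors use the BayesFactor defaults):

```r
library(BayesFactor)

# dat: columns participant (factor), condition ("walking"/"wheelchair"),
# order ("walking_first"/"wheelchair_first"), error (mean absolute pointing error)

# Mixed ANOVA: condition (within subjects) x order (between subjects)
fit <- aov(error ~ condition * order + Error(participant / condition), data = dat)
summary(fit)

# Bayes factor t-test on the walking - wheelchair difference scores;
# assumes rows are sorted by participant within each condition
diffs <- dat$error[dat$condition == "walking"] -
         dat$error[dat$condition == "wheelchair"]
bf <- ttestBF(x = diffs)  # one-sample test of H0: mean difference = 0
bf                        # BF10 below 1 favors H0; e.g., BF10 = .397 means the
                          # data are 1/.397 = 2.52 times more probable under H0
```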

Results

Experiment 1

In Experiment 1, participants wore FOV-restricting goggles and completed two spatial learning trials while walking holding onto the arm of the experimenter, and two trials while being pushed in a wheelchair. They pointed to target locations at the end of each trial and reported their safety-related anxiety. Two outliers were identified whose pointing error was greater than three standard deviations above the mean (one in the walking condition, one in the wheelchair condition). These two participants’ data were removed from the following analyses.

  • Pointing error. We ran a 2 (Locomotion Condition: Walking vs. Wheelchair) × 2 (Condition Order: Walking First vs. Wheeling First) repeated-measures ANOVA. There was no significant main effect of Locomotion Condition (p=.23), no main effect of Condition Order (p=.40), and no Locomotion Condition × Order interaction (p=.15). Participants’ error in the Walking condition (M=19.02, SE=1.19) was not significantly different (BF=.397) from their performance in the Wheelchair condition (M=17.12, SE=1.46). The Bayes factor for Locomotion Condition provided weak or anecdotal evidence in favor of H0 (Jarosz & Wiley, 2014), suggesting that the data are 2.52 times more probable if H0 were true than if H1 were true. This provides further evidence that performance in the two conditions did not differ.

  • Mobility-related anxiety. We performed the same repeated-measures ANOVA with SUDS as the outcome variable. There was a significant main effect of Locomotion Condition F(1, 24) = 23.1, ηp2 = .49, p<.0001. Walking (M=24.17, SE=3.73) elicited significantly higher SUDS (BF=306.79) than Wheelchair (M=12.35, SE = 2.19). The Bayes factor suggests that the empirical data are 306.79 times more probable if H1 were true than if H0 were true, very strong or decisive evidence in favor of H1 (Jarosz & Wiley, 2014). There was no main effect of Order (p=.70), but there was a significant Order × Condition interaction F(1, 24) = 4.58, ηp2 = .16, p=.04. Those participants who performed Walking first had a larger difference between Walking (M=25.71, SE = 5.28) and Wheelchair (M=8.63, SE = 3.1) compared to participants who performed Wheelchair first, where the difference was smaller between Walking (M=22.63, SE=5.28) and Wheelchair (M=16.08, SE = 3.1).

SBSOD was not significantly related to pointing error in the Wheelchair (p=.08) or Walking condition (p=.74).

Experiment 2

In Experiment 2, we increased the number of targets from three to four to test the hypothesis that active locomotion with restricted FOV is only beneficial when working memory is more heavily burdened. Participants completed two trials while walking holding the arm of the experimenter and two trials while being pushed in the wheelchair. They pointed to targets at the end of the path and completed the same measure of safety-related anxiety. One participant showed outlier performance with errors in both conditions that were over 3 standard deviations above the average performance in that condition. That participant was removed from the following analyses.

  • Pointing error. We ran a 2 (Locomotion Condition: Walking vs. Wheelchair) × 2 (Condition Order: Walking First vs. Wheelchair First) repeated-measures ANOVA. There was no significant main effect of Locomotion Condition (p=.51), no main effect of Condition Order (p=.83), and no Locomotion Condition × Order interaction (p=.66). Average error in the Walking condition (M=22.68, SE=1.87) did not differ (BF=.25) from average error in the Wheelchair condition (M=24.32, SE=1.78). The Bayes factor provides positive or substantial evidence in favor of H0 (Jarosz & Wiley, 2014), which is not considered strong evidence (Jeffreys, 1961). This analysis suggests that the empirical data are 4.04 times more probable if H0 were true than if H1 were true.

  • Mobility-related anxiety. We performed the same repeated-measures ANOVA with SUDS as the outcome variable. There was a main effect of Locomotion Condition F(1, 27) = 13.52, ηp2 = .33, p=.001. Walking elicited significantly higher SUDS (M=25.49, SE=3.10) than Wheelchair (M=16.06, SE=2.57, BF=12.76). The Bayes factor indicates that the data are 12.76 times more probable if H1 were true than if H0 were true, which is considered positive or strong evidence in favor of H1 (Jarosz & Wiley, 2014). This main effect of Condition was qualified by a significant interaction with Order F(1, 27) = 11.21, ηp2 = .29, p=.002. For participants in the Walking-first order, there was a large difference between SUDS in Walking (M=27.7, SE=4.3) and Wheelchair (M=9.68, SE=3.57). For participants in Wheelchair-first order, there was not a large difference between Walking (M=23.29, SE=4.46) and Wheelchair (M=22.44, SE=3.70).

SBSOD did not predict performance in the Wheelchair condition (p=.6) but it did predict performance in the Walking condition (B=-.30, p=.03). A higher self-reported sense of direction was related to less angular error.

Discussion

In two experiments, participants wore goggles that reduced their FOV to about 10° and completed a spatial learning paradigm in walking and wheelchair locomotion conditions (within subjects). Participants were guided along paths, told the location of either three (Exp. 1) or four (Exp. 2) targets, pointed to the target locations at the end of each path, and self-reported their mobility-related anxiety. We observed no strong evidence of a difference between locomotion methods in pointing accuracy in either experiment, although we did find strong evidence that participants reported lower levels of mobility-related anxiety in the wheelchair condition in both experiments. This was particularly true for those individuals who walked first. Given no difference between walking and wheeling in Experiment 1, we tested whether the predicted advantage for walking would emerge when cognitive demand was increased (by increasing the number of targets to be remembered). The results of Experiment 2 replicated Experiment 1, finding no strong evidence of a difference between locomotion conditions. Admittedly, the change from three to four targets was not drastic, but it is notable that the increase in targets did numerically increase the error in both conditions (by 4–5°), consistent with an increase in memory load. These results suggest, contrary to our hypothesis, that the idiothetic information provided by walking does not improve survey knowledge above and beyond vestibular information in the context of restricted FOV. Our anxiety results suggest that the wheelchair condition may provide a benefit in reducing mobility-monitoring concerns, which could theoretically free cognitive resources for navigation in a way that could improve spatial learning (Rand et al., 2015). However, this reduced anxiety did not translate into improved spatial memory performance in the current studies. Moreover, we did not include a concurrent measure of cognitive load to assess mobility-monitoring demands; future research should measure attentional demands during navigation under different locomotion methods.

Experiments 1 and 2 demonstrated no marked differences in spatial memory accuracy based on locomotion method, even with more targets to be remembered. Participants were equally accurate when walking and when pushed in a wheelchair, suggesting that physical locomotion method did not measurably affect spatial learning in this paradigm. Notably, however, the wheelchair condition appeared to be less anxiety-inducing for participants in both experiments, suggesting that passive locomotion may be beneficial for reducing safety-related concerns while navigating with field loss. Importantly, participants still received vestibular information in both conditions. Both conditions were also equally cognitively “passive,” in that no decisions were required and we pointed out the locations of targets to participants. We tried to minimize the cognitive demands of walking by providing a physical guide (Rand et al., 2015), but participants still experienced the walking condition as more anxiety-inducing than the wheelchair condition. This suggests that there may be a role for locomotion method in low-vision navigation, but that no strong effect is revealed at the level of spatial learning.

Experiments 3 and 4: Active search

In many navigation paradigms, including the real-world spaces used in our own prior low-vision studies (Barhorst-Cates et al., 2016, 2019; Rand et al., 2015), participants traverse pre-defined paths and their attention is directed to specified targets. Although we have found evidence that visual impairment can be detrimental to spatial memory for these targets’ locations, suggesting a role for vision in the task, it is possible that the passive nature of the task allows participants to develop a memory representation without the use of vision, based on the experimenter’s verbal directions together with schema-based knowledge about indoor spaces. This could explain why spatial memory remained intact even at severe (15° and 10°) FOV restrictions (Barhorst-Cates et al., 2016). As such, we were interested in the possibility that active search for targets might be more likely to reveal differences based on FOV restriction. Although some research suggests that desirable difficulties may improve cognitive performance (Bjork & Bjork, 2011), here we predicted that active search for targets would be detrimental to spatial learning with restricted FOV because of the greater attentional demands required to integrate the scene. This, combined with mobility-monitoring demands during navigation with FOV restriction, should detract from cognitive resources in a way that impairs spatial learning. We tested this hypothesis in Experiments 3 and 4.

We first conducted Experiment 3 to test the effect of peripheral field restriction on spatial learning in an active search task. As in our original studies, we compared within-subjects performance in a severe vision restriction condition (10° FOV) and a mild restriction condition (60° FOV). We included an assessment of cognitive load to measure the attentional effects of navigating and learning with these two levels of FOV. We expected the 10° condition to produce both greater cognitive load (slower reaction times (RTs) on our secondary task; see below) and poorer pointing accuracy. This would suggest that actively searching for targets is especially difficult with restricted FOV, potentially because of the greater attentional demands of integrating head movements and maintaining safe mobility. Finally, in Experiment 4 we directly compared active and passive search within subjects under a 10° FOV restriction, assessing both spatial learning and cognitive load. We predicted that active search would show greater cognitive load and poorer pointing accuracy compared to passive search (see Table 1).

Method

Participants

Participants were recruited from the psychology department participant pool, were compensated with partial course credit, had self-reported normal or corrected-to-normal vision, and could walk without impairment. Thirty-seven participants completed Experiment 3 (17 male); their average age was 20.27 years (SD=2.0). Twenty-eight participants completed Experiment 4 (nine male); their average age was 23.43 years (SD=6.17). All participants provided written informed consent, with procedures approved by the University of Utah Institutional Review Board.

Materials

The same materials as in Experiments 1 and 2 were used here, with some modifications. Experiment 3 included both the narrow and the wide FOV goggles; Experiment 4 included only the narrow FOV goggles. Average measured monocular FOVs for the narrow goggles were 11.79° (SD=3.85) in Experiment 3 and 10.9° (SD=1.52) in Experiment 4. In Experiment 3, the measured average FOV for the wide goggles was 68.52° (SD=13.56). We did not include the SUDS scale.

Participants completed a secondary auditory reaction-time task (Brunken, Plass, & Leutner, 2003) during the navigation task. They wore cordless headphones and heard auditory beeps randomized to occur every 1–6 s. Upon hearing a beep, participants clicked a cordless mouse once as quickly as possible. A research assistant carried a laptop running the beep program along the paths, walking behind the lead experimenter and participant. The participant carried the mouse in his or her dominant hand. We used this measure as an index of cognitive load, as in several of our prior studies (Barhorst-Cates et al., 2016, 2019; Rand et al., 2015). A slower RT indicates greater cognitive load (Verwey & Veltman, 1996).
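The logic of the secondary task can be sketched as follows (the actual beep program is not published; the timing structure matches the description above, and the response data here are simulated for illustration):

```r
set.seed(1)

# Randomized inter-beep intervals, uniformly distributed between 1 and 6 s
n_beeps    <- 40
beep_times <- cumsum(runif(n_beeps, min = 1, max = 6))

# Simulated click times following each beep (real data would come from
# the logged mouse clicks); reaction time is simply the difference
click_times <- beep_times + rnorm(n_beeps, mean = 0.55, sd = 0.05)
rt <- click_times - beep_times

mean(rt)  # slower mean RT on a path indicates greater cognitive load
```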

Procedure

The procedure closely followed that of Experiments 1 and 2. In Experiment 3, participants completed all four paths in the active search condition (see below), half with the narrow FOV goggles and half with the wide FOV goggles. In Experiment 4, participants completed two paths in the passive and two paths in the active search condition, all while wearing the narrow FOV goggles. In each experiment, participants completed practice trials to gain exposure to the following: (1) looking around while wearing the goggles (reading aloud the room numbers), (2) walking while wearing the goggles and responding to the beeps, and (3) walking, learning the locations of objects, and responding to the beeps. They practiced in both conditions of the experiment. They also practiced the beep task by itself in the lab prior to entering the hallway.

  • Passive search condition. The passive search condition mimicked the learning paradigm used in the walking condition of Experiment 1, except that participants walked on their own without holding onto the arm of the experimenter, as in our prior studies (Barhorst-Cates et al., 2016; Rand et al., 2015). Note that this condition is active in terms of locomotion method, but passive in terms of target search. We equated the level of locomotion activity in the passive and active search conditions by having participants walk on their own in both cases, manipulating only the active search component. We included active locomotion to allow for the influence of mobility-monitoring demands and assessed how those demands might interact with target search. Each path contained four turns and three targets (see Fig. 2).

  • Active search condition. The active search condition was similar to the passive search condition, with some modifications. Instead of stopping the participant along the path at the location of each target, the experimenter instructed participants at the beginning of the path to look for specific targets. The experimenter showed a picture of each target in a randomized order that did not match the order along the route, named it out loud, and had the participant repeat the name. Along the route, participants pointed at and stated the name of each object as soon as they saw it. The experimenter then confirmed or denied that it was the correct object. The experimenter still led participants along the path with verbal instructions about when to turn right and left, but it was up to the participant to search for and locate the targets. Each path contained four turns and three targets. The same four paths and targets were used for all participants, but the assignment of paths to the active and passive conditions alternated across participants. See Fig. 3 for an overview of this task.

Fig. 3. Active search task. Participants were told at the beginning of each path (indicated by the home plate) what objects to locate and were shown photos of each. Then they were led on the predetermined route and had to locate the targets. Upon reaching the end of the path (indicated by the stop sign), participants remained in their final heading direction and pointed to the three landmarks

Throughout the navigation task, participants completed the auditory reaction-time task concurrently. At the end of each path, participants pointed to the targets using the degree-quadrant pointing task. They then completed 1 min of the auditory reaction-time task while standing still, to establish baseline reaction-time performance. After completing the four paths, participants returned to the lab room. They filled out the SBSOD scale, were debriefed, thanked, and dismissed.

Results

Experiment 3

Participants completed an active search spatial-learning task wearing two sets of goggles: narrow and wide FOV. They completed a concurrent auditory reaction-time task to measure cognitive load. One case was identified where performance was below chance, with average pointing error greater than 90°. One case was also identified as an outlier, with average pointing error being more than 3 standard deviations above the mean. In both cases, the high errors were in the Wide vision condition. These two participants’ data were removed from the remaining analyses, leaving 35 participants.
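The exclusion rules used in these experiments can be expressed compactly; a minimal sketch, assuming a vector of per-participant mean pointing errors for one condition (the function name is ours):

```r
# Flag participants for exclusion: below-chance performance (mean error
# above 90 degrees) or outlier performance (mean error more than 3 SDs
# above the condition mean)
flag_exclusions <- function(err) {
  below_chance <- err > 90
  outlier      <- err > mean(err) + 3 * sd(err)
  below_chance | outlier
}
```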

  • Pointing error. A 2 (Vision Condition: Narrow vs. Wide) × 2 (Condition Order: Narrow First vs. Wide First) repeated-measures analysis of variance (ANOVA) was performed. There was no significant main effect of Vision Condition, although the effect was trending, F(1, 33) = 3.7, ηp2 = .10, p=.063, BF=.95. The Narrow FOV condition had a higher mean error (M=30.39, SE=2.4) than the Wide FOV condition (M=25.57, SE=1.65). The Bayes factor indicates that the data are .95 times as probable under H1 as under H0, essentially equivocal, weak or anecdotal evidence (Jarosz & Wiley, 2014) regarding an effect of Condition. There was no main effect of Condition Order (p=.92) and no Vision Condition × Condition Order interaction (p=.60).

SBSOD did not predict pointing error performance in Wide or Narrow FOV (ps>.08).

  • Reaction time. Due to issues with recording and experimenter error, 28 participants completed the auditory reaction-time task. We ran the same repeated-measures ANOVA with RT as the outcome variable. There was a significant main effect of Vision Condition F(1, 26) = 20.9, ηp2 = .45, p<.01, BF = 292.51 with RT in the Narrow condition (M=.58, SE=.02) being significantly slower than RT in the Wide condition (M=.54, SE=.01) (see Fig. 4). The Bayes factor suggests that the data are 292.51 times more probable if H1 were true compared to if H0 were true, which provides very strong or decisive evidence in favor of H1. There was no main effect of Condition Order (p=.78) and no Vision Condition × Order interaction (p=.43). This suggests that the narrow FOV condition was more cognitively demanding than the wide, consistent with our previous results (Barhorst-Cates et al., 2016).

Fig. 4. Reaction time means for Experiment 3

Experiment 4

We directly compared the new active search task to our traditional passive learning task within subjects. In the passive learning task, participants navigated pre-determined paths and were told the locations of landmarks as they passed them. In the active search task, participants navigated pre-determined paths but were told at the beginning of the path to find certain landmarks. In both cases, memory for the landmarks’ locations was tested at the end of each path in the final heading direction using the degree-quadrant pointing measure. Participants wore the narrow FOV goggles in both conditions and completed the auditory reaction-time task. No outliers were identified and all pointing errors were below our 90° cut-off for above-chance performance.

  • Pointing error. A 2 (Learning Condition: Active vs. Passive) × 2 (Condition Order: Passive First vs. Active First) repeated-measures ANOVA was performed. There was no main effect of Learning Condition (p=.90, BF=.20), no main effect of Order (p=.43), and no Condition × Order interaction (p=.71). In the Passive learning condition, participants’ average pointing error was 28.98° (SE=2.92); in the Active learning condition, average error was 28.61° (SE=2.74). The Bayes factor analysis suggests that the empirical data are 4.95 times more probable if H0 were true than if H1 were true, providing positive or substantial evidence in favor of H0 (Jarosz & Wiley, 2014). This is not considered strong evidence (Jeffreys, 1961).

SBSOD was related to error in the Active condition (B=-.37, p=.01), but not in the Passive condition (p=.5).

  • Reaction time. RT data were obtained from 25 of the 28 participants. In three cases, experimenter error resulted in missing RT data. We performed the same repeated-measures ANOVA with RT as the outcome variable. There was a significant main effect of Learning Condition F(1, 23) = 15.64, ηp2 = .40, p=.001. Participants’ average RT in the Passive learning condition (M=.53, SE = .02) was significantly quicker (BF=54.25) than the average RT in the Active learning condition (M=.56, SE=.02), suggesting greater cognitive demand in the Active condition (see Fig. 5). The Bayes factor suggests that the empirical data are 54.25 times more probable if H1 were true than if H0 were true, which is considered strong or very strong evidence in favor of H1 (Jarosz & Wiley, 2014). There was no main effect of Condition Order (p=.61) and no Condition × Order interaction (p=.4). This suggests that active search is more cognitively demanding than passive learning.

Fig. 5. Reaction time means for Experiment 4

Discussion

In Experiment 3, participants completed a real-world active search task while wearing either 10° or 60° FOV goggles (within subjects). Participants were verbally guided along the path but had to find the pre-specified targets on their own. While our previous FOV study with passive learning had found no decrement in performance at 10° FOV restriction, we predicted worse performance at 10° versus 60° in the current study because of the additional cognitive resources needed to actively search for and locate the target objects. Contrary to our hypothesis, we observed only weak evidence in favor of an effect of vision condition on pointing error, with errors in the narrow FOV condition being marginally higher than errors in the wide FOV condition. We did find strong evidence in favor of an effect of vision condition on cognitive load, with significantly higher RTs in the narrow compared to the wide condition. This suggests that actively searching for targets with 10° FOV is significantly more cognitively demanding than actively searching for targets with 60° FOV. In Experiment 4, we compared active and passive search with 10° FOV within subjects. We did not find strong evidence of a condition difference for spatial learning, but replicated the finding of strong evidence for active search being more cognitively demanding than passive search.

General discussion

In four experiments we assessed active and passive contributions to spatial learning during navigation with peripheral vision loss. We assessed active and passive locomotion (Experiments 1 and 2) and active and passive target search (Experiments 3 and 4) using a real-world spatial learning paradigm in a large building with long hallways. Our sample included normally sighted young adult participants wearing goggles that restricted peripheral vision to approximately 10°. Our results suggest that locomotion method did affect mobility-related anxiety while navigating with restricted FOV, but did not impact spatial learning. Our results also suggest that active search for targets may uniquely affect navigation in severely restricted FOV by increasing attentional demands, but we did not see a strong influence of the type of search on spatial memory for target locations. These results contribute to the knowledge base about navigation and its cognitive, motor, and visual components, and argue that researchers should consider the visual capabilities of participants when assessing active and passive navigation processes.

The lack of influence of active versus passive locomotion on spatial learning in Experiments 1 and 2 was surprising. Based on prior research arguing for the benefit of idiothetic information for forming survey knowledge (Chrastil & Warren, 2013) and for improving distance perception with FOV restrictions (Fortenbaugh et al., 2008), we expected to see better performance in the walking condition compared to the wheelchair condition. However, the current results are consistent with the theory of mobility monitoring (Rand et al., 2015), which posits that mobility-related attentional demands during navigation with low vision can detract from spatial learning accuracy. In our manipulation, the wheelchair condition seemed to effectively work as a mobility monitoring reducer compared to walking with a guide, as evidenced by reduced self-reported anxiety. As such, the performance benefit that could have been seen in the walking condition may have been canceled out by the benefit provided by reduced mobility monitoring in the wheelchair condition. Errors were very low, especially in Experiment 1 (but showed the expected increase with more targets in Experiment 2), suggesting that performance was surprisingly good in both conditions. A more nuanced assessment of cognitive demands during navigation with different locomotion methods is needed to further understand this effect. Importantly, both of our conditions retained vestibular information associated with self-movement, which suggests that vestibular information may be sufficient for navigating in this context and with this population. Podokinetic information does not appear to provide the expected benefit (Chrastil & Warren, 2012, 2013) in the case of FOV restriction, which highlights the fact that locomotion effects may differ depending on visual status. It is also possible that a different measure of spatial memory, such as navigating along a novel route to the learned targets, would have been affected by the active/passive manipulation (Wallet et al., 2013). Future research should compare active and passive locomotion in both normal and reduced vision conditions to clarify these effects.

We also did not find strong support for the predicted decrement in spatial memory during active search for targets in Experiments 3 and 4. However, we did find the predicted and consistent effect of increased cognitive load with active search. Based on prior work examining the role of restricted peripheral field in spatial memory in room-sized spaces (Fortenbaugh et al., 2007, 2008; Yamamoto & Philbeck, 2013), we reasoned that the additional effort required to both search for targets and integrate multiple restricted viewpoints would reduce the cognitive resources available to accurately encode target locations. While the secondary auditory task revealed the increased cognitive effort, we did not see the expected consequences for spatial memory. In Experiment 3, a direct comparison between severe and moderate FOV restriction with active search suggested a tendency toward worse performance under severe restriction, as we had predicted based on previous work (Barhorst-Cates et al., 2016, 2019) and consistent with other research showing that increased cognitive demands can impair spatial memory performance (Knight & Tlauka, 2017). However, in Experiment 4, when we directly compared active and passive search in the same participants under severe restriction, we found no difference in spatial memory.

These results add to an already mixed literature on the effects of active versus passive navigation on spatial memory. One possible explanation for the conflicting results (evidence for increased cognitive demand but no difference in spatial memory) is the added benefit of active, goal-directed learning seen in some experimental contexts. As introduced earlier, there is some support for facilitation of spatial memory in environments where navigators are asked to make their own decisions about direction of travel (Bakdash et al., 2008; Brooks et al., 1999; Plancher et al., 2013; Wallet et al., 2013), although the mechanisms underlying these advantages, when found, are not clearly understood. Markant et al. (2014, 2016) suggest that active control during exploration allows navigators to coordinate attentional resources to new information at their own pace, reducing the negative effects of divided attention on memory. In the current task, although we did not allow for control in directional decision making, the active search component may have served this role. Additional attentional resources were likely used to accomplish successful search (as shown by slower RTs on the secondary task), but the coordination of selective attention to the targets with memory encoding (Markant et al., 2014) could have enhanced learning, counteracting the negative effects of increasing the difficulty of the task. Consistent with this idea, benefits of active search for object location memory have been shown in traditional 2D search in scenes (Võ & Wolfe, 2012) as well as in interactive search paradigms in real-world environments (Draschkow & Võ, 2016). Thus, our active and passive search conditions could have resulted in the same level of memory performance, but for different reasons. We speculate that the simpler passive search task reduced attentional demand but did not enhance goal-directed learning, while the active search required more attentional resources but also engaged task-relevant encoding. Active learning does not typically refer to active visual search for targets in the existing navigation literature. However, our studies suggest that visual search, considered as a component of active learning, could contribute to an understanding of spatial memory in both normal and restricted viewing conditions. A growing number of studies have brought traditional 2D visual search paradigms into 3D real or virtual spatial environments (Draschkow & Võ, 2016; Li, Aivar, Kit, Tong, & Hayhoe, 2016; Li, Aivar, Tong, & Hayhoe, 2018; Võ, Boettcher, & Draschkow, 2019), but at present little work has directly examined how active search for targets during navigation affects memory for their locations.

Together, our results inform our understanding of how daily activities, such as navigation, can be performed with visual impairment. While adding to our knowledge about passive and active contributions to spatial learning while navigating with restricted field of view, the current studies face some limitations and suggest the need for future research. First, as in our prior use of this paradigm, we simulated peripheral field loss in normally sighted young adults. This method allows for precise and less variable visual field restrictions within and across participants, but does not take into account eye movements or other strategies that clinical populations with low vision might use while navigating. It is possible that individuals with actual vision loss would perform differently on these tasks (e.g., they may be more impaired when active control of self-movement is taken away, or by the demands of search). Future efforts should be made to directly compare simulated and actual low-vision populations (Bochsler, Legge, Gage, & Kallie, 2013; Legge, Gage, Baek, & Bochsler, 2016). However, our results do apply more directly to situations of navigating with simulated field loss, as when wearing goggles or in other field-occluding situations. Second, the evidence for differences in perceived anxiety between passive and active locomotion, as well as the increased RTs associated with reduced FOV and active search in our studies, supports the notion that mobility monitoring is needed, and draws on cognitive resources, when navigating with restricted vision. However, our current approach does not distinguish between different components that may contribute to increased cognitive load, such as monitoring one’s own walking versus integrating the multiple viewpoints required by the FOV-restricting goggles. Future work should aim to tease apart the cognitive resource demands associated with the multiple aspects of low-vision navigation, as well as the potential benefits and consequences of the divided attention inherent in the multiple tasks required for successful navigation.

Open Practices Statement

The datasets for all experiments are available via the Open Science Framework Repository at the following link: https://osf.io/pvhg5/?view_only=59e0367a2e8c465988084cd0ad4f2eb7. None of the experiments were pre-registered.