1 Introduction

Many of my beliefs, for example my empirical beliefs, appear to be basic. I believe there is a laptop in front of me, that the room in which I am sitting is warm, that noise is coming from the street… not because I infer them from other justified beliefs, but because I entertain corresponding perceptual experiences. Such beliefs do not just appear basic to me—they need to be basic. Otherwise, to be justified in believing anything (at all), I would have to be in a position to engage in an infinite regress of providing reasons for my reasons and so on.

Two immediate worries follow from this. First, (A): if such beliefs are basic, does my lack of reasons in their support mean that, from my point of view, they are unjustified and thus arbitrary? The answer to this question is hopefully negative: Even though I have no reasons to offer in their support, I can still be, in some way, justified in holding basic beliefs.

Now, intuitively, being able to establish that we are in a position to avoid arbitrariness with respect to at least some of our basic beliefs would, no doubt, be a good thing. But how useful would it be in practice? That’s the second worry. Specifically, (B) are beliefs that are, from my point of view, non-arbitrarily held more likely to be true? If the answer to this question is ‘no’ or ‘I don’t know’, then showing that some basic beliefs can be held non-arbitrarily would, at least in an important sense, be practically useless. What’s the point of non-arbitrarily holding certain basic beliefs, if such beliefs are not likely to accord with reality?

In attempting to answer (A), i.e., explain how basic beliefs can be non-inferentially justified, externalists have offered defeaters-based accounts of justification. Nevertheless, existing attempts face two serious concerns:

(i) They fail to accommodate relevant counterexamples, such as Norman the clairvoyant (Bonjour, 1980).

(ii) They fail to explain how one who is, on their accounts, justified is also epistemically responsible in holding their basic beliefs. As I will argue, this amounts essentially to a failure to explain how non-inferentially justified basic beliefs can avoid being arbitrary from the agent’s point of view.

With respect to (B), there is indeed a strong intuition that responsibly (i.e., non-arbitrarily) held beliefs are, normally, likely to be true. Bonjour notes, for example, that “if epistemic justification were not conducive to truth […], if finding epistemically justified beliefs did not substantially increase the likelihood of finding true ones, then epistemic justification would be irrelevant to our main cognitive goal and of dubious worth.”Footnote 1 Yet, so far, there has been no explanation as to what might make this intuition true: Why are responsible beliefs likely to be true?Footnote 2 This is a particularly important question, because, as Bonjour further notes, “it is only if we have some reason for thinking that epistemic justification constitutes a path to truth that we as cognitive beings have any motive for preferring epistemically justified beliefs to epistemically unjustified ones” (Bonjour, 1985, p. 8).

My aim is to provide a negative answer to (A)—in a way that deals with both (i) and (ii)—and a positive response to (B), by offering, for the first time, a mechanistic explanation as to why responsibly held beliefs are likely to be true. To do so, I offer a new, externalist, defeaters-based account of justification—viz., System Reliabilism.

System Reliabilism is built around the following, so far unnoticed, tripartite realisation: (i) to avoid being arbitrary from the agent’s perspective, the agent’s basic beliefs need to be not just blameless but epistemically responsible; (ii) mere lack of defeaters can only guarantee epistemic blamelessness; (iii) to also count as epistemically responsible, basic beliefs need to lack defeaters while the agent is actually capable of having them. In other words, one of the main insights that System Reliabilism offers, and is specifically designed to accommodate, is that basic beliefs are epistemically responsible if and only if they are undefeated while being defeasible.

Another important aspect of System Reliabilism is the claim that to be defeasible, basic beliefs must be the outputs of a cognitively integrated system. The notion of cognitive integration, however, is notoriously underdeveloped within mainstream epistemology. Drawing on empirical considerations from cognitive science, I develop a detailed structuralist understanding of the notion that goes significantly beyond available treatments (Greco, 1999, 2010; Breyer & Greco, 2008). The advantage of building this improved account of cognitive integration is twofold. Not only can it help account for how beliefs produced by integrated systems are responsible by being defeasible but undefeated; it further offers the first mechanistic explanation of why responsible beliefs are likely to be true.

In outline, in Sect. 2, I discuss the way externalists have employed the notion of defeaters to account for non-inferential justification. In Sects. 3 and 4, I argue that, generally, this defeaters-based approach to justification faces the two (aforementioned) concerns: (i) It fails to accommodate relevant counterexamples such as Norman the clairvoyant and (ii) it fails to explain how lack of defeaters (i.e., the absence of something) may confer epistemic responsibility to the agent’s beliefs (and thereby prevent them from being arbitrary from her point of view). In Sect. 5, I consider two recent and improved, externalist, defeaters-based accounts of justification that attempt, but eventually fail, to tackle (i) and (ii). Thus, in Sects. 6 and 7, I introduce System Reliabilism and show how it successfully deals with both (i) and (ii). In Sect. 8, I draw on empirical considerations from cognitive science to offer a sophisticated and naturalistically motivated account of the notion of cognitive integration. Finally, in Sect. 9, I employ this improved understanding of cognitive integration to offer a mechanistic explanation of the important intuition that responsibly formed beliefs are also likely to be true.

2 Infinite regress, arbitrariness and externalism

To avoid the problem of regress, epistemic foundationalism holds that all justification ultimately rests on basic beliefs that are non-inferentially justified. That is, the justification for basic beliefs does not depend on being derived from other justified beliefs: Basic beliefs are—in the complete absence of any reasons in their support—in some way justified.

The problem with this (and one of the main objections against epistemic foundationalism) is that, in the absence of any available supporting reasons, basic beliefs appear to be arbitrary from the agent’s perspective. As Klein (1999, p. 297) puts it, “foundationalism is unacceptable because it advocates accepting an arbitrary reason at the base, that is, a reason for which there are no further reasons making it even slightly better to accept than any of its contraries.”

This sounds worrying indeed. However, epistemic externalists who deny that justification is always a matter of having reflectively accessible reasons in support of our beliefs are not particularly concerned. They simply insist that, on their view, the absence of available reasons is insufficient for rendering beliefs arbitrary from the agent’s point of view (Bergmann, 2004).

The externalist’s denial that the availability of reasons is necessary for justification may sound like asserting that justification has nothing to do with the agent’s subjective perspective on the situation. It would certainly be consistent with externalism to insist that the mere fact that a basic belief b is the product of a reliable or properly functioning process suffices for b to be justified. Crude forms of reliabilism may hold, for example, that a belief is justified just in case it is the product of a process that is de facto reliable—independently of whether the agent is in any way aware of this.

Of course, such externalist approaches to the way basic beliefs come to be justified are rather implausible: Obviously, such beliefs would still be arbitrary from the agent’s point of view. No surprise then that most externalists, including Bergmann, are not satisfied with them. Indeed, while externalists deny that the reflective availability of supportive reasons is necessary for justification, they do not necessarily divorce justification from the agent’s perspective entirely. Externalists have come to recognise that justification is also a matter of epistemic responsibility in the following sense: Even if the way S comes to believe that p is in fact reliable, S is not justified unless she is also epistemically responsible by being, in some way, sensitive to the fact that her belief is properly formed.Footnote 3

Now, one way to interpret this is to say that the externalist acknowledges the force of the internalist’s intuition: The agent’s perspective is, in some way, integral to the agent’s justificatory status.Footnote 4 Even so, this does not mean that externalists concede to internalism. Claiming that the agent’s perspective plays an important role in her doxastic justification does not automatically lead to the claim that justification is a matter of having available reasons in support of one’s beliefs. Instead, externalists can assign to the agent’s point of view an inhibitory role: They can claim that epistemic responsibility, on the part of the agent, is a matter of lacking any negative reasons or evidence against her beliefs. On this weaker condition on epistemic responsibility, the agent’s perspective is involved in a ‘stand-by’ manner, expected to come to the fore only when something is amiss.

To account for this sense of epistemic responsibility that requires no awareness of supporting reasons, externalists usually invoke the notion of ‘mental state defeaters.’Footnote 5 As Bergmann (2006, pp. 175–176) notes, “a no-believed-defeater condition isn’t an awareness requirement. A no believed-defeater condition doesn’t require awareness of anything. At best, it requires the absence of awareness of something (i.e., defeaters).” Thus, by employing the terminology of defeaters, externalists aspire to account for epistemic responsibility by involving the agent’s perspective in a negative fashion:

  • Externalist Epistemic Responsibility (EER):

    S is epistemically responsible in holding p iff S lacks any mental state defeaters against p or the way it was formed.

According to such externalist approaches, then, justification consists in reliability plus epistemic responsibility, where the latter is accounted for in terms of the absence of mental state defeaters. Importantly, on such views, which attempt to capture the significance of the agent’s perspective on her beliefs, awareness of supporting reasons figures nowhere. We can distill this general approach to justification in the form of the following principleFootnote 6:

  • No Awareness Justification (NAJ):

    S is justified in believing p iff (i) p is the product of a reliable and/or properly functioning process and (ii) S has no mental state defeaters against her belief that p (or the way it was formed).

So, to return to the issue concerning the arbitrariness of basic, non-inferential beliefs: Can the way externalists attempt to account for justification—and especially the way they account for the responsibility component—help us explain how certain basic (i.e., non-inferentially derived) beliefs can avoid being arbitrary from the agent’s point of view? To answer, we can mirror NAJ to get the following principle of non-inferential justification:

  • Non-Inferential Justification (NIJ):

    A basic belief b is non-inferentially justified for S iff b (i) is the product of a reliable and/or properly functioning process and (ii) is not defeated by any of S’s other mental states.

Condition (i) accommodates the externalist’s requirement on reliability and/or proper function, and condition (ii) is meant to capture the intuition that justification involves epistemic responsibility. Accordingly, basic beliefs are not arbitrary (from the agent’s point of view)Footnote 7 if they are the product of a reliable and/or properly functioning process, while also being undefeated by the agent’s other mental states.

Is this sufficient for rendering basic beliefs non-inferentially justified? The answer, I am about to argue, is ‘not quite.’ While NIJ is a step forward, it does not go far enough. Specifically, the view faces two problems. The first, which we are about to explore by reference to a well-known counterexample, is that satisfaction of NIJ does not guarantee that the agent’s belief is epistemically responsible. The second problem is that it fails to explain how lack of defeaters (i.e., the absence of something) may confer epistemic responsibility to the agent’s beliefs.

3 Problem I: Norman the clairvoyant

Consider Norman the clairvoyant, Bonjour’s famous thought experiment:

Norman

Norman, under certain conditions that usually obtain, is a completely reliable clairvoyant with respect to certain kinds of subject matter. He possesses no evidence or reasons of any kind for or against the general possibility of such a cognitive power, or for or against the thesis that he possesses it. One day Norman comes to believe that the President is in New York City, though he has no evidence either for or against this belief. In fact the belief is true and results from his clairvoyant power, under circumstances in which it is completely reliable. (Bonjour 1980, p. 62)

What should we say about the case? Norman’s basic belief is the product of a process that is reliable. Ghijsen (2016) also suggests that if we further assume it has been passed on to him as the product of natural selection, then it is functioning properly.Footnote 8 Therefore, condition (i) of NIJ is satisfied. Norman also lacks mental state defeaters against his belief, or the way it is formed. Thus condition (ii) of NIJ is also satisfied. According to NIJ, then, Norman’s belief is non-inferentially justified and, thereby, not arbitrary. Intuitively, however, this is the wrong verdict.

Of course, an easy way to explain away this awkward situation is to follow Moon (2018), who insists that, despite the description of the thought experiment, it is not possible to imagine Norman as lacking mental state defeaters. Therefore, some account along the lines of NAJ should explain why Norman is not justified in his belief.Footnote 9 Essentially, however, this amounts to refusing to engage with the actual thought experiment—a thought experiment that internalists have levelled against externalism for decades and which most externalists have accepted as a fair challenge. My aim here is to accommodate all of the internalists’ dialectical demands. So, in what follows, I will keep with tradition by assuming that Norman does, indeed, lack defeaters for his clairvoyant belief. However hard this is to imagine, it is worth trying, because taking the experiment at face value can reveal important insights regarding the nature of epistemic responsibility.

Just above, we noted that Norman’s belief satisfies both conditions of NIJ and yet it should not count as non-inferentially justified: Even though Norman has no defeaters, his belief is still arbitrary from his point of view. But what makes us think this way? The reason, I submit, is that, given the way the case is set up, not only does Norman have no defeaters against his clairvoyant belief, but—crucially—it appears he is in no position to have any at all. Specifically, his clairvoyance seems to be entirely disconnected from the rest of his mind. There is no obvious way in which Norman’s vision, hearing, touch and so on have an effect on his clairvoyance. Contrast this with the interplay between his vision, say, and his hearing. If Norman walks down a dark alley and seems to hear a sound, he might turn around to check whether someone is following him. If he sees no one, then he will assume that there was no sound and that it was just his impression after all. His clairvoyance ability and beliefs, by contrast, are not like this. Instead, they are entirely disjointed from his perceptual faculties. The same goes with regard to the relation between his clairvoyance and his memory. We can tell this much from the description of the case: Norman is supposed to have “no evidence or reasons of any kind for or against the general possibility of such a cognitive power, or for or against the thesis that he possesses it.” But the complete absence of reasons for the general possibility of clairvoyance and for the possibility that one possesses it means that one’s memory will represent clairvoyance as a strange and unfamiliar process. Therefore, if Norman’s mind were anything like ours, this memory-generated representation of unfamiliarity would automatically serve as an undercutting defeater to his clairvoyant belief. The fact that Norman has no such defeater indicates that his clairvoyant power does not interact with his memory either.Footnote 10

But why should Norman’s inability to possess any defeaters have an effect on his justificatory status? Recall that externalists require the absence of mental state defeaters as a way of ensuring that the agent’s perspective is involved—at least as a latent inhibitor—in rendering her beliefs epistemically responsible. But when the agent is unable to have any defeaters, her perspective is blind (or epistemically inert, so to speak) such that the absence of mental state defeaters cannot confer epistemic responsibility to the agent’s beliefs. Thus, while NIJ’s condition (ii) on epistemic responsibility is satisfied, Norman’s basic belief is not epistemically responsible.

To be sure, his belief is blameless. Having no defeaters against it, he is not irrational in holding it.Footnote 11 But a blameless basic belief is not the same as an epistemically responsible belief.Footnote 12 Blameless basic beliefs are mere beliefs; from the agent’s point of view, they can be neither good nor bad. Put another way, nothing is going for them and nothing could possibly go against them either.Footnote 13 Thus, even if a blameless basic belief is also reliably formed, as in Norman’s case, it is arbitrary all the same. To avoid arbitrariness, basic beliefs need to be more than just blameless and reliably formed; they must be epistemically responsible.

4 Problem II: Ex nihilo epistemic responsibility

As the above suggests, Norman’s case reveals a structural inadequacy built into externalist, defeaters-based approaches to epistemic responsibility and justification. It demonstrates that mere satisfaction of EER won’t guarantee that a belief is epistemically responsible. If the agent has no mental states that are inconsistent with her belief that p, simply because she is in no position to have any such mental state defeaters, EER is only trivially satisfied. In such cases, though blameless, the belief is not epistemically responsible. Epistemic responsibility requires, instead, that EER be substantively satisfied: The agent needs to lack mental state defeaters against her belief that p while being capable of having such mental state defeaters.

Thus, mere satisfaction of EER is insufficient for ensuring that the agent is going to be epistemically responsible. But why assume that mere lack of defeaters would be sufficient for ensuring that a belief is epistemically responsible in the first place? Are there any reasons for this assumption? After all, as the above indicates, mere lack of defeaters may prevent a belief from being irresponsible (which only entails trivial satisfaction of EER), but it is by no means obvious how or whether it could further raise the belief from the status of epistemic blamelessness to the status of epistemic responsibility (which entails substantive satisfaction of EER). Put simply, the question is: How can the mere absence of something like defeaters add epistemic responsibility to an agent’s belief?

Granted: prima facie, it does sound correct to claim that lack of defeaters is somehow involved in being epistemically responsible. But it is not at all obvious how this can be so in a positive rather than negative way. That is to say, it is easy to see how the presence of defeaters can remove epistemic responsibility and justification from a belief; but how can a basic belief (which is non-inferentially derived, and which thereby has, by way of reasons, nothing positive in its support, to begin with) acquire epistemic responsibility via the absence of negative reasons (or experiences) against it?

Remarkably (though perhaps unsurprisingly), no explanation is available in the literature. Rather, the only motivation for the view relies, to my knowledge, on intuitions alone. Bergmann (2006, pp. 176–177), for example, notes his agreement with Bonjour (Bonjour and Sosa 2003, p. 32) that the only reason in support of a defeaters-based condition on justification is its “intuitive obviousness”: “it just seems intuitively (perhaps after considering many examples) that a belief isn’t justified if one has either a reason for thinking it false or a reason for doubting the reliability of its source—and that is so whether or not the belief is in fact reliably formed.”Footnote 14 And notice, as Bergmann’s quote suggests, that the intuitive support may only go so far as to motivate the simple idea that defeaters can remove justification, while falling short of supporting the fancier idea that their absence somehow adds epistemic responsibility. And yet, this is precisely what externalist defeaters-based accounts of justification require—otherwise they have to accept that non-inferentially derived beliefs are either arbitrary (even if blameless), or that their epistemic responsibility arises ex nihilo. To put the worry another way, externalists who invoke the notion of defeat are yet to explain how—if at all—lack of defeaters confers epistemic responsibility to the agent’s beliefs.

A central objective of any externalist defeaters-based account of justification, then, is to explain how lack of defeaters can instil epistemic responsibility.Footnote 15 Failure to do so could mean only one of three things: (i) The account is importantly incomplete, because, to deny that basic beliefs are arbitrary, it heavily relies on—but fails to explain—the puzzling idea that lack of defeaters ensures that beliefs are not just blameless, but also epistemically responsible; (ii) the account does not rely on the idea that lack of defeaters confers epistemic responsibility, and therefore it accepts that basic, non-inferentially derived beliefs are arbitrary; (iii) the account neither relies on the idea that lack of defeaters confers epistemic responsibility, nor accepts that basic, non-inferentially derived beliefs are arbitrary; rather, the account assumes that epistemic responsibility can arise ex nihilo.

(ii) and (iii) are obviously unacceptable, and while externalists may have so far lived with (i), having a solution to it would constitute a significant advancement in the understanding of our epistemic nature. In Sects. 6 to 9, I introduce a defeaters-based account of justification that explains how and when lack of defeaters confers epistemic responsibility to the agent’s beliefs (i.e., how and when EER is substantively satisfied). Before I do so, however, it will be helpful to consider how a couple of alternative proposals attempt to deal with Norman’s case. If the alternative, externalist defeaters-based accounts of justification that I have in mind can accommodate Norman’s case—by successfully explaining why Norman’s lack of defeaters does not suffice for him to be epistemically responsible—then, perhaps, they can also explain how lack of defeaters can, in other cases and under different conditions, confer epistemic responsibility to our basic beliefs. Unfortunately, as I will argue, these accounts are not successful on either front, but their shortcomings can be most instructive.

5 Alternatives

5.1 Proper functionalist defeat

Ghijsen (2016) focuses on Norman’s case and other variants of clairvoyance to convincingly criticize prominent externalist approaches to justification, including, among others, Process Reliabilism (Goldman, 1979), Inferentialist Reliabilism (Lyons, 2009), Proper Functionalism (Graham, 2012, 2014) and Evidentialist Reliabilism (Comesaña, 2010). Ghijsen then moves on to introduce what he considers to be a necessary clause on justification, which he calls ‘Proper Functionalist Defeat’:

PFD: S’s belief in p at t is justified only if S does not have a defeater system D such that, had D been working properly, it would have resulted in S’s not believing p at t.

The notion of a properly functioning defeater system plays a central role in Ghijsen’s approach. To explain when a defeater system is working properly, Ghijsen (2016, p. 96) draws on the proper functionalist idea of ‘consequence etiology’: ‘an etiology that “explain[s] why something exists or continues to exist in terms of its consequences, because of a feedback mechanism that takes consequences as input and causes or sustains the item as output” (Graham, 2014, p. 18).’ Accordingly, in Ghijsen’s account, “defeater systems are those systems that have the proper function of reliably preventing the formation or maintenance of false beliefs (without thereby preventing the formation or maintenance of true beliefs)” (ibid., 105). In most cases, such defeater systems were created and continue to exist because of evolution by natural selection—a paradigmatic consequence etiology. Put simply, according to Ghijsen, organisms with defeater systems had fewer false beliefs, which contributed to the proliferation of organisms with such defeater systems. Additionally, Ghijsen notes, the consequence etiologies of defeater systems are inextricably tangled with those of our cognitive faculties:

Given that the outputs of our cognitive faculties are influenced by what the defeater systems have assessed as trustworthy, our cognitive faculties are no longer selected purely on the basis of their own merit, but in combination with the merit of the defeater system. Thus the consequence etiologies of our cognitive faculties by now seem to include our defeater systems in an important way. (Ghijsen, 2016, p. 105—emphasis in the original)

Now, with the above in mind, we may ask: How does Ghijsen employ his view to account for clairvoyance? His answer is to invite us to consider what the defeater systems of clairvoyant agents, such as Norman, would have done had they been functioning properly:

It certainly appears plausible that their monitoring mechanisms would have rejected their respective beliefs had they been functioning properly: the information presented by their special senses is not corroborated by any of their other senses, nor does the information stem from a recognizable trustworthy source. This should give their monitoring mechanisms sufficient cause to prevent the information rising to the status of belief (Ghijsen, 2016, pp. 107–108).

According to Ghijsen, then, when Norman has no defeaters, this is because his defeater system is not functioning properly, and had it been functioning properly it would have issued defeaters. Put simply, the way PFD explains why Norman is not justified is the following: Norman’s defeater system fails to deliver defeaters; therefore, it is not functioning properly. The problem is that not only is this way of accounting for Norman’s case ad hoc, it is also inconsistent with Ghijsen’s own account.

Recall that in order to explain why simple proper functionalism cannot account for Norman’s case, Ghijsen noted that we can easily imagine that his clairvoyance ability is functioning properly: we may simply assume that it is a process that has been passed on to Norman through evolution by natural selection. Moreover, in a previous quote, we saw Ghijsen noting that the consequence etiologies of our cognitive faculties include our defeater systems: If we have a properly functioning cognitive faculty, this is partly because our defeater systems have, over the course of evolution, assessed it as trustworthy. Putting the two together, if we accept that Norman’s clairvoyant faculty exists because of evolution by natural selection, then so does a defeater system that has—again, over the course of evolution by natural selection—assessed it as trustworthy. But, then, when Norman’s defeater system issues no defeaters against his clairvoyant belief about the president’s whereabouts, it is, in fact, functioning properly. Ghijsen’s account, then, rules that Norman is justified—i.e., the exact opposite of what Ghijsen was aiming for.

Now, failing to rule out that Norman is justified is a significant problem for Ghijsen’s account, but it is not the only one—especially not in the context of the present discussion. Recall that another important desideratum of externalist defeaters-based accounts of justification is to explain how lack of defeaters may render beliefs not just blameless but also epistemically responsible. This, we noted, is particularly important for explaining how basic beliefs may escape the charge of being arbitrary from the agent’s perspective. Unfortunately, PFD does not offer any such explanation. Not only that, but towards the end of his paper, and without any apparent reason, Ghijsen contends that “monitoring mechanisms are not necessary for perceptual justification” (2016, p. 108). But, then, assuming that PFD is part of a purely externalist account of justification, it would seem that Ghijsen would be content to accept either that basic beliefs, such as perceptual beliefs, are arbitrary or that their epistemic responsibility arises ex nihilo.

5.2 Agent reliabilism

Though the connection may not be immediately apparent, another externalist account of justification that is associated with the notion of defeat is Greco’s Agent Reliabilism (Greco, 1999).

To account for epistemic responsibility, Agent Reliabilism proposes to supplement process reliabilism with the ability intuition on knowledge.Footnote 16 This is the intuition that in order for one’s true beliefs to qualify as knowledge, they must be the product of a belief-forming process that counts as a cognitive ability. The motivating idea is that cognitive abilities seem to be the sort of reliable belief-forming processes that one can responsibly rely on for acquiring knowledge, even if one does not have any reasons to offer in their support. For example, no one needs to explain why their visual or auditory experiences are a reliable indication of reality when they come to acquire knowledge on their basis.

Of course, to account for epistemic responsibility in this way, Agent Reliabilism needs to explain when a process counts as a cognitive ability. Accordingly, in a number of places (Greco, 1999, 2003, 2010; Greco & Breyer, 2010), Greco offers the following suggestion: A process counts as a cognitive ability only if it has been cognitively integrated into the agent’s cognitive system. Thus, the notion of ‘cognitive integration’ constitutes Agent Reliabilism’s cornerstone, and this, in turn, raises the following all-important question: When is a process cognitively integrated such that it can count as a cognitive ability that is knowledge-conducive?

While Greco (2003, 2010; Breyer & Greco, 2008) has suggested several possible answers to this question, the following general remarks are his preferred and most common approach (Greco, 2003, p. 474; Greco, 2010, p. 152):

One aspect of cognitive integration concerns the range of outputs—if the products of a disposition are few and far between, and if they have little relation to other beliefs in the system, then the disposition is less well integrated on that account. Another aspect of cognitive integration is sensitivity to counter-evidence, or to defeating evidence. If the beliefs in question are insensitive to reasons that count against them, then this too speaks against cognitive integration. In general, it would seem, cognitive integration is a function of cooperation and interaction, or cooperative interaction, with other aspects of the cognitive system.

Greco, however, does not elaborate on how to interpret these points on cognitive integration, and this has unfortunately left his view open to several criticisms (e.g., Bernecker, 2008; Ghijsen, 2016). In fact, Greco himself admits he has “hardly presented a clear and detailed account of cognitive integration” (Greco, 2003).

In the context of the present discussion, the problem is that lack of precision on what cognitive integration consists in leads to the worry that Agent Reliabilism (I) cannot appropriately accommodate Norman’s counterexample and (II) cannot explain how and when lack of defeaters is sufficient for rendering a belief epistemically responsible. To see why, consider Breyer and Greco (2008, p. 177) who invite us to imagine two possible scenariosFootnote 17:

Case 1: Norman shares our counterevidence against clairvoyance and our dispositions to respect that counterevidence.

Case 2: Norman shares neither our counterevidence against clairvoyance nor our disposition to respect such counterevidence when it arises. Moreover, his dispositions to form beliefs on the basis of clairvoyance, including his dispositions to override counterevidence when such arises, are well integrated with other of Norman’s cognitive dispositions.

In the first case, Greco and Breyer note that Norman does not know. In the second case, however, they contend that he does.Footnote 18

The problem with trying to account for Norman in this way is that it either fails to engage with the internalist challenge, or it speaks past the internalist. If we accept that Norman fails to know because he has defeaters (i.e., case 1), then, like Moon (2018), we are not engaging with the original thought experiment. And if we assume that Norman neither has nor could have any defeaters (i.e., case 2), but we are willing, therefore, to accept that he can know, then we are ignoring the internalists’ widely shared intuition on the case.Footnote 19

What we need, instead, is an account of justification that explains why Norman fails to know, despite lacking any defeaters. More precisely, we need the account to explain why, despite lacking defeaters, Norman is not, as the internalist would point out, epistemically responsible in his belief. Greco and Breyer, by contrast, contend that, in the second case, Norman’s clairvoyant ability is “a lot like visual perception in us” (2008, p. 178), meaning that Norman’s way of forming beliefs is both reliable and epistemically responsible in the same way that our basic perceptual faculties are supposed to be. But, clearly, this claim is wrong. If we were disposed, like Norman in case 2, to ignore or override counterevidence against our perceptual beliefs when such arises—which thankfully we are not—then, even if reliable and true, such perceptual beliefs would fail to be epistemically responsible: Though blameless, such beliefs would be paradigmatically arbitrary, because, were they false, we’d have no way of telling. If Agent Reliabilism is happy to allow—incorrectly—for such ‘blindly’ undefeated beliefs to qualify as epistemically responsible, then, clearly, it also fails the second desideratum on externalist defeaters-based accounts of justification that we laid out aboveFootnote 20: Externalists need an account that can explain how and when lack of defeaters can confer, not just blamelessness, but also epistemic responsibility to our beliefs, so that basic beliefs, such as our perceptual beliefs, can escape the charge of being arbitrary from our point of view.

Time then to start constructing such an account.

6 Structuralist integration

We noted above that Greco does not provide a detailed account of cognitive integration, and this, in effect, results in Agent Reliabilism’s failure to (I) accommodate Norman’s case and (II) provide an explanation of the way lack of defeat confers epistemic responsibility to our beliefs. This, however, in no way indicates that the intuition driving Greco’s Agent Reliabilism—i.e., the ability intuition on knowledge—is not on the right track. Indeed, in what follows, my aim is to spell out this intuition further by devising a naturalistically motivated account of cognitive integration that captures both the conditions under which, and the sense in which (i.e., when and how), lack of defeaters can confer epistemic responsibility to our beliefs. To do so, my starting point will be what Greco sometimes refers to as a ‘structuralist’ understanding of the notion of cognitive integration, according to which “cognitive integration is a function of cooperation and interaction, or cooperative interaction, with other aspects of the cognitive system” (Greco, 2003, p. 474; Greco, 2010, p. 152). While Greco does not develop this idea further, expanding on it can prove particularly helpful in a number of ways.

Let’s start by stipulating, then, in line with Greco’s structuralist remarks, that a reliable belief-forming process is cognitively integrated—and thereby counts as a cognitive ability—if and only if it mutually interacts with other aspects of the cognitive system.Footnote 21 How can this approach to cognitive integration help with the main goal of explaining the way lack of defeaters may confer epistemic responsibility to our beliefs?

Since, by the above definition, cognitive abilities mutually interact with other aspects of the agent’s cognitive system, their operation and outputs need to be in step with those other cognitive aspects. Thus, when discrepancies occur, the system halts; when no discrepancies occur, the system carries on, uninterrupted, with its automatic operations. In practice, this integrated setup results in the workings and deliverances of cognitive abilities being continuously monitored, in the background, by other aspects of the cognitive system. From the agent’s point of view, the resulting effect is that if there is something wrong with her cognitive abilities or their outputs, then she will be able to notice this and respond appropriately. Otherwise, if no alert signals are issued in the form of defeaters, the agent can be epistemically responsible in employing her cognitive abilities and accepting their results by default—even if she has absolutely no beliefs to offer as to whether or why her abilities and their results are reliable.

An immediate advantage of this approach to cognitive integration and the ability intuition on knowledge is that it can explain why lacking defeaters for beliefs produced by cognitive ability suffices for them to be, not just blameless, but epistemically responsible too. On this structuralist approach to cognitive integration, cognitive abilities are the reliable, integrated components of an overall system, which continuously monitors the workings of its different parts against each other. As a result, the agent is epistemically responsible in holding undefeated beliefs that are the product of her cognitive abilities, because these beliefs and the way they are formed are actually being monitored for inconsistency with other aspects of the agent’s cognitive system and successfully survive the test. To use the terminology of Sect. 4, by thinking of cognitive integration in terms of ongoing mutual interactions with other aspects of the cognitive system, the present approach ensures that when an agent forms her beliefs on the basis of cognitive ability, EER is substantively, rather than trivially, satisfied.

7 System Reliabilism

According to the above, the present, structuralist approach to cognitive integration offers an explanation of how and when EER is substantively satisfied, such that lack of defeaters can confer, in addition to blamelessness, epistemic responsibility to our beliefs. Let us then incorporate this approach to epistemic responsibility in an externalist defeaters-based account of justification:

  • System Reliabilism (SR):

    S is justified in holding p iff (i) p is the product of a reliable process that is cognitively integrated* and (ii) S has no mental state defeaters against her belief that p (or the way it was formed).

    *where a reliable process is cognitively integrated (i.e., counts as a cognitive ability) if and only if it mutually interacts with other aspects of S’s cognitive system.

Condition (i) of SR ensures that p is the product of a reliable process that is integrated into the agent’s cognitive system. Given (i), condition (ii) ensures that EER is substantively satisfied. Therefore, when SR is satisfied, the agent’s beliefs will not be merely reliable and blameless, but also responsibly formed.

Now, we may recall that, due to its problems with the way EER was satisfied, NAJ could not be used to account for non-inferential justification. The problem was that its derivative, NIJ, did not ensure that basic beliefs are epistemically responsible, thus failing to safeguard against the objection that they are arbitrary from the agent’s point of view. Can SR provide a better springboard for an account of basic beliefs that escapes this problem?

  • System Reliabilist Non-Inferential Justification (SR-NIJ):

    A basic belief b is non-inferentially justified for S iff b (i) is the product of a reliable process that is cognitively integrated* and (ii) is not defeated by any of S’s other mental states.

    *where a reliable process is cognitively integrated (i.e., counts as a cognitive ability) if and only if it mutually interacts with other aspects of S’s cognitive system.

Condition (i) ensures that b is the product of a reliable process that is integrated into the agent’s cognitive system. Given (i), condition (ii) ensures that EER is substantively satisfied. So, on this approach, basic beliefs are not arbitrary if they are undefeated by the agent’s other mental states and produced by a reliable process that is cognitively integrated. Is this sufficient for preventing basic beliefs from being arbitrary from the agent’s perspective?

Let’s think about Norman again. The diagnosis was that Norman has no way of telling whether his belief is likely to be false, so, intuitively, he is not, from his point of view, epistemically responsible in holding it. That is, his incapacity to have any mental state defeaters against his belief renders his perspective blind, such that the absence of defeaters is no sign of epistemic responsibility. And yet, if we were to employ NIJ, we would have to conclude, against intuition, that Norman’s basic belief is justified.

SR-NIJ, however, has no problem dealing with the case. On SR-NIJ, Norman’s lack of mental state defeaters is not enough to render his belief-forming process epistemically responsible. The reason is that his clairvoyant power is not integrated into his cognitive system: His special power does not cooperatively interact with any of his sensory modalities. There is no conceivable way, for example, in which Norman’s vision, hearing, touch and other perceptual faculties interact with his clairvoyance; nor, it seems, does his memory system, which fails to communicate the strangeness of his clairvoyant power. Compare with the interplay between vision and touch, for example. If I clearly see a cup in front of me, but upon reaching for it, I receive no haptic feedback, I will assume that there is no cup after all and that I must be hallucinating.Footnote 22 Norman’s clairvoyance, by contrast, seems to be entirely disconnected from his other cognitive abilities. Thus, given the absence of interaction with other aspects of his cognitive system, Norman’s clairvoyance power is not integrated into it. Therefore, condition (i) of SR-NIJ is not satisfied and, as a consequence, condition (ii) is only trivially satisfied. In this way, SR-NIJ correctly rules that Norman’s belief fails to be non-inferentially justified.
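To make the structure of the verdict explicit, the two cases just discussed can be captured in a minimal, purely illustrative sketch. The Python classes and parameter names below (Process, interacts_with, and so on) are my own stand-ins for far richer psychological facts; they are not part of SR-NIJ itself:

```python
from dataclasses import dataclass, field

@dataclass
class Process:
    """A belief-forming process, e.g. vision or clairvoyance."""
    name: str
    reliable: bool
    # Other aspects of the cognitive system this process mutually interacts with.
    interacts_with: set = field(default_factory=set)

@dataclass
class Agent:
    processes: dict
    # Mental state defeaters the agent currently has.
    defeaters: set = field(default_factory=set)

def integrated(process: Process) -> bool:
    # Structuralist reading: integrated iff the process mutually interacts
    # with other aspects of the cognitive system.
    return len(process.interacts_with) > 0

def sr_nij(agent: Agent, belief: str, source: Process) -> bool:
    # (i) the belief is the product of a reliable, cognitively integrated process;
    # (ii) the belief is not defeated by any of the agent's other mental states.
    return source.reliable and integrated(source) and belief not in agent.defeaters

# Norman: clairvoyance is reliable but interacts with nothing, so condition (i)
# fails and the lack of defeaters in (ii) is only trivially satisfied.
clairvoyance = Process("clairvoyance", reliable=True)
norman = Agent(processes={"clairvoyance": clairvoyance})
print(sr_nij(norman, "the President is in New York", clairvoyance))   # False

# An ordinary perceiver: vision interacts with touch, hearing and memory, so an
# undefeated visual belief comes out non-inferentially justified.
vision = Process("vision", reliable=True, interacts_with={"touch", "hearing", "memory"})
perceiver = Agent(processes={"vision": vision})
print(sr_nij(perceiver, "there is a laptop in front of me", vision))  # True
```

The point of the sketch is simply that condition (ii) does real work only given condition (i): for Norman, the empty set of interactions means that his lack of defeaters is trivially satisfied, whereas for the ordinary perceiver the same lack of defeaters reflects beliefs that are actually being cross-checked.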

SR-NIJ therefore accommodates Norman’s case in a way that clearly respects the internalist’s intuition. More importantly, it does so by offering an explanation of how and when basic (i.e., non-inferentially derived) beliefs can be justified in virtue of (what seems to be, but is not) a mere lack of defeaters. According to SR-NIJ, so long as undefeated basic beliefs are the products of reliable, integrated belief-forming processes (i.e., cognitive abilities), they are non-inferentially justified in virtue of being monitored for inconsistency with the rest of the agent’s cognitive system and successfully surviving the test.

The general upshot of SR-NIJ, then, is that mere lack of defeaters will not suffice to prevent basic beliefs from being arbitrary (even if they are reliably formed); basic beliefs are justified if and only if they are the reliable outputs of a cognitively integrated machine that is capable of defeating them, and yet they go undefeated.

8 Structuralist integration in cognitive science

The present approach capitalises significantly on the notion of cognitive integration, which it holds to result from cooperative interactions between different parts of the cognitive system. Given the theoretical weight assigned to this interactive process, it is important that we gain sufficient understanding of the underlying mechanism.

It should be welcome, then, that the claims advanced so far with regard to the notion of cognitive integration are in agreement with a growing volume of studies within cognitive science, which can thereby offer naturalist support to the present approach as well as elucidate it in crucial respects. In fact, listening to what cognitive science has to say about the process of cognitive integration is necessary for appreciating the notion’s full epistemic significance, and while this is not the right place to detail all of its empirical aspects, focusing on a few considerations can disclose much about its justificatory function. Specifically, it can reveal how cognitive integration, as the dynamic interplay between functionally and neuroanatomically different parts of our cognitive systems, contributes to the realisation of two core, but so far neglected, epistemic capacities: the capacity to epistemically self-organise and the capacity to epistemically self-regulate.

8.1 Cognitive integration and epistemic self-organisation

Cognitive self-organisation is a diachronic, developmental process, which starts in early childhood and achieves optimal levels by late adolescence. During this process, different parts of the cognitive system interact until they evolve into a relatively stable configuration. Once the cognitive system has achieved this stable configuration, its component parts have mutually adapted by establishing interconnections that allow them to process information from a number of different modules in a coherent and reliable manner. As Stevens (2009) notes:

Age-related cognitive improvements are the result of how neural networks become increasingly more inter-connected and functionally specialized throughout development. […] Greater anatomical and functional connectivity presumably permits more efficient communication among brain regions needed for task performance.

This diachronic aspect of integration is at the heart of our most fundamental belief-forming capacities. Perceptual awareness, for example, appears inherently multisensory: “The integration of information has been considered a hallmark of human consciousness, as it requires information being globally available via widespread neural interactions” (Deroy et al., 2016). Think how often we perceive the same property via different sense modalities (the vibrations of a guitar string), and how, most of the time, we perceive different properties delivered by distinct sense modalities as belonging to the same object (the shape and feel of the guitar in one’s hands). Moreover, integration is not important just across sense modalities, but also within them. With respect to vision, for example, it is commonly held that the brain must bind together properties, such as colour and location, which are processed by different brain areas, into a unified object of perception (see, for example, Roskies, 1999, but also Ghose & Maunsell, 1999). While the exact mechanisms of sensory and multisensory integration remain to be understood, it has become evident that integration involves complex interactions between anatomically and functionally distinct brain areas (see, for example, Lamme & Roelfsema, 2000; Lamme et al., 1998; Hupé et al., 1998; Werner & Noppeney, 2009; Stein et al., 2014).

Additionally, such studies suggest that the interplay between different sense modalities is to a large extent responsible for the reliability of cognitive processing. Integration, Nagel notes, allows cognitive systems such as ours to subconsciously—yet optimally—process information received from different sensory channels:

As the human nervous system draws information from a variety of sensory modalities in making judgments about the environment, it assigns different weights to these channels of information as conditions change (for example, shifting to rely more on touch and hearing as darkness falls and the signal from vision is increasingly blurred). Strikingly, this process of integration is almost perfectly optimal, in the sense that the weight assigned to each sensory modality is an almost perfect reflection of the modality’s relative current precision (e.g. Ernst & Banks, 2002). (Nagel 2014, p. 711)
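To make the relevant sense of ‘optimality’ concrete: the standard model tested by Ernst and Banks (2002) combines each modality’s estimate $\hat{s}_i$ by weighting it according to its relative precision (inverse variance). In rough outline (the notation is mine, not Nagel’s):

$$\hat{s} = \sum_i w_i \hat{s}_i, \qquad w_i = \frac{1/\sigma_i^2}{\sum_j 1/\sigma_j^2},$$

so that the combined estimate has variance $(\sum_i 1/\sigma_i^2)^{-1}$, lower than that of any single modality. As darkness falls and $\sigma^2_{\mathrm{vision}}$ grows, the weights shift automatically towards touch and hearing, which is precisely the reweighting Nagel describes.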

Overall, what research like this indicates is that, diachronically, throughout ontogenetic development, neuroanatomically and functionally distinct cognitive modules interact to self-organise. The result is an integrated structure that is capable of supporting the automatic, yet reliable, production of true beliefs.

8.2 Cognitive integration and epistemic self-regulation

At the same time, research within cognitive psychology suggests that integration (in the form of interactions between different cognitive modules) also contributes to epistemic responsibility, by providing cognitive systems with the ability to self-regulate, synchronically.

An important way in which cognitive interactivity contributes to self-regulation is via noetic feelings. Nagel (2014) notes that as the above subconscious process of multisensory integration shifts the weights ascribed to the different channels of information, there is an accompanying feeling of ‘uncertainty’ on the part of the agent.Footnote 23 This feeling of uncertainty may not in itself have a direct effect on the automatic and subconscious process of multisensory integration. Its conscious, reportable nature, however, is held to have a very important effect on the way agents choose to process information. Within cognitive psychology, this feeling of uncertainty—also known as the feeling of lack of fluency—is associated with suboptimal cognitive processing. Specifically, Oppenheimer (2008) and Alter et al. (2007) suggest that subjects use the feeling of lack of fluency as a metacognitive cue for choosing between two different processing styles—a fast and a slow one. When subjects experience this feeling, they shift from fast, automatic and effortless processing that makes no requirements on working memory to a slower, analytic and deliberate processing style that places considerable demands on working memory. Moreover, it is widely held that lack of fluency does not present itself as a metacognitive cue only in cases of perception; it is found across a spectrum of different kinds of cognitive processing. “When people perceive, process, store, retrieve, and generate information, they experience the ease or difficulty of these cognitive operations” (Unkelbach & Greifeneder, 2013, p. 11; see also Alter & Oppenheimer, 2009). In epistemological terms, then, this metacognitive cue, which is generated as the result of interactions between different parts of the cognitive system, acts as an undercutting mental state defeater, alerting subjects that there might be something wrong with the way they automatically form their beliefs.
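Purely by way of illustration, the gating role that Oppenheimer and Alter et al. ascribe to the fluency signal can be sketched as follows; the threshold and the numbers are invented for the example and carry no empirical weight:

```python
# Toy model: a fluency signal gating between fast (automatic) and slow
# (deliberate) processing. Threshold and values are invented for illustration.

FLUENCY_THRESHOLD = 0.5  # hypothetical cut-off for the feeling of lack of fluency

def form_belief(percept: str, fluency: float) -> dict:
    """Return a candidate belief together with any undercutting defeater raised."""
    if fluency >= FLUENCY_THRESHOLD:
        # Fast, automatic, effortless processing: endorse the percept by default.
        return {"belief": percept, "defeater": None, "mode": "fast"}
    # Low fluency acts as an undercutting defeater: the percept is flagged and
    # handed over to slower, analytic processing before being endorsed.
    return {"belief": None,
            "defeater": f"low fluency while processing '{percept}'",
            "mode": "slow"}

print(form_belief("there is a lake ahead", fluency=0.9))
print(form_belief("there is a lake ahead", fluency=0.2))
```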

Of course, another important way that integration by means of interactions between different cognitive abilities contributes to self-regulation, beyond experiential metacognitive cues, is on the basis of contrasting beliefs: No psychological experiments are required to observe how the interconnectedness of our memory, reasoning and perceptual systems can deliver undercutting and rebutting mental state defeaters in the form of beliefs. On a cloudless day, the water in the sea looks blue, but the memory of my primary school teacher explaining that the sea reflects the colour of the sky presents me with a rebutting defeater against my perceptual belief. Similarly, if I were to find myself lost in the desert and I visually perceived what seems to be a lake, my knowledge of the existence of ‘oasis mirages’ would present me with an undercutting defeater against my perceptual belief.Footnote 24

8.3 Structuralist integration and epistemic responsibility

In summary, interactivity between cognitive modules contributes to positive epistemic standing both in a diachronic and a synchronic manner. Developmental processes allow cognitive systems to self-organise into a structure that can automatically, yet reliably, bring about true beliefs. On the basis of this self-organised structure, cognitive modules are then poised to interact in such a way so as to allow the overall system to self-regulate during the time of performance.

Of course, the distinction between diachronic reliability and synchronic responsibility, as well as the corresponding distinction between the underlying mechanisms of self-organisation and self-regulation, are, to a large extent, theoretical artefacts. Given the brain’s plasticity, every instance of epistemic self-regulation has the capacity to contribute to the process of epistemic self-organisation and vice versa. The diachronic structure of our cognitive systems is continuously shaped by, and at the same time shapes, their ongoing performance.

For example, Shea et al. (2014) argue that a central function of the reportable nature of metacognitive cues, such as the feeling of lack of fluency, is to allow agents to socially construct epistemic strategies for employing such cues. That is, even though metacognitive feelings are the by-product of subconscious cognitive control (which relies on our diachronically shaped neural architecture) and even though, when conscious, such feelings can be employed for synchronic self-regulation, they can also have a further diachronic epistemic effect, which is socially mediated:

Metacognition can also be used diachronically, for example, making it possible for people to discuss how metacognitive representations should be deployed, affecting their own cognitive control [(Job et al., 2010)]. Control strategies based on metacognitive representations, for example, what to do when memory fails, can be the subject of explicit instruction (Shea et al., 2014).

Overall then, cognitive science suggests that the interconnected nature of our cognitive systems allows them to:

(1) diachronically self-organise into a structure that can support the automatic, yet reliable, production of true beliefs;

(2) synchronically self-regulate in an epistemically responsible manner by issuing defeaters in the form of metacognitive cues such as the feeling of lack of fluency; and

(3) do so in a way that (1) and (2) can continuously affect each other, to the extent that (1) may be shaped by socially mediated epistemic norms, the need for which is prompted by (2).

Evidently, this picture from cognitive science vindicates and significantly elucidates the claim that cognitive integration in the form of interactions between different cognitive modules is necessary for a process to count as a cognitive ability capable of generating beliefs that are reliable and epistemically responsible.

9 The responsibility-truth connection

The above points offer naturalist support to System Reliabilism while significantly contributing to our understanding of cognitive integration and its epistemic significance. In this final section, I want to demonstrate how they can add to the appreciation of cognitive integration even further. Specifically, on the basis of the preceding remarks, it is possible to understand the central role that cognitive integration plays in making it objectively likely that responsible beliefs are true beliefs.

As noted in Sect. 1, there is the important intuition that responsible beliefs are likely to be true beliefs (alternatively, responsibly produced beliefs will also be reliable).Footnote 25 To give one more example of an author citing this intuition, Bergmann (2004, p. 49) has noted that a virtue of his defeaters-based account of justification is the consequence that, according to it, there will be a “high objective probability that a justified belief will be a true belief if the properly functioning faculties that produce it are operating in the environment for which they were ‘designed.’”Footnote 26

Bergmann’s insight that responsible beliefs are likely to be true when produced in the environment for which one’s faculties were ‘designed’ is a welcome specification. Unfortunately, however, Bergmann offers no explanation as to why any of this might be true: How is it that the absence of mental state defeaters makes it likely that the resulting belief will be true, when the belief is produced by a cognitive process that operates in its ‘normal’ environment? To adequately motivate this claim, we need a mechanistic explanation of the threefold relation between (α) a belief’s lack of mental state defeaters (i.e., its responsibility), (β) its objectively high probability of being true (i.e., its reliability) and (γ) the environment within which it is produced.

On the present account, and given the foregoing structuralist understanding of cognitive integration, we can say the following. Our cognitive systems are bundles of modular cognitive abilities. Though modular, on the basis of cognitive integration, cognitive abilities are phylogenetically, ontogenetically and socially co-calibrated to synergistically detect any possible shortcomings—provided they operate in the environments they were ‘designed’ for.Footnote 27 This is because the part-modular-part-integrated, cross-monitoring structure of our cognitive systems makes it highly unlikely for underperforming abilities or faulty outputs to go undetected—unless, of course, the cognitive system operates under particularly abnormal conditions, where several of its faculties may be compromised at once. In normal environments, if one of the operating faculties, or its resulting beliefs, happens to be problematic, it is highly likely that at least one of the other reliably operating faculties will issue an alert signal in the form of a (rebutting or undercutting) mental state defeater. If no such mental state defeaters are presented, the cognitive agent can be confident that the resulting (responsible) beliefs are highly likely to be true (i.e., reliable).

In other words, given cognitive integration, within normal environments, the mental state defeaters that integrated cognitive systems can deliver are highly likely to track most true propositions that could defeat one’s beliefs. It follows that the undefeated (i.e., responsible) beliefs of such integrated systems will also be reliable beliefs (i.e., highly likely to be true). To see how this works, recall that the synchronic self-regulation of integrated cognitive systems continuously shapes their self-organised structure and is, at the same time, diachronically affected by it. In this way, integrated cognitive systems continuously adapt their ability to self-regulate on the basis of an iterated process of trial and error that has been, and continues to be, fine-tuned phylogenetically, ontogenetically and socially in response to the forces that occur in the agent’s normal environment.Footnote 28 The result is a cognitive architecture that is poised to detect nearly all facts that might be relevant to its belief-forming processing, when this is exercised in its normal environment. That is, most times, there will be a near perfect overlap between the set of mental state defeaters that integrated systems can deliver and the set of true propositions that could defeat one’s beliefs. Therefore, in normal environments, it will be highly unlikely for beliefs that lack mental state defeaters to be false.Footnote 29
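A back-of-the-envelope way to see why such cross-monitoring makes undetected error unlikely (the independence assumption and the symbols below are simplifying stipulations of mine, not claims drawn from the empirical literature): suppose that, in the system’s normal environment, a faulty output would be flagged by each of the $n$ faculties interacting with the relevant process with probability $d_i$, and that these checks operate roughly independently. Then

$$\Pr(\text{faulty output goes undefeated}) = \prod_{i=1}^{n} (1 - d_i),$$

which shrinks rapidly as $n$ grows: with four interacting faculties, each only 60% likely to catch the error, the chance that it slips through undefeated is $0.4^4 \approx 0.026$. For a faculty like Norman’s clairvoyance, by contrast, $n = 0$ and every faulty output goes undefeated. And in abnormal environments several of the $d_i$ may collapse at once, which is why the claim is restricted to the environments for which the faculties were ‘designed’.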

Thus, with this mechanistic explanation at its disposal, System Reliabilism is in a unique position to account for the intuition that responsible beliefs are also likely to be true beliefs.

10 Conclusion

I have been concerned with (A) demonstrating that basic, non-inferentially derived beliefs can avoid being arbitrary from the agent's point of view and (B) explaining why such non-arbitrary beliefs are likely to be true. My starting point was to argue that, while existing externalist defeaters-based accounts of justification have been on the right track, they fail with respect to (A), because (i) they fail to accommodate relevant counterexamples such as Norman the clairvoyant and (ii) they cannot explain how one can be epistemically responsible in holding basic beliefs—which, essentially, amounts to a failure to explain how basic beliefs can escape the charge of being arbitrary from the agent’s perspective. To solve these problems, I introduced System Reliabilism, a new externalist defeaters-based account of justification.

If the foregoing is correct, System Reliabilism compares favourably to alternative, externalist defeaters-based approaches to justification for two main reasons. Firstly, it offers a detailed, mechanistic explanation of how (A) is possible, i.e., of how basic beliefs can avoid being arbitrary (while, of course, successfully dealing with both (i) and (ii)). Secondly, this mechanistic explanation—which is naturalistically motivated on the basis of empirical considerations from cognitive science—is the only available means for explaining the truth of (B)—i.e., the claim that non-arbitrary basic beliefs are likely to be true.Footnote 30

Briefly, with respect to (A), System Reliabilism holds that basic beliefs are epistemically responsible, and thereby non-arbitrary, when they are the undefeated products of cognitively integrated belief-forming processes (i.e., cognitive abilities). The reason why such undefeated beliefs are responsible is that they are being monitored for inconsistency with the rest of the agent’s cognitive system and they successfully survive the test. Simply put: They are undefeated while being defeasible.Footnote 31 With respect to (B), System Reliabilism holds that cognitively integrated systems have been phylogenetically, ontogenetically and socially calibrated to detect most true propositions that could defeat their outputs. Thus, any resulting undefeated beliefs are, within normal environments, likely to be true.

In this way, System Reliabilism has the means to explain not only how certain basic beliefs can avoid being arbitrary, but why, normally, these beliefs are also likely to be true.