Introduction

There has been an ongoing scholarly discussion on the topic of open science over the past decade (Nosek & Bar-Anan, 2012). Open science practices, which can be defined as tactics that increase the transparency and accessibility of scientific research (e.g., preregistering hypotheses prior to analysis and using results-blind reviewing; van der Zee & Reich, 2018), have been promoted as practices that, if widely adopted by investigators, would lead to a more robust, replicable, and cumulative scientific literature (Banks et al., 2016a, 2016b; Banks et al., 2018; Grand et al., 2018; Woznyj et al., 2018). More recently, leaders in our field have written editorials in which journals signal their relative receptiveness to scholars’ use of open science practices (Antonakis, 2017; DeCelles et al., 2021; Eby, 2022; Pratt et al., 2020). Undoubtedly, our science both values and benefits from a literature that is robust, replicable, and cumulative—and there are many praiseworthy merits to practicing open science. A recent survey of scholars from the four major social science disciplines (economics, political science, psychology, and sociology) suggests that over 80% of scholars have used at least one open science practice since 2017, up from roughly 25% in the decade prior (Christensen et al., 2019). This finding is encouraging, as it suggests that scholars are seeking to normalize practices that contribute to a more robust, replicable, and cumulative scientific literature.

However, as with enacting any new behavior, deciding to practice open science can come with some hesitancy. There are positives and negatives (e.g., time demands, costs) to engaging in even the smallest of open science practices. Additionally, questions have been raised about the ability to apply open science practices in applied settings (Gabriel & Wessel, 2013; Guzzo et al., in press; Leavitt, 2013) or in certain paradigms or methodologies (e.g., qualitative research, Pratt et al., 2020; field research, see Hensel, 2021; Guzzo et al., in press). Scholars face understandable constraints in their ability to engage in certain open science practices and do not want to be unduly punished for selective adoption of open science tactics. In our view, such concerns about open science appear driven by an “all-or-nothing” framing that obscures the selective adoption of easy-to-implement open science practices. Despite the prevalence of this “all-or-nothing” framing, an incremental or piecemeal approach to open science has been strongly endorsed by other open science advocates (Corker, 2018; Kathawalla et al., 2019; Nosek et al., 2015; Nuijten, 2019). Even new journal publishing guidelines (e.g., the Transparency and Openness Promotion Guidelines; see Nosek et al., 2015) scale positive changes from low-hanging fruit (e.g., improving citation practices) to more challenging tactics (e.g., registered reports) to spur more widespread positive change.

With this paper, we hope to bring greater attention to our shared values as practicing scientists (see also Aguinis et al., 2020) and encourage scholars to select those practices (however small) that put these widely shared values into practice as they see fit. According to the National Academies of Sciences, Engineering, and Medicine (2017), there are six scientific values that we believe are widely shared (see also Anderson et al., 2007): objectivity, honesty, openness, accountability, fairness, and stewardship. These values are presented in Table 1. Affirming these values is an important—and easy—first step toward practicing open science. Once affirmed, enacting (or not enacting) any specific open science practice can be framed as putting at least one widely shared value into practice. To further frame the adoption of open science positively, we liken enacting any specific open science practice to visiting a buffet filled with an assortment of cuisines worth trying. It is arguably wiser to sample the buffet over multiple visits, trying different cuisines each time, than to attempt to take in everything the buffet has to offer in one sitting.

Table 1  Core values of science

Enacting Core Values via Open Science Practices

To focus our readers’ attention on those open science practices that are widely accepted as useful for opening up our science, we adapted an existing framework—Nosek et al.’s (2015) Transparency and Openness Promotion (TOP) Guidelines—to highlight what a scholar can do to open up their work. Briefly, the TOP Guidelines established eight standards regarding journals’ procedures and policies for publication: (i) citation standards, (ii) data transparency, (iii) analytic methods (code) transparency, (iv) research materials transparency, (v) design and analysis transparency, (vi) preregistration of studies, (vii) preregistration of analysis plans, and (viii) replication.

We adapted the guidelines to bring attention to key decisions scholars can make about enacting a core value via an open science practice. Additionally, drawing on the core values of science, we as a team identified which values, in our judgment, are enacted by each open science practice. Our adaptation of the TOP Guidelines, the open science practices implied by the TOP Guidelines, and the core values of science we viewed as enacted by these practices are presented in Table 2. For example, if we were looking to enact our scientific value of objectivity, we could consider posting our data on the Open Science Framework (OSF) and reporting analyses in a way that increases the reproducibility of our work. However, if this is not feasible within a specific study context due to confidentiality concerns (e.g., qualitative research; Pratt et al., 2020), we could select another tactic from the buffet of options to enact our value of objectivity. Similarly, if we wanted to enact our value of stewardship, we could post our analytic code on a trusted repository so other researchers could readily reproduce the reported analyses. However, if this were not feasible, we could opt to enact our value of stewardship by participating in a results-blind review. In essence, there is no one-size-fits-all approach to open science: multiple practices can enact widely shared values. It is ultimately up to the researcher or research team to select the practice that works for them, their project, and their circumstances, and that puts widely shared core values into practice.

Table 2  A hypothetical connection linking open science practices to the core values of scientific integrity

The Buffet of Open Science Practices

As can be plainly seen, there is an almost overwhelming number of tactics or behaviors that a researcher can select from the open science “buffet,” and each of these tactics is accompanied by a plethora of journal articles, tutorials, workshops, checklists, and more that describe how to optimally conduct open science. These numerous resources are valuable but, given their volume, can be difficult to digest or apply to one’s own research. To make this effort more feasible, we organized the open science buffet into an even more digestible form (see Table 3). Under each of the eight TOP standards discussed above, we list specific tactics that researchers can engage in to enact their values. We paired the tactics with helpful resources and, where possible, we also listed exemplar cases of researchers enacting these tactics in business and psychological research.

Table 3  A buffet of open science tactics for authors to choose from along with exemplar cases and guidance for putting a scientific value into practice

Notably, Table 3 lists specific open science tactics that vary in their ease of implementation. Such variation can be explained by factors such as the cognitive load the tactic places on a scholar, contextual barriers associated with the research design or its features (e.g., sharing data that may contain personally identifying information), and the amount of time the tactic requires from the researcher. Fortunately, many of these tactics are small and can be implemented quickly yet still make a meaningful contribution toward increasing the rigor and reproducibility of our research.

For instance, consider the decision to preregister one’s work. The preregistration of study hypotheses and analytical decisions is a tactic widely viewed as introducing much more transparency and openness into the decision-making processes guiding the execution of a study. Like any plan, a preregistration can be simple—for example, including only one’s hypotheses to be tested—or more complex, including decision rules for excluding cases and the conditions under which a hypothesis would receive support. As such, preregistration can take as little as 30 minutes to complete (see Banks et al., 2018) or much longer, depending on the complexity of the research design or the number of studies contained in a project (Toth et al., 2020).

Also consider the sharing of analytic code. The distribution of analytic code can serve as an invitation to check the authors’ results or as a contribution to the scientific community beyond the study findings alone. Ideally, the shared code would be clearly annotated and presented in a manner that allows it to be run independently of any package or library updates that may occur after publication (e.g., through the use of Docker; see van Lissa et al., 2021). Nevertheless, if a researcher is unfamiliar with the strategies employed to create reproducible code, the simple act of uploading an R script or SPSS syntax file to an OSF repository is still preferable to not sharing code at all. Regardless of its reproducibility, published code can provide the foundation for another scholar’s work or a better understanding of the methods a research team employed in carrying out the published work. Additionally, sharing syntax is an exceptionally simple and quick open science tactic. Ultimately, applying one or more simple open science tactics in a research endeavor is preferable to engaging in no open science activities at all. And if a researcher consciously aligns their values with their scientific practices, they will choose to do something, however small, rather than nothing at all.
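
To make this concrete, below is a minimal sketch of the kind of annotated R script that could accompany a shared dataset. The file name, variable names, and model are hypothetical and are included only to illustrate how little annotation is needed for shared syntax to be useful to others; they are not drawn from an actual study.

# Hypothetical example of a shareable R analysis script; all file and
# variable names are illustrative rather than taken from a real project.
sessionInfo()                               # record R and package versions for later reproduction
dat <- read.csv("study1_deidentified.csv")  # de-identified data posted with this script (e.g., on the OSF)
dat <- subset(dat, attention_check == 1)    # disclosed exclusion rule: keep only attention-check passers
fit <- lm(satisfaction ~ autonomy + tenure, data = dat)  # primary analysis reported in the manuscript
summary(fit)                                # output corresponding to the reported coefficients

Even a script this brief lets readers see exactly which cases were excluded and which model produced the reported estimates.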

We hope that our illustration reveals that the “all-or-nothing” approach toward open science can lead a researcher to load more onto their plate than they can reasonably handle. By contrast, selecting even one practice from the broader buffet of open science tactics has the potential to create small yet meaningful benefits for the scientific community. We urge our readers to select just one tactic that allows them to enact a widely shared value. Indeed, after going through our values affirmation exercise, our readers may realize that there are many ways in which they have already been practicing open science, in which case we hope our buffet provides even more opportunities to enact one’s core values.

How JBP Is Further Promoting Open Science

We would like to take this opportunity to point out the many ways in which JBP rewards scholars’ efforts to enact these widely shared core values of science. One is results-blind reviewing. In this alternative format, a study’s methods are peer-reviewed before the results are seen, deterring critiques that hinge on how the results turned out (see Nosek & Lakens, 2014; Woznyj et al., 2018). This submission format was adopted to help scholars report their findings honestly, be evaluated fairly on the basis of objective criteria, and be held accountable for key decisions (e.g., trimming outliers). Notably, results-blind reviewing can help separate the process by which our knowledge is cultivated from the conclusions of an investigation (Grand et al., 2018). Results-blind reviewing stands in stark contrast to traditional peer-review processes, which are widely believed to drive scholars to engage in questionable research practices (Banks et al., 2016a, 2016b, 2018; O’Boyle et al., 2019).

A second activity involves encouraging scholars to publish robustly tested hypotheses with null findings (e.g., Kepes et al., 2014). In fact, JBP published a special issue in 2014 devoted to null findings (see Landis et al., 2014). The more we learn about what does not work, the closer we get to finding out what does and why (Kepes et al., 2014).

A third activity involves encouraging scholars to preregister their hypotheses and decision-making processes as well as to make study materials available online. JBP has provided guidance for preregistering study hypotheses (see Toth et al., 2020). Also, in partnership with the Center for Open Science, JBP hosts a repository where authorship teams can make methodological features of their contribution (e.g., measures, vignettes, data) even more accessible post-publication (see https://osf.io/collections/jbp/discover). Scholars interested in going beyond the guidance provided in Table 3 should consider this resource.

Our Challenge to Our Readers: Find Your Small Win for Practicing Open Science

Decades ago, social psychologist Karl Weick (1984) pointed out how the actions of many actors taking small steps can improve the collective lot when tackling large-scale problems: “A small win is a concrete, complete, implemented outcome of moderate importance. By itself, one small win may seem unimportant. A series of wins at small but significant tasks, however, reveals a pattern that may attract allies, deter opponents, and lower resistance to subsequent proposals” (p. 43). He suggested that large, intractable problems, such as ensuring our collective body of knowledge is robust, can be solved by many actors each doing something small. He went as far as to suggest that naivete has its value: rejecting conventional wisdom, seeing problems as solvable, starting with fewer preconceptions, and favoring optimism. The mere idea of opening our science can easily strike a reader as just such a large problem, one that requires large-scale changes in the way the scholarly publishing system is structured (e.g., addressing misaligned incentives; see Kerr, 1975). However, we do not have to accept this framing of open science. Rather, we can choose a different frame: one that allows us to view the large problem of ensuring our collective body of knowledge is robust, reliable, and replicable as a tractable one. If each of us actively commits to doing something small, rising to the open science challenge, then the rising tide will lift all boats.

Conclusion

To that end, we close by asking that you imagine three scholars: (i) a high-achieving open science practitioner who is working on a project that lends itself to enacting all of the practices we’ve highlighted in our tables; (ii) a scholar who has enacted a meaningful subset of open science practices; and (iii) someone who is just getting started and perhaps can (at best, given the circumstances) commit only to the easiest practices (e.g., disclosing whether or not a preregistration was conducted, sharing scale items, disclosing outlier rules). Existing valuations and understandings of open science might lead one to conclude that the first scholar best embodies the open science movement; however, all three scholars are doing what they can to collectively open our knowledge base. The more we can do to encourage and reward open science practices, the more likely they are to be repeated in the future.