One truism that has become increasingly clear in the age of digital technologies is that technological development far outpaces the evolution of responsible policy and ethics. This has resulted in many current situations and scandals that once would have been thought possible only in a dystopian science fiction novel. In 2014, Cambridge Analytica harvested data from around 270,000 Facebook accounts to build psychological profiles of 87 million users in an attempt to “target their inner demons” and sway their voting behaviors (Cadwalladr and Graham-Harrison 2018). Advances in machine learning and big data now allow heretofore low-risk data, like what one posts publicly on Twitter, to be used to predict the likelihood of medical conditions (Schneble et al. 2020), contact with a pathogen (Keeling et al. 2020), or mental illness (Kim et al. 2019). Further, constant surveillance (empowered in part by mobile devices and social media) has enabled governments and corporations to develop social management infrastructures that control citizens’, employees’, and consumers’ behaviors through social credit, targeted influence, or fear of discovery (Kostka 2019; West 2019).

In education, the potentials and risks of similar practices with student data represent cause for concern, as student behaviors are increasingly tracked, analyzed, and studied to draw conclusions about learning, attitudes, and future behaviors. Yet, a recent review of the top 10 universities in the U.S., U.K., and Switzerland found that “only a small minority of academic institutions has developed guidelines for data science,” in large part because “IRBs in many countries are not required by law to review such research” (Schneble et al. 2018). Couple this with the fact that 91–97% of consumers accept the terms and conditions of apps and websites without reading them (Deloitte 2017), and the threat to students seems clear: their data are increasingly being used in ways that they are not aware of and might not willingly consent to. This occurs even though internet users generally claim to care deeply about their privacy (Paine et al. 2007), yet often experience pressure to adopt technologies they would otherwise consider creepy (Shklovski et al. 2014). The result is a set of ethical dilemmas regarding student privacy that educational institutions must grapple with if they are to ethically engage in learning analytics or related activities with student data (Slade and Prinsloo 2013).

To help address this problem, Ifenthaler and Schumacher (2016) conducted a quasi-experimental study surveying students about their preferences for learning analytics systems, their attitudes toward privacy as they relate to specific types of data, and how those attitudes influence learning analytics system acceptance. The results are valuable in multiple ways but clearly show that (a) students considered some types of data much more important to keep private than others (e.g., medical data vs. email addresses) and (b) students’ willingness to accept learning analytics systems was moderately predicted by their attitudes toward data control and sharing. One suggestion provided in the article is that students and their voices should be involved in decision making regarding learning analytics systems and that approaching student privacy in an omnibus manner (wherein all types of data are treated with equal levels of privacy consideration) is overly simplistic. Rather, if learning analytics systems are to be ethically used and accepted by students, then professionals should start by identifying and using data types that introduce minimal risk to students and that are also reasonably well connected to the intended purpose and learning objectives of the system.

In this special issue, we have collected three responses from scholars currently doing work in learning analytics, who react to, better explain, and expound upon Ifenthaler and Schumacher’s findings in their own settings as researchers and practitioners. Provided responses include the following:

  • Rosenberg and Staudt Willet approach the article from a research perspective, informed by open science practices, and provide guidance on how to navigate the balance between student risk and scientific potential, which “begins with a deep understanding of the specifics of the context.”

  • Corrin approaches the article from a policy perspective, highlighting potential limitations of the study and the difficulty of implementing policies that adequately account for student voice without first establishing a shared vocabulary and understanding of benefits, while also considering ethical factors beyond privacy that might shape policy.

  • Ochoa and Wise approach the article from a practitioner perspective and highlight three general trends in learning analytics that may help safeguard privacy: (a) involving students in tool creation, (b) developing “analytics that are contextualized, explainable and configurable,” and (c) empowering student agency with analytics as part of a “larger process of learning.”

Collectively, these responses show a unified call for researchers, practitioners, and policy makers to be more aware of and responsive to student privacy in learning analytics systems, underscoring that professional standards in this space remain nebulous and require ongoing leadership, thoughtfulness, and sensitivity to ensure ethical behavior. Indeed, as learning analytics systems introduce ever-increasing potential to benefit students, they also introduce concomitant potential for harm that must be addressed quickly if we are to avoid any number of dystopian futures.