Open Access (CC BY 4.0 license). Published by De Gruyter Mouton, June 23, 2023.

Promoting responsible AI: A European perspective on the governance of artificial intelligence in media and journalism

  • Colin Porlezza
From the journal Communications

Abstract

Artificial intelligence and automation have become pervasive in news media, influencing journalism from news gathering to news distribution. As algorithms increasingly determine editorial decisions, specific concerns have been raised with regard to the responsible and accountable use of AI-driven tools by news media, encompassing new regulatory and ethical questions. This contribution aims to analyze whether and to what extent the use of AI technology in news media and journalism is currently regulated and debated within the European Union and the Council of Europe. Through a document analysis of official policy documents, combined with a text mining approach and an inductive thematic analysis, the study looks at how news media are dealt with, in particular regarding their responsibilities towards their users and society. The findings show that regulatory frameworks about AI rarely include media, but when they do, they associate them with issues such as disinformation, data and AI literacy, as well as diversity, plurality, and social responsibility.

1 Introduction

On 9 March 2018, the European Commission published a press release titled Artificial intelligence: Commission kicks off work on marrying cutting-edge technology and ethical standards (European Commission, 2018). The statement announced that the Commission was setting up a group tasked with working on artificial intelligence by collecting expert input and rallying a broad alliance of stakeholders. The press release marked a first milestone in the discussion revolving around AI governance. In April 2018, the Commission unveiled not only a new European AI Strategy, but also the creation of a High-Level Expert Group on Artificial Intelligence, tasked with making recommendations regarding ethical guidelines on AI-related challenges. The group’s recommendations would then feed into the policy development process and be used to further specify the EU’s governance approach in the field of AI, with the creation of a White Paper on AI in 2020, as well as a coordinated plan on AI in April 2021 that led to the proposal of a new AI Act: a European law that would make AI subject to specific legal requirements based on a risk assessment.

The Council of Europe (CoE) has looked into the impact of technology on law since the 1980s. For instance, in 1981 it established the Convention for the Protection of Individuals with regard to Automatic Processing of Personal Data – a predecessor of personal data protection laws such as the GDPR. On 11 September 2019, the Committee of Ministers of the CoE set up the Ad Hoc Committee on Artificial Intelligence (CAHAI). The expert group, established for a two-year term, was “tasked with examining, through broad multi-stakeholder consultations, the feasibility and potential elements of a legal framework for the development, design and application of artificial intelligence” (CAHAI, 2020). Based on the work of the CAHAI, the Committee of Ministers of the Council of Europe established the Committee on Artificial Intelligence (CAI), tasked with elaborating a legally binding instrument regarding the use of AI. Over the following years, the CoE took on the issue of artificial intelligence from different positions, not only within the CAI, but also in specific groups such as the Committee of Experts on Increasing Resilience of Media, which is drafting guidelines on the use of digital tools, including artificial intelligence, for journalism. Although the work of the CoE is sometimes overshadowed by the EU’s activities, its legal standard-setting as a pan-European human rights organization should not be underestimated (Valcke and Hendrickx, 2023).

In media and journalism, AI has become pervasive in almost all large news organizations, to the point that “[…] algorithms have begun to influence, to some extent, nearly every aspect of journalism. Their impacts may be observed from the initial stages of news production (e. g., story selection) to the latter stages of news consumption (e. g., commenting on stories)” (Zamith, 2019). In fact, AI-driven tools are being implemented in many different areas ranging from news gathering (Thurman et al., 2016) to news production (e. g., Carlson, 2015; Diakopoulos, 2019), and from news distribution (Ford and Hutchinson, 2019) to news personalization (Helberger et al., 2019). AI-driven technology, as diverse as its use-cases are, changes the nature, role, and workflows of journalism (Schapals and Porlezza, 2020; Lewis et al., 2019), and contributes to “mak[ing] journalism in new ways, by creating new genres, practices, and understandings of what news and news work is, and what they ought to be” (Bucher, 2018, p. 132).

In academia, the use of AI technology started to raise a range of ethical questions in terms of transparency, accountability, and responsibility (Dörr and Hollnbuchner, 2017; Helberger et al., 2019; Monti, 2018). In contrast, news coverage of AI was dominated by an industry-led portrayal of AI products as a solution to a range of public problems (Brennen et al., 2018). Within the news industry, too, ethical concerns have not been a top priority (Porlezza and Ferri, 2022). As the automation of news media progresses, questions regarding a responsible use of AI technology become paramount: To what extent can news media and journalism be held to account for the use of AI-driven tools? And to what extent can the implementation of AI tools in news media be effectively regulated? Since news and journalism play an important societal role in any democracy, the way AI and algorithms impact public communication should be kept under scrutiny. However, the media sector is a particular field in which not only issues of privacy and non-discrimination play into the equation, but also matters related to press freedom and freedom of expression.

This contribution aims to analyze whether and to what extent the use of AI-driven tools in news media is currently regulated in Europe at a supranational level. The contribution looks at the European Union (EU) as well as the Council of Europe (CoE) in relation to three aspects:

  1. When it comes to news media, how should news production and news distribution be regulated?

  2. When it comes to news users, how should news media ensure that AI-driven tools are used in a way that respects users’ privacy and their data?

  3. When it comes to news media’s responsibilities towards society, how can they ensure that AI-driven tools enhance key values such as news diversity?

This paper is organized in the following manner: In the next section, the paper will discuss previous research regarding the governance of AI, specifically in the field of media and journalism. After that, the paper presents the method used to analyze the documents on which this investigation is based. In the subsequent section, the paper discusses the current regulatory situation, and presents concluding remarks.

2 Literature review

The term AI is complex, not only because it denotes both a field of enquiry and a particular technology (Gunkel, 2020, p. 3), but also because it encompasses a huge variety of subfields (Russell and Norvig, 2009, p. 1), with applications ranging from healthcare to chess engines, self-driving cars, robotics, and more. While these AI expert systems have seen increasingly widespread use in society (Epstein et al., 2018), there is no universally accepted definition of AI; instead, definitions depend on different disciplines’ conceptualizations. This paper uses the following definition of AI, suggested by Brennen et al. (2018, pp. 1–2) and inspired by Dickens Olewe: “[…] AI is a collection of ideas, technologies that relate to a computer system’s capacity to perform tasks normally requiring human intelligence.” This definition has been chosen because it reflects the idea that AI cannot be reduced to a particular technology, software, or purpose; rather, it is a collection of technologies, and of the ideas behind them, that enable machines to carry out tasks without human intervention.

The news media framing of AI

The opportunities that come with the technology are countless, as many national governments’ AI strategies show.[1] But governments are not alone in pointing out the opportunities of AI; news coverage tends to emphasize beneficial effects too. Several studies have found that (a) the benefits of AI are discussed more frequently than its risks (Chuan et al., 2019; Fast and Horvitz, 2017), (b) there is a tendency to present AI systems as outperforming human expertise (Bunz and Braghieri, 2022), and (c) AI is more often framed as a “gate to heaven” and “helping hand” than as “Frankenstein’s monster” (Cools et al., 2022). The more powerful AI becomes, the more positive the framing of AI tends to be (Cools et al., 2022, p. 17).

The media industry’s internal discourse is largely positive as well. Beckett’s (2019) research demonstrated that tech-savvy experts and journalists are less concerned about the dysfunctional consequences of these innovations. Gutierrez Lopez et al. (2021) came to similar conclusions, showing that journalists are upbeat about the technology. A recent study by Porlezza and Ferri (2022) confirmed that experts from industry and academia often point out the pervasiveness of news automation as well as its advantages in terms of efficiency and time gained for more investigative reports.

However, some studies have also found that public communication about the impact of AI has increased, and that the news discourse has become more critical over time (Nguyen and Hekman, 2022), because “worries of loss of control of AI, ethical concerns for AI, and the negative impact of AI on work have grown in recent years” (Fast and Horvitz, 2017, p. 963). Yet positive depictions of the technology still seem to outnumber negative frames, which tend to be overly dystopian (Porlezza, 2019). It is important to remember, as Strömbäck and Karlsson (2011, p. 643) point out, that technological changes in news media and “how they may change the influence over the news should not be understood in isolation from other changes in media environments.” Hence, the way technology is portrayed in the news media might well have an impact on how AI and automation may or may not be implemented and regulated in the news sector. Brennen et al. (2022) also show that news coverage of AI often constructs an expectation of a pseudo-artificial general intelligence, which might well shape the perception of the technology in political decision-making. Within newsrooms, too, the perception of the technology can result in increased skepticism towards the use of automation in the workplace, in particular when it comes to specific effects on labor and working conditions such as the partial replacement of journalistic tasks. Carlson (2015, p. 430) refers to this as a technological drama in news work over the “entrenched cultural conflict between equating technological development with progress and deep distrust of machines as dehumanizing forces.”

AI governance

The hype surrounding AI has triggered a lot of attention in terms of regulatory debates:

As suggested by work on performative function of hypes, irrespective of how accurate predictions about AI are, they influence agenda setting including assigning high policy priority to AI. The hype about AI is accompanied by major public controversy about positive and negative effects of AI. (Ulnicane et al., 2021, p. 171)

Moreover, in the light of phenomena such as mass surveillance (Cheney-Lippold, 2017) or data colonization (Couldry and Mejias, 2019), scholars have issued repeated calls for action (Pasquale, 2015; Crawford, 2021). As a consequence, national governments and supranational institutions face increasing pressure to come up with regulatory frameworks (Taeihagh, 2021; Zhang and Dafoe, 2019). The call for action thus emerges both as a policy frame in national strategies (see for instance Porlezza, 2022) and in public strategies that offer means to mitigate the risks of the technology: “As the move away from technological perspectives towards societal transformation begins to consolidate, governance approaches start to come under scrutiny” (Radu, 2021, p. 179).

There is not just one approach to the governance of algorithms and AI (Latzer and Just, 2020). Approaches focusing on ethical principles are among the most often adopted (Cath, 2018; Floridi et al., 2018; Taeihagh, 2021; Gianni et al., 2022), leading some scholars to criticize this ethical focus as overshadowing the interest in regulation (Radu, 2021, p. 179). Overall, the field of AI governance looks into the way AI technology can and should be controlled, governed, and shaped (Dafoe, 2018, p. 5). It can be defined as the way “humanity can best navigate the transition to advanced AI systems, focusing on the political, economic, military, governance, and ethical dimensions” (idem). AI systems do not operate in a social vacuum: they are deeply embedded in society and affect our everyday lives. While some governance contributions focus on an “overall understanding of the wide systemic socio-technical phenomenon and suggest broader sets of integrated approaches and tools to govern such a phenomenon” (Gianni et al., 2022), others concentrate on an integrated approach between governance and ethics, centering on practical governance processes and principles and on the question of how ethical principles can be operationalized. However, AI ethics approaches often present problems: they are regarded as limited in their effectiveness (Taeihagh, 2021), inattentive to power dynamics (Crawford, 2021), and marked by cultural and economic inequalities (Jobin et al., 2019). Gianni et al. (2022) thus conclude that “the existing governance and policy-making seems to reside at an arm’s length from the suggested ethics guidelines and governance frameworks, leaving room for continuing discussion on the actual use of power and democratic mechanisms in the policy-making and governance of AI.”

Research shows that the concrete application of ethical principles to AI technology faces several challenges: machine learning is inherently opaque, making it difficult to reach certain levels of transparency, scrutability, or explainability (Mittelstadt et al., 2016). Hagendorff (2020) adds that ethical principles might be too abstract, leaving designers puzzled. Bietti (2020) also points out the risk of ethics washing when codes of ethics are simply adopted without operationalizing and institutionalizing their principles. Besides, ethical principles are often developed on abstract presuppositions, but in the case of AI technologies context matters: “The resolution of ethics into a set of fixed principles overlooks their relationship with a socio-economic environment formed by a plurality of contextual values, power asymmetries, interests and material conditions necessary to implement AI-based technologies” (Gianni et al., 2022).

Media and journalism as specific challenges for AI governance

In the context of media and journalism, Beckett (2019) as well as Porlezza and Ferri (2022) show that ethical concerns are not a primary issue in relation to AI tools, although automation poses several ethical problems. Monti (2018), for instance, points out that the quality and accuracy of the data used in automated processes are crucial in order to avoid any form of bias or manipulation. In other cases, different ethical principles might clash. Journalists have specific responsibilities, such as protecting their sources. But in the case of automated processes, the need to be transparent about the functioning of the algorithm and the data being used might clash with source protection, as Dörr and Hollnbuchner (2017, p. 412) point out: “It is questionable whether source protection is possible or even desired as service providers and their journalistic clients should disclose all data sources in terms of data transparency” – which might also raise legal questions (idem, p. 414).

Specific responsibilities also arise in relation to news personalization: Adopting a human rights perspective, Helberger et al. (2019) state that the use of AI technology already has measurable consequences for the user and the public sphere, which can go both ways: more relevant news, or concerns about selective exposure and access to information, thus making or breaking filter bubbles (Vrijenhoek et al., 2021). AI technology can thus undermine central values such as plurality or diversity through bias in news exposure, “the necessity to collect and store extensive data on all users, the risks of targeted manipulation, and the limited agency users experience while interacting with AI-driven tools” (Helberger et al., 2020, p. 11).

The institutionalization of AI in news work evokes many ethical challenges to which the news industry has not yet found a convincing answer – which might add to the challenges of developing and adopting ethical principles. The problem is also reflected at the level of self-regulation: Principles about AI are largely missing from the codes of press councils, as Porlezza and Eberwein (2022) have shown. In a similar study, Díaz-Campo and Chaparro-Domínguez (2020) show that principles with regard to controlling software or code are likewise lacking. However, when it comes to the identification of ethical values related to the use of AI in journalism – and this also includes matters of privacy and trust – it is not only about guidelines and codes of ethics. It is more about the “responsible organization of processes: the processes that result in the identification of relevant values, but also ways to concentrate, contest, formalize, implement, measure and continuously improve the way journalistic AI lives up to these values” (Helberger et al., 2022, p. 1620). This includes a discussion about what kind of purposes AI should serve in news and journalism, and how it should be governed in a broader normative framework that goes beyond short-term KPIs, that (a) takes into account a societal perspective, and that (b) is aware of potentially “competing values that must be balanced when deploying journalistic AI” (idem, p. 1621).

The issues concerning AI governance in the field of media and journalism have also been addressed in a timely essay by Helberger and Diakopoulos (2022), who discuss the European AI Act and why it matters for media and journalism research: According to the authors, a responsible use of AI technology is not limited to its implementation, but concerns its use and design as well. Broader organizational and societal contexts are therefore crucial but often overlooked, because users such as journalists and consumers are not taken into account. In other words: “When developing regulatory approaches to AI and digital technologies, policy makers are moving in an arena of extreme technological, economic and societal complexity, a complexity few policy makers have been prepared to deal with” (Helberger and Diakopoulos, 2022).

But why are policy debates and regulatory efforts at a European level relevant for news and journalism? First, news and journalism need clarity on the regulation of AI in the context of press freedom and freedom of expression – in particular as the European Convention on Human Rights, together with the European Court of Human Rights, strongly shapes and protects the media’s freedom. Second, clear regulation could improve data accessibility and transparency regarding the use and function of AI tools, which can eventually strengthen trust in news and journalism. And third, news and journalism should have an interest in a clear regulatory framework in order to avoid rules that were developed primarily with social media platforms in mind being applied to journalism as well. It is important to look at both the EU and the CoE because they have different foci, as will be seen later in the article. In this sense, this contribution aims to answer the following research question:

RQ:

To what extent are supranational institutions such as the European Union as well as the Council of Europe regulating the use of AI-driven tools in media and journalism?

3 Method

In order to shed light on the specific governance approach at a supranational level, the contribution is based on a two-step methodology, preceded by desk research that identified relevant policy documents on the central topic of AI. The corpus of the analysis is composed of official documents of two supranational institutions: the EU and the CoE. Although the two European institutions share the same fundamental values such as human rights, democracy, and the rule of law, they are two different entities that carry out diverse but complementary roles: The European Union, founded in 1993, is a supranational political and economic union of 27 member states, while the Council of Europe, with 46 member states and thus more than the EU, is an international forum for general debates on Europe, including intergovernmental agreements governed by international law such as the European Convention on Human Rights. The Council of Europe, founded in 1949, is also the oldest interstate organization in Europe.

Table 1: Corpus of documents from the European Union.

Document name | Date of release
Artificial Intelligence for Europe | 25.04.2018
Coordinated Plan on Artificial Intelligence | 25.04.2018
Coordinated Plan on Artificial Intelligence Annex | 07.12.2018
HLEG A definition of AI – Main capabilities and disciplines | 18.12.2018
Building Trust in Human-Centric Artificial Intelligence | 08.04.2019
HLEG Ethics guidelines for trustworthy AI | 08.04.2019
HLEG Policy and investment recommendations for trustworthy AI | 26.06.2019
Liability for AI and other emerging technologies | 27.11.2019
Report on the safety and liability implications of AI | 19.02.2020
White paper on AI – A European approach to excellence and trust | 19.02.2020
HLEG Assessment list for trustworthy AI (ALTAI) | 17.07.2020
HLEG Sectorial recommendations of trustworthy AI | 23.07.2020
European Parliament resolution of 20 October 2020 with recommendation to the Commission on a framework of ethical aspects of AI, robotics and related technologies | 20.10.2020
AI: Presidency issues conclusions on ensuring respect for fundamental rights | 21.10.2020
Fostering a European approach to AI | 21.04.2021
Laying down harmonised rules on AI | 21.04.2021
Laying down harmonised rules on AI – Annex | 21.04.2021
Laying down harmonised rules on AI – Impact Assessment | 21.04.2021
European Parliament resolution of 3 May 2022 on artificial intelligence in a digital age | 03.05.2022
Digital Services Act | 19.10.2022

Table 2: Corpus of documents from the Council of Europe.

Document name | Date of release
Guidelines on Artificial Intelligence and data protection | 25.01.2019
Unboxing Artificial Intelligence: 10 steps to protect Human Rights | 14.05.2019
Recommendation CM/Rec(2020)1 of the Committee of Ministers to member States on the human rights impacts of algorithmic systems | 08.04.2020
Ad hoc Committee on Artificial Intelligence (CAHAI) – Feasibility Study | 17.12.2020
Possible elements of a legal framework on artificial intelligence, based on the Council of Europe’s standards on human rights, democracy and the rule of law (CAHAI) | 16.02.2022

The documents included were published between April 2018 and May 2022, and they range from white papers to resolutions of the Parliament to proposed laws that explicitly refer to AI in the case of the EU, and from recommendations to feasibility studies in the case of the CoE. The time frame is determined by the earliest publication of AI-related documents (by the EU) and the writing of this article. Overall, the sample consists of 24 documents. The reason why the European Union accounts for more documents than the Council of Europe may be that it has in recent years set international benchmarks in regulating technology. It has shown this with the introduction of the General Data Protection Regulation (GDPR), which ensures that personal data can only be gathered under tight conditions, and with the Digital Services Act and the Code of Practice on Disinformation, which fight the dissemination of dis- and misinformation. In addition, many of the documents were intermediate steps in the EU’s overall AI strategy, which ultimately led to the development of the AI Act.

Overall, the investigation carried out a document analysis (Prior, 2003). In a first step of desk research, relevant documents for the study were searched, identified, and selected based on a qualitative assessment of their topical reference to AI. All documents were downloaded from the websites of the two institutions and organized in chronological order to allow for a better understanding of how the policy documents regarding the governance and regulatory frameworks of AI evolved. The study did not apply a sampling process but included all relevant and publicly available policy documents.

We then combined the document analysis with a text mining approach (Ignatow and Mihalcea, 2018). The text mining was carried out on every document in the corpus using Adobe Acrobat’s advanced search function. All documents were analyzed with two objectives: first, to identify the presence of the terms media and journalism, as well as proxies such as public service media, broadcasting, news website, or newspaper; and, second, to identify how AI governance is understood in relation to media and journalism. The first part produced a quantitative output, organized in a spreadsheet, showing how often media and journalism are mentioned in the documents; the second part allowed for a closer qualitative analysis by isolating relevant paragraphs for further analysis.
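For illustration only, the counting step can be reproduced programmatically. The sketch below is not the procedure used in the study, which relied on Adobe’s search function; it assumes hypothetical plain-text exports of the policy documents in a corpus/ folder, and the keyword list mirrors the terms named above.

```python
import re
from collections import Counter
from pathlib import Path

# Keywords from the study: "media", "journalism", and proxies.
KEYWORDS = [
    "media", "journalism", "public service media",
    "broadcasting", "news website", "newspaper",
]

def count_keywords(text: str) -> Counter:
    """Count case-insensitive, whole-word occurrences of each keyword."""
    counts = Counter()
    lowered = text.lower()
    for kw in KEYWORDS:
        # \b keeps "media" from matching inside longer tokens;
        # plural forms such as "newspapers" would need extra handling.
        counts[kw] = len(re.findall(rf"\b{re.escape(kw)}\b", lowered))
    return counts

# Hypothetical layout: one plain-text export per policy document.
results = {path.name: count_keywords(path.read_text(encoding="utf-8"))
           for path in Path("corpus").glob("*.txt")}

for doc, counts in sorted(results.items()):
    print(doc, dict(counts))
```

The quantitative output of such a pass corresponds to the spreadsheet described above; the isolated keyword hits then point to the paragraphs kept for the qualitative analysis.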

The corpus of documents analyzed through text mining is larger than the number of documents that underwent further qualitative analysis, as will be seen. In other words, only the documents with references to (news) media and journalism were kept for the second step, an inductive thematic analysis (Braun and Clarke, 2012), a suitable qualitative method for the textual analysis of documents. The inductive thematic analysis was carried out in two steps as well: First, we coded the topics in the paragraphs relevant to our research. Second, the codes were aggregated into main themes that reflect the major emerging topics. The thematic analysis served to understand the specific trajectories AI governance applies to media and journalism. Text elements that revealed a close relation to the media were then searched for key terms such as “consumers” and “society”, allowing us to examine the policy documents’ lines of argument more closely.
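To make the two coding steps concrete, the following is a minimal sketch of the aggregation logic; the code labels and document references are hypothetical, and the actual coding was carried out qualitatively rather than in software.

```python
from collections import defaultdict

# Step one (hypothetical output): inductive codes attached to paragraphs.
coded_paragraphs = [
    ("European Parliament resolution of 3 May 2022", ["disinformation"]),
    ("CAHAI feasibility study, p. 9", ["filter bubbles", "echo chambers"]),
    ("European Parliament resolution of 20 October 2020",
     ["media plurality", "social responsibility"]),
]

# Step two: aggregate the codes into broader emerging themes.
CODE_TO_THEME = {
    "disinformation": "risks of AI",
    "filter bubbles": "risks of AI",
    "echo chambers": "risks of AI",
    "media plurality": "diversity, plurality, and social responsibility",
    "social responsibility": "diversity, plurality, and social responsibility",
}

themes = defaultdict(set)
for source, codes in coded_paragraphs:
    for code in codes:
        themes[CODE_TO_THEME[code]].add(source)

for theme, sources in themes.items():
    print(theme, "->", sorted(sources))
```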

Ethics statement

This study has been conducted in line with the Ethical Guidelines issued by the Association of Internet Researchers (franzke et al., 2020). The documents are freely available and accessible on the web, and they are explicitly designated for public use.[2] The author has no political ties with the studied organizations. No ethics approval was needed in order to carry out the research.

4 Findings

Media and journalism (not) mentioned in the policy documents

The findings show that the analyzed policy documents dealing with AI rarely refer to media or journalism. From a quantitative perspective, the keyword “media” can be found 20 times across all the analyzed documents of both institutions. It is mentioned in eight EU policy documents and in four of the five CoE policy documents. Turning the analysis to journalism, the numbers drop further: The term appears four times in the Digital Services Act and nowhere else in the EU documents; the CoE documents never refer to journalism. Proxies such as public service media or broadcasting likewise appear only in the Digital Services Act.

In general, the policy documents tend to keep the object of discussion at an abstract level and do not refer to media and journalism. However, the content often remains relevant to media. This can be seen in the extensive definition of artificial intelligence in Art. 3 of the AI Act draft:

For the purpose of this Regulation, the following definitions apply: (1) ‘artificial intelligence system’ (AI system) means software that is developed with one or more of the techniques and approaches listed in Annex I and can, for a given set of human-defined objectives, generate outputs such as content, predictions, recommendations, or decisions influencing the environments they interact with (…)

The definition of AI refers to several technologies that are employed by news organizations, such as automated journalism or news recommenders. Compared to the more general definition of AI adopted in this paper, the EU’s definition focuses on specific components (software) and outputs such as content or recommendations. As will be seen, discussing AI-related issues at an abstract level, while understandable from a regulatory perspective, can cause uncertainty as to whether a specific technology falls under the remit of a more or less strict regulation.

Regulating media, journalism, and AI

Where media are specifically mentioned in the policy documents, they are dealt with through four different themes:

  1. current and potential risks of AI;

  2. educational aspects in terms of data and AI literacy or digital resilience;

  3. social responsibility in order to uphold objective and freely available information, as well as media freedom and plurality; and

  4. media as main stakeholders that need to be included in the policy debate about AI governance.

When it comes to current or potential risks related to AI and media, the policy documents mainly refer to problems such as disinformation, hybrid warfare, or synthetic media:

Notes, in particular, that AI technology may entail potential risks as a means of pursuing various forms of hybrid warfare and foreign interference; specifies that it could for instance be mobilized to trigger disinformation, by using bots or fake social media accounts, to weaponize interdependence, by gathering valuable information or denying network access to adversaries, to create disturbances in the economic and financial systems of other countries, to pollute the political debate and favor extremist groups, or to manipulate elections to destabilize democracies (European Parliament resolution of 3 May 2022);

underlines that, if not regulated, it might also have ethically adverse effects by exploiting biases in data and algorithms that may lead to disseminating disinformation and creating information bubbles. (European Parliament resolution of 20 October 2020)

Furthermore, content is increasingly being “faked” by producing synthetic media footage, e. g. by mimicking real people’s appearance or voice using so called “deep fakes”. Such technology is already able to manipulate or generate visual and audio content with an unprecedented potential to deceive and to blur the line between real and fake content. (Council of Europe – CAHAI – Feasibility study)

The policy documents also tend to identify risks related to AI such as bots, deep fakes, targeted advertising, hate speech, or fake accounts. Documents by the CoE instead mention the risks of reinforcing outdated social norms such as gender-based stereotypes, or fueling polarization and extremism through the creation of “echo chambers” and “filter bubbles” (CAHAI, 2020, p. 9). While digital intermediaries are frequently mentioned in relation to risks, traditional news media appear less often, at least in the documents of the EU. This is surprising given the media’s central role for democracies, especially now that news media increasingly adopt personalization strategies such as news recommenders that bear significant risks for the diversity and plurality of the news ecosystem.

The second theme includes educational aspects in terms of data and AI literacy or digital resilience. This is one of the contexts in which media appear most often, emphasizing that the increasing use of AI in media and society requires additional media education to foster digital resilience as well as data and AI skills. The resolutions of the European Parliament repeatedly refer to the centrality of digital education, which should raise awareness of “aspects of daily life potentially affected by machine learning” (European Parliament, 2022), such as recommendation engines, targeted advertising, and social media algorithms. In this sense, the European Parliament suggests fostering digital resilience, but in order to do so the EU needs to improve media education both in terms of AI skills and AI literacy courses for all citizens:

Calls on the Commission to create an AI skills framework for individuals, building on the digital competence framework, to provide citizens, workers and businesses with relevant AI training and learning opportunities and improve the sharing of knowledge, best practices, and media and data literacy between organizations and companies at both EU and national level; (…) (European Parliament resolution of 3 May 2022)

The third theme focuses on questions of social responsibility, diversity, independence, media plurality, and the importance of objective and freely available information. The documents deliberate on the increasing impact of AI on information dissemination to users, which endangers fundamental rights such as freedom of expression and information as well as media freedom and pluralism. These considerations come closest to the discussion about the way news media are currently using AI, because they touch upon central tenets such as the role of the media in society and democracy:

Socially responsible artificial intelligence, robotics and related technologies, including the software, algorithms and data used or produced by such technologies, can be defined as technologies which contribute to find solutions that safeguard and promote different aims regarding society, most notably democracy, health and economic prosperity, equality of opportunity, workers’ and social rights, diverse and independent media and objective and freely available information, allowing for public debate, quality education, cultural and linguistic diversity, gender balance, digital literacy, innovation and creativity. (European Parliament resolution of 20 October 2020)

The CoE is particularly concerned about the conflict between media and fundamental rights such as freedom of speech or media freedom. According to the feasibility study carried out by the Ad Hoc Committee on Artificial Intelligence, “the use of AI systems – both online and offline – can impact individuals’ freedom of expression and access to information, as well as the freedom of assembly and association” (CAHAI, 2020, p. 8). The main worry expressed by the Committee is that AI systems intervene in and alter information offerings and interactions in the media space, leading to decreasing (news) media consumption. Moreover, the report refers to issues like recommendation or aggregation systems, pointing out their non-transparency and lack of accountability:

recommendation systems and news aggregators are often non-transparent and unaccountable, both concerning the data they use to select or prioritise content, but also as concerns the purpose of the specific selection or prioritization which they can use for financial and political interest promotion. (CAHAI, 2020, p. 9)

The feasibility study also points out that AI can have economic implications for news media: If the technology leads to reduced news media consumption, combined with the constant rise of social media as the main gateway to news, this could worsen the already shaky economic situation of traditional news media, with all the dysfunctional consequences for a free, independent, and pluralistic media ecosystem – even though AI is often heralded as a savior for both journalism and media.

The last theme concerns the media’s role as stakeholders in the public deliberations when it comes to the opportunities and risks of artificial intelligence. Media have repeatedly been consulted, for example in relation to the Coordinated Plan on Artificial Intelligence or the Digital Services Act. The CoE mentions the important role of the media in a deliberative process about

the deployment of AI systems in the public sector, with special attention to the inclusion of under-represented and vulnerable individuals and groups, which is key to ensuring trust in the technology and its acceptance by all stakeholders. (CAHAI, 2020, p. 39)

These reflections are also integrated into the Recommendation of the CoE’s Committee of Ministers to member States on the human rights impacts of algorithmic systems of 2020. Overall, the media are seen as a relevant stakeholder in the public hearings and deliberations concerning AI. This is in line with some of the scholarly work produced so far that calls for a stronger presence of the media. Buhmann and Fieseler (2021), for instance, have developed the idea of a deliberative and bottom-up framework for the responsible use of AI. However, even if the institutions’ procedures apply a multi-stakeholder approach that includes different actors such as tech companies, civil society, and news media in a more “bottom-up identification, interpretation, and problematization” (idem, p. 6) of the issue, the actual participation of news media has only recently increased: While news media were missing from the public consultations on the EU’s AI White Paper, they are now more present in the case of the AI Act (e. g., European Broadcasting Union [EBU], 2022). The fact that traditional news media are less involved in the public consultations (for instance on the significant risks to fundamental rights) than heavyweight companies such as Google or Microsoft might also explain why journalism is hardly ever mentioned in the policy documents.

The perils of AI regulation in media and journalism

Having analyzed how media and journalism are thematized in the policy documents, this section looks at specific regulatory plans. To keep the analysis timely, the paper focuses on the most recent policy documents: the EU’s AI Act and the CoE’s Recommendation of the Committee of Ministers to member States on the human rights impacts of algorithmic systems, together with the Guidelines on Artificial Intelligence and Data Protection.

The AI Act draft is grounded in a risk-based approach that ranges from unacceptable applications of AI (social scoring or applications that try to manipulate human behavior), to high risk (AI systems used in critical infrastructures, law enforcement, democratic processes, etc.), limited risk (e. g., chatbots, where transparency issues are relevant), and minimal risk (AI-enabled video games or spam filters that do not pose any particular risk for users). According to the EU’s regulatory draft, most AI systems will fall into the last category, where no specific regulatory interventions are required. Due to their sensitivity, high-risk AI technologies need to satisfy specific risk management measures, anticipating the dangers and potentially dysfunctional effects of their systems. These measures apply not only to the design of these systems, but also to their use, making sure that the tools are used “‘in the right way’ (meaning: in compliance with the legal requirements and standards of human-centric AI that respects fundamental rights and European values), for example through the adequate design of human interfaces” (Helberger and Diakopoulos, 2022, p. 2).
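As a purely illustrative summary of the draft’s logic, the four tiers can be sketched as a simple classification. The example mapping below merely restates examples named in the draft; an actual classification would follow from the conformity assessment discussed next, not from a lookup table.

```python
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "prohibited, e.g. social scoring, behavioral manipulation"
    HIGH = "strict requirements, e.g. critical infrastructure, law enforcement"
    LIMITED = "transparency obligations, e.g. chatbots"
    MINIMAL = "no specific obligations, e.g. video games, spam filters"

# Hypothetical newsroom examples, following the draft's own examples.
NEWSROOM_EXAMPLES = {
    "news chatbot": RiskTier.LIMITED,
    "comment spam filter": RiskTier.MINIMAL,
}

for use_case, tier in NEWSROOM_EXAMPLES.items():
    print(f"{use_case}: {tier.name} ({tier.value})")
```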

Ultimately, it is up to the European Union to assess the risk level of AI technology used in news media. The actual decision-making process includes a conformity assessment prior to market entry in order to check whether the system complies with the requirements set forth in the EU AI Act. Only after the AI system has been registered in an EU database and a declaration of conformity has been signed can it be placed on the market. Thereafter, the “authorities are in charge of the market surveillance, users ensure human oversight and monitoring, while providers have a post-market monitoring system in place. Providers and users will also report serious incidents and malfunctioning” (European Commission, n.d.).

At the moment it seems that most technologies used in news media might fall into either the limited or the minimal risk category, in particular because the use-cases where AI is adopted vary considerably, ranging, for example, from face recognition to automated transcription to recommendation systems. However, according to the European Broadcasting Union (EBU, 2022), some of these applications might well fall into the high-risk category, threatening the legitimate use of AI systems in the news industry. The EBU refers to automated journalism, where AI systems produce texts or images; to biometric categorization and identification of people through face recognition in video materials; and to the issue of transparency, asking for light-touch rules, since stricter ones could impair the user experience. As pointed out above, this is an example of where the general formulation of the draft leaves many open questions as to whether these technologies will be regulated in the same way, even if they are adopted in the media. This uncertainty becomes more pressing as the issue emerges at the intersection of different fundamental rights such as freedom of expression, press freedom, and the right to privacy.

The news media sector is particularly sensitive because it is an area that enjoys particular protection from statutory intervention. The European Convention on Human Rights (ECHR) protects not only the so-called substance of expressed ideas: According to the European Court of Human Rights, Article 10 of the Convention also protects the form in which ideas are conveyed as well as their dissemination (ECHR, 2022). In other words, interfering with the form in which journalistic content is produced runs against press freedom, as the common understanding is that journalists must be able to deal with specific events or people as they see fit. Hence, under the current regulatory frameworks it might be difficult to intervene in the media sector since “neither national nor supranational courts or regulatory authorities may tell the press what techniques of reporting should be adopted by journalists” (Helberger et al., 2019, p. 15).

The EU’s plan to evaluate the algorithms driving these systems through a conformity assessment, in particular, might eventually clash with the press freedom guaranteed by the ECHR. The current draft also provides for the establishment of a European Artificial Intelligence Board (EAIB) as a new enforcement body at the Union level. Such a regulatory authority might potentially interfere with the media’s freedom to impart information and ideas without interference by public authority, in particular because the form of dissemination is protected as well.

In the case of the CoE, the recommendations focus on the promotion and support of digital and information literacy in order to enable a critical use and awareness of algorithmic systems. In addition, the Council focuses on data protection measures and mentions a specific responsibility of media regulators in ensuring compliance with data protection laws as well as ensuring access to data, for example for researchers. This is not really a new request, given that the European Commission already stressed it in its White Paper on Artificial Intelligence with regard to privacy and the protection of personal data used by AI-enabled products and services. But it shows how the CoE is concerned with the protection of users’ rights: from the promotion of critical digital literacy skills to the protection of personal data and the regulation of (improper) data exploitation and commercialization. In a similar vein, the EU’s Digital Services Act strongly focuses on consumer and data protection as well as non-discrimination. Overall, the regulatory frameworks of both the EU and the CoE demonstrate that the primary focus lies on subjects such as data protection, digital rights, and the development of, as well as respect for, ethical standards for a responsible use of AI technology.

While media do appear occasionally, media consumers remain almost invisible in the documents, a finding confirmed by Helberger and Diakopoulos (2022, p. 3). This deviates from other regulatory approaches such as the Digital Services Act, where consumers have a different standing in terms of their rights (see also Clegg, 2021). Most paragraphs mentioning consumers are written without reference to specific industries, which makes it difficult to understand what kind of responsibility media organizations have towards users. There are, however, two areas in which consumer rights play a role. The first is transparency: Art. 52 of the AI Act draft states that “providers shall ensure that AI systems intended to interact with natural persons are designed and developed in such a way that natural persons are informed that they are interacting with an AI system, unless this is obvious from the circumstances and the context of use.” Some news organizations do have such systems, for instance in the form of chatbots. Under the new regulation, news media would not only be obliged to inform users about the AI technology they are interacting with; in this particular case the media are also providers of the service, with all the legal responsibilities that come with such a tool (even if they adapt a third-party algorithm to their requirements). The second area concerns the collaboration between private actors and consumer associations “on the design, development, ongoing deployment and evaluation of algorithmic systems” (CoE, 2020, Art. 4.5), stressing once again a multi-stakeholder governance approach.[3]
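As a purely illustrative reading of the Art. 52 obligation quoted above, the disclosure could be built into the interaction layer itself. The function below is a hypothetical sketch, not part of any actual news organization’s system:

```python
AI_DISCLOSURE = "You are chatting with an automated assistant, not a human."

def open_chat_transcript(obvious_from_context: bool = False) -> list:
    """Start a chat transcript that satisfies an Art. 52-style duty.

    The draft exempts cases where the AI nature is obvious 'from the
    circumstances and the context of use'; the flag models that exemption.
    """
    transcript = []
    if not obvious_from_context:
        transcript.append(AI_DISCLOSURE)
    return transcript

# Prints the disclosure as the first line of a new chat session.
print(open_chat_transcript())
```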

In relation to more systemic challenges of AI technology used in media and journalism, the AI Act as well as the documents elaborated by the CoE do hint at the risks for society, but other policy documents such as the Digital Services Act (Art. 26) are more specific in terms of automated content or recommender systems. When it comes to societal aspects, the CoE focuses on democratic participation and awareness about the power and impact of algorithmic systems. In Art. B1.3 of the Recommendation to member States on the human rights impacts of algorithmic systems, the Committee of Ministers states that

all relevant actors, including those in the public, private and civil society sectors in which algorithmic systems are contemplated or are in use, should promote, encourage and support in a tailored and inclusive manner (taking account of diversity with respect to, for instance, age, gender, race, ethnicity, cultural or socio-economic background) a level of media, digital and information literacy that enables the competent and critical consideration of and use of algorithmic systems.

These principles go hand in hand with the growing body of evidence that media scholars have collected about the societal risks of AI used in media and journalism: rising digital inequality and exclusion, polarization, and the spread of disinformation, to name just a few (Balkin, 2017; Napoli, 2019). What the policy documents of both institutions have omitted so far is a specific focus on the different areas of journalism where AI systems could be employed, that is, news gathering, production, and dissemination. However, this situation might change in due course, given that the CoE’s Committee of Experts on Increasing Resilience of Media is currently developing specific guidelines on the use of AI tools in journalism. The guidelines are expected to be published by the end of the Committee’s mandate in December 2023.

5 Conclusions

This study aimed to analyze to what extent the use of AI-driven tools in media and journalism is regulated at a supranational level. It looked at the EU and the CoE, focusing mainly on recent policy documents such as the AI Act and the Council’s recommendations regarding algorithmic systems, human rights, and data protection. Media and journalism are not frequently mentioned in the policy documents, as the text mining shows. Yet the thematic analysis allowed us to identify four main topics where media play a central role: (a) risks such as disinformation; (b) data and AI literacy; (c) issues of diversity, plurality, and social responsibility; and (d) the media’s role as stakeholders in the public deliberation process. While the European Commission (2021) wants to “spearhead the development of new ambitious global norms” related to AI technology, the regulatory approach still presents uncertainties. Without a doubt, the new AI Act will set a new benchmark, but there are (at least) three open questions in relation to media and journalism.

First, the regulatory frameworks tend to focus on the algorithms of platforms. This is due to the fact that platforms are dominant economic players in the media industry and have “become a precondition for digital citizens to participate and function in the Algorithmic Society” (Helberger and Diakopoulos, 2022). As such, they reshape the political economy of national and regional media and force them to recalibrate their position in the public space (Van Dijck et al., 2018). Not only does this create conflicts between national interests and the (co-)regulation of powerful global players, as stated in the European democracy action plan[4], but it also eclipses the pervasiveness of AI tools used in news media and their impact on public communication.

Second, the media are a sensitive field where different fundamental rights such as press freedom, freedom of expression, and the right to privacy intersect. For instance, according to Article 10 ECHR, new technology needs to promote “the societal and democratic role of the media, and respects freedom of expression rights of users and competing media providers” (Helberger et al., 2020). However, the news media’s concerns might often be overstated, as the case of transparency exemplifies. The EBU (2022) recently argued for less strict transparency rules on the grounds that they would diminish the user experience. This line of argumentation clashes with the fact that the relation between AI-driven tools and users is not always clear and transparent in the media. This might require “rethinking how to respect users’ rights to privacy, to form opinions and to non-discrimination” (Helberger et al., 2020). Increased pressure on both news media and designers to be more open about their algorithms – something news media have been struggling with for quite a while (Montal and Reich, 2017) – would allow users to better evaluate how AI systems function.

Following from the second point, it seems that governance issues regarding the use of AI in journalism and media may be best tackled through an integrated, multi-level, and multi-stakeholder approach, not only because the media are a sensitive field in terms of their specific freedoms – the European Convention on Human Rights also protects the form in which information is conveyed – but also because journalism might need specific adaptations of more general rules. The EU, however, adopts a broad approach in this relatively new field, although the consequences of the AI Act for the news media sector are far from clear. The CoE, on the other hand, is working on different levels: a first, broad approach through general guidelines (which the CAI is currently working on), and much more specific guidelines through its Committee of Experts on Increasing Resilience of Media, which is drafting guidelines on the use of digital tools, including artificial intelligence, for journalism.

But which institution is better equipped to set up a framework in the area of media and journalism? Currently, in the light of its transversal focus on human rights, democracy, and the rule of law, the CoE’s approach to regulating AI seems particularly suitable because it tries to combine more general frameworks with detailed guidelines for the use of AI in journalism. The EU’s AI Act, in turn, offers a fruitful framework in terms of its risk-based approach, not only for law enforcement, but also because not all AI systems (in journalism) pose the same risks. It will be interesting to see how and to what extent the two institutions, in concert, will set legally binding international standards for AI regulation.

One of the biggest challenges is that these regulatory demands can clash with professional journalistic values. Transparency is a good example: source protection and the disclosure of data sources can provoke conflicts. How should news media and journalists behave in such cases, given that the regulatory framework does not offer any specific answers? This leads to a third open question, namely the need for further self-regulation and the elaboration of ethical principles and guidelines within newsrooms. Ethical reflections regarding AI technology are still not a top priority in news media (Porlezza and Ferri, 2022), and most ethics codes still focus on traditional routines (Díaz-Campo and Segado-Boj, 2015). The increasing automation of journalism calls for discussions about the impact of AI technology on news work, and for specific guidelines for both the design and the use of algorithms – a request the CoE has already formulated in its Recommendation on Self-Regulation Concerning Cyber Content[5] – in particular because media and journalism bear a social responsibility for the way news is produced and disseminated, which means avoiding at all costs issues such as digital divides, marginalization, polarization, and disinformation.

Acknowledgements

The author would like to thank Laura Pranteddu and Petra Mazzoni for their help and engagement in the project. Last but not least, I would also like to thank the anonymous reviewers for their constructive feedback.

Funding: This research was funded by a grant from the Swiss Federal Office of Communication.

References

Ad Hoc Committee on Artificial Intelligence. (2020). Feasibility study. https://rm.coe.int/cahai-2020-23-final-eng-feasibility-study-/1680a0c6da

Balkin, J. M. (2017). Free speech in the algorithmic society: Big data, private governance, and new school speech regulation. UC Davis Law Review, 51, 1149–1210. https://doi.org/10.2139/ssrn.3038939

Beckett, C. (2019). New powers, new responsibilities: A global survey of journalism and artificial intelligence. London School of Economics. https://drive.google.com/file/d/1utmAMCmd4rfJHrUfLLfSJ-clpFTjyef1/view

Bietti, E. (2020). From ethics washing to ethics bashing. In Association for Computing Machinery (Ed.), Proceedings of the 2020 Conference on Fairness, Accountability, and Transparency (pp. 210–219). Association for Computing Machinery. https://doi.org/10.1145/3351095.3372860

Braun, V., & Clarke, V. (2012). Thematic analysis. In H. Cooper, P. M. Camic, D. L. Long, A. T. Panter, D. Rindskopf, & K. J. Sher (Eds.), APA handbook of research methods in psychology, Vol. 2. Research designs: Quantitative, qualitative, neuropsychological, and biological (pp. 57–71). American Psychological Association. https://doi.org/10.1037/13620-004

Brennen, S. J., Howard, P. H., & Kleis Nielsen, R. (2018). An industry-led debate: How UK media cover artificial intelligence. Reuters Institute for the Study of Journalism.

Bucher, T. (2018). If… then: Algorithmic power and politics. Oxford University Press.

Buhmann, A., & Fieseler, C. (2021). Towards a deliberative framework for responsible innovation in artificial intelligence. Technology in Society, 64, 101475. https://doi.org/10.1016/j.techsoc.2020.101475

Bunz, M., & Braghieri, M. (2022). The AI doctor will see you now: Assessing the framing of AI in news coverage. AI & Society, 37(1), 9–22. https://doi.org/10.1007/s00146-021-01145-9

Carlson, M. (2015). The robotic reporter. Digital Journalism, 3(3), 416–431. https://doi.org/10.1080/21670811.2014.976412

Cath, C. (2018). Governing artificial intelligence: Ethical, legal and technical opportunities and challenges. Philosophical Transactions of the Royal Society A: Mathematical, Physical and Engineering Sciences, 376(2133), 20180080. https://doi.org/10.1098/rsta.2018.0080

Cheney-Lippold, J. (2017). We are data. New York University Press. https://doi.org/10.2307/j.ctt1gk0941

Chuan, C. H., Tsai, W. H. S., & Cho, S. Y. (2019, January). Framing artificial intelligence in American newspapers. In Association for Computing Machinery (Ed.), Proceedings of the 2019 AAAI/ACM Conference on AI, Ethics, and Society (pp. 339–344). https://doi.org/10.1145/3306618.3314285

Clegg, N. (2021). You and the algorithm: It takes two to tango. Medium. https://nick-clegg.medium.com/you-and-the-algorithm-it-takes-two-to-tango-7722b19aa1c2

Cools, H., Van Gorp, B., & Opgenhaffen, M. (2022). Where exactly between utopia and dystopia? A framing analysis of AI and automation in US newspapers. Journalism, online first. https://doi.org/10.1177/14648849221122647

Couldry, N., & Mejias, U. A. (2020). The costs of connection: How data are colonizing human life and appropriating it for capitalism. Stanford University Press. https://doi.org/10.1515/9781503609754

Crawford, K. (2021). The atlas of AI: Power, politics, and the planetary costs of artificial intelligence. Yale University Press. https://doi.org/10.12987/9780300252392

Dafoe, A. (2018). AI governance: A research agenda. University of Oxford. https://www.fhi.ox.ac.uk/wp-content/uploads/GovAI-Agenda.pdf

Diakopoulos, N. (2019). Automating the news: How algorithms are rewriting the media. Harvard University Press. https://doi.org/10.4159/9780674239302

Díaz-Campo, J., & Chaparro-Domínguez, M. Á. (2020). Computational journalism and ethics: An analysis of deontological codes of Latin America. Revista ICONO14. Revista científica de Comunicación y Tecnologías emergentes, 18, 10–32. https://doi.org/10.7195/ri14.v18i1.1488

Díaz-Campo, J., & Segado-Boj, F. (2015). Journalism ethics in a digital environment: How journalistic codes of ethics have been adapted to the Internet and ICTs in countries around the world. Telematics and Informatics, 32(4), 735–744. https://doi.org/10.1016/j.tele.2015.03.004

Dörr, K. N., & Hollnbuchner, K. (2017). Ethical challenges of algorithmic journalism. Digital Journalism, 5(4), 404–419. https://doi.org/10.1080/21670811.2016.1167612

Epstein, Z., Payne, B. H., Shen, J. H., Dubey, A., Felbo, B., Groh, M., Obradovich, N., Cebrian, M., & Rahwan, I. (2018). Closing the AI knowledge gap. arXiv. https://doi.org/10.48550/arXiv.1803.07233

European Broadcasting Union. (2022). AI Act: High-risk AI systems need more nuance. https://www.ebu.ch/news/2022/09/ai-act-high-risk-ai-systems-need-more-nuance

European Commission. (n.d.). Excellence and trust in artificial intelligence. Retrieved June 7, 2023, from https://commission.europa.eu/strategy-and-policy/priorities-2019-2024/europe-fit-digital-age/excellence-and-trust-artificial-intelligence_en

European Commission. (2018). Artificial intelligence: Commission kicks off work on marrying cutting-edge technology and ethical standards [Press release]. https://ec.europa.eu/commission/presscorner/detail/en/IP_18_1381

European Commission. (2021). Fostering a European approach to artificial intelligence. COM (2021) 205 final.

European Court of Human Rights. (2022). Guide on Article 10 of the European Convention on Human Rights. https://www.echr.coe.int/documents/guide_art_10_eng.pdf

European Parliament. (2022). Resolution of 3 May 2022 on artificial intelligence in a digital age. https://www.europarl.europa.eu/doceo/document/TA-9-2022-0140_EN.html

Fast, E., & Horvitz, E. (2017). Long-term trends in the public perception of artificial intelligence. Proceedings of the AAAI Conference on Artificial Intelligence, 31(1). https://doi.org/10.1609/aaai.v31i1.10635

Floridi, L., Cowls, J., Beltrametti, M., Chatila, R., Chazerand, P., Dignum, V., Luetge, C., Madelin, R., Pagallo, U., Rossi, F., Schafer, B., Valcke, P., & Vayena, E. (2018). AI4People – An ethical framework for a good AI society: Opportunities, risks, principles, and recommendations. Minds and Machines, 28(4), 689–707. https://doi.org/10.1007/s11023-018-9482-5

Ford, H., & Hutchinson, J. (2019). Newsbots that mediate journalist and audience relationships. Digital Journalism, 7(8), 1013–1031. https://doi.org/10.1080/21670811.2019.1626752

franzke, a. s., Bechmann, A., Zimmer, M., Ess, C., & the Association of Internet Researchers (2020). Internet research: Ethical guidelines 3.0. https://aoir.org/reports/ethics3.pdf

Gianni, R., Lehtinen, S., & Nieminen, M. (2022). Governance of responsible AI: From ethical guidelines to cooperative policies. Frontiers in Computer Science, 4, 873437. https://doi.org/10.3389/fcomp.2022.873437

Gunkel, D. J. (2020). An introduction to communication and artificial intelligence. Polity Press.

Gutierrez Lopez, M., Porlezza, C., Cooper, G., Makri, S., MacFarlane, A., & Missaoui, S. (2022). A question of design: Strategies for embedding AI-driven tools into journalistic work routines. Digital Journalism, 11(3), 484–503. https://doi.org/10.1080/21670811.2022.2043759

Hagendorff, T. (2020). The ethics of AI ethics: An evaluation of guidelines. Minds and Machines, 30, 99–120. https://doi.org/10.1007/s11023-020-09517-8

Helberger, N. (2019). On the democratic role of news recommenders. Digital Journalism, 7(8), 993–1012. https://doi.org/10.1080/21670811.2019.1623700

Helberger, N., & Diakopoulos, N. (2022). The European AI Act and how it matters for research into AI in media and journalism. Digital Journalism, online first. https://doi.org/10.1080/21670811.2022.2082505

Helberger, N., Eskens, S., van Drunen, M., Bastian, M., & Moeller, J. (2019). Implications of AI-driven tools in the media for freedom of expression. Institute for Information Law (IViR). https://rm.coe.int/coe-ai-report-final/168094ce8f

Helberger, N., van Drunen, M., Eskens, S., Bastian, M., & Moeller, J. (2020). A freedom of expression perspective on AI in the media – with a special focus on editorial decision making on social media platforms and in the news media. European Journal of Law and Technology, 11(3). https://ejlt.org/index.php/ejlt/article/view/752

Helberger, N., van Drunen, M., Moeller, J., Vrijenhoek, S., & Eskens, S. (2022). Towards a normative perspective on journalistic AI: Embracing the messy reality of normative ideals. Digital Journalism, 10(10), 1605–1626. https://doi.org/10.1080/21670811.2022.2152195

Ignatow, G., & Mihalcea, R. (2018). An introduction to text mining. Sage.

Jobin, A., Ienca, M., & Vayena, E. (2019). The global landscape of AI ethics guidelines. Nature Machine Intelligence, 1, 389–399. https://doi.org/10.1038/s42256-019-0088-2

Latzer, M., & Just, N. (2020). Governance by and of algorithms on the internet: Impact and consequences. In Oxford research encyclopedia of communication. Oxford University Press. https://doi.org/10.1093/acrefore/9780190228613.013.904

Lewis, S. C., Guzman, A. L., & Schmidt, T. R. (2019). Automation, journalism, and human-machine communication: Rethinking roles and relationships of humans and machines in news. Digital Journalism, 7(4), 409–427. https://doi.org/10.1080/21670811.2019.1577147

Mittelstadt, B. D., Allo, P., Taddeo, M., Wachter, S., & Floridi, L. (2016). The ethics of algorithms: Mapping the debate. Big Data & Society, 3(2). https://doi.org/10.1177/2053951716679679

Montal, T., & Reich, Z. (2017). I, robot. You, journalist. Who is the author? Authorship, bylines and full disclosure in automated journalism. Digital Journalism, 5(7), 829–849. https://doi.org/10.1080/21670811.2016.1209083

Monti, M. (2018). Automated journalism and freedom of information: Ethical and juridical problems related to AI in the press field. Opinio Juris in Comparatione, 1(1), 1–17.

Napoli, P. (2019). Social media and the public interest: Media regulation in the disinformation age. Columbia University Press. https://doi.org/10.7312/napo18454

Nguyen, D., & Hekman, E. (2022). The news framing of artificial intelligence: A critical exploration of how media discourses make sense of automation. AI & Society. https://doi.org/10.1007/s00146-022-01511-1

Pasquale, F. (2015). The black box society: The secret algorithms that control money and information. Harvard University Press. https://doi.org/10.4159/harvard.9780674736061

Porlezza, C. (2019, May 24). Artificial intelligence: Utopia or dystopia? A comparative study of news frames of two AI milestone events [Conference presentation]. Human-Machine Communication ICA Pre-Conference, Washington DC, United States.

Porlezza, C. (2020). Ethische Herausforderungen eines automatisierten Journalismus [Ethical challenges of automated journalism]. In N. Köberer, M. Prinzing, & B. Debatin (Eds.), Kommunikations- und Medienethik – reloaded? (pp. 143–158). Nomos. https://doi.org/10.5771/9783748905158-143

Porlezza, C. (2022). Switzerland, algorithms and the news: A small country looking for global solutions. In J. Meese & S. Bannerman (Eds.), The algorithmic distribution of news (pp. 233–250). Palgrave Macmillan. https://doi.org/10.1007/978-3-030-87086-7_12

Porlezza, C., & Eberwein, T. (2022). Uncharted territory: Datafication as a challenge for journalism ethics. In S. Diehl, M. Karmasin, & I. Koinig (Eds.), Media and change management (pp. 343–362). Springer. https://doi.org/10.1007/978-3-030-86680-8_19

Porlezza, C., & Ferri, G. (2022). The missing piece: Ethics and the ontological boundaries of automated journalism. #ISOJ Journal – The Journal of the International Symposium on Online Journalism, 12(1), 71–98.

Prior, L. (2003). Using documents in social research. Sage. https://doi.org/10.4135/9780857020222

Radu, R. (2021). Steering the governance of artificial intelligence: National strategies in perspective. Policy and Society, 40(2), 178–193. https://doi.org/10.1080/14494035.2021.1929728

Russell, S., & Norvig, P. (2009). Artificial intelligence: A modern approach. Prentice Hall.

Russell, S., & Norvig, P. (2010). Artificial intelligence: A modern approach. Prentice Hall.

Schapals, A. K., & Porlezza, C. (2020). Mastering the robots: Assessing the impact of newsroom automation on journalistic role conceptions. Media & Communication, 8(3), 16–26. https://doi.org/10.17645/mac.v8i3.3054

Strömbäck, J., & Karlsson, M. (2011). Who’s got the power? Journalism Practice, 5(6), 643–656. https://doi.org/10.1080/17512786.2011.592348

Taeihagh, A. (2021). Governance of artificial intelligence. Policy and Society, 40(2), 137–157. https://doi.org/10.1080/14494035.2021.1928377

Thurman, N., Lewis, S. C., & Kunert, J. (2019). Algorithms, automation, and news. Digital Journalism, 7(8), 980–992. https://doi.org/10.1080/21670811.2019.1685395

Ulnicane, I., Knight, W., Leach, T., Stahl, B. C., & Wanjiku, W. G. (2021). Framing governance for a contested emerging technology: Insights from AI policy. Policy and Society, 40(2), 158–177. https://doi.org/10.1080/14494035.2020.1855800

Valcke, P., & Hendrickx, V. (2023, January 25). The Council of Europe’s road towards an AI Convention: Taking stock. Law, Ethics & Policy of AI Blog, KU Leuven. https://www.law.kuleuven.be/ai-summer-school/blogpost/Blogposts/AI-Council-of-Europe-draft-convention

van Dijck, J., Poell, T., & de Waal, M. (2018). The platform society: Public values in a connective world. Oxford University Press. https://doi.org/10.1093/oso/9780190889760.001.0001

Vrijenhoek, S., Kaya, M., Metoui, N., Möller, J., Odijk, D., & Helberger, N. (2021). Recommenders with a mission: Assessing diversity in news recommendations. In Proceedings of the 2021 Conference on Human Information Interaction and Retrieval (CHIIR ’21) (pp. 173–183). Association for Computing Machinery. https://doi.org/10.1145/3406522.3446019

Zamith, R. (2019). Algorithms and journalism. In H. Örnebring, Y. Y. Chan, M. Carlson, S. Craft, M. Karlsson, H. Sjøvaag, & H. Wasserman (Eds.), Oxford encyclopedia of journalism studies. Oxford University Press. https://doi.org/10.1093/acrefore/9780190228613.013.779

Zhang, B., & Dafoe, A. (2019). Artificial intelligence: American attitudes and trends. University of Oxford. https://doi.org/10.2139/ssrn.3312874

Published Online: 2023-06-23
Published in Print: 2023-08-24

© 2023 by the authors, published by De Gruyter.

This work is licensed under a Creative Commons Attribution 4.0 International License.
