1 Introduction

Many definitions of creativity arguably tend toward anthropocentric conceptions, associating it with features such as value, flair (Gaut 2018), intuition, and impact on the wider society (Kaufman and Beghetto 2009), in addition to novelty. Some have recognized the need for a more inclusive and less human-centric account of creativity in order to avoid disciplinary biases and facilitate a broader understanding of this concept (Jordanous and Keller 2016). Indeed, a lack of commonly-agreed interpretations of creativity hinders our ability to assess and evaluate the creativity possessed by different systems and to compare them on the basis of their creative capacities (Jordanous 2011).

The technological revolution we are living through makes this need even more urgent. The rapid development of machine learning systems over the last decades, and the improvement of their performance across multiple fields, has provoked an upsurge of debate about whether these systems can reach human-level performance and, if so, what would distinguish us from them. It is thus worth exploring how to measure the distance, if any, that separates the performance of machine learning systems from human creativity.

In this paper I aim to address this need by suggesting a framework to situate and assess instances of natural and artificial creativity that does not rely on external factors but only on the inner structure of the creative system itself. As a result, this model can be usefully applied to analyze disputed instances of creativity, for example when discussing the possibility that artificial or non-human animal systems are creative.

I open the paper by providing a quick overview of the literature on creativity, considering where scholars stand in relation to the ascription of ordinary or exceptional features to this concept. I then identify important features of creativity: problem-solving, evaluation, and naivety. On the basis of these components, I tentatively put forward a framework to situate and measure creativity. I close the paper by discussing some examples to show how the proposed model can be used to assess human, animal, and artificial creativity.

The end goal of the paper is to argue that the concept of creativity, albeit anthropogenic, need not be anthropocentric (Guckelsberger et al. 2017). Through a less discipline-oriented and human-biased view of creativity we could better understand contested cases of creativity and learn how to increase our own, human, creativity.

2 Degrees of measurability

‘Creativity’ is a polysemous notion and the discussions that concern creativity’s nature reflect the different meanings that this word can assume. On the one hand, this has the benefit of adding to the richness of the debates. On the other, it arguably has the consequence of restraining the possibility of finding a consensus about what creativity is.

In the history of the concept, the ascription of mysterious and unmeasurable features to creativity starts with the Platonic account, namely the idea that creativity is fundamentally unexplainable and results from divine inspiration (Gaut 2012: 259; Kronfeldner 2009: 581–582; Stokes and Paul 2016: 4). The interpretation of creativity as an unexplainable and mysterious property found a fertile ground in the Romantic tradition (Bridy 2012; Sawyer 2012). In the Romantic view, creativity is the mark of genius and the ‘creative person’ is an extraordinary individual - usually a painter, a poet, or a musician - who possesses the mysterious gift of creating something that ordinary people are not capable of. The irrationalist view of creativity as a phenomenon the causes and mechanisms of which cannot be described nor measured is shared also by later thinkers, such as Kant (Pluhar 1987), Schopenhauer, Nietzsche (Gaut 2012: 259–260), Hausman (Kronfeldner 2009: 582), and by Miller through his view on creative geniuses (Miller 1996).Footnote 1

Although no longer widely shared in the literature (Elster 2000), the concept of creativity as a property of genius that produces eminent products arguably lives on in more modern anthropocentric conceptions of creativity (O'Hear 1995) and is contrasted with more ordinary forms of creativity (Amabile 1996; Kaufman and Beghetto 2009; Sternberg and Lubart 1995). Historiometric studies - the examination of the lives and products of creative individuals as a method to measure and analyze creativity (Sternberg 1999: 116) - support the idea that the processes behind very creative products can be explained in terms of ordinary cognitive processes. Scientific discoveries and works of art that seem inexplicably creative to the vast majority of people can instead be explained through the determination, skill, and hard work of their creators (Weisberg 1993: 18). Examples include Charles Darwin, whose theories emerged from the several enterprises he was simultaneously engaged in (Sternberg 1999: 104–108; Weisberg 1993: 169), and Johann Sebastian Bach, whose productivity is explained by the custom of the time of borrowing material from other composers.Footnote 2

Views that focus on more ordinary features of creativity recognize that the latter is not limited to eminent instances of creative achievements but that, instead, it is more widely distributed (Kaufman and Beghetto 2009: 74). One of the most representative definitions of creativity along these lines is the one proposed by Herbert Simon and his collaborators: ‘creative activity appears simply to be a special class of problem-solving activity characterized by novelty, unconventionality, persistence, and difficulty in problem formulation’ (Newell et al. 1962: 66). The assumption behind this statement is that there is no difference between the thought processes of those we recognize as ‘geniuses’ and those of ordinary people; the former simply have better heuristics (Simon 1985: 5). In order to test this hypothesis, Simon and his colleagues ran several simulations of problem-solving mechanisms to construct software that could perform creative processes as humans can. This is the case with the Logic Theorist, a program ‘capable of discovering proofs for theorems in elementary symbolic logic’ (Newell et al. 1962: 67), with BACON, a program which re-discovered Kepler’s Third Law and other laws of physics (Simon 1985: 9–10), and with ILLIAC, a program that composes music using Palestrina’s rules of counterpoint (Newell et al. 1962: 67).

A clear boundary is hard to trace between views that attribute more measurable features to creativity and views that instead identify aspects which are difficult to assess. Most theories, in fact, ascribe to creativity both more straightforwardly assessable aspects and features that are less so. This is the case with a definition that set the baseline for many of the debates that followed: ‘Creativity is the ability to come up with ideas or artefacts that are new, surprising and valuable’ (Boden 1990: 1). Boden’s definition emphasizes elements of creativity that are difficult to quantify, given their dependency on the observer’s point of view, such as surprisingness. On the other hand, Boden is also interested in distinguishing the conditions of creativity from more context- and culture-laden aspects of the notion (Boden 1994). Through her idea of creativity as a matter of mapping, exploring, and transforming conceptual spaces, she indicates a way to make creativity more understandable and measurable.Footnote 3

Despite the obvious difficulty of delimiting more and less easily measurable features of creativity, a challenge which is apparent also from this quick overview, I believe that the investigation into how to measure creativity is an essential one to pursue, especially now that new forms of intelligence and creativity seem to be emerging. This need is dictated not by the wish to establish a hierarchy of value between systems on the basis of how creative they are but, rather, by that of enhancing our understanding of creativity and of its development in systems of different natures. My aim in the rest of the paper is to draw on existing views to identify key features of creativity in order to model a multidimensional account through which to measure degrees of creativity in human, animal, and artificial systems.

3 Key features of creativity

Discussions on the nature of creativity typically describe it as a property of a product (Hennessey and Amabile 2010; Ritchie 2008), an agent (Maslow 1959; May 1959), a process (Mednick 1962; Simonton 2013), or, more typically, they ascribe to it features pertaining to more than one of these elements (Boden 1990; Stein 1963; Sternberg and Lubart 1999).

In what follows, I identify and discuss key features that have been analyzed in the literature and that pertain to creative processes and to the agents that perform them: problem-solving, evaluation, and naivety.

A clarification is needed here: I adopt an account of creative processes as search through a problem space, drawing on conceptions of creativity akin to Boden’s psychological creativity (1990: 43) and to Kaufman and Beghetto’s mini-c creativity (2009). The former concerns ideas that are novel with respect to the individual rather than the wider historical context, while mini-c creativity highlights the personal and developmental aspects of creativity. By shifting the focus from the contribution of a system’s creative endeavor in terms of eminence to the development of creativity within the system, the conception of creativity I focus on distances itself from product-oriented views that assess creativity on the basis of external judgments.Footnote 4

In what follows, what I refer to as a ‘system’ is the combination of processes and the agent(s) that perform them.Footnote 5 As will become more evident from the discussion, I propose to evaluate the creativity of systems not on the basis of the impact of their products but, instead, from the perspective of the systems themselves. In so doing, I avoid the risk of overlooking systems that would not be deemed creative because their output is insufficiently relevant from an external viewpoint but that, in consideration of their inner components, may nonetheless be granted a considerable level of creativity.

For a better grasp of the key features of creativity that I will later use to set up the framework for assessing creativity, I address each of these components in detail.

3.1 Problem-solving

Creativity has been defined by some as a special class of problem-solving activities (Simon 1985; Sawyer 2012; Weisberg 1993). Problem-solving processes consist in the exploration and analysis of the different routes that can be taken toward a solution to the problem we are confronted with; they can be successfully concluded through sudden insight or through a trial-and-error sequence (Halina forthcoming).Footnote 6

The problem-solving component emphasizes how creativity is intrinsic to many areas of application. Problem-solving is traditionally associated with scientific research, and the scientific method is perhaps the most famous example of a problem-solving process. Still, we can speak of problem-solving processes also in relation to the arts, if by problem-solving we mean the act of defining a visual or auditory ‘problem’ and executing the work of art to ‘solve’ it (Sawyer 2000: 153). A caveat: I take it that a creative process of problem-solving must be directed toward some kind of output, but I grant that the latter may be only loosely specified, leaving as open as possible how it may arise and develop (Jordanous and Keller 2016: 19).Footnote 7

Creativity as problem-solving can also be found in our everyday interaction with the environment and in activities that we may otherwise deem trivial. For example, if a child needs to ‘solve the problem’ of reaching a box of biscuits on a high shelf, she might make the mental association that, by using another box as a stool, she could get the biscuits. Similar processes can be found in animals and in human adults (Kaufman and Kaufman 2015; Mendes et al. 2007). Suppose that you forgot your reading glasses at home: you can use the zoom of your phone’s camera as a magnifying glass, adopting a creative solution by making an association between two things that have similar functions.

An element of problem-solving is what we may call ‘connection-making’. The characterization of creativity as analogical thinking and as the capacity to build metaphors and connections finds wide agreement in the literature (Currie 2019; Glover et al. 1989; Mednick 1962; Miller 1996; Sawyer 2012). A connection-making process consists in drawing links between apparently unconnected pieces of knowledge and in exploring novel paths toward the successful conclusion of the problem-solving process. These connections are not necessarily created across unrelated fields and dimensions; they can also take place within the same conceptual space.Footnote 8 As I will discuss in more detail when presenting the framework for measuring creativity, the novelty and unconventionality of the connections drawn are assessed in relation to the agent’s background knowledge and not against some external standard.

3.2 Evaluation

When engaging in a creative process, we normally reflect on what we produce and try to improve it, not only according to the feedback that we receive from the outside but also according to the inner feedback we provide ourselves with. The ability to assess the process and to ‘know when to stop’ is a relevant element of creativity which I will refer to as ‘evaluation’ (Boden 1990: 43–44; Gaut 2010; Guckelsberger et al. 2017; Halina forthcoming; Rhodes 1961: 305).

Evaluation is crucial in the process of trial and error performed when considering whether the connections established during the creative process work or not (Collins 2012; Elton 1995; Glover et al. 1989; Sawyer 2012; Wyse 2019). The evaluative component of the creative process underlines how creativity does not happen by magic but is instead the result of hard work, a goal-directed attitude, and skillfulness in performing abstract analysis and associations (Sawyer 2012: 107–110; Weisberg 1993: 109).

The process of assessing one’s own performance needs to be autonomous, where by ‘autonomous’ I mean that the evaluative ability should be possessed by the agent who performs the process and not a result of external influences or feedback (Briot et al. 2017; D’Inverno and McCormack 2012; Luck and D’Inverno 2012). Thus understood, the evaluative ability directed to the task at hand is a crucial element of the aspect of creativity that Gaut calls ‘flair’: ‘consider a chimp brushing paint boisterously onto paper: her trainer removes the paper at the point at which it is aesthetically pleasing, but left to their own devices chimps will keep adding more paint and simply end up with a mess […]. The chimp has not been creative, since she lacks the evaluative capacity to assess her own work and thus to know when to stop.’ (Gaut 2010: 1039).

Still, it could be argued that, when measuring the creativity of a system, we might extend the system to other agents to generate the needed autonomy.Footnote 9 In Gaut’s example, for instance, we could define the creative system as including the influence of the chimp’s trainer. The legitimacy of this move is especially relevant when assessing cases of artificial creativity, as I will discuss in more practical terms in Section 5.3.

Trial-and-error problem-solving and the evaluative ability directed to the task at hand highlight the relationship between creativity and learning (Briegel 2012; Kaufman and Beghetto 2009). Indeed, through a trial-and-error sequence, agents adjust their behavior to successfully conclude the problem-solving process. This critical thinking stage is usually referred to also as ‘selective retention’ (Campbell 1960; Glover et al. 1989: 24) or as ‘convergent thinking’ (Guilford 1967).

3.3 Naivety

The third and last feature I focus on is naivety. Naivety relates to various aspects that the literature counts among the core traits of creativity: spontaneity (Dewey 1934; Kronfeldner 2009; Sawyer 2012), unconscious thought processing (Baumeister et al. 2014), the challenging of domain norms, and independence from rigid structures of thought.

The notion of naivety as independence from a model and from external causal inferences does not deny the relevance of domain awareness and expert knowledge (Glover et al. 1989; Jordanous and Keller 2016; Sawyer 2012; Simon 1985; Weisberg 1993, 2009), but it shifts the spotlight onto more childlike and playful traits of creativity (Csikszentmihalyi 1996; Gaut 2012; Piirto 2010). A creative process is a process of exploration which does not necessarily demand expertise and self-education but, rather, can simply invoke everyday psychological abilities such as observation and analogy-making.

The sense of naivety I am most interested in for the discussion that follows is naivety as lack of previous exposure to the properties of the situation at hand (Halina forthcoming: 6). How ‘ignorant’ an agent is with respect to the process she is undertaking will be a key element in the assessment and measurement of the creativity of a system.

4 Measuring creativity

On the basis of the features identified above, I suggest a framework to situate and assess creativity. Within this framework, the creativity of a system can be assessed without heavily relying on external standards and interpretations but, instead, by considering whether problem-solving, evaluative abilities, and naivety are features of the processes and agent(s) of the system and, if so, to which extent. Thus, the methodology followed distances itself from psychometric tests for measuring creativity in terms of the impact of the product (Beghetto and Kaufman 2007: 76), like the Torrance Test (Kim 2006; Torrance 1972), and from other methodologies used to assess creativity, on the basis of the properties of the output (Ariza 2009; Boden 2010; Briot et al. 2017; Wiggins et al. 2011).Footnote 10

I propose that the creativity of a system can be assessed as follows:

$$ \mathrm{C}\propto \mathrm{N}+\mathrm{D}+\mathrm{V}+\mathrm{E} $$

Creativity is proportional to naivety (N), novelty and distance of connections (D), evaluative ability (V), and efficiency (E).

The first parameter is naivety, i.e. the lack of past exposure to and knowledge of the situation, and the impossibility for agents to undertake the task at hand by drawing on their behavioral repertoire (Halina forthcoming: 6). Naivety is, then, inversely proportional to the relevant knowledge and prior experience the agent has with respect to the process undertaken. The less knowledge the agent has in relation to the process, the more creative the overall system is (MacKinnon 1970).

The second element is constituted by the nature of the connections drawn during the process. How novel and far apart the elements connected through the creative process are is relevant for measuring the creativity of the system (Mednick 1962; Simonton 2013). As already mentioned, the unconventionality and distance of the connections need to be assessed in relation to the agent’s history and knowledge. An example is the solution to disturbances in telephone transmission found by Benoit Mandelbrot in the 1960s. IBM recruited Mandelbrot to investigate a long-standing issue to which nobody seemed to find a solution: a white noise disturbing the flow of information through phone lines.Footnote 11 Mandelbrot explained the problem by visualizing the noise in terms of its shapes and discovering that the structure it generated was a fractal, a geometrical structure that, later in his career, he highlighted as widespread in nature (1983). Mandelbrot was able to make this discovery by connecting pieces of knowledge from very distant fields: acoustic and visual data, geometry and telecommunication. More importantly, he was not an expert in the field of data transmission, so the connections he drew were unconventional with respect to his knowledge of the areas involved.

The capacity to autonomously evaluate when to stop the process is arguably less easily assessed than the other components here presented. Moreover, a concern may emerge here: are the features of the output of the process relevant for the assessment of the agent’s evaluative abilities? And, if so, would this not be in contrast with the aim to propose a model to measure creativity which is not dependent on external, product-oriented, judgments? In reply to this worry I argue that in this model the estimation of the agent’s evaluative abilities does not depend on the characteristics of the output produced but, instead, only on the capacity of autonomously ‘knowing when to stop’, regardless of the quality of the output. Still, a second worry may arise: how can we judge whether the agent’s decision to stop the process was autonomous, if we cannot access her inner processes of evaluation?Footnote 12 For example, how can we be sure that Beethoven stopped writing the Ninth Symphony when he autonomously judged that that was the right moment to stop and not, instead, because he was influenced by some external factor? It may, thus, be argued that it is impossible to totally abstain from an external assessment of the creativity of a process because, even if we possessed a detailed report made by the agent of her inner process of evaluation, we would still need to interpret it. I do not have a conclusive answer to this worry. Yet, since this problem exists for every system - artificial, animal, and human - and not only in relation to creativity but more generally in relation to every phenomenon that involves the interpretation and prediction of behavior, I deem it acceptable to suppose that the agent’s process of evaluation occurred without external influences when, all things considered, it reasonably appears to be so in that context.Footnote 13

I did not include efficiency among the key aspects of creativity but I still deem it relevant to measuring the degree of creativity of a system. The efficiency of the system is related to the resources used by the agent to successfully conclude the process undertaken: the fewer resources the agent employs in the process, the higher the creativity of the system. These resources are calculated in terms of the cost, time, and/or effort used by the agent to perform the process.

The model here proposed can be used either to evaluate the creativity of a system in isolation, attributing a certain value to each variable of the formula, or to compare the creativity of different systems. Creativity can, thus, be mapped in a four-dimensional space with the maximally creative systems being those whose agents are more naive and more able to assess their processes, whose processes are more efficient and make more unconventional and distant connections.

In this paper, I focus on a qualitative assessment of creativity. If we wished to quantitatively measure the creativity of a system we could, for example, design human-subject surveys and tests to assign a score from 1 to 10 to each of the four parameters. The sum of these values would correspond to the overall amount of creativity of the system in question. To compare different systems, it would then be sufficient to compare the scores obtained by each.
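
To make this scoring procedure concrete, the following is a minimal Python sketch of the unweighted assessment just described. The class name, the 1–10 scores, and the two example systems are hypothetical illustrations, not data drawn from the paper.

```python
# Minimal sketch of the unweighted scoring: C is proportional to N + D + V + E.
# The example scores are hypothetical and would in practice come from
# human-subject surveys or expert ratings on a 1-10 scale.

from dataclasses import dataclass

@dataclass
class CreativityAssessment:
    naivety: int      # N: lack of prior exposure and relevant knowledge (1-10)
    distance: int     # D: novelty and distance of the connections drawn (1-10)
    evaluation: int   # V: autonomous evaluative ability (1-10)
    efficiency: int   # E: economy of resources used in the process (1-10)

    def score(self) -> int:
        """Unweighted creativity score: the sum of the four parameters."""
        return self.naivety + self.distance + self.evaluation + self.efficiency

# Comparing two hypothetical systems then reduces to comparing their scores.
crow_system = CreativityAssessment(naivety=8, distance=7, evaluation=8, efficiency=9)
can_system = CreativityAssessment(naivety=3, distance=6, evaluation=7, efficiency=8)
print(crow_system.score(), can_system.score())  # 32 vs. 24 with these toy values
```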

An even more scrupulous measurement of creativity may be possible by quantifying the influence that each element - naivety, unconventionality of connections, evaluative abilities, efficiency - has on the overall creativity of the system. A relative weight can then be assigned to each:

$$ \frac{\left(w\cdotp N\right)+\left(x\cdotp D\right)+\left(y\cdotp V\right)+\left(z\cdotp E\right)}{w+x+y+z} $$

I suppose, in fact, that a common intuition might be that the four parameters do not have the same relevance for measuring the creativity of a system but that, for example, the nature of connections and the capacity for evaluation carry more weight than the efficiency of the system. In principle, this framework allows for such variations in weighting as required by researchers.
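
A sketch of the weighted variant follows the same pattern and mirrors the normalized formula above. The particular weights used here, which favor the nature of the connections and the evaluative capacity over efficiency, are illustrative assumptions rather than values prescribed by the framework.

```python
# Weighted, normalized creativity score: (w*N + x*D + y*V + z*E) / (w + x + y + z).
# Default weights below are illustrative only.

def weighted_creativity(n: float, d: float, v: float, e: float,
                        w: float = 1.0, x: float = 2.0,
                        y: float = 2.0, z: float = 0.5) -> float:
    """Return the weighted average of the four creativity parameters."""
    return (w * n + x * d + y * v + z * e) / (w + x + y + z)

# With the hypothetical scores from the previous sketch:
print(weighted_creativity(8, 7, 8, 9))  # connections and evaluation count for more
```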

An objection that can be raised here is that this model for measuring creativity is not free from external influences if, ultimately, the creativity score is assigned by human subjects. While it is true that the choice of the components of creativity is – however accurately they are selected – subject-dependent, and that the assessment of these components is carried out by human subjects, what I believe matters more for guaranteeing an unbiased assessment of creativity is that, once the standard is set, the methodology and tools used are as far as possible unprejudiced by subjective influences. I propose to assess the creativity of a system neither on the basis of the impact that the output of the creative process has on the wider historical and social context, nor on the basis of the agent’s property of ‘genius’. Rather, the creativity of the system can be measured on the basis of its components and inner characteristics, which can be assessed without resorting to subjective judgments as directly as product-oriented conceptions of creativity would require.

5 Creativities

The aim of this section is to show how the proposed model for measuring creativity can be applied to actual cases through the examination of three examples: (i) human creativity: the discovery of the structure of the benzene molecules by Kekulé, (ii) animal creativity: exemplars of New Caledonian crows building compound tools, and (iii) artificial creativity: Creative Adversarial Networks.Footnote 14

The first example shows how even a creative process that is often cited in support of the idea that creativity occurs inexplicably and almost unconsciously can be explained by breaking it down into its elements. The second example demonstrates, moreover, how analyzing creativity along the components I proposed is useful for assessing more contentious cases of creativity and as a criterion for conducting comparative studies between different kinds of intelligence. The last example contributes to the debate on computational creativity.

I do not attempt here to quantify the creativity of each of these systems by assigning a score to them, since such quantification is not among the aims of this work. Rather, I show how a qualitative assessment, which may later be developed in a quantitative direction, can be conducted.

5.1 Human creativity

The discovery of the structure of the benzene molecule by the German chemist August Kekulé is one of the most cited personal reports in the literature on creativity, and it has typically been referred to as an example of the connection between creativity and unconscious thought. As widely reported, Kekulé formulated the hypothesis that benzene molecules form a ring when, dozing in front of the fire, he dreamt of snakes biting their tails (Rothenberg 1995).Footnote 15

The process undertaken by the chemist, aimed at finding out the structure of the benzene molecule, can be defined as a problem-solving one. It indeed presents all the aspects of a typical problem-solving process: it defines a problem, identifies and evaluates possible alternatives for its solution, and selects the presumably correct one. More importantly, Kekulé’s process involved the creation of connections between apparently unrelated areas: reptiles biting their tails and a law of nature, i.e. the structure of benzene molecules.

The unconventionality of this connection is expressed also by Kekulé’s own surprise in registering the dream, since he had previously experienced the ends of the benzene molecule structure as polar opposites and not as joined ends of a ring-like structure: ‘But look! What was that? One of the snakes had seized hold of its own tail, and the form whirled mockingly before my eyes.’ (quoted in Kronfeldner 2009: 587).

The nature of the connections drawn needs to be gauged with respect to the agent’s prior knowledge. While Kekulé had expertise in the field of chemistry, as far as I know he was not an expert on reptiles. The distance between these two fields and the unconventionality of the connection with respect to Kekulé’s relevant knowledge allow us to assign a high score to the variable ‘D’ in the formula for assessing creativity. Had Kekulé instead achieved the same result by drawing connections with the structure of chemical elements with characteristics similar to benzene, the system would have displayed a lower amount of creativity, since the connections made would have been less distant and unusual, and closer to Kekulé’s field of expertise.

Even if Kekulé was an expert in chemistry, it can still be argued that ‘his previous experience with benzene molecules did not suffice for him to discover their structure’ (Kronfeldner 2009: 585). This argument speaks in favor of ascribing a certain level of naivety and, as a consequence, a corresponding value of creativity to the system he is part of.Footnote 16 Kekulé’s naivety here, then, is not understood as a lack of knowledge in the field altogether, but as a lack of previous exposure to ring-like structures of chemical molecules and of benzene in particular.

The realization that his thought processes had produced a result occurred to Kekulé suddenly, after having visualized the structure of the molecule in mental imagery. According to his own report: ‘As if by a flash of lightning I awoke’ (quoted in Kronfeldner 2009: 587). Kekulé came to realize that the valences of benzene fitted into the ring-like structure he dreamt of without the need for external feedback and, thus, he was capable of autonomous evaluation of his creative process.

The system has thus been shown to include a connection-making process between distant domains, and Kekulé was autonomous in evaluating his own performance and sufficiently naive with respect to the specific field of research. The last component, i.e. efficiency, is not straightforwardly measurable. We could assess it by considering the amount of time and effort that Kekulé devoted to the discovery, including, for example, the research toward the development of the ‘structure theory’ that he conducted between 1857 and 1859, a theory that was crucial for his later discovery of the structure of the benzene molecule in 1865. Certainly, assessing efficiency in this and in other systems is by no means easy, but it is at least theoretically possible to gauge it.

This four-dimensional mapping of creativity may be used to compare creativity among different systems, a research path that is encouraged by scholars in the field (Auersperg et al. 2011, 2012). The example that I illustrate in the next section shows how the model can be used to analyze creativity in a system which includes not human but animal agents.

5.2 Animal creativity

In the last decades, corvids have attracted the attention of scientists and researchers for their remarkable ability to use and modify tools. In numerous experiments they have been observed bending pieces of wire into hooks and sculpting and using stick-shaped objects to access food rewards (Auersperg et al. 2011, 2012; Bird and Emery 2009; Cheke et al. 2011; Hunt 1996; Kaufman and Kaufman 2015; von Bayern et al. 2009). While there is quite a widespread consensus on ascribing problem-solving skills to birds that engage in such performances, more contentious is the attribution of creativity and insight to them. Here I apply the creativity framework I proposed to the specific case of New Caledonian crows building compound tools from individually useless objects (von Bayern et al. 2018).

In this experiment, 8 wild-caught New Caledonian crows were observed building compound tools out of syringes and wooden dowels, each individually too short to retrieve food by pushing it along a track. After first attempting the task with short sticks and recognizing their inadequacy, 4 of the 8 crows selected suitable elements to build longer sticks with which they managed to retrieve food from the box. Each of the four successful crows can be considered the agent of a separate system.

The crows had not previously been trained to build compound tools, nor had they observed other individuals conducting the same task; thus, they may be deemed naive with reference to the problem presented (von Bayern et al. 2018: 5).Footnote 17 If, instead, we possessed evidence that, for example, crow A had never had the experience of bending sticks and building tools, while crow B had experienced similar activities, then, all other things being equal, the system that includes crow B in the process of building tools to retrieve food could be deemed less creative than the system that includes crow A performing the same process, since crow B had more relevant knowledge and past experience than crow A and was, therefore, less naive.

The problem-solving abilities of the crows are attested by the process of trial and error that the individuals perform in constructing tools that are more suitable to the task at hand. Building compound tools from individually useless elements can be deemed a connection-making process. Indeed, the crow has to connect two previously unrelated pieces of knowledge: its encounters with short wooden sticks, and the assembling of elements of different kinds. In addition, at least one of the four individuals, Mango, showed the ability to transfer its understanding to novel task configurations and materials in subsequent iterations of the experiment (von Bayern et al. 2018: 6).

The four successful crows joined the sticks together to build a longer tool, taking few or no attempts before successfully combining them (von Bayern et al. 2018: 3). The crows did not receive any feedback from humans or from the environment while performing the experiment. They were thus autonomous in their evaluation of when to conclude the process of building compound tools.

In the first set of experiments with New Caledonian crows, compound tools made of two sticks were long enough to slide the food along the track. The crows concluded the process as soon as the tool they built reached the appropriate length and were therefore highly efficient in this respect, since they did not use more resources than were sufficient to achieve the result of retrieving food. Suppose, instead, that crow A had built tools made of more than two sticks. In this case, the system with crow A as an agent would have been less efficient, since A would have employed more resources than required to achieve the goal of retrieving food. As a consequence, the system as a whole would have been less creative.

It can be concluded that, according to the framework presented, the system including the crow and the process of building compound tools can be credited with a considerable amount of creativity. The connections made during the problem-solving process were arguably novel with respect to the crow’s background knowledge and, with the caveat mentioned above, the crow was naive with respect to the task at hand. In addition, the crow exhibited the ability to assess its own performance autonomously and, lastly, it was efficient in not using more resources than required.

The model proposed offers a way to measure the creativity exhibited by different systems without incurring prejudiced interpretations that may discriminate against non-human species. The aim of the next section is to show how, just as in the case of New Caledonian crows, this approach can help dispute common prejudices against Artificial Intelligence (henceforth AI) and formulate a clearer idea of its creative potential.

5.3 Artificial creativity

The increasingly common use of AI for content generation has brought it closer to areas of application that, until not so long ago, were considered the prerogative of humans. The topic of creative AI is, if possible, even more contentious than the topic of creativity tout court. On the one hand, there are theorists who support the possibility that AI can be creative (Newell et al. 1962; Simon 1985); on the other, there are detractors or people who are hesitant to acknowledge this possibility (Amabile 1996; Kelly 2019). Hence the need to ask whether artificial creativity is possible. In this section I contribute toward answering this question by analyzing a state-of-the-art algorithm: Creative Adversarial Networks.

Machine learning has tested its prowess in a variety of different areas and has grown bold enough to approach the fields of art and scientific discovery as well. Examples of programs beating world champions in games, of robot painters, and of algorithms for music generation abound.Footnote 18 Here, I focus on a particular type of algorithm that has revolutionized the world of machine learning in recent years, Generative Adversarial Networks (henceforth GANs), and more specifically on a version of it: Creative Adversarial Networks (henceforth CANs). I will briefly explain its inner mechanism and discuss it in the light of the model of creativity presented in this paper.

Generative models are algorithms developed with the aim of analyzing and understanding data from a pre-existing set, and they come in different forms: variational autoencoders,Footnote 19 methods that maximize likelihood,Footnote 20 energy-based models,Footnote 21 and others. Given a label or a hidden representation, these models can predict the associated features and generate new data similar to those provided by the training set.

Generative algorithms have recently started to be applied to the generation of paintings and music (Dong et al. 2017; Moruzzi 2018; Yang et al. 2017). The reason for exploring GANs and CANs is that their process of creation is inherently different from that of other generative algorithms. GANs are composed of two neural networks: a generator and a discriminator. The generator has the role of originating new data instances, while the discriminator evaluates them for authenticity. This model is called ‘adversarial’ because generator and discriminator are pitted against each other in what Goodfellow describes as a game of cat and mouse (Goodfellow et al. 2014: 1). The aim of the generator is to trick the discriminator into believing that what has been produced is a sample from the training set. The aim of the discriminator, instead, is to guess when the generator produces a fake sample. The two neural networks are trained simultaneously, in a competition through which each improves. The process that leads to the creation of instances indistinguishable from those of the training sample is made possible by the fact that the discriminator guides the generator by providing feedback. This feedback loop between generator and discriminator allows the network to improve its performance.
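
To make the adversarial training loop concrete, the following is a minimal PyTorch sketch of the generator/discriminator interaction described above, trained on toy two-dimensional data rather than paintings. The network sizes, the toy data distribution, and the hyperparameters are illustrative assumptions and do not reproduce the setup of Goodfellow et al. (2014).

```python
# Minimal GAN sketch: a generator and a discriminator trained in alternation.
import torch
import torch.nn as nn

latent_dim, data_dim, batch = 8, 2, 64

generator = nn.Sequential(
    nn.Linear(latent_dim, 32), nn.ReLU(), nn.Linear(32, data_dim))
discriminator = nn.Sequential(
    nn.Linear(data_dim, 32), nn.ReLU(), nn.Linear(32, 1))  # real/fake logit

opt_g = torch.optim.Adam(generator.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(discriminator.parameters(), lr=1e-3)
bce = nn.BCEWithLogitsLoss()

for step in range(1000):
    real = torch.randn(batch, data_dim) * 0.5 + 2.0  # stand-in for the training set
    fake = generator(torch.randn(batch, latent_dim))

    # Discriminator step: label real samples 1 and generated samples 0.
    d_loss = (bce(discriminator(real), torch.ones(batch, 1)) +
              bce(discriminator(fake.detach()), torch.zeros(batch, 1)))
    opt_d.zero_grad()
    d_loss.backward()
    opt_d.step()

    # Generator step: try to make the discriminator label the fakes as real.
    g_loss = bce(discriminator(fake), torch.ones(batch, 1))
    opt_g.zero_grad()
    g_loss.backward()
    opt_g.step()
```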

CANs are a recent evolution of GANs. As the name suggests, this model aims to increase the creativity of the generative network by deviating from the training set to create new art styles (Elgammal et al. 2017: 5). Like GANs, CANs are composed of a generator and a discriminator but, in this case, the generator does not only need to fool the discriminator into thinking that the image it produces is ‘art’; it also needs to confuse the discriminator about the style of the generated work (Elgammal et al. 2017: 6). In so doing, CANs ‘create’ instead of merely ‘emulate’ (Elgammal et al. 2017). The process that CANs engage in consists in generating images which can be deemed artistic works but which are sufficiently different from any of the styles of the pictures in the training set.Footnote 22 The aim is ‘to generate art with increased levels of arousal potential in a constrained way without activating the aversion system […]. In other words, the agent tries to generate art that is novel, but not too novel.’ (Elgammal et al. 2017: 4) An interesting result achieved by CANs is the production of abstract paintings, deviating from the style norms of the given dataset, which included mostly figurative artworks.
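
As a rough illustration of the ‘art but style-ambiguous’ pressure just described, the sketch below expresses a CAN-like generator objective, assuming a discriminator with two heads: a real/fake logit and a style-classification head over n_styles classes. It is a simplified reading of the loss described by Elgammal et al. (2017), not their exact formulation or architecture.

```python
# Sketch of a CAN-like generator loss: be accepted as "art" while keeping the
# style classifier maximally uncertain (prediction close to uniform over styles).
import torch
import torch.nn.functional as F

def can_generator_loss(real_fake_logit: torch.Tensor,
                       style_logits: torch.Tensor) -> torch.Tensor:
    n_styles = style_logits.shape[1]

    # Term 1: fool the real/fake head into classifying the sample as art.
    art_loss = F.binary_cross_entropy_with_logits(
        real_fake_logit, torch.ones_like(real_fake_logit))

    # Term 2: style ambiguity -- cross-entropy between the style head's
    # prediction and the uniform distribution over the style classes.
    uniform = torch.full_like(style_logits, 1.0 / n_styles)
    ambiguity_loss = -(uniform * F.log_softmax(style_logits, dim=1)).sum(dim=1).mean()

    return art_loss + ambiguity_loss

# Example call with random stand-in discriminator outputs for a batch of 4 images
# and a hypothetical 25-style classification head:
loss = can_generator_loss(torch.randn(4, 1), torch.randn(4, 25))
```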

The two contrasting signals that the generator receives push it to explore distant parts of the creative space in order to generate style-ambiguous work. In this sense, the process is a connection-making one: the network establishes connections between different and unrelated styles. Here the connections are drawn within a specific domain, that of paintings, which is further delimited by the choice of the styles that are fed to the algorithm by the programmers.Footnote 23

A CAN was fed 81,449 paintings classified into 25 different styles (Elgammal et al. 2017: 10). This represents the domain knowledge of the system. Since the task is performed and completed within this domain, the naivety that can be assigned to the system is arguably not very high. Indeed, it can be argued that, as long as a system explores possibilities within a problem space whose elements have been hard-coded in detail by humans, naivety can hardly be ascribed to it. A CAN cannot extend its process outside the specific domain in which it is an expert, in this case paintings classified into 25 different styles.

The fact that it is hard to recognize CANs as naive does not mean, though, that no artificial system could be. A higher level of naivety could, indeed, arguably be assigned to ‘tabula rasa’ kinds of learning like that performed by AlphaZero, a program that taught itself how to master the games of Go, shogi, and chess (Halina forthcoming). Yet not even AlphaZero is able to go beyond the domain, however wide, that has been set by its human programmers. And it is precisely the flexibility to extend beyond a specific domain and explore unknown fields and applications that researchers are trying to develop in order to reach so-called Artificial General Intelligence (Goertzel 2014; Lake et al. 2011).

The feedback loop between generator and discriminator can be interpreted as an evaluation process that, in some ways, mirrors the human process of trial and error. Unlike other algorithms, which make use of human judges to receive feedback when a fitness function is not known or is hard to determine (as in the case of aesthetic qualities in the arts, see Galanter 2012), in CANs the feedback mechanism happens within the system itself. The CAN concludes the process when it reaches the right balance between generating works that would be accepted as ‘art’ and generating works that are sufficiently style-ambiguous. This peculiar structure speaks in favor of attributing a capacity for autonomous evaluation to the system. There is no need for external feedback; the assessment of the generator’s performance is carried out by the discriminator within the system: ‘This interaction also provides a way for the system to self-assess its products.’ (Elgammal et al. 2017: 21).

A potential worry here is that the initial ‘problem’ of producing style-ambiguous images is hard-coded by humans into the CAN algorithm. So, while the evaluation of when to stop the process happens without the need for external feedback, it may be argued that the system as a whole is not autonomous, since both the training set and the result to be achieved have been included in the system as input by human programmers. A system’s inability to account for its aims is often a reason for people to deny that it is creative (Guckelsberger et al. 2017). A way to overcome this worry is, as already mentioned, to include the humans responsible for the program as part of the creative system in order to generate the autonomy required (Ventura 2014).

Lastly, while we encountered some difficulties in measuring the efficiency of human creative systems, this parameter is probably easier to quantify in artificial than in natural systems. The efficiency of CANs can, indeed, be measured in terms of ‘computing performance’, namely the time, accuracy, and computational power employed by the program to produce the output (Hernandez and Brown 2020; Roth et al. 2020). Given the rapid evolution of technology in this field, this parameter is likely to vary considerably over time and depends on the hardware as well as on the program itself.

6 Conclusion

What emerges from the discussion of the last sections is that the measurement of creativity in artificial systems which include CANs is more problematic than the assessment of it in the human and animal systems addressed above (Table 1). This is mainly due to the doubts that arise with respect to the autonomy and naivety of the artificial system. These doubts can however be dispelled if the human is included as part of the system. In this case, it would be possible to assign autonomy to the human-artificial system and, possibly, also a higher level of naivety to it, in consideration of the prior knowledge, or lack thereof, that the human programmers have of algorithms that generate style-ambiguous images.

Table 1 Overview of qualitative assessment of creativity

In this work I aimed to contribute to the discussion on how to measure creativity. After identifying problem-solving, evaluation, and naivety as key features of creativity, I proposed a model to situate and assess it by considering the degree to which these elements, together with efficiency, are realized in the system under examination. The tentative proposal of measuring creativity in the way indicated can contribute toward a deeper understanding of the elements that are responsible for the overall creativity of a system.

One of the benefits of adopting this model is that it avoids disciplinary compartmentalization and potential biases. The creativity of systems that would be externally classified as non-creative, either because their output is not sufficiently novel or because it is insufficiently impactful with respect to the relevant historical context, can in this framework be assessed in relation to the inner components of the system itself.

The suggested interpretation can contribute to the debate on both natural and artificial creativity by providing a conceptual framework in which to couch estimates of the creativity displayed by a system, estimates that can then be marshaled for or against the possibility of non-human creativity. Pursuing further an unprejudiced analysis of natural and artificial creativity can help us dispel the biases that we may have toward non-human systems (Jordanous 2011; Moruzzi 2020) and better design tools for human-computer interaction in the creative sector.Footnote 24