Introduction

In biology, morphology is the study of the form and structure of organisms (anatomy), their development (developmental biology), and their evolution (evolutionary developmental biology, or evo-devo). It has a long tradition of observing and describing the forms and formation of species and, since the early twentieth century, of searching for “formative substances” and mechanisms that drive pattern formation and morphogenesis (Boveri 1901; Grove and Monuki 2013). The latter gained ground with Spemann’s and Mangold’s seminal work in experimental embryology on the Induktion von Embryonalanlagen durch Implantation artfremder Organisatoren (1924). In this work, they introduced the basic concept of “organizers”: small regions of tissue grafted onto embryos that induced the formation of a second embryonic body. The grafted tissue (taken from the dorsal lip of the blastopore) had an organizing effect on its surroundings, inducing its own program of spatial differentiation and morphogenesis (Mangold 1924; Ribatti 2014; Saha 1991). Later, Spemann discovered that “the character of the induced organ depends much more on its own intrinsic (presumably genetic) constitution than on that of the inducer” (Ribatti 2014, p. 39; Spemann 1938). However, the nature of the signals that triggered induction remained unknown.

In the 1940s, Lester G. Barth and Johannes Holtfreter showed that the inducing stimulus was not specific: various materials, including embryonic or adult tissue, dead or living, could be used to induce form. Even mere mechanical damage and chemical compounds could induce spatial differentiation and morphogenesis (Barth 1941; Holtfreter 1934, 1944). In particular, chemical compounds such as sterols, ATP, nucleic acids, fatty acids, and non-sterol chemicals were experimentally studied as embryonic inducers. These were called evocators by Conrad Hal Waddington, who “reinterpreted induction in terms of molecular biology [and] linked embryonic induction to enzymatic induction. Inducible enzymes had been called adaptive enzymes until the early 1950s, and their relationship to development had been proposed by Jacques Monod as early as 1947” (Ribatti 2014, p. 42; Waddington 1938, 1940; Monod 1947). Only in the 1990s did the nature of the signals that trigger induction become increasingly accessible, because the molecular events underlying induction are difficult to study and require advanced experimental technology.

This search for formative substances and mechanisms, in induction theory as well as in genetics, was criticized by the developmental biologist Lewis Wolpert in the late 1960s as “preventing progress in understanding pattern formation,” because it would render “the possibility of obtaining a set of general principles enabling one to deal with the translation of genetic information into cellular patterns and forms … almost hopeless” (Wolpert 1969, p. 3). Wolpert’s critique is characteristic of a long tradition of scientific dissatisfaction with the descriptive and empirical nature of morphology. As early as 1866, Ernst Haeckel complained in his Generelle Morphologie that morphology was more an art than a scientific theory, pleading for a scientific understanding guided by physics and mechanics and based on mathematics (1866, vol. 1, pp. 7, 20). Haeckel’s answer to this problem was his vision of promorphology, the theory of “stereometric basic forms of organisms,” which he called “Promorphen” (1866, vol. 1, pp. 26, 27). His vision was inspired by the crystallography and mathematics of his time and was to provide a rule-based description for synthesizing biological form—a mechanistic project which was taken up directly by Thompson (1917) and indirectly by early synthetic biologists like Herrera (1897, 1924, 1942), Loeb (1899, 1912a, b), and Leduc (2011, 2012).

One hundred and forty years later, a contemporary answer to this dissatisfaction with the descriptive and empirical nature of morphology is provided by Jamie A. Davies’s concept of synthetic morphology. “Theories of morphogenesis are routinely deduced from observation of normal and mutant embryos, but only when the theories are applied to the creation of novel, designed forms will they have been properly tested. As the physicist Richard Feynman once said, ‘What I cannot create, I do not understand’” (Davies 2008, p. 707). The vision of synthetic morphology, which is rooted in developmental biology and synthetic biology, aims at programming cells to organize themselves into designed arrangements, structures, and tissues. Thus, it follows the basic idea of synthetic biology best outlined by Elowitz and Leibler: because “the ‘design principles’ underlying the functioning of such intracellular networks remain poorly understood … we present a complementary approach to this problem: the design and construction of a synthetic network to implement a particular function” (Elowitz and Leibler 2000, p. 335; Gramelsberger et al. 2013). By taking up this design approach, Davies hopes that synthetic biology will “play a very important role in the verification of the results of basic developmental biology” (2008, p. 707).

From the perspective of the history of science, this is an interesting shift in the logic of research. Davies acknowledges the traditional scientific way of mathematical and theoretical understanding—“The formation of mathematical laws had long since brought the physical universe to rational order; and German morphologists of the nineteenth century attempted something comparable in life science” (Richards 2008, p. 472). However, he replaces it with engineering and design values that have become basic epistemic concepts guiding research. The latter characterize modern technoscience: “In technoscientific research, the business of theoretical representation cannot be dissociated, even in principle, from the material conditions of knowledge production and thus from the interventions that are required to make and stabilize the phenomena. In other words, technoscience knows only one way of gaining new knowledge, and that is by first making a new world” (Nordmann 2006, p. 63). With his concept of “synthesis,” which refers to basic principles and rules for programming form genetically and thus aims to overcome the descriptive status of morphology, Davies transforms morphology into a technoscience. From the perspective of the philosophy of science, this promises an interesting epistemic shift. Does “making a new world” mean understanding the given one, that is, Nature? Will it help to understand morphogenesis better, or will it just enable biologists to engineer morphogenesis?

The present paper explores the outlined historical and philosophical aspects of Davies’s program. It draws some historical links, although it does not aim to present a concise historiography of the field. Rather, it attempts to investigate the relevance of these links for the synthetic approach. After this introduction, the paper presents the main concepts of synthetic biology (“The Synthetic Approach to Biology”) as well as of synthetic morphology (“Synthetic Morphology’s Programming Approach”). It then draws some links to the history of morphology, namely to biomolecular concepts of morphogenesis as forerunners of Davies’s vision (“Formative Concepts for the Idea of Synthetic Morphology”). Finally, it discusses the epistemic role of the synthetic approach in morphology in overcoming its descriptive status (“Conclusion”).

The Synthetic Approach to Biology

In his visionary paper, “Synthetic Morphology: Prospects for Engineered, Self-constructing Anatomies,” Davies complained that morphology lacks “a strong synthetic aspect” (2008, p. 707). In his writings, synthesis refers to basic principles and rules, less for understanding in the explanatory sense and more for programming form genetically. Thus, synthesis should help morphology overcome its descriptive status, in the spirit of the physicist Richard Feynman, whom Davies quotes: “What I cannot create, I do not understand” (2008, p. 707). This engineering or “technoscientific” view is characteristic of synthetic biology, which aims at designing simplified intracellular biomolecular networks. In 2000, Michael B. Elowitz and Stanislas Leibler provided one example in a proof-of-concept experiment: the “repressilator,” a wetware ring oscillator consisting of three genes, each repressing the next. Proposing a first “rational network design” in molecular biology, Elowitz and Leibler were able not only to engineer new cellular behavior but also to aim at a better understanding of the artificial design principles of the repressilator (Knuuttila and Loettgers 2013; Gramelsberger 2013).

Understanding, in this case, resulted from manipulating the dynamic parameters necessary to make a biomolecular network oscillate. Based on a mathematical stability analysis and computer simulations of a nonlinear model of transcriptional regulation, Elowitz and Leibler identified a specific “dependence of transcription rate on repressor concentration, the translation rate, and the decay rates of the protein and messenger RNA” leading to oscillation (2000, p. 335). To realize this specific set of dynamic parameters, they had to equip the biomolecular network with a strong promoter (a region of DNA that initiates the transcription of a particular gene) and a destabilizing tag inserted into the repressor proteins, shortening the proteins’ lifetimes to the mRNA range (minutes instead of hours). Based on such an artificial synthetic design resulting from mathematical considerations, cells carrying the repressilator exhibited oscillations, which became observable through their fluorescence intensity. However, only about forty percent of the cells showed the desired behavior, and oscillation ceased when the cell colony entered a stationary phase after about ten hours of growth.
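The logic of this parameter dependence can be made concrete with a short numerical sketch of the dimensionless repressilator equations underlying Elowitz and Leibler’s analysis, in which each mRNA is repressed by the protein of the preceding gene in the ring. The parameter values and initial conditions below are illustrative assumptions, not the published fits, and the sketch is not a reconstruction of the experiment; it only shows how promoter strength, repression cooperativity, and the protein-to-mRNA decay ratio determine whether sustained oscillation appears.

```python
# Minimal sketch of the repressilator ODE model (after Elowitz and Leibler 2000).
# Parameter values are illustrative assumptions: alpha ~ promoter strength,
# alpha0 ~ leaky expression, beta ~ ratio of protein to mRNA decay rates
# (beta near 1 corresponds to tagged, short-lived proteins), n ~ Hill coefficient.
import numpy as np
from scipy.integrate import solve_ivp

alpha, alpha0, beta, n = 216.0, 0.216, 1.0, 2.0

def repressilator(t, y):
    m, p = y[:3], y[3:]                       # mRNAs and proteins of the three genes
    dm = [-m[i] + alpha / (1.0 + p[(i - 1) % 3] ** n) + alpha0 for i in range(3)]
    dp = [-beta * (p[i] - m[i]) for i in range(3)]
    return dm + dp

sol = solve_ivp(repressilator, (0.0, 500.0), [1, 0, 0, 2, 1, 3], max_step=0.5)
print(sol.y[3, -6:])   # late-time level of one protein: sustained oscillation for these values
```

In this dimensionless form, the oscillatory region of parameter space is largest when protein and mRNA decay rates are comparable, which is the rationale behind the destabilizing protein tag mentioned above.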

Nevertheless, what started as a simple proof-of-concept experiment has turned into an engineering-of-biology business based on standardizing biomolecular tools, biological parts, and systems for “trimming microorganisms genetically to optimize their productivity” (Tomita 2001, p. 1091)—a business which has been taken up by IT corporations like Google and Microsoft, by commercial bio labs, and, last but not least, by the US Defense Advanced Research Projects Agency (DARPA) (Shanks 2015). As Bensaude Vincent outlined, two visions of synthetic biology have emerged, one inspired by computer engineering (“brick biology”) and one modeled on chemistry (2013, referring to Endy 2005 and Benner and Sismour 2005). However, “from an epistemological perspective they equally share Feynman’s credo that knowledge is acquired through creation or synthesis” (Bensaude Vincent 2013, p. 126). In particular, the brick biology approach, propagated by the Massachusetts Institute of Technology’s (MIT) Synthetic Biology Working Group, is of interest for Davies’s vision of synthetic morphology.

In 2003, Tom Knight proposed an Idempotent Vector Design for Standard Assembly of Biobricks. “The key notion in the design of our strategy,” he wrote, “is that the transformations performed on component parts during the assembly reactions are idempotent in a structural sense. That is, each reaction leaves the key structural elements of the component the same” (2003, p. 2). Biological parts are carried on standardized plasmids, rings of autonomously replicating DNA. They encode functional genetic elements: promoter sequences initiate the transcription of DNA into messenger RNA; terminator sequences halt RNA transcription; repressor sequences block the transcription of another gene by producing a repressor protein; and so on. Thus, functionally ordered classes of biological parts like operators, terminators, reporters, and promoters create a synthetic biological language as well as a tool kit.
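The structural sense of “idempotent” that Knight describes can be rendered in a purely schematic sketch (not actual BioBrick chemistry; the flank and scar strings below are placeholders, not the real restriction sites): every part carries the same standard flanks, and assembling two parts yields an object with exactly those flanks again, so composites can be reused as parts.

```python
# Schematic illustration of structurally idempotent assembly.
# PREFIX, SUFFIX, and the scar are placeholder strings, not real BioBrick sequences.
from dataclasses import dataclass

PREFIX, SUFFIX = "GAATTC", "CTGCAG"

@dataclass(frozen=True)
class Part:
    name: str
    insert: str                        # functional sequence between the standard flanks
    @property
    def sequence(self) -> str:
        return PREFIX + self.insert + SUFFIX

def assemble(a: Part, b: Part) -> Part:
    """Compose two parts; the result is again a Part with the same standard flanks."""
    scar = "TACTAG"                    # placeholder assembly scar
    return Part(f"{a.name}+{b.name}", a.insert + scar + b.insert)

promoter = Part("promoter", "TTGACA")
reporter = Part("reporter", "ATGCGT")
device = assemble(promoter, reporter)
print(device.sequence.startswith(PREFIX) and device.sequence.endswith(SUFFIX))  # True
```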

Such a tool kit has been available since 2007 in material as well as in machine-readable form at the Registry of Standard Biological Parts, originally located at MIT in Cambridge, Massachusetts. This “brick biology” clearly applies ideas from engineering to molecular biology. To tackle the overwhelming problem of biological complexity, it should make use of standardization, decoupling, and the abstraction of biological parts (Endy 2005). Standardization draws on decoupling: a system is first decomposed into devices, and devices are then decomposed into biological parts; thus, design and fabrication can be decoupled in biological engineering. Abstraction, in turn, is supposed to help manage complexity. It refers to the redesign of biological parts and devices so that they are “simpler to model and easier to use in combination” (Endy 2005, p. 452).

However, it is anything but simple to design robust biological parts. Noise, variance, mutation, adaptation, and context-sensitivity compromise their designed functionality (Loettgers 2009). On the one hand, as the example of the repressilator showed, the reliability of biological designs is low. On the other hand, it is not easy to measure the performance of a designed biological part using fluorescent reporters, because such measurement requires high expression. Furthermore, “the process of building a large genetic circuit requires the assembly of many DNA parts, and this process has been both technically challenging (until recently) and fraught with its own sources of errors” (Brophy and Voigt 2014, p. 509). Thus, strategies for creating more robust parts have been sought, for instance, by introducing fine-grained control of synthetic circuits using libraries of variants of the same part (Ellis et al. 2009), by developing “quantitative descriptions of [bio] devices in the form of standardized, comprehensive datasheets” (Canton et al. 2008, p. 787), and, more recently, by applying clustered regularly interspaced short palindromic repeats and their associated proteins (CRISPR/Cas) to synthetic biology (Jinek et al. 2012; Brophy and Voigt 2014; Xu and Qi 2018). The latter facilitates the engineering of larger genetic circuits, as it is much easier to work with CRISPR than with regulatory proteins (Nielsen and Voigt 2014).

Nevertheless, synthetic biology—also described by opponents as extreme genetic engineering (ETC Group 2007)—has become an integral part of biomolecular research and biotechnology. In particular, genetic logic circuitry for performing user-defined operations has become a major research endeavor since the development of genetically encoded, transcription factor-based NOT and NOR gates (Kramer et al. 2004; Tamsir et al. 2011). What followed was the synthetic design of all logical gates of Boolean algebra (Bonnet et al. 2013), with NOR (NOT OR) and NAND (NOT AND) gates serving as the fundamental gates, not only for the complete Boolean functionality of digital computing but also for memory design. Using CRISPR/Cas to engineer larger genetic circuits by linking the outputs of genetic circuits opens up new possibilities for synthetic biology. For instance, “cells could be programmed to sense the cell density in a fermenter and respond by expressing enzymes to redirect flux through global metabolism. … Similarly, the cell phenotype could be controlled, like the ability to swim or associate into biofilms” (Nielsen and Voigt 2014, p. 2). Despite toxicity, signal degradation at each layer, and other problems, CRISPR/Cas has helped to make genetic gates more robust. “This could,” as Alec Nielsen and Christopher Voigt suggest, “allow the paradigm of analog and digital computing to be applied in vivo without requiring large and cumbersome constructs” (Nielsen and Voigt 2014, p. 7).
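The reason NOR (like NAND) can serve as a fundamental gate is that every other Boolean function can be composed from it. The toy truth-table check below is plain software logic, not a model of any published genetic circuit; it only makes that completeness claim concrete.

```python
# NOR is a universal gate: NOT, OR, and AND can all be built from it.
def NOR(a: bool, b: bool) -> bool:
    return not (a or b)

def NOT(a): return NOR(a, a)
def OR(a, b): return NOT(NOR(a, b))
def AND(a, b): return NOR(NOT(a), NOT(b))

# Exhaustive check over all input combinations.
for a in (False, True):
    for b in (False, True):
        assert OR(a, b) == (a or b) and AND(a, b) == (a and b)
print("NOR compositions reproduce OR and AND")
```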

Synthetic Morphology’s Programming Approach

It is the epistemological attitude of “understanding by constructing” that Davies tries to transfer from synthetic biology to morphology by pleading for the genetic programming of form (2008, p. 708). He differentiates his approach from the reprogramming of tissue engineering based on stem cells because stem cells already contain “the complete ‘genetic programme’ for making all of the cell types in an embryonic and adult body” (2008, p. 708). Synthetic morphology, by contrast, aims at creating entirely novel genetic programs for the self-organization of cells into structures and forms. “Synthetic morphology, if it can be brought into being,” as he enthusiastically wrote in 2008, “has the potential to expand dramatically the range of possibilities of tissue engineering, both extra- and intracorporeal. It also has the potential to play a very important role in the verification of the results of basic developmental biology” (p. 707).

Morphogenesis in this synthetic approach is carried out by “morphogenetic effector modules,” which cover the most common morphogenetic mechanisms and are each switched on and off by a single master control gene (Davies 2008, p. 709; 2014). It is convenient “that most normal mammalian morphogenesis seems to take place using about ten basic cellular behaviours, each tissue using them in different sequences and to different extents” (Cachat et al. 2014, p. 1). These morphogenetic mechanisms include elective cell death, cell proliferation, cell fusion, cell locomotion, chemotaxis, cell adhesion, sorting, and folding. The morphogenetic effector modules should work autonomously, without requiring a developmental program in the host genome. As development is a complex, genetically encoded sequence and combination of morphogenetic mechanisms, synthetic morphogenesis has to start with simple demonstrations in populations of engineered cells. Studies have shown that a single specific “driver” gene often drives a morphogenetic mechanism in cultured cells: “This is typically a non-constitutive signalling molecule, adhesion molecule or transcription factor that can, when expressed or activated, organize the directed self-assembly of housekeeping proteins that are constitutively present in the cell (actin, for example)” (Davies 2008, p. 710). According to Davies, the diversity of drivers in various cell lines does not create a problem because, in keeping with the demand for abstraction in synthetic biology, each mechanism requires only one reliable morphogenetic effector module.

Creating morphogenetic effector modules is not enough to design morphogenesis. An information processing system is needed that integrates information about the environment of a cell in order to trigger the effector modules. To avoid unwanted cross-talk with normal cellular physiology, Davies advocates the use of synthetic NOR gates to construct information processing modules. These modules should integrate multiple continuous signals and store and process them digitally as on/off triggers. Finally, sensor modules should monitor the chemical environment. All modules together—the morphogenetic effector modules, the information processing modules, and the sensory modules—constitute the morphogenetic machinery. Such machinery can be plugged into an environment of cells which, ideally, “has lost its developmental programme entirely so that it is a blank slate free of the risk of unexpected, endogenous developmental responses to introduced programmes” (Davies 2008, p. 715).
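The layering Davies describes can be summarized schematically: sensor modules threshold chemical signals into on/off values, an information-processing module combines them (here through a single NOR gate, following his preference for NOR logic), and the result toggles an effector module. All class names, thresholds, and signal values in the sketch are hypothetical illustrations, not Davies’s implementation.

```python
# Schematic sketch of the module layering; names, thresholds, and signal
# values are hypothetical and for illustration only.
from dataclasses import dataclass

def NOR(a: bool, b: bool) -> bool:
    """A single NOR gate standing in for an information-processing module."""
    return not (a or b)

@dataclass
class Sensor:
    threshold: float
    def read(self, concentration: float) -> bool:
        # Digitize a continuous chemical signal into an on/off trigger.
        return concentration > self.threshold

@dataclass
class Effector:
    name: str
    active: bool = False

density_sensor = Sensor(threshold=0.5)    # hypothetical cell-density signal
stress_sensor = Sensor(threshold=0.2)     # hypothetical stress signal
adhesion = Effector("cell adhesion module")

# Switch adhesion on only when neither signal is present.
adhesion.active = NOR(density_sensor.read(0.1), stress_sensor.read(0.05))
print(adhesion)   # Effector(name='cell adhesion module', active=True)
```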

In his 2008 paper, Davies called for the construction “of a library of morphogenetic effector modules (in a standard plasmid/BAC form), each controlled by one driver gene,” as the first step towards synthetic morphology (p. 716). A next step could be the control of these effector modules by information processing modules and sensory modules for simple morphogenetic behavior. In 2014, Elise Cachat collaborated with Davies to publish the construction of five inducible effector modules capable of controlling cell adhesion, elective cell death, cell fusion, cell locomotion, and proliferation. These effector modules are standardized, publicly available, and connectable with existing logic and sensory modules. Based on this library, “sequences of morphogenetic effectors in a series of proof-of-concept trials for synthetic morphology [are combined], where cells are genetically programmed to organize themselves into designed 2D or 3D structures in response to artificial external stimuli” (Cachat et al. 2014, p. 8).

Tissue engineering is, of course, the desired goal of synthetic morphology. While 3D printing has become a widespread technology for tissue engineering in recent years, used to print cells directly or to seed them on printed templates, the genetic programming of form is still basic science. Approaching tissue engineering requires handling the entire cycle of development: patterning, differentiation, and morphogenesis. Patterning creates differences in initially identical cells, which result in differentiation—“a stable change in gene expression according to the patterning signals” (Davies and Cachat 2016, p. 697). While refinements to patterning systems that take triggers from outside have already been made synthetically, creating patterning of cells de novo has proved challenging. “Attempts are underway to produce a synthetic version of patterning by reaction–diffusion mechanisms thought to operate in real embryos, but, at the time of writing, no working system seems to have [been] published although some promising tools already exist” (Davies and Cachat 2016, p. 698). Other changes in gene expression initiate the morphogenesis of anatomical form. These processes run in loops that gradually shape an organism. However, mammalian synthetic morphology remains a challenge: mammalian cells are extremely slow compared to prokaryotic cells, with generation times measured in days rather than minutes, and their natural cell–cell interactions have to be blocked. Thus, synthetic morphology is at present more a testbed for examining morphogenesis in the lab than a field producing working tissue-engineering applications.

Formative Concepts for the Idea of Synthetic Morphology

Davies clearly outlined the engineering concept underlying his approach: “The design strategy advocated here is based closely on ideas that have proved successful in mechanical and electronic engineering. The two most important of these ideas are standardization of components and modularity of construction” (2008, p. 708). He can follow this approach because important formative concepts for modeling the cycle of development, including patterning, differentiation, and morphogenesis, have already been provided in morphology. One such precursor concept that Davies refers to is Alan Turing’s study of patterning by reaction–diffusion mechanisms (Davies and Cachat 2016, p. 698, referring to Turing 1952).

In 1952, the mathematician Alan Turing interpreted patterning abstractly as heterogeneity. Inspired by Conrad Hal Waddington’s concept of evocators, Turing conceived a mathematical model of the growing embryo. The question that motivated his model was: how can uniformly distributed signals in cells spread, self-organize, and form patterns; in other words, how can homogeneity turn into heterogeneity? Turing developed his model of morphogenesis on a purely chemical and physical basis. His model took into account Newton’s laws of motion for particles, but also osmotic pressures, chemical reactions, and the diffusion of the chemical substances. Based on such chemical and physical laws he was able to determine the changes of state of the uniformly distributed signals, which he called “morphogens.” Turing explained that “morphogen” was simply “the word being intended to convey the idea of a form producer. It is not intended to have any very exact meaning, but is simply the kind of substance concerned in this theory. The evocators of Waddington provide a good example of morphogens.… The genes themselves may also be considered to be morphogens. But they certainly form rather a special class. They are quite indiffusible” (Turing 1952, p. 38, referring to Waddington 1940). “Evocators,” as already outlined in the introduction, are chemical compounds that have been experimentally studied as embryonic inducers. Waddington’s evocator theory, in turn, “reinterpreted induction in terms of molecular biology [and] linked embryonic induction to enzymatic induction” (Ribatti 2014, p. 42; Waddington 1938, 1940).

With morphogenesis defined as the ability to turn homogeneity into heterogeneity, the mathematical core problem of Turing’s model was the breaking of the symmetry of the embryo in its spherical blastula stage (spherical symmetry). In analogy to electrical oscillators, he described symmetry breaking through instabilities due to irregularities in the initial state, which lead to new equilibria once symmetry has been lost. “The ultimate fate of the system will be a state of oscillation at its appropriate frequency, and with an amplitude (and a waveform) which are also determined by the circuit. The phase of the oscillation alone is determined by the disturbance” (Turing 1952, p. 42). Based on this analogy from physics and mechanics, he arrived at various cases of biomolecular oscillation in a ring of cells that could lead to various forms of symmetry breaking—some more biologically relevant than others. More precisely, Turing induced symmetry breaking by diffusion-driven instability resulting from the interaction between two diffusing morphogen populations: one population acting as an activator (a positive feedback loop on morphogen production) and the other as an inhibitor (a negative feedback loop on morphogen production). By carrying out one of the earliest computer simulations and visualizations in the history of science (computed on the Manchester University Computing Machine), Turing found that the domain size played a crucial role in the breakdown of homogeneity and, thus, in the formation of patterns. These forms of symmetry breaking resulted in “Turing patterns” of heterogeneously distributed cells, for instance, in a “dappled” pattern resulting from a specific type of morphogen system. Although Turing thought that only “imaginary biological systems” could be treated by his model (1952, p. 72), his reaction–diffusion theory of morphogenesis is applied to pigmentation patterns in biology today (Woolley et al. 2017). Furthermore, it will probably play an important role in forthcoming synthetic morphology for patterning fields of cells de novo (Davies and Cachat 2016).
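What “diffusion-driven instability” means can be shown with a short numerical sketch. The kinetics below are a Schnakenberg-type activator–substrate system often used to demonstrate Turing’s mechanism, not Turing’s original equations, and all parameter values are illustrative assumptions. The point is simply that a slowly diffusing species coupled to a much faster-diffusing one amplifies small random deviations from homogeneity into a stationary periodic pattern on a ring of cells.

```python
# Minimal 1D sketch of diffusion-driven (Turing) instability, using
# Schnakenberg-type kinetics; all parameters are illustrative assumptions.
import numpy as np

n_cells, dx, dt = 200, 1.0, 0.005
Du, Dv = 1.0, 40.0                       # the second species diffuses much faster
a, b = 0.1, 0.9                          # feed rates of the two species
u = (a + b) + 0.01 * np.random.rand(n_cells)        # near-homogeneous start
v = b / (a + b) ** 2 + 0.01 * np.random.rand(n_cells)

def laplacian(w):
    # Periodic boundary conditions: the domain is a ring of cells.
    return (np.roll(w, 1) + np.roll(w, -1) - 2.0 * w) / dx**2

for _ in range(40000):                   # integrate to t = 200
    reaction_u = a - u + u * u * v       # autocatalytic production of u
    reaction_v = b - u * u * v           # v is consumed where u is produced
    u += dt * (Du * laplacian(u) + reaction_u)
    v += dt * (Dv * laplacian(v) + reaction_v)

print(np.round(u[::20], 2))              # alternating peaks and troughs: a Turing pattern
```

Domain size matters here just as in Turing’s analysis: a ring too small to accommodate the band of unstable wavelengths simply stays homogeneous.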

However, Turing’s mathematical model only provided a theory of uniformly distributed signals; in fact, the nonuniform graded distribution of morphogens plays a crucial role in pattern formation and morphogenesis. Lewis Wolpert described nonuniform graded distribution with his well-known French flag model, first published in Waddington’s influential edited volume Towards a Theoretical Biology. I. Prolegomena (Waddington ed. 1968). As Wolpert understood them, morphogens were signaling molecules that act directly on cells—not through serial induction. Depending on the morphogen concentration, specific cellular responses were produced that determined the cell’s fate. Wolpert is known for his criticism of induction theory and simplistic thinking in genetics. It was his explicit opinion as a theoretician that, without understanding spatial differentiation and morphogenesis, “appropriate questions at the molecular level, and thus linking gene action with the development of form” could not be posed (Wolpert 1969, p. 3). He criticized contemporary geneticists like Lederberg (1967) for their simplistic understanding of development as a sequential process of protein expression. Such thinking, he asserted, would render “the possibility of obtaining a set of general principles enabling one to deal with the translation of genetic information into cellular patterns and forms … almost hopeless” (Wolpert 1969, p. 3). Even harsher was his criticism of inductive theory, which was the main approach of classical experimental embryology. Induction and related concepts, Wolpert pointed out, had “completely obscured the problems of pattern formation … [failing] to consider the problem of spatial organization; … and I regard the misuse of concepts of induction as a major feature preventing progress in understanding pattern formation” (Wolpert, quoted in Ribatti 2014, p. 43).

Not surprisingly, Wolpert’s own theory concerned the spatial organization of morphogens for pattern formation and morphogenesis, as well as the idea that quantitative differences were translated into patterns. He was aware that the

central problem of the development of form and pattern is how genetic information can be translated in a reliable manner to give specific and complex multicellular forms and varying spatial patterns of cellular differentiation. In considering this problem it is important and convenient to distinguish between molecular differentiation, spatial differentiation and morphogenesis, while recognizing their interdependence. (Wolpert 1969, p. 2)

While molecular differentiation is related to changes within a cell, spatial differentiation deals with the formation of spatial patterns by individual cells within a population and requires intercellular communication. Finally, morphogenesis is the “molding of form” due to cellular forces. Wolpert suggested that this mechanism was universal; in particular, his French flag model became famous for helping to understand the effect of a morphogen on cell differentiation (Grove and Monuki 2013). The model described the decline of a gradient of morphogen concentration across a morphogenetic field: high concentrations activate the “blue” molecular differentiation (gene), while lower concentrations activate the “white” and “red” differentiations. Thus, morphogen concentration gradients were responsible for generating different cell types in a distinct spatial order, particularly during early embryonic development. The French flag model was a balancing model based on a positional principle; that is, a cell’s position determined the nature of its differentiation (positional information).
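A toy rendering of this positional-information idea makes the model’s logic explicit: each cell reads its local morphogen concentration from a graded profile and compares it with two thresholds to choose a fate. The exponential gradient, the threshold values, and the fate labels below are illustrative assumptions, not quantities from Wolpert’s paper.

```python
# Toy French flag model: positional information from a morphogen gradient.
# Gradient shape, thresholds, and fate labels are illustrative assumptions.
import numpy as np

x = np.linspace(0.0, 1.0, 30)            # positions along the field, source at x = 0
concentration = np.exp(-3.0 * x)         # graded morphogen, highest near the source

def fate(c, high=0.6, low=0.3):
    # Two thresholds partition the field into three territories.
    if c > high:
        return "blue"
    if c > low:
        return "white"
    return "red"

fates = [fate(c) for c in concentration]
print("".join(f[0].upper() for f in fates))   # e.g. BBBBBWWWWWWWRRR... : three bands by position
```

Changing the thresholds or the steepness of the gradient shifts the boundaries between the three territories, which is the sense in which quantitative differences are translated into pattern.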

Wolpert’s understanding of morphogens turned out to be fundamental for biomolecular morphology. The first morphogen, Bicoid, was identified in the Drosophila syncytial embryo in the 1980s (Driever and Nüsslein-Volhard 1988). Many others followed, and most of them were secreted proteins. However, morphogen gradient systems are complex systems involving signaling centers and transcriptional effectors. Elisabeth Grove and Edwin Monuki summarize some common features of morphogen gradient systems as follows:

1. Morphogens are released from dynamic localized sources, assemble with other molecules, and move via diffusion through the extracellular space.
2. Gradient shape is determined by flux from sources, diffusivity, and clearance from the tissue.
3. Morphogen concentration and duration are transmitted linearly to intracellular molecules, ultimately resulting in the graded and proportional activity of transcriptional effectors.
4. Transcriptional effectors participate in complex regulatory networks that involve preexisting intrinsic factors, which ultimately determine target gene responses.
5. Feedback mechanisms act to buffer fluctuations in morphogen production, regulate signaling. (Grove and Monuki 2013, p. 29)

Wolpert’s model is reflected in Davies’s account in the need for a full morphogenetic machinery, including morphogenetic effector modules, information processing modules, and sensory modules. In particular, “sensory modules are needed for two purposes: to detect signals generated by cells of the system, and to monitor physical or chemical aspects of the environment” (Davies 2008, p. 714). Information processing modules then incorporate the signals of the sensory modules and turn morphogenetic effector modules on and off.

Conclusion

What Bensaude Vincent stated for synthetic biology is also true for synthetic morphology: “Knowledge is acquired through creation or synthesis” (2013, p. 126). Equating understanding with creating and synthesizing is the engineering, or technoscientific, approach to overcoming the descriptive status of morphology. Another approach to understanding by creation is modeling and simulation, of which Davies was aware when he explicitly referred to Meinhardt’s book, Models of Biological Pattern Formation (1982) (Davies and Cachat 2016, p. 698). In an overview paper, Meinhardt wrote:

A central issue in developmental biology is how the complex structure of a higher organisms is generated from a single cell in a reproducible way. Basic concepts, such as positional information (Wolpert, 1969) or the embryonic organizer (Spemann and Mangold, 1924), have been derived from experiments involving perturbations of normal development. From the observed regulatory phenomena one cannot directly deduce the molecular mechanism on which development is based. We have used such observations to develop specific models for different developmental situations. By computer simulation, we have shown that the regulatory features of the models correspond closely to the experimental observations. (1996, p. 123)

The challenge for modeling morphogenesis was, and still is, the fact that developmental systems generate patterns from a more or less structureless initial situation.

Meinhardt’s quote summarizes the basic epistemic strategies: first, that not observation but modeling leads to understanding; second, that complexity results from simple elements. The idea of generating complex structures from basic ones has been known as the synthetic approach since René Descartes. In his Discours de la Méthode, Descartes called for dividing each of the difficulties under examination into basic elements (in other words, analyzing them) and for synthesizing from these elements the solution to the difficulties (Descartes 1637, part II). Although modeling and simulation, as well as synthetic design, use the same epistemic strategies, in a model- and simulation-based approach the synthesized result is, again, a representation, while in engineering and technoscience it is “a new world” (Nordmann 2006, p. 63). While the representation can be used for better understanding, it is questionable whether a new and artificial world would advance understanding of the given one. This is all the more true when the synthetic creation follows the idea of abstraction: “simpler to model and easier to use in combination” (Endy 2005, p. 452). Because this idea of abstraction is guided by mathematical considerations and realizes mathematical abstraction in wetware, that is, in the DNA design of organisms, it leads to the question: Will synthetic morphology help to understand morphogenesis better, or will it just enable biologists to engineer morphogenesis? In other words, does Feynman’s credo hold for biology?

The answer to this question depends on the nature of biology: is it general or specific? These questions are related. If the nature of biology is specific, synthetic biology would be conceived as engineering capable of creating artificial morphogenetic systems for applications such as tissue engineering. This conception would only partly fulfill Davies’s hope that synthetic biology will “play a very important role in the verification of the results of basic developmental biology” (2008, p. 707). What is constructed, tested, and understood would be artificial as such. Such a system might perhaps work for a limited time under laboratory conditions and succeed in creating some specific kind of artificial tissue. But it would not explain molecular differentiation, spatial differentiation, and morphogenesis in general, which would be the goal of scientific understanding, let alone account for evolutionary developmental biology and epigenetics. It would not grasp the general nature of biology, as “synthetic morphology is intended to create structures that do not exist in any normal developmental programme” (Davies 2008, p. 708).

In turn, if the nature of biology is general, synthetic biology could become a science, provided it succeeds in elucidating underlying principles and its designs are not too artificial. Perhaps the riddle of the “homeobox” in evolutionary developmental biology will be the salvation of synthetic morphology. “The importance of homeobox-containing genes (including the Hox genes), from an evolutionary perspective, is that they reveal the existence of a general mechanism underlying the development of morphologically diverse organisms. This riddle constitutes something of a paradox—where does the diversity come from if the genes are highly conserved?” (Arthur 2002, p. 758). However, the paradox would be good news for the general nature of biology as well as for Davies’s synthetic morphology approach. The same applies to Haeckel’s promorphology. Haeckel was convinced that the nature of biology is general, that underlying principles can be found, and that he had conceived (as noted in the appendix of the first volume of his Generelle Morphologie) “the promorphological form as a general system of all forms” (1866, p. 554). Perhaps Davies’s synthetic machinery is this general system of all forms? Or perhaps it is only a wetware version of Turing’s “imaginary biological systems” (Turing 1952, p. 72).