
From Motions to Emotions: Can the Fundamental Emotions be Expressed in a Robot Swarm?


Abstract

This paper explores the expressive capabilities of a swarm of miniature mobile robots within the context of inter-robot interactions and their mapping to the so-called fundamental emotions. In particular, we investigate how motion and shape descriptors that are psychologically associated with different emotions can be incorporated into different swarm behaviors for the purpose of artistic expositions. Based on these characterizations from social psychology, a set of swarm behaviors is created, where each behavior corresponds to a fundamental emotion. The effectiveness of these behaviors is evaluated in a survey in which the participants are asked to associate different swarm behaviors with the fundamental emotions. The results of the survey show that most participants associated each video with the emotion it was designed to portray. These results confirm that abstract descriptors associated with the different fundamental emotions in social psychology provide useful motion characterizations that can be effectively transformed into expressive behaviors for a swarm of simple ground mobile robots.



Notes

  1. In this context, the term valence designates the intrinsic attractiveness (positive valence) or aversiveness (negative valence) of an event, object, or situation [20]. The valence of an emotion thus characterizes its positive or negative connotation. Among the fundamental emotions, happiness and surprise have positive valence, while the remaining four—sadness, fear, disgust and anger—are classified under negative valence [51]. The term arousal, on the other hand, refers to the activation or deactivation associated with an emotion.

References

  1. Ackerman E (2014) Flying LampshadeBots Come Alive in Cirque du Soleil. IEEE Spectrum

  2. Alonso-Mora J, Siegwart R, Beardsley P (2014) Human–robot swarm interaction for entertainment: from animation display to gesture based control. In: Proceedings of the 2014 ACM/IEEE international conference on human–robot interaction, HRI ’14. ACM, New York, pp 98–98

  3. Aronoff J (2006) How we recognize angry and happy emotion in people, places, and things. Cross Cult Res 40(1):83–105

  4. Barakova EI, Lourens T (2010) Expressing and interpreting emotional movements in social games with robots. Pers Ubiquitous Comput 14(5):457–467

  5. Belpaeme T, Baxter P, Read R, Wood R, Cuayáhuitl H, Kiefer B, Racioppa S, Kruijff-Korbayová I, Athanasopoulos G, Enescu V, Looije R, Neerincx M, Demiris Y, Ros-Espinoza R, Beck A, Cañamero L, Hiolle A, Lewis M, Baroni I, Nalin M, Cosi P, Paci G, Tesser F, Sommavilla G, Humbert R (2013) Multimodal child–robot interaction: building social bonds. J Hum Robot Interact 1(2):33–53

  6. Bi T, Fankhauser P, Bellicoso D, Hutter M (2018) Real-time dance generation to music for a legged robot. In: 2018 IEEE/RSJ international conference on intelligent robots and systems (IROS). Madrid, pp 1038–1044

  7. Breazeal C (2003) Toward sociable robots. Robot Auton Syst 42:167–175

  8. Bretan M, Hoffman G, Weinberg G (2015) Emotionally expressive dynamic physical behaviors in robots. Int J Hum Comput Stud 78:1–16

  9. Brown L, Kerwin R, Howard AM (2013) Applying behavioral strategies for student engagement using a robotic educational agent. In: 2013 IEEE international conference on systems, man, and cybernetics. pp 4360–4365

  10. Cabibihan JJ, Javed H, Ang M, Aljunied SM (2013) Why robots? A survey on the roles and benefits of social robots in the therapy of children with autism. Int J Soc Robot 5(4):593–618

  11. Camurri A, Mazzarino B, Ricchetti M, Timmers R, Volpe G (2004) Multimodal analysis of expressive gesture in music and dance performances. In: Camurri A, Volpe G (eds) Gesture-based communication in human–computer interaction. Springer, Berlin, Heidelberg, pp 20–39

  12. Collier GL (1996) Affective synesthesia: extracting emotion space from simple perceptual stimuli. Motiv Emot 20(1):1–32

  13. Cortes J, Martinez S, Karatas T, Bullo F (2004) Coverage control for mobile sensing networks. IEEE Trans Robot Autom 20(2):243–255

  14. de Rooij A, Broekens J, Lamers MH (2013) Abstract expressions of affect. Int J Synth Emot 4(1):1–31

  15. Dean M, D’Andrea R, Donovan M (2008) Robotic chair. Contemporary Art Gallery, Vancouver, B.C

  16. Diaz-Mercado Y, Lee SG, Egerstedt M (2015) Distributed dynamic density coverage for human–swarm interactions. In: 2015 American control conference (ACC). pp 353–358

  17. Dietz G, Jane JE, Washington P, Kim LH, Follmer S (2017) Human perception of swarm robot motion. In: Proceedings of the 2017 CHI conference extended abstracts on human factors in computing systems. Denver, Colorado, pp 2520–2527

  18. Dunstan BJ, Silvera-Tawil D, Koh JTKV, Velonaki M (2016) Cultural robotics: robots as participants and creators of culture. In: Koh JT, Dunstan BJ, Silvera-Tawil D, Velonaki M (eds) Cultural robotics. Springer, Cham, pp 3–13

  19. Ekman P (1993) Facial expression and emotion. Am Psychol 48(4):384–392

  20. Frijda N (1986) The emotions. Studies in emotion and social interaction. Cambridge University Press, Cambridge

  21. Goodrich MA, Schultz AC (2007) Human–robot interaction: a survey. Found Trends Hum Comput Interact 1(3):203–275

  22. Hoffman G (2012) Dumb robots, smart phones: a case study of music listening companionship. In: 2012 IEEE RO-MAN: The 21st IEEE international symposium on robot and human interactive communication. pp 358–363

  23. Hoffman G, Kubat R, Breazeal C (2008) A hybrid control system for puppeteering a live robotic stage actor. In: RO-MAN 2008—the 17th IEEE international symposium on robot and human interactive communication. pp. 354–359

  24. Hoffman G, Weinberg G (2010) Gesture-based human–robot jazz improvisation. In: 2010 IEEE international conference on robotics and automation. pp 582–587

  25. Izard CE (1977) Human emotions. Emotions, personality, and psychotherapy. Springer, New York

  26. Izard CE (2009) Emotion theory and research: highlights, unanswered questions, and emerging issues. Annu Rev Psychol 60:1–25

  27. Juslin P (2005) From mimesis to catharsis: expression, perception, and induction of emotion in music. Oxford University Press, Oxford

  28. Justh EW, Krishnaprasad PS (2003) Steering laws and continuum models for planar formations. In: 42nd IEEE International conference on decision and control. vol. 4, pp 3609–3614

  29. Knight H, Simmons R (2014) Expressive motion with x, y and theta: Laban effort features for mobile robots. In: The 23rd IEEE international symposium on robot and human interactive communication. pp 267–273

  30. Kolling A, Walker P, Chakraborty N, Sycara K, Lewis M (2016) Human interaction with robot swarms: a survey. IEEE Trans Hum Mach Syst 46(1):9–26

  31. Kozima H, Michalowski MP, Nakagawa C (2009) Keepon. Int J Soc Robot 1(1):3–18

  32. Laban R, Lawrence F (1947) Effort. Macdonald & Evans Ltd, London

  33. LaViers A, Teague L, Egerstedt M (2014) Style-based robotic motion in contemporary dance performance. Springer, Cham, pp 205–229

  34. Lee D, Park S, Hahn M, Lee N (2014) Robot actors and authoring tools for live performance system. In: 2014 International conference on information science applications (ICISA). pp 1–3

  35. Lee JH, Park JY, Nam TJ (2007) Emotional interaction through physical movement. Springer, Berlin Heidelberg, pp 401–410

  36. Levillain F, St-Onge D, Zibetti E, Beltrame G (2018) More than the sum of its parts: assessing the coherence and expressivity of a robotic swarm. In: 2018 IEEE international symposium on robot and human interactive communication (RO-MAN). pp 583–588

  37. Lourens T, van Berkel R, Barakova E (2010) Communicating emotions and mental states to robots in a real time parallel framework using Laban movement analysis. Robot Auton Syst 58(12):1256–1265

  38. Marshall JA, Broucke ME, Francis BA (2004) Formations of vehicles in cyclic pursuit. IEEE Trans Autom Control 49(11):1963–1974

  39. Masuda M, Kato S, Itoh H (2010) A laban-based approach to emotional motion rendering for human–robot interaction. In: Yang HS, Malaka R, Hoshino J, Han JH (eds) Entertainment computing—ICEC 2010. Springer, Berlin Heidelberg, pp 372–380

  40. Nakazawa A, Nakaoka S, Ikeuchi K, Yokoi K (2002) Imitating human dance motions through motion structure analysis. IEEE/RSJ Int Conf Intell Robot Syst 3:2539–2544

  41. Olfati-Saber R (2002) Near-identity diffeomorphisms and exponential \(\epsilon \)-tracking and \(\epsilon \)-stabilization of first-order nonholonomic SE(2) vehicles. In: Proceedings of the 2002 American control conference. vol 6, pp 4690–4695

  42. Or J (2009) Towards the development of emotional dancing humanoid robots. Int J Soc Robot 1(4):367

  43. Perkowski M, Bhutada A, Lukac M, Sunardi M (2013) On synthesis and verification from event diagrams in a robot theatre application. In: 2013 IEEE 43rd international symposium on multiple-valued logic. pp 77–83

  44. Perkowski M, Sasao T, Kim JH, Lukac M, Allen J, Gebauer S (2005) Hahoe KAIST robot theatre: learning rules of interactive robot behavior as a multiple-valued logic synthesis problem. In: 35th International symposium on multiple-valued Logic (ISMVL’05), pp 236–248

  45. Pickem D, Lee M, Egerstedt M (2015) The GRITSBot in its natural habitat–a multi-robot testbed. 2015 IEEE International conference on robotics and automation (ICRA). Seattle, WA, pp 4062–4067

  46. Plutchik R (2001) The nature of emotions: human emotions have deep evolutionary roots, a fact that may explain their complexity and provide tools for clinical practice. Am Sci 89(4):344–350

  47. Pollick FE, Paterson HM, Bruderlin A, Sanford AJ (2001) Perceiving affect from arm movement. Cognition 82(2):B51–B61

  48. Ramirez JL, Pavone M, Frazzoli E, Miller DW (2009) Distributed control of spacecraft formation via cyclic pursuit: theory and experiments. In: 2009 American control conference. pp 4811–4817

  49. Rimé B, Boulanger B, Laubin P, Richir M, Stroobants K (1985) The perception of interpersonal emotions originated by patterns of movement. Motiv Emot 9(3):241–260

  50. Ross RT (1938) A statistic for circular series. J Educ Psychol 29(5):384–389

  51. Russell JA (1980) A circumplex model of affect. J Pers Soc Psychol 39(6):1161–1178

  52. Schoellig AP, Siegel H, Augugliaro F, D’Andrea R (2014) So you think you can dance? Rhythmic flight performances with quadrocopters. Springer, Cham, pp 73–105

  53. Sheridan TB (2016) Human–robot interaction: status and challenges. Hum Factors 58(4):525–532

  54. Shinozaki K, Iwatani A, Nakatsu R (2008) Construction and evaluation of a robot dance system. In: RO-MAN 2008—the 17th IEEE international symposium on robot and human interactive communication. pp 366–370

  55. St-Onge D, Levillain F, Zibetti E, Beltrame G (2019) Collective expression: how robotic swarms convey information with group motion. Paladyn J Behav Robot 10:418–435

  56. Sunardi M, Perkowski M (2018) Music to motion: using music information to create expressive robot motion. Int J Soc Robot 10(1):43–63

  57. Vlachos E, Jochum E, Demers L (2018) Heat: The harmony exoskeleton self-assessment test. In: 2018 27th IEEE international symposium on robot and human interactive communication (RO-MAN). pp 577–582

  58. Wilson S, Glotfelter P, Wang L, Mayya S, Notomista G, Mote M, Egerstedt M (2020) The robotarium: globally impactful opportunities, challenges, and lessons learned in remote-access, distributed control of multirobot systems. IEEE Control Syst Mag 40(1):26–44

Funding

This work was supported by “la Caixa” Banking Foundation under Grant LCF/BQ/AA16/11580039.

Author information

Corresponding author

Correspondence to María Santos.

Ethics declarations

Conflict of interest

The authors declare that they have no conflict of interest.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Appendices

A Swarm Behaviors

In Sect. 3.1, a series of swarm behaviors were designed based on the movement and shape attributes associated with the different fundamental emotions. This appendix includes the mathematical expressions of the control laws used to produce the different swarm behaviors. Note that all the control laws included here treat each robot in the swarm as a point that can move omnidirectionally according to single integrator dynamics as in (1). The transformation from single integrator dynamics to unicycle dynamics is discussed in detail in Appendix B.

Fig. 16

Density functions used to represent the emotions of anger (a), disgust (b) and fear (c). The higher the density (darker color), the higher the concentration of robots in that area. The red circles represent the positions of the agents once the control law in (13) has converged

1.1 A.1 Happiness

The swarm movement selected for the happiness behavior consists of the robots following the contour of a circle with a superimposed sinusoid. This shape is illustrated in Fig. 15a and can be parameterized as

$$\begin{aligned} \begin{aligned} x_{h}(\theta )&= (R + A\sin (f\theta )) \cos \theta ,\\ y_{h}(\theta )&= (R + A\sin (f\theta )) \sin \theta , \end{aligned} \quad \theta \in [0, 2\pi ), \end{aligned}$$
(3)

where R is the radius of the main circle and A and f are the amplitude and frequency of the superimposed sinusoid, respectively. For the shape in Fig. 15a, the frequency of the superimposed sinusoid is \(f=6\).

If we have a swarm of N robots, we can initially position Robot i according to

$$\begin{aligned} p_i(0) = [x_h(\theta _i(0)), ~y_h(\theta _i(0))]^T,\quad i=1,\dots , N, \end{aligned}$$
(4)

with

$$\begin{aligned} \theta _i(0) = 2\pi i/N. \end{aligned}$$
(5)

Then the team will depict the desired shape if each robot follows a point evolving along the contour in (3),

$$\begin{aligned} \dot{p}_i = [x_h(\theta _i(t)), y_h(\theta _i(t))]^T - p_i, \end{aligned}$$
(6)

with \(\theta _i\) a function of time \(t\in \mathbb {R}_+\),

$$\begin{aligned} \theta _i(t) = \text {atan2}(\sin (t + \theta _i(0)), \cos (t + \theta _i(0))). \end{aligned}$$
(7)
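
As an illustration of how (3)–(7) fit together, the following Python sketch simulates the happiness behavior for a team of single-integrator robots. It is a minimal sketch: only \(f=6\) is stated in the text, so the values of R, A, N and the forward-Euler integration step are assumptions made for the example.

```python
import numpy as np

# Assumed parameters: only f = 6 is given in the text (Fig. 15a);
# R, A, N and dt are illustrative values.
R, A, f = 1.0, 0.2, 6
N = 10

def happiness_contour(theta):
    """Point on the circle with superimposed sinusoid, Eq. (3)."""
    r = R + A * np.sin(f * theta)
    return np.array([r * np.cos(theta), r * np.sin(theta)])

def phase(t, theta0):
    """Phase of the tracked point, Eq. (7), wrapped to (-pi, pi]."""
    return np.arctan2(np.sin(t + theta0), np.cos(t + theta0))

# Initial phases and positions, Eqs. (4)-(5)
theta0 = 2 * np.pi * np.arange(1, N + 1) / N
p = np.array([happiness_contour(th) for th in theta0])   # N x 2 positions

def happiness_controller(p, t):
    """Single-integrator velocities of Eq. (6): track the moving contour point."""
    targets = np.array([happiness_contour(phase(t, th)) for th in theta0])
    return targets - p

# Forward-Euler rollout of the behavior (integration scheme assumed)
dt = 0.01
for k in range(2000):
    p = p + dt * happiness_controller(p, k * dt)
```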

1.2 A.2 Surprise

In the case of the surprise emotion, each robot follows a point moving along a circle with expanding radius, as in Fig. 15b. Such a shape can be parameterized as,

$$\begin{aligned} \begin{aligned} x_{sur}(\theta , t)&= R(t) \cos \theta ,\\ y_{sur}(\theta , t)&= R(t) \sin \theta , \end{aligned} \quad \theta \in [0, 2\pi ), \end{aligned}$$
(8)

with

$$\begin{aligned} R(t) = \text {mod}(t, R_{max}-R_{min})+R_{min},\quad t\in \mathbb {R}_+, \end{aligned}$$
(9)

to create a radius that expands from \(R_{min}\) to \( R_{max}\).

Analogously to the procedure described in Appendix A.1, in this case the robots can be initially located at

$$\begin{aligned} p_i(0) = [x_{sur}(\theta _i(0), 0), y_{sur}(\theta _i(0), 0)]^T,\quad i=1,\dots , N, \end{aligned}$$
(10)

with \(\theta _i(0)\) given by (5). The controller for each robot is then given by,

$$\begin{aligned} \dot{p}_i = [x_{sur}(\theta _i(t), t), y_{sur}(\theta _i(t), t)]^T - p_i, \end{aligned}$$
(11)

with \(\theta _i(t)\) as in (7).
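
The surprise behavior reuses the same tracking structure and differs only in the contour being followed. The sketch below, again with assumed values for \(R_{min}\) and \(R_{max}\) (they are not given numerically in the text), evaluates the expanding circle of (8)–(9) and the controller in (11).

```python
import numpy as np

# Assumed values; R_min and R_max are not specified in the text.
R_min, R_max = 0.3, 1.0
N = 10
theta0 = 2 * np.pi * np.arange(1, N + 1) / N   # initial phases, Eq. (5)

def surprise_radius(t):
    """Expanding radius of Eq. (9): grows from R_min towards R_max, then resets."""
    return np.mod(t, R_max - R_min) + R_min

def surprise_contour(theta, t):
    """Point on the expanding circle of Eq. (8)."""
    r = surprise_radius(t)
    return np.array([r * np.cos(theta), r * np.sin(theta)])

def surprise_controller(p, t):
    """Single-integrator velocities of Eq. (11)."""
    phases = np.arctan2(np.sin(t + theta0), np.cos(t + theta0))   # Eq. (7)
    targets = np.array([surprise_contour(th, t) for th in phases])
    return targets - p
```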

1.3 A.3 Sadness

For the case of the sadness emotion, the robots move along a circle whose radius is small compared to the size of the domain. The strategy is analogous to the ones in (6) and (11), with the parameterization of the contour given by,

$$\begin{aligned} \begin{aligned} x_{sad}(\theta )&= R \cos \theta ,\\ y_{sad}(\theta )&= R \sin \theta , \end{aligned} \quad \quad \theta \in [0, 2\pi ), \quad R>0. \end{aligned}$$
(12)

1.4 A.4 Anger, Fear and Disgust

For the remaining emotions—anger, disgust and fear—the swarm coordination is based on a coverage control strategy, which allows the user to specify the areas in which the robots should concentrate.

If we denote by D the domain of the robots, the areas where we want to position the robots can be specified by defining a density function, \(\phi :D\rightarrow [0,\infty )\), that assigns higher values to the areas where we want the robots to concentrate. We can make the robots distribute themselves according to this density function by implementing a standard coverage controller such as the one in [13],

$$\begin{aligned} \dot{p}_i = \kappa (c_i(p) - p_i), \end{aligned}$$
(13)

where \(p = [p_1^T,\dots ,p_N^T]^T\) denotes the aggregate positions of the robots and \(\kappa >0\) is a proportional gain. In the controller in (13), \(c_i(p)\) denotes the center of mass of the Voronoi cell of Robot i,

$$\begin{aligned} c_i(p) = \frac{\int _{V_i(p)}q\phi (q)dq}{\int _{V_i(p)}\phi (q)dq}, \end{aligned}$$
(14)

with the Voronoi cell being characterized as,

$$\begin{aligned} V_i(p) = \{q\in D ~|~\Vert q-p_i\Vert \le \Vert q-p_j\Vert , j\ne i \}. \end{aligned}$$
(15)

Figure 16 shows the densities selected for each of the emotions, where the red circles represent the positions of the robots in the domain upon convergence, achieved by running the controller in (13).
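
The sketch below approximates the coverage controller in (13)–(15) on a discretized domain: every grid point is assigned to its closest robot, which yields a discrete Voronoi partition, and each robot then moves toward the density-weighted centroid of its cell. The Gaussian density used here is purely illustrative; the densities actually used for anger, disgust and fear (Fig. 16) are not given in closed form in the text.

```python
import numpy as np

kappa = 1.0                                               # proportional gain in Eq. (13)
grid = np.linspace(-1.0, 1.0, 100)
Q = np.array(np.meshgrid(grid, grid)).reshape(2, -1).T    # discretized domain D

def density(q):
    """Illustrative density phi: a single Gaussian bump centered at the origin."""
    return np.exp(-np.sum(q ** 2, axis=1) / 0.1)

phi = density(Q)

def coverage_controller(p):
    """Discrete approximation of Eqs. (13)-(15)."""
    # Assign every grid point to its closest robot (Voronoi cells, Eq. (15))
    d = np.linalg.norm(Q[:, None, :] - p[None, :, :], axis=2)   # (M, N) distances
    owner = np.argmin(d, axis=1)
    u = np.zeros_like(p)
    for i in range(p.shape[0]):
        cell = owner == i
        w = phi[cell]
        if w.sum() > 0:
            c_i = (Q[cell] * w[:, None]).sum(axis=0) / w.sum()  # centroid, Eq. (14)
            u[i] = kappa * (c_i - p[i])                         # Eq. (13)
    return u

# Example rollout from random initial positions
p = np.random.uniform(-1.0, 1.0, size=(10, 2))
for _ in range(200):
    p = p + 0.05 * coverage_controller(p)
```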

B Individual Robot Control

The swarm behaviors described in Appendix A assume that each robot in the swarm can move omnidirectionally according to

$$\begin{aligned} \dot{p}_i = u_i, \end{aligned}$$
(16)

with \(p_i=(x_i, y_i)^T\in \mathbb {R}^2\) the Cartesian position of Robot i in the plane and \(u_i=(u_{ix}, u_{iy})^T\in \mathbb {R}^2\) the desired velocity. However, the GRITSBot (Fig. 1) has a differential-drive configuration and cannot move omnidirectionally as its motion is constrained in the direction perpendicular to its wheels. Instead, its motion can be expressed as unicycle dynamics,

$$\begin{aligned} \dot{x}_i&= v_i \cos \theta _i,\nonumber \\ \dot{y}_i&= v_i \sin \theta _i,\nonumber \\ \dot{\theta }_i&= \omega _i, \end{aligned}$$
(17)

with \(\theta _i\) the orientation of Robot i and \((v_i, \omega _i)^T\) the linear and angular velocities executable by the robot, as shown in Fig. 17.

In this paper, the single integrator dynamics in (16) are converted into unicycle dynamics, as in (17), using a near-identity diffeomorphism [41],

$$\begin{aligned} \begin{pmatrix} v_i\\ \omega _i \end{pmatrix} = K \begin{pmatrix} \cos \theta _i & \sin \theta _i\\ -\dfrac{\sin \theta _i}{l} & \dfrac{\cos \theta _i}{l} \end{pmatrix} \begin{pmatrix} u_x\\ u_y \end{pmatrix}, \quad K, l>0. \end{aligned}$$
(18)
Fig. 17

Parameters involved in the near-identity diffeomorphism in (18), used to transform the single integrator dynamics in (16) into unicycle dynamics (17), executable by the GRITSBots. The pose of the robot is determined by its position, \(p=(x,y)^T\), and its orientation, \(\theta \). The single integrator control, u, is applied to a point \(\tilde{p}\) located at a distance l in front of the robot. The linear and angular velocities, v and \(\omega \), that allow the robot to track \(\tilde{p}\) are obtained by applying the near-identity diffeomorphism in (18)

A graphical representation of this transformation is included in Fig. 17: the input \(u = (u_x, u_y)^T\) is applied to a point \(\tilde{p}\), located at a distance l in front of the robot, which can move according to the single integrator dynamics in (16). The effect of this parameter on the movement of the robot is illustrated in Fig. 9. The parameter K acts as a proportional gain.
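
The mapping in (18) is simple to implement once the robot's heading is known. The sketch below is a minimal version; the gain K and look-ahead distance l are placeholders, not the values used on the GRITSBots.

```python
import numpy as np

def si_to_uni(u, theta, K=1.0, l=0.05):
    """Near-identity diffeomorphism of Eq. (18): maps the single-integrator
    velocity u = (u_x, u_y) of the point p-tilde into unicycle commands
    (v, omega) for a robot with heading theta. K and l are assumed values."""
    v = K * (np.cos(theta) * u[0] + np.sin(theta) * u[1])
    omega = (K / l) * (-np.sin(theta) * u[0] + np.cos(theta) * u[1])
    return v, omega

# Example: a robot facing along the x-axis commanded to move straight "up":
# v = 0 (no forward motion yet) and omega > 0 (turn left toward the target direction).
v, omega = si_to_uni(np.array([0.0, 1.0]), theta=0.0)
```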


About this article

Cite this article

Santos, M., Egerstedt, M. From Motions to Emotions: Can the Fundamental Emotions be Expressed in a Robot Swarm? Int J of Soc Robotics 13, 751–764 (2021). https://doi.org/10.1007/s12369-020-00665-6

