Introduction

Emotions have wide horizons, ranging as they do from social to cultural, and from political to religious, contexts and realities, as well as from entertainment settings that specifically target the emotions (such as cinematic drama) to identity-reinforcement practices, such as those of the supporters of football teams. This article emphasises the difference between computational models of the emotions that are concerned with individual sentiency, cognition, and agency, even when social interactions among individuals take place, and such computational modelling as belongs in crowd dynamics. Whereas crowd dynamics has been studied within psychology for over a century now, its modelling, combining emotion and cognition, within computational science (i.e., the branch of artificial intelligence that is concerned with populations of agents) is a quite recent development in research.

Emotion in computational modelling: introductory considerations

Already Colby’s (1975) Artificial Paranoia (cf. Colby 1981) attempted a computer simulation (however rudimentary) of pathological mind patterns. From the late 1990s, research into emotion within artificial intelligence has been conspicuously gaining momentum, if only by the sheer numbers of scholars and teams who have been devoting efforts to providing software models of emotion. Sometimes the goal is to elicit emotional responses from viewers by means of the behaviour of an artefact in which the model of emotions is rudimentary (Cassinis et al. 2007; Nissan et al. 2008), rather than to provide a sophisticated model of the emotions in the artefact itself, which is indeed the goal of much ongoing research.

It would be fair to consider Rosalind Picard’s Affective Computing (1997), reviewed by Sloman (1999) and Nissan (1999), to have been the watershed. Not that everybody has been emulating her, but at any rate her book foreshadowed the burgeoning of the field. Nor has what goes, among computer scientists, by the name ‘emotion’ always meant the same within artificial intelligence itself. Sloman (2000) remarked: “One of the notable features of recent AI literature is the proliferation of architecture diagrams in which there is a special box labelled ‘emotions’”, but “some theorists write as if all motives are emotions”, which is questionable (ibid., p. 7).

Research into emotion within artificial intelligence, virtual environments, or interface design within computer science has traditionally focussed on the emotions of individual agents, typically at the inter-individual level, whether these agents be humanoid or zoomorphic characters, such as screen avatars (e.g. Cassell et al. 2000) or characters in narratives (e.g. Dyer 1983a, b, 1987), or more abstract agents, such as computer interfaces interacting with users.

Bear in mind that it is possible to devise models of a social narrative in which susceptibility and vindictiveness are represented without delving into the dynamics of anger (Nissan 2008a), and the fostering of mass prejudice, its dynamics and feedback structure can also be represented, when dealing with social narratives formally, without getting into the inner workings of the primitive functions (Nissan 2008b, c). Elsewhere, I have dealt with the relation between emotion and culture (Nissan 1997).

Emotion in computational modelling: crowds versus sets of individual agents against the backdrop of theories of crowd dynamics

Sport, politics, and ideological or ethnic issues are volatile subjects. Even sport is not necessarily an exception to the statement that in these domains reasoning plays an important part; emotions in turn affect reasoning. Braud (1991, 1996) analyses the emotional dimensions of politics. Anger is sometimes prescribed by a given political ideology: Haggay Ram provides an analysis of a particular ideology in his ‘Mythology of rage: representations of the “Self” and the “Other” in Revolutionary Iran’ (Ram 1996). As to the relation between proximity/distance and attitudes to the Other, consider Freud’s notion of the narcissism of minor differences. Markey (1983, p. 375) reports the attitudes of Fox Native Americans to other Native American nations, in relation to language and identity loss: “I was told that: ‘Foreigners are more tolerated than nearer people. They have a different language and a different belief. It is just better to be from Switzerland than from here’” (ibid.).

A book by Andrew Adamatzky (2005) inaugurated the modelling of the beliefs and actions of crowds, and in particular of crowds developing “irrational” behaviour. Or is it irrational? Perhaps there is a rationality, after all, to what to a given observer may appear to be irrational. Some, within and without crowd psychology, may disagree with Adamatzky’s sense of “irrational”. I in turn argue that rational aspects can clearly be discerned even in situations of mob violence.

Whereas Adamatzky does not explicitly model emotional contagion, he does model how patterns emerge in a crowd that can only be ascribed to emotion distorting beliefs and channelling actions. He insists that in a crowd of agents, under some conditions, individual agency fades, and an irrational mob “mind” and agency emerges. And early on, Adamatzky (p. 7) quotes Martin (1920) to the effect that “Like the paranoiac, every crowd is potentially if not actually homicidal in its tendencies”.

The models developed by Adamatzky, however mathematically and computationally impressive, perforce aim at some core phenomena. They can be excused for their extreme simplification of the identities of the agents in the crowd, the environment, and the kinds of actions carried out by agents in such an environment.

Instead of developing an exegesis of Adamatzky’s work, suffice it to signal the importance of his book, as a springboard for future research into computational modelling of emotions, with particular reference to rational versus irrational behaviour in a crowd of agents, rather than in a set of individual agents. In my view, it is a desideratum to augment the general, bare core as provided by Adamatzky’s platform with richer models of particular patterns of human behaviour, models to be developed within the human sciences.

Mass hysteria is a tricky subject within studies of crowd dynamics. Suffice it to mention Robert Bartholomew’s (2001) Little Green Men, Meowing Nuns, and Head-Hunting Panics: Illness and Social Delusion. What appears to be irrational in crowd behaviour need not be as picturesque; bear in mind that in crowd behaviour, even what appears to be “irrational” may have its rationale. There are various theories of crowd dynamics; suffice it to mention a few of them. According to convergence theory, crowd behaviour is carried into the crowd by particular individuals, and crowds are a convergence of like-minded individuals. This is in contrast to contagion theory, according to which it is crowds that cause people to act in a certain way. Turner and Killian’s emergent-norm theory of crowd dynamics (e.g., Turner and Killian 1987; Killian 1980; Gordon 1983; Aguirre et al. 1998) holds that crowds are not irrational, even though social behaviour is not entirely predictable: a distinctive pattern of behaviour, amounting to a norm followed by example, can be observed to emerge in the behaviour of a crowd, such that this norm did not exist in the given crowd earlier. Reicher’s critique of emergent-norm theory claims that groups do not coalesce in a normless environment: the people bring with them a set of norms, so to Reicher there is no need for a norm to emerge.

Collective effervescence refers to people being energised at a gathering such as a sporting event, or a riot, a rave, or a carnival; Émile Durkheim (1912) used this concept in his theory of religion (considered as a fundamentally social phenomenon) as based on his study of the religious experience of Australian aborigines. According to Durkheim, gatherings of the tribe are sacred, and members experience a loss of individuality, unity with the gods, and unity with the group.

In some kinds of competition where partisanship is involved, other than sport, reasoning plays an important part: e.g., chess tournaments, or cultural competitions throughout history. Group identity is paramount in shaping the emotions in all of those categories. For studies in sport psychology, see Bakker et al. (1993), and Morris and Summers (1995). To the extent that enhancing coordination is the goal, this is a shared trait with those theories of performance that are oriented to risk management or the enhancement of safety, for example in a field as different as traffic psychology (Rothengatter and Carbonell Vayá 1997; cf. the Elsevier journal Transportation Research, Part F: Traffic Psychology and Behaviour).

Distinguish between the emotions of performers and those of viewers in sport. For example, Giulianotti et al. (1994) are concerned with football violence, itself an instance of emotional contagion (Hatfield et al. 1994) on the scale of a crowd. Ronald Ammons (1993) considers the treatment of emotional conditioning in baseball training. Snyder (1990) is also concerned with emotion and sport, and discusses a case study of collegiate women gymnasts. Jay Bass (1990), in the sociology of leisure sailing, focusses on the lived emotion involved: the enjoyment of fear. Strong emotion in relation to physical activity in an open public space does not arise only while practising or watching some sport; an example of high-emotion physical activity that is not sport is riding a roller-coaster.

One possible direction of research, when trying to further develop Adamatzky’s framework, could be to semantically enrich both the environment and the agents. Yet, since the development of ontologies and of common-sense knowledge is practically feasible only in restricted domains, one would have to develop separate models and computer simulations for given kinds of violent behaviour on the part of individuals within a crowd: for example, car-drivers, or mobs at the stadium (in the case of sport).

The rule of reason, emotion, and affective computing

In his book, Adamatzky consistently refers to the negative role of irrationality. By and large, within the frameworks he has set for himself (violent mobs, and the beliefs behind their actions), this may be justified. This does not mean that he posits the rule of reason, in isolation from emotion. Far from it. His very point is that emotion affects belief. “So, emotions are not simply indispensable in cognition, collective knowledge and development of social structure (see e.g. Barbalet (2000))—emotions govern cognition” (Adamatzky, p. 65).

Still, it was only at a relatively late stage that, within computer science, the myth of cold reason gave way to respectability being bestowed on emotional factors; concerning this, refer to my review (Nissan 1999) of Rosalind Picard’s Affective Computing (1997). In her introduction, Picard anticipates disbelief: “Is this not absurd?” (ibid., p. 1); “At first blush, emotions seem like the last thing we would want in an intelligent machine” (ibid., p. 2):

Why do I propose to bring emotion into computing, into what has been first and foremost a deliberate tool of science? Emotion is probably good for something, but its obvious uses seem to be for entertainment and social or family settings. Isn’t emotion merely a kind of luxury that if useful for computers, would only be of small consequence? This book claims that the answer is a solid ‘no’. Scientific findings contradict the conclusion that human emotions are a luxury. Rather, the evidence is mounting for an essential role of emotions in basic rational and intelligent behaviour (ibid.).

“This will no doubt sound outlandish to some people, who may wonder if I have not lost a wariness of emotions and their association with poor judgment and irrational behaviour. I have not” (Picard, p. ix). Quite on the contrary: “in completely avoiding emotion, computer designers may actually lead computers toward those undesirable goals” (ibid.), by analogy with evidence about human patients with too little emotion (ibid., pp. x and 11).

There exist computational models of social influence within emerging subcultures. In an environment called Sugarscape, Epstein and Axtell (1996) grew an artificial society whose population of individual agents are each labelled with a bitstring (randomly assigned at birth). From local interactions of the agents with their environment and with each other, population-level cultural patterns emerged. Kennedy and Eberhart explain (2001, p. 230):

The effect of agents’ taking tags from their partners is the formation of homogeneous red or blue populations [i.e., with more than half of their tags being ones or zeroes respectively] within each of the resource-rich regions; one region may be occupied by the red culture and the other with the blue, or both sites are either red or blue. Sociograms depicting the patterns of connections between pairs of agents that have interacted with one another reveal dense interlinkage within clusters and almost no connections between them. In other words, agents interacting with their neighbours become more similar and gravitate together toward locations rich in resource—and cultures do not communicate with one another.
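
The tag-taking mechanism Kennedy and Eberhart describe can be conveyed by a minimal sketch (an illustration only, not Epstein and Axtell’s actual rule set: the pairing of random partners and the single-bit transmission rule are simplifying assumptions of this sketch):

```python
import random

def cultural_transmission(tags_a, tags_b, rng):
    """One interaction: agent A picks a random tag position of
    partner B; if the bits differ, B's bit is flipped to match A's."""
    i = rng.randrange(len(tags_b))
    if tags_b[i] != tags_a[i]:
        tags_b[i] = tags_a[i]

def colour(tags):
    """'Red' if more than half the tags are ones, else 'blue'."""
    return "red" if sum(tags) > len(tags) // 2 else "blue"

rng = random.Random(42)
# A small population with random 11-bit tag strings (an odd length
# guarantees a majority, hence an unambiguous colour).
agents = [[rng.randint(0, 1) for _ in range(11)] for _ in range(20)]

# Repeated pairwise interactions tend to drive the population
# toward homogeneity.
for _ in range(20000):
    a, b = rng.sample(range(len(agents)), 2)
    cultural_transmission(agents[a], agents[b], rng)

colours = {colour(t) for t in agents}
```

With enough interactions the set of colours typically collapses to a single value, mirroring the homogeneous red or blue populations described in the quotation above.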

The basics of Adamatzky’s model

There is no escape from the fact that Adamatzky’s book (2005) is replete with mathematical modelling. We need to say something about how this model is structured, because it is concerned with crowds rather than with just an individual agent, and therefore Adamatzky’s approach is a departure from the mainstream of emotion modelling in computer science. In Adamatzky’s model, each individual agent (a mental entity) is represented by a finite automaton. Finite-state automata, or finite-state machines, are a very familiar abstraction from computer science. One typically visualises a finite-state automaton as a graph of circles (for its states) and arcs (for its next-state function).

The environment Adamatzky’s agents inhabit is a planar grid, organised algebraically as a two-dimensional lattice. “The automata-agents do not interact with each other, no information is exchanged; however, an agent can sense the presence or absence of agents in the eight cells immediately surrounding the agent’s current site” (p. 181). In this crowded environment, under some conditions there will be “increasing uncertainty, cognitive load and goal interference, which may result in fear and anxiety, cognitive strain and frustration”, with irrational behaviour eventually emerging (ibid.).
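
The sensing rule just quoted, over the eight cells surrounding an agent’s site (a Moore neighbourhood), can be sketched in a few lines. This is a sketch only: the dictionary-based grid representation is an assumption of this illustration, not Adamatzky’s data structure:

```python
def sense_neighbours(grid, x, y):
    """Count occupied cells among the eight sites surrounding (x, y).

    `grid` is a dict mapping (x, y) coordinates to agents; absent
    keys are empty sites, so the lattice need not be bounded.
    """
    offsets = [(dx, dy) for dx in (-1, 0, 1) for dy in (-1, 0, 1)
               if (dx, dy) != (0, 0)]
    return sum((x + dx, y + dy) in grid for dx, dy in offsets)

# A toy crowd: three agents clustered around the origin.
grid = {(0, 0): "a1", (0, 1): "a2", (1, 1): "a3"}
assert sense_neighbours(grid, 0, 0) == 2   # a2 and a3 are adjacent
assert sense_neighbours(grid, 2, 2) == 1   # only a3 within reach
```

A crowding measure of this kind is the sort of local information that, under the conditions Adamatzky describes, feeds the rising uncertainty and cognitive load.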

Special mathematical structures are introduced, at the simpler level of treatment, for states of mind: affectons and doxatons, respectively for the dynamics of affects and beliefs. “Affecton is a finite automaton whose internal states, input and output states, are happiness, anger, fear, sadness, confusion and anxiety” (ibid., p. 227). “Doxaton is a finite automaton whose internal states, input and output states, are knowledge, doubt, delusion, misbelief and ignorance” (ibid., p. 228), where delusion is defined as “an erroneous belief that cannot be justified” (ibid.); “Misbelief is a wrong belief” (ibid.); “Knowledge is a justified belief” (ibid.); and “Doubt is a state of neither belief nor misbelief about a locally justifiable fact” (ibid.). “Doxastic world is an element of a set of all possible sets of doxastic states” (ibid., p. 228), where “Doxastic state is an agent or automaton state derived from belief, which includes” the five kinds enumerated under doxaton (ibid.).
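
In other words, an affecton is an ordinary finite-state machine whose internal states, inputs and outputs are all drawn from the same six-element affective set. A minimal sketch follows; the particular transition rules are invented purely for illustration (Adamatzky derives his actual tables from the model’s dynamics):

```python
AFFECTS = {"happiness", "anger", "fear", "sadness", "confusion", "anxiety"}

# Hypothetical next-state rules: (internal state, input state) -> next state.
# Pairs not listed leave the internal state unchanged.
TRANSITIONS = {
    ("happiness", "anger"): "confusion",
    ("happiness", "fear"): "anxiety",
    ("anxiety", "happiness"): "happiness",
    ("confusion", "sadness"): "sadness",
}

def affecton_step(state, incoming):
    """One update of an affecton: both arguments and the result are
    drawn from the same six-element affective state set."""
    assert state in AFFECTS and incoming in AFFECTS
    return TRANSITIONS.get((state, incoming), state)

assert affecton_step("happiness", "fear") == "anxiety"
assert affecton_step("sadness", "sadness") == "sadness"  # default: no change
```

A doxaton has exactly the same shape, with the five doxastic states in place of the six affective ones.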

Domination relations among doxastic states lead to a state being extinguished (ibid., p. 149), under an assumption of equiprobability of the five doxastic states at the beginning of development. “Doxastic chemistry is an abstract system comprising doxastic states which interact with each other by chemistry-like rules” (ibid., p. 228). Adamatzky has applied, to collectives of abstract agents far from mental equilibrium, discrete models from cellular automata and artificial chemistry. Methods also include lattice swarms, algebra, finite automata and Markov chains, and differential equations.
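
The “chemistry-like rules” can be conveyed as collision rules applied to a well-mixed population. The particular domination rules below are invented for illustration (Adamatzky’s book specifies the actual reaction set); they serve only to show how domination gradually extinguishes a state from an initially equiprobable mixture:

```python
import random
from collections import Counter

DOXASTIC = ["knowledge", "doubt", "delusion", "misbelief", "ignorance"]

# Hypothetical domination rules: when the two states on the left meet,
# both sites take the dominant state, so the dominated one dies out.
DOMINATES = {
    ("knowledge", "doubt"): "knowledge",
    ("delusion", "ignorance"): "delusion",
}

def react(a, b):
    """Chemistry-like collision of two doxastic states."""
    winner = DOMINATES.get((a, b)) or DOMINATES.get((b, a))
    return (winner, winner) if winner else (a, b)

rng = random.Random(1)
# Equiprobable doxastic states at the beginning of development.
pop = [rng.choice(DOXASTIC) for _ in range(500)]
for _ in range(20000):
    i, j = rng.sample(range(len(pop)), 2)
    pop[i], pop[j] = react(pop[i], pop[j])

counts = Counter(pop)
```

Under these toy rules, “doubt” and “ignorance” can only lose members, so after enough collisions they are typically extinguished.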

The formation of a crowd, according to Adamatzky’s models, causes fusion of the minds of individuals into one collective mind, while individual members lose their individuality within the crowd. In this, he admittedly “follow[s] the idea of ‘pre-experimentalists’ of group mind theory, […] who explained unique features of crowds by three processes”, namely deindividuation, contagion, and suggestibility (ibid., p. 6, based on Turner 1987). Effects include patterns of emotional, impulsive and irrational behaviour. There are perceptual distortions and hyper-responsiveness, as well as self-catalytic activities.

The emotions of individuals and computation: verbal or otherwise auditory

In the early 1970s, the late Maria Nowakowska developed a motivational calculus (Nowakowska 1973b, 1984, vol 1, Ch. 6), and a formal theory of actions (Nowakowska 1973a, 1973b, 1976, 1978), whose definitive treatment was in Nowakowska (1984, vol 2, Ch. 9). She also developed a formal theory of dialogues (Nowakowska 1976, 1984, vol 2, Ch. 7), and a theory of multimedia units for verbal and nonverbal communication (Nowakowska 1986, Ch. 3). In “Theories of Dialogues”, Chapter 7 of her Theories of Research, Nowakowska (1984) devoted Sect. 5 to a mathematical model of the “emotional dynamics of a dialogue”, and Sect. 5.3 to a formalisation of a “provocation threshold”. This is relevant, and potentially useful, for present-day conversational models involving emotions, in the design of computer interfaces, or of software supporting the interaction among a group of human users.
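
The flavour of a provocation threshold can be conveyed by a toy model. This is emphatically not Nowakowska’s formalisation, only an illustration of the idea; the accumulation rule, the decay factor, and the threshold value are all arbitrary assumptions of this sketch:

```python
def dialogue_regime(provocations, threshold=3.0, decay=0.5):
    """Toy emotional dynamics of a dialogue: provocation accumulates,
    decays between turns, and past the threshold the regime flips.

    `provocations` is a sequence of per-utterance provocation values.
    Returns the regime ('cooperative' or 'hostile') after each turn.
    """
    level, regimes = 0.0, []
    for p in provocations:
        level = max(0.0, level * decay + p)
        regimes.append("hostile" if level > threshold else "cooperative")
    return regimes

# Mild remarks keep the dialogue cooperative; a burst of provocation
# pushes the accumulated level over the threshold.
assert dialogue_regime([1, 1, 1]) == ["cooperative"] * 3
assert dialogue_regime([1, 2, 4])[-1] == "hostile"
```

A conversational interface could use a regime flip of this kind to switch response-generation strategies, which is the sense in which the threshold is “potentially useful” for present-day conversational models.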

In natural-language processing (NLP), the PARRY programme (Colby 1975, 1981) embodies in its response mechanism a model of symptoms of paranoia. PARRY used to run in conversational mode, taking as input sentences from a human interviewer. It impersonates a person experiencing negative emotions or emotional states, the latter being represented by numerical variables for “anger”, “fear”, and “mistrust”.
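
The role of those numerical variables can be sketched as follows. The utterance classification, the increments, and the clamping to [0, 1] are invented for this illustration; Colby’s actual rules were far more elaborate:

```python
def update_affect(state, utterance_kind):
    """Toy PARRY-style update: 'anger', 'fear' and 'mistrust' are
    numeric variables nudged by a crude classification of the input."""
    deltas = {
        "insult":  {"anger": 0.3, "mistrust": 0.1},
        "threat":  {"fear": 0.4, "mistrust": 0.2},
        "probing": {"mistrust": 0.2},
        "neutral": {"anger": -0.1, "fear": -0.1},
    }
    for var, d in deltas.get(utterance_kind, {}).items():
        state[var] = min(1.0, max(0.0, state[var] + d))
    return state

state = {"anger": 0.0, "fear": 0.0, "mistrust": 0.3}
for kind in ["probing", "insult", "insult", "threat"]:
    state = update_affect(state, kind)

# High mistrust and anger would now bias the choice of paranoid replies.
assert state["mistrust"] > 0.3 and state["anger"] > 0.0
```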

A much more sophisticated NLP programme than PARRY, namely BORIS, was developed by Michael Dyer (1983a), an author who afterwards made significant contributions to connectionist NLP, as well as to the emerging paradigm of “artificial life”. BORIS—which by now must be put into a historical context, but still offers important insights—detects or conjectures characters’ affects when processing narrative textual accounts. Reasoning is carried out according to this kind of information. For example, a character is likely to be upset because of a plan failure (which to BORIS heightens arousal: possibly anger, though not specifically frustration).

Characters’ plan failures within a plot are the central concept employed by BORIS for understanding the narrative. Such NLP processing requirements were the only criterion when designing the representation and treatment of affects in BORIS (Dyer 1983a, p. 130). “BORIS is designed only to understand the conceptual significance of affective reactions on the part of narrative characters. To do so BORIS employs a representational system which shares AFFECTs to one another through decomposition and shared inferences” (ibid.). For example, somebody who has just been fired may go home and kick his dog; the former event explains the latter (p. 131). Admittedly, there was no intent to model emotions or emotional states as such. Also see Dyer (1983b, 1987) on the affect in narratives, or on computer models of emotions.

Apart from BORIS, the treatment of emotion in OSCAR deserves mention. This programme, described by John Pollock in his book How to Build a Person (1989), embodies a partial model of human cognitive states and emotions. Besides, Faught (1978) described “a model based on conversational action patterns to describe and predict speech acts in natural language dialogues and to specify appropriate actions to satisfy the system’s goals” (p. 383). Faught credits Izard’s (1971) differential emotion theory (cf. Izard 1977, 1982) and his own (Faught 1975) “extension of it into affect as motivation for other thought processing” (Faught 1978, p. 387).

A model of artificial emotions was proposed by Camurri and Ferrentino (1999), who argued for its inclusion in multimedia systems with multimodal adaptive user-interaction. Their applications are to dance and music. In its simplest form, to Camurri and Ferrentino, an artificial agent’s “emotional state is a point in space, which moves in accordance with the stimuli (carrots and sticks) from the inside and the outside of the agent” (p. 35). The agent is robotic, and its movements (for choreography) are detected. The stimuli change the agent’s affective “character”, which is a point in a space of two dimensions (p. 38).

The two axes represent the degree of affection of the agent towards itself and towards others, respectively. We call these two axes “Ego” and “Nos”, from the Latin words [for] “I” and “We”. A point placed in the positive x (Ego)-axis represents an agent whose character has a good disposition towards itself. A point towards the left (negative) Ego would mean an agent fairly discouraged about itself. The emotion space is usually partitioned into regions […] labelled by the kind of character the agent simulates.
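
A minimal sketch of this two-dimensional character space follows. The displacement-vector representation of stimuli, the clamping to a unit square, and the particular region labels are all assumptions of this illustration, not details from Camurri and Ferrentino’s paper:

```python
def apply_stimulus(point, stimulus):
    """Move the character point in the two-dimensional (Ego, Nos)
    space; 'carrots' and 'sticks' are signed displacement vectors."""
    ego, nos = point
    d_ego, d_nos = stimulus
    clamp = lambda v: max(-1.0, min(1.0, v))  # bounds assumed here
    return (clamp(ego + d_ego), clamp(nos + d_nos))

def character_region(point):
    """Label the quadrant the point falls in, in the spirit of
    partitioning the emotion space into labelled regions."""
    ego, nos = point
    if ego >= 0 and nos >= 0:
        return "confident and sociable"
    if ego >= 0:
        return "confident but withdrawn"
    if nos >= 0:
        return "discouraged but sociable"
    return "discouraged and withdrawn"

p = (0.2, -0.1)                     # mildly self-confident, slightly asocial
p = apply_stimulus(p, (-0.5, 0.3))  # a "stick" to the ego, a social "carrot"
assert character_region(p) == "discouraged but sociable"
```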

The emotions of individuals, and computation, I: theoretical works

Such computational models eschew the embodiment of the agent, and embodiment is essential to robotics. Emotions in robots were envisaged in fiction, starting with Karel Čapek’s play R.U.R.: Rossum’s Universal Robots (Čapek 1921). One step forward, also in respect of embodiment, came with wearable computers, which, worn on the user’s body, would estimate his or her emotions and respond accordingly. In 1997, the MIT Press published a book which was going to be influential, namely the already mentioned Affective Computing by Rosalind W. Picard, reviews of which included Sloman (1999) and Nissan (1999).

Different classes of emotions arise in Sloman’s CogAff architecture (with “no specific component whose function is to produce emotions”) “as emergent properties of interactions between components that are there for other reasons” (Sloman 2000, p. 7). Some goals—and goal conflict resolution—are indeed affected by emotion-related obligations, such as “the decision whether to help granny or go to the marvellous concert”: which, Sloman points out, is a kind of handling of competing motives that may take place in a part of an architecture different from where another kind of goal conflict is solved, such as “whether to continue uttering the current unfinished sentence or to stop and take a breath”, or whether “to use placatory or abusive vocabulary when addressing someone who has angered you” (ibid., pp. 6–7).

In the Taylor and Francis journal Cybernetics and Systems, issue 32(5) (July 2001) was devoted to the modelling of emotions from a computational perspective. Arzi-Gonczarowski (2000) applied mathematical category theory to AI modelling of perceptual, cognitive and affective processes. See also, e.g., Arzi-Gonczarowski (1999). In her approach, “[e]motive reactions are part of the definition of perceptions, […] hence perceptual states are also affective states” (Arzi-Gonczarowski 2000, p. 12). For example, she is able to describe mathematically a situation with mixed feelings (2000, p. 18).

The Ortony–Clore–Collins (OCC) cognitive model of emotions has often been applied in computer models of emotion. “Ortony et al. (1988) wrote that they did not think it was important for machines to have emotions; however, they believed AI systems must be able to reason about emotions, especially for natural language understanding, cooperative problem solving, and planning. Some structure was needed so computers could begin to represent the thicket of concepts considered to be emotions” (Picard 1997, p. 195). The OCC model “group[s] emotions according to cognitive eliciting conditions. In particular, it assumes that emotions arise from valenced (positive or negative) reactions to situations consisting of events, agents, and objects. With this structure, Ortony, Clore and Collins outlined specifications for 22 emotion types […]. Additionally they included a rule-based system for the generation of these emotion types” (Picard, ibid., p. 196).

Valenced reactions may be to:

  • consequences of events (the reaction being: pleased, displeased, etc.), or to

  • actions of agents (the reaction being: approving, disapproving, etc.), or to

  • aspects of objects (the reaction being: liking, disliking, etc.).

Valenced reactions to aspects of objects give rise to attraction emotions: love or hate.

Valenced reactions to consequences of events may focus on consequences for the other, or consequences for the self. Consequences for the other are either desirable or undesirable for the other, and in both cases valenced reactions to consequences for the other give rise to the fortunes-of-others emotions: happy-for or resentment, if the consequences of events are desirable for the other; gloating or pity, if the consequences of events are undesirable for the other.

Valenced reactions to consequences of events, if focussing on consequences for the self, are differentiated according to whether prospects are relevant or irrelevant. If prospects are irrelevant, the emotions arising are ones of well-being: joy, or distress.

If, by contrast, prospects are relevant, emotions which arise are prospect-based, and include hope or fear, which may be confirmed or disconfirmed.

  • Emotion arising if hope is confirmed is satisfaction.

  • Emotion arising if fear is confirmed is: fears-confirmed.

  • Emotion arising if hope is disconfirmed is disappointment.

  • Emotion arising if fear is disconfirmed is relief.

Valenced reaction to actions of agents may focus on the self agent, or on the other agent. Emotions arising are attribution emotions.

  • For focus on the self agent, they include: pride, or shame.

  • For focus on the other agent, they include: admiration, or reproach.

Well-being emotions together with attribution emotions give rise to well-being/attribution compounds. These include:

  • gratification or remorse (if the focus is on the self-agent), and

  • gratitude or anger (if the focus is on the other agent).

A schema showing these relations appears in Fig. 2.1 in Ortony et al. (1988) and in Fig. 7.1 on p. 197 in Picard (1997).
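
The branching just enumerated lends itself to a straightforward rule-based encoding. The sketch below covers the fragment of the OCC tree described above; the function signature and parameter names are this sketch’s convention, not the OCC authors’:

```python
def occ_emotion(reaction, focus=None, desirable_for_other=None,
                prospect_relevant=False, confirmed=None, valence="positive"):
    """Map a valenced reaction to one OCC emotion type (fragment)."""
    pos = valence == "positive"
    if reaction == "object":                      # aspects of objects
        return "love" if pos else "hate"
    if reaction == "agent":                       # actions of agents
        if focus == "self":
            return "pride" if pos else "shame"
        return "admiration" if pos else "reproach"
    # reaction == "event": consequences of events
    if focus == "other":                          # fortunes of others
        if desirable_for_other:
            return "happy-for" if pos else "resentment"
        return "gloating" if pos else "pity"
    if not prospect_relevant:                     # well-being emotions
        return "joy" if pos else "distress"
    if confirmed is None:                         # prospect-based
        return "hope" if pos else "fear"
    if pos:
        return "satisfaction" if confirmed else "disappointment"
    return "fears-confirmed" if confirmed else "relief"

assert occ_emotion("object", valence="negative") == "hate"
assert occ_emotion("event", focus="other", desirable_for_other=False) == "gloating"
assert occ_emotion("event", focus="self", prospect_relevant=True,
                   confirmed=True, valence="negative") == "fears-confirmed"
```

The well-being/attribution compounds (gratification, remorse, gratitude, anger) would be formed by combining the outputs of the event and agent branches.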

The emotions of individuals, and computation, II: visual as embodied

Facial displays of emotions are recognised across human cultures. Yet there is a further level at which culture intervenes: as shown by Jean-Jacques Courtine and Haroche (1988) in Histoire du visage, expressing one’s emotions, or refraining from doing so, including through the face, is deeply shaped by culture. Apart from animatronic reproductions of human emotions (or, more in general, of facial displays in humans or other mammals; see Nissan et al. 2008), one should also consider, within computer science, the field of automated recognition of emotional displays in human faces. This is an area within face recognition, itself a subdiscipline of machine vision. Bear in mind that some animatronic heads with human features have cameras inside their eyes, which perceive the human face of the interlocutor, so that the facial expression can be recognised and an appropriate response given.

London-based Kearney and McKenzie (1993) developed an expert system which, by using a rule set, could interpret facial expressions in terms of the emotions they display. It was an unassuming project; while interesting, it had no sequel. At the Machine Perception Laboratory of the University of California, San Diego, a project has been developed for the fully automatic recognition of expressions of basic emotion. The goal of automated recognition of human facial expressions in the research from San Diego is to enhance human–computer interaction, and to serve as a step towards “social robots”. Some of the team’s papers discuss an application to humanoid robots. The laboratory’s publications can be downloaded from its website (http://mplab.ucsd.edu/publications/publications.html).

Let us turn to emotions in computer animation. In Kalra et al. (1991), a realistic facial animation model in three dimensions is described. Among other things, the problem of synchronisation between speech, emotions and eye motion is addressed (ibid., p. 97 ff). The solution involves several layers: abstract muscles; minimal perceptible actions; phonemes and expressions; words and emotions; and then synchronisation. “Based on Ekman’s work on facial expressions (Ekman and Friesen 1975, 1986; Ekman 1980, 1992), several primary expressions may be classified: surprise, fear, disgust, anger, happiness, sadness ([…]). Basic expressions and variants may be easi[ly] defined using snapshots” (ibid., p. 101).

Such facial snapshots are entirely synthetic, not photographs. Snapshots mixed together may be made to account, e.g., for the mouth region and the eyes. As shown in that paper’s figures (Kalra et al. 1991, pp. 100–102), facial expressions of a character, Marilyn [Monroe], are effectively simulated. Those figures depict static images. The software, instead, produces realistically animated scenes of motion, featuring the “synthetic actors” (Magnenat Thalmann and Thalmann 1991, 1996, 2001).

“An emotion is defined as the evolution of the human face over time: it is a sequence of expressions with various duration and intensities” (Kalra et al. 1991, p. 102). Emotions are parametrised by resorting to the concept of “generic emotion”. “An emotion has a specific average duration, but it is context-sensitive. For example, a smile may have a 5–6 s duration, but it may last 30 s in the case of a laughable situation. It is also important to note that the duration of each stage of the emotion is not equally sensitive to the time expansion” (ibid., p. 103). Statistical distribution is resorted to. “Once a generic emotion is introduced in the emotion dictionary, it is easy to produce an instance by specifying its duration and its magnitude” (ibid., p. 103). A scheduling language is used in the synchronisation mechanism. The system is part of a global project (Magnenat Thalmann and Thalmann 1991, 1996) that accounts also for local body- and cloth-deformation, gait, and so forth; for example, a very effective image of successive stages of motion appeared in 1993 on the cover of issue 4(3) of the journal edited by the Thalmanns, The Journal of Visualization and Computer Animation.
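
The notion of instantiating a generic emotion by specifying a duration and a magnitude, with stages that are unequally sensitive to time expansion, can be sketched as follows. The stage tuples and the stretch rule are assumptions of this illustration, not Kalra et al.’s actual parametrisation:

```python
def instantiate_emotion(stages, duration, magnitude):
    """Instantiate a 'generic emotion' from a dictionary entry.

    `stages` is a list of (name, base_duration, stretchable) tuples:
    when the instance lasts longer than the generic emotion, only the
    stretchable stages absorb the extra time, reflecting that each
    stage is not equally sensitive to time expansion.  Intensities
    are scaled uniformly by `magnitude`.
    """
    base = sum(d for _, d, _ in stages)
    stretch_base = sum(d for _, d, s in stages if s)
    extra = duration - base
    out = []
    for name, d, stretchable in stages:
        d_new = d + extra * d / stretch_base if stretchable else d
        out.append((name, d_new, magnitude))
    return out

# A generic smile: quick onset, stretchable apex, quick offset.
smile = [("onset", 1.0, False), ("apex", 3.0, True), ("offset", 1.0, False)]
long_smile = instantiate_emotion(smile, duration=30.0, magnitude=0.8)
# The 25 extra seconds all go into the apex; onset and offset keep
# their original durations.
assert long_smile[0][1] == 1.0 and long_smile[2][1] == 1.0
assert long_smile[1][1] == 28.0
```

This mirrors the cited example of a smile whose nominal 5–6 s duration stretches to 30 s in a laughable situation.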

The images, for all their poignancy and realistic rendition of the physics and anatomy of the human body, as well as of facial expressions, arguably are strongly culture-specific. This refers both to certain stylisation conventions of the image (e.g., the background), and, much more importantly, to the content: because of the very selection of the particular characters, Marilyn [Monroe] and Elvis [Presley]; because of their dated hairstyle and garb; because of the kind of glamour represented in the scenes (or symbolised by the dress); because of associated intertextual references (the simulation of a flying shirt); and because of the woman’s gait, which is not universal across cultures. Moreover, lip motion is simulated, and this reflects the language in which the utterances are formulated. The rendition of agents’ emotions in computer-graphic animation is also dealt with by Costa and Feijo (1996).

The goal of providing animated life-like characters, believable embodied agents, in computer interfaces (as embodied conversational characters) or virtual environments has quite conspicuously motivated increased research into the simulation of emotion within computer science. For example, a paper collection edited by Prendinger and Ishizuka (2004) is devoted to tools, affective functions, and applications in relation to life-like characters. Another paper collection on embodied conversational characters is Cassell et al. (2000).

Emotions in their narrative context

In Michael Dyer’s BORIS system (Dyer 1983a), a vast gamut of knowledge structures (ibid., p. 171) included AFFECTs (ibid., Chapter 4: pp. 105–139; and pp. 341–343), ACEs, i.e. Affects as Consequence of Empathy (ibid., pp. 120–124, 348–349), and interpersonal themes, actions, relationships and roles (ibid., Chapter 10 and Sect. 4.5). Also see Dyer (1987, 1983b) on affect in the automated understanding of narratives, or on (early) computer models of emotions. For artificial intelligence models of emotion from the 1980s, see also Pfeifer (1988). Characters’ plan failures within a plot are the central concept resorted to by BORIS in order to understand an input narrative.

Such natural-language processing requirements admittedly were the only criterion when designing the representation and treatment of affects in BORIS: see Sect. 6. In particular, consider Dyer’s example of an employee having lost his job and then kicking his dog in anger. This contributes one building block towards the much more complex task of modelling scapegoating.

In his entry for ‘Emotion’ in the online Stanford Encyclopedia of Philosophy, Ronald de Sousa (2003) has pointed out the relation between emotions and their stories:

Some philosophers suggest that the directive power which emotions exert over perception is partly a function of their essentially dramatic or narrative structure (Rorty 1988). It seems conceptually incoherent to suppose that one could have an emotion, say an intense jealousy or a consuming rage, for only a fraction of a second (Wollheim 1999). One explanation of this feature of emotions is that a story plays itself out during the course of each emotional episode, and stories take place over stretches of time. de Sousa (1987) has suggested that the stories characteristic of different emotions are learned by association with “paradigm scenarios”. These are drawn first from our daily life as small children and later reinforced by the stories, art, and culture to which we are exposed. Later still, they are supplemented and refined by literature. Paradigm scenarios involve two aspects: first, a situation type providing the characteristic objects of the specific emotion-type (where objects can be of the various sorts mentioned above), and second, a set of characteristic or “normal” responses to the situation, where normality is first a biological matter and then very quickly becomes a cultural one. Once our emotional repertoire is established, we interpret various situations we are faced with through the lens of different paradigm scenarios. When a particular scenario suggests itself as an interpretation, it arranges or rearranges our perceptual, cognitive, and inferential dispositions.

A problem with this idea is that each emotion is appropriate to its paradigm scenario by definition, since it is the paradigm scenario which in effect calibrates the emotional repertoire. It is not clear whether this places unreasonable limitations on the range of possible criticism to which emotions give rise. What is certain is that when a paradigm scenario is evoked by a novel situation, the resulting emotion may or may not be appropriate to the situation that triggers it. In that sense at least, then, emotions can be assessed for rationality.

Computational consciousness? Concerning the philosophical background

There is, within computer science, a direction of research called artificial consciousness. See, e.g., Harvey (2002), Holland and Goodman (2003). I am sceptical about what it can achieve. Some scholars, in philosophy and also in robotics, maintain that consciousness is amenable to a mechanistic conception. See in Chalmers (2003) “an overview of issues concerning the metaphysics of consciousness” (ibid., p. 135, note 1). “Consciousness fits uneasily into our conception of the natural world. On the most common conception of nature, the natural world is the physical world. But on the most common conception of consciousness, it is not easy to see that it could be part of the physical world” (ibid., p. 102). Among proponents of materialist (or physicalist) theories of mind (I do not endorse these), philosopher Daniel Dennett (of Tufts University), the author of Consciousness Explained (Dennett 1991, 1996), is well known. See Bo Dahlbom’s edited volume (1993), for an exchange between Dennett and his critics ranging over his whole corpus, yet with special attention being given to his work Consciousness Explained. “Daniel Dennett (1978) found the ‘intentional stance’ lurking in artificial intelligence” (Bickle 2003, p. 346). “Dennett (1978, 1987) has been particularly concerned to deny that beliefs and desires are causally active inner states of people, and maintains instead that belief-ascriptions and desire-ascriptions are merely calculational devices, which happen to have predictive usefulness […]” (Lycan 2003, pp. 58–59).

A few considerations about published forums

Minds and Machines is a journal that has made it its vocation to bridge between the philosophy of mind and computational models. Yet, its focus is on the individual. Adamatzky’s book is an opportunity for importing the computational modelling of qualitative phenomena of social human behaviour into the realm of non-quantitative research in the human sciences, or of research into the human mind.

It must be said that Adamatzky’s book has not come out of the blue. There is an area of computational modelling that goes by the far too ambitious name of artificial life, and that is itself the offspring of artificial intelligence. The artificial life movement (e.g., Sims 1994), whose journal, Artificial Life, was established in 1994, is one of a number of research communities that have made it their concern to develop simulations of behaviour within populations. There also exists a Journal of Artificial Societies and Social Simulation, established in 1997 and published online (http://www.soc.surrey.ac.uk/JASSS).

Yet, such models typically had scant regard for the individual agent. It was only with Adamatzky’s work that agents inside populations were endowed with cognitive and mood/affect primitives (affectons and doxatons) significant enough, however conventional and schematic, and that the dynamics of affects and beliefs was explored computationally and mathematically with sufficient rigour, for it no longer to be utopian to suggest to scholars in the human sciences that this is the moment to start engaging and bringing in their own expertise, in order to try to bootstrap into existence such further models as could realistically aim at achieving standards by the norms of the human sciences. It would make sense for a team, or group of teams, to devise such modelling for one class of phenomena at a time. Once a pool of such next-generation models becomes available, they could be tied back together as tools in a toolbox.

Suggestions for research into computational models of emotional contagion: lessons to be learned from models of distributed stigmergetic control

I tentatively propose that, in order to devise computational models of emotional contagion, possibly of such crowd dynamics as results in mass hysteria, useful concepts can be derived from operations research techniques also used in computational intelligence: ant colony algorithms, also known as distributed stigmergetic control. These are based on the metaphor of a colony of ants (or termites), whose individual members follow a route once they detect the olfactory trace of chemicals (pheromones) released by companions that have already passed along that route.

In the case of the flaring of mass emotions, instead of chemical messages and the resulting locomotion through space (as with ants and termites), the outcome is entertained beliefs and adopted behaviour, while the messages consist of communicated propositional content (utterances at rallies or in broadcasts, or printed texts) and of observed behaviour (such as looters on the rampage). Because of pheromone-following behaviour, ants passing the same points more frequently will lay down a denser pheromone trail there. Ant colony optimisation has been applied to the travelling salesman problem and other optimisation problems (Maniezzo and Carbonaro 2001; Maniezzo and Roffilli 2008; Dorigo et al. 1996; Dorigo and Gambardella 1997; Di Caro and Dorigo 1998; Dorigo and Di Caro 1999; Marshall et al. 2003; and in textbook form in Engelbrecht 2002, Chapter 17: “Ant Colony Optimization”, pp. 199–208).
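To make the stigmergetic metaphor concrete, the following is a minimal ant-colony-optimisation sketch for a tiny travelling-salesman instance, in the general style of Dorigo et al. (1996). The function name, parameter values, and simplifications (no elitism, no local search) are my own: ants build tours stochastically, biased by pheromone and inverse distance; pheromone evaporates and is reinforced on the edges of each completed tour, so frequently travelled edges accumulate a denser trail.

```python
import random

def ant_colony_tsp(dist, n_ants=10, n_iters=50, alpha=1.0, beta=2.0,
                   rho=0.5, q=1.0, seed=0):
    """Minimal ant colony optimisation for a symmetric TSP instance."""
    rng = random.Random(seed)
    n = len(dist)
    tau = [[1.0] * n for _ in range(n)]  # pheromone on each edge

    def tour_length(tour):
        return sum(dist[tour[i]][tour[(i + 1) % n]] for i in range(n))

    best_tour, best_len = None, float("inf")
    for _ in range(n_iters):
        tours = []
        for _ in range(n_ants):
            start = rng.randrange(n)
            tour, unvisited = [start], set(range(n)) - {start}
            while unvisited:
                i = tour[-1]
                # transition weight: pheromone ** alpha * (1/distance) ** beta
                weights = [(j, (tau[i][j] ** alpha) * ((1.0 / dist[i][j]) ** beta))
                           for j in unvisited]
                r, acc = rng.random() * sum(w for _, w in weights), 0.0
                for j, w in weights:           # roulette-wheel selection
                    acc += w
                    if acc >= r:
                        break
                tour.append(j)
                unvisited.remove(j)
            tours.append((tour, tour_length(tour)))

        # Evaporation, then deposit proportional to tour quality (stigmergy).
        for i in range(n):
            for j in range(n):
                tau[i][j] *= (1.0 - rho)
        for tour, length in tours:
            for i in range(n):
                a, b = tour[i], tour[(i + 1) % n]
                tau[a][b] += q / length
                tau[b][a] += q / length

        it_best = min(tours, key=lambda t: t[1])
        if it_best[1] < best_len:
            best_tour, best_len = it_best

    return best_tour, best_len

# Four cities on the corners of a unit square: the optimal tour is the
# perimeter, of length 4.
s = 2 ** 0.5
dist = [[0, 1, s, 1],
        [1, 0, 1, s],
        [s, 1, 0, 1],
        [1, s, 1, 0]]
tour, length = ant_colony_tsp(dist)
```

The analogy suggested above would replace the distance matrix with the attractiveness of beliefs or behaviours, and pheromone deposition with communicated content and observed behaviour; the sketch shows only the reinforcement loop that both share.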

Human beings within societies are more specialised than ants, for sure, but stereotypes of behaviour by human groups based on age, or perhaps profession, have also been investigated professionally. Poor cooking by a little child would not be judged as harshly as failure on the part of an experienced cook, or even of a person whom most would expect to be able to cook; this was reflected in SKILL, a simple project I supervised (Fakher-Eldeen et al. 1993), which is somewhat related to the more complex ALIBI model of seeking exoneration (e.g., Kuflik et al. 1989; Nissan and Rousseau 1997; Nissan and Dragoni 2000; Nissan and Martino 2004, pp. 200–206). Kunda and Thagard (1996) presented a more sophisticated model for evaluating expectations based on stereotypes:

The model simulates the effect of stereotypical information on a concept, in this case the descriptor “aggressive”. Kunda and Thagard hypothesised that individuals are more likely to expect a stereotypical construction worker to punch someone and a lawyer to argue with someone, given that both targets are labelled “aggressive” (Kennedy and Eberhart 2001, p. 274).

A computational model—of a kind sometimes called an adaptive culture model (ACM) (Axelrod 1997; Kennedy and Eberhart 2001, pp. 263–283)—for the spreading of beliefs or attitudes (as well as tangible phenomena) through a population by neighbours’ contiguity, was applied by Kunda and Thagard to a population of artificial agents; 40 trials were carried out, and all of them resulted in the population converging on the stereotypical expectation that a lawyer would rather argue, and a construction worker would rather punch.

Such population dynamics allow important insights into the cognitive operations involved. Part of the definition, indeed, of a stereotype is that it is a belief shared by a group about members of another group. ACM shows the development of stereotyped thinking as it spreads through a population. A set of commonly held beliefs is arranged in various ways until the best explanations are found. The search is shared by the population, and the successful results spread to all members. (Kennedy and Eberhart 2001, p. 279).
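The population-level convergence that Kennedy and Eberhart describe can be conveyed by a toy sketch. The ring topology, scoring rule, and labels below are illustrative assumptions of mine, not Kunda and Thagard’s actual model: agents each hold one expectation about an “aggressive lawyer”, copy better-scoring neighbours, and the population converges on the stereotype-consistent belief.

```python
import random

def adaptive_culture_run(n=20, steps=20000, seed=1):
    """Toy adaptive-culture dynamics on a ring of agents.

    Each agent holds one expectation about an "aggressive lawyer":
    'argue' or 'punch'. At every step a random agent compares its
    belief with that of a random ring neighbour and copies the
    neighbour's belief when it scores at least as well. The
    stereotype-consistent expectation ('argue') scores higher, so it
    spreads through neighbour contiguity and takes over the population.
    """
    rng = random.Random(seed)
    score = {"argue": 0.6, "punch": 0.4}  # illustrative stereotype bias
    # start from an evenly split population
    beliefs = ["argue" if k < n // 2 else "punch" for k in range(n)]
    for _ in range(steps):
        i = rng.randrange(n)
        j = (i + rng.choice([-1, 1])) % n  # a random ring neighbour
        if score[beliefs[j]] >= score[beliefs[i]]:
            beliefs[i] = beliefs[j]
    return beliefs

final = adaptive_culture_run()
```

Kunda and Thagard’s own model was a parallel constraint-satisfaction network, and the ACM literature uses richer neighbourhoods and trait vectors; the sketch is meant only to show beliefs spreading by neighbours’ contiguity until the population converges, as in the 40 trials reported above.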

Concluding remarks

Let us consider, first of all, a fundamental question: what are computational models of the emotions good for? Answers can be given from more than one perspective. One is so that computer interfaces could respond in a more user-friendly manner. Another is so that the artificial intelligence capabilities of a tool would not be focussed only on logical reasoning, but would also take into account, in any application to human matters, what human beings would expect the reasoning and its outcome to be, considering such human emotions as would apply to the case at hand.

Yet another answer would be: in order to remedy a fundamental shortcoming in how the foundations of artificial intelligence have been laid; that is to say, for the sake of the integrity of the scholarly discipline per se. And then, one could consider such artificial intelligence tools whose very raison d’être is to test cognitive, or more broadly psychological, theories. Some such tools carry out simulations, with the proviso that not everything that happens is deterministic.

There is mutual benefit to be expected from the collaboration of scholars from the social sciences or social psychology with scholars who are working with such models, without the investigators from the social sciences or psychology having to get down to the computational technicalities involved in such models. Insights that are useful for making those models more credible are more likely to come from quarters, in both areas, that are open to such collaboration. If any reader is enticed to find out more, or even to consider engaging in the kinds of investigation suggested by this overview, then this article will have achieved its intent.