
1 Introduction

Nonverbal communication plays a major role in the expression and perception of nonlinguistic messages exchanged between people during negotiation [13]. Nonverbal messages in negotiation define the nature of the relationship between actors (e.g. [5]), provide a framework for interpreting communication (e.g. [7]), and guide decisions about how to proceed in the interaction (e.g. [3]). Nonverbal messages also influence negotiation outcomes [15] and how information is shared (e.g. [11, 19, 20]). Successfully understanding nonverbal messages can enhance communication between the parties and facilitate reaching an integrative agreement. However, understanding the meaning of negotiators' nonverbal behavior can be a difficult task even for humans. If automatic models could be built for this purpose, not only would computational agents designed for effective negotiation interactions benefit from this ability, but humans could take advantage of it as well.

This paper describes the results of our attempt at automatic interpretation of the meaning and functions of nonverbal cues in negotiation. We examine nonverbal cues in the interaction between two people when the participants have different modes of relational affect (Positive, Negative) or levels of involvement (Active, Passive). The dataset used in this paper consists of 180 individuals participating in a negotiation. The negotiators were prompted to negotiate with different levels of involvement and modes of relational affect by giving them different sets of instructions for approaching the negotiation (in terms of involvement and relational affect). This dataset thus provides a reliable, rich test bed for training prediction models that interpret nonverbal messages in negotiation.

The communication literature has previously paid attention to nonverbal cues and their different possible interpretations, but, to our knowledge, machine learning techniques have not been used before to map nonverbal behavior features to the affect and involvement model [1]. We assign meaning to the nonverbal cues based on the "involvement-affect" model in order to interpret negotiators' high-level goals. Our ultimate goal is to use our findings for the development of computational agents that can engage in negotiation with people. In what follows, we discuss how different features of nonverbal behavior in negotiation help us recognize involvement and relational affect automatically. We introduce the "involvement-affect" model as the theoretical framework we used for interpreting the nonverbal cues. We then describe our dataset and features and discuss the results of our machine learning experiments. The paper also outlines directions for future work.

2 Background

The communication literature has studied factors associated with the meaning of communicated messages in interaction and has made efforts to develop dimensions to represent them. Osgood's semantic differential is one example; it posits that the meaning behind communicated messages can be grouped into three factors: responsiveness (ranging from "active" to "passive"), evaluation (ranging from "good" to "bad"), and potency (ranging from "strong" to "weak") [17]. Research has shown that these factors are universal across cultures ([17, 21, 22]). Nonverbal behaviors such as body posture (torso position), hand movements, facial expressions, and vocal cues have been shown to reflect these dimensions of meaning ([8, 10]). These nonverbal cues can facilitate the exchange of information in negotiation if both the sender of the behavior and the observer share the meanings attached to them [9].

“Involvement-Affect” Dimensional Model of Relational Messages. We use the “involvement-affect” dimensional model of relational messages ([1, 18]) for our interpretation of the nonverbal cues in negotiation. According to Prager’s theorizing, nonverbal messages reflect two fundamental characteristics of a relationship: involvement and affect [1]. Different combinations of these dimensions produce different messages [1]. The involvement dimension captures the degree to which a person is engaged and involved, while the affect dimension reflects the extent to which a person experiences positive versus negative affect toward their counterpart. Nonverbal behaviors exhibited in conditions of high involvement are characterized as affiliative-intimate when accompanied by positive affect and as dominant-aggressive when combined with negative affect. In contrast, nonverbal cues in conditions of low involvement suggest social politeness when combined with positive affect and avoidance-withdrawal when accompanied by negative affect.

“Involvement-Affect” Model for Negotiations. The adaptation of the original “involvement-affect” model for negotiations proposes four negotiation approaches that convey a negotiator’s attitudes and motivation [20]. These four negotiation approaches are:

  1. Passive negotiation involvement: the extent to which negotiators are uninvolved

  2. Active negotiation involvement: the extent to which negotiators are involved

  3. Negative relational affect: the extent to which negotiators dislike their counterpart

  4. Positive relational affect: the extent to which negotiators like their counterpart

Figure 1 illustrates these four negotiation approaches along the dimensions of the original affect-involvement model. For example, a dominant behavior in this model is interpreted as Negative Affect and High Involvement, while a submissive behavior is interpreted as Positive Affect and Low Involvement. The data used for our work in this paper were collected based on the “Involvement-Affect” Model for Negotiations.

Fig. 1. Negotiation involvement-relational affect dimensions

3 Experimental Method

3.1 Dataset and Task

The dataset used consists of a total of 180 male students participating in a negotiation task. Only male participants were recruited because we did not want gender to influence the interpretation of the nonverbal behavior. The negotiation task is an adapted two-party version of the Towers Market negotiation simulation ([16, 24]) with two issues. The negotiations were videotaped. The negotiation scenario involved a baker and a liquor store owner negotiating the terms for sharing space in the Towers Market.

The manipulations of involvement and affect were incorporated into the role instructions. The manipulation reflected two levels of negotiation involvement (Active versus Passive) and two levels of relational affect (Positive versus Negative). This resulted in four different sets of instructions overall, which were provided to the participants (see Appendix A). The negotiators' performance and compliance with the instructions were tested and measured. To avoid the negotiators' manipulated nonverbal expressions influencing one another, confederates, blind to the hypotheses, were hired and trained to act as counterparts to the study participants.

3.2 Nonverbal Behavior Features

The coding scheme and description of nonverbal behaviors were adapted from prior research ([6, 12]). These nonverbal behaviors are multimodal and are defined in terms of vocal and visual features simultaneously. The behaviors of the negotiators were coded for mouth movement, posture, head movement, hand movement, and facial expression. Table 1 lists the categories of the nonverbal behavioral features coded.

Table 1. Nonverbal behavior features

3.3 Annotation of the Dataset

We manually coded the data with trained research assistants. The behavior categories were coded one at a time to reliably identify all the behaviors of interest. For example, all coders were first trained on the posture category (distinguishing whether participants were leaning back, leaning forward, or maintaining a neutral posture). After coding all sessions on posture, the research assistants were trained on another behavioral category. Coders recorded their observations using the Noldus Observer, a computer-based coding system that captures both the frequency and duration of nonverbal cues [14]. The Noldus software uses the frequency and duration codes to compute a score indicating the percentage of time a negotiator spent exhibiting a particular nonverbal expression. We used these scores as the feature values; thus, for each negotiation we had one value per feature representing the overall percentage of that behavior in the interaction.
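As a rough illustration, the sketch below shows how such percentage-of-time scores could be computed from coded onset/offset intervals. The actual scores in our dataset were produced by the Noldus Observer software; the event format and function below are hypothetical.

```python
from collections import defaultdict

def percentage_features(events, session_duration):
    """Compute the percentage of session time spent on each coded behavior.

    events: list of (behavior_name, onset_sec, offset_sec) tuples, as a manual
            coding pass might produce (hypothetical format).
    session_duration: total length of the negotiation in seconds.
    Returns a dict mapping behavior_name -> percentage of time observed.
    """
    totals = defaultdict(float)
    for behavior, onset, offset in events:
        totals[behavior] += max(0.0, offset - onset)
    return {b: 100.0 * t / session_duration for b, t in totals.items()}

# Illustrative example: one negotiation with two coded behaviors
events = [("forward.lean", 10.0, 55.0), ("open.smile", 20.0, 26.0),
          ("forward.lean", 120.0, 180.0)]
print(percentage_features(events, session_duration=600.0))
# {'forward.lean': 17.5, 'open.smile': 1.0}
```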

4 Prediction Models

For our analysis, we left out the negotiations that were missing some of the features, which left 138 negotiations. We built two separate models: one for determining involvement (active versus passive) and another for determining relational affect (positive versus negative).
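A minimal sketch of this preparation step, assuming the per-negotiation scores are stored in a tabular file; the file name and the "involvement" and "affect" column names are hypothetical.

```python
import pandas as pd

# Load the per-negotiation feature scores (hypothetical file and column names).
df = pd.read_csv("negotiation_features.csv")
df = df.dropna()  # drop negotiations with missing features

feature_cols = [c for c in df.columns if c not in ("involvement", "affect")]
X = df[feature_cols]                # one percentage-of-time score per feature
y_involvement = df["involvement"]   # "active" vs. "passive"
y_affect = df["affect"]             # "positive" vs. "negative"
```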

4.1 Affect Prediction and Feature Selection

In this task we decide whether the negotiator has positive or negative affect. Given the size of our dataset, we used 10-fold cross-validation for our prediction tasks. The prediction accuracy of a support vector machine (SVM) classifier with a polynomial kernel (cache size 250007, exp = 1.0) is compared with a Naïve Bayes classifier and two baseline prediction models: a Majority baseline, which assigns the most common class observed in the training dataset, and a Random baseline, which assigns the "positive" or "negative" label by chance. These results are shown in Table 2. The SVM classifier performed significantly better than the other models (p-value of a one-way ANOVA on the 10-fold accuracies < 0.05).
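The sketch below reproduces the structure of this comparison with scikit-learn (the original experiments were run with different tooling, so the classifiers are approximate stand-ins); X and y_affect are assumed to come from the data-preparation sketch above.

```python
from sklearn.svm import SVC
from sklearn.naive_bayes import GaussianNB
from sklearn.dummy import DummyClassifier
from sklearn.model_selection import cross_val_score
from scipy.stats import f_oneway

models = {
    "svm_poly": SVC(kernel="poly", degree=1),           # roughly mirrors exp = 1.0
    "naive_bayes": GaussianNB(),
    "majority": DummyClassifier(strategy="most_frequent"),
    "random": DummyClassifier(strategy="uniform", random_state=0),
}

# Stratified 10-fold cross-validation accuracy for each model
fold_scores = {name: cross_val_score(m, X, y_affect, cv=10)
               for name, m in models.items()}
for name, scores in fold_scores.items():
    print(f"{name}: mean accuracy = {scores.mean():.3f}")

# One-way ANOVA over the per-fold accuracies of the four models
print(f_oneway(*fold_scores.values()))
```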

Table 2. Comparison of the performance of prediction models for affect-involvement model

Feature selection for affect prediction with the Information Gain Attribute Evaluation and Ranker search method (threshold -1.79), using stratified 10-fold cross-validation, ranked the following as the 6 most important features for this task: (move.hands.NOT.speaking, hand.in.air, verbal.speech, lean.back, eye.contact.speaking, forward.lean). The SVM Attribute Evaluation algorithm's top 6 features are: (lean.back, move.hands.NOT.speaking, palms.down, open.smile, eye.contact.listening, head.shake). These selections resonate with the intuition that positive affect involves behaviors such as moving the hands around the body, leaning the body forward, and keeping rapport with the other person by looking into their eyes ([2, 4, 23]).
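An information-gain-style ranking can be sketched as below, with mutual information standing in for the attribute evaluator used in the original experiments; X, y_affect, and feature_cols are assumed from the earlier sketches.

```python
from sklearn.feature_selection import mutual_info_classif

# Rank features by mutual information with the affect label
mi = mutual_info_classif(X, y_affect, random_state=0)
ranking = sorted(zip(feature_cols, mi), key=lambda p: p[1], reverse=True)
for name, score in ranking[:6]:   # top 6 features
    print(f"{name}: {score:.3f}")
```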

4.2 Involvement Prediction and Feature Selection

In this task we decide whether the negotiator is highly involved in the negotiation or has low involvement in the task. The evaluation method for the involvement prediction model based on the nonverbal features was the same as the method used for affect. The results are presented in Table 2. Again, the SVM classifier performed significantly better than the two baselines (one-way ANOVA on the 10-fold accuracies). The performance of the SVM was on average better than that of the Naïve Bayes classifier, but the difference was not statistically significant. The top features ranked by the Information Gain Attribute Evaluation algorithm are (non.smiling.mouth, open.smile, self.adaptors, lean.back), and by the SVM Attribute Evaluation algorithm (non.smiling.mouth, lean.back, palms.down, head.shake, move.hand.speaking, palms.up, straight.back). For both prediction tasks the SVM classifier outperforms the other models (for instance, Naïve Bayes), which may be due to the nature of the features we are using, currently a set of descriptive nonverbal features. The analysis of the most useful features for both tasks implies that features corresponding to mouth and hand movements are critical in determining both affect and level of involvement.
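An SVM-based attribute ranking can likewise be sketched with recursive feature elimination over a linear SVM, loosely analogous to the SVM Attribute Evaluation used above; this is an approximation, not the exact procedure from our experiments.

```python
from sklearn.feature_selection import RFE
from sklearn.svm import SVC

# Rank all features for the involvement task by eliminating them one at a time
selector = RFE(SVC(kernel="linear"), n_features_to_select=1)
selector.fit(X, y_involvement)
ranked = sorted(zip(feature_cols, selector.ranking_), key=lambda p: p[1])
print([name for name, _ in ranked[:7]])   # top-ranked features
```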

5 Discussion

In this work we showed that nonverbal features can be used to understand the motivation of negotiators. Our results show that we can use these features for making such predictions, and the SVM classifier seems to be an appropriate choice for building such models. The fact that for each nonverbal feature we only used one value per negotiation (a score capturing how much of the interaction that behavior was observed in) might be keeping our models from reaching higher accuracy. If these features were calculated at different stages of each negotiation, we could make a more detailed analysis of the interactions. In that case, CRF classifiers might be a better choice due to the sequential nature of the negotiation.
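A minimal sketch of this sequential alternative, assuming the features were re-computed per negotiation stage and the sklearn-crfsuite package is available; the stage-level feature values and labels below are purely illustrative.

```python
import sklearn_crfsuite

# Each negotiation is a sequence of per-stage feature dicts with a stage label
X_train = [
    [{"forward.lean": 0.30, "open.smile": 0.05},
     {"forward.lean": 0.10, "open.smile": 0.20}],
]
y_train = [["active", "passive"]]

crf = sklearn_crfsuite.CRF(algorithm="lbfgs", c1=0.1, c2=0.1, max_iterations=100)
crf.fit(X_train, y_train)
print(crf.predict(X_train))   # per-stage involvement predictions
```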

6 Future Work

The annotation of the coded nonverbal features in our dataset was done manually, but it is possible to extract these features automatically. Since our goal is to use these models in computational agents, we want the pipeline to be fully automated. This work is the initial step in our effort to use the learned models of behavior for online, automatic detection of a negotiator's affect and involvement in a dynamic interaction. This would enable a computational agent to make decisions on the fly about what to do in the negotiation.